Integrity check should be strict and throw by default #155

@LiviaMedeiros

Description

What is the issue with Subresource Integrity?

SRI looks like a security feature, but it is too loose to fulfill that purpose.

I see two main uses of it:

  1. As the owner of a resource (e.g. a website) that uses third-party subresources (e.g. libraries hosted on external servers), I can set integrity to ensure that my users get the correct files.
  2. As a user, I can take the SRI string published by the third party that provides a subresource (e.g. a library hosted on an external server) and treat it as an easily verifiable hash guaranteeing that their resource will not silently change.

In both cases, if anything bad happens (malicious behaviour on the third-party side, a compromised server, domain hijacking, MITM, etc.) and the subresource is altered, I expect it to be blocked.
Unfortunately, this is not always the case.

  1. If SRI is not implemented on the client side (and there are reasons not to implement it, since it requires reading and hashing the whole subresource before that resource can be consumed lazily and memory-efficiently), it is simply ignored. There is no mechanism to enforce it, and an attacker can exploit this: for example, by serving malicious data when the user's User-Agent matches a known version that ignores it.
  2. If the algorithm is unknown or the string is malformed, 3.3.2. Parse metadata says to just ignore the item completely. An attacker can exploit this: for example, by providing a malformed string with the correct hash but an unrecognized algorithm name, which effectively disables the integrity check. Here's a repro showing how a malicious script can be loaded with an integrity value that looks like that of a legit well-known script: https://liviamedeiros.github.io/SRI-vulnerability-repro/
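To illustrate the second problem, here is a minimal sketch of the parsing behaviour (not spec text; the token grammar and `SUPPORTED` set below are simplified stand-ins for the spec steps). Tokens with an unrecognized algorithm are silently dropped, so an integrity string containing only unknown algorithms behaves exactly like no integrity at all:

```javascript
// Simplified sketch of "parse metadata" (3.3.2): tokens whose algorithm
// is unrecognized are silently dropped rather than treated as an error.
const SUPPORTED = new Set(['sha256', 'sha384', 'sha512']);

function parseMetadata(integrity) {
  const result = [];
  for (const token of integrity.split(/\s+/).filter(Boolean)) {
    const match = /^([a-z0-9]+)-([A-Za-z0-9+\/=]+)$/i.exec(token);
    if (!match || !SUPPORTED.has(match[1].toLowerCase())) continue; // dropped silently
    result.push({ alg: match[1].toLowerCase(), value: match[2] });
  }
  return result;
}

// One character in the algorithm name makes the whole check vanish:
console.log(parseMetadata('sha384-xyzABC123').length);  // -> 1: enforced
console.log(parseMetadata('shaFAKE-xyzABC123').length); // -> 0: treated as "no integrity"
```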

Proposed solutions for these particular problems:

For the algorithm parsing, make 3.3.4. Do bytes match metadataList? return false when the metadata string was non-empty but the parsedMetadata set is empty. Unknown algorithms can still be skipped as long as at least one is recognized, but if integrity is set and not a single algorithm can be used, security must be prioritized and the subresource must be blocked.
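The proposed change could look roughly like the sketch below, where `parseMetadata` and `digestMatches` are illustrative stand-ins for the corresponding spec steps, not the spec's actual hooks:

```javascript
// Sketch of the proposed fix to "Do bytes match metadataList?" (3.3.4):
// non-empty metadata that parses to nothing should fail closed, not open.
function doBytesMatch(bytes, metadataString, parseMetadata, digestMatches) {
  const parsed = parseMetadata(metadataString);
  if (parsed.length === 0) {
    // Current spec: always allow. Proposed: only allow when the author
    // supplied no integrity metadata at all.
    return metadataString.trim() === '';
  }
  // Unchanged: pass if any of the parsed digests matches the bytes.
  return parsed.some(m => digestMatches(bytes, m));
}
```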

Enforcing the SRI check in general is trickier, because proper security would hurt compatibility and would be unlikely to be implemented by everyone.
Nevertheless, I think it wouldn't hurt if the spec had a note suggesting that implementors block resources by default when integrity is set but the spec isn't implemented yet.
In practice, for example, Deno could reject fetch(url, { integrity: ... }) until the spec is implemented, and offer an option to override this and be permissive at the cost of security (e.g. a --no-validate-sri runtime flag or an environment variable).
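As a sketch of that fail-closed behaviour (`assertIntegritySupported`, `strictFetch`, and the `ALLOW_UNVALIDATED_SRI` variable are hypothetical names standing in for whatever flag or environment variable a runtime chooses):

```javascript
// Fail closed: a runtime that cannot verify SRI refuses requests that
// ask for it, unless the user explicitly opts into permissive behaviour.
function assertIntegritySupported(init, allowUnvalidated = false) {
  if (init && init.integrity && !allowUnvalidated) {
    throw new TypeError(
      'integrity was requested but SRI validation is not implemented in this runtime'
    );
  }
}

async function strictFetch(url, init = {}) {
  // ALLOW_UNVALIDATED_SRI is a hypothetical opt-out (cf. --no-validate-sri).
  assertIntegritySupported(init, Boolean(process.env.ALLOW_UNVALIDATED_SRI));
  return fetch(url, init);
}
```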


In general, as described in 3.7. Handling integrity violations, an integrity violation must produce a catchable error. Thanks to that, stricter behaviour doesn't necessarily break compatibility: developers can always (and should, if compatibility is their goal, e.g. in case the set of supported algorithms changes in the future) catch errors and handle them however they need (logging, rejecting and asking the user to upgrade, fetching and checking the hash manually, etc.). AFAICT existing implementations follow this: fetch rejects with meaningful errors, and <script src="..." integrity="...-..." onerror="myErrorHandler.call(this, event)" /> also works.
So, I think the spec must enforce strict security: if integrity is set, the subresource should be loaded only if the metadata parses successfully, the algorithm is recognized, and the hash matches. If anything goes wrong, the implementation must throw and block the subresource instead of ignoring the error.
