content: source track: update source threats for draft spec #1236

Open · wants to merge 8 commits into `main`
13 changes: 12 additions & 1 deletion docs/spec/draft/future-directions.md
@@ -16,7 +16,7 @@ additional aspects of automatable supply chain security.
### Build L4

A build L4 could include further hardening of the build platform and enabling
corraboration of the provenance, for example by providing complete knowledge of
corroboration of the provenance, for example by providing complete knowledge of
the build inputs.

The initial [draft version (v0.1)] of SLSA defined a "SLSA 4" that included the
@@ -54,3 +54,14 @@ the basis for a future Build Platform Operations track:
</section>

[draft version (v0.1)]: ../v0.1/requirements.md


## Source Track
Contributor Author:
Added the "objective" section from source-requirements.md and updated the verbs to future tense.
This is to be the target of future-directions links.


The SLSA Source track will describe increasing levels of trustworthiness and completeness in a repository revision's provenance (e.g. how it was generated, who the contributors were, etc.).

The Source track will be scoped to revisions of a single repository that is controlled by an organization.
The organization determines the intent of the software in the repository, what Source level should apply to the repository, and administers technical controls to enforce that level.

The primary purpose of the Source track will be to enable verification that the creation of a revision followed the expected process.
Consumers will be able to examine source provenance attestations to determine if all sources used during the build meet their requirements.
7 changes: 3 additions & 4 deletions docs/spec/draft/threats-overview.md
@@ -23,10 +23,9 @@ availability. Integrity means protection against tampering or unauthorized
modification at any stage of the software lifecycle. Within SLSA, we divide
integrity into source integrity vs build integrity.

**Source integrity:** Ensure that all changes to the source code reflect the
intent of the software producer. Intent of an organization is difficult to
define, so SLSA is expected to approximate this as approval from two authorized
representatives.
**Source integrity:** Ensure that source revisions contain only changes submitted by
authorized contributors according to the process defined by the software producer and
that source revisions are not modified as they pass between development stages.
Contributor Author:
This wording aligns better with the build integrity section below.

Contributor:
I think this is 'fine', but we do lose the introduction of the concept of 'intent' (which we reference later) which isn't quite the same as 'process'. I'd argue the process is one of the ways the producer tries to ensure their intent is tracked, but it's not really what the end user cares about.

I'd also probably argue that the build process is somewhat different and they don't need to mirror each other. To some extent the source is the closest approximation of the org's intent, and then we just rely on the build system to mechanically transform that source faithfully.


**Build integrity:** Ensure that the package is built from the correct,
unmodified sources and dependencies according to the build recipe defined by the
Expand Down
227 changes: 85 additions & 142 deletions docs/spec/draft/threats.md
@@ -53,84 +53,65 @@ broadly adopted in an automated fashion, minimizing the chance of mistakes.

A source integrity threat is a potential for an adversary to introduce a change
to the source code that does not reflect the intent of the software producer.
This includes the threat of an authorized individual introducing an unauthorized
change---in other words, an insider threat.
This includes modification of the source data at rest as well as insider threats,
when an authorized individual introduces an unauthorized change.

SLSA v1.0 does not address source threats, but we anticipate doing so in a
[future version](current-activities.md#source-track). In the meantime, the
threats and potential mitigations listed here show how SLSA v1.0 can fit into a
broader supply chain security program.
SLSA does not yet address source threats, but we anticipate doing so in a
[future version](future-directions.md#source-track).

### (A) Producer

The producer of the software intentionally produces code that harms the
consumer, or the producer otherwise uses practices that are not deserving of the
consumer's trust.

Threats in this category likely *cannot* be mitigated through controls placed
during the authoring/reviewing process, in contrast with (B).
<details><summary>Software producer intentionally creates a malicious revision of the source</summary>

<!--
**TODO:** The difference between (A) and (B) is still a bit fuzzy, which would
be nice to resolve. For example, compromised developer credentials - is that (A)
or (B)?
-->
*Threat:* A producer intentionally creates a malicious revision (or a VSA issuer intentionally creates a malicious attestation) with the intent of harming their consumers.

<details><summary>Software producer intentionally submits bad code</summary>
*Mitigation:*
This kind of attack cannot be directly mitigated through SLSA controls.
Consumers must establish some basis to trust the organizations from which they consume software.
That basis may be:
* The code is open source and has a sufficiently large user-base that malicious changes are likely to be detected.
* The organization has sufficient legal or reputational incentives to dissuade it from making malicious changes.

*Threat:* Software producer intentionally submits "bad" code, following all
proper processes.

*Mitigation:* **TODO**
Ultimately this is a judgement call with no straightforward answer.

*Example:* A popular extension author sells the rights to a new owner, who then
modifies the code to secretly mine cryptocurrency at the users' expense. SLSA
does not protect against this, though if the extension were open source, regular
auditing may discourage this from happening.
*Example:* A producer with an otherwise good reputation decides suddenly to produce a malicious artifact with the intent to harm their consumers.

</details>

<!--
**TODO:** More producer threats? Perhaps the attack to xz where a malicious
contributor gained enhanced privileges through social engineering?
-->
### (B) Modifying the source

### (B) Authoring & reviewing
An adversary without any special administrator privileges attempts to introduce a change counter to the declared intent of the source by following the producer's official source control process.

An adversary introduces a change through the official source control management
interface without any special administrator privileges.

Threats in this category *can* be mitigated by code review or some other
controls during the authoring/reviewing process, at least in theory. Contrast
this with (A), where such controls are likely ineffective.
Threats in this category can be mitigated by following source control management best practices.

#### (B1) Submit change without review

<details><summary>Directly submit without review</summary>

*Threat:* Submit bad code to the source repository without another person
reviewing.
*Threat:* Submit code to the source repository without another person reviewing.

*Mitigation:* Source repository requires two-person approval for all changes.
*Mitigation:* The producer requires approval of all changes before they are accepted.

*Example:* Adversary directly pushes a change to a GitHub repo's `main` branch.
Solution: Configure GitHub's "branch protection" feature to require pull request
reviews on the `main` branch.
*Example:* Adversary directly pushes a change to a git repo's `main` branch.
Solution: The producer can configure branch protection rules on the `main` branch.
A best practice is to require approval of any changes via a change management tool before they are accepted into the source.
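The branch-protection pattern above can be sketched as a simple pre-merge gate. This is a minimal illustration, not part of the spec; the `Change` shape and the one-approval default are assumptions made for the example.

```python
# Minimal sketch of a pre-merge gate: a change may land on a protected
# branch only if it has enough approvals from people other than its author.
# The Change type and the one-approval default are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Change:
    author: str
    approvers: set = field(default_factory=set)


def may_merge(change: Change, required_approvals: int = 1) -> bool:
    # Self-approval does not count toward the threshold.
    independent = change.approvers - {change.author}
    return len(independent) >= required_approvals


direct_push = Change(author="adversary")             # no review at all
reviewed = Change(author="dev", approvers={"peer"})  # one independent approval
```

With these inputs, `may_merge(direct_push)` is false while `may_merge(reviewed)` is true, mirroring a branch protection rule that blocks unreviewed pushes.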

</details>
<details><summary>Review own change through a sock puppet account</summary>
<details><summary>Single actor controls multiple accounts</summary>

*Threat:* An actor is able to control multiple accounts and effectively approve their own code changes.

*Threat:* Propose a change using one account and then approve it using another
account.
*Mitigation:* The producer must ensure that no actor is able to control or influence multiple accounts with review privileges.

*Mitigation:* Source repository requires approval from two different, trusted
persons. If the proposer is trusted, only one approval is needed; otherwise two
approvals are needed. The software producer maps accounts to trusted persons.
*Example:* Adversary creates a pull request using a secondary account and approves it using their primary account.

*Example:* Adversary creates a pull request using a secondary account and then
approves and merges the pull request using their primary account. Solution:
Configure branch protection to require two approvals and ensure that all
repository contributors and owners map to unique persons.
Solution: The producer must require strongly authenticated user accounts and ensure that all accounts map to unique persons.
A common vector for this attack is to take over a robot account with the permission to contribute code.
Contributor:
Optional: Robot accounts are discussed below, so this seems somewhat redundant. Suggest removing.

Contributor Author:
That's probably fine. I think these are different threat models, but I'm not sure the difference is worth calling out, or whether these two blurbs are doing so.
This one is about gaining access to two accounts with write access; the robots one is about gaining access to an account with bypass.

The mitigation for the first is hard to execute and protect against: you'd have to somehow prevent account takeovers.

The mitigation for the robots one is easier to execute (just run some kind of automation with a poisoned payload, usually) and to protect against: just do not allow any account to bypass code review requirements.

Contributor:
Right. I guess I'll leave it to you?

Control of the robot and an actor's own legitimate account is enough to exploit this vulnerability.
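One way to model the mitigation is an account-to-person mapping checked at approval time. This is a minimal sketch; the mapping is an assumed input maintained by the producer, and nothing here is prescribed by SLSA.

```python
# Illustrative check that approvals come from distinct real persons rather
# than merely distinct accounts. The account-to-person mapping is an assumed
# input maintained by the producer; SLSA does not define it.
def approved_by_distinct_person(author, approvers, person_of):
    author_person = person_of[author]
    # Discard approvals that trace back to the author's own identity.
    approving_persons = {person_of[a] for a in approvers} - {author_person}
    return len(approving_persons) >= 1


# "alice-bot" is a secondary (or robot) account controlled by Alice.
person_of = {"alice": "Alice", "alice-bot": "Alice", "bob": "Bob"}
```

With this mapping, Alice approving her own change through `alice-bot` fails the check, while an approval from `bob` passes.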

</details>
<details><summary>Use a robot account to submit change</summary>
@@ -142,72 +123,72 @@ two-person review.
robots.

*Example:* A file within the source repository is automatically generated by a
robot, which is allowed to submit without review. Adversary compromises the
robot and submits a malicious change without review. Solution: Require human
review for these changes.
robot, which is allowed to submit without review.
Adversary compromises the robot and submits a malicious change.
Solution: Require review for such changes.
Contributor:
Optional suggested change: replace "Solution: Require review for such changes." with "Solution: Require review for such changes or ensure all changes relevant to the robot's behavior require review."
E.g. imagine if the Robot is a GHA that generates code. If the workflow that runs that code is reviewed, and the generation tool is reviewed, then it could be safe?

Contributor Author:
(commented on the other thread)

Contributor:
I suppose I think there are legitimate use cases for bypassing code review by robots, and I think it is possible to do safely?

I also don't think we have to address that here and now, especially if we don't have a fully fleshed out framework for doing so.

Contributor Author:
I think eventually we'd need to say something like:

Auto-bypasses: for some security postures, this practice increases risk due to the threat of account takeover.
For others, it's a way to reduce risk because security patches are applied faster (or whatever the bot does).


<!--
> TODO([#196](https://github.com/slsa-framework/slsa/issues/196)) This solution
> may not be practical. Should there be an exception for locked down robot
> accounts?
-->
</details>
<details><summary>Abuse of rule exceptions</summary>

*Threat:* Rule exceptions provide a vector for abuse.

*Mitigation:* Remove rule exceptions.

*Example:* The intent of a producer is to require two-person review on "all changes except for documentation changes," defined as those only modifying `.md` files.
Adversary submits a malicious executable named `evil.md` and a code review is not required due to the exception.
Technically, the intent of the producer was followed and the produced malicious revision meets all defined policies.
Solution: The producer adjusts the rules to prohibit such exceptions.
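The `evil.md` scenario can be made concrete with a toy review-exception rule. The extension list and function names are hypothetical, invented only to show that the rule keys off file names rather than file contents.

```python
# Toy version of the flawed "documentation-only" review exception: the rule
# inspects file names, not file contents, so a malicious executable named to
# match the exception bypasses review. Extensions and paths are hypothetical.
DOC_EXTENSIONS = (".md", ".html")


def review_required(changed_paths):
    # Exception: changes touching only "documentation" files skip review.
    return not all(path.endswith(DOC_EXTENSIONS) for path in changed_paths)


normal_change = ["src/main.c"]  # triggers review, as intended
payload_only = ["evil.md"]      # malicious executable, review skipped
```

Here `review_required(normal_change)` is true but `review_required(payload_only)` is false, which is exactly the gap the adversary exploits.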

</details>
<details><summary>Abuse review exceptions</summary>

*Threat:* Exploit a review exception to submit a bad change without review.
<details><summary>Highly-permissioned actor bypasses or disables controls</summary>

*Mitigation:* All changes require two-person review without exception.
*Threat:* Trusted actor with "admin" privileges in a repository submits code by disabling existing controls.

*Example:* Source repository requires two-person review on all changes except
for "documentation changes," defined as only touching files ending with `.md` or
`.html`. Adversary submits a malicious executable named `evil.md` without review
using this exception, and then builds a malicious package containing this
executable. This would pass the policy because the source repository is correct,
and the source repository does require two-person review. Solution: Do not allow
such exceptions.
*Mitigation:* All actors must be subject to same controls, whether or not they have
administrator privileges.
Changes to the controls themselves should require their own review process.

<!--
> TODO This solution may not be practical in all circumstances. Are there any
> valid exceptions? If so, how do we ensure they cannot be exploited?
-->
*Example 1:* A GitHub repository-level admin pushes a change without review, even though GitHub branch protection is enabled.
Solution: The producer can modify the rule to disallow bypass by administrators, or move the rule to an organization-level ruleset.

</details>
*Example 2:* GitHub repository-level admin removes a branch requirement, pushes their change, then re-enables the requirement to cover their tracks.
Solution: The producer can configure higher-permission-level rules (such as organization-level GitHub Rulesets) to prevent repository-level tampering.

#### (B2) Evade code review requirements
#### (B2) Evade change management process

<details><summary>Modify code after review</summary>

*Threat:* Modify the code after it has been reviewed but before submission.

*Mitigation:* Source control platform invalidates approvals whenever the
proposed change is modified.
*Mitigation:* Source control platform invalidates approvals whenever the proposed change is modified.

*Example:* Source repository requires two-person review on all changes.
Adversary sends a "good" pull request to a peer, who approves it. Adversary then
modifies it to contain "bad" code before submitting. Solution: Configure branch
protection to dismiss stale approvals when new changes are pushed.
Adversary sends an initial "good" pull request to a peer, who approves it.
Adversary then modifies their proposal to contain "bad" code.

Solution: Configure the code review rules to require review of the most recent revision before submission.
Resetting or "dismissing" votes on a PR introduces substantial friction to the process.
Depending on the security posture of the source, the producer has a few choices to deal with this situation.
They may:

> Note: This is not currently a SLSA requirement because the productivity hit is
> considered too great to outweigh the security benefit. The cost of code review
> is already too high for most projects, given current code review tooling, so
> making code review even costlier would not further our goals. However, this
> should be considered for future SLSA revisions once the state-of-the-art for
> code review has improved and the cost can be minimized.
- Accept this risk. Code review is already expensive and the pros outweigh the cons here.
- Dismiss reviews when new changes are added. This is a common outcome when expert code review is required.
- Leave previous reviews intact, but require that "at least the last revision must be reviewed by someone."
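The three options above can be sketched as policies over a proposal's review state. The `Proposal` shape and policy names are illustrative assumptions, not SLSA-defined terms.

```python
# Sketch of the three stale-approval policies, applied to a proposal that was
# approved at revision 1 and then modified to revision 2. The Proposal shape
# and policy names are illustrative, not SLSA-defined.
from dataclasses import dataclass


@dataclass
class Proposal:
    head_revision: int
    approved_revisions: frozenset  # revisions that received review


def may_submit(p: Proposal, policy: str) -> bool:
    if policy == "accept_risk":
        return len(p.approved_revisions) > 0  # any approval counts
    if policy == "dismiss_stale":
        # Earlier approvals were dismissed; only a fresh approval remains.
        return p.approved_revisions == frozenset({p.head_revision})
    if policy == "last_revision_reviewed":
        return p.head_revision in p.approved_revisions
    raise ValueError(f"unknown policy: {policy}")


modified_after_review = Proposal(head_revision=2,
                                 approved_revisions=frozenset({1}))
```

Under "accept_risk" the modified proposal may still submit; under the two stricter policies it is blocked until revision 2 receives review.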

</details>
<details><summary>Submit a change that is unreviewable</summary>

*Threat:* Send a change that is meaningless for a human to review that looks
*Threat:* Adversary crafts a change that looks benign but is actually malicious, and that is meaningless for a human to review.

*Mitigation:* Code review system ensures that all reviews are informed and
meaningful.

*Example:* A proposed change updates a file, but the reviewer is only presented
with a diff of the cryptographic hash, not of the file contents. Thus, the
reviewer does not have enough context to provide a meaningful review. Solution:
the code review system should present the reviewer with a content diff or some
reviewer does not have enough context to provide a meaningful review.
Solution: the code review system should present the reviewer with a content diff or some
other information to make an informed decision.
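The contrast between a hash-only diff and a content diff can be shown in a few lines. The file contents and the injected call are invented for the example.

```python
# Contrast between a hash-only "diff" and a content diff. The file contents
# and the injected call are invented for illustration.
import difflib
import hashlib

old = "threshold = 10\n"
new = "threshold = 10\nexfiltrate()\n"

# All a hash diff tells the reviewer is "something changed":
hash_diff = (hashlib.sha256(old.encode()).hexdigest()[:12],
             hashlib.sha256(new.encode()).hexdigest()[:12])

# A content diff surfaces the malicious line for human review:
content_diff = list(difflib.unified_diff(old.splitlines(),
                                         new.splitlines(),
                                         lineterm=""))
```

The two hashes differ but carry no reviewable meaning, while the content diff contains the added `exfiltrate()` line a reviewer could actually reject.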

</details>
@@ -220,44 +201,27 @@ different context.

*Example:* MyPackage's source repository requires two-person review. Adversary
forks the repo, submits a change in the fork with review from a colluding
colleague (who is not trusted by MyPackage), then merges the change back into
the upstream repo. Solution: The merge should still require review, even though
the fork was reviewed.
colleague (who is not trusted by MyPackage), then proposes the change to
the upstream repo.
Solution: The proposed change still requires two-person review in the upstream context even though it received two-person review in another context.
Contributor Author:
I updated the words to be a little more precise.
In general, I'd expect reviews to be transitive in some situations, but yeah, if your org thinks this is a problem, then this is how you'd fix it.


</details>
<details><summary>Compromise another account</summary>

*Threat:* Compromise one or more trusted accounts and use those to submit and
review own changes.

*Mitigation:* Source control platform verifies two-factor authentication, which
increases the difficulty of compromising accounts.

*Example:* Trusted person uses a weak password on GitHub. Adversary guesses the
weak password, logs in, and pushes changes to a GitHub repo. Solution: Configure
GitHub organization to requires 2FA for all trusted persons. This would increase
the difficulty of using the compromised password to log in to GitHub.

</details>
<details><summary>Hide bad change behind good one</summary>
<details><summary>Commit graph attacks</summary>

*Threat:* Request review for a series of two commits, X and Y, where X is bad
and Y is good. Reviewer thinks they are approving only the final Y state whereas
they are also implicitly approving X.
*Threat:* Request review for a series of two commits, X and Y, where X is bad and Y is good.
Reviewer thinks they are approving only the final Y state but they are also implicitly approving X.

*Mitigation:* Only the version that is actually reviewed is the one that is
approved. Any intermediate revisions don't count as being reviewed.
*Mitigation:* The producer declares that only the final delta is considered approved.
In this configuration, intermediate revisions are not considered to be approved and are not added to the protected context (e.g. the `main` branch).
With git version control systems this is called a "squash" merge strategy.

*Example:* Adversary sends a pull request containing malicious commit X and
benign commit Y that undoes X. In the pull request UI, reviewer only reviews and
approves "changes from all commits", which is a delta from HEAD to Y; they don't
see X. Adversary then builds from the malicious revision X. Solution: Policy
does not accept this because the version X is not considered reviewed.
*Example:* Adversary sends a pull request containing malicious commit X and benign commit Y that undoes X.
The produced diff of X + Y contains zero lines of changed code and the reviewer may not notice that X is malicious unless they review each commit in the request.
If X is allowed to become reachable from the protected branch, the content may become available in secured environments such as developer machines.

<!--
> TODO This is implicit but not clearly spelled out in the requirements. We
> should consider clarifying if there is confusion or incorrect implementations.
-->
Solution: The code review tool does not merge contributor-created commits, and instead merges a single new commit representing only the reviewed "changes from all commits."
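The X-then-Y attack and the squash-merge mitigation can be modeled by tracking which file states remain reachable from the protected branch. Commit contents here are invented for illustration.

```python
# Model of the X-then-Y attack: malicious commit X is undone by benign Y, so
# the combined delta is empty, but a regular merge still makes X reachable.
# File contents are invented for illustration.
base = {"app.py": "print('hello')\n"}
commit_x = {"app.py": "print('hello')\nsteal_secrets()\n"}  # malicious
commit_y = dict(base)                                       # undoes X

# The reviewer's "changes from all commits" view is the delta of base vs Y:
combined_delta = {path: text for path, text in commit_y.items()
                  if base.get(path) != text}

# A regular merge keeps X in the protected branch's history; a squash merge
# records only the reviewed final state, so X never becomes reachable.
merge_history = [base, commit_x, commit_y]
squash_history = [base, commit_y]
```

The combined delta is empty (nothing for the reviewer to object to), yet the merge history still contains the malicious state, while the squash history does not.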

</details>

Expand All @@ -267,8 +231,10 @@ does not accept this because the version X is not considered reviewed.

*Threat:* Two trusted persons collude to author and approve a bad change.

*Mitigation:* This threat is not currently addressed by SLSA. We use "two
trusted persons" as a proxy for "intent of the software producer".
*Mitigation:* The producer can arbitrarily increase the friction of their policies to reduce risk, such as by requiring additional or more senior reviewers.
The goal of policy here is to ensure that the approved changes match the intention of the producer for the source.
Increasing the friction of the policies may make it harder to circumvent, but doing so has diminishing returns.
Ultimately the producer will need to land upon a balanced risk profile that makes sense for their security posture.

</details>
<details><summary>Trick reviewer into approving bad code</summary>
@@ -294,29 +260,6 @@ An adversary introduces a change to the source control repository through an
administrative interface, or through a compromise of the underlying
infrastructure.

<details><summary>Project owner bypasses or disables controls</summary>

*Threat:* Trusted person with "admin" privileges in a repository submits "bad"
code bypassing existing controls.

*Mitigation:* All persons are subject to same controls, whether or not they have
administrator privileges. Disabling the controls requires two-person review (and
maybe notifies other trusted persons?)

*Example 1:* GitHub project owner pushes a change without review, even though
GitHub branch protection is enabled. Solution: Enable the "Include
Administrators" option for the branch protection.

*Example 2:* GitHub project owner disables "Include Administrators", pushes a
change without review, then re-enables "Include Administrators". This currently
has no solution on GitHub.

<!--
> TODO This is implicit but not clearly spelled out in the requirements. We
> should consider clarifying since most if not all existing platforms do not
> properly address this threat.
-->

</details>
<details><summary>Platform admin abuses privileges</summary>
