content: source track: update source threats for draft spec #1236
base: main

A source integrity threat is the potential for an adversary to introduce a change
to the source code that does not reflect the intent of the software producer.
This includes modification of the source data at rest as well as insider threats,
where an authorized individual introduces an unauthorized change.

The SLSA Source track mitigates these threats when the consumer
[verifies source revisions](verifying-source.md) against expectations, confirming
that the revision they received was produced in the expected manner.
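
For illustration, below is a minimal sketch of what such a consumer-side check might look like, assuming the verifier publishes a SLSA Verification Summary Attestation (VSA) for each revision. The expected verifier ID, resource URI, and the `SLSA_SOURCE_LEVEL_3` level string are placeholder assumptions based on the draft Source track, and signature verification of the attestation envelope is assumed to happen elsewhere.

```python
import json

# Hypothetical expectations for a consumed repository; all values are illustrative.
EXPECTED_VERIFIER = "https://example.com/source_verifier"
EXPECTED_REPO_URI = "git+https://github.com/example/repo"
REQUIRED_LEVEL = "SLSA_SOURCE_LEVEL_3"  # assumed level string from the draft Source track

def verify_source_vsa(vsa_path: str) -> bool:
    """Check an already signature-verified VSA against the consumer's expectations."""
    with open(vsa_path) as f:
        statement = json.load(f)

    predicate = statement.get("predicate", {})
    checks = [
        statement.get("predicateType") == "https://slsa.dev/verification_summary/v1",
        predicate.get("verifier", {}).get("id") == EXPECTED_VERIFIER,
        predicate.get("resourceUri") == EXPECTED_REPO_URI,
        predicate.get("verificationResult") == "PASSED",
        REQUIRED_LEVEL in predicate.get("verifiedLevels", []),
    ]
    return all(checks)

if __name__ == "__main__":
    print(verify_source_vsa("source-vsa.json"))
```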

### (A) Producer

The producer of the software intentionally produces code that harms the
consumer, or the producer otherwise uses practices that are not deserving of the
consumer's trust.

<details><summary>Software producer intentionally creates a malicious revision of the source</summary>

*Threat:* A producer intentionally creates a malicious revision with the intent of harming their consumers.

*Mitigation:*
This kind of attack cannot be directly mitigated through SLSA controls.
Trustworthiness scales with transparency (visibility into the people, processes, and evidence behind the software), and consumers SHOULD push their vendors to provide that visibility.
When transparency is not possible, consumers may choose not to consume the artifact, or may require additional evidence of correctness from a trusted third party.
Tools like the [OSSF Scorecard](https://github.com/ossf/scorecard) can help to quantify the risk of consuming artifacts from specific producers, but they do not fully remove it.
For example, a consumer may choose to only consume source artifacts from repositories that score well on the OSSF Scorecard.
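
As a rough sketch of such a policy check, the snippet below queries the public Scorecard results service for a repository's aggregate score; the endpoint shape and the 7.0 threshold are illustrative assumptions rather than SLSA requirements.

```python
import json
import urllib.request

# Hypothetical policy: only consume source from repos scoring at least 7.0 on the
# OSSF Scorecard. The threshold and the public results endpoint are assumptions.
SCORE_THRESHOLD = 7.0

def repo_meets_policy(owner: str, repo: str) -> bool:
    url = f"https://api.securityscorecards.dev/projects/github.com/{owner}/{repo}"
    with urllib.request.urlopen(url) as resp:
        result = json.load(resp)
    score = result.get("score", 0.0)  # aggregate score, 0-10
    return score >= SCORE_THRESHOLD

if __name__ == "__main__":
    print(repo_meets_policy("ossf", "scorecard"))
```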

*Example:* A producer with an otherwise good reputation suddenly decides to produce a malicious artifact with the intent of harming their consumers.

</details>

### (B) Modifying the source

An adversary without any special administrator privileges attempts to introduce a change that runs counter to the declared intent of the source while following the producer's official source control process.

Threats in this category can be mitigated by following source control management best practices.

#### (B1) Submit change without review

<details><summary>Directly submit without review</summary>

*Threat:* Submit code to the source repository without another person reviewing.

*Mitigation:* The producer requires approval of all changes before they are accepted.

*Example:* Adversary directly pushes a change to a git repo's `main` branch.
Solution: The producer can configure branch protection rules on the `main` branch.

A best practice is to require approval of any changes via a change management tool before they are accepted into the source.
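
For example, on GitHub a producer might set this up through the branch protection API; a minimal sketch is below (the repository name and token are placeholders, and other source control platforms offer equivalent controls).

```python
import os
import requests  # third-party; pip install requests

# Placeholder repository and token handling; adjust for your environment.
OWNER, REPO, BRANCH = "example-org", "example-repo", "main"
TOKEN = os.environ["GITHUB_TOKEN"]

def require_reviews_on_main() -> None:
    """Enable branch protection so changes to `main` need an approving review."""
    url = f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection"
    body = {
        # All four keys are required by this endpoint; null disables that control.
        "required_status_checks": None,
        "enforce_admins": True,  # also apply the rules to administrators
        "required_pull_request_reviews": {
            "required_approving_review_count": 1,
            "dismiss_stale_reviews": True,
        },
        "restrictions": None,
    }
    resp = requests.put(
        url,
        json=body,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    require_reviews_on_main()
```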

</details>
<details><summary>Single actor controls multiple accounts</summary>

*Threat:* An actor is able to control multiple accounts and effectively approve their own code changes.

*Mitigation:* The producer must ensure that no actor is able to control or influence multiple accounts with review privileges.

*Example:* Adversary creates a pull request using a secondary account and approves it using their primary account.

Solution: The producer must require strongly authenticated user accounts and ensure that all accounts map to unique persons.
A common vector for this attack is to take over a robot account with permission to contribute code.
Control of the robot account plus an actor's own legitimate account is enough to exploit this vulnerability.
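
A producer can enforce at least the "distinct accounts" part of this automatically. The sketch below (GitHub REST API, placeholder repository) checks that a pull request has an approval from a human account other than the author; mapping accounts to unique real-world persons still requires organizational identity controls that no API call can provide.

```python
import os
import requests  # pip install requests

OWNER, REPO = "example-org", "example-repo"  # placeholders
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def has_independent_approval(pr_number: int) -> bool:
    """True if at least one approval came from a human who is not the PR author."""
    base = f"https://api.github.com/repos/{OWNER}/{REPO}"
    author = requests.get(f"{base}/pulls/{pr_number}", headers=HEADERS, timeout=30).json()["user"]["login"]
    reviews = requests.get(f"{base}/pulls/{pr_number}/reviews", headers=HEADERS, timeout=30).json()
    approvers = {
        r["user"]["login"]
        for r in reviews
        if r["state"] == "APPROVED" and r.get("user") and r["user"]["type"] != "Bot"
    }
    return bool(approvers - {author})
```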

</details>
<details><summary>Use a robot account to submit change</summary>

*Threat:* Exploit a robot account that has the ability to submit changes without review.

*Mitigation:* All changes require review, even changes authored by robots.

*Example:* A file within the source repository is automatically generated by a
robot, which is allowed to submit without review.
Adversary compromises the robot and submits a malicious change.
Solution: Require review for such changes.

</details>
<details><summary>Abuse of rule exceptions</summary>

*Threat:* Rule exceptions provide a vector for abuse.

*Mitigation:* Remove rule exceptions.

*Example:* The producer intends to require two-person review on "all changes except for documentation changes," defined as those only modifying `.md` files.
Adversary submits a malicious executable named `evil.md`, and code review is not required due to the exception.
Technically the producer's stated policy was followed, and the resulting malicious revision meets all defined rules.
Solution: The producer adjusts the rules to prohibit such exceptions.
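
To make the failure mode concrete, here is a toy sketch of the two policies (the function names and extensions are hypothetical): the extension-based exemption happily waives review for a file that merely looks like documentation.

```python
# Hypothetical exception logic of the kind described above.
DOC_EXTENSIONS = {".md", ".txt"}

def review_required_naive(changed_paths: list[str]) -> bool:
    """Fragile rule: skip review when every changed file "looks like" documentation."""
    return not all(any(p.endswith(ext) for ext in DOC_EXTENSIONS) for p in changed_paths)

def review_required_strict(changed_paths: list[str]) -> bool:
    """Safer rule: no exceptions; every change requires review."""
    return True

# The attack: a malicious file named to match the exemption.
print(review_required_naive(["evil.md"]))   # False -- review skipped, exploit succeeds
print(review_required_strict(["evil.md"]))  # True  -- review always required
```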

</details>
<details><summary>Highly-permissioned actor bypasses or disables controls</summary>

*Threat:* A trusted actor with "admin" privileges in a repository submits code by disabling the existing controls.

*Mitigation:* All actors must be subject to the same controls, whether or not they have
administrator privileges.
Changes to the controls themselves should require their own review process.

*Example 1:* A GitHub repository-level admin pushes a change without review, even though GitHub branch protection is enabled.
Solution: The producer can modify the rule to disallow bypass by administrators, or move the rule to an organization-level ruleset.

*Example 2:* A GitHub repository-level admin removes a branch requirement, pushes their change, then re-enables the requirement to cover their tracks.
Solution: The producer can configure higher-permission-level rules (such as organization-level GitHub Rulesets) to prevent repository-level tampering.
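
One way to detect this kind of tampering is to audit the settings continuously from outside the repository. A minimal sketch using the GitHub REST API is below (the repository name is a placeholder); note that a repository admin could still change these settings between audits, which is why organization-level rulesets are the stronger control.

```python
import os
import requests  # pip install requests

OWNER, REPO, BRANCH = "example-org", "example-repo", "main"  # placeholders

def audit_branch_protection() -> list[str]:
    """Return findings about controls an admin could bypass on the protected branch."""
    url = f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection"
    resp = requests.get(
        url,
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    protection = resp.json()

    findings = []
    if not protection.get("enforce_admins", {}).get("enabled", False):
        findings.append("admins may bypass branch protection")
    if "required_pull_request_reviews" not in protection:
        findings.append("no review requirement configured")
    return findings

if __name__ == "__main__":
    print(audit_branch_protection())
```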

</details>

#### (B2) Evade change management process

<details><summary>Modify code after review</summary>

*Threat:* Modify the code after it has been reviewed but before submission.

*Mitigation:* The source control platform invalidates approvals whenever the proposed change is modified.

*Example:* The source repository requires review of all changes.
Adversary sends an initial "good" pull request to a peer, who approves it.
Adversary then modifies their proposal to contain "bad" code before submitting.

Solution: Configure the code review rules to require review of the most recent revision before submission.
Resetting or "dismissing" votes on a pull request introduces substantial friction to the process, so, depending on the security posture of the source, the producer has a few choices for dealing with this situation.
They may:

- Accept the risk, since code review is already expensive and the pros of leaving approvals intact outweigh the cons.
- Dismiss existing reviews whenever new changes are added. This is a common choice when expert code review is required.
- Leave previous reviews intact, but require that "at least the last revision must be reviewed by someone" (see the sketch below).
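
For the last option, a producer could gate merges on a check like the following sketch (GitHub REST API, placeholder repository): it verifies that at least one approval was submitted against the pull request's current head commit, while leaving earlier approvals intact.

```python
import os
import requests  # pip install requests

OWNER, REPO = "example-org", "example-repo"  # placeholders
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def last_revision_reviewed(pr_number: int) -> bool:
    """True if the PR's current head commit has at least one approving review."""
    base = f"https://api.github.com/repos/{OWNER}/{REPO}"
    head_sha = requests.get(f"{base}/pulls/{pr_number}", headers=HEADERS, timeout=30).json()["head"]["sha"]
    reviews = requests.get(f"{base}/pulls/{pr_number}/reviews", headers=HEADERS, timeout=30).json()
    return any(r["state"] == "APPROVED" and r["commit_id"] == head_sha for r in reviews)
```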

</details>
<details><summary>Submit a change that is unreviewable</summary>

*Threat:* Adversary crafts a change that cannot be meaningfully reviewed by a human and that looks
benign but is actually malicious.

*Mitigation:* The code review system ensures that all reviews are informed and
meaningful.

*Example:* A proposed change updates a file, but the reviewer is only presented
with a diff of the cryptographic hash, not of the file contents. Thus, the
reviewer does not have enough context to provide a meaningful review.
Solution: The code review system should present the reviewer with a content diff or some
other information needed to make an informed decision.

</details>
<details><summary>Copy a reviewed change to another context</summary>

*Threat:* Get a change reviewed in one context and then copy it into a different context where the approval does not apply.

*Example:* MyPackage's source repository requires two-person review. Adversary
forks the repo, submits a change in the fork with review from a colluding
colleague (who is not trusted by MyPackage), then merges the change back into
the upstream repo.
Solution: The merge should still require review, even though the fork was reviewed.

</details>
<details><summary>Compromise another account</summary>

*Threat:* Compromise one or more trusted accounts and use those to submit and
review own changes.

*Mitigation:* The source control platform verifies two-factor authentication, which
increases the difficulty of compromising accounts.

*Example:* A trusted person uses a weak password on GitHub. Adversary guesses the
weak password, logs in, and pushes changes to a GitHub repo.
Solution: Configure the GitHub organization to require 2FA for all trusted persons, which
increases the difficulty of using the compromised password to log in to GitHub.
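
On GitHub, the organization-wide two-factor requirement can be audited via the API. The sketch below assumes the caller is an organization owner (the field is not visible otherwise) and uses a placeholder organization name.

```python
import os
import requests  # pip install requests

ORG = "example-org"  # placeholder

def org_requires_2fa() -> bool:
    """Check whether the GitHub organization enforces two-factor authentication."""
    resp = requests.get(
        f"https://api.github.com/orgs/{ORG}",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    # The field is only visible to organization owners; treat "unknown" as not enforced.
    return bool(resp.json().get("two_factor_requirement_enabled"))

if __name__ == "__main__":
    print(org_requires_2fa())
```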

</details>
<details><summary>Commit graph attacks</summary>

*Threat:* Request review for a series of two commits, X and Y, where X is bad and Y is good.
The reviewer thinks they are approving only the final Y state, but they are also implicitly approving X.

*Mitigation:* The producer declares that only the final delta is considered approved.
Intermediate revisions don't count as being reviewed and are not added to the protected context (such as the `main` branch).

*Example:* Adversary sends a pull request containing malicious commit X and benign commit Y that undoes X.
The combined diff of X + Y contains zero lines of changed code, so the reviewer may not notice that X is malicious unless they review each commit in the request.
If X is allowed to become reachable from the protected branch, its content may become available in otherwise secured environments, such as developer machines.

Solution: The code review tool does not merge contributor-created commits, and instead merges a single new commit representing only the reviewed "changes from all commits."
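
As a rough, file-granularity heuristic for spotting this pattern, the sketch below (plain `git`, run inside a checkout; the revision names are placeholders) lists files that individual commits touch but that are absent from the combined diff, which is exactly what happens when a later commit undoes an earlier one. It complements, rather than replaces, the squash-merge mitigation described above.

```python
import subprocess

def _files(diff_range: str) -> set[str]:
    """Paths changed in the given git diff range (run inside the repo)."""
    out = subprocess.run(
        ["git", "diff", "--name-only", diff_range],
        check=True, capture_output=True, text=True,
    ).stdout
    return {line for line in out.splitlines() if line}

def hidden_commit_changes(base: str, head: str) -> dict[str, set[str]]:
    """Map each commit in base..head to files it touches that vanish from the combined diff."""
    combined = _files(f"{base}...{head}")
    commits = subprocess.run(
        ["git", "rev-list", f"{base}..{head}"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    hidden = {}
    for sha in commits:
        per_commit = _files(f"{sha}~1..{sha}")  # note: fails for a root commit
        vanished = per_commit - combined
        if vanished:
            hidden[sha] = vanished  # e.g., commit X's payload undone by a later commit Y
    return hidden

if __name__ == "__main__":
    print(hidden_commit_changes("origin/main", "HEAD"))
```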

</details>
<details><summary>Collude with another trusted person</summary>

*Threat:* Two trusted persons collude to author and approve a bad change.

*Mitigation:* The producer can arbitrarily increase the friction of their policies to reduce this risk, for example by requiring additional or more senior reviewers.
The goal of the policy is to ensure that approved changes match the producer's intent for the source.
Increasing the friction of the policies may make them harder to circumvent, but doing so has diminishing returns.
Ultimately the producer will need to land on a balanced risk profile that makes sense for their security posture.

</details>
<details><summary>Trick reviewer into approving bad code</summary>

An adversary introduces a change to the source control repository through an
administrative interface, or through a compromise of the underlying
infrastructure.

<details><summary>Platform admin abuses privileges</summary>