TL;DR: this was not just “another CVE.” In the Trivy ecosystem, trusted delivery channels were temporarily compromised: a release, Docker images, and GitHub Actions refs. If you pulled latest, 0.69.4, 0.69.5, 0.69.6, older trivy-action tags, or setup-trivy without reliable pinning during the exposure window, the correct default stance is simple: treat accessible secrets as potentially exposed, move to safe versions, and remove mutable refs from CI now, not later.

Why this matters more than a normal security advisory

A normal vulnerability often means “patch the dependency.” This incident is different: malicious content was injected into things teams usually treat as safe by default—releases, images, and GitHub Actions tags.

In plain language, that means:

  - your CI may have executed attacker-controlled code under the identity of a trusted tool;
  - any secret visible to those jobs (tokens, cloud credentials, registry logins) may have been read;
  - simply bumping a version number does not undo either of the above.
That is why the correct response here is incident response, not routine dependency maintenance.

Who is actually at risk

1) Teams running Trivy as a container or binary

According to the official sources, risk applied to:

  - environments that pulled aquasec/trivy latest, 0.69.4, 0.69.5, or 0.69.6 during the exposure window;
  - environments that downloaded the affected release binaries during that window.

Lower risk or not affected:

  - environments pinned to v0.69.3 or an earlier verified release;
  - images pulled by digest rather than by mutable tag.

2) Teams using GitHub Actions

Potentially affected:

  - workflows referencing trivy-action via older tags or other mutable refs (anything before v0.35.0);
  - workflows using setup-trivy without pinning to v0.2.6 or a full commit SHA.

Practical rule: if you cannot quickly and confidently prove that you used only safe immutable references, handle the pipeline as potentially compromised.

30-minute playbook without panic

Step 1. Find every Trivy usage point

Do not start with blind upgrades. Start with a short inventory.

rg -n "aquasec/trivy|ghcr.io/aquasecurity/trivy|aquasecurity/trivy-action|aquasecurity/setup-trivy" .

Look through:

  - .github/workflows/ YAML files;
  - Dockerfiles, compose files, and build scripts;
  - reusable workflows and shared CI templates;
  - provisioning scripts for runners and build nodes.

Pay special attention to patterns like:

  - uses: aquasecurity/trivy-action@master (or any branch name);
  - image: aquasec/trivy:latest;
  - version tags with no digest or commit SHA behind them.
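The inventory step can be sketched as a small script. The workflow fragment below is illustrative (a stand-in for your real repository files); the idea is to surface every Trivy-related reference that is not pinned to a full 40-character commit SHA or an image digest.

```shell
#!/usr/bin/env sh
# Sketch: list Trivy-related refs that are NOT pinned to a full commit SHA
# or a sha256 digest. Sample file contents are illustrative only.
set -eu

workdir=$(mktemp -d)
# Stand-in for your real .github/workflows files:
cat > "$workdir/scan.yml" <<'EOF'
      - uses: aquasecurity/trivy-action@master
      - uses: aquasecurity/setup-trivy@f9dcb8d6c4d8a66eb2bcbf2aaa8b9e83b8e8f4b1
        image: aquasec/trivy:latest
EOF

# A ref counts as mutable if it is not @<40 hex chars> and not @sha256:<digest>.
mutable=$(grep -rhE 'trivy' "$workdir" \
  | grep -vE '@[0-9a-f]{40}([^0-9a-f]|$)' \
  | grep -vE '@sha256:[0-9a-f]{64}' || true)
printf '%s\n' "$mutable"
```

In this sample, the branch-pinned action and the `latest` image are flagged, while the SHA-pinned setup-trivy reference passes.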

Step 2. Replace refs with safe versions immediately

The minimum safe matrix right now is:

Component          | Treat as safe right now
-------------------|------------------------
Trivy binary/image | v0.69.3 or verified v0.69.2; for images, prefer digest pinning
trivy-action       | v0.35.0, ideally pinned to a full commit SHA
setup-trivy        | v0.2.6, ideally pinned to a full commit SHA as well

The key lesson is not only about version numbers. If a security tool in a critical pipeline runs on a mutable tag, that is already an operational risk, even on a calm day.
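One way to make "immutable reference" concrete is a small checker. This is a sketch under the assumption that "immutable" means a full 40-hex commit SHA for an Action ref, or a sha256 digest for a container image; tag names, even vX.Y.Z, can be moved after the fact.

```shell
#!/usr/bin/env sh
# Sketch: classify a reference as immutable or mutable.
set -eu

is_immutable_ref() {
  ref=$1
  case "$ref" in
    *@sha256:*) return 0 ;;   # image pinned by digest
  esac
  # Action pinned to a full commit SHA: exactly 40 hex chars after '@'
  echo "$ref" | grep -qE '@[0-9a-f]{40}$'
}

is_immutable_ref 'aquasecurity/trivy-action@0123456789abcdef0123456789abcdef01234567' \
  && echo 'pinned'
is_immutable_ref 'aquasecurity/trivy-action@master' || echo 'mutable'
is_immutable_ref 'aquasec/trivy:latest' || echo 'mutable'
```

A check like this can run as a pre-merge gate so a mutable ref never lands in CI config in the first place.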

Step 3. If a compromised artifact may have run, rotate secrets

Do not spend your first hours debating whether exfiltration “definitely” happened. If the pipeline had access to secrets and may have executed an affected component, the safe path is:

  1. revoke and reissue GitHub PATs, App tokens, and CI service tokens;
  2. rotate cloud credentials (AWS/GCP/Azure);
  3. rotate registry credentials;
  4. replace SSH keys if runners could access them;
  5. update Kubernetes tokens, .env secrets, and database credentials based on access scope.
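For the cloud-credential part of the list above, a print-first script helps the team agree on the sequence before anything is revoked. The user name `ci-trivy-bot` is hypothetical, and this covers only AWS access keys; GitHub tokens, registry logins, and SSH keys need their own equivalents.

```shell
#!/usr/bin/env sh
# Sketch: print (not execute) an AWS access-key rotation plan for a CI user.
# 'ci-trivy-bot' is a hypothetical principal; adapt to your own.
set -eu

CI_USER='ci-trivy-bot'

plan=$(cat <<EOF
# 1. Issue a new key before revoking the old one:
aws iam create-access-key --user-name $CI_USER
# 2. Update the CI secret store with the new key, then verify a pipeline run.
# 3. Deactivate the old key (reversible) before deleting it:
aws iam update-access-key --user-name $CI_USER --access-key-id OLD_KEY_ID --status Inactive
# 4. After a burn-in period with no failures, delete it:
aws iam delete-access-key --user-name $CI_USER --access-key-id OLD_KEY_ID
EOF
)
printf '%s\n' "$plan"
```

Deactivating before deleting gives you a rollback path if some forgotten consumer still holds the old key.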

Why this matters: rotation is controlled inconvenience. A non-rotated secret can become a second incident days later.

Step 4. Check caches, mirrors, and local leftovers

Even if the upstream source is already cleaned up, the malicious artifact may still exist in your environment:

  - local Docker image caches on runners and build nodes;
  - registry mirrors and pull-through caches;
  - Actions tool caches on self-hosted runners;
  - previously downloaded binaries in home directories and temp paths.

So replacing a tag in YAML is not enough. You also need to reduce the chance that an old cached artifact is pulled again.
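Hunting down cached images can be scripted. The listing below is a stand-in for real `docker images --format '{{.Repository}}:{{.Tag}}'` output, so the filter can be shown without a Docker daemon; on a real host you would pipe the matches to `xargs docker rmi`.

```shell
#!/usr/bin/env sh
# Sketch: find locally cached Trivy images in the affected range.
# The listing is illustrative; generate the real one with:
#   docker images --format '{{.Repository}}:{{.Tag}}'
set -eu

listing='aquasec/trivy:0.69.3
aquasec/trivy:0.69.5
aquasec/trivy:latest
ghcr.io/aquasecurity/trivy:0.69.6
nginx:latest'

affected=$(printf '%s\n' "$listing" \
  | grep -E '(aquasec/trivy|aquasecurity/trivy):(0\.69\.[4-6]|latest)$' || true)
printf '%s\n' "$affected"
```

Note that the safe v0.69.3 image and unrelated images pass through untouched; only the affected tags are surfaced for removal.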

Step 5. Look for signs of possible exfiltration

The official advisory explicitly recommends looking for compromise indicators. Practical checks:

  - review CI logs for outbound connections to hosts your pipeline has no reason to contact;
  - look for unexpected processes, downloads, or commands in scan job output;
  - audit recent usage of tokens and credentials that the scanning jobs could reach.

Even if you find no obvious evidence, that is not a reason to skip rotation if an affected artifact could have executed.
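A first-pass triage can be as simple as extracting contacted hosts from saved CI logs and diffing them against a small allowlist. The log content, the allowlist, and the attacker domain below are all illustrative; real logs come from your CI provider's export.

```shell
#!/usr/bin/env sh
# Sketch: surface hosts in a CI log that are not on an allowlist.
# Log lines and the allowlist here are illustrative stand-ins.
set -eu

log='GET https://ghcr.io/v2/aquasecurity/trivy-db/manifests/2 200
GET https://github.com/aquasecurity/trivy/releases/download/v0.69.3/trivy_0.69.3_Linux-64bit.tar.gz 200
POST https://attacker-staging.example.net/collect 200'

allow='github.com|ghcr.io|objects.githubusercontent.com'

suspicious=$(printf '%s\n' "$log" \
  | grep -oE 'https?://[^/ ]+' \
  | grep -vE "://($allow)" || true)
printf '%s\n' "$suspicious"
```

Anything this surfaces is not proof of exfiltration, but it tells you where to look next, and an empty result does not excuse skipping rotation.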

What Docker users should check separately

If you ran Trivy as a container:

  1. Verify whether you pulled aquasec/trivy from Docker Hub during the compromise window.
  2. Check whether images 0.69.4, 0.69.5, 0.69.6, or latest still exist on runners and build nodes.
  3. If you use a registry mirror or local cache, validate and clean that layer too—not only the upstream reference.
  4. For future protection, pin critical scanner images by digest, not by tag only.

This is especially important for self-hosted CI where caches often live longer than teams remember.

What GitHub Actions users should check separately

If you used Trivy through Actions:

  1. Review every uses: reference to trivy-action and setup-trivy.
  2. Remove older tags and move to v0.35.0 / v0.2.6 or, better, full SHA pinning.
  3. Check reusable workflows and central templates—these are often the real blast-radius multipliers.
  4. If you run self-hosted runners, clean tools cache and temporary work directories.
  5. Review which secrets were actually exposed to the scanning job.

The most dangerous mistake here is fixing one repository and declaring the incident closed. If the risky reference lives in shared templates, you must audit the chain, not a single repo.
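The audit of `uses:` references can be turned into a repeatable gate. The sample workflow below is illustrative; in a real setup you would point the grep at your actual .github/workflows directory (and shared template repos) and enable the hard-fail exit.

```shell
#!/usr/bin/env sh
# Sketch: a CI gate that flags Trivy-related `uses:` refs not pinned to a
# full commit SHA. The sample workflow content is illustrative.
set -eu

dir=$(mktemp -d)
cat > "$dir/ci.yml" <<'EOF'
      - uses: aquasecurity/trivy-action@0123456789abcdef0123456789abcdef01234567 # v0.35.0
      - uses: aquasecurity/setup-trivy@v0.2.6
EOF

unpinned=$(grep -rhE 'uses:.*(trivy-action|setup-trivy)@' "$dir" \
  | grep -vE '@[0-9a-f]{40}([^0-9a-f]|$)' || true)

if [ -n "$unpinned" ]; then
  echo 'Unpinned Trivy action refs found:'
  printf '%s\n' "$unpinned"
  # exit 1   # enable as a hard gate in CI
fi
```

Keeping the version tag as a trailing comment next to the SHA (as in the first sample line) preserves readability without sacrificing immutability.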

Safe versions and safe references: what to use now

Based on the official sources:

  - Trivy binary/image: v0.69.3, or verified v0.69.2;
  - trivy-action: v0.35.0;
  - setup-trivy: v0.2.6.

Short rule going forward: pin critical external dependencies, especially security tooling, to immutable references (full commit SHAs for Actions, digests for images), and treat any mutable tag in a critical pipeline as a standing risk.

How not to repeat this next month

Minimum prevention without overengineering:

  - pin critical Actions to full commit SHAs and critical images to digests;
  - limit which secrets are exposed to scanning jobs;
  - keep an inventory of where each external tool is referenced, including shared templates;
  - periodically clean runner and mirror caches so stale artifacts cannot resurface.

This does not make supply-chain risk disappear, but it dramatically reduces blast radius.

Conclusion

The main lesson is simple: when a trusted scanner gets compromised, you have to fix more than just the version. You have to fix the habit of relying on mutable refs. In practice, the right order is: inventory usage, move to safe versions, rotate potentially exposed secrets, clean caches, and then lock in a new rule: critical external dependencies must use immutable references.

Official sources: