Your team ships a fix on Friday afternoon. The tests pass. The image builds. The deploy goes green. Then the scanner runs and drops a report with hundreds of findings from packages nobody on the team knowingly chose.
That’s the part often underestimated. Your app might be small, but your container isn’t. It includes a base image, OS packages, language runtimes, transitive dependencies, helper binaries, and whatever your build process pulled in along the way. For a startup moving fast, that’s a lot of inherited risk packed into one artefact.
The hard part isn’t turning a scanner on. Trivy, Grype, Docker Scout, Snyk, Clair, and others can all show you what’s wrong. The hard part is deciding what to fix first, what to ignore for now, and how to stop the same class of issue from reappearing next week. That’s where container vulnerability scanning becomes either useful or noise.
Small teams need a workflow, not another dashboard. The practical path is simple: scan early, scan again after the image is pushed, add runtime visibility where it matters, and triage findings based on exposure and exploitability instead of raw counts. That gives you a security process you can maintain.
Why Your Docker Images Are Not As Safe As You Think
A lot of teams treat a container image as a sealed box. If the app starts and the health check passes, it feels safe enough to ship. That assumption breaks quickly once you inspect what’s inside the layers.

A typical startup image inherits from something familiar like node, python, golang, or nginx. That parent image often pulls in a full Linux userspace. Then your Dockerfile adds package manager caches, build tools, shells, SSL libraries, and application dependencies. Even if your own code is clean, the image can still contain old packages with published vulnerabilities.
The black box problem
The trap is convenience. Public base images save time, multi-stage builds hide complexity, and package managers make it easy to install more than you need. Teams often discover that a “simple” image contains components they didn’t realise were present.
Common examples show up over and over:
- Old base layers that still carry vulnerable OS packages from Debian, Ubuntu, or Alpine.
- Build-time tools like compilers or package managers left inside the final runtime image.
- Transitive dependencies pulled in by npm, pip, Maven, or apk that nobody reviewed directly.
- Default behaviours such as running as root, writable filesystems, or broad Linux capabilities.
If you haven’t looked at the final image contents, you’re trusting a dependency tree you probably didn’t mean to approve.
Why startups feel this more acutely
Small teams optimise for speed because they have to. One person might own product work, deploys, and infra in the same week. That’s exactly why container vulnerability scanning matters. It turns an opaque artefact into something you can review, shrink, and harden.
The most useful mindset shift is this: your Docker image is part of your application, not just packaging around it.
When teams adopt that view, they start making better choices early. They pin base images, remove unnecessary packages, prefer distroless or slim variants where practical, and stop assuming that “official” automatically means “secure enough”.
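Those defaults are straightforward to encode in the Dockerfile itself. The sketch below assumes a hypothetical Node.js service; the image tag, file names, and `node` user are illustrative, and in practice you would pin the base image by digest as well:

```dockerfile
# Build stage: compilers, caches, and dev dependencies stay here,
# out of the final runtime image.
FROM node:20-bookworm-slim AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Runtime stage: a slim, pinned base with only what the app needs.
# Pin by digest (node:20-bookworm-slim@sha256:...) for repeatable builds.
FROM node:20-bookworm-slim AS runtime
WORKDIR /app
COPY --from=build /app /app
# Run as the non-root user most official Node images already ship.
USER node
CMD ["node", "server.js"]
```

The point isn’t the specific base image; it’s that the final stage ships no compilers, no package manager caches, and no root user by default.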
Understanding Container Vulnerability Scanning
Container vulnerability scanning is an automated inspection of a container image or running container to find known security issues. The easiest way to think about it is a building inspection for software. The scanner checks the foundation, the pipes, and the rooms you use.

In the UK, this has moved well beyond nice-to-have. The National Cyber Security Centre reported a 37% increase in container-related incidents in 2024 compared to 2023, and 78% of UK organisations using Kubernetes reported high or critical vulnerabilities in exposed pods, according to Oligo Security’s container security summary.
What the scanner actually inspects
A good scanner usually works through the image layer by layer.
- **Base operating system layer.** This is the foundation. If you start from Debian, Ubuntu, Alpine, or another base, the scanner checks installed OS packages against known vulnerability databases.
- **System libraries and utilities.** OpenSSL, glibc, BusyBox, curl, and similar packages often show up here. These are easy to forget because your team didn’t write them, but they still ship with the app.
- **Application dependencies.** Node packages, Python libraries, Java artefacts, Ruby gems, and Go modules all get matched against known advisories.
- **Misconfigurations and secrets.** Many scanners also flag risky settings such as running as root, broad capabilities, embedded credentials, or exposed package manager tokens.
What a CVE means in plain English
A CVE is a public identifier for a known security flaw. It’s basically a tracking number for a vulnerability so tools and teams can talk about the same issue consistently.
If a report says your image contains a package affected by a CVE, that doesn’t automatically mean you’re about to be breached. It means a known flaw exists in a component you ship. The next question is whether that vulnerable component is reachable, loaded, exposed, or already fixed in a newer image tag.
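Those follow-up questions can be turned into a simple decision rule. This is a minimal sketch, not a scanner API; the field names on the finding are illustrative:

```python
def next_step(finding):
    """Decide what to do with a CVE finding. Fields are illustrative."""
    if finding["fixed_version"]:
        return "upgrade"   # a patched version exists: plan the bump
    if finding["exposed"] or finding["loaded"]:
        return "mitigate"  # no fix yet, but the component is actually in use
    return "monitor"       # present but dormant: track it, don't panic

print(next_step({"fixed_version": "3.0.13", "exposed": True, "loaded": True}))
# -> upgrade
```

The exact thresholds will differ per team; the useful part is making the "reachable, loaded, exposed, or already fixed" questions explicit instead of implicit.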
What the report gives you
Most scan reports answer four useful questions:
| Question | What you’re looking for |
|---|---|
| Where is the issue? | Base image, OS package, app dependency, config |
| How serious is it? | Severity assigned by the scanner |
| Is there a fix? | Patched version available or not |
| Why is it present? | Which package, layer, or dependency introduced it |
That last part matters. If twenty findings all trace back to one old base image, you don’t have twenty problems. You have one upgrade decision.
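Clustering findings by root cause is a few lines of code once you have structured scanner output. The sample data below is made up, shaped loosely like a scan report:

```python
from collections import defaultdict

# Hypothetical findings, shaped loosely like scanner output.
findings = [
    {"cve": "CVE-1", "package": "libssl3", "source": "base-image"},
    {"cve": "CVE-2", "package": "zlib1g", "source": "base-image"},
    {"cve": "CVE-3", "package": "lodash", "source": "app-dependency"},
]

# Group CVEs by where they entered, so counts become decisions.
clusters = defaultdict(list)
for f in findings:
    clusters[f["source"]].append(f["cve"])

for source, cves in clusters.items():
    print(f"{source}: {len(cves)} finding(s) -> one decision")
```

Two base-image findings collapse into a single "upgrade the base image" decision, which is usually how you want to plan remediation work.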
Scanning is one piece of the stack
Container vulnerability scanning doesn’t replace code review, dependency hygiene, or runtime controls. It gives you visibility into what you’re shipping. That’s why it fits well alongside other defensive techniques, including AI tools for detecting security vulnerabilities that help teams review code, dependencies, and risky patterns earlier in the delivery process.
Practical rule: Treat scanner output as a starting point for decisions, not as a complete statement of risk.
Static Image Scanning vs Runtime Security
Static scanning and runtime security solve different problems. Teams often argue about which one is better, but that’s the wrong question. They work at different moments and catch different classes of risk.

Static scanning is the blueprint X-ray. It inspects the built image before it runs. Runtime security is the camera system. It watches what the container does after deployment.
What static image scanning is good at
Static scanning is usually the first thing to add because it’s cheap, fast, and easy to automate in CI.
It excels at:
- Catching known vulnerable packages before deploy
- Comparing image layers to identify where a bad dependency entered
- Enforcing policy in pull requests so risky images never reach the registry
- Helping developers fix issues early when the relevant code change is still fresh
Here’s a concise view:
| Static image scanning strengths | Static image scanning limits |
|---|---|
| Works before deployment | Can’t see live behaviour |
| Fits CI and PR checks | Misses runtime drift |
| Great for known CVEs | Weak against unknown exploit paths |
| Helps with repeatable builds | Doesn’t tell you whether a package is actually exercised |
Static scanning is especially effective when the team controls the Dockerfile and can rebuild quickly. If a base image update removes most findings, static checks provide substantial value.
What runtime security adds
Runtime security observes the container in use. That matters because the clean image you built isn’t always the same as the workload now running in production.
Processes start. Shells get spawned. Temporary files appear. Network connections change. Mounted secrets and environment variables shape the actual attack surface. Runtime tooling can detect behaviours that static analysis won’t catch, including suspicious process execution, privilege misuse, and post-deploy drift.
UK-specific benchmarks cited in the FedRAMP container vulnerability scanning requirements document note that runtime container scanning detects 73% more vulnerabilities than static image scans alone, and excessive CAP_SYS_ADMIN capabilities were exploited in 29% of 2024 UK cloud breaches.
Where runtime tools earn their keep
Runtime checks are worth the effort when you have:
- Public-facing workloads that process user traffic continuously
- Multi-tenant clusters where one noisy workload can affect others
- Containers with increased privileges or broad capabilities
- Ephemeral workloads that change frequently after deployment
- Compliance pressure that requires post-deploy monitoring
Tools in this category include Falco, Aqua, Sysdig, and platform-native controls around Kubernetes admission, policies, and behavioural alerting.
A clean image can still behave unsafely once it’s running.
The trade-offs small teams should care about
For a startup, static scanning usually comes first because it gives fast feedback without operational overhead. Runtime tooling gives broader coverage but requires tuning. If you turn it on with default policies and no ownership, you’ll often get alert fatigue.
That’s the practical split:
- Use static scanning to block obvious bad images
- Use runtime security to catch what static checks can’t see
- Don’t buy a runtime platform before you can consistently fix build-time findings
A workable setup for small teams
If you’re resource-constrained, start with a narrow runtime policy instead of trying to monitor everything.
Focus on a few signals:
- Unexpected shell execution
- Containers running as root
- Writes to sensitive paths
- New listening ports
- Outbound connections that don’t match the service’s normal role
That approach keeps the noise manageable. It also aligns with how small teams operate. You need a short list of high-confidence alerts, not another system that pages you for normal container churn.
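A narrow policy like that can be expressed as a small set of Falco-style rules. The sketch below uses Falco’s rule syntax with its built-in `container` and `spawned_process` macros; treat the condition and priority as starting points to tune, not a drop-in policy:

```yaml
# Sketch of a single high-confidence runtime rule (Falco rule syntax).
- rule: Shell spawned in container
  desc: An interactive shell started inside a container
  condition: container and spawned_process and proc.name in (bash, sh)
  output: "Shell in container (user=%user.name container=%container.name cmd=%proc.cmdline)"
  priority: WARNING
```

Falco’s default ruleset ships a similar rule; the point is to start from a handful of behaviours you would always want to know about, rather than enabling everything at once.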
Static and runtime are not interchangeable
A scanner finding in CI should usually produce a code or image fix. A runtime alert should usually produce an investigation and then a hardening change. Those are different workflows with different owners.
Static scanning answers: “Should we ship this image?”
Runtime security answers: “Is this workload behaving safely right now?”
You need both, but you don’t need equal depth on day one.
Your Software Supply Chain and the Role of SBOMs
An image scan tells you what’s vulnerable now. An SBOM tells you what’s inside, whether or not there’s an active issue today. That’s why SBOMs matter.
Think of an SBOM as the ingredients list on packaged food. It records the packages, libraries, versions, and components that make up the image. When a new vulnerability lands, you don’t want to manually inspect every Dockerfile in the company. You want to query your inventory.
Why this matters in real incidents
Supply chain problems often start below your application code. The NCSC’s 2024 annual report indicated that 68% of investigated incidents involving UK organisations used compromised container images as the initial access vector, primarily through unpatched CVEs in base OS layers such as outdated Debian or Alpine distributions, as cited in SentinelOne’s container vulnerability scanning overview.
That should change how you think about image ownership. If your team ships the image, your team owns the inherited risk, even when the vulnerable component came from upstream.
What an SBOM gives you that a scanner report doesn’t
A normal vulnerability report is event-driven. It says, “Here are the known issues right now.”
An SBOM gives you a longer-lived asset:
- Inventory of what each image contains
- Traceability across services using the same component
- Faster incident response when a major dependency issue appears
- Better review of base image decisions over time
That’s especially useful in incidents like Log4Shell. Teams with an SBOM can ask one direct question: “Which images include this package and version?” Teams without one often burn hours rebuilding context from scratch.
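That query is trivial once SBOMs are stored as structured data. The records below are a deliberately minimal stand-in; real SBOM formats such as CycloneDX or SPDX carry far more fields:

```python
# Hypothetical, minimal SBOM records keyed by image tag.
sboms = {
    "api:1.4.2": [{"name": "log4j-core", "version": "2.14.1"}],
    "worker:0.9.0": [{"name": "requests", "version": "2.31.0"}],
}

def images_with(package, bad_versions):
    """Return image tags whose SBOM lists an affected package version."""
    return [
        image
        for image, components in sboms.items()
        if any(c["name"] == package and c["version"] in bad_versions
               for c in components)
    ]

print(images_with("log4j-core", {"2.14.1", "2.15.0"}))  # -> ['api:1.4.2']
```

During an incident, this is the difference between answering in minutes and rebuilding images just to find out what’s inside them.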
Practical SBOM workflow
You don’t need a heavyweight programme to get value from SBOMs.
A simple workflow works well:
- Generate an SBOM during image build
- Store it with the image artefact or release metadata
- Re-scan against new advisories after the image is already in the registry
- Use the SBOM during patch sprints and incident response
Tools such as Syft, Trivy, Docker Scout, and registry-integrated scanners can all support this pattern.
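As a concrete sketch, Trivy can emit a CycloneDX SBOM in the same pipeline that builds the image. The steps below assume a GitHub Actions workflow; the artifact name and file path are arbitrary choices:

```yaml
# Generate an SBOM during the build and keep it with the release artefacts.
- name: Generate SBOM
  uses: aquasecurity/trivy-action@0.24.0
  with:
    image-ref: app:${{ github.sha }}
    format: cyclonedx
    output: sbom.cdx.json

- name: Store SBOM with the build
  uses: actions/upload-artifact@v4
  with:
    name: sbom
    path: sbom.cdx.json
```

Storing the SBOM next to the image artefact means later re-scans and incident queries don’t depend on the original build environment still existing.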
If your team wants a broader view of dependency trust, provenance, and build integrity, software supply chain security guidance is a useful next layer after basic container vulnerability scanning.
The fastest incident response starts before the incident, with a list of what you actually shipped.
Automating Scans in Your CI/CD Pipeline
Manual scanning doesn’t survive contact with a busy sprint. Someone forgets to run the command, findings sit in a local terminal, and the vulnerable image still lands in the registry. The only reliable fix is automation.

This is already the prevailing direction. Driven by supply chain attacks, vulnerability scanning solution adoption in the UK reached 65% among mid-sized firms, and 74% of the global solutions market in 2025 focused on CI/CD automation to reduce regression risks, according to Meticulous Research’s container security market summary.
GitHub Actions with Trivy
For many teams, Trivy is the easiest place to start because it scans images, filesystems, repositories, and configuration with a low setup burden.
A practical GitHub Actions workflow looks like this:
```yaml
name: container-security

on:
  pull_request:
  push:
    branches: [main]

jobs:
  scan-image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      security-events: write
    steps:
      - name: Check out code
        uses: actions/checkout@v4

      - name: Build image
        run: docker build -t app:${{ github.sha }} .

      - name: Run Trivy scan
        uses: aquasecurity/trivy-action@0.24.0
        with:
          image-ref: app:${{ github.sha }}
          format: table
          exit-code: '1'
          severity: HIGH,CRITICAL
          ignore-unfixed: true
```
This does three useful things:
- Builds the exact image you plan to ship
- Scans it immediately
- Fails the job on high and critical findings
That last point matters. If the scanner only reports but never blocks, teams eventually stop paying attention.
GitLab CI with Grype
If you prefer Grype or you’re already on GitLab, keep the pipeline equally direct.
```yaml
stages:
  - build
  - scan

build_image:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker build -t myapp:$CI_COMMIT_SHA .
    - docker save myapp:$CI_COMMIT_SHA -o image.tar
  artifacts:
    paths:
      - image.tar

scan_image:
  stage: scan
  # Override the entrypoint so GitLab can run the script with a shell;
  # the grype image sets grype itself as the entrypoint.
  image:
    name: anchore/grype:latest
    entrypoint: [""]
  dependencies:
    - build_image
  script:
    - grype oci-archive:image.tar --fail-on high
```
The key design choice is scanning the built artefact, not just source files. You care about what’s inside the final image.
When to fail the build and when not to
“Fail on every high” sounds disciplined, but it can create friction if the team inherits a noisy baseline on day one.
A better rollout usually looks like this:
| Phase | Gate behaviour |
|---|---|
| Initial adoption | Report findings, don’t block |
| Baseline cleanup | Block new critical issues |
| Steady state | Block critical and selected high issues |
| Mature workflow | Add policy checks for root user, secrets, and base image drift |
That staged approach avoids the common failure mode where the security gate gets disabled because it blocked too much too soon.
One reliable pattern: fail the build on newly introduced severe issues, not on the entire historical mess all at once.
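That pattern is easy to script around any scanner that emits structured output. The sketch below assumes you persist a baseline of already-triaged CVE IDs somewhere (a file in the repo, for instance); the data here is made up:

```python
# Sketch: gate only on newly introduced severe findings, given a baseline.
baseline = {"CVE-2023-0001", "CVE-2023-0002"}  # known, already-triaged issues

current = {
    "CVE-2023-0001": "HIGH",      # in the baseline: don't block
    "CVE-2024-9999": "CRITICAL",  # new and severe: block
    "CVE-2024-1111": "LOW",       # new but low: report only
}

blocking = sorted(
    cve for cve, severity in current.items()
    if cve not in baseline and severity in {"HIGH", "CRITICAL"}
)
exit_code = 1 if blocking else 0
print(blocking, exit_code)  # -> ['CVE-2024-9999'] 1
```

The build fails only for the new critical issue, so the team can clean up the historical backlog on its own schedule without the gate getting disabled.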
Scan images after push as well
CI scanning catches issues before merge. Registry scanning catches changes after the fact, especially when advisories are updated and yesterday’s clean image becomes today’s problem.
You want both:
- Build-time scan to stop bad images entering the registry
- Registry re-scan to catch newly disclosed issues in existing images
Most managed registries now support some form of post-push scanning. Even if your platform’s native scanner is basic, it’s still useful as a second net.
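One low-effort way to get that second net is a scheduled re-scan of images you have already published. The workflow below is a GitHub Actions sketch; the image reference `ghcr.io/example/app:latest` and the schedule are placeholders:

```yaml
# Hypothetical nightly re-scan of an already-published image.
name: registry-rescan

on:
  schedule:
    - cron: "0 3 * * *"  # every night at 03:00 UTC

jobs:
  rescan:
    runs-on: ubuntu-latest
    steps:
      - name: Re-scan the published image
        uses: aquasecurity/trivy-action@0.24.0
        with:
          image-ref: ghcr.io/example/app:latest
          severity: HIGH,CRITICAL
          exit-code: '1'
```

Because Trivy pulls fresh advisory data on each run, yesterday’s clean image can legitimately fail tonight, which is exactly the signal you want.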
Keep the pipeline output developer-friendly
If the scanner prints a giant JSON blob, developers won’t read it. Prefer outputs that point to:
- vulnerable package
- installed version
- fixed version if available
- image layer or dependency path
- remediation owner
That makes the pipeline actionable instead of theatrical.
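A thin formatting script gets you most of the way there. The report structure below is a simplified stand-in shaped like Trivy’s JSON output; real reports carry many more fields:

```python
# Sketch: reduce a Trivy-style JSON report to the fields developers act on.
report = {
    "Results": [
        {
            "Target": "app (debian 12)",
            "Vulnerabilities": [
                {
                    "VulnerabilityID": "CVE-2024-0001",
                    "PkgName": "libssl3",
                    "InstalledVersion": "3.0.11",
                    "FixedVersion": "3.0.13",
                },
            ],
        }
    ]
}

lines = []
for result in report["Results"]:
    for v in result.get("Vulnerabilities", []):
        fix = v.get("FixedVersion") or "no fix yet"
        lines.append(
            f'{v["VulnerabilityID"]}: {v["PkgName"]} '
            f'{v["InstalledVersion"]} -> {fix} ({result["Target"]})'
        )

print("\n".join(lines))
```

One line per finding, with package, installed version, and fix target, is something a developer will actually read in a PR check.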
If you’re building out broader workflow automation around security checks, this automated security scanning guide is a solid companion to the image-level patterns above.
Managing Vulnerabilities Without Losing Your Mind
This is the part most guides skip. You run the first scan, and the output is ugly. Maybe the count is in the hundreds. Maybe most of it comes from one base image. Maybe half the packages don’t look familiar. None of that is unusual.
What breaks teams isn’t the number of findings. It’s the lack of a triage system.
Start with a small decision matrix
Severity alone is not enough. A high-severity issue in a package that never runs and isn’t exposed is often less urgent than a medium issue in an internet-facing component with a straightforward exploit path.
Use a simple matrix with two axes:
| | Low exploitability | High exploitability |
|---|---|---|
| Low exposure | Defer, document | Review in next maintenance cycle |
| High exposure | Prioritise if fix is cheap | Fix first |
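If you want the matrix applied consistently rather than re-argued per finding, it fits in a few lines. The labels and actions below mirror the matrix; the thresholds for "high" exposure or exploitability are your call:

```python
def triage(exposure, exploitability):
    """Map the two-axis matrix to an action. Thresholds are illustrative."""
    matrix = {
        ("low", "low"): "defer and document",
        ("low", "high"): "review in next maintenance cycle",
        ("high", "low"): "prioritise if the fix is cheap",
        ("high", "high"): "fix first",
    }
    return matrix[(exposure, exploitability)]

print(triage("high", "high"))  # -> fix first
```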
Add context before taking action:
- Is the vulnerable component exposed? Public API, admin interface, internal-only worker
- Is there a fix available? If not, you may need mitigation instead
- Did the issue come from the base image? One rebuild might remove a large cluster of findings
- Is this image ephemeral or hard to track? That increases operational risk
That last point matters. A 2025 SANS UK survey found 55% of breaches originated from unmonitored ephemeral images, highlighting blind spots in scanning and remediation tracking, as cited in Intruder’s write-up on agentless container image scanning.
The remediation order that usually works
Small teams need a sequence that removes the most risk with the least churn.
1. **Rebuild on a newer base image.** This often clears a large number of findings at once.
2. **Move to slimmer runtime images.** If the final image doesn’t need shells, package managers, or compilers, don’t ship them.
3. **Upgrade direct app dependencies.** Start with packages the app imports and executes.
4. **Fix risky configuration findings.** Running as root or granting unnecessary capabilities can matter more than a long tail of low-severity library issues.
5. **Document accepted risk explicitly.** Use ignore files with comments and expiry dates.
Ignore files are fine if you use them properly
A .trivyignore or equivalent isn’t cheating. It’s documentation. The bad version is an ignore file with no owner, no reason, and no review date.
Use comments that explain the decision:
```
# CVE-XXXX-YYYY
# Accepted temporarily.
# Package is present in build tooling only, not in final runtime path.
# Review on next base image refresh.
CVE-XXXX-YYYY
```
That turns “we ignored it” into “we assessed it”.
What doesn’t work
A few habits create noise fast:
- Chasing the total count instead of clustering by root cause
- Treating all criticals as equal without checking exposure
- Opening one ticket per CVE when one base image upgrade solves many
- Ignoring ephemeral workloads because they’re short-lived
- Running scans with no remediation owner
The right goal isn’t zero findings. It’s reducing the set of findings that can realistically hurt you.
Teams that need a broader operating model for recurring remediation should review established vulnerability management best practices. Pair that with a concrete scanning workflow and ownership model, not just periodic reports.
For teams formalising this process across products, a dedicated vulnerability scanning service overview can help frame what to automate, what to monitor continuously, and where human review is still worth the time.
From Scanning to a Secure Development Culture
The most useful outcome of container vulnerability scanning isn’t the report. It’s the behaviour change that follows.
When developers see that switching to a newer base image removes a large chunk of risk, they start making better defaults part of normal engineering work. When reviewers ask why a container needs root, broad capabilities, or extra packages, security stops being a late-stage interruption and becomes part of how the team builds.
What mature teams do differently
They don’t treat scanning as a once-a-quarter exercise. They treat it as a feedback loop.
That usually means:
- scanning every build
- rescanning stored images as advisories change
- using runtime controls for exposed workloads
- triaging findings by real risk, not just scanner labels
- reviewing ignored items instead of forgetting them
Start smaller than you think
You don’t need to secure every image in the company this week.
Start with one service:
- Add a build-time image scan
- Fail only on the most serious new issues
- Fix the base image and obvious config problems
- Generate an SBOM
- Add one or two runtime alerts for suspicious behaviour
That’s enough to move from blind trust to informed control.
Container vulnerability scanning works best when the team sees it as part of shipping quality software. Not as compliance theatre. Not as a separate department’s checklist. Just part of the engineering system, like tests, logs, and deploy reviews.
If you’re shipping on Supabase, Firebase, or mobile stacks and want the same kind of fast, actionable feedback for backend misconfigurations, leaked secrets, exposed RPCs, and real RLS weaknesses, AuditYour.App is built for that workflow. It gives small teams a practical way to catch critical security issues early, track regressions over time, and ship with more confidence without needing a dedicated security department.
Scan your app for this vulnerability
AuditYourApp automatically detects security misconfigurations in Supabase and Firebase projects. Get actionable remediation in minutes.
Run Free Scan