AI coding assistants have reshaped how software gets built. They scaffold features in seconds, suggest entire modules, and have made developers measurably more productive. But beneath that velocity sits an uncomfortable truth: AI-generated code is pulling in open-source dependencies at an unprecedented rate — and the security consequences are catching up fast.
Black Duck’s 2026 Open Source Security and Risk Analysis (OSSRA) report, drawn from audits of 947 commercial codebases across 17 industries, paints a stark picture. The average application now contains 581 known vulnerabilities in its open-source dependencies — a 107% increase year-on-year. That is not a rounding error. That is a structural shift in how risk accumulates in modern software.
TL;DR
- Open-source vulnerabilities per codebase have doubled to 581, driven by AI code generation pulling in more dependencies than ever before.
- Open-source components per application climbed 30% and file counts expanded 74% in the past year — AI assistants favour adding packages over writing bespoke code.
- 45% of AI-generated code contains OWASP Top 10 flaws, and 35 new CVEs in March 2026 alone were traced directly to AI-generated code.
- Licence compliance is a ticking time bomb: 68% of codebases have licence conflicts, partly due to AI “licence laundering” — generating copyleft-derived code without retaining the original licence.
- Only 24% of organisations perform comprehensive IP, licence, security, and quality evaluations on AI-generated code.
How AI Assistants Inflate Your Dependency Graph
When a developer writes code by hand, they tend to reach for packages they already know and trust. There is an implicit cost-benefit analysis: do I really need another dependency, or can I write this in thirty lines?
AI assistants short-circuit that calculus. Trained on millions of repositories, they default to importing packages — because that is what most training data does. Need date formatting? The AI reaches for moment.js (a library in maintenance mode since 2020) or date-fns rather than the native Intl.DateTimeFormat API. Need a simple HTTP request? In comes axios instead of the built-in fetch. Each suggestion is individually reasonable. In aggregate, they bloat the dependency tree dramatically.
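To make the trade-off concrete, here is a minimal sketch in TypeScript. The URL is a placeholder, and the dependency-free half assumes a runtime with fetch and Intl built in (Node 18+ or any modern browser):

```typescript
// The pattern assistants tend to suggest: two packages for two trivial tasks.
// import { format } from "date-fns";   // +1 dependency
// import axios from "axios";           // +1 dependency, plus transitive deps

// The dependency-free equivalent using platform built-ins.
// (Run as an ES module; https://api.example.com is a placeholder.)
const label = new Intl.DateTimeFormat("en-GB", {
  day: "2-digit",
  month: "short",
  year: "numeric",
}).format(new Date());

const response = await fetch("https://api.example.com/status");
const data: unknown = await response.json();

console.log(label, data);
```

Nothing here is exotic. The point is that every suggestion like the commented-out pair adds a node, plus its transitive children, to the graph you now have to patch.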
The OSSRA data bears this out: open-source components per application climbed 30% in a single year, while file counts expanded 74%. Open-source components now appear in 98% of audited codebases. Third-party code is not just a part of your application — it is your application.
The Vulnerability Mathematics
More dependencies mean more attack surface. That much is obvious. But the scale is worth pausing on.
At 581 vulnerabilities per codebase, even aggressive triage cannot keep pace: at even fifteen minutes of triage per finding, that is roughly 145 person-hours for a single scan of a single application. Not every vulnerability is exploitable in your context, of course — many sit in transitive dependencies or behind code paths your application never exercises. But distinguishing exploitable from theoretical requires exactly the kind of deep, contextual analysis that most teams do not have bandwidth for.
The situation is compounded by AI-generated code itself introducing flaws. Research shows that 45% of AI-generated code contains vulnerabilities from the OWASP Top 10 — injection flaws, broken access control, security misconfigurations. And the trend is accelerating: 35 new CVE entries in March 2026 were traced directly to AI-generated code, up from just six in January.
This creates a compounding problem. AI assistants generate vulnerable code and pull in vulnerable dependencies. The blast radius multiplies.
The Licence Time Bomb
Security gets the headlines, but licence compliance may be the sleeper risk that catches more organisations off guard.
The OSSRA found that 68% of audited codebases contained open-source licence conflicts, up from 56% the previous year. A significant driver is what researchers are calling “licence laundering” — AI assistants generating code snippets derived from copyleft sources like GPL without retaining the original licence information.
Your AI tool does not know — or care — that the function it just generated was learned from a GPL-licensed repository. It strips the attribution, wraps it in your proprietary codebase, and moves on. The legal exposure, however, remains. For companies approaching acquisition, an IPO, or enterprise contracts with licence compliance clauses, this is a material risk.
Yet only 54% of organisations evaluate AI-generated code for IP and licence risks. The governance gap is enormous.
The Governance Deficit
Perhaps the most striking finding from the OSSRA is how few organisations have adapted their processes to account for AI-assisted development.
While 76% check AI-generated code for security risks (good), only 56% assess quality and just 54% evaluate licence compliance. A mere 24% perform comprehensive evaluations covering IP, licence, security, and quality together.
This gap exists because most development teams bolted AI assistants onto existing workflows without rethinking their governance models. Code review processes designed for human-authored code do not catch the patterns that AI introduces — the unnecessary dependency, the subtly insecure default, the copyleft-derived snippet buried in a utility function.
What Development Teams Should Do Now
1. Audit Your Dependency Graph Ruthlessly
Run a full software composition analysis (SCA) scan across your codebase. Tools like Black Duck, Snyk, Socket.dev, and Trivy can map your dependency tree, flag known vulnerabilities, and identify licence conflicts. If you have not done this in the past quarter, you are flying blind.
2. Implement Dependency Policies in CI/CD
Block builds that introduce dependencies with critical or high-severity CVEs. Set thresholds for maximum dependency count per service. Require explicit approval for new top-level dependencies. Automate what humans cannot reliably catch at scale.
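As a sketch of what such a gate can look like in an npm-based project (the 40-dependency ceiling is an arbitrary illustrative threshold, and npm audit's JSON shape varies between npm versions), a short script can read `npm audit --json` and `package.json` and fail the build on violations:

```typescript
// ci-dependency-gate.ts: a minimal CI policy gate (sketch).
// Assumes an npm project; run with, e.g., npx tsx ci-dependency-gate.ts
import { execSync } from "node:child_process";
import { readFileSync } from "node:fs";

const MAX_TOP_LEVEL_DEPS = 40; // illustrative threshold, tune per service

// npm audit exits non-zero when issues are found, so capture output either way.
let auditJson = "";
try {
  auditJson = execSync("npm audit --json", { encoding: "utf8" });
} catch (err: any) {
  auditJson = err.stdout ?? "";
}

const audit = JSON.parse(auditJson);
const { critical = 0, high = 0 } = audit.metadata?.vulnerabilities ?? {};

const pkg = JSON.parse(readFileSync("package.json", "utf8"));
const topLevel = Object.keys(pkg.dependencies ?? {}).length;

const failures: string[] = [];
if (critical > 0 || high > 0) {
  failures.push(`${critical} critical / ${high} high severity advisories`);
}
if (topLevel > MAX_TOP_LEVEL_DEPS) {
  failures.push(`${topLevel} top-level dependencies (limit ${MAX_TOP_LEVEL_DEPS})`);
}

if (failures.length > 0) {
  console.error("Dependency policy violations:\n- " + failures.join("\n- "));
  process.exit(1); // fail the build
}
console.log("Dependency policy checks passed.");
```

Wire it into CI so the gate runs on every pull request. The value is consistency, not sophistication.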
3. Configure AI Assistants to Prefer Native APIs
Most AI coding tools can be configured with project-level instructions. Tell them to prefer native platform APIs, to avoid adding dependencies for trivial operations, and to flag when they suggest a new package. A well-crafted CLAUDE.md or equivalent project rules file can dramatically reduce dependency creep.
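The exact file name and directives depend on the tool, but as a sketch, project rules along these lines target the failure modes described above:

```markdown
# Project rules: dependency hygiene

- Prefer native platform APIs (fetch, Intl, URL, node:crypto) over
  third-party packages for common tasks.
- Do not add a dependency for anything achievable in roughly thirty
  lines of plain code.
- When suggesting a new package, say so explicitly and explain why a
  native alternative is insufficient.
- Never reproduce licence-ambiguous code verbatim; flag anything that
  may derive from a copyleft source.
```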
4. Add Licence Scanning to Your Pipeline
If your CI/CD pipeline does not already flag licence conflicts, add it. Tools like FOSSA and Black Duck can catch copyleft contamination before it reaches production. This is especially critical for any code generated by AI assistants.
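Dedicated tools do this far better, but if nothing is in place yet, even a rough first pass helps. The sketch below assumes an npm project with node_modules installed, checks only top-level package manifests, and uses an illustrative (not exhaustive) copyleft pattern. It is no substitute for a proper scanner or legal review:

```typescript
// licence-scan.ts: rough copyleft flagging for node_modules (sketch).
import { existsSync, readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

const COPYLEFT = /\b(GPL|AGPL|LGPL|SSPL|EUPL)\b/i; // illustrative pattern

function scan(dir: string): string[] {
  const hits: string[] = [];
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    if (!entry.isDirectory()) continue;
    const pkgDir = join(dir, entry.name);
    // Scoped packages (@scope/name) nest one level deeper.
    if (entry.name.startsWith("@")) {
      hits.push(...scan(pkgDir));
      continue;
    }
    const manifest = join(pkgDir, "package.json");
    if (!existsSync(manifest)) continue;
    const { name, license } = JSON.parse(readFileSync(manifest, "utf8"));
    // Only handles string licence fields; older packages use other shapes.
    if (typeof license === "string" && COPYLEFT.test(license)) {
      hits.push(`${name}: ${license}`);
    }
  }
  return hits;
}

const flagged = scan("node_modules");
if (flagged.length > 0) {
  console.error("Potential copyleft licences:\n- " + flagged.join("\n- "));
  process.exit(1);
}
console.log("No copyleft licences matched.");
```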
5. Treat AI-Generated Code as Untrusted Input
This is the mindset shift. AI-generated code should go through the same scrutiny as a pull request from a junior developer you have never worked with. Review the dependencies it introduces. Question whether each package is necessary. Check the licence. Verify the security posture.
6. Maintain a Software Bill of Materials (SBOM)
An up-to-date SBOM is no longer optional — it is a regulatory expectation in many jurisdictions and a practical necessity for vulnerability response. When the next Log4Shell-scale incident hits, you need to know within minutes whether you are affected.
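This is where an SBOM pays for itself operationally. As a sketch, assuming a CycloneDX-format sbom.json (the file name and example package are placeholders), the "are we affected?" question becomes a one-file lookup:

```typescript
// sbom-lookup.ts: check an SBOM for an affected package (sketch).
// Usage: npx tsx sbom-lookup.ts log4j-core 2.14.1
import { readFileSync } from "node:fs";

const [name, version] = process.argv.slice(2);
if (!name) {
  console.error("usage: sbom-lookup <package> [version]");
  process.exit(2);
}

// CycloneDX JSON lists dependencies under a top-level "components" array.
const sbom = JSON.parse(readFileSync("sbom.json", "utf8"));
const hits = (sbom.components ?? []).filter(
  (c: { name: string; version?: string }) =>
    c.name === name && (!version || c.version === version),
);

if (hits.length > 0) {
  console.error(`AFFECTED: ${name}${version ? "@" + version : ""} is in this build.`);
  process.exit(1);
}
console.log(`${name} not found in SBOM.`);
```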
The Bigger Picture
AI coding assistants are not going away, nor should they. The productivity gains are real and substantial. But velocity without governance is just fast chaos.
The 2026 OSSRA report is a wake-up call: the way teams adopt AI-assisted development today will determine their security and compliance posture for years to come. The organisations that treat dependency management as a first-class engineering concern — not an afterthought — will be the ones that capture AI’s productivity benefits without inheriting its risks.
At REPTILEHAUS, we help development teams build secure, maintainable software — whether that means auditing existing codebases, implementing CI/CD security pipelines, or integrating AI tools with proper governance guardrails. If your team is navigating the AI-assisted development transition and wants to get the security posture right, get in touch.