35 CVEs in One Month: Georgia Tech Study Reveals Vibe Coding's Hidden Cost
The numbers are stark: 35 new CVEs in March 2026 alone, all directly traced to AI-generated code. That's up from 6 in January and 15 in February, a nearly sixfold increase in three months. These aren't theoretical vulnerabilities from academic benchmarks. These are real exploits affecting real users, tracked by Georgia Tech's new "Vibe Security Radar" project.
If you’ve been following the vibe coding explosion, this shouldn’t surprise you. What should surprise you is that we’re only seeing the tip of the iceberg.
The Tracking Problem
Hanqing Zhao, who leads the Vibe Security Radar at Georgia Tech’s Systems Software & Security Lab, puts it bluntly: “Everyone is saying AI code is insecure, but nobody is actually tracking it. We want real numbers. Not benchmarks, not hypotheticals, real vulnerabilities affecting real users.”
His team tracks roughly 50 AI coding tools—Claude Code, GitHub Copilot, Cursor, Devin, Amazon Q, the usual suspects. They pull data from vulnerability databases, trace back to the commits that introduced bugs, then use AI agents to investigate whether AI-generated code was the culprit.
Claude Code shows up most in their data, but not necessarily because it’s worse. “Claude Code always leaves a signature,” Zhao notes. “Tools like Copilot’s inline suggestions leave no trace at all, so they’re harder to catch.”
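Those commit-message signatures are what make attribution possible at all. Here is a minimal sketch of the idea, assuming trailer strings like the ones Claude Code appends to commits by default; the exact patterns are an assumption for illustration, not the Radar team's actual tooling:

```javascript
// Sketch: flag commits whose messages carry a known AI-tool trailer.
// The signature patterns below are assumptions based on Claude Code's
// default commit attribution; tools like Copilot's inline suggestions
// leave no such trace, which is exactly the detection gap Zhao describes.
const AI_SIGNATURES = [
  /Co-Authored-By:\s*Claude\b/i,        // commit trailer
  /Generated with \[?Claude Code\]?/i,  // attribution line
];

function looksAIGenerated(commitMessage) {
  return AI_SIGNATURES.some((re) => re.test(commitMessage));
}

// A trailer-bearing message is flagged; a plain one is not.
console.log(
  looksAIGenerated("fix: handle null\n\nCo-Authored-By: Claude <noreply@anthropic.com>")
); // true
console.log(looksAIGenerated("fix: handle null")); // false
```

Run over a repository's `git log`, a check like this catches the tools that sign their work, and misses everything else, which is why Zhao's 5-10x undercount estimate is plausible.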
This detection problem is massive. Based on projects they can analyze in detail, Zhao estimates the real number is “five to 10 times what we currently detect, roughly 400 to 700 cases across the open-source ecosystem.”
The Recent Disasters
While Georgia Tech was publishing their alarming statistics, Anthropic was dealing with its own vibe coding nightmare. On March 31st, they accidentally shipped the entire source code of Claude Code in a debugging sourcemap—512,000 lines of internal code suddenly public.
Within days, security researchers found critical vulnerabilities in the exposed codebase. CVE-2026-21852 allows malicious repositories to steal API keys before users even know what’s happening. CVE-2025-59536 enables remote code execution through repository configuration files.
The irony is perfect: The tool that’s flooding the ecosystem with vulnerabilities just leaked its own source code full of vulnerabilities.
Why AI Code Fails Security
The fundamental problem isn't that AI is stupid—it's that AI optimizes for "works," not "secure." Language models generate code that satisfies prompts. If you ask for a login endpoint, you get something that logs users in. Whether it resists SQL injection or properly validates sessions is secondary.
This creates predictable patterns:
Missing input validation: AI takes user input at face value. It'll write ``db.query(`SELECT * FROM products WHERE name LIKE '%${query}%'`)`` without thinking twice about injection attacks.
Broken authentication: Tokens stored in localStorage, missing expiration, predictable session IDs, no rate limiting. The AI creates auth flows that authenticate, not auth flows that resist attack.
Data over-exposure: AI returns full database objects instead of selecting fields. Why would it think about whether users should see other users’ password hashes?
Hardcoded secrets: Training data is full of tutorials with hardcoded API keys. The AI learned that’s normal.
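The injection example above has a well-known fix: bind user input as a parameter instead of interpolating it into the SQL string. A hedged sketch, where `db` stands in for any driver that supports parameterized queries (pg, mysql2, and most others do), and the explicit column list also addresses the over-exposure pattern:

```javascript
// Safe pattern: user input travels as a bound parameter ($1), so the
// driver substitutes it after parsing and it can never alter the
// statement's structure. Selecting explicit columns (not SELECT *)
// avoids leaking fields users shouldn't see.
function searchProducts(db, userQuery) {
  return db.query(
    "SELECT id, name, price FROM products WHERE name LIKE $1",
    [`%${userQuery}%`]
  );
}

// A hostile input like "'; DROP TABLE products; --" is treated as
// literal search text, not as SQL.
```

The contrast with the interpolated version is the whole point: the query text is a constant, and everything user-controlled goes through the parameter array.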
The Human Factor Makes It Worse
The bigger problem is human psychology. When code comes from an AI that seems confident and competent, developers skip the careful review they’d apply to junior developer code. “Vibe coding” explicitly embraces this: generate fast, ship fast, fix later.
Except security vulnerabilities can’t be “fixed later” once attackers find them first.
A Reddit story from January perfectly illustrates this. A team used AI to generate their entire application—database queries, auth flows, everything. They were thrilled with the speed. The AI even suggested version “16.0.0” for their first release.
One week after deployment, their server was hacked.
The developer sharing the story wasn’t surprised. Looking at the codebase, they could immediately identify multiple vulnerabilities the AI had introduced. The attackers found them too.
The Enterprise Problem
This isn’t just indie hackers shipping broken MVPs. Kusari’s 2026 Application Security report found that 85% of organizations using AI coding tools lack the security processes to catch AI-introduced vulnerabilities.
Meanwhile, Claude Code alone accounts for over 4% of public commits on GitHub—and that number keeps climbing. More AI code means more AI vulnerabilities, and most teams aren’t equipped to catch them.
The math is brutal: If 60-65% of AI-generated codebases contain exploitable vulnerabilities (as security researchers estimate), and vibe coding adoption keeps accelerating, we’re heading toward a security crisis of unprecedented scale.
What This Means for You
If you’re using AI to generate code—and let’s be honest, most of us are—you need systematic security testing. Not code review. Not hoping for the best. Actual security testing that catches the predictable vulnerabilities AI introduces.
The good news is these vulnerabilities are predictable. AI makes the same categories of mistakes repeatedly: missing input validation, broken auth, data over-exposure, insecure dependencies. You can build checklists and automated tests specifically for AI-introduced flaws.
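As a sketch of what such an automated check might look like, here is a crude scanner for two of the flaw categories above. The regexes are illustrative heuristics, assumptions for the sake of example, not a substitute for a real SAST tool:

```javascript
// Checklist-style scan: flag source text matching two recurring
// AI-introduced flaw patterns. Heuristic only; expect false
// positives and negatives compared to a real static analyzer.
const CHECKS = [
  // SQL built with a template literal that interpolates a variable
  { name: "interpolated SQL", re: /\b(?:query|execute)\s*\(\s*`[^`]*\$\{/ },
  // A long literal assigned to something named like a credential
  { name: "hardcoded secret", re: /(?:api[_-]?key|secret|password)\s*[:=]\s*["'][A-Za-z0-9_\-]{12,}["']/i },
];

function scanSource(source) {
  return CHECKS.filter(({ re }) => re.test(source)).map(({ name }) => name);
}

console.log(scanSource('db.query(`SELECT * FROM users WHERE id = ${id}`)'));
// ["interpolated SQL"]
```

Wired into CI as a failing test, even a check this crude catches the exact pattern from the injection example earlier, which is the appeal of targeting AI's predictable mistakes.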
The bad news is most teams aren’t doing this. They’re trusting AI output the same way they’d trust senior developer output, then acting shocked when attackers find the holes AI left behind.
Georgia Tech's data shows us where this is heading: 35 CVEs in March, nearly sixfold growth in three months, and that's just the cases they can detect. The real number is likely 5-10x higher.
The vibe coding revolution promised us software at the speed of thought. What we got was vulnerabilities at the speed of thought too.
Your AI can ship code faster than ever. Can your security keep up?
Worried about AI-introduced vulnerabilities in your codebase? Our URL scanner can help identify common security patterns that AI tools often miss. Check your application’s security posture in seconds.