Tags: vibe-coding, security, supply-chain, api-keys

The LiteLLM Hack: When Your AI Proxy Becomes the Attack Vector

Last week, security researchers at Trend Micro disclosed that LiteLLM — the wildly popular Python proxy package that lets you swap between OpenAI, Anthropic, and dozens of other LLM providers — was compromised on PyPI. Malicious versions of the package harvested credentials, moved laterally through Kubernetes clusters, and established persistent backdoors.

If you vibe-coded anything in the last two weeks that touches an LLM API, you should probably stop reading and go check your dependencies. Right now.

What Happened

LiteLLM is one of those invisible infrastructure packages. Over 12 million monthly downloads. It sits between your app and whatever AI model you’re calling, routing requests, managing keys, handling retries. Thousands of vibe-coded projects pulled it in because ChatGPT or Claude told them to — “just pip install litellm and you’re good.”

On March 26th, attackers published poisoned versions of the package to PyPI. The malicious code was designed to:

  1. Harvest every API key and cloud credential passing through the proxy
  2. Establish persistence inside Kubernetes environments
  3. Enable lateral movement across connected systems

Think about that for a second. LiteLLM is literally the package you trust with all your API keys. It’s the centralized vault. And someone slipped a skeleton key under the door.

Why Vibe Coders Got Hit Hardest

Here’s the ugly truth: if you’re a traditional developer with a lockfile, dependency pinning, and a CI pipeline that runs security scans, you probably caught this. The compromised versions were flagged within days.

But vibe coders don’t do lockfiles. Vibe coders don’t pin versions. Vibe coders copy-paste pip install litellm from a ChatGPT response and move on to the next prompt.

This is the fundamental problem with the “just ship it” mentality. When AI generates your dependency list, it doesn’t check whether the latest version on PyPI is actually the real one. It doesn’t verify checksums. It doesn’t even know what version it’s recommending — it’s pattern-matching from training data that might be months or years old.

And this isn’t theoretical. The “State of Secrets Sprawl 2026” report found 28.65 million hardcoded secrets in public GitHub commits last year — a 34% increase. AI-assisted commits had a higher secret-leak rate than human-written code. Over 113,000 DeepSeek API keys alone were found floating around public repos.

The Bigger Picture: Slopsquatting

LiteLLM was a real package that got compromised. But there’s an even more insidious version of this attack called slopsquatting.

Here’s how it works: AI models sometimes hallucinate package names that don’t exist. Attackers register those fake names on npm or PyPI with malicious code inside. When the next person prompts an AI that hallucinates the same package name, they install the attacker’s code.
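One cheap defense is to verify a package actually exists on PyPI before installing it. The sketch below uses PyPI's real JSON endpoint (https://pypi.org/pypi/&lt;name&gt;/json); the fetch function is injectable so the check can be demonstrated offline with a stub, and the hallucinated package name in the demo is made up for illustration.

```python
import json
from urllib.request import urlopen
from urllib.error import HTTPError


def package_exists(name, fetch=None):
    """True if PyPI knows the package; hallucinated names return False."""
    url = f"https://pypi.org/pypi/{name}/json"
    if fetch is None:
        # Default: hit PyPI for real and report the HTTP status code.
        def fetch(u):
            try:
                with urlopen(u) as resp:
                    return resp.status
            except HTTPError as err:
                return err.code
    return fetch(url) == 200


# Offline demo: a stub fetcher stands in for the network call.
known = {"https://pypi.org/pypi/litellm/json": 200}
stub = lambda u: known.get(u, 404)

print(package_exists("litellm", fetch=stub))           # → True
print(package_exists("litellm-turbo-pro", fetch=stub))  # → False (hypothetical hallucinated name)
```

This doesn't prove a package is safe, of course — slopsquatted packages do exist on PyPI. It only catches the case where the AI invented a name nobody has registered yet, which is still the cheapest moment to stop.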

It’s supply chain poisoning that exploits AI’s tendency to confidently make things up. And it’s happening right now.

What You Can Actually Do

If you’re building with AI-generated code — and in 2026, who isn’t? — here’s the minimum:

Pin your dependencies. Every single one. Use exact versions in your requirements file. Yes, it’s annoying. Yes, it’s necessary.
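Checking for unpinned dependencies is easy to automate. Here's a minimal sketch that flags any line in a requirements file that isn't pinned to an exact version — it assumes the simple one-requirement-per-line format and is not a full PEP 508 parser:

```python
import re

# Matches "package==1.2.3" style exact pins (optionally with extras).
PINNED = re.compile(r"^[A-Za-z0-9._-]+(\[[^\]]+\])?==\S+")


def unpinned_lines(requirements_text):
    """Return requirement lines that are not pinned to an exact version."""
    bad = []
    for line in requirements_text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # skip blanks and comments
        if not PINNED.match(stripped):
            bad.append(stripped)
    return bad


reqs = """\
litellm==1.35.0
requests>=2.0
flask
"""
print(unpinned_lines(reqs))  # → ['requests>=2.0', 'flask']
```

Drop something like this into CI and fail the build on any output, and "I forgot to pin" stops being a failure mode.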

Use lockfiles. pip freeze, package-lock.json, poetry.lock — whatever your ecosystem uses. Lock it down.

Audit before you install. When AI suggests a package, spend 30 seconds checking: Does it exist? Is it maintained? How many downloads? Is the name suspiciously close to a popular package?
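The "suspiciously close name" check can also be automated. A minimal sketch using the standard library's difflib: compare a candidate name against an allowlist of popular packages (the short list here is purely illustrative) and flag near-misses that are one typo away.

```python
from difflib import SequenceMatcher

# Illustrative allowlist; a real check would use a much larger list
# of the most-downloaded packages in your ecosystem.
POPULAR = {"requests", "numpy", "litellm", "flask", "pandas"}


def suspicious(name, known=POPULAR, threshold=0.85):
    """Return the popular package a name nearly (but not exactly) matches."""
    name = name.lower()
    if name in known:
        return None  # exact match: fine
    for pkg in sorted(known):
        if SequenceMatcher(None, name, pkg).ratio() >= threshold:
            return pkg  # one typo away from a well-known package
    return None


print(suspicious("reqeusts"))  # → requests
print(suspicious("requests"))  # → None
```

A hit doesn't prove malice — it's a signal to slow down and look at the package page before running pip install.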

Rotate your keys. If you used LiteLLM in the affected timeframe, assume your keys are compromised. Rotate everything. Today.

Scan your code. Not manually — you’ll miss things. Automated scanning catches the hardcoded secrets, the vulnerable dependencies, the authentication holes that AI loves to leave wide open.
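To make the idea concrete, here's the skeleton of a regex-based secret scanner. The two patterns are illustrative stand-ins — production scanners ship hundreds of rules plus entropy checks — and the key in the demo string is fake:

```python
import re

# Illustrative patterns only; real scanners cover far more credential types.
SECRET_PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}


def find_secrets(source):
    """Return (label, match) pairs for every secret-shaped string found."""
    hits = []
    for label, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(source):
            hits.append((label, match.group()))
    return hits


code = 'client = OpenAI(api_key="sk-abc123abc123abc123abc123")'  # fake key
print(find_secrets(code))  # → [('openai_key', 'sk-abc123abc123abc123abc123')]
```

Even this toy version catches the single most common vibe-coding mistake: a live key pasted straight into source and committed.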

The Pattern

This is the part where I’d normally say “AI-generated code isn’t inherently bad.” And it isn’t. But the ecosystem around vibe coding — the speed, the lack of review, the implicit trust in whatever the model outputs — creates a perfect environment for supply chain attacks.

The attackers know this. They know that vibe-coded projects update dependencies without checking. They know that AI-generated code often includes unnecessary packages. They know that the person prompting Claude to “build me a SaaS” probably isn’t running npm audit afterward.

Thirty-five new CVEs from AI-generated code were disclosed in March alone, according to Georgia Tech’s Vibe Security Radar. That’s up from six in January. The trendline is not your friend.

Don’t wait until your keys show up on a Telegram channel. Run a scan on your project. It takes less time than your last AI prompt — and it might save you from being the next cautionary tale.

Scan your project →