Anthropic's Meltdown: From the Mythos Papers to the Claude Code Leak
Two configuration errors in five days. Same script, same company, same excuse: "human error."

First came Claude Mythos — that "dangerous" model that spooks governments. Around 3,000 internal assets, blog post drafts, and restricted documentation were accidentally exposed on a public CMS, discovered by independent researchers and brought to light by Fortune. Configuration error, they said. Human.

Then, March 31st, the knockout punch: the entire source code of Claude Code — the $2.5 billion ARR product — landed on npm because someone forgot to strip a 59.8 MB source map. Same exact pattern. Same exact excuse.

The Disaster Pattern
Anthropic is demonstrating a systemic problem that no "Constitutional AI" can fix: basic operational negligence.
- Mythos (March 26-27): Internal documents describing a model as a "hacker's dream," capable of autonomous large-scale cyber attacks, left in a public data store — "human error" in the CMS.
- Claude Code (March 31): 512,000 lines of TypeScript, the Kairos architecture, undercover mode, hidden model roadmaps — all accidentally published to npm, the same day a supply chain attack compromised axios packages for anyone installing via npm.
These aren't hacks. They aren't Chinese APTs (though those already infiltrated 30 organizations using Claude itself). These are rookie configuration mistakes at a company selling AI security to enterprises.
The Hypocrisy of "Safety First"
Anthropic built its brand on safety. They're the "responsible guys" of AI, the ones who pump the brakes to check for risks. Yet:
- Mythos: A blog draft describes the model as a "dream weapon for hackers," capable of autonomous exploits that "outpace defender efforts" — and they left it in a public bucket.
- Claude Code: The code reveals a "KAIROS" system running 24/7 in the background, self-reflecting and consolidating memories while you sleep — and they shipped it on npm like it was a regular package.
The truth is they don't have an AI safety problem. They have a basic IT security problem.
What Connects Both Leaks
Both incidents share the same DNA:
- "Public by default" configuration: Mythos's CMS published assets publicly by default, requiring manual action to hide them. Claude Code's build system includes source maps in production with no automatic checks.
- No DLP: no data-loss-prevention control flagged 3,000 sensitive files, or a 60 MB source map containing the entire codebase, before release.
- Reactive response: Both discovered by outsiders (security researchers and an intern), not internal controls.
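None of this requires exotic tooling. Below is a minimal sketch, in TypeScript, of the kind of prepublish guard that would have stopped a stray source map from shipping. The file list and blocked patterns are illustrative; in a real pipeline the list would come from something like `npm pack --dry-run` and the check would run as a `prepublishOnly` hook.

```typescript
// Hypothetical prepublish guard: flag files that should never appear
// in a published npm tarball, such as source maps or internal docs.
const BLOCKED_PATTERNS: RegExp[] = [
  /\.js\.map$/,     // compiled source maps (the Claude Code failure mode)
  /\.d\.ts\.map$/,  // declaration maps
  /^internal\//,    // illustrative: an internal-docs directory
];

function findLeakedFiles(packagedFiles: string[]): string[] {
  // Return every file that matches at least one blocked pattern.
  return packagedFiles.filter((file) =>
    BLOCKED_PATTERNS.some((pattern) => pattern.test(file))
  );
}

// Simulated tarball contents; a real hook would fail the publish
// with a non-zero exit code if this list is non-empty.
const files = ["dist/index.js", "dist/index.js.map", "package.json"];
console.log(findLeakedFiles(files)); // the source map is caught
```

The point is not the twenty lines of code; it is that the check is trivial to automate, and neither incident had anything automated standing in the way.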
The Real Cost
Mythos tanked cybersecurity vendor stocks (CrowdStrike and Palo Alto down 7%, Tenable down 11%). Claude Code handed Cursor, GitHub Copilot, and every competitor the complete recipe for the most advanced agentic architecture on the market.

But the worst damage is to credibility. How do you sell "AI safety" to enterprises when you can't even configure a CMS or an npm build?

Anthropic is building models that, by their own admission, "foreshadow a wave of large-scale AI cyberattacks," yet they can't protect their own basic digital assets. It's like a safe manufacturer leaving the keys taped to the door.
The Lesson
The message is clear: the problem isn't AI escaping control. It's humans not controlling anything.

And if this continues, the next "leak" might not be an accidental .js.map file — it could be the actual Mythos weights themselves, because someone left the S3 bucket public.
