Anthropic Just Had the Worst Best Month in AI History

Jonathan Alonso · April 3, 2026

While you were sleeping, the “safe AI company” accidentally leaked the existence of its most dangerous model, exposed half a million lines of source code, picked a fight with the Pentagon, launched a PAC, and signed an AI-safety agreement with a foreign government. And somehow, they’re still winning.

March 2026 was supposed to be Anthropic’s victory lap. Revenue run rate approaching $20 billion. A $380 billion valuation that makes Goldman Sachs look like a lemonade stand. Claude Opus 4.6 shipping to rave reviews. A $100 million partner network. An IPO on the horizon.

Instead, they tripped over their own shoelaces — twice in one week — and the whole world got to watch.

The Mythos Leak: When Your Own Blog Post Becomes the Story

On March 26, Fortune dropped a bombshell: Anthropic had accidentally left nearly 3,000 internal files sitting in a publicly accessible data store. Among them? A draft blog post announcing Claude Mythos — a model the company itself described as posing “unprecedented cybersecurity risks.”

Let that sink in. The company that has built its entire brand around responsible AI development accidentally told the world, through its own unsecured CMS, that its next model is so powerful at finding and exploiting software vulnerabilities that it could fundamentally reshape the cybersecurity landscape.

The draft described Mythos under the internal codename “Capybara” — a new model tier above Opus. Not a bigger Opus. A whole new category. “Larger and more intelligent than our Opus models — which were, until now, our most powerful,” the leak read. The company said Capybara achieves “dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity” compared to Claude Opus 4.6.

Cybersecurity researchers Roy Paz of LayerX Security and Alexandre Pauwels of Cambridge confirmed the scope: thousands of unpublished blog assets, publicly searchable, with default public URLs — because someone forgot to change a CMS setting.

Anthropic’s response was the corporate equivalent of a shrug: “Human error in the configuration of our content management system.” Early drafts, nothing to see here.

Except there was plenty to see. The draft blog post wasn’t just a product announcement — it was a warning. Anthropic was essentially arguing that models like Mythos can find zero-day vulnerabilities faster than defenders can patch them. That’s not a product pitch. That’s an existential threat assessment published by accident.

One Week Later: 512,000 Lines of Code, Exposed

If the Mythos leak was embarrassing, what happened on April 1 was something else entirely.

When Anthropic pushed Claude Code version 2.1.88 to npm, someone forgot to exclude the source maps. A security researcher named Chaofan Shou spotted it almost immediately: nearly 2,000 source code files and 512,000 lines of code — the complete architectural blueprint for one of Anthropic’s most important products — were sitting there for anyone to download.
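Why are source maps so damaging? A compiled .js.map file usually embeds the original source verbatim in its sourcesContent field, so publishing the maps is effectively publishing the TypeScript itself. The fix is boring: keep .map files out of the tarball. Here is a minimal sketch of that kind of guard (the file name is hypothetical; it assumes npm 7+, which supports `npm pack --dry-run --json`):

```typescript
// prepublish-check.ts (hypothetical): fail the release if source maps
// would ship in the npm tarball. Assumes npm 7+ for `npm pack --json`.
import { execSync } from "node:child_process";

interface PackedFile {
  path: string;
  size: number;
}

// `npm pack --dry-run --json` reports the exact file list that `npm publish`
// would upload, without writing a real tarball to disk.
const report: Array<{ files: PackedFile[] }> = JSON.parse(
  execSync("npm pack --dry-run --json", { encoding: "utf8" })
);

const maps = report[0].files.filter((f) => f.path.endsWith(".map"));

if (maps.length > 0) {
  console.error(`Refusing to publish: ${maps.length} source map(s) in tarball:`);
  for (const f of maps) console.error(`  ${f.path}`);
  process.exit(1);
}
console.log("No source maps in the tarball. Safe to publish.");
```

Wired into package.json as a `prepublishOnly` script, the 2.1.88 class of mistake becomes a failed release instead of a public leak.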

Claude Code isn’t some side project. It’s the command-line tool that lets developers use Claude to write, edit, and manage code. It’s so good that, according to the Wall Street Journal, OpenAI pulled the plug on its video generation product Sora partly to refocus resources on competing with Claude Code.

What leaked wasn’t the AI model itself. It was everything around it — the instructions, the tool definitions, the scaffolding that tells Claude how to behave, what tools to use, and where its limits are.
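To make “scaffolding” concrete: the leaked internals aren’t public, but Anthropic’s documented Messages API shows the general shape of a tool definition. A sketch using the official TypeScript SDK, with a hypothetical tool:

```typescript
// A hypothetical "edit_file" tool in Anthropic's documented tool-use format.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const editFileTool: Anthropic.Tool = {
  name: "edit_file",
  // The description is what the model reads to decide when to call the tool.
  description:
    "Replace an exact string in a file on disk. Use for small, targeted edits only.",
  input_schema: {
    type: "object",
    properties: {
      path: { type: "string", description: "File to edit" },
      old_text: { type: "string", description: "Exact text to replace" },
      new_text: { type: "string", description: "Replacement text" },
    },
    required: ["path", "old_text", "new_text"],
  },
};

const response = await client.messages.create({
  model: "claude-sonnet-4-5", // illustrative model id
  max_tokens: 1024,
  tools: [editFileTool],
  messages: [{ role: "user", content: "Fix the typo in README.md" }],
});
console.log(response.content); // may include a tool_use block invoking edit_file
```

Multiply that by dozens of tools, plus the prompts that govern when each one fires and what it may touch, and you have the blueprint developers were suddenly reading.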

But here’s where it gets interesting: developers immediately started digging through the code and found references to features that haven’t been announced yet.

Kairos: The Always-On Daemon

Buried in the Claude Code source was Kairos — a persistent background daemon designed to keep running even after you close the terminal. It would use periodic “tick” prompts to review whether new actions were needed, and a “PROACTIVE” flag for “surfacing something the user hasn’t asked for and needs to see now.”
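The leak describes behavior, not implementation, but the pattern is easy to picture. A minimal sketch of that tick loop in TypeScript, with every name hypothetical:

```typescript
// kairos-sketch.ts (hypothetical): the tick-loop pattern the leak describes.
type TickResult =
  | { action: "idle" }                        // nothing worth doing this tick
  | { action: "proactive"; message: string }; // surface something unprompted

// Stand-in for a model call that reviews recent context on each tick.
async function reviewTick(pending: string[]): Promise<TickResult> {
  return pending.length > 0
    ? { action: "proactive", message: `${pending.length} item(s) need attention.` }
    : { action: "idle" };
}

const TICK_INTERVAL_MS = 60_000; // assumed cadence; the leak doesn't specify one
const pending: string[] = [];

// The daemon outlives the interactive session: it keeps ticking after the
// terminal closes, asking on each tick whether new action is needed.
setInterval(async () => {
  const result = await reviewTick(pending);
  if (result.action === "proactive") {
    // This branch is what the leaked "PROACTIVE" flag appears to gate:
    // pushing something the user hasn't asked for but needs to see now.
    console.log(`[kairos] ${result.message}`);
    pending.length = 0;
  }
}, TICK_INTERVAL_MS);
```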

AutoDream: AI That Dreams While You Sleep

Even more striking was AutoDream — a memory consolidation system that activates when a user goes idle. The prompt tells Claude: “You are performing a dream — a reflective pass over your memory files.” It scans transcripts for “new information worth persisting,” consolidates duplicates, prunes outdated memories, and watches for “existing memories that drifted.”
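Strip away the poetry and a “dream” is a compaction pass over a memory store. A simplified sketch of the three steps the prompt describes (persist, consolidate, prune), with hypothetical data shapes:

```typescript
// autodream-sketch.ts (hypothetical shapes; the leak describes behavior, not code).
interface Memory {
  id: string;
  text: string;
  lastConfirmed: Date; // when a transcript last supported this memory
}

// One idle-time "dream": persist new facts, merge duplicates, prune drift.
function dream(memories: Memory[], transcript: string[], now: Date): Memory[] {
  // 1. Scan the transcript for new information worth persisting
  //    (a naive marker here; the real pass would be a model call).
  const fresh: Memory[] = transcript
    .filter((line) => line.startsWith("remember:"))
    .map((line, i) => ({
      id: `m-${now.getTime()}-${i}`,
      text: line.slice("remember:".length).trim(),
      lastConfirmed: now,
    }));

  // 2. Consolidate duplicates, keeping the most recently confirmed copy.
  const byText = new Map<string, Memory>();
  for (const m of [...memories, ...fresh]) {
    const key = m.text.toLowerCase();
    const existing = byText.get(key);
    if (!existing || m.lastConfirmed > existing.lastConfirmed) byText.set(key, m);
  }

  // 3. Prune memories that have "drifted": nothing has confirmed them lately.
  const MAX_AGE_MS = 30 * 24 * 60 * 60 * 1000; // assumed 30-day window
  return [...byText.values()].filter(
    (m) => now.getTime() - m.lastConfirmed.getTime() < MAX_AGE_MS
  );
}

// One dream pass over a tiny transcript:
const kept = dream([], ["remember: user prefers tabs over spaces"], new Date());
console.log(kept.map((m) => m.text)); // ["user prefers tabs over spaces"]
```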

This isn’t vaporware. This is infrastructure being built for the next generation of always-on AI agents.

The Pentagon Fight: When “Safe AI” Becomes a National Security Issue

While the leaks were dominating tech headlines, Anthropic was in a different kind of war — with the U.S. government.

Claude had been the AI model of choice for the U.S. government for over a year, and the first frontier system cleared for classified use. In January, it was reportedly used in the capture of Venezuelan President Nicolás Maduro in Caracas.

Then the relationship fell apart. On February 27, the Trump Administration designated Anthropic a supply-chain risk to national security — the first time the U.S. has slapped that label on an American tech company. President Trump ordered all federal use of Anthropic software to cease. Secretary of Defense Pete Hegseth announced that any company doing business with the Pentagon had to choose sides.

Anthropic fought back and won an injunction. But the damage was done. The company that positions itself as the careful, responsible AI builder was now labeled a threat by its own government.

Anthropic Fights Back: The Political Playbook

If you can’t beat the government, join the game.

On April 3, Anthropic filed documents to create AnthroPAC — a political action committee that plans to make contributions to both parties during the 2026 midterms. The PAC will be funded by voluntary employee contributions capped at $5,000.

This isn’t happening in a vacuum. AI companies have already contributed a staggering $185 million to midterm races. In February, the New York Times reported that a Super PAC called Public First received at least $20 million from Anthropic, financing ad campaigns supporting a specific regulatory agenda.

The message is clear: Anthropic isn’t going to sit on the sidelines while Washington decides its future.

Going Global: The Australia Partnership

On March 31, Anthropic CEO Dario Amodei met with Australian Prime Minister Anthony Albanese in Canberra to sign a Memorandum of Understanding on AI safety research. Under the agreement, Anthropic will share findings on emerging AI capabilities and risks, participate in joint safety evaluations, and collaborate with Australian universities.

This is Anthropic playing the long game — building government relationships outside the U.S. while its domestic relationship with the Pentagon is in shambles.

The $100 Million Partner Network

Also in March, Anthropic announced the Claude Partner Network — a $100 million investment in training, technical support, and joint go-to-market efforts for enterprise partners. They’ve been poaching executives from Salesforce, ServiceNow, and Asana to build out the channel organization.

Spotify reported up to a 90% reduction in engineering time on code migrations using Claude Cowork. That’s the kind of number that makes enterprise buyers pay attention.

The IPO Question

With a $380 billion valuation, $30 billion raised, and revenue approaching $20 billion on a run-rate basis, Anthropic is expected to IPO in Q3 or Q4 of 2026. The Nvidia and Microsoft partnership — worth up to $15 billion in combined investment — brings more than a gigawatt of AI compute capacity online this year.

The Bigger Picture

Here’s what’s wild about Anthropic’s March: none of this is contradictory. The company genuinely is building safety infrastructure. It genuinely is the most transparent frontier AI lab about risks. And it genuinely did, by accident, leak the existence of its most dangerous model and the source code of its most important product within a week of each other.

The leaks didn’t reveal a company cutting corners on safety. They revealed a company moving so fast that its operational security can’t keep up with its ambition. That’s a different problem — but it’s still a problem.

When your brand promise is “we’re the careful ones,” two accidental data exposures in seven days is not a great look. When you’re warning that your own model poses “unprecedented cybersecurity risks” — in a blog post you didn’t mean to publish — you’ve undermined your own narrative.

But here’s the thing: they’re still winning. Claude is considered world-class. The code that leaked was described by developers as “production-grade,” not a hacky API wrapper. The features buried in that code — Kairos, AutoDream — point to a company that’s building infrastructure for a future most competitors haven’t even started designing for.

Anthropic is having the kind of month where you can simultaneously be the most impressive and most embarrassing company in your industry. That’s not a contradiction. That’s what it looks like when a company grows faster than its processes can handle.

The question isn’t whether Anthropic will survive March 2026. It will. The question is whether the “responsible AI” brand can survive the next leak. Because at this rate, there will be a next one.


Sources: Fortune, TechCrunch, CNN, Ars Technica, Bloomberg, Axios, Reuters, TIME, Los Angeles Times

Jonathan Alonso

Digital Marketing Strategist

Seasoned digital marketing leader with 20+ years of experience in SEO, PPC, and digital strategy. MBA graduate, Marketing Manager at Crunchy Tech, CMO at YellowJack Media, and freelance SEO consultant based in Orlando, FL. When I'm not optimizing campaigns or exploring AI, you'll find me on adventures with my wife Kristy, studying the Bible, or hanging out with our Jack Russell, Nikki.