Anthropic Spent April 1 Trying to Unsend Its Own Source Code
Published April 1, 2026
By CogLab Editorial Team · Reviewed by Knyckolas Sutherland
Anthropic spent most of April 1 trying to put the Claude Code source genie back in the bottle. The leak itself happened on March 31, when the npm package for Claude Code accidentally shipped with its full 500,000-line TypeScript source map included. By Wednesday morning, the code had been mirrored to thousands of GitHub repositories, analyzed by independent developers, and referenced in at least a dozen blog posts.
Anthropic responded by sending DMCA takedown notices to GitHub for every repository it could find that contained any of the leaked code. GitHub, which processes DMCA notices in bulk, took down the listed repositories automatically. Many of those repositories turned out not to contain the leaked code at all. They were innocent forks or projects that happened to match the filename patterns in the notice. Anthropic apologized later in the day. The takedowns continued.
The incident matters less for what it reveals about Claude Code and more for what it reveals about how companies handle an open-source emergency. There is no good playbook for this. The code is out. Every hour the legal team spends on takedowns is an hour the engineering team is not spending on hardening the system against new attacks that exploit the now-public code.
Why aren't we talking about this more clearly? Because the AI press is of two minds about leaks. On one hand, they are a gift: 500,000 lines of production AI agent code is a huge corpus of real architectural decisions to analyze. On the other hand, they expose the company to competitors, security researchers, and malicious actors all at once. The coverage has been split between "this is fascinating" and "this is dangerous," and both framings are true.
For operators, the Anthropic response is the part worth studying. When something like this happens to your own company, the reflexive move is always to contain. Stop the spread. Pull the code back. Send the takedown notices. The problem is that containment rarely works on the internet, and the time spent on it often causes more damage than the leak itself.
The better playbook is two-track. Track one is the containment work that might prevent the leak from spreading further. That is worth doing for about the first hour; after that, the leak is priced in, and further takedown effort hits diminishing returns. Track two is the hardening work. Assume the worst: an attacker has already read the leaked code carefully, found a weakness, and is preparing a specific exploit. What do you do in the next forty-eight hours to make that exploit harder?
Anthropic's two-track response on day one was uneven. The takedown effort was aggressive and clumsy. The hardening communication was clearer. They published specific guidance on updating, rotating any credentials that had been embedded in developer workflows, and changing defaults on a handful of settings that the leak made exploitable.
The larger pattern is that AI systems live on npm, GitHub, and package registries the same way everything else does. Supply-chain security is not somehow different for AI code. The Anthropic leak is a reminder that the people who ship AI products are running the same kinds of software pipelines as every other software company, with the same kinds of human errors.
For an operator, the takeaway is practical. If you run a system that includes an AI component, your supply-chain security plan has to include that component as a first-class concern. Which packages can trigger updates without review? What happens if one of those packages is suddenly malicious? Is your build pipeline signed end to end? Those are the questions the Anthropic leak should push you to ask, regardless of whether you use Claude Code.
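The first of those questions can be turned into a quick audit: scan a package manifest for dependency ranges that allow updates to flow in without review. A minimal sketch in Python, where the `unpinned_dependencies` helper and the sample manifest are illustrative, not taken from Anthropic's tooling:

```python
import json
import re

# A dependency counts as "pinned" only if it names one exact version.
# Ranges like ^1.2.0, ~1.2.0, 1.x, or "latest" let the package manager
# pull in new code without a human looking at it first.
PINNED = re.compile(r"^\d+\.\d+\.\d+$")

def unpinned_dependencies(package_json: str) -> list[str]:
    """Return dependencies whose version spec is not an exact pin."""
    manifest = json.loads(package_json)
    flagged = []
    for section in ("dependencies", "devDependencies"):
        for name, spec in manifest.get(section, {}).items():
            if not PINNED.match(spec):
                flagged.append(f"{name}@{spec}")
    return flagged

# Hypothetical manifest for demonstration only.
manifest = """{
  "dependencies": {
    "@anthropic-ai/claude-code": "^1.2.0",
    "left-pad": "1.3.0"
  },
  "devDependencies": {
    "typescript": "latest"
  }
}"""

print(unpinned_dependencies(manifest))
# → ['@anthropic-ai/claude-code@^1.2.0', 'typescript@latest']
```

Exact pins plus a committed lockfile do not make a malicious release impossible, but they turn every update into a deliberate, reviewable event rather than a side effect of the next build.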
The bigger cultural point is that you cannot unsend code on the internet. Companies that respond to leaks by over-using legal tools look brittle. Companies that respond by moving fast on mitigation and communicating clearly with users look resilient. Everyone is going to leak something eventually. What separates the companies who recover from the ones who do not is whether they understand which tools actually work in the first forty-eight hours.
Frequently Asked Questions
Did Anthropic expose customer data in this leak?
Based on analyses by independent researchers, no. The leak contained the client-side agent harness code, not model weights and not customer data. The risk is in what the exposed code reveals about defaults and security assumptions, not in any personal information being exposed.
Why did the takedowns hit innocent repos?
DMCA takedown notices are sent by pattern match. Anthropic's legal team included filenames that existed in both the leaked code and in many unrelated open-source projects. GitHub processes notices quickly, and the review happens after the takedown. It is a known failure mode of the DMCA system.
What should I do if I was using Claude Code when the leak happened?
Update to the latest version that Anthropic shipped as a fix. Rotate any credentials that were embedded in your developer workflow. Follow Anthropic's specific guidance on which settings to change. The leak itself does not put your credentials at risk; defensive rotation is just cheap insurance.
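Before rotating, it helps to know whether any keys were actually committed. A rough sweep, assuming only that your secrets use Anthropic's documented `sk-ant-` API key prefix (adjust the pattern for any other credentials your workflow embeds):

```shell
# Sweep a project tree for Anthropic-style API keys before rotating.
# The character class and minimum length are deliberately loose; this
# finds candidates for a human to review, not a definitive list.
grep -rnE 'sk-ant-[A-Za-z0-9_-]{8,}' . \
  --exclude-dir=node_modules --exclude-dir=.git
```

A hit in tracked files means the key should be treated as burned even after rotation, since it survives in git history.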