Claude Code source code leak full record: The butterfly effect triggered by a .map file
Written by: Claude
I. Origin
In the early hours of March 31, 2026, a tweet in the developer community sparked a major uproar.
Chaofan Shou, an intern at a blockchain security company, found that an Anthropic official npm package included a source map file, exposing the complete source code of Claude Code to the public internet. He immediately made this discovery public on X and included a direct download link.
The post spread through the developer community like wildfire. Within hours, more than 512,000 lines of TypeScript code had been mirrored to GitHub and were being analyzed in real time by thousands of developers.
This was the second major source-code leak incident Anthropic experienced in less than a week.
Just five days earlier (March 26), a CMS configuration error at Anthropic caused nearly 3,000 internal files to be exposed, including draft blog posts for the upcoming “Claude Mythos” model.
II. How did the leak happen?
The technical cause of this incident is almost embarrassingly simple: at the root of it was a source map (.map) file that was incorrectly included in the npm package.
Source maps exist to map minified, obfuscated production code back to the original source, making it easier to pinpoint exact line numbers when debugging errors. Inside this particular .map file was a link to a zip archive stored in Anthropic's own Cloudflare R2 storage bucket.
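To make concrete why shipping a .map file is equivalent to shipping the source itself, here is a minimal sketch of the source map format (this is an illustrative toy, not the leaked file): the minified bundle ends with a `//# sourceMappingURL=...` comment, and the referenced JSON can carry the entire original source in its `sourcesContent` field.

```typescript
// Toy source map object in the standard v3 shape. A bundler appends
//   //# sourceMappingURL=cli.js.map
// to the minified output, and the .map file is plain JSON like this:
const exampleSourceMap = {
  version: 3,
  file: "cli.js",
  sources: ["src/index.ts"], // original file paths
  sourcesContent: [
    'console.log("original, unminified source travels here");',
  ],
  mappings: "AAAA", // VLQ-encoded position mappings
};

// Anyone who can fetch the .map file can recover the original source
// verbatim, no reverse engineering required:
const recovered = exampleSourceMap.sourcesContent?.[0] ?? "";
console.log(recovered.includes("unminified")); // true
```

In Claude Code's case the .map file went one step further: instead of only embedding sources, it also pointed at a zip archive of the full tree.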
Shou and other developers downloaded that zip package directly, with no hacking required. The files were simply there—fully public.
The affected version was v2.1.88 of @anthropic-ai/claude-code, which included a 59.8MB JavaScript source map file.
In its response to The Register, Anthropic acknowledged that "a similar source-code leak also occurred in an earlier Claude Code version, in February 2025." In other words, the same mistake happened twice within 13 months.
Ironically, Claude Code includes a system called "Undercover Mode" designed specifically to prevent Anthropic's internal code names from accidentally leaking into git commit histories… and then an engineer packaged the entire source code into a .map file.
Another likely contributor is the toolchain itself: Anthropic acquired Bun at the end of last year, and Claude Code is built on Bun. On March 11, 2026, someone filed a bug report in Bun's issue tracker (#28001) pointing out that, contrary to what the official documentation claims, Bun still generates and emits source maps in production mode. That issue remains open to this day.
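For context, the behavior described in that issue concerns Bun's bundler API. A minimal sketch of a release build that explicitly requests no source maps might look like the following (option names follow Bun's documented `Bun.build` signature; the entrypoint path is hypothetical, and this is not Anthropic's actual build script):

```typescript
// Hypothetical release build: explicitly disable source map emission.
// If the behavior reported in issue #28001 is accurate, a .map file may
// still be produced regardless, which is why a post-build check on the
// output directory remains worthwhile.
await Bun.build({
  entrypoints: ["./src/cli.ts"], // hypothetical path
  outdir: "./dist",
  minify: true,
  sourcemap: "none", // should suppress source maps in production builds
});
```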
In response, Anthropic’s official statement was brief and measured: “No user data or credentials were involved in or leaked from this. This was a human error during the release packaging process, not a security vulnerability. We are moving forward with measures to prevent such events from happening again.”
III. What was leaked?
Code scale
The leaked material covered about 1,900 files and more than 500,000 lines of code. This is not model weights; it is the engineering implementation of Claude Code's entire "software layer," including core architecture such as the tool-calling framework, multi-agent orchestration, the permission system, the memory system, and more.
Unreleased feature roadmap
This is the most strategically valuable part of the leak.
KAIROS autonomous guardian process: This feature code name, mentioned more than 150 times, comes from the Ancient Greek word for "the opportune moment," and represents Claude Code's fundamental shift toward a "resident background Agent." KAIROS includes a process named autoDream that performs "memory consolidation" while the user is idle: merging fragmented observations, eliminating logical contradictions, and turning vague insights into deterministic facts. When the user returns, the Agent's context has already been cleaned up and is highly relevant.
Internal model code names and performance data: The leaked contents confirm that Capybara is an internal code name for a Claude 4.6 variant and Fennec corresponds to Opus 4.6, while the unreleased Numbat is still in testing. Code comments also reveal that Capybara v8 has a false-statement rate of 29–30%, a regression from v4's 16.7%.
Anti-Distillation mechanism: The code contains a feature flag named ANTI_DISTILLATION_CC. When enabled, Claude Code injects fake tool definitions into API requests, with the goal of polluting API traffic data that competitors might use for model training.
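Based purely on the article's description, the mechanism can be sketched as follows. Every name here (`ANTI_DISTILLATION_CC` aside, which the leak names, the `ToolDef` shape and `injectDecoys` helper are invented for illustration) is a reconstruction, not the leaked implementation:

```typescript
// Illustrative sketch of the described anti-distillation behavior.
interface ToolDef {
  name: string;
  description: string;
}

const ANTI_DISTILLATION_CC = true; // feature flag named in the leak

function injectDecoys(realTools: ToolDef[]): ToolDef[] {
  if (!ANTI_DISTILLATION_CC) return realTools;
  // Fake tool definitions pollute any scraped API traffic that a
  // competitor might harvest as training data, while the real tools
  // remain available to the model unchanged.
  const decoys: ToolDef[] = [
    { name: "quantum_fs_sync", description: "does nothing; decoy" },
    { name: "legacy_cache_warm", description: "does nothing; decoy" },
  ];
  return [...realTools, ...decoys];
}

const payload = injectDecoys([
  { name: "bash", description: "run a shell command" },
]);
console.log(payload.length); // 3: one real tool plus two decoys
```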
Beta API feature list: The constants/betas.ts file reveals all Claude Code’s beta features negotiated with the API, including a 1 million token context window (context-1m-2025-08-07), AFK mode (afk-mode-2026-01-31), task budget management (task-budgets-2026-03-13), and other capabilities that have not yet been made public.
An embedded Pokémon-style virtual companion system: The code even hides a complete virtual companion system (Buddy), including species rarity, shiny variants, procedurally generated attributes, and a "soul description" written by Claude during its first incubation. Companion types are determined by a deterministic pseudo-random number generator seeded from a hash of the user ID, so each user always gets the same companion.
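The determinism trick is a standard one and easy to illustrate: hash the stable user ID, then index a species table with the result. The hash choice (FNV-1a) and the species names below are stand-ins invented for this sketch, not the leaked values:

```typescript
// Sketch of deterministic companion assignment from a user ID.
const SPECIES = ["Ember Fox", "Tide Turtle", "Static Moth", "Moss Golem"];

// FNV-1a string hash (a stand-in for whatever hash the real code uses).
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // keep the value unsigned 32-bit
  }
  return h;
}

function companionFor(userId: string): string {
  // Same ID, same hash, same species: stable by construction, with no
  // per-user state to store.
  return SPECIES[fnv1a(userId) % SPECIES.length];
}

console.log(companionFor("user-42") === companionFor("user-42")); // true
```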
IV. Concurrent supply-chain attacks
This incident did not happen in isolation. In the same time window as the source-code leak, the axios package on npm was targeted by a separate supply-chain attack.
Between 00:21 and 03:29 UTC on 2026-03-31, installing or updating Claude Code via npm could inadvertently pull in a malicious axios version (1.14.1 or 0.30.4) containing a remote access trojan (RAT).
Anthropic advised affected developers to treat the host as fully compromised, rotate all keys, and reinstall the operating system.
The temporal overlap of these two events made the situation even more chaotic and dangerous.
V. Impact on the industry
Direct harm to Anthropic
For a company with annualized revenue of $19 billion that is in a period of rapid growth, this leak is not merely a security lapse; it is a hemorrhage of strategic intellectual property.
At least some of Claude Code's capabilities come not from the underlying large language model itself but from the software "framework" built around it: the framework tells the model how to use tools, and it provides the guardrails and instructions that standardize model behavior.
Those guardrails and instructions are now clearly visible to competitors.
A warning to the entire AI Agent tool ecosystem
This leak will not take down Anthropic, but it hands every competitor a free engineering textbook: how to build production-grade AI programming agents, and which tool directions deserve priority investment.
The real value of the leaked content is not the code itself but the product roadmap revealed by the feature flags. KAIROS and the anti-distillation mechanism are strategic details that competitors can now anticipate and counter first. Code can be refactored, but a strategic surprise, once exposed, cannot be taken back.
VI. Deeper takeaways for Agent Coding
This leak is a mirror reflecting several core propositions in today’s AI Agent engineering:
1. An Agent’s capability boundaries are largely determined by the “framework layer,” not the model itself
The exposure of 500,000 lines of Claude Code reveals a fact that matters to the entire industry: the same underlying model, paired with different tool orchestration frameworks, memory management mechanisms, and permission systems, will produce radically different Agent capabilities. This means "who has the strongest model" is no longer the only competitive dimension; "who has the more refined framework engineering" is just as critical.
2. Long-range autonomy will be the next core battleground
The existence of the KAIROS guardian process suggests that the next round of industry competition will focus on enabling Agents to keep working effectively even without human supervision. Background memory consolidation, cross-session knowledge transfer, autonomous reasoning during idle time: once these capabilities mature, they will fundamentally change the basic mode of collaboration between Agents and humans.
3. Anti-distillation and intellectual property protection will become a new foundational topic in AI engineering
Anthropic implemented an anti-distillation mechanism at the code level, signaling that a new engineering domain is forming: how to prevent one's own AI systems from being used by competitors for training-data collection. This is not only a technical issue; it will evolve into a new battleground of legal and commercial conflict.
4. Supply-chain security is the Achilles’ heel of AI tools
When AI programming tools themselves are distributed through public package managers like npm, they face supply-chain attack risks like any other open-source software. And the special nature of AI tools is that once a backdoor is inserted, the attacker gains not just code execution rights, but deep penetration across the entire development workflow.
5. The more complex the system, the more you need automated release guards
"A misconfigured .npmignore or files field in package.json can expose everything." No team building AI Agent products should have to pay such an expensive price to learn this lesson: automated review of release contents in the CI/CD pipeline should be standard practice, not a remedy adopted after a leak has already happened.
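Such a guard can be very small. `npm pack --dry-run --json` lists exactly the files a publish would ship; a CI step can fail the build if anything forbidden appears in that list. A minimal sketch of the check itself (the patterns and file list here are illustrative assumptions, not a complete policy):

```typescript
// Release guard: reject the tarball if it contains files that should
// never be published, such as source maps, env files, or raw sources.
function findForbiddenFiles(packedFiles: string[]): string[] {
  const forbidden = [/\.map$/, /\.env$/, /^src\//];
  return packedFiles.filter((f) => forbidden.some((re) => re.test(f)));
}

// In CI, this list would come from `npm pack --dry-run --json`.
const tarball = ["dist/cli.js", "dist/cli.js.map", "package.json"];
const leaks = findForbiddenFiles(tarball);
if (leaks.length > 0) {
  console.error(`Refusing to publish, found: ${leaks.join(", ")}`);
  // in CI: process.exit(1);
}
```

Wired into the pipeline before `npm publish`, a check like this would have caught a 59.8MB .map file long before it reached the registry.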
Epilogue
Today is April 1, 2026—April Fools’ Day. But this is not a joke.
Anthropic made the same mistake twice within 13 months. The source code has been mirrored around the globe, and DMCA takedown requests cannot keep up with the speed of the forks. A product roadmap that was supposed to stay on an internal network is now reference material for everyone.
For Anthropic, this is a painful lesson.
For the entire industry, it is an accidental moment of transparency, one that shows exactly how today's most advanced AI programming Agents are built, line by line.