- Anthropic accidentally leaked Claude Code's internal source code by publishing an npm package containing a source map that allowed the reconstruction of approximately 512,000 lines of code.
- The leak reveals the agent's complete architecture, secret modules like Buddy, Kairos, and Ultraplan, and references to unannounced models like Capybara/Fennec/Numbat.
- The company claims that no customer data was exposed, but the community warns of new attack vectors and the difficulty of removing the material once it has been replicated on mirrors and decentralized platforms.
- The leak reopens the legal and security debate about code written in part by AI, the effectiveness of the DMCA, and the risks for European companies that integrate code agents into their daily operations.

In a matter of hours, thousands of developers and security experts from around the world, including Europe, downloaded, analyzed, and began dissecting the material. Beyond the immediate impact on Anthropic, the incident opens an uncomfortable debate about security, intellectual property, and systemic risks in a sector that moves at breakneck speed.
How Claude Code was leaked and what was actually exposed
The problem originated with an update to Claude Code (version 2.1.88) published in the npm registry under the package @anthropic-ai/claude-code. Among the included files, a JavaScript source map of almost 60 MB slipped in: a debug file that, used correctly, allows the original code to be rebuilt from the minified bundle that reaches the end user.
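The mechanics of the exposure are mundane: a source map (revision 3 of the format) is plain JSON, and when its optional "sourcesContent" field is populated, it embeds the full text of every original file. The sketch below, with fabricated file names and contents for illustration, shows how little effort recovery takes once such a file is published.

```python
import json
from pathlib import Path

def extract_sources(map_text: str, out_dir: str = "recovered") -> list[str]:
    """Dump any original files embedded in a source map's sourcesContent."""
    sm = json.loads(map_text)
    recovered = []
    for name, content in zip(sm.get("sources", []), sm.get("sourcesContent") or []):
        if content is None:
            continue  # this entry carries only mappings, not embedded text
        dest = Path(out_dir) / name.lstrip("./")
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content, encoding="utf-8")
        recovered.append(name)
    return recovered

# Tiny fabricated example of what such a map looks like:
demo_map = json.dumps({
    "version": 3,
    "sources": ["src/agent/core.ts"],
    "sourcesContent": ["export const engine = () => 'orchestrate';"],
    "mappings": "AAAA",
})
print(extract_sources(demo_map))  # → ['src/agent/core.ts']
```

No special tooling is needed: if the `.map` file ships with the package, anyone who downloads it can reconstruct the original tree this way.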
Researcher Chaofan Shou, of Solayer Labs, was among the first to detect the file and report it on X early in the morning (US East Coast time). By the time Anthropic removed the package from npm, the content had already been replicated across the internet: approximately 512,000 lines of code spread across roughly 1,900 files, corresponding to the core of the agent and multiple internal modules.
In a statement sent to specialized media outlets, the company insisted that no customer data or sensitive credentials were compromised. The firm attributes the incident to a simple human error in packaging configuration and maintains that it was not an external intrusion but an internal failure in the deployment cycle.
However, several independent analyses suggest that the scope of the leak goes far beyond a single slip-up. In parallel with this incident, media outlets such as Fortune had already reported thousands of exposed documents, including references to advanced models under development and internal drafts, painting a picture of operational fragility in the management of critical assets.
Claude Code's architecture: a much more complex agent than it seemed
Initial reviews of the leaked material confirm that Claude Code is far from a simple "chat for the terminal." The product relies on a central engine of over 46,000 lines that orchestrates all interactions with Anthropic's models, managing actions, internal commands, memory, and coordination between agents.
Specifically, the leak documents around 40 types of actions the agent can perform (reading and writing files, executing commands, editing code, managing repositories, etc.) and some 85 specialized internal commands for tasks such as change review, Git integration, or project restructuring. This structure helps explain why many developers described Claude Code as an "autonomous worker" rather than a sophisticated autocomplete feature.
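A common way to route dozens of action types through one engine is a registry that maps action names to handlers. The sketch below illustrates that general pattern only; the action names (`read_file`, `run_command`) and the dispatch API are invented for illustration, not reproduced from the leaked code.

```python
from typing import Callable

# Registry mapping action names to handler functions.
ACTIONS: dict[str, Callable[..., str]] = {}

def action(name: str):
    """Decorator registering a handler under an action name."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        ACTIONS[name] = fn
        return fn
    return register

@action("read_file")
def read_file(path: str) -> str:
    # Stub: a real handler would open and return the file's contents.
    return f"<contents of {path}>"

@action("run_command")
def run_command(cmd: str) -> str:
    # Stub: a real handler would execute the command in a sandbox.
    return f"<output of {cmd!r}>"

def dispatch(name: str, **kwargs) -> str:
    """Route a requested action to its registered handler."""
    if name not in ACTIONS:
        raise ValueError(f"unknown action: {name}")
    return ACTIONS[name](**kwargs)

print(dispatch("read_file", path="README.md"))  # → <contents of README.md>
```

With this shape, adding an action is one decorated function, which is one plausible reason an agent can grow to dozens of action types without the central loop changing.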
The leak also reveals a multi-layered memory strategy designed to keep long sessions coherent without the agent getting lost. Among the visible elements, a memory index (MEMORY.md) with thematic references stands out, along with an on-demand retrieval system by topic and a strict writing discipline that seeks to minimize hallucinations and inconsistencies over time.
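The idea of an index plus on-demand retrieval can be sketched simply: a small index file maps topics to note files, and only the notes relevant to the current task are pulled into context. MEMORY.md is the name that appears in the leak, but the index format and helper functions below are speculative, invented purely to illustrate the concept.

```python
# Hypothetical topic→file index, loosely in the spirit of MEMORY.md.
INDEX = """\
auth: notes/auth.md
build: notes/build.md
testing: notes/testing.md
"""

def parse_index(text: str) -> dict[str, str]:
    """Parse 'topic: path' lines into a lookup table."""
    entries = {}
    for line in text.splitlines():
        topic, _, path = line.partition(":")
        if path.strip():
            entries[topic.strip()] = path.strip()
    return entries

def recall(topics: list[str], index: dict[str, str]) -> list[str]:
    """Return only the note files relevant to the requested topics,
    so the context window isn't flooded with everything at once."""
    return [index[t] for t in topics if t in index]

idx = parse_index(INDEX)
print(recall(["build", "testing"], idx))  # → ['notes/build.md', 'notes/testing.md']
```

The point of the pattern is selectivity: the agent reads the cheap index on every turn but loads full notes only when a topic becomes relevant.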
This design, now laid bare, has become an unexpected architecture manual for European competitors and startups trying to build their own autonomous coding agents. Anthropic's practices, from task orchestration to context cleanup, are documented with a level of detail that would hardly have come to light without a leak of this magnitude.
Hidden features: Buddy, Kairos, Ultraplan and “Covert Mode”
If anything has generated headlines in the technical community, it is the as-yet-unannounced features referenced in the code. Perhaps the most striking is Buddy, a kind of Tamagotchi-style virtual pet integrated into the Claude Code interface.
The leaked files describe Buddy as a companion that would appear in a bubble next to the text box, available in 18 variants (duck, dragon, axolotl, capybara, ghost, and other creatures) with rarity levels and unique statistics, including such curious attributes as "debugging," "patience," "chaos," and "wisdom." The leaked rollout plan placed its debut between April 1 and 7, with a wider launch planned for the following weeks.
Another key piece is Kairos, described as an always-active daemon that doesn't wait for the user to type. The module runs in the background, records events and decisions throughout the day, and performs a "sleep" or memory-consolidation process overnight to clear contradictions and rearrange the context ahead of the next working session.
The code also reveals Ultraplan, a mode that enables intensive cloud-based planning sessions of up to 30 minutes, designed for complex architecture tasks or deep refactoring. Alongside it sits a multi-agent coordinator mode, which allows one instance of Claude to manage multiple specialized copies working in parallel on the same project.
Among the most talked-about curiosities is the so-called "Undercover Mode," or covert mode: a complete subsystem designed to prevent Claude from accidentally leaking internal project names, model aliases, and confidential references when contributing to open-source repositories. Some of the system prompts included explicit instructions such as "Do not reveal your identity," which in forums like Hacker News led to jokes about an AI programmed to remain incognito.
Internal models: Capybara, Fennec, Numbat and the ghost of Mythos
Along with the tool's hidden features, the leak surfaces references to Anthropic models not yet publicly announced. Among the code names appearing in the code, Capybara, Fennec, and Numbat stand out, linked, according to the fragments analyzed, to variants of the Claude 4.6 family.
Capybara is listed as a specific Claude 4.6 variant, presumably geared toward coding and security tasks; Fennec is associated with a version internally named Opus 4.6; while Numbat appears to be a model still in the testing phase, without a clearly specified end use. These names align with previous leaks that pointed to an advanced model internally named Mythos, which some documents linked to deep reasoning and critical-infrastructure analysis capabilities.
US media reports have described Mythos/Capybara as a system capable of detecting vulnerabilities in source code at scale, designing exploits, and executing attack strategies almost autonomously, to the point that Anthropic reportedly decided to pause or slow down its commercial rollout out of fear of cybersecurity risks. The Claude Code leak, which alludes to these models, reinforces the perception that the company is walking a fine line between innovation and caution.
For Europe and Spain, where technology regulation and the protection of critical infrastructure are particularly sensitive issues, the idea of agents with such offensive capabilities is no small matter. EU regulators already monitor major foundation models closely, and episodes like this could fuel the debate over specific restrictions on models with cyberattack potential.
Legal consequences: DMCA, Python rewrites, and the limbo of AI-generated code
Once the problem was detected, Anthropic began sending DMCA takedown notices to GitHub repositories and other mirrors that directly hosted the original files. Several of those copies were quickly removed, but not all community initiatives followed that pattern.
A Korean developer known as Sigrid Jin decided to take advantage of the window during which the code was accessible to rewrite the core architecture in Python from scratch, using an AI orchestration tool called oh-my-codex. The result was an alternative project, nicknamed claw-code, which replicated the logic of Claude Code in another language, with technically new code.
From a legal standpoint, this move presents an awkward scenario: if the new codebase is a derivative work written from scratch, without literally copying the original text, to what extent can a DMCA takedown notice force its removal? Voices like Gergely Orosz, author of the newsletter The Pragmatic Engineer, have described the maneuver as "brilliant or terrifying," precisely because of its apparent resistance to traditional content-removal tools.
The debate becomes even more complicated when considering that part of the original Claude Code codebase was reportedly generated by Anthropic's own models. US courts have recently moved toward the idea that content created exclusively by AI does not automatically enjoy the same protection as human works, which could weaken copyright claims over synthetically generated segments.
Added to all this is the decentralized-infrastructure dimension: copies of the original code have reportedly ended up on platforms like Gitlawb, built on distributed networks with no single point of control. In such environments, DMCA notices lose much of their effectiveness: the question is no longer whether a file can be removed, but how many mirrors exist and on which systems they reside.
Security risks for users and companies: from architecture internals to the supply chain
Beyond the reputational embarrassment, the biggest practical effect of the incident is that Claude Code's internal logic is now public. This allows legitimate researchers to learn from its design, but it also gives potential attackers a very detailed map of how the agent behaves, what assumptions it makes, and where weaknesses might lie.
The risks mentioned include the possibility of exploiting internal action flows, abusing poorly documented commands, or crafting specific input patterns to force the agent into unforeseen operations. Several experts have emphasized that knowing the sequence of checks, filters, and validations in detail makes it easier for those trying to bypass them.
At the same time, the case has coincided with problems in the software supply chain, such as malicious versions of widely used dependencies (for example, infected variants of axios or encryption libraries). The combination of a widespread agent whose code is exposed and an ecosystem of sometimes poorly audited dependencies paints a complex picture for cybersecurity teams in European companies and public administrations.
The advice repeated among professionals includes avoiding installing or updating Claude Code via npm for a while; in sensitive environments, migrating where possible to native installers controlled by the vendor itself; and meticulously reviewing the specific dependency versions identified in the technical reports. Measures such as credential rotation, stricter permissions, and closer monitoring of agent activity have become almost mandatory recommendations.
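The "review specific versions" step can be partly automated: compare the versions recorded in a project's package-lock.json against an explicit allowlist and flag anything unexpected. The sketch below shows the idea; all package names and version numbers are fabricated examples, not real advisories.

```python
import json

# Hypothetical allowlist of vetted package versions (fabricated examples).
ALLOWED = {"left-pad": "1.3.0", "some-http-lib": "2.0.1"}

def unexpected_versions(lock_text: str, allowed: dict[str, str]) -> list[str]:
    """Flag locked packages whose version differs from the vetted one."""
    lock = json.loads(lock_text)
    problems = []
    for name, meta in lock.get("packages", {}).items():
        short = name.removeprefix("node_modules/")
        if not short:
            continue  # the root project entry has an empty key
        pinned = allowed.get(short)
        if pinned is not None and meta.get("version") != pinned:
            problems.append(f"{short}: {meta.get('version')} != {pinned}")
    return problems

# Fabricated lockfile fragment in npm's v2/v3 "packages" layout:
demo_lock = json.dumps({
    "packages": {
        "": {"name": "demo"},
        "node_modules/left-pad": {"version": "1.3.0"},
        "node_modules/some-http-lib": {"version": "9.9.9"},
    }
})
print(unexpected_versions(demo_lock, ALLOWED))  # → ['some-http-lib: 9.9.9 != 2.0.1']
```

Run against the real lockfile in CI, a check like this turns "meticulous review" into a failing build whenever a dependency drifts from a vetted version.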
This type of incident has also raised alarms in IT departments that were already concerned about the massive, unsupervised use of AI tools. Recent reports indicate that around 70% of employees use some type of AI system at work, often without the IT department's knowledge, which increases the attack surface and the unintentional exposure of corporate code and data.
Lessons for Spain and Europe: speed, control and independent evaluation
The Claude Code episode comes at a time when many Spanish and European companies are incorporating coding agents into their development workflows: large banks and insurers, as well as consultancies, startups, and government agencies. The promise of productivity is clear, but the Anthropic case illustrates that security risks and regulatory compliance are no minor detail.
Laboratories such as F5 Labs have started publishing tracking dashboards (AI Security Leaderboards) that periodically evaluate the main AI models and agents based on their exposure to real attacks and data leaks. Tools such as the Comprehensive AI Security Index (CASI) and the Agentic Resilience Score (ARS) seek to give security officers independent technical metrics, away from marketing noise.
In that context, Anthropic's stumble is read as an example of how the pressure to ship increasingly advanced products can overshadow basic controls such as packaging reviews, access policies, dependency audits, and pre-deployment review procedures. For organizations operating under European regulations, from the AI Act to the NIS2 framework, these aspects are no longer optional.
The leak also prompts a rethinking of the relationship between AI providers and corporate clients. Companies that integrate agents like Claude Code into critical development chains are beginning to demand not only performance promises but also clear commitments on governance, transparency, and incident response. Episodes like this are expected to push contracts toward stricter clauses on failure notification, external audits, and the handling of sensitive code.
For the European AI startup ecosystem, the case serves as an unintentional reference point: many of the techniques that made Claude Code a leading product are now exposed, but so are the costs of a seemingly minor error. Anyone who wants to compete in this field will have to learn from both the architecture and the process failures that have come to light.
Taken together, the Claude Code leak shows the extent to which programming agents have become critical infrastructure for the digital economy. A single poorly packaged file has revealed product plans, internal models, memory strategies, and secret modes, while triggering legal, security, and regulatory debates that directly affect Europe. For companies, developers, and regulators, the episode is a reminder that the AI race isn't just about who launches the next spectacular feature first, but about who can combine that speed with tight control over code that has already become the backbone of daily work.




