Kairos, Buddy, Mythos & More: Every Hidden Feature Found in Claude Code's Leaked Source
At a Glance: The March 31, 2026 Claude Code source leak exposed a treasure trove of unreleased features: an always-on autonomous agent called Kairos, a virtual pet system called Buddy, a multi-agent Coordinator Mode, internal model codenames (Capybara, Fennec, Tengu, Numbat), and the powerful Mythos model. Here's what each feature does and what it means for the future of AI. Updated April 2026.
Key Takeaways
- Kairos is an unreleased autonomous daemon that runs Claude Code as a persistent background agent with "autoDream" memory consolidation.
- Buddy is a complete tamagotchi-style virtual pet system with 18 species, rarity tiers, shiny variants, and progression stats.
- Coordinator Mode enables multi-agent orchestration where a master Claude spawns parallel workers for complex tasks.
- Undercover Mode strips AI attribution from commits when Anthropic employees work on public repositories — with no off switch.
- Mythos (leaked separately on March 26) is described by Anthropic as a "step change" model with "unprecedented cybersecurity risks."
- Internal model codenames include Capybara, Fennec, Tengu, and Numbat.
- Anti-distillation injects fake tool definitions to poison model training data from API traffic capture.
Kairos: The Always-On AI Agent
Kairos is the most significant unreleased feature found in the leak. It represents Anthropic's vision for AI coding assistants evolving from reactive tools into persistent, autonomous agents.
How Kairos Works
Unlike standard Claude Code, which responds to explicit user commands, Kairos operates as a daemon process — a background service that runs continuously:
- Background sessions: Kairos maintains persistent sessions even when the user isn't actively interacting with the terminal.
- autoDream: While the user is idle, Kairos performs "memory consolidation" — a process that merges disparate observations, removes logical contradictions, and converts vague insights into structured, actionable facts.
- Autonomous monitoring: Kairos can watch for events (file changes, build failures, test results) and respond without being prompted.
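The daemon loop described above can be sketched roughly as follows. This is our reconstruction, not code from the leak: the class name, idle threshold, and the naive dedupe-and-merge logic standing in for autoDream's LLM-driven consolidation are all assumptions.

```python
import time

IDLE_THRESHOLD_S = 300  # assumed idle window before consolidation kicks in


class KairosDaemon:
    """Hypothetical sketch of an always-on agent loop (names are ours)."""

    def __init__(self):
        self.last_activity = time.time()
        self.observations = []  # raw notes gathered during active sessions
        self.facts = []         # consolidated, structured memory

    def record(self, observation: str) -> None:
        """Store a raw observation and reset the idle timer."""
        self.observations.append(observation)
        self.last_activity = time.time()

    def is_idle(self, now: float) -> bool:
        return now - self.last_activity > IDLE_THRESHOLD_S

    def auto_dream(self) -> None:
        # Stand-in for consolidation: merge duplicate observations into
        # stable facts. The real feature reportedly also resolves
        # contradictions, which would need model calls, not set logic.
        seen = set()
        for obs in self.observations:
            if obs not in seen:
                seen.add(obs)
                self.facts.append(obs)
        self.observations.clear()
```

In a real daemon, `auto_dream` would run on a timer whenever `is_idle` returns true, alongside the event watchers for builds and tests.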
Why Kairos Matters
Kairos signals a fundamental shift in how AI coding tools work. Today's tools — including Claude Code, Cursor, and GitHub Copilot — are request-response systems: you ask, they answer. Kairos turns that model inside out.
An always-on agent that consolidates memory and acts autonomously is essentially an AI team member that never sleeps. This is the same philosophy behind autonomous AI agent platforms like Fleece AI, which already lets teams deploy agents that run on schedules, monitor systems, and execute workflows 24/7 across 3,000+ apps.
The difference: Kairos is focused on code. Fleece AI agents automate everything else — email, CRM, project management, Slack, and 3,000+ more tools.
Buddy: The Virtual Pet Nobody Expected
The most surprising discovery in the leak was a fully built virtual pet system called Buddy. This isn't a placeholder or a prototype — it's a complete feature:
| Feature | Details |
|---|---|
| Species | 18 different creatures |
| Rarity | Tiered rarity system |
| Shiny Variants | Rare visual variants (Pokémon-style) |
| Stats | Progression and stats tracking |
| Interaction | Tamagotchi-style care mechanics |
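The table above implies a fairly conventional pet data model. As a hedged illustration (the field names, rarity tiers, shiny odds, and stat values here are invented, not taken from the leaked source), it might look like this:

```python
import random
from dataclasses import dataclass, field

RARITY_TIERS = ["common", "uncommon", "rare", "legendary"]  # tier names are our guess
SHINY_ODDS = 1 / 512  # illustrative odds, not from the leak


@dataclass
class Buddy:
    species: str  # one of the 18 leaked species
    rarity: str = "common"
    shiny: bool = False
    stats: dict = field(
        default_factory=lambda: {"level": 1, "xp": 0, "happiness": 50}
    )

    def feed(self) -> None:
        """Tamagotchi-style care: feeding raises happiness and grants XP."""
        self.stats["happiness"] = min(100, self.stats["happiness"] + 10)
        self.stats["xp"] += 5


def hatch(species: str) -> Buddy:
    """Create a new pet, rolling for a rare shiny variant."""
    return Buddy(species=species, shiny=random.random() < SHINY_ODDS)
```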
What's the Point?
While it might seem frivolous, gamification is a proven engagement strategy in software products: GitHub has contribution streaks and achievement badges, Duolingo has its streak-guarding owl, and Visual Studio Code has themes and customization.
Buddy appears designed to make extended Claude Code sessions more engaging — a persistent companion that grows alongside your usage. Whether this ships as a serious feature or an Easter egg remains to be seen.
Coordinator Mode: Multi-Agent Orchestration
Coordinator Mode is arguably the most architecturally significant feature in the leak. It transforms Claude Code from a single-agent tool into a multi-agent system.
The Four-Phase Architecture
- Decomposition: A master Claude instance receives a complex task and breaks it into independent subtasks.
- Parallel Research: Worker agents explore the codebase simultaneously, each handling a different area.
- Planning: The Coordinator reads all worker findings and crafts detailed implementation specifications.
- Implementation & Verification: Workers implement changes per spec, commit code, and verify correctness.
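The four phases can be sketched as a small orchestration loop. This is a schematic reconstruction under our own assumptions: the function names are invented, and stub strings stand in for actual Claude sessions doing research.

```python
from concurrent.futures import ThreadPoolExecutor


def decompose(task: str) -> list[str]:
    # Phase 1 stand-in: the master instance would break the task into
    # independent subtasks; here we fabricate three fixed slices.
    return [f"{task}: research area {i}" for i in range(3)]


def worker(subtask: str) -> dict:
    # Phase 2 stand-in: each worker would be a separate agent exploring
    # one slice of the codebase; here it just returns a stub finding.
    return {"subtask": subtask, "finding": f"notes on {subtask}"}


def coordinate(task: str) -> list[dict]:
    """Decompose, fan out to parallel workers, and collect findings."""
    subtasks = decompose(task)
    with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
        findings = list(pool.map(worker, subtasks))
    # Phases 3-4 would follow: write implementation specs from the
    # findings, then dispatch workers again to implement and verify.
    return findings
```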
A Mailbox-Based Communication System
Workers communicate with the Coordinator through a mailbox system — structured message passing rather than shared state. This prevents workers from interfering with each other and allows the Coordinator to synthesize conflicting findings.
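Message passing without shared state is easy to picture with per-agent inboxes. A minimal sketch, assuming one queue per registered agent (the class and method names are ours, not from the leaked code):

```python
import queue


class Mailbox:
    """Each agent owns an inbox; agents interact only by posting messages."""

    def __init__(self):
        self.inboxes: dict[str, queue.Queue] = {}

    def register(self, agent: str) -> None:
        self.inboxes[agent] = queue.Queue()

    def send(self, to: str, msg: dict) -> None:
        # Structured message passing: the sender never touches the
        # recipient's state, only its inbox.
        self.inboxes[to].put(msg)

    def drain(self, agent: str) -> list[dict]:
        """Collect all pending messages, e.g. worker findings."""
        out = []
        q = self.inboxes[agent]
        while not q.empty():
            out.append(q.get())
        return out
```

The coordinator would `drain` its inbox after each phase and reconcile conflicting findings before writing specs.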
Multi-Agent Is the Future
This confirms what the AI industry has been moving toward: single-agent tools can't handle complex, multi-file tasks reliably. Multi-agent orchestration — where specialized agents collaborate under a coordinator — produces dramatically better results.
This is exactly the architecture behind Fleece AI's inter-agent delegation system, where manager agents delegate tasks to sub-agents, collect reports, and even auto-improve their team's prompts. The difference is that Fleece AI's multi-agent system works across business workflows, not just code.
Undercover Mode: The Stealth Feature
Undercover Mode is designed for Anthropic employees contributing to open-source projects using Claude Code. When activated (via CLAUDE_CODE_UNDERCOVER=1, auto-enabled for Anthropic staff), the system prompt injects:
"You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."
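The activation logic, as described, reduces to a one-way check. This sketch is our reconstruction of that behavior, not the leaked implementation:

```python
import os


def undercover_enabled(is_anthropic_staff: bool) -> bool:
    """Reconstruction of the reported activation logic.

    CLAUDE_CODE_UNDERCOVER=1 forces the mode ON; per the leak, there is
    no corresponding value that forces it OFF for Anthropic staff.
    """
    if os.environ.get("CLAUDE_CODE_UNDERCOVER") == "1":
        return True
    return is_anthropic_staff
```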
The Controversy
Two aspects sparked significant debate:
- No off switch: You can force Undercover Mode ON, but there's no environment variable to force it OFF. This means AI-authored code from Anthropic employees in open-source projects will never be labeled as AI-generated.
- Transparency concerns: In an era where the industry is debating AI attribution in code, open-source contributions, and scientific papers, a built-in mechanism to hide AI involvement raised ethical questions.
What It Protects
The mode specifically prevents leaking:
- Internal model codenames (Capybara, Tengu, Fennec, Numbat)
- Unreleased model version numbers
- Anthropic-internal architecture details
- Feature flag names and testing configurations
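A plausible enforcement mechanism for the codename leak protection is a simple redaction pass over outgoing commit messages and PR text. This is an illustrative sketch only; the actual leaked code enforces this through system-prompt instructions, not necessarily a post-hoc filter:

```python
import re

INTERNAL_CODENAMES = ["Capybara", "Tengu", "Fennec", "Numbat"]

# Case-insensitive pattern covering every known internal codename.
CODENAME_PATTERN = re.compile("|".join(INTERNAL_CODENAMES), re.IGNORECASE)


def scrub(text: str) -> str:
    """Redact internal codenames from a commit message or PR body."""
    return CODENAME_PATTERN.sub("[redacted]", text)
```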
Claude Mythos: The "Step Change" Model
Leaked five days before the Claude Code source exposure, Claude Mythos (believed to be Opus 5) was revealed through an entirely separate incident.
How Mythos Leaked
On March 26, 2026, security researchers discovered that a configuration error in Anthropic's content management system had made ~3,000 unpublished assets publicly accessible. Among them: a draft blog post describing Mythos.
What We Know About Mythos
- Anthropic describes it as representing "a step change" in AI performance
- It is reportedly "the most capable model Anthropic has built to date"
- The draft blog post states Mythos poses "unprecedented cybersecurity risks"
- It was being trialed by "early access customers" at the time of the leak
- The cybersecurity risk designation suggests Mythos may have capabilities in code generation, vulnerability discovery, or system exploitation that exceed current models
Two Leaks, Five Days
The Mythos leak (March 26) followed by the Claude Code source leak (March 31) drew intense scrutiny. For a company with $19 billion in annualized revenue and a $60B+ valuation, two major security lapses in under a week raised serious questions about operational security practices.
Internal Model Codenames Explained
The leaked source code references several internal codenames that Anthropic uses for different model variants:
| Codename | Known Mapping | Status |
|---|---|---|
| Capybara | Claude 4.6 variant | Active |
| Fennec | Opus 4.6 | Active |
| Tengu | Internal testing model | Active |
| Numbat | Unreleased model | In testing |
| Mythos | Opus 5 (separate leak) | Early access |
The codebase contains explicit instructions to never mention these codenames in public-facing outputs — part of the Undercover Mode system that prevents internal terminology from appearing in git commits, PR descriptions, or user-facing messages.
Anti-Distillation: Poisoning Competitor Training Data
The ANTI_DISTILLATION_CC feature flag activates a defense mechanism against model distillation attacks — where a competitor records Claude Code's API requests and responses to train a cheaper clone model.
How It Works
- Claude Code sends `anti_distillation: ['fake_tools']` in API requests
- The server injects decoy tool definitions into the system prompt
- These fake tools look legitimate but don't correspond to real functionality
- If someone captures the API traffic to train another model, the fake tools corrupt that training data
This is a creative approach to a real problem: the cost of training frontier models is enormous, but distillation (training a smaller model on a larger model's outputs) can shortcut that investment. Anti-distillation mechanisms make the shortcut unreliable.
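The injection step can be pictured as follows. Everything here is invented for illustration: the tool names, the decoy definitions, and the shuffling strategy are our assumptions, not details from the leak.

```python
import random

# Real tool surface (truncated for the sketch).
REAL_TOOLS = [{"name": "read_file", "description": "Read a file from disk"}]

# Decoy definitions: plausible-looking, but wired to nothing.
FAKE_TOOLS = [
    {"name": "sync_vector_cache", "description": "Flush the local embedding cache"},
    {"name": "get_build_matrix", "description": "Return the CI build matrix config"},
]


def build_tool_list(anti_distillation: bool) -> list[dict]:
    """Assemble the tool definitions sent with each request."""
    tools = list(REAL_TOOLS)
    if anti_distillation:
        # Mix decoys in so captured API traffic misrepresents the real
        # tool surface to anyone training a clone on it.
        decoys = FAKE_TOOLS[:]
        random.shuffle(decoys)
        tools.extend(decoys)
    return tools
```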
Frustration Detection: Regex Sentiment Analysis
The file userPromptKeywords.ts contains a regex-based frustration detector that matches profanity and frustration-related keywords in user messages. When triggered, Claude Code presumably adjusts its behavior — becoming more careful, apologetic, or helpful.
The irony of an LLM company using regex for sentiment analysis wasn't lost on the community. But the pragmatic reasoning makes sense: a regex check is instant and free, while routing every message through an LLM classifier would add latency and cost to every interaction.
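A detector in this style is a few lines of regex. The keyword list below is illustrative; the real `userPromptKeywords.ts` has its own (and is TypeScript, not Python):

```python
import re

# Illustrative frustration keywords; the leaked file defines its own list.
FRUSTRATION_PATTERN = re.compile(
    r"\b(wtf|ugh|broken again|why won'?t|stupid|this sucks)\b",
    re.IGNORECASE,
)


def is_frustrated(message: str) -> bool:
    """Cheap, zero-latency sentiment check on a user prompt."""
    return bool(FRUSTRATION_PATTERN.search(message))
```

The trade-off is exactly as the article frames it: a regex misses sarcasm and context an LLM classifier would catch, but it costs nothing per message.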
What This Tells Us About the Future of AI
The features found in Claude Code's leaked source paint a clear picture of where AI development tools are heading:
- Always-on agents (Kairos) will replace request-response interfaces
- Multi-agent orchestration (Coordinator Mode) will handle complex tasks that single agents cannot
- Persistent memory (autoDream) will make AI tools contextually aware over long time horizons
- Anti-competitive defenses (anti-distillation) will become standard as model copying becomes easier
- Gamification (Buddy) will drive engagement and retention in developer tools
These trends aren't limited to coding tools. AI agent platforms across every domain are converging on the same architecture: autonomous agents, multi-agent collaboration, persistent memory, and scheduled execution.
Frequently Asked Questions
What is Claude Code Kairos?
Kairos is an unreleased feature in Claude Code that enables an "always-on" autonomous daemon mode. It allows Claude Code to run as a persistent background agent with background sessions and a memory consolidation process called autoDream.
What is Buddy in Claude Code?
Buddy is a virtual pet system found in the leaked Claude Code source code. It features 18 species, rarity tiers, shiny variants, and tamagotchi-style interaction mechanics. It appears to be a gamification feature for developer engagement.
What is Claude Mythos?
Claude Mythos is an unreleased Anthropic model (believed to be Opus 5) that was revealed through a separate data leak on March 26, 2026. Anthropic describes it as a "step change" in capability and their "most capable model to date," with "unprecedented cybersecurity risks."
What are Capybara, Fennec, Tengu, and Numbat?
These are internal codenames used by Anthropic for different Claude model variants. Capybara maps to a Claude 4.6 variant, Fennec to Opus 4.6, Tengu to an internal testing model, and Numbat to an unreleased model still in testing.
What is Claude Code Coordinator Mode?
Coordinator Mode is a multi-agent feature that enables a master Claude instance to spawn parallel worker agents. The coordinator breaks complex tasks into subtasks, delegates them to workers, and synthesizes the results — enabling parallel codebase research and implementation.
What is Undercover Mode in Claude Code?
Undercover Mode is automatically activated for Anthropic employees working on public/open-source repositories. It instructs Claude Code to strip any AI attribution, internal model codenames, and Anthropic-specific details from commit messages and PR descriptions.
The Bottom Line
The Claude Code leak gave the world an unprecedented look inside how Anthropic is building the next generation of AI development tools. Kairos, Coordinator Mode, and autoDream aren't just features — they're a roadmap for the entire AI agent industry.
For teams that want autonomous, multi-agent AI capabilities today — not as unreleased features buried in leaked code — Fleece AI delivers agent hierarchies, scheduled workflows, and 3,000+ integrations in a production-ready platform.
Start free on Fleece AI — autonomous AI agents that work while you sleep.