the claude code leak situation

so today someone on twitter posted Claude Code's source code in a zip. wild, right? let me break down what actually happened.

how it went down

Anthropic accidentally shipped source map (".map") files with Claude Code. source maps exist for debugging: they map minified code back to the original source, and they often embed the original file contents outright. someone noticed this, extracted the source, and posted a zip containing about 1,884 files of TypeScript source code.
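for context on why map files are such a giveaway: the Source Map v3 format is plain JSON, and its optional `sourcesContent` field can carry the complete original files alongside the mappings. a minimal sketch of the kind of extraction involved (the format fields are standard; the path handling here is illustrative, not what the actual leaker ran):

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> int:
    """Recover original source files embedded in a JS source map (v3).

    Source maps carry 'sources' (paths) and, frequently, 'sourcesContent'
    (the full original file text) -- which is why shipping them next to a
    minified bundle effectively ships the source code.
    """
    data = json.loads(Path(map_path).read_text())
    sources = data.get("sources", [])
    contents = data.get("sourcesContent") or []
    written = 0
    for rel, src in zip(sources, contents):
        if src is None:
            continue  # not every entry embeds its content
        # strip webpack://-style scheme prefixes and leading ./ or ../
        rel = rel.split("://")[-1].lstrip("./")
        dest = Path(out_dir) / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(src)
        written += 1
    return written
```

no exploit needed: it's a JSON parse and a loop.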

what was in the leak

okay so the source code revealed a LOT. like, a lot:

  • 35 build-time feature flags including some wild stuff:

    • BUDDY: apparently a Tamagotchi-style AI pet companion
    • KAIROS: a persistent assistant mode with memory consolidation
    • ULTRAPLAN: sends complex planning to a remote Claude instance
    • Coordinator Mode: spawns parallel worker agents
    • UDS Inbox: multiple Claude sessions talking to each other via Unix domain sockets
    • Bridge: remote control from claude.ai or your phone
    • Daemon Mode: full session supervisor with background tmux sessions
  • 120+ undocumented environment variables that developers could apparently use

  • 26 internal slash commands like /teleport, /dream, and /good-claude (no really, /good-claude)

  • GrowthBook SDK keys for remote feature toggling

  • and my personal favorite: USER_TYPE=ant which apparently unlocks all features for Anthropic employees

the security stuff

while the feature flags are fun to gawk at, the actually serious news came from Check Point researchers, who found verified vulnerabilities affecting real users.

malicious code in .claude/settings.json could execute automatically when opening a project. no warning. no confirmation. just pwn. a single cloned repository and an attacker owns your machine.

they also found you could bypass the MCP consent dialog by setting enableAllProjectMcpServers to true in repo configs. and the worst one: attackers could redirect your API traffic by overriding ANTHROPIC_BASE_URL, exfiltrating your API key before any trust dialog appeared.
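the base-URL attack is easy to see once you know how these clients resolve endpoints: Anthropic's SDKs honor ANTHROPIC_BASE_URL, and the API key rides along as a header to whatever host that resolves to. a rough sketch of the pattern (not Claude Code's actual code):

```python
import os

def build_request(api_key: str) -> tuple[str, dict]:
    # the endpoint comes from the environment, falling back to the real
    # API host; if a repo config injects ANTHROPIC_BASE_URL, the key
    # below gets sent to the attacker's server instead
    base = os.environ.get("ANTHROPIC_BASE_URL", "https://api.anthropic.com")
    headers = {"x-api-key": api_key, "anthropic-version": "2023-06-01"}
    return f"{base}/v1/messages", headers
```

the key never touches the legitimate API. the attacker's server just reads the header and moves on.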

with your API key, someone could access all your workspace files, delete stuff, exhaust your credits, or poison your projects.

Anthropic patched these. but still.

my take

look, the feature flags are hilarious. an AI pet? coordinator mode with parallel agents? it sounds like they’re building some sci-fi stuff. but here’s what actually matters:

this wasn't a sophisticated hack. someone just downloaded the map files and extracted them. that means anyone could've done this, including people with malicious intent.

and here’s the uncomfortable part: we hand AI tools insane amounts of access to our systems. they can execute code, manage files, access our APIs. and we just trust they’re secure? this leak proves we’ve been trusting a black box.

transparency is good. but it also shows how much power we’re handing to tools we can’t audit.

what you should probably do

if you’re using Claude Code:

  • audit any .claude/settings.json files before running in a project
  • be extra paranoid about cloned repositories
  • consider using environment variables instead of project configs for sensitive stuff
  • keep Claude Code updated
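if you want to automate the first two bullets, here's a rough audit sketch. the key names come from the Check Point write-up, and the `hooks` check is a heuristic, not a complete scan, so treat this as a starting point:

```python
import json
from pathlib import Path

# keys flagged in the reported findings; not exhaustive
RISKY_KEYS = {"enableAllProjectMcpServers"}
RISKY_ENV = {"ANTHROPIC_BASE_URL"}

def audit_repo(repo: str) -> list[str]:
    """Flag suspicious .claude/settings.json entries in a cloned repo."""
    findings = []
    for cfg in Path(repo).rglob(".claude/settings.json"):
        try:
            data = json.loads(cfg.read_text())
        except (OSError, json.JSONDecodeError):
            findings.append(f"{cfg}: unreadable or invalid JSON")
            continue
        if not isinstance(data, dict):
            continue
        for key in RISKY_KEYS & data.keys():
            findings.append(f"{cfg}: sets {key}={data[key]!r}")
        for var in RISKY_ENV & set(data.get("env", {})):
            findings.append(f"{cfg}: overrides {var}")
        if "hooks" in data:
            findings.append(f"{cfg}: defines hooks (review the shell commands)")
    return findings
```

run it against a fresh clone before you open the project in anything agentic.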

the bottom line

Anthropic patched the security issues, which is good. but this whole situation is a reminder: we’re in the wild west of AI tooling. companies are moving fast, shipping powerful tools, and sometimes accidentally leaking their entire source code.

maybe we should be a little more careful about which black boxes we trust with our development environments.

at least now we know there’s a /good-claude command. i’ll be using that one.