
Welcome back. In our last issue, Anthropic rolled out new tools to protect your code. Now, the tables have turned. They just dropped a controversial report alleging that Chinese AI labs have been scraping Claude's reasoning data to train their own models.
Also: How to build a personal OS for your AI agents, the four skills developers need post AI, and what happens when you give OpenClaw too much access.
Today’s Insights
Powerful new updates and hacks for devs
How to turn AI anxiety into an engineering edge
How to run Claude Code sessions without conflicts
Trending social posts, top repos, and more
Welcome to The Code. This is a 2x weekly email that cuts through the noise to help devs, engineers, and technical leaders find high-signal news, releases, and resources in 5 minutes or less. You can sign up or share this email here.

TODAY IN PROGRAMMING

CEO of Anthropic, Dario Amodei. Made with Midjourney.
Anthropic accuses Chinese AI labs of stealing Claude's capabilities: The AI lab says DeepSeek, Moonshot, and MiniMax used about 24,000 fake accounts to generate more than 16 million exchanges with Claude, aiming to extract its advanced coding and reasoning skills. In response, Anthropic is strengthening developer verification and API security. However, the move has sparked a lot of pushback, with critics like Elon Musk pointing out the irony given Anthropic's own history with copyright issues.
OpenAI launches WebSockets to make AI agents run faster: The ChatGPT maker just shipped a new WebSocket mode for its Responses API, built for long-running workflows. Instead of re-sending the full conversation state with every request, the connection stays open and each turn transmits only the new data, which cuts per-response overhead. For workflows requiring 20 or more tool calls, OpenAI says this can cut total execution time by as much as 40 percent.
AI coding agents actually do better without instruction files: A new study shows that files like AGENTS.md and CLAUDE.md often backfire. In tests across four different agents, including Claude Code and Codex, LLM-generated instruction files actually lowered success rates and raised costs by more than 20 percent. The advice for developers: keep these files short or scrap them entirely, because these agents are already plenty good at finding their way around a codebase on their own.

INSIGHT
How to turn AI anxiety into an engineering edge
The fear has a name now. Engineers in San Francisco are watching AI rewrite their jobs in real time. A recent report on how developers are handling the shift found one junior dev at a big tech firm whose code is now entirely AI-written. In his words, he's just a proxy for Claude Code. Therapists are seeing more people coming in with that same heavy feeling, now common enough to have earned its own acronym: FOBO, the Fear of Becoming Obsolete.
The skill that matters has already shifted. Bloomberg Beta investor Amy Tam laid it out in a viral essay: the big question is no longer whether you can solve a problem, but whether you can tell which problems are actually worth solving and which AI outputs are actually any good. The engineers who caught on to this early are already pulling ahead. Everyone else is just getting faster at work that will eventually be automated.
A recent exchange captured this perfectly. When a developer pointed out that Claude Code writes 100% of its own codebase while Anthropic still has over 100 open engineering roles, Claude Code creator Boris Cherny didn't dodge the question. He argued someone still has to prompt the models, talk to customers, coordinate with other teams, and figure out what to build next. In other words, great engineers are more important than ever.
So what do you actually do about it? Cherny shared exactly how his team uses Claude Code to ship faster. Check out the full breakdown here and start building.

IN THE KNOW
What’s trending on socials and headlines

Meme of the day.
Runaway Agent: A post is going mega viral (8.1M views) after an OpenClaw user showed exactly what can go wrong when you give an AI agent too much access to your system.
Post-Code Career: An ex-Microsoft engineer just dropped the four skills developers need now that AI is handling the heavy lifting on code.
Context Layer: An AI engineer built a personal OS for AI agents so you never have to re-explain who you are, how you write, or what you're working on.
Startup Brain: Your notes can become your AI's memory. This thread breaks down how to turn Claude Code into a personal knowledge base for building your startup.
Idea to Shipped: An ex-Vercel engineer shared the exact AI coding workflow he uses to go from idea to deployed product (skills included).
Amazon’s Kiro AI coding agent causes a 13-hour AWS outage.
ElevenLabs ships Experiments to run controlled A/B tests on live agent traffic.
OpenAI rolls out Frontier Alliances to enterprises globally.

AI CODING HACK
How to run Claude Code sessions without conflicts
If you try running two Claude Code sessions in the same repo, they'll end up overwriting each other's files. Boris Cherny, the creator of Claude Code, shared a simple command to fix this.
Just add “--worktree” when you launch it. This creates an isolated git worktree so every session gets its own copy of the codebase:
claude --worktree

You can even name it to stay organized:

claude --worktree my-feature

To run it in the background while you keep working, add “--tmux” to launch it in a tmux terminal session:

claude --worktree --tmux

When you're finished, just merge the branch back like any other git branch. If you're building custom agents, you can add “isolation: worktree” to the agent frontmatter, and every subagent will spin up its own worktree automatically.
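Under the hood, this maps onto plain git worktree commands, so the merge-back step works like any other feature branch. Here's a minimal sketch of the equivalent manual workflow; the worktree path and branch name (my-feature) are chosen purely for illustration:

```shell
# Create an isolated checkout on a new branch
# (roughly what --worktree does for a session):
git worktree add ../repo-my-feature -b my-feature

# ... the Claude Code session edits files inside ../repo-my-feature ...

# When the session is done, merge back from the primary checkout:
git merge my-feature

# Clean up the extra checkout, then the branch:
git worktree remove ../repo-my-feature
git branch -d my-feature
```

Because each session lives on its own branch in its own directory, two sessions can touch the same files without clobbering each other; any conflicts surface at merge time instead. Note the order of the cleanup: git refuses to delete a branch that is still checked out in a worktree, so the worktree has to be removed first.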

TOP & TRENDING RESOURCES
Top Tutorial
How to set up OpenClaw as a developer: This tutorial shows devs how to set up OpenClaw safely on a private server using Tailscale and solid firewalls. It breaks down how to save money by using both Codex and Opus models, while showing you how to automate your daily accounting, research, and coding work without putting your personal data at risk.
Top Repo
Agent skills for context engineering: A comprehensive collection of context engineering skills for building production-ready AI agent systems. These skills teach the craft and science of managing context so your agents can perform their best, no matter which platform you’re using.
Trending Paper
How to measure frontier coding models (by OpenAI): The SWE-bench Verified coding benchmark is broken because sloppy test cases often toss out perfectly good solutions. This means high scores aren't really showing off true coding skills anymore; they're mostly just proof that AI models memorized the answers during training. Because of this, OpenAI has stopped reporting these scores and is pushing the industry to switch over to SWE-bench Pro instead.
Grow customers & revenue: Join companies like Google, IBM, and Datadog. Showcase your product to our 200K+ engineers and 100K+ followers on socials. Get in touch.
Whenever you’re ready to take the next step
What did you think of today's newsletter?
You can also reply directly to this email if you have suggestions, feedback, or questions.
Until next time — The Code team


