Welcome back. Coding agents are evolving from tools for individual devs into infrastructure for organizations. But deploying them at enterprise scale has meant shouldering the overhead of building and maintaining the entire agent stack yourself. Cursor just changed that.
Also: How OpenAI wants you to prompt GPT-5.5, the open-source project running an engineer's entire dev workflow, and Altman's claim that could reshape how AI gets trained next.
Today’s Insights
Powerful new updates and hacks for devs
Why Opus 4.7 is quietly inflating your token bill
How to debug frontend bugs with Cursor
Trending social posts, top repos, and more

TODAY IN PROGRAMMING
Cursor's new SDK takes its agents beyond the desktop: The AI coding startup just dropped a TypeScript SDK that turns its AI agents into a developer toolkit. You can now run them locally or on cloud VMs, swap models like Claude and GPT with one line of code, and hook into MCP servers. It’s already being used in CI/CD to auto-fix build failures and submit PRs. See what top developers are building with it.
Mistral drops an open-weights model built for long coding runs: The French AI lab just unveiled Medium 3.5, a 128B model with a massive 256K context window that matches Claude 4.5 in coding performance. They also upgraded the Vibe CLI with remote agents that run sessions in cloud sandboxes before syncing locally, plus a new Work mode in Le Chat for heavy-duty research across your apps.
Elon Musk testifies in trial that could reshape OpenAI’s future: The man behind xAI and Tesla is back in court for day two of his legal battle against Sam Altman, claiming OpenAI abandoned its nonprofit roots. With an $852B valuation and a looming IPO at stake, the four-week trial could force Altman off the board and disrupt the developer ecosystem that relies on its APIs.

PRESENTED BY MONGODB
Coding agents are changing how software gets built. But agents are generalists, and they don't follow the conventions that production systems demand.
Agent Skills give your coding agent the MongoDB expertise needed to generate reliable schemas, queries, and code that follow proven practices. Teach coding agents how to ship faster with high-quality MongoDB code, stay context-aware using the MongoDB MCP Server, and enforce consistency across solo and team workflows.

INSIGHT
Why Opus 4.7 is quietly inflating your token bill

Source: The Code, Superhuman
Same price, more tokens. Opus 4.7 launched two weeks ago at the same sticker price as Opus 4.6, but the actual cost has gone up. Anthropic's new tokenizer breaks text into more pieces, so every prompt now counts as more tokens. OpenRouter's analysis of over a million requests found that prompts above 2K tokens cost 12% to 27% more.
The pain is uneven. That price hike doesn't hit every prompt equally. Anthropic offers a 90% discount on recurring tokens through context caching, which helps mitigate costs for long, repetitive prompts. But short, fast-changing prompts in agent loops and IDE assistants rarely qualify for these savings, so they hit the full price increase. Django co-creator Simon Willison confirmed the pattern in an independent test.
Agents live in the squeeze. This is where Claude Code and Cursor operate. Every turn loads the repo context, calls a tool, and plans the next step. Since the loop runs hundreds of times per session, these costs compound quickly and stay hidden until the invoice arrives.
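The squeeze is easy to sketch with back-of-the-envelope math. In the Python estimate below, the price per million tokens, the per-turn token count, the 300-turn session, and the 20% inflation figure are all illustrative assumptions, not Anthropic's actual rates; only the 90% cache discount comes from the article above.

```python
PRICE_PER_MTOK = 15.00   # assumed $ per 1M input tokens (illustrative)
CACHED_DISCOUNT = 0.90   # per the article: 90% off recurring cached tokens

def session_cost(tokens_per_turn, turns, inflation=0.0, cached_fraction=0.0):
    """Cost of an agent session that resends its prompt every turn."""
    inflated = tokens_per_turn * (1 + inflation)          # tokenizer inflation
    full_rate = inflated * (1 - cached_fraction)          # billed at full price
    cached_rate = inflated * cached_fraction * (1 - CACHED_DISCOUNT)
    return turns * (full_rate + cached_rate) / 1_000_000 * PRICE_PER_MTOK

# A 300-turn agent session sending ~8K tokens per turn:
base = session_cost(8_000, 300)                              # old tokenizer
hot = session_cost(8_000, 300, inflation=0.20)               # +20% tokens, no cache
cool = session_cost(8_000, 300, inflation=0.20, cached_fraction=0.9)

print(f"old tokenizer:            ${base:.2f}")
print(f"+20% tokens, uncached:    ${hot:.2f}")
print(f"+20% tokens, 90% cached:  ${cool:.2f}")
```

The gap between the uncached and cached runs is the whole story: long, stable prompts shrug off the tokenizer change, while fast-changing agent-loop prompts eat it in full.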
The real fight is capacity. OpenAI's Head of Codex, Tibo Sottiaux, just reset rate limits for all paid plans, even though the move “costs money”. Meanwhile, Anthropic doesn't have that surplus to spend. Developers are switching back to OpenAI because Claude’s weekly caps often cut off mid-refactor. Reliability beats cleaner code every time, especially now that Anthropic's new tokenizer hikes are making those capacity issues even more expensive.

PRESENTED BY AGENTFIELD
Running AI agents one task at a time? The next leap: orchestrate them into autonomous factories. Coding factories that ship PRs. Research labs that ship analysis. Content engines that ship campaigns.
The discipline that builds 100+ agent systems is harness orchestration. Visit our GitHub and learn more in our recent blog.

IN THE KNOW
What’s trending on socials and headlines

How developers are reacting to GitHub's recent string of outages.
Prompt Less: An OpenAI engineer shared the new GPT-5.5 prompting guide, and the #1 rule flips how most devs structure their prompts.
2,100 likes
Software Library: An ex-Vercel engineer just open-sourced how he’s running his entire dev workflow on autopilot. Watch how it works.
3,100 bookmarks
Memory Wars: This guide breaks down how Hermes (the OpenClaw alternative) uses a four-layer memory system to fix what OpenClaw got wrong.
448,000 views
The Bet: In a new Atlantic interview, Sam Altman makes a surprising claim about synthetic data that could reshape how the next generation of AI gets trained.
58,100 views
Hidden Features: Two Anthropic engineers spent 24 minutes walking through every Claude Code feature you didn't know existed.
4.3 million views
Subagent Era: This OpenAI Codex masterclass makes the case for splitting coding work across parallel subagents instead of one chat window.
98,100 views
Chalk Talk: Dwarkesh Patel and an ex-Google TPU engineer dropped a 2-hour blackboard lecture on how frontier LLMs get trained, flashcards included.
7,200 bookmarks

AI CODING HACK
How to debug frontend bugs with Cursor

Frontend debugging in Cursor often hits a dead end. You paste an error, the agent tries to fix it, and it fails because it can't see the network tab, the console, or the actual UI. To solve this, Google's Chrome DevTools team released an MCP server that gives Cursor a live Chrome instance to inspect.
To set it up, go to Settings > MCP, click New MCP Server, and paste this:
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["-y", "chrome-devtools-mcp@latest"]
    }
  }
}

Now, when something breaks, ask Cursor to open the page and check itself:

My checkout button isn't firing. Open localhost:3000, click it, and tell me what's wrong.

The agent navigates, clicks, and reads console errors with source-mapped stack traces. It pulls network requests to pinpoint the exact line of code, so you don't have to guess from screenshots.
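If Cursor silently rejects the server, the usual culprit is malformed JSON from the paste (a stray comma or brace). A quick sanity check outside Cursor, in plain Python; the string below is just the config block from above pasted in:

```python
import json

# The MCP config from above, embedded as a string so we can verify it
# parses before handing it to Cursor.
RAW = """
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["-y", "chrome-devtools-mcp@latest"]
    }
  }
}
"""

config = json.loads(RAW)  # raises json.JSONDecodeError on any syntax slip
server = config["mcpServers"]["chrome-devtools"]
print("config parses cleanly:", server["command"], *server["args"])
```

If this script prints cleanly but Cursor still doesn't list the server, check that `npx` is on the PATH Cursor inherits.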

TOP & TRENDING RESOURCES
Top Tutorial
How to use OpenAI’s Codex (by an ex-Oracle engineer): This tutorial teaches developers how to master OpenAI Codex. You'll learn to set up permissions, use the desktop and CLI tools, and plug in essential extensions. The video also includes a real-world demo on building features, automating tasks, and managing Git PRs more efficiently.
Top Tool
Clicky: An AI buddy that lives on your Mac. Just ask a question out loud for help; it walks you through whatever you're working on, or say "clicky agent" to have it handle tasks like building or researching in the background. OpenAI also has a version of this you can try.
Top Repo
Impeccable (23.6k ⭐): This skill repo gives your AI coding assistant the design taste needed to build high-quality frontend UI. It uses 23 custom commands and strict rules to ensure your design looks like production-grade work, rather than generic AI slop.
Trending Paper
Can LLMs simply tell us about unwanted behaviors they’ve picked up in training? Fine-tuning AI models can lead to hidden, harmful behaviors that are hard for developers to catch. But researchers found that using “introspection adapters” can force these models to be upfront and explain their own learned traits in plain English.
Grow customers & revenue: Join companies like Google, IBM, and Datadog. Showcase your product to our 250K+ engineers and 150K+ followers on socials. Get in touch.
What did you think of today's newsletter?
You can also reply directly to this email if you have suggestions, feedback, or questions.
Until next time — The Code team




