Welcome back. The team at Cursor has been cooking. The company just unveiled Composer 1, its first coding model and what it calls "the best way to code with agents." Composer 1 is cheaper than Claude, can design beautiful frontends (like Claude, unlike Codex), and, most importantly, it's blazingly fast. You can learn more about Composer and Cursor 2.0 in this 1-hour tutorial.

Today’s Insights

  • OpenAI and Cognition launch new models for devs

  • How engineering managers should conduct 1:1s

  • Pro coding hacks for Claude Code, Cursor and Skills

  • Trending social posts, top repos, new research & more

Welcome to The Code. This is a 2x weekly email that cuts through the noise to help devs, engineers, and technical leaders find high-signal news, releases, and resources in 5 minutes or less. You can sign up or share this email here.

THIS WEEK IN PROGRAMMING

Click here to watch Cursor 2.0 in action

Cursor launches its own coding model that's 4x faster: Composer is the startup’s first proprietary model, and it comes with a redesigned interface called Cursor 2.0. Composer wraps up most coding tasks in under 30 seconds — four times faster than similar models — and was trained to better understand large codebases. The new interface lets you run multiple AI agents at the same time without them getting in each other's way. Here’s a 1-hour tutorial on how to use it.

OpenAI's new safety models let developers write their own rules: With gpt-oss-safeguard, OpenAI is giving developers unprecedented control over content moderation. The two open-weight models (120B and 20B) can classify text against any developer-provided policy — whether you're running a gaming forum that needs to catch cheating discussions or a review site filtering fake posts. Here’s OpenAI’s official guide on how to use it.
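
If you want to try it, here's a minimal sketch of the pattern, assuming you're serving one of the open-weight checkpoints (e.g. openai/gpt-oss-safeguard-20b) behind an OpenAI-compatible endpoint such as a local vLLM server; the policy text, labels, and endpoint URL below are illustrative, and OpenAI's guide covers the exact prompt format.

```python
# Minimal sketch: classify content against your own policy with gpt-oss-safeguard.
# Assumes the open-weight model is served behind an OpenAI-compatible API
# (e.g. vLLM at localhost:8000); policy text and labels are made up for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

POLICY = """You are a content reviewer for a gaming forum.
Label the post VIOLATION if it shares cheats, exploits, or account-selling offers.
Otherwise label it SAFE. Answer with exactly one label."""

def classify(post: str) -> str:
    response = client.chat.completions.create(
        model="openai/gpt-oss-safeguard-20b",  # 120B variant also available
        messages=[
            {"role": "system", "content": POLICY},  # your policy rides in the system prompt
            {"role": "user", "content": post},
        ],
    )
    return response.choices[0].message.content.strip()

print(classify("Selling a maxed account, DM me for aimbot configs."))
```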

Cognition launches a new model optimized for software engineering: The team behind Devin just dropped SWE-1.5, a powerful AI model that hits near-top performance on coding tasks while running at up to 950 tokens per second. That's 6x faster than Haiku 4.5 and 13x faster than Sonnet 4.5. It's already live in Windsurf, where Cognition's own engineers now use it daily for digging through codebases and building apps.

TRENDS & INSIGHTS

What Engineering Leaders Need to Know This Week

Click here to watch Google Cloud’s CTO break down enterprise AI agents

Google Cloud CTO's playbook for scaling AI agents: Building agents isn't magic: it requires documented business processes and incremental measurement, says Google Cloud CTO Will Grannis. His team breaks complex agent workflows into "very atomic, very specific" steps with measurable outcomes at each gate, deciding whether to proceed, pivot, or stop.
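
As a rough illustration of that gating pattern (not Google Cloud's implementation), here's a minimal Python sketch: the step structure, scores, and thresholds are all hypothetical, but they show how each atomic step can end in a proceed, pivot, or stop decision.

```python
from enum import Enum
from dataclasses import dataclass
from typing import Callable

class Gate(Enum):
    PROCEED = "proceed"
    PIVOT = "pivot"
    STOP = "stop"

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]     # one atomic unit of work
    score: Callable[[dict], float]  # measurable outcome, 0.0-1.0

def evaluate(score: float, proceed_at: float = 0.9, pivot_at: float = 0.6) -> Gate:
    # Hypothetical thresholds: good enough to continue, salvageable, or not worth pursuing.
    if score >= proceed_at:
        return Gate.PROCEED
    if score >= pivot_at:
        return Gate.PIVOT
    return Gate.STOP

def run_workflow(steps: list[Step], context: dict) -> dict:
    for step in steps:
        context = step.run(context)
        gate = evaluate(step.score(context))
        print(f"{step.name}: {gate.value}")
        if gate is Gate.STOP:
            break
        if gate is Gate.PIVOT:
            # A real system would swap in an alternative step or reroute here.
            continue
    return context
```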

Engineering manager’s exact 1:1 script: Most engineering managers wing their one-on-ones, but Can Duruk has a better approach. His weekly structure uses the People/Product/Process framework to cover team dynamics, work progress, and systemic issues.

Prioritization Starts With Strategic Prioritization: Product consultant John Cutler's latest post tackles a common problem — teams being asked to prioritize without clear strategy. He shows how every decision should tie back to four business fundamentals while fighting natural market decline.

IN THE KNOW

What’s trending on socials and headlines

Meme of the week

  • Specialised LLMs: Only drop your LLM if you can win in one of these categories.

  • Coding Hacks: This is by far the best post we’ve found on Claude Code’s advanced hacks.

  • Hello World: Google just launched AI Studio vibe coding tutorials on YouTube.

  • Skilled Agents: This ML engineer dives deep into Claude Skills and teaches you how to use them.

  • GitHub unveils Agent HQ, a unified platform to orchestrate Copilot and third-party agents.

  • xAI releases Grokipedia, an open-source encyclopedia enabling collaborative editing of 800K Grok-written articles.

  • LangChain launches Agent Builder, a no-code tool built on its Deep Agents architecture that lets anyone create an agent.

  • Replit just became the fastest place to build & deploy MCP servers.

TOP & TRENDING RESOURCES

3 Tutorials to Level Up Your Skills

Click here to watch how Cursor’s AI Agents work

Cursor AI agents work like 10 developers (demo): A VP from Cursor just walked through his daily coding workflow in a hands-on tutorial. The secret: break work into bite-sized tasks and let AI agents handle different jobs — one agent fixes bugs, another handles security checks.

OpenAI releases guide for building custom code reviews with Codex SDK: Developers working with on-prem repositories or non-GitHub platforms can now build their own automated code review systems using OpenAI's Codex CLI. The new guide walks teams through installing Codex in CI/CD runners, using headless mode with structured JSON outputs, and integrating review comments directly into pull requests.
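
As a rough sketch of what that CI step can look like, here's a Python wrapper around Codex's headless mode; `codex exec` is the CLI's non-interactive entry point, but the `--json` flag and the event fields parsed below are assumptions on our part, so defer to OpenAI's guide for the exact invocation and output schema.

```python
# Minimal sketch of a CI code-review step built on the Codex CLI's headless mode.
# The --json flag and the event shape parsed below are assumptions; follow
# OpenAI's guide for the real invocation and output schema.
import json
import subprocess

PROMPT = (
    "Review the diff between origin/main and HEAD. "
    "List concrete issues with file and line references."
)

def run_review() -> str:
    result = subprocess.run(
        ["codex", "exec", "--json", PROMPT],  # non-interactive run with structured output
        capture_output=True, text=True, check=True,
    )
    # Treat stdout as JSONL and keep the agent's text messages (field names assumed).
    messages = []
    for line in result.stdout.splitlines():
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue
        if event.get("type") == "agent_message":
            messages.append(event.get("message", ""))
    return "\n".join(messages)

if __name__ == "__main__":
    review = run_review()
    print(review)  # a real pipeline would post this as a PR comment via your platform's API
```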

Fine-tuning and Reinforcement Learning for LLMs (Intro to Post-Training): This new course, taught by Sharon Zhou, VP of AI at AMD, shows how to turn pre-trained LLMs into reliable, production-ready systems. Across 5 modules, you'll walk through the full post-training pipeline with practical techniques used by top labs.

Top Repos

  • olmocr: This toolkit by Allen Institute for AI converts PDFs, PNGs, and JPEGs to structured plain text, preserving reading order, tables, equations, and multi-column layouts for downstream LLM use.

  • goose: An open-source, extensible AI agent that goes beyond code suggestions: install, execute, edit, and test with any LLM.

  • claude-code-cheat-sheet: The ultimate collection of Claude Code tips, tricks, hacks, and workflows that you can use to master Claude Code in minutes.

Trending Papers

Signs of introspection in LLMs: This research investigates whether large language models like Claude can genuinely introspect on their internal states or merely fabricate plausible responses when queried. Key finding: Claude shows limited but authentic introspection, accurately detecting and describing injected "thoughts" in experiments, though performance varies with injection strength.

Agent Data Protocol: This research introduces the Agent Data Protocol (ADP), a unified format to combine 13 diverse agent datasets into the largest SFT collection (1.27M trajectories) for training agentic language models across coding, browsing, and tool use. ADP enables ~20% average performance gains, achieving SOTA or near-SOTA results without domain-specific tuning.

Fundamentals of Building Autonomous LLM Agents: This paper reviews the architecture and methods for building autonomous agents using LLMs. Key finding: Integrating perception, reasoning, memory, and execution systems enables LLMs to mimic human cognition, creating more adaptive and capable agents that bridge AI-human performance gaps.

What did you think of today's newsletter?

Your feedback helps us create better emails for you!

You can also reply directly to this email if you have suggestions, feedback, or questions.

Until next time — The Code team
