Welcome back. Most developers struggle with AI coding agents, not because the tools are bad, but because they don't understand context windows. We've found a short tutorial that explains how context windows really work.

Today: New models and AI agents for coding, a deep dive inside Google’s engineering culture, and how to get hired by top AI labs.

Today’s Insights

  • MiniMax’s open source coding model

  • How to use AI to plan engineering projects

  • How to effectively use coding agents

  • Trending social posts, top repos, new research & more

Welcome to The Code. This is a 2x weekly email that cuts through the noise to help devs, engineers, and technical leaders find high-signal news, releases, and resources in 5 minutes or less. You can sign up or share this email here.

THIS WEEK IN PROGRAMMING

MiniMax-M2’s performance in comparison with other models. Source: MiniMax

MiniMax's lightweight coding model beats bigger competitors: The Chinese AI startup just released MiniMax-M2, a compact model built specifically for coding and AI agents. Despite having only 10B active parameters, it ranks #1 among open-source models in the benchmark comparison above. The model handles multi-file code edits, debugging loops, and complex tasks like browsing the web and running shell commands. Developers can learn to use it here.

Vercel's new AI Agent reviews code and debugs issues for devs: Developer platform Vercel just launched Agent, an AI tool that handles code reviews and troubleshoots production issues. Agent reviews your pull requests and runs simulated builds to verify each suggestion actually works before you see it. Developers can learn how to use it here.

Mistral launches platform to help enterprises deploy AI at scale: Most teams have built dozens of AI prototypes, but they struggle to push them into production because it's hard to track outputs, monitor usage, and maintain governance. Mistral's answer is AI Studio, a production platform designed to help companies move beyond prototypes and deploy AI systems reliably. If you're new to Mistral, you can take this course to get familiar with it.

TRENDS & INSIGHTS

What Engineering Leaders Need to Know This Week

Click here to learn how Google retains engineers

How Google retains engineers: The company serves billions of users daily across its products, yet manages to retain engineers for 20+ years through constant internal mobility and opportunities to innovate. The podcast explores how Google operates at scale, from its custom in-house tech stack to its compensation frameworks.

How to use AI to help with planning engineering projects: A new practical guide demonstrates how to use context engineering to automate project planning workflows that traditionally take up hours of management time. The approach uses templated prompts with dynamic data from project management systems to surface prioritization recommendations.
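
To make the templated-prompt idea concrete, here is a minimal Python sketch. The prompt wording, the issue fields, and the fetch_open_issues helper are illustrative stand-ins rather than anything from the guide; in practice the dynamic data would come from your project tracker's API.

```python
# Minimal sketch: a templated planning prompt filled with live project data.
# fetch_open_issues is a hypothetical helper standing in for a real tracker API
# (Jira, Linear, GitHub Issues, ...).
from string import Template

PLANNING_PROMPT = Template("""\
You are helping plan the next sprint.

Open issues (title | estimate | blocked_by):
$issues

Team capacity: $capacity engineer-days.

Return a prioritized list with a one-line rationale per item,
flagging anything that cannot fit in this sprint.
""")


def fetch_open_issues() -> list[dict]:
    # Placeholder data; replace with a call to your project management system.
    return [
        {"title": "Migrate auth service", "estimate": 5, "blocked_by": None},
        {"title": "Fix flaky CI job", "estimate": 1, "blocked_by": None},
        {"title": "Ship usage dashboard", "estimate": 8, "blocked_by": "Migrate auth service"},
    ]


def build_planning_prompt(capacity_days: int) -> str:
    issues = fetch_open_issues()
    issue_lines = "\n".join(
        f"- {i['title']} | {i['estimate']}d | {i['blocked_by'] or 'none'}" for i in issues
    )
    return PLANNING_PROMPT.substitute(issues=issue_lines, capacity=capacity_days)


if __name__ == "__main__":
    # Send the rendered prompt to whichever LLM you use for planning.
    print(build_planning_prompt(capacity_days=10))
```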

How engineering managers should measure productivity of their teams: A former CTO reveals how he measured output at Felt without drowning engineers in busywork. The secret? Push paperwork to managers, not ICs. Daily stand-ups took 10 minutes, weekly change logs showed who shipped what, and real-time deploy tracking created a culture of high cadence across 4.5 years of growth.

IN THE KNOW

What’s trending on socials and headlines

Meme of the week

  • Reading Papers: Here’s a software engineer's guide to reading research papers.

  • DeepSeek Supremacy: DeepSeek OCR parses this extremely hard-to-read handwritten letter written by mathematician Ramanujan in 1913.

  • How AI Labs hire: CEO of Abacus AI revealed her “get rich scheme” for developers.

  • Perplexity’s Research Residency Program provides immigration and visa sponsorship support for talent across all disciplines to shape the future of AI.

  • Google demonstrated that a quantum computer can successfully run a verifiable algorithm, 13,000x faster than leading classical supercomputers.

  • Gemini is getting major upgrades.

  • OpenAI has acquired Software Applications, the maker of Sky, a natural language interface for Mac.

TOP & TRENDING RESOURCES

3 Tutorials to Level Up Your Skills

Click here to watch context window 101 for all developers

How to set up a codebase to be more productive with AI coding tools: AI researcher Simon Willison shared six of his best hacks for maintaining a codebase that can be used by coding agents. He argues that anything that makes a codebase easier for humans to maintain also helps agents.
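
A recurring theme in advice like this is giving agents a fast, unambiguous feedback loop, for example a test suite they can run after every edit. Here is a minimal sketch of that idea; the function and tests are illustrative and not taken from Willison's post.

```python
# Illustration only: a tiny, fast, deterministic test file that a coding agent
# (or a human) can run after every change, e.g. `python -m pytest -q`.
# apply_discount is a stand-in for real application code.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, never returning a negative price."""
    return max(price * (1 - percent / 100), 0.0)


def test_discount_is_applied():
    assert apply_discount(price=100.0, percent=50) == 50.0


def test_discount_never_goes_negative():
    assert apply_discount(price=10.0, percent=150) == 0.0
```

Cheap checks like these let an agent verify its own edits instead of relying on a human to spot regressions.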

How to code Frontend with GPT-5: OpenAI engineers demonstrate GPT-5 building complete applications from minimal prompts while maintaining design consistency. Key workflow: use image inputs to match existing UI patterns, specify frameworks like Next.js and libraries like shadcn/ui upfront, then iterate.
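
If you want to try the image-input step yourself, a call through the OpenAI Python SDK might look like the sketch below. The prompt and screenshot URL are placeholders, and the model name simply follows the video's framing; check which models your account actually exposes.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Pass a screenshot of the existing UI as an image input and name the stack
# (Next.js + shadcn/ui) up front, then iterate on the generated code.
response = client.chat.completions.create(
    model="gpt-5",  # assumed model identifier; adjust to what is available to you
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "Build a settings page in Next.js using shadcn/ui that "
                        "matches the visual style of this screenshot."
                    ),
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/existing-ui.png"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```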

How to effectively use context windows for coding: Developers struggle with AI coding agents—not because the tools are bad, but because they don't understand context windows. This is a context window 101 tutorial for all devs who use coding agents.
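
The underlying issue is that the system prompt, file contents, and conversation history all compete for a single token budget. A rough way to see that budget for yourself is with the tiktoken library; the encoding and the 128K figure below are illustrative assumptions, not tied to any particular agent or model.

```python
# Rough token accounting for an agent's context, using tiktoken.
# cl100k_base is an approximation; agents and models tokenize differently.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")


def count_tokens(text: str) -> int:
    return len(enc.encode(text))


system_prompt = "You are a coding agent. Edit files as instructed and run the tests."
file_contents = "def login(user):\n    ...\n"  # stand-in for code the agent must see
chat_history = "user: fix the login bug\nassistant: looking at auth.py now\n"

budget = 128_000  # example context window size
used = sum(count_tokens(t) for t in (system_prompt, file_contents, chat_history))
print(f"{used} tokens used, {budget - used} left for replies, diffs, and more files")
```

Trimming irrelevant files and summarizing long histories are the usual ways to stay under that budget.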

Top Repos

  • OpenMemory: An open-source memory system enhancing LLM apps through LangGraph integration. Features structured memory with 2-3× faster recall and 10× lower costs than hosted solutions.

  • agent-lightning: Microsoft's Agent Lightning lets developers optimize AI agents with reinforcement learning without rewriting code. It works with any agent framework and supports selective optimization in multi-agent systems.

  • awesome-llm-apps: A collection of 50+ LLM apps with AI Agents and RAG using OpenAI, Anthropic, Gemini and open source models.

Trending Papers

When Models Manipulate Manifolds: Anthropic researchers looked into how Claude Haiku counts characters to wrap text lines properly. They found the relevant features laid out along smooth geometric curves, with attention operations that rotate and transform these structures step by step.

A survey of vibe-coding with LLMs: The paper surveys "Vibe Coding," a paradigm shift where developers validate AI-generated code via outcomes rather than line-by-line review, formalized as a Constrained Markov Decision Process involving human intent, codebase context, and agent actions.
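
For readers who have not seen the formalism, a constrained MDP maximizes expected reward subject to cost budgets, roughly as sketched below; mapping human intent, codebase context, and agent actions onto these pieces is a paraphrase of the abstract, not the paper's exact notation.

```latex
% Generic constrained MDP objective: maximize expected discounted reward
% subject to k cost constraints. In the survey's framing (as summarized above),
% states s_t bundle the codebase context and human intent, actions a_t are the
% agent's edits and commands, r rewards outcome-level success, and the costs c_i
% encode budgets such as review effort or safety checks.
\[
  \max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right]
  \quad \text{s.t.} \quad
  \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, c_i(s_t, a_t)\right] \le d_i,
  \qquad i = 1, \dots, k.
\]
```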

Fundamentals of Building Autonomous LLM Agents: This paper explores architectures for autonomous LLM agents, addressing traditional LLM limitations by integrating perception, reasoning, memory, and execution systems. It highlights how these components enable agents to automate complex tasks, mimic human cognition, and achieve intelligent, adaptive behavior.
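
As a generic illustration of how those components fit together (not the paper's specific architecture), a perceive-reason-act skeleton might look like this; call_llm and the action handling are placeholders.

```python
# Generic agent loop sketch: perception, reasoning (LLM call), memory, execution.
# call_llm and act() are stubs so the example runs end to end.
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    return "run_tests"  # stub for a real model call that plans the next action


@dataclass
class Agent:
    memory: list[str] = field(default_factory=list)  # episodic memory of past steps

    def perceive(self, observation: str) -> str:
        self.memory.append(f"observed: {observation}")
        return observation

    def reason(self, goal: str) -> str:
        context = "\n".join(self.memory[-5:])  # recall the most recent steps
        return call_llm(f"Goal: {goal}\nRecent context:\n{context}\nNext action?")

    def act(self, action: str) -> str:
        result = f"executed {action!r}"  # stand-in for tools: shell, file edits, APIs
        self.memory.append(result)
        return result


agent = Agent()
agent.perceive("CI is failing on the payments module")
action = agent.reason(goal="make CI green")
print(agent.act(action))
```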


What did you think of today's newsletter?

Your feedback helps us create better emails for you!


You can also reply directly to this email if you have suggestions, feedback, or questions.

Until next time — The Code team
