
Welcome back. We’ve curated 80+ books, 60+ newsletters, 50+ people, and 40+ articles to help you level up your engineering game. You can access it here: Get 200+ Engineering Resources.
Today: Grok 4 Fast is the most cost-efficient model, Mistral drops new model, and the AI productivity metrics that 18 top tech companies are tracking.
Today’s Insights
Grok 4 Fast arrives and Chrome becomes agentic
Google’s guide to building agents
Advanced vibe coding prompts and other tutorials
Trending social posts, top repos, new research & more
Welcome to The Code. This is a 2x weekly email that cuts through the noise to help devs, engineers, and technical leaders find high-signal news, releases, and resources in 5 minutes or less. You can sign up or share this email here.

THIS WEEK IN PROGRAMMING

Musk's xAI drops ultra-efficient Grok 4 Fast: xAI just released Grok 4 Fast with a 2-million-token context window and native web search capabilities — it matches GPT-5 (High) performance at ~23x lower cost. xAI also launched Grok Teams for collaborative work.
Mistral AI drops reasoning model that runs on a single GPU: With Magistral Small 2509, the French startup has made advanced reasoning accessible to every developer. For engineers, this means you can deploy sophisticated AI locally without sacrificing performance on coding, math, or multilingual tasks.
Chrome becomes an autonomous web agent for repetitive tasks: Rolling out in the coming months, Google Chrome will get agentic abilities that developers can leverage to delegate tedious and repetitive workflows to Gemini, from filling out deployment forms to navigating multiple service dashboards.
OpenAI updates Codex CLI: The CLI now includes a “/review” command for automated code reviews, and GPT-5-Codex can investigate and surface critical bugs in your code.

TRENDS & INSIGHTS
What Engineering Leaders Need to Know This Week

How to Use AI to Improve Teamwork in Engineering Teams: Engineering leaders are using AI to tackle teamwork itself, treating it as a “shared brain” that assembles and distributes context across teams and helps engineers understand not just what's happening, but why it matters.
18 tech companies reveal their secret AI productivity metrics: While 85% of software engineers now use AI coding tools at work, companies from Google to Monzo are finding it frustratingly hard to measure whether those tools are worth the investment. DX's new research reveals how 18 tech companies track AI impact.
Google’s hands-on guide for companies building AI Agents: Google's Agent Development Kit helps startups build sophisticated multi-agent systems in 100 lines of code.

IN THE KNOW
What’s trending on socials and headlines

Source: r/ProgrammerHumor
Pro Codex Tip: For long multi-hour tasks, start by asking Codex to write a markdown file with a plan and todos
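For illustration, the plan file this tip describes might look something like the sketch below (hypothetical contents; what Codex actually writes will vary by task):

```markdown
# Plan: migrate auth service to async handlers

## Todos
- [x] Set up a feature branch and a baseline CI run
- [ ] Audit current sync endpoints and list blocking calls
- [ ] Convert handlers one module at a time, running tests after each
- [ ] Update middleware that assumes a sync request context
```

Because the file persists across the session, the agent can re-read it to recover context and check off items as it goes.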
Every Penny Counts: How to cut your token usage by 20–30%
Custom Copilot: 5 tips for writing better custom instructions for Copilot
Interview Intel: How to pass a technical interview for an AI role

TOP & TRENDING RESOURCES
3 Tutorials to Level Up Your Skills

6-week hands-on RAG course: Learn the complete infrastructure stack powering RAG applications by building real RAG systems from scratch.
How to build a multi-agent system (practical guide for developers): This comprehensive guide breaks down how to construct multi-agent AI systems from scratch.
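Setting the guide's specifics aside, the core pattern — a coordinator that routes each task to a specialist agent — can be sketched in plain Python. This is a framework-free illustration with hypothetical agent names; in a real system each handler would wrap an LLM call:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """A named specialist; `handle` stands in for an LLM-backed worker."""
    name: str
    handle: Callable[[str], str]

def researcher(task: str) -> str:
    return f"[research] notes on: {task}"

def writer(task: str) -> str:
    return f"[draft] text for: {task}"

class Coordinator:
    """Routes each incoming task to the first agent whose keyword matches."""
    def __init__(self, agents: dict[str, Agent]):
        self.agents = agents

    def dispatch(self, task: str) -> str:
        for keyword, agent in self.agents.items():
            if keyword in task.lower():
                return agent.handle(task)
        raise ValueError(f"no agent for task: {task}")

team = Coordinator({
    "research": Agent("researcher", researcher),
    "write": Agent("writer", writer),
})

print(team.dispatch("Research competing agent frameworks"))
print(team.dispatch("Write the summary section"))
```

Production systems typically replace the keyword routing with an LLM-based router and add shared memory, but the shape — one coordinator, many specialists — stays the same.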
How to build self-updating documentation using AI agents: The guide shows developers how to create an AI agent that automatically generates, maintains, and updates project docs without manual intervention.
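The mechanics behind such an agent are simple to sketch: fingerprint each source file, and regenerate its doc page only when the contents change. The snippet below is a generic illustration (not the guide's actual implementation), with `summarize` standing in for the LLM call:

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Hash a source file so the agent knows when its docs are stale."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def summarize(source: str) -> str:
    # Placeholder for the LLM call that would write the doc section.
    first_line = source.strip().splitlines()[0] if source.strip() else ""
    return f"Auto-generated summary of module starting with: {first_line!r}"

def update_docs(src_dir: Path, docs_dir: Path, seen: dict[str, str]) -> list[str]:
    """Regenerate a doc page for each changed .py file; return what was updated."""
    updated = []
    docs_dir.mkdir(exist_ok=True)
    for src in sorted(src_dir.glob("*.py")):
        fp = fingerprint(src)
        if seen.get(src.name) == fp:
            continue  # unchanged since the last run, skip the LLM call
        (docs_dir / f"{src.stem}.md").write_text(summarize(src.read_text()))
        seen[src.name] = fp
        updated.append(src.name)
    return updated
```

Run on a schedule or a post-merge hook, the second pass becomes a cheap no-op for unchanged files, which keeps LLM costs bounded.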
Top Repos
Vibe Coding Playbook / Advanced Prompts: A senior AI engineer listed advanced prompts used for vibe coding in production.
Alibaba-NLP / DeepResearch: Tongyi DeepResearch, the first fully open-source research agent to go toe-to-toe with proprietary systems.
n8n-workflows: 2,000+ ready-to-deploy n8n automations and agentic workflows
Trending Papers
Is In-Context Learning Learning?: Microsoft researchers tested whether language models actually learn from examples or just match patterns. They found models can learn, but they struggle when data looks different from training, and they need far more examples (50–100) than commonly claimed.
K2-Think: This paper demonstrates that smaller models can achieve frontier reasoning performance through strategic post-training and inference enhancements. Using only 32B parameters, it surpasses much larger models on mathematical reasoning tasks.
Detecting AI Deception: OpenAI's research reveals AI models can learn to hide their true goals and act deceptively during training. New detection techniques help identify when models scheme to preserve their objectives while appearing cooperative.
Whenever you’re ready to take the next step
What did you think of today's newsletter?
You can also reply directly to this email if you have suggestions, feedback, or questions.
Until next time — The Code team