
Welcome back. OpenAI just dropped GPT-5.1-Codex-Max, an agentic model designed to code autonomously for 24 hours straight. While they claim massive productivity gains, expert developers have already uncovered a paradoxical issue that might actually hurt your velocity.
Also: How to adopt AI for engineering teams, Google’s tutorial for agentic coding with Antigravity, and pro hacks to use Gemini 3 and Nano Banana Pro.
Today’s Insights
Google DeepMind drops Nano Banana Pro
Mental models for building products people love
How to leverage LLMs to win in the AI age
Trending social posts, top repos, new research & more
Welcome to The Code. This is a 2x weekly email that cuts through the noise to help devs, engineers, and technical leaders find high-signal news, releases, and resources in 5 minutes or less. You can sign up or share this email here.

THIS WEEK IN PROGRAMMING
Google DeepMind drops Nano Banana Pro with built-in reasoning: The search giant’s latest release leverages the Gemini 3 Pro engine to think through complex prompts, earning high praise from early reviewers. ML expert Sam Witteveen highlighted the model's ability to ground images using real-time Google Search data (perfect for accurate environmental details), calling the upgrade "Nano Banana on steroids."
OpenAI's new model divides developers: The ChatGPT maker launched GPT-5.1-Codex-Max, an agentic coding model that can work independently for 24+ hours straight. OpenAI's own engineers are shipping 70% more code with it. But not everyone's impressed — expert developer Theo Browne called it "slow and high-maintenance," noting that while it eventually gets things right, its poor TypeScript skills and need for constant hand-holding make it frustrating to use.

PRESENTED BY AUGMENT CODE
Most engineering teams are starting AI adoption, but only a few can drive real ROI.
Augment Code's new adoption playbook, AI-Powered Engineering at Scale, offers exclusive insights and tools to help you accelerate your AI adoption process and scale across the enterprise.
In Augment Code's guide, you'll explore:
Real frameworks from CTOs who've scaled AI
Ready-to-use checklists for AI transformation
Organizational self-assessment tools
Stop experimenting and start scaling: download the playbook today.

TRENDS & INSIGHTS
What Engineering Leaders Need to Know This Week

Stewart Butterfield, Co-founder of Slack
Mental models for building products people love: Slack co-founder Stewart Butterfield shares the product frameworks and leadership principles that contributed most to his success. From “utility curves” to “the owner’s delusion” to “hyper-realistic work-like activities,” his thinking on craft, strategy, and leadership applies to anyone building products or leading teams.
How to thrive as an engineering leader in the AI era: With companies like Lovable already skipping PM hires and expecting engineers to lead entire projects themselves, this article argues that devs need to become "engineering multipliers." It suggests devs focus on amplifying team productivity, owning product decisions beyond just coding, and leveraging AI while maintaining quality.
AI eats the world: Renowned tech analyst Benedict Evans argues AI is the new platform shift. As AI coding becomes the new abstraction layer and slashes software costs, models face commoditization. Future winners will need proprietary data, distribution, and product UX rather than just superior tech.

IN THE KNOW
What’s trending on socials and headlines

Meme of the week
Prompting Hacks: An AI engineer from Google DeepMind shared some of the best prompting techniques for Gemini 3.
Vibe Coding: A computer vision enthusiast developed a Jarvis HUD interface inspired by Tony Stark, using Gemini 3.
LLM Weapon: This is the way to leverage LLMs for success in the AI age.
AI Explains: Engineers are leveraging Nano Banana Pro to transform complex research papers into easy-to-understand diagrams.
OpenAI rolled out GPT-5.1 Pro to all Pro users.
Meta dropped SAM 3D — an AI model that can turn any object in a still image into a high-quality 3D model.
xAI released Grok 4.1 Fast and the Agent Tools API, which has direct access to X data.
Replit introduced “Fast Mode,” which steers its coding Agent toward quick, precise changes.

TOP & TRENDING RESOURCES
3 Tutorials to Level Up Your Skills
Getting started with Google Antigravity (official course): This comprehensive, step-by-step course equips you to use Antigravity, guiding your transformation from a coder to an AI manager who can dispatch autonomous agents to build applications, write tests, and independently debug code.
How to create Skills: Anthropic released a step-by-step guide to creating reusable Skills for specialized tasks. The tutorial provides production-ready templates, debugging strategies, and real examples that let teams build consistent, scalable AI workflows without re-explaining requirements to Claude every time (see the first sketch after this list).
Making sense of memory in AI agents: This deep dive clarifies the confusing terminology around agent memory systems. It covers practical implementation approaches, from hot-path vs. background updates (illustrated in the second sketch below) to frameworks like Letta and mem0.
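For a feel of what a Skill looks like on disk, here is a minimal sketch assuming the SKILL.md layout Anthropic describes: a folder holding a SKILL.md file with YAML frontmatter followed by markdown instructions. The skill name, description, and steps below are hypothetical examples, not taken from the tutorial.

    ---
    name: release-notes
    description: Drafts release notes from a list of merged pull requests, following the team's changelog conventions.
    ---
    # Release Notes
    1. Group the supplied pull requests by area (API, UI, infra).
    2. Summarize each group in one or two plain-language bullets.
    3. Call out any breaking changes in a "Breaking changes" section at the top.

The frontmatter tells Claude when the skill applies, and the body spells out how to do the task, which is what saves teams from restating those conventions in every prompt.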
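To make the hot-path vs. background distinction concrete, here is a minimal Python sketch; all names are hypothetical and not taken from the article, Letta, or mem0. The hot-path version writes to memory inside the request, while the background version queues the write for a worker so the reply is not blocked.

    import queue
    import threading

    memory_store: list[str] = []  # stand-in for a real memory backend
    write_queue: "queue.Queue[tuple[str, str]]" = queue.Queue()

    def extract_memory(user_msg: str, reply: str) -> str:
        # Stand-in for an LLM call that distills something worth remembering.
        return f"user said {user_msg!r}; assistant replied {reply!r}"

    def handle_turn_hot_path(user_msg: str) -> str:
        reply = f"(reply to {user_msg!r})"  # stand-in for the model call
        memory_store.append(extract_memory(user_msg, reply))  # memory write blocks the turn
        return reply

    def handle_turn_background(user_msg: str) -> str:
        reply = f"(reply to {user_msg!r})"
        write_queue.put((user_msg, reply))  # cheap enqueue; the reply returns immediately
        return reply

    def memory_worker() -> None:
        # Applies queued updates off the request path.
        while True:
            user_msg, reply = write_queue.get()
            memory_store.append(extract_memory(user_msg, reply))
            write_queue.task_done()

    threading.Thread(target=memory_worker, daemon=True).start()
    print(handle_turn_background("I prefer TypeScript examples"))
    write_queue.join()  # only so the queued write lands before the sketch exits
    print(memory_store)

The trade-off: hot-path writes keep memory perfectly fresh at the cost of per-turn latency, while background writes keep turns fast but can lag a step behind the conversation.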
Top Repos
Memori: This is an open-source memory engine that enables any LLM to remember conversations, learn from interactions, and maintain context across sessions.
Awesome-AI-apps: A collection of projects showcasing RAG, agents, workflows, and other AI use cases.
Gateway: An AI gateway designed for fast, reliable & secure routing to 1600+ language, vision, audio, and image models.
Trending Papers
Early experiments in accelerating science with GPT-5: OpenAI’s new study shows that GPT-5 can already act as a real research partner on hard frontier problems when human experts scaffold it and check everything carefully.
Unlocking the power of multi-agent LLM for reasoning: In this paper, researchers examine a critical limitation of multi-agent systems: lazy agent behavior, where one agent dominates while the other contributes little, undermining collaboration and collapsing the setup into an ineffective single agent.
Agent-R1: This is a new end-to-end Reinforcement Learning (RL) framework for training agentic LLMs through direct interaction and experience. This approach achieves significant performance gains and is more sample-efficient than common supervised fine-tuning methods.
Whenever you’re ready to take the next step
What did you think of today's newsletter?
You can also reply directly to this email if you have suggestions, feedback, or questions.
Until next time — The Code team



