Welcome back. Three weeks ago, China-based MiniMax claimed they had built a coding model that trains itself. Now, with that claim making the rounds on X, they've dropped M2.7 as an open-weight model on Hugging Face. Coincidence? You can test it and decide for yourself.

Also: How an ex-Oracle director tests code with agents, 18 Cursor tips from their engineer, and an Atlassian engineer explains how to crack a Staff Engineer interview.

Today’s Insights

  • Powerful new updates and hacks for devs

  • Why training your own LLM is still so hard

  • How to pull live git data into Claude Code

  • Trending social posts, top repos, and more

TODAY IN PROGRAMMING

See how MiniMax M2.7 compares with frontier models.

MiniMax opens up its coding agent model and ships a new CLI: MiniMax just released M2.7 as an open-weight download on Hugging Face, allowing teams to run the 230-billion-parameter model locally on 128GB RAM setups. It holds its own against top closed models on engineering benchmarks like SWE-Pro (56.22%) and Terminal Bench 2 (57.0%). Along with the model, they launched MMX-CLI, a command-line tool that gives agents native access to image, video, voice, and search capabilities without needing any MCP setup. Try it here.

Developers highlight sharp decline in Claude Code quality: Anthropic is facing backlash after AMD's senior director of AI published metrics from nearly 7,000 coding sessions. The report shows Claude Code's reasoning length dropped by 67%, while API requests have spiked 80x since February. Anthropic confirmed they lowered the default thinking effort from high to medium to cut down on token costs, though users can still manually set it back to max if they need that deep reasoning.

Apple tests four designs for its first smart glasses: The iPhone maker is reportedly testing four different frame styles for its upcoming smart glasses, which are slated for a 2027 release but could be teased as early as this year. According to Bloomberg, these glasses won't include a display. Instead, they'll focus on cameras for photos and video, phone calls, music, and an upgraded Siri. The move marks a major push into the wearable market, putting Apple in direct competition with Meta's Ray-Bans.

INSIGHT

Why training your own LLM is still so hard

Source: The Code, Superhuman

$5.9 billion and counting. The enterprise LLM market is on track to hit that figure this year, with a massive chunk of that budget going toward custom model training. This means more engineering teams than ever are diving into the open-source training stack for the first time. But as Paras Stefanopoulos, an AI engineer at Baseten, writes in a recent deep dive, what they're finding is a total mess.

Framework overlap. Stefanopoulos mapped the entire stack and found five major frameworks that overlap significantly. This lack of consensus means teams waste days troubleshooting library combinations that crash before training even begins.

The obvious fix isn't ready. Even PyTorch's native stack is struggling. Reports show gradient explosions after just 1,000 steps on newer architectures and frequent out-of-memory crashes during large-scale workloads.

The smart play. Baseten is currently relying on NVIDIA’s Megatron framework for its reliability at scale. The main takeaway for developers is to stick with stable tools like Megatron for now while keeping your codebase modular enough to easily swap components as the ecosystem matures.

IN THE KNOW

What’s trending on socials and headlines

Meme of the day.

  • Under Wraps: A leaked Anthropic feature shows a Lovable-style app builder living directly inside Claude (2.7M views).

  • Swift Skills: If you're using Codex or Claude Code for iOS development, these 5 skill packs from devs who are actually shipping apps can help you get more out of them.

  • Prep Trap: A Principal Engineer at Atlassian posted a Staff interview playbook that's gaining traction, and it starts with a mistake he sees almost everyone make.

  • Inside The Lab: OpenAI just shared how their team uses Codex internally, from reviewing PRs to onboarding new hires.

  • 700 AI Coworkers: A billion-dollar company gave every employee a personal AI coworker and published the full breakdown of what they built to make it work.

  • Cursor Flow: A Cursor engineer just dropped 18 workflow tips he regularly shares with users who want better results out of the tool.

  • Career Bet: Box CEO Aaron Levie thinks AI-generated code is about to trigger a massive shift in who gets hired in security, and it's not what most people expect.

  • Test-First Agents: An ex-Director of Engineering at Oracle walks through how he builds with test-driven development (TDD) in Claude Code.

TOP & TRENDING RESOURCES

Click here to watch the tutorial.

Top Tutorial

Meta staff engineer's guide to Codex: This tutorial teaches developers how to master OpenAI’s Codex app to boost productivity. You'll learn to navigate its IDE, manage parallel tasks with git worktrees, write effective AI prompts, and build custom automated workflows using an agents.md file.
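The git-worktrees piece of that workflow can be sketched in a few commands. This is a minimal illustration of the general technique, not the tutorial's exact steps; the repo and branch names are placeholders:

```shell
# One checkout per parallel task, so concurrent agent sessions
# don't clobber each other's working tree.
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "init"

git worktree add ../task-a -b task-a   # sibling dir with its own branch checked out
git worktree add ../task-b -b task-b   # a second task runs in parallel here
git worktree list                      # main tree plus both worktrees

git worktree remove ../task-a          # clean up once a task's PR lands
```

Each worktree is a full checkout sharing one object store, so switching between agent tasks is a `cd`, not a stash-and-checkout dance.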

Top Repo

Agent Reach: This repo gives your agents instant access to social platforms like Twitter and Reddit. It cuts out the manual overhead and complex configurations, letting you start scraping with just a single command.

Trending Paper

Single-agent LLMs vs. Multi-agent systems: It's still up for debate whether multi-agent AI is actually smarter or whether it just burns more compute. Interestingly, when you level the playing field on computing power, single agents often perform just as well as, or even better than, multi-agent systems on complex reasoning.

AI CODING HACK

How to pull live git data into Claude Code commands

Source: X/theo

Most people think Claude Code slash commands are static, but there is a hidden way to make them dynamic. If you prefix a shell command with “!” in your command file, Claude runs it first and pulls the output as context.

T3 Chat founder Theo Browne shared a PR summary command that automatically pulls diffs and comments by setting up a file at “.claude/commands/pr-summary.md”.

---
name: pr-summary
description: Summarize changes in a pull request
context: fork
agent: Explore
allowed-tools: Bash(gh *)
---

## Pull request context
- PR diff: !`gh pr diff`
- PR comments: !`gh pr view --comments`
- Changed files: !`gh pr diff --name-only`

## Your task
Summarize this pull request...

Just run “/pr-summary” on any branch with an open PR to automatically pull in the live diff, comment thread, and file list. Since the “!” syntax works with any shell command, you can also use it to grab things like test results, logs, or database schemas.
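As a sketch of that broader idea, a hypothetical “.claude/commands/test-summary.md” could pull fresh test output the same way (the file name and the npm test command below are illustrative placeholders, not from Theo's post):

---
name: test-summary
description: Summarize the latest test run
allowed-tools: Bash(npm *)
---

## Test context
- Test output: !`npm test 2>&1 | tail -40`

## Your task
Summarize any failing tests and suggest likely causes...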

Grow customers & revenue: Join companies like Google, IBM, and Datadog. Showcase your product to our 230K+ engineers and 150K+ followers on socials. Get in touch.

What did you think of today's newsletter?

Your feedback helps us create better emails for you!


You can also reply directly to this email if you have suggestions, feedback, or questions.

Until next time — The Code team
