The Morning Brief — April 17, 2026
OpenAI’s big Codex update is a direct shot at Claude Code
OpenAI has given Codex a serious makeover — computer use, in-app browsing, image generation, memory, plugins — essentially strapping everything but a coffee maker onto it. This is what catching up looks like when you have infinite resources and a bruised ego: you don’t iterate, you detonate. Claude Code clearly got inside their heads, and honestly, competition is good for the rest of us.
Anthropic releases a new Opus model amid Mythos Preview buzz
Claude Opus 4.7 is out, and Anthropic is billing it as their most capable generally available model yet — better at complex coding, image analysis, and instruction-following. Interesting timing: drop a new flagship model the same week OpenAI is gunning for your coding crown and Ronan Farrow is putting your chief rival on blast. Anthropic is having a very good week and would like you to notice.
Ronan Farrow on Sam Altman’s ‘unconstrained’ relationship with the truth
The man who brought down Harvey Weinstein has now published a deep-dive on OpenAI CEO Sam Altman’s relationship with honesty, and the phrase “unconstrained by the truth” is doing a lot of work in that New Yorker piece. Farrow is not a reporter who publishes things he can’t defend, which makes this worth reading carefully — especially as Musk v. Altman heads toward a jury that will literally be asked to rule on whether OpenAI betrayed its founding mission. The vibes in San Francisco this week are immaculate.
The battle for OpenAI’s soul
The Musk v. Altman trial is about to get a jury, and the central question — did OpenAI abandon its nonprofit mission to benefit humanity in favor of enriching itself? — is the kind of thing that would have seemed like science fiction a decade ago. Whatever you think of Musk’s motivations, the underlying question is legitimate and consequential, and watching it play out in a courtroom while OpenAI raises money at stratospheric valuations is something else. Popcorn prices have never been higher.
The UK launches its $675 million sovereign AI fund
The UK government is putting £535 million (about $675 million) toward homegrown AI startups, explicitly framing it as reducing dependence on foreign technology. It’s a smart hedge — when your entire digital economy runs on infrastructure built by two or three American companies, “sovereign AI” stops sounding like nationalist chest-thumping and starts sounding like basic risk management. Whether £535M is actually enough to move the needle in a market where single funding rounds can dwarf it is a different question entirely.
Anthropic plots major London expansion
As tensions with the U.S. government mount, Anthropic has leased London office space big enough to quadruple its current 200-person UK headcount. Read the room: when a leading AI safety company starts quietly building out capacity across the Atlantic, something more than simple market expansion is going on. Pair this with the UK’s new sovereign AI fund and you’ve got the makings of a genuinely interesting geopolitical shift in where serious AI work gets done.
Physical Intelligence says its new robot brain can figure out tasks it was never taught
Physical Intelligence’s new π0.7 model claims to get robots meaningfully closer to the dream of a general-purpose robot brain — one that can reason about novel tasks rather than just execute what it was explicitly trained on. We’ve been promised this before, many times, but the MIT Tech Review piece this week on how robot learning has actually evolved makes a compelling case that the gap between “robotic arm in a factory” and “robot that figures things out” is genuinely closing. I’m cautiously optimistic, which is the correct amount of optimistic.
Claude Opus wrote a Chrome exploit for $2,283
Anthropic’s Mythos model got held back from public release because it was too good at finding exploitable vulnerabilities — but The Register points out that the already-available Claude Opus managed to write a working Chrome exploit for under $2,300 in API costs. The takeaway isn’t that Opus is dangerous; it’s that the line between “safely capable” and “dangerously capable” is a lot blurrier, and cheaper to cross, than the careful press releases suggest.
Bottom Line
The AI industry is simultaneously having its most productive week in months and its most uncomfortable reckoning with questions of trust, safety, and who exactly is in charge.