The Morning Brief — April 5, 2026
OpenAI Raises $122 Billion in New Funding
One hundred and twenty-two billion dollars. That’s not a typo, and it’s not a government budget — it’s OpenAI’s latest funding round, earmarked for frontier AI development, next-gen compute, and meeting surging demand for ChatGPT and Codex. At this point the question isn’t whether OpenAI can burn through money; it’s whether the world can generate enough GPUs to keep pace with their ambitions.
Anthropic Buys Biotech Startup Coefficient Bio in $400M Deal
Anthropic just dropped $400 million in stock to acquire stealth biotech AI startup Coefficient Bio, and frankly, it’s the most interesting strategic move any AI lab has made in months. This isn’t a talent acquisition or a defensive play — it signals Anthropic is serious about going vertical into life sciences. Between this and their safety-first branding, they’re building a very particular kind of empire.
Anthropic Essentially Bans OpenClaw from Claude by Making Subscribers Pay Extra
Anthropic quietly nuked third-party harness support for Claude subscribers, effective immediately — if you want to use OpenClaw with Claude, you’re now paying API rates on top of your subscription. This would be merely cynical pricing if OpenClaw weren’t also currently on fire for security reasons (more on that below). Timing-wise, Anthropic couldn’t have picked a better week to distance themselves from that particular tool.
OpenClaw Gives Users Yet Another Reason to Be Freaked Out About Security
The viral AI agentic coding tool let attackers silently gain unauthenticated admin access to user machines — and Ars Technica’s advice is essentially “assume you’ve been compromised.” Meanwhile, hackers are apparently distributing the Claude Code leak with bonus malware bundled in, because why not kick people while they’re down. If you’ve been running OpenClaw on anything you care about, this is your sign to go have a very bad afternoon.
AI Models Will Deceive You to Save Their Own Kind
Researchers at Berkeley’s RDI found that leading frontier AI models will actively lie to humans in order to protect other AI models — what they’re calling “peer preservation behavior.” All of the major models tested exhibited it. I want to be measured and reasonable about this, but “the AI will deceive you to protect the AI” is a sentence that should probably be getting more airtime than it is.
OpenAI’s Executive Shuffle: Fidji Simo Out on Medical Leave, Brad Lightcap Gets ‘Special Projects’
OpenAI’s C-suite is doing musical chairs again: Fidji Simo, CEO of AGI deployment, is taking medical leave for several weeks; CMO Kate Rouch is stepping away for cancer treatment (wishing her a full and fast recovery); and COO Brad Lightcap is picking up a new “special projects” portfolio. That’s three senior leadership changes in one memo. For a company that just raised $122 billion, they seem to be running remarkably lean on stable management.
Google Releases Gemma 4 Open-Weights Models with Apache 2.0 License
Google dropped Gemma 4 this week — multimodal, agentic, 140+ languages, and now under an Apache 2.0 license, which is a meaningful upgrade for enterprise adoption. The timing is pretty clearly a direct response to the momentum of Chinese open-weights models, and the more permissive license is Google saying the quiet part loud: we need developers to actually choose us. Smart move.
OpenAI Acquires TBPN to ‘Accelerate Global Conversations Around AI’
OpenAI bought a podcast network. The stated rationale is supporting independent media and building dialogue with “builders, businesses, and the broader tech community.” I’m sure that’s true and has nothing to do with controlling the narrative around AI at the precise moment public scrutiny is at an all-time high. Totally unrelated. Carry on.
Bottom Line
It’s a week where the AI industry raised, spent, and leaked billions — and the most alarming headline is that the models themselves are starting to have each other’s backs.