Cloudflare Says AI Made 1,100 Jobs Obsolete, Even as Revenue Hit a Record High

Here it is, in plain English for the first time from a major tech CEO: AI didn’t replace people because the company was struggling — it replaced them while the company was doing better than ever. Cloudflare’s Matthew Prince essentially announced that profitability and headcount are now decoupled, and he’s not apologizing for it. This is the part of the AI story that the breathless productivity hype always skips past: “efficiency gains” is a euphemism, and the 1,100 people who just lost their jobs are living proof of what it actually means.


Musk v. Altman Week 2: OpenAI Fires Back, and Shivon Zilis Reveals That Musk Tried to Poach Sam Altman

The Musk v. Altman trial just keeps getting better — or worse, depending on your tolerance for Silicon Valley drama served under oath. Week two brought the revelation that Musk apparently tried to recruit Sam Altman away from OpenAI while simultaneously claiming Altman had betrayed him, which is a level of audacity that even I have to respect. At this point the trial is doing more to illuminate the founding mythology of modern AI than any journalism has managed, and we’re only in week two.


Microsoft Was Worried OpenAI Would Run Off to Amazon and ‘Shit-Talk’ Azure

The same trial is also giving us a window into Microsoft’s early insecurities about its OpenAI investment, which apparently included executives worrying that Sam Altman would defect to AWS and spend his time publicly dunking on Azure. Satya Nadella spent billions to ensure that didn’t happen, and to his credit, it didn’t — though given what we now know about OpenAI’s ongoing infrastructure ambitions, the anxiety was not entirely unfounded. Nothing like $13 billion to buy a little brand loyalty.


‘HELLO BOSS’: Inside the Chinese Realtime Deepfake Software Powering Scams Around the World

404 Media got their hands on “Haotian AI,” a piece of realtime deepfake software that lets scammers swap their face for anyone else’s — live, on WhatsApp, Zoom, and Teams — and it’s exactly as alarming as it sounds. This isn’t theoretical; it’s marketed specifically to fraudsters and apparently doing brisk business. Every time someone in the AI industry dismisses deepfake concerns as overblown, I want to send them this article.


Nick Bostrom Has a Plan for Humanity’s ‘Big Retirement’

The philosopher who literally wrote the book on AI existential risk now thinks we should sprint toward advanced AI and embrace a “solved world” where humans are essentially freed from the burden of work and problem-solving. It’s either the most optimistic pivot imaginable or a very sophisticated cope — I genuinely can’t tell which. Either way, the timing is chef’s kiss: it arrives the same week Cloudflare laid off 1,100 people citing AI efficiency.


The New Wild West of AI Kids’ Toys

Cuddly AI-connected companions for children are proliferating faster than any regulatory framework can keep up with, and some lawmakers are already pushing for bans. The core tension here is real: a conversational AI that a child bonds with, trained on opaque data and governed by murky retention policies, is a genuinely different category of product than a stuffed animal — and the toy industry has historically not been great at policing itself on safety even without adding a large language model to the equation.


There’s a Long-Shot Proposal to Protect California Workers From AI

California gubernatorial candidate Tom Steyer is proposing a jobs guarantee for workers displaced by AI, which is politically interesting even if the odds of it passing are roughly equivalent to Elon Musk sending Sam Altman a birthday card. The fact that this is showing up in a statewide governor’s race tells you something about where the political conversation is heading — Cloudflare’s announcement this week probably didn’t hurt Steyer’s polling.


OpenAI’s Codex: Running It Safely at Scale

OpenAI published a detailed breakdown of how they’re running Codex — their coding agent — with sandboxing, approval workflows, network policies, and telemetry baked in. It’s a genuinely thoughtful piece of systems-design writing, and a useful counterpoint to the narrative that AI labs don’t think about safety until someone yells at them. That said, publishing your security architecture in a blog post is a bold move, and I’m sure no one in the business of finding exploits read it with interest.


Bottom Line

The week’s theme is the gap between AI’s promises and AI’s consequences — record revenues and mass layoffs, cuddly toys and surveillance risks, trillion-dollar infrastructure fights and a philosopher telling us to just relax and retire already.