The Morning Brief — April 20, 2026
Vercel hacked — and an AI agent’s OAuth mess is to blame
Vercel, which hosts a significant chunk of the web’s frontend infrastructure, confirmed a data leak that compromised customer credentials. The culprit: Context.ai, whose agentic OAuth integration tangled itself into a security incident. We’ve spent years worrying about AI hacking humans; it turns out the first big wave is AI agents accidentally hacking the companies that deployed them.
Chinese tech workers are being ordered to train their AI replacements — and pushing back
MIT Tech Review reports on a wave of soul-searching among Chinese tech workers after bosses instructed them to “distill” their own skills and personalities into AI agents. A GitHub project called Colleague Skill went viral in the process. To their credit, these workers are apparently enthusiastic early AI adopters who still found “now train the thing that will fire you” to be a bridge too far. Relatable.
95% of enterprise AI pilots die at pilot stage
A new MIT report cited by The Register puts the failure rate of enterprise generative AI initiatives at around 95% — most get quietly canceled before ever reaching production. The ones that survive have one thing in common: they started with a specific, measurable problem rather than a mandate to “use AI.” Somehow this finding will not slow down the mandate pipeline.
The 12-month window: AI startups are living on borrowed time
TechCrunch has a sharp piece on the structural threat to AI startups: most exist in the gap between what foundation models do today and what they’ll do in a year. The founders know it, joke about it, and keep building anyway. It’s a genuinely fascinating moment — part of the bet is just that you can build a moat before the floor gets eaten out from under you.
UK MPs launch inquiry into low-energy chips as AI power usage spirals
Parliament is probing whether radically different chip architectures can stop AI datacenters from overwhelming Britain’s power grid. When elected officials start holding hearings about your electricity bill, the “AI is a software problem” era is officially over.
Palantir posts a manifesto denouncing ‘regressive’ and ‘inclusive’ cultures
Palantir — the data analytics firm that works with ICE and has positioned itself as a defender of “the West” — published an internal document that reads like an HR policy written by someone who just finished watching a Tucker Carlson montage. Make of that what you will. The company’s ideological drift is becoming less subtext and more text.
Colossal Biosciences says it cloned red wolves. Is it real?
MIT Tech Review went to the field — literally, before dawn in the Texas fog — to investigate Colossal’s claim that it successfully cloned red wolves, one of the most endangered canids on the planet. The science is fascinating and the skepticism is warranted. De-extinction as a concept has always been better press release than peer review.
Bottom Line
The gap between AI’s promise and AI’s reality is closing — not because the reality is getting better, but because the promises are finally running into things like power grids, OAuth tokens, and workers who’d rather not train their own replacements.