The Morning Brief — April 7, 2026

Iran Threatens OpenAI’s Stargate Data Center in Abu Dhabi

The IRGC published a video threatening OpenAI’s planned Abu Dhabi data center as a retaliatory target if the US strikes Iranian power infrastructure. This is a remarkable sentence to type in 2026, and yet here we are — AI data centers are now explicitly named geopolitical targets in a hot war. The “move fast and build stuff” crowd may not have fully gamed out the scenario where their GPU clusters become bargaining chips in missile diplomacy.


Meta Pauses Work With Mercor After Data Breach Puts AI Industry Secrets at Risk

Mercor, a major data vendor used by several top AI labs, suffered a breach that may have exposed sensitive details about how those labs train their models — the secret sauce, not just the recipe. Meta has paused its relationship with the company while investigations proceed. In an industry where training data provenance is everything, this is the kind of incident that makes legal and security teams simultaneously reach for antacids.


Anthropic Sure Has a Mess on Its Hands Thanks to That Claude Code Source Leak

Anthropic accidentally released Claude Code’s source code, and predictably, hackers are already bundling it with malware and posting it around the internet. Meanwhile, AMD’s AI director is publicly complaining that Claude Code got “dumber and lazier” since its last update, Anthropic shut down OpenClaw subscriptions because demand is crushing their infrastructure, and Fidji Simo — the CEO of AGI deployment — is taking medical leave. It’s a rough week to be Anthropic’s communications team.


Google Bumps Up Q Day Deadline to 2029, Far Sooner Than Previously Thought

Google is now warning the entire industry that quantum computers capable of breaking RSA and elliptic curve encryption could arrive by 2029 — a timeline that would have seemed alarmist just a couple of years ago. Paired with new research showing quantum attacks will require far fewer resources than assumed, this is the kind of story that quietly matters more than almost everything else in tech. If you’re still running RSA everywhere and thinking “we’ll deal with that later,” later just got a lot shorter.
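Why factoring is the whole ballgame: RSA's security rests on the difficulty of recovering the primes behind the public modulus, and Shor's algorithm on a large enough quantum computer does exactly that. A toy sketch with textbook-sized numbers (purely illustrative, nothing like real key sizes) shows how little stands between a factored modulus and the private key:

```python
# Toy illustration (not a real attack): RSA security rests entirely on
# the hardness of factoring n = p * q. Shor's algorithm on a sufficiently
# large quantum computer factors n efficiently -- which is what "Q Day" means.
# All values below are tiny textbook demo numbers, not real parameters.

p, q = 61, 53          # secret primes (what a quantum attacker recovers)
n = p * q              # 3233, the public modulus
e = 17                 # public exponent

# Once p and q are known, deriving the private key is trivial:
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)    # modular inverse (Python 3.8+), d = 2753 here

# Encrypt with the public key, decrypt with the derived private key:
m = 65
c = pow(m, e, n)       # ciphertext
assert pow(c, d, n) == m   # factoring n gave away everything
```

Post-quantum migration (the point of Google's warning) means replacing this factoring-based math with schemes like the NIST-standardized lattice algorithms, whose hardness does not collapse under Shor's algorithm.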


OpenAI’s Vision for the AI Economy: Public Wealth Funds, Robot Taxes, and a Four-Day Workweek

OpenAI published its industrial policy wishlist, which includes taxing AI profits, creating public wealth funds, and redistributing the gains from automation to displaced workers. To be clear: this is the company actively building the automation doing the displacing, now magnanimously suggesting we tax it. I’ll give them credit for engaging seriously with the downstream consequences — but there’s a certain audacity to being both the arsonist and the one proposing the fire code.


Anthropic Launches Cowork, a Claude Desktop Agent That Works in Your Files — No Coding Required

Amid all the chaos, Anthropic quietly shipped something genuinely interesting: Cowork brings Claude Code-style agentic capabilities to regular people who don’t live in a terminal. The team built it in about a week and a half, largely using Claude Code itself — which is either a great proof of concept or a slightly alarming sign that the AI is now shipping features faster than humans can review them. Probably both.


Anthropic Reveals $30B Run Rate and Plans to Use 3.5GW of New Google AI Chips

Buried in Broadcom’s announcements: Anthropic is projecting a $30 billion run rate and plans to consume 3.5 gigawatts’ worth of next-gen Google AI accelerators built by Broadcom. For context, 3.5GW is roughly the output of three large nuclear power plants. The scale of compute being bet on AI right now is genuinely staggering — and it explains, at least partly, why a geopolitical adversary thinks threatening a data center is worth making a video about.
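The nuclear-plant comparison checks out with a quick back-of-the-envelope calculation (assuming roughly 1.1GW of electrical output for a large reactor, a typical figure but an assumption here):

```python
# Back-of-the-envelope scale check (illustrative assumptions):
reactor_gw = 1.1   # assumed output of one large nuclear reactor
cluster_gw = 3.5   # figure from the Broadcom announcement

reactors_needed = cluster_gw / reactor_gw
print(f"~{reactors_needed:.1f} large reactors")   # ~3.2

# Running continuously for a year, that power draw becomes:
hours_per_year = 365 * 24
twh_per_year = cluster_gw * hours_per_year / 1000  # GWh -> TWh
print(f"~{twh_per_year:.0f} TWh/year")             # ~31 TWh
```

Thirty-odd terawatt-hours a year is on the order of a mid-sized country's annual electricity consumption, which puts the "staggering" in perspective.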


AI Slop Got Better, So Now Maintainers Have More Work

Here’s the irony nobody talks about: as AI coding tools get better, open-source maintainers are getting more overwhelmed, not less. When AI-generated bug reports and pull requests were obviously bad, you could ignore them. Now that they’re plausible, someone has to actually evaluate them — and that someone is still a human volunteer. Progress creates its own bottlenecks.


Bottom Line

AI is now embedded deep enough in geopolitics, finance, and infrastructure that its problems are no longer just tech problems — they’re everyone’s problems.