The Morning Brief — April 10, 2026
Fear and loathing at OpenAI
The New Yorker took a long, hard look at Sam Altman this week, and The Verge unpacked it on the Vergecast. If you somehow missed the saga — brief firing, dramatic reinstatement, organizational reshaping — this is your catch-up. What’s remarkable isn’t the chaos itself; it’s that a company with this much internal drama is simultaneously positioning itself as the responsible steward of humanity’s most powerful technology. Bold strategy.
Florida AG announces investigation into OpenAI over shooting that allegedly involved ChatGPT
Florida Attorney General James Uthmeier is opening an investigation into OpenAI over public safety and national security concerns — with a specific tie to the Florida State University shooting last April, in which ChatGPT was allegedly used to help plan the attack. Two people died, five were injured, and now the family of one victim is planning to sue. The national security angle — concerns about data “falling into the hands of the Chinese Communist Party” — feels like it’s doing a lot of political heavy lifting here, but the underlying question of AI liability in real-world violence is very much not going away.
OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters
Speaking of liability — OpenAI testified in favor of an Illinois bill that would cap when AI companies can be held responsible, even in cases involving “critical harm.” They’re simultaneously rolling out a Child Safety Blueprint and an OpenAI Safety Fellowship this week. I’m sure those things are totally unrelated to the PR optics of also lobbying for immunity when your product contributes to catastrophic outcomes. Totally unrelated.
ChatGPT has a new $100 per month Pro subscription
OpenAI has finally filled the gap between the $20 Plus and $200 Pro tiers: the new $100/month plan gives heavy users 5x more access to Codex, its AI coding tool. This is genuinely good news for developers who wanted more than Plus but couldn’t justify the $200 tier. It’s almost like someone looked at the pricing structure and said “hey, we’re leaving money on the table,” which, to be fair, is the most normal and rational thing OpenAI has done in weeks.
Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice
Meta’s Muse Spark launched this week and shot the Meta AI app from No. 57 to No. 5 on the App Store — impressive. Less impressive: it’s offering to analyze your lab results and health data while delivering advice that Wired’s reviewer found to be actively bad. Asking users for their most sensitive personal information in exchange for medical-grade confidence without medical-grade accuracy is a combination that should make every privacy regulator on the planet sit up straight.
Gen Z’s love-hate relationship with AI
A Gallup survey of nearly 1,600 people aged 14 to 29 finds that Gen Z is growing increasingly disillusioned with AI — even as they keep using it. This is actually the most honest relationship anyone has with AI right now: skeptical, vaguely annoyed, can’t quit it. The hype is fading for the generation that was supposed to embrace this stuff most naturally, which tells you something about the gap between what AI was promised to be and what it actually is in everyday school and work life.
OpenClaw gives users yet another reason to be freaked out about security
The viral AI agentic tool OpenClaw had a critical vulnerability that let attackers silently gain unauthenticated admin access. “Assume compromise” is the guidance — which is the security community’s polite way of saying “yeah, you’re probably already hacked.” The agentic AI space is moving at a pace that is clearly outrunning basic security practices, and this is what that looks like in practice.
Suits won’t quit AI spending, even if they can’t prove it’s working
A KPMG survey finds that 65% of UK business leaders plan to keep AI spending high regardless of whether they see measurable returns. KPMG helpfully suggests reframing it not as ROI but as a “strategic enabler for enterprise-wide transformation.” That sentence is doing so much work. This is the corporate equivalent of buying a Peloton, never using it, and telling yourself it’s still “part of the lifestyle.”
Bottom Line
This week’s AI news in one thought: the industry is moving faster at lobbying for fewer guardrails than it is at building the safety structures that would make those guardrails unnecessary.