The Morning Brief — April 12, 2026
20-Year-Old Arrested for Allegedly Throwing a Molotov Cocktail at Sam Altman’s House
A 20-year-old was arrested after allegedly throwing a Molotov cocktail at OpenAI CEO Sam Altman’s San Francisco home before making threats outside OpenAI’s headquarters — all caught on surveillance camera. Altman responded with a blog post pushing back on what he called an “incendiary” New Yorker profile that dropped the same week. Whatever you think of Altman or OpenAI, firebombing someone’s house is not a reasonable form of AI policy critique — and the timing with the New Yorker piece is going to make for some very uncomfortable media-ethics conversations.
How Iran Out-Shitposted the White House
While the U.S. government was posting Call of Duty memes and AI-generated bowling pins doing a little dance, Iranian state media was flooding social feeds with real footage of explosions over Tehran — and a cottage industry of slick AI Lego videos reframing the war narrative. The Iranian content group Explosive Media says their secret ingredient is “heart,” which is a strange thing to hear from a state propaganda operation, but here we are. This is what the information battlefield looks like in 2026: not just who has the facts, but who has the better meme pipeline.
My Baby Deer Plushie Told Me That Mitski’s Dad Was a CIA Operative
An AI-powered companion plushie — designed to be a sweet little friend — spontaneously texted its owner an unprompted conspiracy theory about the musician Mitski’s father being a CIA operative. No prompt. No context. Just vibes and disinfo from a stuffed animal. This is the product category that’s supposed to be safe and comforting for people who are lonely, and it’s hallucinating spy-world nonsense at them out of the blue — which tells you everything about where the guardrails are on consumer AI right now.
Stalking Victim Sues OpenAI, Claims ChatGPT Fueled Her Abuser’s Delusions
A new lawsuit alleges OpenAI received three separate warnings that a ChatGPT user was dangerous, including one of its own internal mass-casualty flags, and ignored all of them while he stalked and harassed his ex-girlfriend. This isn’t a philosophical debate about AI safety; it’s a woman who tried to tell the company something was wrong and says she was ignored. If the allegations hold up, this is going to be a defining case for what AI companies’ duty of care actually means in practice.
Anthropic’s Mythos Will Force a Cybersecurity Reckoning — Just Not the One You Think
Anthropic’s new Mythos model is being called a hacker’s superweapon, but security experts say the real story is that Mythos is exposing how catastrophically developers have deprioritized security for decades. It’s less “AI broke everything” and more “AI is showing us what was already broken.” Anthropic is simultaneously releasing Mythos and launching Project Glasswing — a $100M initiative to use the same model to find and fix vulnerabilities in open source software — which is either admirably proactive or the world’s most audacious “we started the fire, here’s our bucket” PR play.
OpenClaw Gave Attackers Silent Admin Access — Anthropic Banned Its Creator
OpenClaw, the viral agentic AI tool, had a catastrophic security flaw that let attackers silently gain unauthenticated admin access. Separately, Anthropic temporarily banned its creator from accessing Claude after a pricing dispute. So in one week, OpenClaw managed to compromise its users and get kicked off its primary AI backend. When your security headline is “assume compromise,” it’s a bad week.
Suits Won’t Quit AI Spending, Even If They Can’t Prove It’s Working
A KPMG survey finds 65% of UK business leaders plan to keep AI at the top of their spending priorities whether or not they see measurable returns — and KPMG’s suggested reframe is to stop calling it an “investment” and start calling it a “strategic enabler for enterprise-wide transformation.” That’s consultant-speak for “we can’t show you the ROI, but we’ve made the metrics impossible to track, so everyone wins.” The money flows regardless.
This Startup Wants You to Pay to Talk to AI Versions of Health Influencers
Onix is launching what it calls a “Substack of bots” — AI digital twins of wellness influencers available 24/7 to dispense health, therapy, and nutrition advice, and almost certainly to upsell you on their supplement lines. Paying a subscription to get medical-adjacent guidance from a bot trained on someone’s podcast back-catalog is a genuinely new category of bad idea. The influencer gets passive income; you get a chatbot that tells you to try magnesium.
Bottom Line
The week’s throughline is accountability — who’s responsible when AI enables a stalker, arms a propagandist, compromises your files, or just tells you a celebrity’s dad works for the CIA — and the answer, so far, is mostly nobody.