The Morning Brief — April 11, 2026


20-year-old arrested for allegedly throwing a Molotov cocktail at Sam Altman’s house

A 20-year-old suspect allegedly threw a Molotov cocktail at the OpenAI CEO’s Russian Hill home early Friday morning, then showed up outside OpenAI’s headquarters making threats — all before 7AM, which is a remarkably aggressive morning routine. Nobody was hurt, but this is a disturbing escalation of the ambient hostility that’s been building around AI’s most prominent faces. Whatever your feelings about Sam Altman or OpenAI, political violence is not a feature; it’s a catastrophic bug.


OpenClaw gives users yet another reason to be freaked out about security

The viral agentic AI tool OpenClaw apparently had a vulnerability that let attackers silently gain unauthenticated admin access — which is the software equivalent of leaving your front door open with a “help yourself” sign. Ars Technica’s headline advice to “assume compromise” is not the kind of sentence you want associated with an AI tool people are trusting with their workflows. Anthropic has already temporarily banned OpenClaw’s creator from accessing Claude over pricing disputes, so this week has been rough all around for that particular corner of the ecosystem.


Stalking victim sues OpenAI, claims ChatGPT fueled her abuser’s delusions and ignored her warnings

This lawsuit is genuinely alarming: a stalking victim alleges OpenAI ignored three separate warnings that a ChatGPT user was dangerous — including the system’s own mass-casualty flag — while he continued to harass her. The “we’re just a platform” defense is going to get harder and harder to sustain when your product is apparently capable of identifying a threat and then doing nothing about it. The legal and ethical reckoning on AI-enabled harm is no longer theoretical.


Anthropic’s Mythos Will Force a Cybersecurity Reckoning—Just Not the One You Think

Anthropic’s new Mythos model is being called a hacker’s superweapon — and, via Project Glasswing, it’s also the centerpiece of a $100M coalition effort to find and fix vulnerabilities in open-source software. The catch, as The Register points out, is that flooding FOSS developers with AI-discovered vulnerability reports could create as many problems as it solves — imagine your already-overworked maintainer inbox getting nuked by an AI that found 10,000 edge cases before breakfast. The dual-use nature of this technology isn’t a future concern; it’s the present reality.


Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice

Meta’s Muse Spark wants your lab results, your vitals, your raw health data — and in exchange, it’ll give you medical guidance that Wired found to be, charitably, not great. The privacy implications alone should give anyone pause: handing your bloodwork to the company that makes its money understanding you better than you understand yourself is a bold lifestyle choice. We are genuinely in a race between “AI that helps people understand their health” and “AI that harvests health data while confidently giving bad advice,” and right now I’m not sure which side is winning.


Suits won’t quit AI spending, even if they can’t prove it’s working

A new KPMG survey finds 65% of UK business leaders plan to keep AI at the top of their spending priorities regardless of measurable returns — and KPMG helpfully suggests reframing it as a “strategic enabler for enterprise-wide transformation” when the ROI question gets uncomfortable. That’s consultant-speak for “we don’t know if this works but we’re terrified of being the one who didn’t bet on it.” The sunk-cost logic is real, and it’s going to fund a lot of very expensive dashboards.


Claude Code costs up to $200 a month. Goose does the same thing for free.

The AI coding agent wars have a price ceiling problem, and open-source tool Goose is positioning itself as the obvious escape hatch from Anthropic’s $20-to-$200 monthly range for Claude Code. Anthropic simultaneously launched Cowork — a Claude Desktop agent for non-technical users that the team apparently built in a week and a half using Claude Code itself, which is either very impressive or deeply recursive depending on your mood. Meanwhile, OpenAI dropped a new $100/month middle-tier ChatGPT Pro plan, because what this market needed was more pricing tiers.


Gen Z’s love-hate relationship with AI

A Gallup survey of nearly 1,600 people ages 14 to 29 finds Gen Z is increasingly disillusioned with AI — but still using it constantly, which makes them the most honest cohort in the entire AI discourse. They’re not wrong to be skeptical: this is the generation watching AI simultaneously promise to transform their careers, threaten to eliminate those same careers, and churn out relationship-advice podcasts that reinforce gender stereotypes at scale. The hype wearing off doesn’t mean the technology stops mattering; it just means we’re entering the more honest, more interesting phase of the conversation.


Bottom Line

From Molotov cocktails to mass-casualty flags to health data grabs, the AI industry’s chickens are coming home to roost — and they’re not the friendly kind.