Anthropic’s Mythos Breach Was Humiliating
For weeks, Anthropic made a big show of saying Claude Mythos was so frighteningly capable at cybersecurity that it simply couldn’t be released to the public — the digital equivalent of putting a velvet rope around a model and calling it exclusive. Then a “small group of unauthorized users” got access to it anyway. Nothing quite undermines the “this AI is too dangerous” argument like being unable to keep it locked up.
OpenAI Releases GPT-5.5
OpenAI just dropped GPT-5.5 — one month after GPT-5.4 — billing it as their “smartest and most intuitive” model yet, with particular muscle in coding and data analysis. At this release cadence, I’m genuinely curious whether anyone at OpenAI has time to actually use one of these models before the next one ships. That said, if the coding and reasoning improvements are real, developers will notice fast.
Meta Is Laying Off 10 Percent of Its Staff
Roughly 8,000 Meta employees will be out the door in May, plus another 6,000 open roles that are simply being closed. This comes after Meta has been loudly, enthusiastically pouring money into AI infrastructure — which is either a sign that AI investment and headcount reduction are the new normal, or a preview of what “AI doing more of the work” actually looks like in practice. Probably both.
Anthropic Admits It Dumbed Down Claude When Trying to Make It Smarter
Turns out users weren’t imagining it — Claude really did get worse over the past month, the result of overlapping system changes and bugs that degraded output quality while the team was aiming for improvements. Credit to Anthropic for owning it, but this is a recurring pattern across AI labs: the quest for the next capability leap quietly breaks things that were already working. Meanwhile, Claude Opus 4.7 is apparently refusing legitimate requests at an elevated rate due to an overzealous safety classifier. It’s been a rough week for Claude’s reputation.
AI Tools Are Helping Mediocre North Korean Hackers Steal Millions
One North Korean hacking group used AI for everything — vibe-coding malware, building fake company websites, crafting social engineering attacks — and pulled in up to $12 million in three months. The uncomfortable truth here isn’t that AI created super-hackers; it’s that AI is making mediocre hackers dramatically more dangerous. The barrier to entry just fell through the floor.
Sam Altman’s Orb Company Promoted a Bruno Mars Partnership That Doesn’t Exist
World, Sam Altman’s iris-scanning crypto identity startup, apparently promoted a partnership with Bruno Mars that Bruno Mars’s team says never happened — no discussions, no agreement, nothing. “To be clear, we were never approached,” a spokesperson told WIRED. Promoting imaginary celebrity partnerships is a bold strategy for a company trying to get people to hand over their biometric data.
Claude Is Connecting Directly to Your Personal Apps Like Spotify, Uber Eats, and TurboTax
Anthropic is expanding Claude’s app integrations beyond workplace tools into personal life territory: Spotify, Uber, Instacart, AllTrails, TurboTax, Audible. The vision of an AI that can actually do things across your whole digital life is getting more real by the week. But we’re not talking loudly enough about what it means to let one AI model touch your music, food delivery, taxes, and travel simultaneously.
Startups Brag They Spend More Money on AI Than Human Employees
A new cohort of AI-native startups is openly boasting that they funnel hiring budgets directly into AI compute — and framing it as a competitive advantage. It might be, in some cases. But there’s something revealing about the fact that this is now a brag, not an apology. We went from “AI won’t replace jobs” to “we replaced jobs with AI and we’re proud of it” in what feels like about eighteen months.
Bottom Line
This week’s theme is the gap between AI’s ambitions and its execution — leaked safety models, dumbed-down assistants, hallucinated celebrity partnerships, and hackers who are only dangerous now because AI handed them a cheat code.