Daniel Moreno-Gama is facing federal charges for attacking Sam Altman’s home and OpenAI’s HQ
This one is not a metaphor: a man allegedly traveled from Texas to California specifically to kill Sam Altman, threw a Molotov cocktail at his home, and attempted to break into OpenAI’s headquarters. Federal charges are now filed. Whatever your views on OpenAI’s direction, politically motivated violence against tech executives is a genuinely alarming escalation — and a sign that the culture war around AI has moved well past Twitter arguments.
Mark Zuckerberg is reportedly building an AI clone to replace him in meetings
Meta is training an AI avatar on Zuckerberg’s image, voice, mannerisms, and public statements so it can interact with employees on his behalf. I’ll just note that if an AI can impersonate you convincingly, you may already be halfway replaced. The deeper question: if the clone gives bad feedback, do you fire the clone or the man it’s modeled on?
Stanford report highlights growing disconnect between AI insiders and everyone else
Stanford’s 2026 AI Index is out, and the headline finding is a yawning gap between the people building AI and everyone else — rising anxiety over jobs, healthcare, and the economy, while adoption has hit 53% of the population in just three years (faster than the PC or the internet). The experts are bullish; the public is nervous; and nobody seems to be having a productive conversation about it. Shocker.
Anthropic launches Cowork, a Claude Desktop agent that works in your files — no coding required
Anthropic built Cowork, a new file agent aimed at non-technical users, in roughly ten days, largely using Claude Code itself. That’s either a great advertisement for AI-assisted development or a slightly terrifying glimpse of how fast the pace of shipping has become. Bringing agentic capabilities beyond the developer crowd is the right strategic move, but given that Claude apparently had a major outage Monday and users are complaining that the model is degrading in quality, maybe slow down and QA the stuff you already have out the door?
Claude is getting worse, according to Claude
The Register reports that Anthropic’s Claude has been stumbling — quality complaints are piling up, there was a major outage Monday, and apparently even the bot itself will acknowledge the degradation if you ask it nicely. Pair that with the Claude Code cache changes burning through usage quotas faster, and the $200/month price tag, and Anthropic is having the kind of week that gets discussed in the next round of VC memos.
OpenAI has bought AI personal finance startup Hiro
OpenAI is quietly acquiring its way into your wallet — the Hiro acquisition signals that financial planning is a real target vertical for ChatGPT. Between this and the leaked internal memo about building moats and locking in enterprise users, OpenAI’s “helpful AI assistant” is increasingly looking like a very deliberate platform play. The question is whether you want the company currently managing drama about Molotov cocktails also managing your retirement portfolio.
Read OpenAI’s latest internal memo about beating the competition — including Anthropic
OpenAI’s chief revenue officer sent a four-page memo to employees stressing the need to build moats and dominate enterprise before the competition catches up. The word “moat” in a Silicon Valley memo is doing a lot of work these days — it usually means “we’re scared the product is a commodity and we need to lock people in before they notice.” Not wrong, but not exactly the “we’re here to benefit humanity” energy they usually project.
You Can Soon Buy a $4,370 Humanoid Robot on AliExpress
Unitree’s R1 humanoid robot is hitting international markets via AliExpress at $4,370, with acrobatic capabilities and, as Wired charitably puts it, an open question about what you’d actually do with it. I respect the honesty in that framing. For now it’s a very expensive conversation piece that can do backflips, which puts it roughly on par with a golden retriever but considerably less useful.
Bottom Line
The gap between AI insiders who can’t stop shipping and a public that can’t stop worrying is the defining tension of 2026 — and nothing today suggests anyone’s closing it.