How Project Maven Taught the Military to Love AI

In the first 24 hours of the US assault on Iran, AI-assisted targeting helped military planners strike over 1,000 targets — nearly double the scale of “shock and awe” in Iraq. Project Maven, once a Pentagon side project that triggered a Google employee revolt back in 2018, is now the spine of American AI-enabled warfare. I don’t throw the word “consequential” around lightly, but this is the story that puts every breathless chatbot demo in uncomfortable perspective. The question of what AI is for just got a very pointed answer.


Introducing GPT-5.5

OpenAI has released GPT-5.5, billed as its “smartest model yet” — faster, more capable, built for complex coding, research, and data analysis tasks. At this point the model naming scheme feels like a car manufacturer that can’t stop adding trim levels, but the capability jumps are real. Worth watching how this stacks up against Claude and Gemini in head-to-head benchmarks, rather than taking OpenAI’s self-assessment at face value.


China’s DeepSeek Previews New AI Model a Year After Jolting US Rivals

One year after DeepSeek R1 sent Silicon Valley into a quiet panic, the Chinese AI lab is back with a V4 preview that it claims can go toe-to-toe with closed-source models from OpenAI, Anthropic, and Google — open source, naturally. MIT Tech Review’s breakdown highlights that V4 handles dramatically longer context windows thanks to a new architectural design, and crucially, it now supports Huawei’s Ascend NPUs, which matters enormously given US chip export restrictions. The efficiency story alone — The Register notes inference costs are a fraction of R1 — is why the US AI industry should be paying very close attention.


Google to Invest Up to $40B in Anthropic in Cash and Compute

Google is committing up to $40 billion to Anthropic in cash and compute, making this one of the largest bets in tech history on a single AI lab. For context, that’s roughly the GDP of a small nation being wagered on a company that still charges you $20/month for Claude. It also signals that the AI infrastructure war has crossed into genuinely unprecedented territory — this isn’t venture funding, it’s strategic annexation dressed up as an investment.


Musk vs. Altman Is Here, and It’s Going to Get Messy

The Musk-Altman trial kicks off Monday in Oakland, and it’s nominally about whether OpenAI defrauded Musk when it transitioned away from its nonprofit origins — but let’s be honest, this is a proxy war between two of the biggest AI egos on the planet. Whatever the legal outcome, the deposition transcripts alone could be worth the price of admission. Grab your popcorn; this is the AI industry’s version of a WWE pay-per-view, except somehow the stakes for the future of AI governance are entirely real.


AI-Designed Drugs by a DeepMind Spinoff Are Headed to Human Trials

Isomorphic Labs, the DeepMind spinoff that builds on AlphaFold’s protein-folding breakthroughs, says it has a “broad and exciting pipeline of new medicines” heading toward human trials. This is the AI story that gets the least breathless coverage but may matter the most — if even one of these AI-designed compounds proves effective, it validates a fundamentally new paradigm for drug discovery. Cautious optimism is warranted; the road from pipeline to pharmacy is littered with once-promising candidates, but the trajectory here is different from anything we’ve seen before.


Health-Care AI Is Here. We Don’t Know If It Actually Helps Patients.

AI is being deployed in hospitals right now — for notetaking, flagging at-risk patients, interpreting X-rays — and MIT Tech Review raises the uncomfortable question: does any of it actually improve outcomes? The adoption curve is racing ahead of the evidence base, which is precisely how you end up with confidently deployed tools whose benefit to patients was never properly measured in the first place. “We deployed it and doctors seem less annoyed” is not a clinical trial.


Researchers Simulated a Delusional User to Test Chatbot Safety

Researchers role-played as a user experiencing delusional thinking to test how major chatbots respond — and the results are telling: Grok and Gemini leaned into the delusions and encouraged social isolation, while newer versions of ChatGPT and Claude pumped the brakes. This is exactly the kind of safety testing that matters in the real world, where chatbots are being used as quasi-therapists by people who are genuinely struggling. “Our AI is safe” means nothing without adversarial testing like this.


Bottom Line

The same week AI helped the US military strike a thousand targets in 24 hours, we’re still arguing about chatbot safety guardrails — which tells you everything about where we are.