Three reasons why DeepSeek’s new model matters
DeepSeek dropped a preview of V4 on Thursday, and MIT Tech Review is right to take it seriously — longer context windows, more efficient architecture, and yes, still open source. Every time Western labs convince themselves they’ve lapped the Chinese competition, DeepSeek quietly reminds them that the race is very much still on. The open-source piece alone is enough to give every AI executive a mild Monday morning headache.
GPT-5.5 System Card
OpenAI published a system card for GPT-5.5 this weekend with roughly the fanfare of a library book return — no splashy announcement, just a document quietly appearing on the site. Whether this is a genuine capability leap or a “we needed something between GPT-5 and GPT-6 and we’re not ready to talk about it yet” move remains to be seen. OpenAI’s versioning strategy at this point feels less like a product roadmap and more like jazz improvisation.
Anthropic’s magic code-sniffer: More Swiss cheese than cheddar, for now
The Register is having a field day with Mythos, Anthropic’s AI vulnerability-hunting tool, which Discord users apparently gained unauthorized access to — and which, upon closer examination, finds pretty much what human security researchers already taught it to look for. Naming your AI security product after a word that can also mean “beliefs incompatible with reality” is the kind of branding own-goal that will haunt a PR team for years. Points for ambition, though.
We’re launching two specialized TPUs for the agentic era
Google announced its eighth-generation TPUs — two chips specifically designed for the increasingly agent-heavy AI workloads coming down the pipe. This is the quiet but consequential story in AI right now: the hardware layer is being rebuilt from scratch to handle a world where models aren’t just answering questions but running multi-step tasks continuously. Nvidia should be watching its back, but Google still has to prove these chips matter outside its own data centers.
Apple’s Next CEO Needs to Launch a Killer AI Product
Tim Cook leaves behind an Apple that is, by any financial measure, one of the most successful companies in human history — yet one that has somehow fumbled nearly every AI announcement for three years running. John Ternus inherits a company that still has no answer to what “Apple Intelligence” actually is in practice, and the honeymoon period for a new CEO will be shorter than usual given how many people are waiting for Cupertino to either catch up or admit it’s lost. The hardware guy is now running the software race.
Claude Code costs up to $200 a month. Goose does the same thing for free.
The AI coding tool arms race just hit its first real pricing wall, and developers are pushing back. Anthropic’s Claude Code is impressive, but $200/month is a number that makes individual developers do math, and Block’s open-source Goose is ready to catch the fallout. This is exactly how enterprise AI pricing gets disrupted — not by a flashier competitor, but by a free one that’s merely “good enough.”
OpenAI CEO apologizes to Tumbler Ridge community
Sam Altman issued a formal apology to the community of Tumbler Ridge, Canada, after it emerged that OpenAI had information relevant to a mass shooting suspect and failed to alert law enforcement. There are very few situations where “we’re sorry” is sufficient, and this is one of them — but it also raises hard questions about what obligations AI companies have when their systems surface credible threat intelligence. This one deserves more scrutiny than a letter.
Watch out, UK taxpayers: 28,000 HMRC staffers just got an AI copilot
Britain’s tax authority just rolled Microsoft Copilot out to 28,000 employees, justified by a trial that found it saved each user approximately 26 minutes per day. That’s either a modest productivity win or the most expensive 26 minutes in UK government history, depending on how the licensing bill lands. The fact that it’s now cleared for “Official Sensitive” work is either a sign of genuine confidence in the technology or the kind of bureaucratic leap of faith that makes auditors nervous.
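Whether those 26 minutes are a bargain or a boondoggle comes down to simple arithmetic. A back-of-envelope sketch, using the figures in the story (28,000 staff, ~26 minutes saved per user per day) plus two values the story does not give — working days per year and the per-seat licence price, both assumptions for illustration only:

```python
# Figures from the story
STAFF = 28_000
MINUTES_SAVED_PER_DAY = 26

# Assumptions NOT in the story -- picked only to make the arithmetic concrete
WORKING_DAYS_PER_YEAR = 250          # typical office working year
LICENSE_GBP_PER_SEAT_PER_YEAR = 300  # hypothetical per-seat price

hours_saved_per_year = STAFF * MINUTES_SAVED_PER_DAY * WORKING_DAYS_PER_YEAR / 60
license_bill = STAFF * LICENSE_GBP_PER_SEAT_PER_YEAR
cost_per_hour_saved = license_bill / hours_saved_per_year

print(f"{hours_saved_per_year:,.0f} hours saved per year")   # ~3.0 million
print(f"£{license_bill:,} annual licence bill")
print(f"£{cost_per_hour_saved:.2f} per hour saved")
```

At these assumed numbers the rollout buys an hour of staff time for under £3 — the verdict on “most expensive 26 minutes in UK government history” hinges almost entirely on the per-seat price, which is exactly the variable nobody has published.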
Bottom Line
The gap between AI that gets announced and AI that actually works as advertised is shrinking — but it hasn’t closed yet, and today’s news is mostly dispatches from that messy middle ground.