In a Harvard study, AI offered more accurate emergency room diagnoses than two human doctors

A Harvard study found that at least one large language model outperformed human emergency room physicians on diagnostic accuracy — which is either the most reassuring thing you’ve heard all year or the most unsettling, depending on how you feel about hospitals. I’d note that “more accurate than two human doctors” is a bar that varies wildly depending on which two doctors and how many hours into their shift they were. Still, the trend line here is hard to argue with, and the ER is exactly the kind of high-stakes, information-dense environment where AI should theoretically shine.


Five Eyes spook shops warn agentic AI is too wonky for rapid rollout

The intelligence agencies of the US, UK, Australia, New Zealand, and Canada have jointly issued guidance saying agentic AI will “likely misbehave” and amplify existing organizational weaknesses — which is a polite way of saying don’t hand the keys to an AI agent just because your CEO saw a demo at Davos. The recommendation to “prioritize resilience over productivity” is the kind of advice that sounds boring until you’re explaining to a board why an AI agent autonomously emailed your entire client list. Five Eyes doesn’t scare easily, so when they pump the brakes, it’s worth listening.


Just in time for Labour Day, China makes it illegal to fire humans if AI takes their jobs

A Chinese court has ruled that replacing human workers with AI is illegal — a ruling that landed, with exquisite timing, right around Labour Day. This is a genuinely fascinating policy experiment: while the West debates AI displacement in op-eds, China just settled it in court. Whether this is worker protection or economic protectionism dressed up in labor law is a question worth sitting with, but either way, it puts pressure on every government that’s still treating AI job disruption as a problem for the next administration.


OpenAI: Building the compute infrastructure for the Intelligence Age

OpenAI is scaling Stargate with new data center capacity, framing it as building “the compute infrastructure for AGI.” The ambition is not subtle. At this point Stargate is less a data center project and more a bet that whoever controls the compute wins the century — and OpenAI is making very sure the answer isn’t “someone else.”


‘This is fine’ creator says AI startup stole his art

The startup in question is Artisan — the same company that plastered San Francisco with billboards telling businesses to “stop hiring humans” — and they apparently used the iconic “This Is Fine” dog in an ad without permission. The irony of an AI company that wants to automate away human jobs stealing from a human artist is so on-the-nose it almost feels like a bit. It isn’t. KC Green, the creator, is rightfully furious, and Artisan has given the industry yet another PR own-goal to add to the pile.


Musk v. Altman kicks off — and is the AI job apocalypse overhyped?

The Musk-Altman trial is finally underway, and Wired correctly notes it goes well beyond a billionaire grudge match — the outcome could reshape OpenAI’s legal structure and set precedents for how AI companies are governed. The secondary question being raised — whether the AI job apocalypse is overhyped — is one I find genuinely interesting, because the honest answer seems to be “yes and no, and it depends enormously on which jobs and which timeline.” Nuance doesn’t trend, but it’s probably correct.


Inference is giving AI chip startups a second chance to make their mark

As the industry shifts from training massive models to actually running them at scale, the inference layer is opening a real competitive window for chip startups that couldn’t touch Nvidia during the training arms race. This is one of those structural shifts that sounds technical but has massive implications — if inference becomes commoditized or diversified, Nvidia’s stranglehold on AI economics loosens meaningfully. The Register frames it as “now or never” for the challengers, and that feels right.


Salesforce rolls out new Slackbot AI agent as it battles Microsoft and Google in workplace AI

Salesforce has rebuilt Slackbot from a simple notification bot into a full AI agent capable of searching enterprise data, drafting documents, and taking action on behalf of employees. The enterprise AI assistant wars — Salesforce vs. Microsoft Copilot vs. Google Workspace — are accelerating fast, and every one of these companies is making the same pitch: your productivity problems are one agent away from being solved. Whether that’s true or whether it’s a new layer of AI-flavored busywork remains the $64 billion question.


Bottom Line

Whether it’s Five Eyes pumping the brakes on agentic AI, China banning AI-driven layoffs, or Harvard proving AI can out-diagnose ER doctors, today’s news is a reminder that the gap between AI’s potential and humanity’s readiness to handle it is the defining tension of our moment.