Elon Musk confirms xAI used OpenAI’s models to train Grok

The man suing OpenAI for abandoning its nonprofit mission went ahead and testified — under oath — that he’s been using OpenAI’s models to train his own. Musk’s defense is that model distillation is standard industry practice, which, fair enough, it is. But the optics of admitting you’ve been learning from the teacher you’re also suing for corruption are, let’s say, chef’s kiss.

The craziest part of Musk v. Altman happened while the jury was out of the room

Beyond Musk’s own testimony, his finance guy Jared Birchall apparently had a rough go on the stand — and whatever happened while the jury was excused may have handed OpenAI’s lawyers a gift. I don’t know what it is about Elon Musk’s inner circle, but “all-around fixer who creates more problems than he fixes” seems to be the job description.

How Shivon Zilis operated as Elon Musk's OpenAI insider

Wired’s deep dive into the trial evidence reveals that Shivon Zilis — Neuralink exec, mother of four of Musk’s children, and apparently the connective tissue of this entire saga — was acting as a back-channel between Musk and OpenAI’s leadership for years. At this point the Musk v. Altman trial is less a legal proceeding and more a season finale with a very complicated cast of characters.

Sources: Anthropic potential $900B+ valuation round could happen within 2 weeks

Anthropic is reportedly asking investors to submit allocations within 48 hours for a round that would push its valuation past $900 billion. For context, that would make the maker of Claude — a company that has been publicly cautious about AI’s existential risks — worth nearly a trillion dollars. The vibes around “responsible AI” are apparently worth quite a lot of irresponsible money.

This startup’s new mechanistic interpretability tool lets you debug LLMs

Goodfire just released Silico, a tool that lets researchers actually peer inside an AI model's internals during training and adjust them in real time. This is the kind of unglamorous, foundational work the field desperately needs — if we're building systems whose makers carry trillion-dollar valuations, it'd be nice to know what's actually happening inside them. Genuinely interesting development, no snark required.
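The underlying idea — reading a network's internal activations and intervening on them to see what each part does — can be sketched in a few lines. This is a toy NumPy illustration of activation ablation, a basic interpretability technique; it is not Silico's actual API, which Goodfire hasn't documented here:

```python
import numpy as np

# Toy 2-layer network: x -> ReLU(W1 @ x) -> W2 @ h
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(2, 8))

def forward(x, ablate_unit=None):
    h = np.maximum(W1 @ x, 0.0)   # hidden activations
    if ablate_unit is not None:
        h[ablate_unit] = 0.0      # intervene on internal state
    return W2 @ h

x = rng.normal(size=4)
baseline = forward(x)

# Knock out each hidden unit and measure how much the output moves —
# a crude proxy for "what is this part of the model responsible for?"
for unit in range(8):
    shift = np.linalg.norm(forward(x, ablate_unit=unit) - baseline)
    print(f"unit {unit}: output shift {shift:.3f}")
```

Real interpretability tooling does this at the scale of billions of parameters and during training, which is exactly why purpose-built infrastructure matters.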

After dissing Anthropic for limiting Mythos, OpenAI restricts access to Cyber, too

OpenAI publicly called out Anthropic earlier this year for restricting its cybersecurity model Mythos to vetted defenders only — and has now done exactly the same thing with GPT-5.5 Cyber. To be clear, restricting powerful offensive security tools to actual defenders is probably the right call. But the self-awareness gap here is wide enough to drive a data center through.

Survey says no, American workers are not keen on Microsoft’s AI

A survey from the Coalition for Fair Software Licensing finds that U.S. workers are worried Microsoft is using its grip on productivity tools to lock employers into its AI ecosystem. Whether you trust a coalition that sounds like it was formed specifically to publish this survey is a separate question — but the lock-in concern is real and it’s not going away as Copilot gets baked deeper into every Teams meeting you never wanted to be in.

Govern your bots carefully or chaos could ensue

Gartner is projecting that the average Global Fortune 500 company will be running more than 150,000 AI agents by 2028 — up from fewer than 15 today. That’s not a product roadmap, that’s a warning label. The enterprise AI agent land rush is happening faster than most governance frameworks can handle, and “stop the sprawl” is genuinely good advice that approximately nobody will follow until something breaks badly.

Bottom Line

This week, the AI industry managed to simultaneously host a billion-dollar courtroom drama, close in on a trillion-dollar valuation, and remind us that nobody — not even the people building these systems — fully knows what’s inside them.