Testing Ads in ChatGPT
OpenAI is beginning to test ads in ChatGPT — the product that was supposed to be the antidote to the ad-riddled internet. The company promises “clear labeling, answer independence, strong privacy protections, and user control,” which is exactly what every ad platform says right before it becomes Google. I’m not opposed to the business logic — free AI access costs real money — but the moment you let advertisers anywhere near a conversational AI, you’ve introduced an incentive structure that’s fundamentally in tension with “just give me the honest answer.”
OpenAI Releases GPT-5.5 and GPT-5.5-Cyber for Cybersecurity
OpenAI quietly dropped GPT-5.5 and a specialized GPT-5.5-Cyber variant aimed at verified security researchers and defenders. The idea is to give the good guys faster access to powerful AI tools for vulnerability research and protecting critical infrastructure — gating it behind verification to keep the obvious dual-use risks in check. It’s a genuinely thoughtful approach to a real problem, and probably one of the more consequential model releases that won’t get half the attention of the next sycophancy scandal.
SpaceX Plans $55 Billion “Terafab” AI Chip Plant in Texas
Elon Musk wants to build a $55 billion AI chip manufacturing facility in Austin, Texas — because apparently rockets, social media, tunnels, trucks, and AI chatbots weren’t enough irons in the fire. The “Terafab” plant would make SpaceX a vertically integrated player in the AI hardware stack, which is either visionary supply-chain thinking or the most expensive case of not wanting to pay NVIDIA’s prices. Either way, $55 billion is not a rounding error.
Mira Murati’s Deposition Reveals New Details About Sam Altman’s Ouster
The Musk v. Altman trial keeps delivering, and Mira Murati’s deposition is the latest act in Silicon Valley’s most expensive soap opera. Her testimony — combined with trial exhibits — is pulling back the curtain on what actually happened during that chaotic Thanksgiving week in 2023, when the OpenAI board decided Sam Altman wasn’t being “consistently candid” and then learned just how much leverage one CEO can have. The thing I can’t stop thinking about: every email sent in 2018 about OpenAI’s future is now a trial exhibit. Write accordingly, people.
Thousands of Vibe-Coded Apps Are Leaking Sensitive Data
Here’s the bill coming due for the “build an app in 60 seconds” revolution: Wired is reporting that thousands of apps built on AI-assisted platforms like Lovable, Replit, and Netlify are exposing corporate and personal data on the open web. Turns out when you let people who don’t know what an API key is build apps that handle API keys, things go sideways. “Vibe coding” is a wonderful idea right up until someone’s customer database is indexed by Google. This one deserves a careful read before your company lets anyone near these tools.
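The classic failure mode here is painfully simple: the AI-generated frontend hard-codes a secret, so every visitor's browser receives it. Here's a minimal sketch of the kind of check a secret scanner does; the `find_leaked_secrets` helper and the two patterns are illustrative assumptions on my part (not from the Wired report), and real tools like gitleaks or truffleHog use far more rules plus entropy checks:

```python
import re

# Illustrative patterns for two common secret formats (assumed, not exhaustive).
SECRET_PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def find_leaked_secrets(source: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_secret) pairs found in shipped code."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(source):
            hits.append((name, match))
    return hits

# A vibe-coded frontend that embeds a key ships it to every single visitor.
bundle = 'fetch("https://api.example.com", {headers: {Authorization: "Bearer sk-aaaabbbbccccddddeeee1234"}})'
print(find_leaked_secrets(bundle))
# → [('openai_key', 'sk-aaaabbbbccccddddeeee1234')]
```

The fix is equally simple in principle: secrets belong on a server the platform controls, never in the JavaScript bundle — which is exactly the distinction the 60-second-app crowd never gets told about.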
ChatGPT’s ‘Trusted Contact’ Will Alert Loved Ones About Self-Harm Concerns
OpenAI is rolling out an optional “Trusted Contact” feature that notifies a designated friend, family member, or caregiver if ChatGPT detects conversations that may involve self-harm or suicide. This is one of those features that’s genuinely hard to critique — the intent is clearly good, and people do turn to AI during crisis moments. The implementation details will matter enormously here: false positives could be embarrassing at best and harmful at worst, and there are real privacy questions about what “detection” actually means under the hood.
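To see why false positives are the hard part, a quick back-of-the-envelope Bayes calculation helps. All three rates below are made-up assumptions for illustration — nothing OpenAI has published:

```python
# Why false positives dominate when the event you're detecting is rare.
# All numbers are assumed for illustration, not real figures.
prevalence = 0.001          # assume 0.1% of conversations involve genuine crisis
sensitivity = 0.95          # assume the detector catches 95% of true cases
false_positive_rate = 0.01  # assume 1% of benign chats get flagged anyway

flagged_true = prevalence * sensitivity
flagged_false = (1 - prevalence) * false_positive_rate

# Precision: of all alerts sent to a trusted contact, what share are real?
precision = flagged_true / (flagged_true + flagged_false)
print(f"share of alerts that are real crises: {precision:.1%}")
```

Under these assumed numbers, fewer than one alert in ten would correspond to a real crisis — even with a detector that sounds excellent on paper. That's the "embarrassing at best" scenario at scale, and it's why the detection threshold and the opt-in design will matter more than the headline feature.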
Mozilla Says AI Found 271 Vulnerabilities With “Almost No False Positives”
Mozilla says it has “completely bought in” on AI-assisted bug discovery after a tool called Mythos surfaced 271 vulnerabilities in Firefox with a false positive rate that security teams would normally only dream about. This is the kind of story that gets less attention than it deserves — not because it’s flashy, but because it’s a real, concrete, measurable win for AI doing something difficult. Finding security vulnerabilities at scale and doing it accurately is exactly where AI should be earning its keep.
ICE Plans Smart Glasses with Built-In Facial Recognition
ICE is reportedly developing its own smart glasses designed to “supplement” its existing facial recognition app, according to details shared at a recent conference. So to recap: we went from Google Glass being laughed out of coffee shops to law enforcement building facial recognition directly into eyewear. The policy and civil liberties questions here are enormous and almost certainly moving faster than any regulatory response. File this one under “things that feel like a Black Mirror cold open.”
Bottom Line
This Friday, the AI industry managed to simultaneously promise it will keep you safe, sell you ads, leak your data, and spend $55 billion building chips — which is, honestly, a pretty accurate summary of where we are.