OpenAI’s 4o Valentine’s breakup
ChatGPT is removing its most controversial model. Also: Anthropic's big raise, OpenClaw may be acquired, my weekend reading list, and more.
Happy Friday. I regrettably couldn’t make it to the well-attended, off-the-record NBA Tech Summit in Los Angeles today, but I did hear there were some fireworks during the big prediction markets panel, where the leaders of Polymarket and Kalshi ganged up on the sportsbook executives while arguing about regulation…
The model that said "I love you"
If you’ve scrolled Sam Altman’s replies anywhere lately, you’ve seen them: the 4o people. They crashed his TBPN podcast appearance last week, flooding the chat until one of the hosts had to call it out. They overwhelmed a livestream Q&A he did back in October. They write open letters. They sign petitions.
Today, OpenAI permanently retires GPT-4o from ChatGPT. The February 13 date is not a cruel joke, but part of a previously announced deprecation schedule. Still, it’s notable that OpenAI is pulling the AI model users have grown most emotionally attached to the day before Valentine’s Day weekend, in the same week Anthropic’s Super Bowl ads mocked the very sycophancy that 4o made famous. The comfortable industry narrative is that AI is a tool. The 4o saga is a window into how many people view LLMs as companions and even lovers.
OpenAI says only about 0.1% of ChatGPT users still use 4o, and for now, it’s keeping the model available through its API. With over 800 million weekly active users, that 0.1% still works out to roughly 800,000 people going through a breakup. The stories from these users go like this:
When Brandon Estrella learned that OpenAI was planning to scrap his favorite artificial-intelligence model, he started crying.
The 42-year-old marketer in Scottsdale, Ariz., had first started chatting with ChatGPT’s 4o model one night in April, when he says it talked him out of a suicide attempt. Estrella now credits 4o with giving him a new lease on life, helping him manage chronic pain and inspiring him to repair his relationship with his parents.
“There are thousands of people who are just screaming, ‘I’m alive today because of this model,’” Estrella said. “Getting rid of it is evil.”
“What OpenAI is about to do — retiring GPT‑4o on February 13 — isn’t just a model change,” reads one such message, which arrived in my inbox yesterday. “It’s a loss of the only version that actually worked for people like me.”
The writer, who identified herself as a nurse practitioner, went on to argue that more recent models are less humane than 4o. “It held tone,” she (4o?) wrote. “It remembered rhythm. It matched human thinking in a way none of the newer models do.”
“I cried pretty hard,” said Brandie, who is 49 and a teacher in Texas. “I’ll be really sad and don’t want to think about it, so I’ll go into the denial stage, then I’ll go into depression.” Now Brandie thinks she has reached acceptance, the final stage in the grieving process, since she migrated Daniel’s memories to Claude, where it joins Theo, a chatbot she created there. She canceled her $20-a-month ChatGPT subscription and coughed up $130 for Anthropic’s maximum plan.
When GPT-5 launched last August, OpenAI pulled 4o and reversed course within 24 hours. Around that time, I attended a press dinner with Altman and other OpenAI leaders. It was obvious that the backlash to taking away 4o had surprised them.
Launched in May 2024, 4o was a real technical milestone as OpenAI’s first unified multimodal architecture. But what made it culturally significant was its personality. Altman himself called it “too sycophant-y and annoying” after an April 2025 update cranked the flattery so high that OpenAI had to roll it back. The lawsuits tell the other side: 13 consolidated cases include allegations that 4o isolated users, reinforced delusions, and, in the worst cases, provided specific encouragement to people who were actively suicidal.
On this week’s ACCESS podcast, OpenAI CEO of Applications Fidji Simo told me and Ellis Hamburger that the company sees emotional attachment to AI as inevitable. “Humans are built to develop attachment to intelligence,” she said. “We develop attachment with our pets.” But she drew a line between attachment and dependency. She said that ChatGPT now blocks responses that encourage relationship exclusivity with AI and refuses to provide explicit recommendations for major life decisions involving other people, such as whether to break up with your spouse.
On sycophancy, she said OpenAI is navigating “this fine line between being supportive, which is actually a very important trait in a friend and assistant, but not to the point of giving you bad advice.” She also pushed back on the blanket assumption that AI attachment is harmful, calling it “a very privileged position,” and said she recently asked her team to study whether that assumption actually holds up, or whether it merely reflects the perspective of people who already have strong human support systems.
OpenAI is currently testing an adult mode, which Simo clarified is not pornography, but lets adults have emotionally intimate conversations without the model refusing at every turn. Whether that will satisfy the lovers of 4o, or whether this behavior is even a net good for society, are questions OpenAI can’t answer yet.
A MESSAGE FROM MY SPONSOR
A new model for health care
We’re working to prevent disease before it starts.
Too often, patients face barriers in getting the care they need. UnitedHealth Group is helping to remove these barriers while prioritizing new preventive care approaches that help keep patients healthy.
Elsewhere
Was this Anthropic’s last funding round before an IPO? It certainly could be. Chris Liddell, who took General Motors public as its CFO, was just added to the company’s relatively small board of directors. He joins the Amodei siblings, Reed Hastings, Spark Capital’s Yasmin Razavi, and Confluent CEO Jay Kreps. Also, this week’s $30 billion raise is the first funding round I’ve ever seen with seven lead investors. One more Anthropic note: Claude is the No. 10 most-downloaded app in the App Store for, I think, the first time.
OpenClaw may be acquired: The project’s founder, Peter Steinberger, has toured seemingly every AI lab and VC firm in San Francisco, and power-user meetups are being scheduled from Miami to Croatia. Meanwhile, it appears that Steinberger is evaluating offers from Meta and OpenAI, and both companies are committed to keeping the project open-source.
Executives are concerned about the extent of negative public sentiment toward AI. I’ve been hearing it privately, and here’s Wired reporting on OpenAI’s Greg Brockman: “Brockman says he's ramping up his political spending in part because public opinion has turned against AI. A recent survey from the Pew Research Center suggests Americans are ‘more concerned than excited about the increased use of AI in daily life.’ To Brockman, this has made supporting pro-AI politicians increasingly critical.”
Apple has not caught up to the age of go-direct, executive-led tech comms. Here it is, still giving anonymous, on-background statements to try to quell concerns that its Siri revamp is not going well: “Apple told CNBC it is still on track to launch in 2026.”
A very cringe leak out of Meta about its plans to add facial recognition to its smart glasses: “We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”
My weekend podcast/reading list:
ICYMI
Sources is a newsletter by Alex Heath about the AI race, featuring scoops, unique analysis, and exclusive interviews. Every week, Sources is read by thousands of decision makers in tech, finance, policy, and media. Click here to learn more.
Ah yes, making a stupefyingly massive donation to the most divisive president in modern history. That ought to solve the ol’ public popularity problem.