OpenAI goes to court as AGI loses its meaning
Also: An OpenAI-Ive phone takes shape, China kills the Meta-Manus deal, and more.
Today’s phrase of the day is “very interesting.” On that note: I’ll be at Amazon’s AWS press event on Tuesday. Let me know if you have questions for AWS CEO Matt Garman. Below, I’ve got notes on the latest AI headlines and some interesting links.
Feed check
The Elon Musk versus OpenAI lawsuit kicks off. It's unusual for high-profile witnesses like Sam Altman and Greg Brockman to attend the first day of jury selection, which shows the stakes of the case. Meanwhile, Musk is boosting the Ronan Farrow New Yorker story about Altman on X and tweeting his Trumpy nicknames for Altman and Brockman. The jury doesn't matter as much in this case, since it will deliver only an advisory ruling. But the selection process today is a nice look into how negatively normal people perceive these characters. Per Bloomberg, the judge said this in the courtroom on Monday: "'The reality is that people don't like him,' Gonzalez Rogers said of Musk, noting also that many don't like Altman either."
On that note, this was music heard blasting through speakers outside the courthouse: “We’ll train on your life till you’re obsolete / then ask why you’re broke / and when you can’t compete, we’ll steal your rhythm, your voice, your name / call it innovation / play the blame game”
AGI doesn’t matter. You knew that if you’ve been following the way that OpenAI, in particular, has been talking about it over the last year, but Monday’s partnership update with Microsoft solidifies that declaring AGI doesn’t mean anything. According to OpenAI and Microsoft, it once meant achieving “a highly autonomous system that outperforms humans at most economically valuable work.” Last year, when OpenAI restructured to become more of a normal company (though it is still very much not one), it said it would appoint an “independent expert panel” to declare AGI before cutting off Microsoft, which seemed like a recipe for a nasty lawsuit. Now, Microsoft will continue to get its cut of OpenAI’s business, even if AGI is declared before 2030. For OpenAI, getting more compute access via other hyperscalers like Amazon (which Microsoft did not like under the terms of the previous contract) was clearly worth more than whatever AGI meant.
OpenAI is maybe building an AI phone. Ming-Chi Kuo knows his shit, especially when it comes to the supply chain and Apple, so I’m not surprised to see him publish confident reporting about OpenAI working on some kind of AI-native mobile phone. My reporting, along with the timeline he provides in his research note, suggests that this would not be the first device OpenAI unveils with Jony Ive, but possibly the second or third. It’s probably not a coincidence that around the time Kuo published, Altman tweeted that it “feels like a good time to seriously rethink how operating systems and user interfaces are designed.” I can’t say I disagree. I don’t think mobile apps are going away, but over time they may start to feel the way websites do now relative to apps (still valuable, just less used in aggregate) as more interactions shift to agents.
Meta has to unwind Manus. Beijing cracking down on this (let’s be real, not material) deal is obviously bad news for Meta and Manus’s investors, but it’s even worse news for the rest of the Chinese AI ecosystem. If you want to build an AI startup inside China, you now know that you probably won’t be allowed to sell it externally. Meta integrated Manus into its infrastructure very quickly, so it will be interesting to see how fast it can unwind that work.
Some Google employees don’t want its AI used in war. If you work inside a company like Google and feel this way, you should absolutely speak up if you want to, like roughly 600 employees did today. That said, Google has a longer history of employee activism on this topic than any other tech company, and it is seemingly doing more work with governments than ever before.
Sergey Brin speaks out. But not about using AI in the military, of course. He gave a rare comment to The New York Times for its deep dive into how he’s becoming increasingly right-leaning and vocal against the California billionaire tax: “I fled socialism with my family in 1979 and know the devastating, oppressive society it created in the Soviet Union. I don’t want California to end up in the same place.”
Uh-oh: “OpenAI missed an internal goal of reaching one billion weekly active users for ChatGPT by the end of last year, according to people familiar with the goals. The company still hasn’t announced that milestone, unnerving some investors. It also missed its yearly revenue target for ChatGPT after Google’s Gemini saw massive growth late last year and ate into OpenAI’s market share, the people said. The company has also struggled with defection rates among subscribers, according to people familiar with those figures.”
OK: “Apple’s longtime former AI chief John Giannandrea is joining a science AI startup, CuspAI, to help build out its U.S. operations, Upstarts has learned… With Cusp, JG’s role will focus on attracting top-tier talent in what he has said to others, per the sources, is a race for talent in the ‘atoms’ side of AI: AI for generating physical outputs like materials.”
The state of AI writing on Substack: Pangram is a Chrome extension that claims to identify AI-generated writing. It also has an API, which Taylor Lorenz used to analyze how much writing on Substack is made with or entirely by AI. Unsurprisingly, bestselling publications in the tech vertical had the most AI-generated writing. This was more interesting: “23% of top content in the Philosophy category and 22% of top content in the Health category is partially or fully AI generated. After that, the percentages drop precipitously.” (In case you missed it, I’ve been very open about how I use AI to write this newsletter, though this issue was almost entirely dictated by me via Wispr Flow.)
From The New York Times profile of Dwarkesh Patel: “Mr. Patel’s assistant is the brother of Anthropic chief executive Dario Amodei’s chief of staff, who is in turn the fiancée of Leopold Aschenbrenner, Mr. Patel’s friend and former podcast guest from whose multibillion-dollar A.I.-focused investment fund, Situational Awareness, Mr. Patel sublets office space. Sholto Douglas, a researcher at Anthropic who is one of Mr. Patel’s roommates and a repeat guest on his podcast, recently competed with Mr. Patel in a ‘chestmaxxing’ showdown on a YouTube show called ‘Swole as a Service’ (where standing shoulder presses meet A.I. chitchat). ‘People don’t think of him as a commentator on A.I.,’ says Sasha de Marigny, chief communications officer at Anthropic. ‘He’s very much in the community, in the inner ring.’”
In order, San Francisco’s five largest office tenants are: Google, OpenAI, Anthropic, Salesforce, and Meta.
Some slides from OpenAI research leader Noam Brown speaking at ICLR. One line that stood out: “Inference capacity is strategically undervalued.”
Last week on ACCESS
What happens when Silicon Valley becomes the subject of its own satire? Ellis and I sit down with The Audacity showrunner Jonathan Glatzer to talk about building a TV series that takes on Big Tech, the battle for your private data, and the power dynamics shaping Silicon Valley. We go deep into the role of satire in critiquing power, AI's role in the writing process, and more.
Listen or watch wherever you get podcasts.
ICYMI
Sources is a newsletter by Alex Heath about the AI race, featuring scoops, unique analysis, and exclusive interviews. Every week, Sources is read by thousands of decision makers in tech, finance, policy, and media. Click here to learn more.