The Sources reader survey
Help me make a better newsletter. Also: notes on this week's AI headlines and some good links.
I launched Sources two months ago and have been pedal-to-the-metal ever since. This week, I’m giving the newsletter a short breather while I look ahead and map out a big 2026. I’ve got a stack of ideas for expanding Sources and turning it into the must-read newsletter about the AI race, and now that the chaos of launch is behind me, I’m excited to start executing.
This Thanksgiving, I’m grateful to the thousands of you who’ve already subscribed, and especially the hundreds who’ve bought subscriptions. As you settle into the long weekend, it would mean a lot if you’d take a minute to fill out my reader survey. Responses are anonymous and not tied to your email.
Your feedback helps me make a better product, and it also shapes my conversations with brands that want to support this work. Subscriptions will always be the foundation of Sources, but smart, respectful advertising is becoming an important pillar as I scale into 2026 and beyond. I’m excited to partner with companies that respect both my editorial independence and your time.
Thanks again for reading and being part of this early journey. Even without a full newsletter this week, I couldn’t resist writing about a few headlines. You’ll find those below, along with some links I hope you’ll enjoy.
News and notes
Is the world’s most valuable company OK? I tweeted this question after seeing Nvidia’s bizarre public response to the story that Google is pitching Meta and others about using its TPUs for training. Nvidia’s response is a classic example of a company trying to look strong, only to achieve the opposite. If I had to guess, the call to directly comment on rumors about a competitor came from the top. Regardless of what happens next, Google going from being on its back foot in AI to sending Nvidia into a tizzy is one of the most remarkable narrative shifts in tech I’ve ever seen. (It’s also something I called before anyone else, but who’s keeping track?)
Where in the world is Larry Page? Speaking of Google, am I the only one who often wonders where in the world Larry Page is right now? Seeing him jump to become the second-richest person in the world this week reminded me that the world has not seen or heard from Google’s co-founder, original CEO, largest shareholder, and current board director in many years. Given how prescient Larry was about the importance of AI, it would be amazing to hear him talk about the current moment Google is in.
What Ilya sees: Just in time to make “we are back to the age of research” a thing at NeurIPS next week, Ilya Sutskever surfaced for a rare interview with Dwarkesh Patel. Reading between the lines, it’s clear that Ilya is focused on making AI that can learn and generalize like a human. The most telling part of the conversation was when he was asked if there’s a way to train AI the way a human teen quickly learns to drive a car. “That is a great question to ask, and it’s a question I have a lot of opinions about,” he responded. “But unfortunately, we live in a world where not all machine learning ideas are discussed freely, and this is one of them.” Other notes: He confirmed that Safe Superintelligence has raised $3 billion to date, he’s focused on spending compute on research while most labs are spending more on inference, he wants to make “AI that cares about sentient life specifically” and not just human life, and his co-founder Daniel Gross was able to “enjoy a lot of near-term liquidity” by joining Meta. If I had to guess, another reason Ilya did this interview was to prime everyone for SSI shipping a product much sooner than expected. He said the timeline “might” be longer than anticipated for building superintelligence and that he thinks there’s “a lot of value in the best, most powerful AI being out in the world.”
Anthropic enters uncharted territory. This part of the company’s blog post for Opus 4.5 really struck me: “We give prospective performance engineering candidates a notoriously difficult take-home exam. We also test new models on this exam as an internal benchmark. Within our prescribed 2-hour time limit, Claude Opus 4.5 scored higher than any human candidate ever.” It goes on to say that the result “raises questions about how AI will change engineering as a profession.” Also, an employee posted that Opus 4.5 is a lot funnier and “probably a ~85%ile poster” in Slack.
ChatGPT’s Code Orange: Buried in this NYT piece is the news that ChatGPT chief Nick Turley internally declared a “Code Orange” in October, told his team that OpenAI is facing “the greatest competitive pressure we’ve ever seen,” and laid out a goal to “increase daily active users by 5 percent by the end of the year.” This is a good reminder of how companies often disclose public metrics (in this case, weekly users) that differ from their actual internal goals.
ICYMI: Harvey’s CEO on ACCESS
We’re off this week on the pod, which makes it a perfect time to catch up on last week’s episode if you haven’t already.
Like and subscribe on YouTube, Spotify, and Apple Podcasts.
More posts
“Nvidia’s ‘I’m Not Enron’ memo has people asking a lot of questions already answered by that memo”
“Kicking Robots: Humanoids and the tech-industry hype machine”
And lastly, a post from a Google researcher who works on TPUs:
Thanks again for subscribing to Sources, and have a fantastic holiday weekend.