Google’s rise, RL mania, and a party boat
I asked attendees for their takeaways from this year’s NeurIPS in San Diego. Also: Sam Altman's charm offensive, Google AI glasses tease.
Reinforcement learning (RL) is the next frontier, Google is surging, and the party scene has gotten completely out of hand. Those were the through lines from this year’s NeurIPS in San Diego.
NeurIPS, or the “Conference on Neural Information Processing Systems,” started in 1987 as a purely academic affair. It has since ballooned alongside the hype around AI into a massive industry event where labs come to recruit and investors come to find the next wave of AI startups.
I was regretfully unable to attend NeurIPS this year, but I still wanted to know what people were talking about on the ground in San Diego over the past week. So I asked engineers, researchers, and founders for their takeaways. The respondents below include Andy Konwinski, cofounder of Databricks and founder of the Laude Institute; Thomas Wolf, cofounder of Hugging Face; OpenAI’s Roon; and attendees from Meta, Waymo, Google DeepMind, Amazon, and a handful of other places.
I asked everyone the same three questions: What’s the buzziest topic from the conference? Which labs feel like they’re surging or struggling? Who had the best party?
The consensus was clear. “RL RL RL RL is taking over the world,” Anastasios Angelopoulos, CEO of LMArena, told me. The industry is coalescing around the idea that tuning models for specific use cases, rather than scaling the data used for pre-training, will drive the next wave of AI progress. What’s clear from the lab momentum question is that Google is having a moment. “Google DeepMind is feeling good,” Hugging Face’s Wolf told me.
The party circuit was naturally relentless. Konwinski’s Laude Lounge emerged as one of the week’s hotspots — Jeff Dean, Yoshua Bengio, Ion Stoica, and about a dozen other top researchers came through. Model Ship, an invite-only cruise with 200 researchers, featured “a commitment to the dance floor that is unprecedented at a conference event,” one of the organizers of the cruise, Nathan Lambert, told me. Roon was dry about the whole scene: “you can learn more from twitter than from literally being there ... mostly my on-the-ground feeling was ‘this is too much.’”
Here’s what attendees had to say about NeurIPS this year:
What was the buzziest topic among attendees that you think more people will be talking about in 2026?
Andy Konwinski, founder of the Laude Institute: “I did a lot of interviews over the week, and when I asked people what felt overhyped to them, I heard agentic AI, RL, and world models, though I also heard RL and world models as areas people think are up-and-coming and most interesting to watch.”
Thomas Wolf, cofounder of Hugging Face: “AI x science, interpretability, RL long rollouts”
Roon, member of technical staff, OpenAI: “you can learn more from twitter than from literally being there / the tweets are saying the buzz is about continual learning / That’s possibly true / I can’t guarantee / mostly my on-the-ground feeling was ‘this is too much’”
Maya Bechler-Speicher, research scientist at Meta: “I can’t say with certainty what the buzziest topic was — the conference is massive, and my exposure was naturally limited — but tabular foundation models were undoubtedly gaining significant traction, and I expect this momentum to continue into 2026. After years in which decision-tree–based methods dominated generalization on tabular data, we are finally seeing foundation-model approaches that consistently outperform them. Another area drawing considerable attention is physical AI, which remains full of open research questions and opportunities.”
Anonymous researcher at a big AI lab: “I’m biased here, but AI for the physical world (robotics, engineering, etc, not just AI for science) looks like it’s finally taking off.”
Nathan Lambert, senior researcher at the Allen Institute for AI: “It was accepted that [Ilya Sutskever]’s proclamation on the Dwarkesh Podcast that it’s now ‘The Age of Research’ rather than the age of scaling is a good moniker. No one area of the poster sessions or workshops was obviously labeled as the most important topic (e.g., last year’s NeurIPS was obsessed with reinforcement learning and reasoning after the launch of o1). Some groups reflected solemnly on how this was the first NeurIPS since DeepSeek R1 and a year of open model transformation, but most of the conference didn’t feel like it had an active role to play in it.”
Brian Wilt, head of data at Waymo: “The buzziest topic among my friends was how much research was happening in frontier labs vs. academia and was likely unpublished. From my perspective at Waymo, many of the (applied) problems I need to solve only emerge at scale (e.g., data, performance). However, there’s also a deep sense that we need another fundamental breakthrough besides scaling current architectures (as Ilya/[Andrej] Karpathy/others have alluded to).”
Evgenii Nikishin, member of technical staff at OpenAI: “Continual learning was certainly among the buzziest topics. I don’t know yet how many scientific advances there will be in 2026 — maybe some, maybe little — but I think more people will be talking about it.”
Paige Bailey, developer lead for Google DeepMind: “Definitely sovereign open models, especially deploying them on-prem with fine-tuning + RL. In terms of what people will be talking about in 2026, I think world models and robotics are the big ones.”
Sachin Dharashivkar, CEO of AthenaAgent: “Designing RL environments and training agents was the most discussed topic.”
Ronak Malde, ex-DeepMind engineer and new founder of a stealth RL startup: “Continual learning. To support this next frontier, we’re going to need new architectures, new reward functions, new data sources, and new data scalability models.”
Deniz Birlikci, researcher at Amazon: “Agents are not a model — they are a stack. Therefore, RL for agents should train with the same tools/stacks that will be used in production. More teams are thinking [about] how to create a dense taxonomy and labeling for their data, especially in RL, and I find this very important.”
Richard Suwandi, student ambassador for The Chinese University of Hong Kong: “There were lots of discussions around whether we can build AI systems that are truly creative (not just optimizing within known boundaries, but capable of generating genuinely novel ideas and discoveries on their own). I expect this to become a major research frontier in 2026.”
Anastasios Angelopoulos, CEO of LMArena: “RL RL RL RL is taking over the world”
Which labs feel like they’re surging in momentum, and which ones feel more shaky?
Nathan Lambert (Allen Institute for AI): “The discussion of which labs are leading and falling behind felt fully like an export out of SF gossip in the last few weeks. Gemini and Anthropic are ascendant at the cost of OpenAI. At least OpenAI was mentioned, where I don’t think I heard anyone debating the capabilities of xAI once.”
Evgenii Nikishin (OpenAI): “The Big 3 frontier Labs (GDM, Anthro, OAI) are having a good overall momentum, though each has their unique stronger and weaker sides. As for places that are not doing too great, think about quite a few LLM / imagen startups from 2022-2024 who were offering similar pitches and didn’t have unique value prop. I feel that many of them either already or are in the process of quietly dying.”
Andy Konwinski (Laude Institute): “Surging momentum: Alibaba/Qwen, Moonshot/Kimi, Arcee, Reflection AI, Human&, Prime Intellect all made announcements very recently that were buzzing / Google w/ gemini 3, nano banana, TPUv7”
Anonymous researcher: “Reflection had a massive booth given that they’re a very young startup - that’s definitely new.”
Brian Wilt (Waymo): “I was proud that Alphabet/Google had the most accepted papers this year.”
Paige Bailey (Google DeepMind): “Periodic Labs and Reflection AI feel like they are surging; they both have really interesting mission statements. I also loved seeing Anna and Azalea launch a company (Ricursive Intelligence).”
Ronak Malde (stealth RL startup): “Several neolabs are going to launch in 2026 that shake up research as we know it. DeepMind is still crushing it. Kimi Moonshot and Deepseek are too.”
Richard Suwandi (The Chinese University of Hong Kong): “One lab that clearly feels like it’s surging is Google DeepMind. At NeurIPS, you could really feel them pushing a new research agenda, with things like Nested Learning and Titans/MIRAS pointing toward more continual, long‑term memory rather than just bigger transformers, which was a refreshing shift in the hallway conversations.”
Thomas Wolf (Hugging Face): “Google DeepMind is feeling good.”
What was the best party you attended or had FOMO over?
Nathan Lambert (Allen Institute for AI/Model Ship co-organizer): “The paradigmatic example of a NeurIPS party for the current area of AI was Model Ship, an invite-only cruise with 200 top researchers, investors, and personalities in the AI space. It had bespoke merch, free conversation, and a commitment to the dance floor that is unprecedented at a conference event.”
Andy Konwinski (Laude Institute): “I was a bit bummed that I couldn’t make it out to events organized by Robert Nishihara, Naveen Rao, and Nathan Lambert. I also was sad to miss Rich Sutton and Yejin Choi’s keynotes (though I ended up interviewing Yejin so we got to jam on the topics she spoke about).”
Roon (OpenAI): “openai ones, a16z ones / I liked the a16z one because I got to meet lex [Fridman] that was cool / but even the parties I mostly tried to avoid kept getting partifuls that were like 750 people in a house or whatever / what a nightmare”
Maya Bechler-Speicher (Meta): “The Meta party was one of the most impressive company events I’ve attended. Additionally, G-Research invited a very small group of researchers to a three-star Michelin restaurant, which was not a party per se but was absolutely exceptional.”
Brian Wilt (Waymo): “My favorite event was a small gathering at comma.ai (HQ’d in San Diego), who develop an open-source driver assistant. I use it on my personal car, it’s perfect for when I’m not riding in Waymo in Phoenix. @yassineyousfi_ put together an online capture-the-flag to get in. @realGeorgeHotz took us on a tour of their data center and manufacturing. I did die a little when I typed their wifi password, ‘lidarisdoomed’”
Evgenii Nikishin (OpenAI): “The OpenAI party 😎”
Paige Bailey (Google DeepMind): “I actually had to head back late Friday/early Saturday, so I missed out on the end-of-conference workshops. I had major FOMO over the ML for Systems workshop, though, as well as the ‘Claude and Gemini Play Pokemon’ workshop -- they both looked awesome!”
Ronak Malde (stealth RL startup): “Radical VC bringing Jeff Dean and Geoffrey Hinton into one room was the highlight of the week.”
Anastasios Angelopoulos (LMArena): “Laude Lounge”
Thomas Wolf (Hugging Face): “The Hugging Face party where 2.5k+ people registered / I really enjoyed the Prime-intellect one”
Dylan Patel, founder of SemiAnalysis: “Mine haha”
Yes, some people thought keynotes were parties. I guess academia lives on at NeurIPS after all.
Feed check
Sam Altman is on a charm offensive in NYC ahead of GPT-5.2. It wasn’t just that Jimmy Fallon appearance. Per The Wall Street Journal: “At a lunch meeting with journalists in New York Monday, Altman said that while industry observers are focused on an OpenAI versus Google rivalry, he thinks the real battle will be between OpenAI and Apple.” Also from the report: “OpenAI is set to release a new model, called 5.2, this week that executives hope will give it new momentum, particularly among coding and business customers. They overruled some employees who asked to push back the model’s release so the company could have more time to make it better, according to people familiar with the matter. The company also plans to release another model in January with better images, improved speed and a better personality, and to end the code red after that...”
Google shared more details about its AI glasses. Per Bloomberg: “Google said it’s working to create two different categories of artificial intelligence-powered smart glasses to compete next year with existing models from Meta Platforms Inc.: one with screens, and another that’s audio focused. The first AI glasses that Google is collaborating on will arrive sometime in 2026, it said in a blog post Monday. Samsung Electronics Co., Warby Parker and Gentle Monster are among its early hardware partners, but the companies have yet to show any final designs.”
OpenAI, which famously runs on Slack, hired Slack’s former CEO, Denise Dresser, as its chief revenue officer. Given her background and the code red that has frozen work on ChatGPT ads, it makes sense for the press release to focus on her enterprise SaaS background.
Meta’s next model may not be open source and may arrive in Q1 2026: “Meta is pursuing a new frontier AI model, codenamed Avocado, that could be proprietary instead of open source, CNBC has learned.”
A cool $475 million seed round for Naveen Rao: His new startup, Unconventional AI, is ambiguously working on new hardware to make AI more efficient as the industry runs further into a massive energy bottleneck. Rao is an AI/chip rockstar with multiple exited companies under his belt, plus he’s one of the clearest thinkers on the space I’ve come across. A nice flex: “Rao said he invested $10 million of his own funds at the same terms as other investors.”
(Mostly) everyone comes together to make AI agents work: It’s nice to see many of the top players in AI put down their boxing gloves and bet on Model Context Protocol (MCP) to underpin AI agents and make them actually work at scale. Anthropic is doing the right thing and donating MCP to the Agentic AI Foundation (which sits under the Linux Foundation) to ensure that “agentic AI evolves transparently, collaboratively, and in the public interest through strategic investment, community building, and shared development of open standards.” (For some reason, Meta is noticeably absent from the list of contributors.)
Substack is testing ads: “During the pilot, Substack is simply facilitating payments and is not taking a cut,” the company told Feed Me. “We will shape the long-term structure after we learn from this phase.” (Disclosure: I’ve discussed being an early tester of this program with the Substack team and am very much in favor of independent writers making money via advertising and not just subscriptions.)
More from NeurIPS
The Information’s field report: “A small but growing number of artificial intelligence developers at OpenAI, Google and other companies say they’re skeptical that today’s technical approaches to AI will achieve major breakthroughs in biology, medicine and other fields while also managing to avoid making silly mistakes.”
NBC News reported on the focus on interpretability at this year’s NeurIPS.
What this year’s NeurIPS reveals about which companies run the AI industry.
A graphic showing the number of accepted papers by affiliation since 2015.
Leo Gao’s survey: “Just 69.5% (n=115) people at this neurips knew what AGI stands for.”
Video sent to me of people partying on the Model Ship:

And lastly, below is Nano Banana Pro’s first attempt at making a lead photo for this newsletter: