I saw Meta's new AI lab
That and more from my interview with Mark Zuckerberg on the ACCESS podcast.
Last week, I visited Meta HQ to try the new Meta Ray-Ban Display glasses ahead of my interview with Mark Zuckerberg, which aired today in the first episode of the ACCESS podcast.
While I was at Meta, I got to physically see something that no other outsider has seen yet: Meta’s new AI research lab.
I can confirm that the reports about this new team, which is internally referred to as “TBD,” are accurate. The small unit sits next to the desks of Zuckerberg and other top executives in a special area that requires privileged badge access. As I walked through the space, I saw Alexandr Wang, Nat Friedman, Daniel Gross, and a couple of dozen researchers buzzing between desks. Luckily for Meta, I couldn’t decipher what may have been plans for the next AI model scrawled on some whiteboards.
In today’s episode of ACCESS, the second half of my conversation with Zuckerberg focuses on Meta’s new AI strategy, why he reset it, and what he actually meant by his “Personal Superintelligence” manifesto.
Some things he said during our interview that stood out to me:
Why he started the new AI lab:
I didn't feel like we were on the trajectory we needed to be on to be at the frontier and push the field forward. I think every company at some point goes through periods where you know you're not on the trajectory that you want to be on. So I decided that we should take a step back and build a new lab.
How the new team is structured:
We have this real focus on talent density. This is like a group science project. You want to have the smallest group of people who can fit the whole thing in their heads at once, and there aren’t many people who can do that.
So each seat on that boat is incredibly precious and in high demand. You also don't want a lot of layers of hierarchy because when someone gets into management, their technical skills start decaying pretty quickly.
The thing that I'm focused on is getting the very best people in the world to join the team. I've spent a lot of time meeting all of the top researchers and folks around the field and getting a sense for who I think would be good here and who might be at a point in their career where we can give them a better opportunity.
Another thing that I'm very focused on is making sure that we have significantly higher compute per researcher than any other lab. I think we are just way higher on compute per researcher than any other lab today.
Why his AI researchers don’t have deadlines anymore:
All these researchers are very competitive. They all want to be at the leading edge. They know the industry is moving quickly. They're going to put a ton of pressure on themselves. Me telling them that something should get done in 9 months or 6 months or whatever isn't going to help them do their job. It's only going to put another artificial constraint on it that makes them suboptimize the problem. And I want them to go for the full thing.
On the possibility we’re in an AI bubble:
I think it's quite possible. If you look at most other major infrastructure buildups in history, whether it's railroads or fiber for the internet in the dot-com bubble, these things were all chasing something that ended up being fundamentally very valuable. In most cases it ended up being even more valuable than the people who were pushing the bubble thought it was going to be. But in at least all of these past cases, the infrastructure gets built out, people take on too much debt, and then you hit some blip, whether it's some macroeconomic thing or maybe you just have a couple of years where the demand for the product doesn't quite materialize, and then a lot of the companies end up going out of business. Then the assets get distressed and it's a great opportunity to go buy more.
There are compelling arguments for why AI could be an outlier, and if the models keep growing in capability year over year and demand keeps growing, then maybe there is no collapse. But I do think that there's definitely a possibility, at least empirically, based on past large infrastructure buildouts and how they led to bubbles, that something like that would happen here.
What he meant by this line: “Over the last few months we have begun to see glimpses of our AI systems improving themselves.”
One of the early examples was a team working on Facebook that took a version of Llama 4 and made this autonomous agent that could start to improve parts of the Facebook algorithm. It checked in a number of changes that are the type of thing that a mid-level engineer would get promoted for. You basically have built an AI that is building AI that makes the product better, that improves the quality that people observe. To be clear, this is still a low percentage of the overall improvements that we're making to Facebook and Instagram. But I think it'll grow over time. So, that's what I was talking about when I said glimpses.
I’m co-hosting ACCESS with Ellis Hamburger, my good friend who is super plugged in with some of the most interesting AI startups out there. Our next guest will be Figma CEO Dylan Field. A special shoutout to Vox Media for partnering with us on the show, and to Notion for being our presenting sponsor.
While Sources is my own publication and separate from ACCESS, I think of them both as existing in the same cinematic universe. I hope you’ll check the show out and let us know what you think.