I had the honor to host and moderate the fireside chat with Yann LeCun at CES yesterday. Yann is one of the pioneers of deep learning and modern AI. He is known as a godfather of AI, is a recipient of the Turing Award among many other recognitions, and is Chief AI Scientist at Meta.
His prominence was on display during and after the session. There was a long line to get in, and the room was packed. Afterwards, so many participants approached and surrounded Yann that he took a selfie with the group.
In this post, I include all of the questions I prepared for the session, including those we didn’t have time for. As with all of our fireside chats, we conducted extensive primary and secondary research in advance. We read and reviewed significant material on Yann’s work and comments. Further, we held numerous brainstorming calls with senior tech leaders, whom we acknowledge and thank below. In addition, we prompted ChatGPT, Gemini, and Meta.ai for questions to ask Yann, with an emphasis on those that were controversial and timely.
After each question, I discuss and summarize my takeaways from Yann’s comments at CES.
We would like to recognize and thank Tristan Cossio, David Ellison, Mona Flores, Michael Pfeffer, Anahita Tafvizi, and Mike Tamir, among others, for their help in brainstorming and formulating the key discussion topics and debates for the fireside chat on artificial intelligence.
—
AGI and human-level intelligence
Sam Altman posted this week: “we know how to build AGI,” and “we are beginning to turn to superintelligence.” You have had choice words for AGI 2025 predictions. What are your reactions?
- What is human-level intelligence vs. AGI? Are we generally talking about similar things?
- Is high-compute o3’s score on ARC-AGI of 87.5% an indication that OpenAI is close to human-level intelligence?
Yann was, as always, highly critical of the term AGI. He described it as a terrible term. He didn’t comment on whether OpenAI would get to AGI in 2025 by its own definition, but he emphasized that the term makes no sense.
Instead, Yann argues for the term human-level intelligence. I asked whether human-level intelligence and AGI are at least in the same ballpark, i.e., whether we are all generally talking about the same thing. He said absolutely not.
I didn’t get to the ARC-AGI question, but it’s a good question.
As a note, although definitions here feel like a flawed exercise: according to OpenAI’s website, AGI is a “highly autonomous system that outperforms humans at most economically valuable work.” In a prior interview, Yann described human-level intelligence as an “AI system that can accomplish all the intellectual tasks a human can accomplish with similar performance.”
LLM scaling and achieving human-level intelligence
Do test-time compute systems such as o3 and Gemini 2.0 Flash Thinking restart scaling?
- Let’s say there was “one unit” of improvement from Llama 2 to Llama 3. Will there be the same, greater, or lower number of units from 3 to 4? From 4 to 5?
- Are the test-time compute systems able to reason?
Yann did not comment specifically on whether LLMs are experiencing a renewed scaling phase through test-time compute. He said that even if test-time compute is helping LLMs get better, it is very expensive. Instead, he argued that human-level intelligence is the goal that matters, and that reaching it requires a new architecture. He said there’s “absolutely no way” that auto-regressive LLMs will reach human-level intelligence.
He also didn’t specify the relative improvement expected from future generations of Llama models compared to previous generations. In my view this is a good question.
I did not get to ask him whether he thought the test-time compute systems are able to reason. In my research, some colleagues said yes, they are reasoning; others said no, they are still not reasoning because they do not have understanding.
Next architecture: Objective-driven AI and predictive architecture
You have argued to abandon generative models, probabilistic models, contrastive models, and much of reinforcement learning. What are objective-driven AI and joint-embedding predictive architecture?
- Aravind Srinivas at Perplexity on Lex Fridman applauded your many right calls over the decades. He mentioned that he didn’t think JEPA was the right approach among energy models. Is there evidence for JEPA at this point?
- What are training costs and parameter sizes relative to say Llama 4?
- Question from Gemini: “How do you envision overcoming the challenges of training and deploying energy-based models at scale, particularly in terms of computational resources and the availability of suitable datasets?”
- What is the timeframe to human-level intelligence? Does it imply sentience?
Yann described objective-driven AI and JEPA (joint-embedding predictive architecture) as the focus of much of the work he is doing at FAIR at Meta. When I asked Yann what he would say to critics of JEPA as an approach, he said that colleagues such as Aravind are really talking about demonstrations of the technology, which it is too early to have at this point.
He mentioned that in his team’s work he has seen clear evidence of common sense. He also predicted that if everything went right, JEPA would make its way into Llama 5.
I didn’t get to ask him about the training costs and parameter sizes for his new architecture. Both are good questions, with stock-price-level implications for Meta’s AI investment spend.
Open source models: Will they pass proprietary ones?
Tech executive: “Have you seen an open source contribution to Llama that was material and unexpected to Llama’s development, and if so what was it?”
- “Open models are catching up to proprietary models and are sufficiently close for many applications. Is it possible for open models to jump past proprietary models? If so, how?”
Yann did not point to specific individual contributions, but he commented that Llama has benefited from contributions by the ecosystem of open source developers. I did not get to ask him whether and how open models might leapfrog proprietary ones.
AI agents, assistants, and the future of digital interactions in the enterprise
You have predicted that AI assistants will mediate all of our digital interactions soon. Where do you see this first? Consumer or enterprise? Which applications?
- Will 2025 be the year of AI agents?
- What future consumer AI agent are you most looking forward to?
Yann was positive on AI agents. We did not get to these questions specifically, but the first question in the Lightning Round below focused on AI agents.
Near-term expectations for AI
Chief scientific officer: “At what point will hallucinations be fundamentally minimized such that we do not have to check results? And, at what point will models propose new insights instead of summarizing insights?”
We did not get to these questions either, but they are great questions. For those of you who are attending Wing at JPM 2025, we plan to cover these and other practical questions in our Chatham House Rule fireside chat with Jason Kwon, Chief Strategy Officer at OpenAI.
Regulation in AI: At which layer?
You have argued to regulate AI at the application layer, not the model layer. Are any regulations required at the model layer in AI in your view? If so, what do those look like?
Yann said, “No.” He argued that no regulations should be imposed at the model layer, especially on open source and research initiatives in artificial intelligence.
Lightning round: General robotics, AI scientists, nuclear energy and more
As with many of our fireside chats, I asked Yann a lightning round of “true or false” statements, with 30 seconds to explain why.
I envision a future where AI agents and humans co-exist in virtual and physical spaces.
Yann said, “True.” As per above, he envisions a future with AI agents working for humans for all digital interactions.
The ChatGPT moment for general robotics is around the corner.
This statement was made by Jensen Huang of Nvidia at CES the day before. Yann said, “Depends on whether you’re myopic or not.” Joking aside, he was positive on the opportunities in robotics and anticipated a ChatGPT moment within 3-5 years.
An AI agent will conduct and write research that is accepted by a technical conference such as NeurIPS within the next 5 years.
Yann said, “False.” I asked him: within 10 years? He said, “False.” Clearly, AI scientists are not part of Yann’s view of the future.
Nuclear is the best and safest source to meet AI energy demand.
Yann said, “True.” In my view the question of how to balance the ethics of AI energy consumption and climate change is important for 2025.
Your tweets towards the President-elect and Elon Musk will be nicer in 2025.
Yann is notorious for his tweets. On this one, he didn’t say true or false, but he said his views will always be his views.
What is your favorite photograph in your astrophotography collection?
I like to end fireside chats with a get-to-know-you topic, but unfortunately we ran out of time for this one. Apparently, astrophotography and sailing are both hobbies of Yann’s.
The next paradigm of AI capabilities
We covered a lot of ground in our 30 minutes at CES. Yann has a history of being contrarian and right over multiple decades. While much of the industry is energized and excited about test-time compute, Yann is focused on the architecture for the next paradigm of AI capabilities.
We are grateful to Yann for the fireside chat. We are also thankful to his awesome team at Meta and to Marika Oliff and the incredible crew at CES. The Era of AI is here, as per the CES session title, and Yann is unquestionably a Great Mind in AI.
If you are starting new companies around AI, we’d love to hear from you. Reach out at @wing_vc or to rajeev@wing.vc.