Yann LeCun, Meta’s Chief AI Scientist and a Turing Award winner, has long been a key figure in artificial intelligence. His work over the past year has not only pushed the boundaries of AI research but also sparked important conversations about how society should weigh the benefits and risks posed by this revolutionary technology.
Born in 1960 in Soisy-sous-Montmorency, France, LeCun has been a driving force in AI technology. From serving as founding director of the New York University Center for Data Science in 2012, and co-founding Meta AI in 2013, to shaping the future of open-source artificial intelligence, LeCun’s clear-eyed vision makes him Emerge’s Person of the Year.
“On the technical side, he’s been a visionary. There are only a few people you could honestly say that about, and he’s one of them,” said Rob Fergus, a professor of computer science at New York University, in an interview. “The Cambrian explosion of businesses and people using these large language models has been a result of his support for open-source and open research,” he added.
Fergus is an American computer scientist specializing in machine learning, deep learning, and generative models. He co-founded Meta AI (formerly Facebook Artificial Intelligence Research) with Yann LeCun in September 2013. He is a professor at NYU’s Courant Institute and a researcher at Google DeepMind.
LeCun’s influence on AI stretches back decades, encompassing his pioneering work in machine learning and neural networks. A Silver Professor at New York University, he has long championed self-supervised learning, an approach inspired by how humans learn from their environment. In 2024, this vision drove advances in AI systems that understand, reason, and plan with increasing sophistication, much like living beings.
“Back in 2015, there were discussions about how to get to AGI within a certain time. Yann had a cake analogy: unsupervised learning is the cake, supervised learning is the icing, and reinforcement learning is the cherry on top,” Professor Fergus recalled. “Some mocked this at the time, but it’s been proven right. Current LLMs are mostly trained using unsupervised learning, fine-tuned with only minimal supervised data, and enhanced using reinforcement learning based on human preferences.”
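The three-layer recipe Fergus describes can be sketched schematically. The stage names and toy model structure below are invented purely for illustration, not any real training API; the point is only the relative scale of data each layer consumes:

```python
# Schematic of the LLM training pipeline that LeCun's cake analogy
# anticipated. Each stage is a simple transformation on a toy model
# state; no real training occurs.

def pretrain(corpus):
    """Unsupervised stage ("the cake"): learn from raw, unlabeled text
    by predicting the next token. Consumes the vast majority of data."""
    return {"stage": "pretrained",
            "tokens_seen": sum(len(doc.split()) for doc in corpus)}

def supervised_finetune(model, labeled_pairs):
    """Supervised stage ("the icing"): a comparatively tiny set of
    (prompt, ideal answer) pairs teaches instruction following."""
    return dict(model, stage="fine-tuned", demos_seen=len(labeled_pairs))

def rlhf(model, preference_pairs):
    """Reinforcement stage ("the cherry on top"): human rankings of
    candidate answers steer the model toward preferred behavior."""
    return dict(model, stage="aligned", preferences_seen=len(preference_pairs))

corpus = ["the quick brown fox jumps", "over the lazy dog"] * 1000  # huge, unlabeled
demos = [("What is 2+2?", "4")]                                     # small, labeled
prefs = [("answer A", "answer B", "A preferred")]                   # smaller still

model = rlhf(supervised_finetune(pretrain(corpus), demos), prefs)
print(model["stage"])        # aligned
print(model["tokens_seen"])  # 9000
```

The asymmetry in the toy data (thousands of unlabeled documents versus a handful of demonstrations and preferences) mirrors the proportions Fergus says the cake analogy predicted.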
Whether developing cutting-edge systems like Meta’s open-source large language models, including Llama, or addressing the ethical and regulatory challenges of AI, LeCun has become a key figure in the global debate about the technology’s role.
“It’s been wonderful to see him up close and all the amazing things he’s done,” Professor Fergus remarked. “More people should listen to him.”
AI regulations
LeCun’s resolute opposition to regulating foundational AI models has been one of his most contentious statements this year.
“He’s told me that he doesn’t think AI regulations are necessary or the right thing,” said Russel Caflisch, a professor of mathematics at NYU. “I believe he’s an optimist, and he sees all the good things that can come from AI.”
Caflisch, the director of New York University’s Courant Institute of Mathematical Sciences, has known Professor LeCun since 2008 and has been a witness to the development of contemporary machine learning.
LeCun argued in June on X that regulating foundational models could stifle innovation and slow technological advancement.
LeCun said that holding technology developers accountable for improper uses of products created with their technology would simply halt technological development. “It will certainly stop the distribution of open-source AI platforms, which will kill the entire AI ecosystem, not just startups, but also academic research.”
LeCun has opposed regulating foundational AI models, arguing that rules should instead target applications, where risks are more context-specific and manageable.
“Yann has done the foundational work that’s made AI successful,” Caflisch said. “His current importance lies in being approachable, articulate, and having a vision for advancing AI toward artificial general intelligence.”
Criticisms of AI fearmongering
LeCun has spoken out in opposition to what he thinks are overblown concerns about the potential dangers of AI.
“He doesn’t give in to fear-mongering and is optimistic about AI, but he’s not a cheerleader either,” Caflisch said. He also suggested that robotics, by capturing data from the real world, could help advance the field.
In an appearance on the Lex Fridman Podcast in April, he disputed the ominous predictions that frequently come to mind: runaway superintelligence and uncontrolled AI systems.
“AI doomers imagine all kinds of catastrophe scenarios of how AI could escape our control and basically kill us all,” LeCun said, adding that such fears rely on a number of false assumptions. “The first one is that the emergence of superintelligence will be an event: at some point we’ll turn on a superintelligent machine, and because we’ve never done it before, it will take over the world and kill everyone. That is false.”
The world has been in an AI arms race since ChatGPT’s release in November 2022. Fueled by a century of Hollywood films foretelling a robot apocalypse, and by news that AI developers are working with the U.S. government and its allies to integrate AI into their frameworks, many people worry that an AI superintelligence will take over the world.
LeCun, however, disagrees with these assertions, arguing that even the most intelligent AI will at first possess only the capabilities of a small animal, not a Matrix-style global hivemind.
“It’s not going to be an event. We’re going to have systems that are as intelligent as cats, which have all the characteristics of human-level intelligence,” LeCun continued. “Then we’re going to work our way up to make those things more intelligent. As we make them more intelligent, we’re also going to put some guardrails in them.”
In a hypothetical doomsday scenario where developers cannot agree on how to control AI and one system goes rogue, LeCun suggested, “good” AI could be used to fight the rogue ones.
AI language models, including OpenAI’s o1, are only engaged in intelligent retrieval, according to Yann LeCun, and do not represent the evolution of human-level intelligence.
— Tsarathustra (@tsarnick) October 23, 2024
The evolution of AI
LeCun advocates “objective-driven AI”: AI that can understand, predict, and interact with the world with a depth comparable to that of living things. Rather than merely predicting sequences or producing content, such systems would build “world models,” internal representations of how things work, enabling causal reasoning and planning.
LeCun has long advocated self-supervised learning as a method for advancing AI toward more autonomous and general intelligence. He envisions AI that learns from vast amounts of unlabeled data, much as humans learn from their environment, and thereby acquires the ability to perceive, reason, and plan at multiple levels of abstraction.
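A minimal sketch of the world-model idea described above, with toy `world_model` and `objective` functions invented purely for illustration (here the “world” is just a position on a number line; real world models are learned neural predictors):

```python
# Toy illustration of "objective-driven AI": instead of reacting to raw
# inputs, the agent consults an internal world model to simulate the
# outcome of each candidate action, then picks the action whose
# predicted outcome best satisfies its objective.

def world_model(state, action):
    """Internal model of how the world works: predicts the next state."""
    return state + action

def objective(state, goal):
    """Cost the agent tries to minimize: distance from the goal."""
    return abs(goal - state)

def plan(state, goal, actions=(-1, 0, 1), horizon=10):
    """Plan by mentally simulating actions with the world model."""
    trajectory = []
    for _ in range(horizon):
        # Simulate each action internally and keep the best predicted outcome.
        best = min(actions, key=lambda a: objective(world_model(state, a), goal))
        state = world_model(state, best)
        trajectory.append(state)
        if objective(state, goal) == 0:
            break
    return trajectory

print(plan(0, 3))  # [1, 2, 3]
```

The key design point is that planning happens against the model’s predictions, not against the environment itself, which is what distinguishes this style of system from one that only predicts the next item in a sequence.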
In a speech at the K-Science and Technology Global Forum in Seoul, LeCun stated that “the real AI revolution has not yet begun,” predicting that every one of our interactions with the digital world will soon be mediated by AI assistants.
Yann LeCun’s contributions to AI in 2024 are driven by a desire for technological advancement and pragmatism. His pushback against alarmist AI narratives and his opposition to heavy-handed AI regulation demonstrate his commitment to advancing the field. As AI continues to evolve, LeCun’s influence ensures it remains a force for technological progress.