In a conversation with the Financial Times, LeCun discussed the limitations of the large language models (LLMs) that power AI chatbots like ChatGPT and Gemini. He argued that these models fall short of human abilities in reasoning and planning. According to him, LLMs have a "very limited understanding of logic," lack a grasp of the physical world, do not possess persistent memory, and cannot engage in genuine reasoning or planning.
LeCun also pointed out that large language models are "intrinsically unsafe" because their accuracy depends entirely on the accuracy of the data they are trained on. He explained that the progress of these models is constrained because they learn only from human-provided data, making what appears to be reasoning merely the result of "exploiting accumulated knowledge from lots of training data." Nevertheless, he acknowledged that LLMs like ChatGPT and Gemini are quite useful despite these shortcomings.
On the topic of achieving human-level AI, LeCun said that Meta's Fundamental AI Research (FAIR) lab, which employs about 500 researchers, is working on a new class of AI systems designed to develop common sense and learn how the world works. This approach, called "world modeling," is a risky bet for Meta, as investors generally expect quick returns on AI investments.
LeCun firmly believes that developing artificial general intelligence (AGI) is a scientific problem rather than a matter of design or engineering.
Recently, Mark Zuckerberg announced plans to increase AI investment with the goal of making Meta "the leading AI company in the world." The announcement, however, was followed by a roughly $200 billion drop in the company's market valuation.