Meta’s Yann LeCun is Right About AI

Meta’s Yann LeCun is most definitely right about just how much of an existential threat AI is to humans at the moment. And that is none.

When asked whether A.I. is becoming smart enough to pose a threat to humanity, he replied, “You’re going to have to pardon my French, but that’s complete B.S.”

AI just is not smart yet. Anybody who has used a GPT can tell you the dang things don’t even follow instructions all the time, let alone pose a threat to humanity. It’s like asking if a smart fridge is going to poison you.
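If you want to see the instruction-following problem for yourself, here’s a minimal sketch of the kind of spot check I mean. It assumes the OpenAI Python client and an API key in your environment; the model name and the prompt are just illustrative.

```python
# Minimal sketch of an instruction-following spot check.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

PROMPT = "Reply with exactly one word: the capital of France. No punctuation."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; swap in whatever model you actually use
    messages=[{"role": "user", "content": PROMPT}],
)

answer = response.choices[0].message.content.strip()

# A genuinely "smart" system would satisfy this trivial constraint every time.
if answer == "Paris":
    print("Followed the instruction this time.")
else:
    print(f"Instruction not followed: {answer!r}")
```

Run it a few dozen times and you may well catch the model tacking on punctuation or a full sentence despite being told not to.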

LeCun argued that today’s large language models lack some key cat-level capabilities, like persistent memory, reasoning, planning, and an understanding of the physical world. In his view, LLMs merely demonstrate that “you can manipulate language and not be smart,” and they will never lead to true artificial general intelligence (AGI).

https://techcrunch.com/2024/10/12/metas-yann-lecun-says-worries-about-a-i-s-existential-threat-are-complete-b-s/
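That missing persistent memory is the easiest of those gaps to see from the outside. Here’s a minimal sketch, again assuming the OpenAI Python client and an illustrative model name, showing that a chat model keeps no state between calls; anything that looks like memory is just the caller replaying the conversation.

```python
# Minimal sketch of the "no persistent memory" point: the model keeps no state
# between API calls, so any "memory" is just the conversation history the
# caller resends. Assumes the OpenAI Python client; model name is illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative

# First call: tell the model a fact.
client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "My dog's name is Biscuit."}],
)

# Second, completely independent call: the model has no idea what came before.
fresh = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "What is my dog's name?"}],
)
print(fresh.choices[0].message.content)  # it can only guess

# "Memory" only appears if we replay the whole conversation ourselves.
history = [
    {"role": "user", "content": "My dog's name is Biscuit."},
    {"role": "assistant", "content": "Nice to meet Biscuit!"},
    {"role": "user", "content": "What is my dog's name?"},
]
replayed = client.chat.completions.create(model=MODEL, messages=history)
print(replayed.choices[0].message.content)  # now it can answer
```

The model’s weights never change between those calls; the continuity lives entirely in the message list the caller maintains.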

He’s really not wrong. LLMs still hallucinate and give incorrect answers, and they certainly aren’t the stepping stone to full-blown Artificial General Intelligence; they’re just one step out of a million on the road to full-blown AGI. LLMs really are vastly dumber than a cat. People only perceive them as smart because they pull up answers to all of their questions…

Here’s an example from ChatGPT o1, asked why people perceive LLMs as smart:

“People perceive large language models (LLMs) as smart because they generate coherent, contextually appropriate, and sophisticated responses across a wide range of topics. Trained on vast datasets that include books, articles, and online content, these models can recall factual information, mimic various writing styles, and even produce creative outputs like poems or stories. The fluency and relevance of their language make interactions feel natural, leading users to attribute intelligence to the models.

Additionally, LLMs can perform tasks that appear to require understanding and reasoning, such as summarizing complex texts, translating languages, or providing detailed explanations. This ability to process and generate human-like language creates the impression that they comprehend the content at a deep level. The combination of extensive knowledge and the simulation of human conversation enhances the perception of LLMs as smart entities.”

LLMs are not smart. They just sound smart, and apparently they have the training data to back that up. While LLMs have certainly advanced over the past few years, and there doesn’t seem to be a slowdown in that advancement just yet, they are still far from perfect.

Now onto AGI… AGI is far off in the future and will require numerous technological breakthroughs. An AGI will have to be able to learn, reason, and understand. It must be able to perceive, and its hardware will need brain-like qualities. That hardware will need to be such a large technical leap beyond what we currently have that it’s difficult to put into words. From sensors to CPUs, GPUs, NPUs, and god only knows what else… the hardware required will initially be diverse and vast.

Anyway, let’s ask our old friend ChatGPT o1 again just to make sure we have our bases covered:

“Achieving artificial general intelligence (AGI)—an AI system with the ability to understand, learn, and apply knowledge across a wide range of tasks at a human level—requires several significant technological breakthroughs.

  1. Advanced Learning Algorithms: Current AI systems excel in narrow tasks but struggle with generalization. Developing algorithms that can learn unsupervised, adapt to new situations, and transfer knowledge across domains is crucial. Breakthroughs in unsupervised learning, reinforcement learning, and meta-learning could enable machines to learn more like humans do, without requiring vast amounts of labeled data.
  2. Common Sense Reasoning and Understanding: AI lacks the ability to comprehend context or possess common sense. Advances in natural language understanding, symbolic reasoning, and the integration of knowledge graphs could help machines understand and reason about the world more effectively.
  3. Neuroscience-Inspired Models: Understanding the human brain’s architecture and function can inspire new computational models. Neuromorphic computing, which mimics neural structures, and bio-inspired algorithms might provide pathways to more general forms of intelligence.
  4. Scalable and Efficient Hardware: AGI will require immense computational resources. Innovations in hardware, such as quantum computing or more efficient processing units tailored for AI workloads, are necessary to handle the complexity and scale of AGI algorithms.
  5. Integration of Multiple AI Paradigms: Combining different approaches—such as machine learning, symbolic AI, and probabilistic models—may lead to more robust and flexible intelligence. This hybridization can help overcome the limitations of relying on a single method.
  6. Ethical Frameworks and Safety Mechanisms: Developing AGI raises ethical and safety concerns. Breakthroughs in AI alignment research, which ensures that AI systems act in accordance with human values, are essential to prevent unintended consequences.
  7. Consciousness and Self-Awareness Models: While controversial, some theories suggest that replicating aspects of consciousness or self-awareness could be necessary for AGI. Research into the nature of consciousness may inform the development of machines that can reflect on their actions and adapt accordingly.
  8. Improved Data Efficiency: Reducing the dependency on large datasets by creating algorithms that can learn from minimal data inputs would make AGI more feasible. Few-shot learning and zero-shot learning are areas of active research in this domain.
  9. Robust Perception and Sensory Integration: Enabling machines to process and integrate information from various sensors (visual, auditory, tactile) as seamlessly as humans do requires advances in perception algorithms and multimodal data processing.

In summary, realizing AGI involves a multifaceted approach that includes developing advanced algorithms capable of general learning, enhancing computational infrastructure, integrating diverse AI methodologies, and establishing ethical guidelines to ensure beneficial outcomes. Each of these areas requires significant breakthroughs and collaborative efforts across disciplines.”
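Item 8 in that list, few-shot and zero-shot learning, is easier to picture with a concrete prompt. Here’s a minimal sketch of the difference: the same classification task posed with no examples versus with a handful of in-prompt examples. The task, the review text, and the labels are all made up for illustration, and the actual model call is left out.

```python
# Minimal sketch of zero-shot vs. few-shot prompting for a toy sentiment task.
# Everything here (task wording, reviews, labels) is illustrative.

TASK = "Label the sentiment of the review as Positive or Negative."
REVIEW = "The battery died after two days and support never replied."

# Zero-shot: the model gets only the task description and the input.
zero_shot_prompt = f"{TASK}\n\nReview: {REVIEW}\nSentiment:"

# Few-shot: a few labeled examples are packed into the prompt so the model can
# pick up the pattern without any training or fine-tuning.
few_shot_prompt = (
    f"{TASK}\n\n"
    "Review: Great screen, fast shipping, would buy again.\nSentiment: Positive\n\n"
    "Review: Arrived cracked and the seller ignored my emails.\nSentiment: Negative\n\n"
    f"Review: {REVIEW}\nSentiment:"
)

print(zero_shot_prompt)
print("---")
print(few_shot_prompt)
```

That gap, performing well with few or no examples instead of massive labeled datasets, is what item 8 calls an area of active research.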

Ultimately an LLM is just the beginning of our race to AGI and will help lead to the development of new kinds of models. Like I said, it’s a single stepping stone.

One very real threat that AI does pose is a reduction in the workforce. While AI will not be fully automating every single job just yet, it can raise productivity enough to make some employees redundant. That’s just a natural effect.

Customer service is a place where we’ve seen a lot of AI-enabled programs take on tasks that traditionally required human employees. It’s hard to think of any large customer-centered company that does not use some kind of AI-enabled software or chat service these days. It’s kind of a corporate fad, just like “the cloud” was a decade or so ago. Basic AI-enabled software is where we are in the cycle of integrating new technology into business, and that will come with human costs… redundancy.

What’s worse, though, is that people will find it difficult to get similar jobs after being let go, because everyone else will be trimming their workforces thanks to the same AI productivity gains. And retraining takes time! Getting a new degree takes time! Time most people do not have, because they need to, you know, actually stay alive. The costs of food, rent, and health insurance add up fast.

So what’s likely to happen is that we end up with a large, disgruntled portion of the population who cannot simply get a new job, or who have to work jobs outside their aptitude and overall skill set. It’s a recipe for social strife and for the rise of a political movement to block the advancement of AI into the business world.

To sum up: LLMs are not smart, AI is not a threat to humanity yet, but AI is a threat to workers, and AGI is nowhere near being created. God only knows where society is headed when it comes to AI, but it’s not time to panic yet.