Meta scientist says AI is not as smart as dogs

by nativetechdoctor

According to CNBC, Yann LeCun, Vice President and Chief AI Scientist at Meta, said that current AI systems are not as intelligent as humans, or even as intelligent as dogs.

OpenAI’s ChatGPT is built on a large language model (LLM), meaning the system is trained on huge amounts of text data, allowing the chatbot to respond to users’ questions and prompts.

The rapid development of AI has worried technology experts, and many believe AI could be dangerous to society if left unchecked. American billionaire Elon Musk considers AI one of the biggest risks to the future of human civilization.

When asked about the current limitations of AI, LeCun said that generative AI systems trained on large language models have no real understanding of the world, since they are trained exclusively on vast amounts of text. Most human knowledge has nothing to do with language, so that part of human experience is not “learned” by AI, LeCun said.

Current AI systems can pass the bar exam in the US, LeCun noted, but cannot load a dishwasher, something a 10-year-old can learn in 10 minutes.

The AI scientist revealed that Meta is focusing on research into training AI on video data, a far greater challenge than training on text with LLMs.

In another example of the limitations of current AI, he said a five-month-old baby will look at a floating object and not think much about it. A nine-month-old, however, will look at the same object with surprise, because it has learned that objects should not levitate like that. We still don’t know how to replicate this ability in today’s machines, and without it, AI will not reach the level of intelligence of humans, dogs, or cats, LeCun said.

Professor LeCun also emphasized that there will eventually be machines smarter than humans, but this should not be considered a threat. We should see it as something that benefits people: everyone could have an AI assistant to help with daily life, he suggested.

According to LeCun, science fiction has made us fear that robots smarter than humans will want to take over the world. However, there is no correlation between intelligence and the desire to dominate the world. AI systems, he said, must be kept under human control and dependent on humans.
