A.I. is not even at dog-level intelligence yet: Meta A.I. chief

Yann LeCun, chief AI scientist at Meta, spoke at the Viva Tech conference in Paris and said that artificial intelligence does not currently have human-level intelligence but could one day.


Current artificial intelligence systems like ChatGPT do not have human-level intelligence and are barely smarter than a dog, Meta’s AI chief said, as the debate over the dangers of the fast-growing technology rages on.

ChatGPT, developed by OpenAI, is based on a so-called large language model. This means the AI system was trained on huge amounts of language data, allowing a user to prompt it with questions and requests, to which the chatbot replies in language we understand.

The fast-paced development of AI has sparked concern from major technologists that, if unchecked, the technology could pose dangers to society. Tesla CEO Elon Musk said this year that AI is “one of the biggest risks to the future of civilization.”

At the Viva Tech conference on Wednesday, Jacques Attali, a French economic and social theorist who writes about technology, said whether AI is good or bad will depend on its use.

“If you use AI to develop more fossil fuels, it will be terrible. If you use AI [to] develop more terrible weapons, it will be terrible,” Attali said. “On the contrary, AI can be amazing for health, amazing for education, amazing for culture.”

At the same panel, Yann LeCun, chief AI scientist at Facebook parent Meta, was asked about the current limitations of AI. He focused on generative AI trained on large language models, saying such systems are not very intelligent because they are trained solely on language.

“Those systems are still very limited, they don’t have any understanding of the underlying reality of the real world, because they are purely trained on text, massive amount of text,” LeCun said.

“Most of human knowledge has nothing to do with language … so that part of the human experience is not captured by AI.”

LeCun added that an AI system could now pass the bar exam in the U.S., the examination required to become an attorney. However, he said AI can’t load a dishwasher, a task a 10-year-old could “learn in 10 minutes.”

“What it tells you [is] we are missing something really big … to reach not just human level intelligence, but even dog intelligence,” LeCun concluded.


Meta’s AI chief said the company is working on training AI on video, rather than just on language, which is a tougher task.

In another example of current AI limitations, he said a five-month-old baby would look at a floating object and not think much of it. A nine-month-old baby, however, would look at the same object and be surprised, having realized that an object shouldn’t float.

LeCun said we have “no idea how to reproduce this capacity with machines today. Until we can do this, we are not going to have human-level intelligence, we are not going to have dog level or cat level [intelligence].”

Will robots take over?

Striking a pessimistic tone about the future, Attali said, “It is well known mankind is facing many dangers in the next three or four decades.”

He cited climate disasters and war among his top concerns, adding that he is worried that robots “will turn against us.”

During the conversation, Meta’s LeCun said that, in the future, there will be machines more intelligent than humans, a development he argued should not be seen as a danger.

“We should not see this as a threat, we should see this as something very beneficial. Every one of us will have an AI assistant … it will be like a staff to assist you in your daily life that is smarter than yourself,” LeCun said.

The scientist added that these AI systems need to be created as “controllable and basically subservient to humans.” He also dismissed the notion that robots would take over the world.

“A fear that has been popularized by science fiction [is] that if robots are smarter than us, they are going to want to take over the world … there is no correlation between being smart and wanting to take over,” LeCun said.

Ethics and regulation of A.I.

Weighing the dangers and opportunities of AI, Attali concluded that guardrails need to be in place for the development of the technology, though he was unsure who would put them there.

“Who is going to put the borders?” he asked.

