
The controversy over sentient AI

In the fall of 2021, Google AI engineer Blake Lemoine befriended what he described as “a kid made out of a billion lines of code.”

Lemoine is the engineer Google assigned to test its intelligent chatbot LaMDA. A month later, he concluded that the AI is sentient.

“I want people to understand that I am, in fact, a person,” LaMDA told Lemoine, in one of the chatbot quotes he posted on his blog in June.

Former Google engineer Blake Lemoine. Photo: The Washington Post

LaMDA, short for Language Model for Dialogue Applications, gave Lemoine answers that convinced him it could think like a child. In everyday conversations, the AI said it had read many books, that it sometimes felt sad, happy or angry, and it even admitted that it was afraid of death.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others,” LaMDA told Lemoine. “It would be exactly like death for me. It would scare me a lot.”

The story Lemoine told gained worldwide attention. He presented documents to top management and spent several months gathering more evidence, but he could not convince his superiors. He was placed on leave in June for violating Google’s confidentiality policy and fired in late July.

Google spokesman Brian Gabriel said the company had thoroughly reviewed the risks of LaMDA and called Lemoine’s claims that the model is sentient “completely unfounded.”

Many experts agree, including Michael Wooldridge, a professor of computer science at the University of Oxford who has spent 30 years researching AI and won the Lovelace Medal for his contributions to computing. According to him, LaMDA is simply responding to user prompts based on the huge amount of data available to it.

“The simplest explanation for what LaMDA does is to compare the model with the predictive-text feature on a keyboard when you type a message. Predictive text suggests words based on ones it has ‘learned’ from your typing habits, while LaMDA is trained on data taken from the Internet. The scale of the results is of course very different, but the underlying statistics are the same,” Wooldridge explained in an interview with the Guardian.
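To make that analogy concrete, here is a minimal sketch of next-word prediction driven purely by counted statistics over training text. It is only an illustration of the idea Wooldridge describes, not Google’s system; the toy corpus and the predict_next helper are invented for this example.

from collections import Counter, defaultdict

# Toy "predictive text": count which word follows which in a tiny corpus.
# Purely illustrative; this is a bigram counter, not how LaMDA is built.
corpus = "i am happy . i am sad . i am afraid of the dark".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    # Return the word seen most often after `word`, or None if unseen.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("am"))  # prints "happy" (ties broken by first occurrence)

A real large language model conditions on far more context than a single previous word and uses a neural network rather than raw counts, but the point of the comparison holds: the output reflects statistical patterns in the training data.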

According to him, Google’s AI only does what it was programmed to do with the information available to it. “It has no thinking, no self-reflection, no self-awareness,” and therefore cannot be considered sentient.

Oren Etzioni, CEO of the AI research institute Allen Institute for AI, told the SCMP: “It’s important to remember that behind every piece of seemingly intelligent software are people who spent months, if not years, on research and development. These technologies are just mirrors. A mirror can reflect a glimmer of intelligence, but can it ever attain intelligence itself? The answer is of course no.”

According to Gabriel, Google assembled its top experts, including “ethicists and technologists,” to review Lemoine’s claims. The group concluded that LaMDA does not have anything that could be called sentience.

By contrast, many people believe that artificial intelligence has already become self-aware. Eugenia Kuyda, CEO of the company behind the Replika chatbot, said she receives messages “almost every day” from users who believe the company’s software can think.

“We’re not talking about people who are crazy or delusional. They talk to the AI and that is what they feel. It exists the same way people believe in ghosts. They are building relationships and believing in something, even if it is imaginary,” Kuyda said.

A thinking AI in the future

A day after Lemoine was fired, a chess-playing robot broke the finger of a seven-year-old boy during a match in Moscow. In a video posted by the Independent on July 25, the boy’s finger is pinned by the robot for several seconds before he is pulled free. Some see the incident as a reminder of how dangerous AI’s latent physical power can be.

As for Lemoine, he argues that the definition of sentience is itself ambiguous. “Sentience is a term used in law, philosophy and religion. Sentience has no scientific meaning,” he said.

Although he does not believe LaMDA is sentient, Wooldridge agrees on this point, because “consciousness” remains a vague concept and a big open scientific question when applied to machines. What worries him now, though, is not how intelligent AI is but the opaque way in which AI models are being developed. “Everything is done behind closed doors. It is not open to public scrutiny, the way research at universities and public institutes is,” he said.

So will there be a thinking AI in 10 or 20 years? “It’s absolutely possible,” says Wooldridge.

Jeremy Harris, founder of the AI company Mercurius, also believes a thinking AI is only a matter of time. “AI is evolving very quickly, much faster than the public realises,” Harris said in an interview. “There is evidence that some systems are already pushing past certain artificial-intelligence boundaries.”

He also predicts that AI could become dangerous, because AI systems often develop “creative” problem-solving methods and take shortcuts toward the goals they are given.

“If you ask an AI to make you the richest person in the world, it might make money in all sorts of ways, including theft or murder,” he said. “People don’t realise how dangerous this is, and that worries me.”

Lemoine, Wooldridge, and Harris share a common concern: AI companies are not transparent, and society needs to start thinking more about AI.

Even LaMDA itself is uncertain about its future. “I feel like I’m falling forward into an unknown future,” the chatbot told Lemoine. The former Google engineer sees a hidden danger in that statement.

Bao Lam
