After 60 years, people still have “headaches” from chatbots
Microsoft’s chatbot learned to swear shortly after launch, while Meta’s AI called Zuckerberg a villain and said Trump is still the President of the United States.
The world is flooded with all kinds of AI chatbots, but the technology remains immature, repeatedly stirring controversy and fear.
In the 1960s, a computer program called Eliza was developed to simulate the experience of speaking to a psychotherapist. In the experiment, one participant said her boyfriend had been “stressed out all day,” and Eliza replied, “I’m sorry you’re stressed.”
Although not as flexible as today’s software, Eliza is considered the world’s first chatbot. It worked by scanning the user’s input for specific keywords and reflecting them back as conversational responses. Joseph Weizenbaum, Eliza’s creator, wrote in 1966 that “many people could not believe that Eliza was a computer program and not a real person”.
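The keyword-reflection technique described above can be sketched in a few lines. The rules below are illustrative inventions, not Weizenbaum’s original script, but they show the core mechanism: match a keyword pattern, then echo part of the user’s words back as a reply.

```python
import re

# Eliza-style rules: each pairs a regex with a template that reflects
# part of the user's input back as a response. (Illustrative rules only,
# not the actual 1966 script.)
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "I'm sorry you feel {0}."),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # stock reply when no keyword matches

def respond(text: str) -> str:
    """Return the first matching rule's reflection, or a stock reply."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I feel stressed out all day"))
```

The illusion of understanding comes entirely from the reflection: the program never models what “stressed” means, it only re-emits the user’s own words inside a sympathetic template.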
But Weizenbaum found this unsettling. People who interacted with Eliza willingly opened up to it, even knowing it was just a computer program.
“Eliza shows how easy it is to create and sustain the illusion of understanding, and that illusion is what makes it so convincing. That was no accident,” he said. Toward the end of his career, Weizenbaum repeatedly warned against giving machines too much responsibility and became a sharp critic of artificial intelligence.
Weizenbaum’s story foreshadowed today’s debates over AI. Software that can converse fluently with humans worries many observers, they say, because it creates the false impression that the machine genuinely understands its human counterpart.
Blake Lemoine, a former Google engineer, claimed in June that the LaMDA chatbot “thinks like a kid” when he published a conversation between himself and the AI. Lemoine’s claim was criticized by the AI community, and he was later fired by Google.
Today’s chatbots can provoke strong reactions from users when they don’t work as intended, or when they parrot what they were trained on and make racist or otherwise shocking statements.
Still, proponents of the technology say chatbots can simplify customer service tasks and increase efficiency across many industries. The technology underpins the virtual assistants many people interact with daily, and it can provide company for the lonely, elderly, or isolated.
Others warn that AI-powered chatbot technology is still more limited than it appears. “They’re very good at mimicking humans, but they’re not deep. They’re just superficial fakes; they don’t understand what they’re saying,” commented US artificial intelligence researcher Gary Marcus.
The development of chatbots
Sanjeev Khudanpur, a language technology expert at Johns Hopkins University in the United States, encountered Eliza while at school but soon discovered its limitations. “Eliza can simulate a text conversation for about 10 sentences before you realize it isn’t really smart and is just trying to prolong the exchange,” Khudanpur said.
In the following decades, the idea of “talking to a computer” shifted toward “talking with a purpose”. Concrete examples are Amazon Alexa and Apple Siri, which help users complete specific tasks, such as buying plane tickets, checking the weather, or playing music.
“They use the same technology as the first chatbots, but Alexa and Siri can’t really be called chatbots. They’re better described as voice assistants or digital assistants that help users with a specific task,” said Khudanpur.
This technology languished for years until the Internet became ubiquitous. “In this millennium, more and more organizations have successfully deployed a digital workforce for routine and repetitive tasks,” commented Khudanpur.
Chatbots and Social Topics
In the early 2000s, researchers began revisiting the idea of chatbots that could hold extended conversations with humans. These chatbots are typically trained on vast amounts of text from the Internet and learn to mimic exactly how people speak, but this carries the risk of absorbing the worst of the Internet.
A notorious example is Microsoft’s Tay chatbot, launched in 2016. It was designed to talk like a teenager, but it quickly “learned” to use profanity and racist language, prompting Microsoft to shut it down.
The same happened with BlenderBot 3, which Meta launched earlier this month. The chatbot said Donald Trump is still the President of the United States, claimed the 2020 presidential election was rife with fraud, and made many other contradictory statements.
“Despite all the advances since Eliza and the wealth of new data used to train language-processing software, it remains unclear how to build a reliable and trustworthy chatbot,” Marcus said.
Meanwhile, Khudanpur is optimistic about the potential of chatbots. “Imagine a chatbot that has read all the scientific papers in my field. Then I wouldn’t have to read them; I could just ask questions and interact with the chatbot. In other words, it would be like having another me,” he said.
(According to CNN)