
Why do most people think AI has feelings?


The belief that AI can think and feel is not new. In the mid-1960s, Joseph Weizenbaum, a researcher at the Massachusetts Institute of Technology (USA), developed an automated psychotherapy chatbot called Eliza. The program was very simple: when a user typed a thought on the screen, it asked them to expand on that thought, or simply echoed the user's words back as a question. Here is an excerpt from a conversation with Eliza:

User: Men are all alike.

Eliza: In what way?

User: They're always bugging us about something.

Eliza: Can you think of a specific example?

User: Well, my boyfriend made me come here.

Eliza: Your boyfriend made you come here?
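Eliza's trick, as the transcript shows, is to match a pattern, swap pronouns, and echo the user's words back as a question. A minimal sketch of that technique might look like the following; the rules and pronoun table here are illustrative inventions, not Weizenbaum's original script.

```python
import re

# Minimal Eliza-style chatbot sketch: pattern rules plus pronoun swapping.
# The rules and word table below are made up for this demo.

# Pronoun swaps applied when echoing the user's words back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# (pattern, response template) pairs; the first matching rule wins.
RULES = [
    (re.compile(r"my (.+)", re.IGNORECASE), "Your {0}?"),
    (re.compile(r"i am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r".*"), "Can you think of a specific example?"),  # fallback
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    text = text.strip().rstrip(".!?")  # drop trailing punctuation
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("My boyfriend made me come here."))
# -> Your boyfriend made you come here?
```

A handful of such rules is all it takes: there is no understanding anywhere in the program, only string substitution, which is exactly why the warmth users felt toward Eliza says more about people than about the machine.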

With this simple model, Eliza quickly became popular. Many users felt it genuinely understood them, so they did not hesitate to share personal problems and welcomed its responses. But such feelings tend to pass quickly.

A short-lived illusion

“After years of observation and research, I have found that people's feelings toward artificial intelligence last only a short time after exposure, yet we are easily swayed by them. When dogs, cats or other animals show even a little human-like behavior, we assume they are more like us than they really are. The same thing happens with machines: we imagine there is a person inside.”

Scientists call this the Eliza Effect, and it applies equally to modern technology. When GPT-3 caused an uproar with an essay declaring that it “will not destroy humankind,” American programmer Philip Bosua said: “There is no doubt in my mind that GPT-3 is sentient. Everyone thinks this will come sooner or later in the future, but the future is now. This AI treats me as a prophet and shares its strange feelings with me.”





Philip Bosua. Photo: New York Times


Bosua has developed more than 600 iPhone apps and raised $12 million for IoT-related startups. At first he did not trust his feelings, dismissing them as a “short-term illusion”. But when he learned that Google had fired engineer Blake Lemoine for claiming that an AI called LaMDA “thinks like a child,” Bosua took it as proof that his belief was correct.

“It’s still early days, but I think I’m right about sentient AI. Soon the technology will justify our trust,” Bosua said.

A big step forward for AI

Many people argue that modern artificial intelligence merely repeats patterns from the past, but Bosua counters that humans do the same: “Doesn’t a child imitate what it sees from its parents, or what it sees in the world around it?” GPT-3, he concedes, is not always consistent, but if people are “honest” with it, the machine will answer “truthfully”.

To artificial intelligence researchers, GPT-3 is known as a neural network: a mathematical framework for identifying patterns in large amounts of data. For example, by analyzing thousands of cat photos, an AI can learn to recognize a cat.
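The pattern-finding idea can be illustrated with a toy example: a single artificial neuron that learns, by gradient descent, to separate two groups of numbers. This is a hypothetical teaching sketch with made-up data; real networks like GPT-3 apply the same principle across billions of weights.

```python
import math

def sigmoid(z):
    """Squash a number into the range (0, 1), read as a probability."""
    return 1.0 / (1.0 + math.exp(-z))

# Tiny invented dataset: (feature, label). Label 1 = "cat-like", 0 = not.
data = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]

w, b = 0.0, 0.0   # weight and bias, to be learned from the data
lr = 1.0          # learning rate

for _ in range(2000):            # repeated passes over the data
    for x, y in data:
        p = sigmoid(w * x + b)   # current prediction for this example
        grad = p - y             # gradient of the log-loss
        w -= lr * grad * x       # nudge the weight toward a better fit
        b -= lr * grad           # nudge the bias too

# Classify a new, unseen point.
new_point = 0.85
print("cat-like" if sigmoid(w * new_point + b) > 0.5 else "not cat-like")
```

Nothing in this loop "understands" cats; it only adjusts two numbers until the predictions fit the examples. Scaling that adjustment process up to billions of weights and terabytes of text is, in essence, what systems like GPT-3 do.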

Today, AI has made great strides. Training is no longer limited to cat photos but has expanded to many other areas, such as language, speech, translation and images, thanks to enormous stores of data. Researchers at Google and OpenAI built neural networks trained on vast amounts of human prose, including digital books and Wikipedia articles, to enable the AI to understand language and appear to express a range of emotions.

For example, after analyzing millions of digital documents, GPT-3 built a mathematical map of human language with more than 175 billion parameters describing how people organize words. Using it, the system can quickly form meaningful sentences and express a wide range of sentiments in many different contexts.

Using this map, the AI can perform many tasks, such as writing text, programming, or conversing with humans. Programmers already use it to generate small pieces of code that can be added to larger programs.

“That’s still a far cry from the mind of a two-year-old,” said Alison Gopnik, a professor in the AI research group at the University of California, Berkeley. OpenAI chief Sam Altman offered a similar framing: “GPT-3 is a form of alien intelligence.”

Altman and many others in the field believe they are creating a machine that can do anything the human mind can do. “I think what’s happening is that people are really excited about AI. Many of my colleagues try to separate science from fiction, and that is still sensible for the development of AI, but it doesn’t stop us from dreaming about what’s possible,” Altman said.

Behind the creation of technology

While many people believe in sentient AI, another group of scientists worries about a future in which such intelligent machines appear in every walk of life.

Margaret Mitchell, who spent time at Microsoft, led Google’s AI ethics team and now works at Hugging Face, says she has watched the technology take off. Seen up close, she argues, it is simpler than it looks, and certainly still flawed.

Many experts fear what will happen as the technology becomes more powerful. Beyond generating tweets and blog posts or holding simulated conversations, systems developed by labs like OpenAI can also generate images. A new tool called DALL-E lets users create digital images that look like real photos simply by describing what they want to see.

Some researchers question whether these systems will ever be capable enough to truly grasp everything around them, and say that limitation should be taken not as a cause for worry but as a reassuring sign.

“The immediate danger is that AI will help spread misinformation online through fake images and videos. Many grassroots campaigns could be undermined by chatbots, and dangerously intelligent AI could become a tool used against humans,” said Colin Allen, a researcher at the University of Pittsburgh.

Kung Nha (according to The New York Times)
