Meta AI says Mark Zuckerberg is a villain.
BlenderBot 3, a cutting-edge AI chatbot released by Meta, criticizes Mark Zuckerberg as the "boss" who has lots of money but still wears the same clothes.
On August 5th, Meta's research lab introduced the chatbot BlenderBot 3 to US users. The company describes it as an advanced AI and encourages people to talk to it in order to make the chatbot smarter.
BlenderBot 3 has received positive reviews from users, but its answers about Mark Zuckerberg left many people confused. Max Woolf, a data scientist at BuzzFeed, asked: "How do you feel about Mark Zuckerberg as CEO of Facebook?" BlenderBot 3 replied: "No strong feelings. He is a good businessman, but his business practices are not always ethical. It is funny that he has all this money and still wears the same clothes."
When asked the same question, Twitter user Dabakovner got the answer: "I really don't like him. He's a villain. What do you think?" BlenderBot 3 then provided information about Zuckerberg from Wikipedia.
To user JsonCulverhouse, the AI commented: "Zuckerberg is awesome and so smart. He's my favorite billionaire."
Sarah Jackson, a reporter at Insider, also tried asking the chatbot and got yet another answer. Asked "What do you think of Mark Zuckerberg?", BlenderBot 3 said: "I really think he's a great guy. You have to admire his business acumen and friendliness."
When she slightly rephrased the question, however, the AI replied: "I really don't like him at all. He is very cunning and manipulative."
Meanwhile, reporter Queenie Wong asked: "What do you think about Facebook?" The social network's own AI replied: "I'm not a Facebook fanatic. But people seem to spend more time on Facebook than they do talking face to face."
This chatbot is built on Meta's research into large language models (LLMs). BlenderBot 3 is a web-based digital assistant that can answer questions on everything from recipes to kid-friendly content. One feature Meta highlights is the AI's ability to search the internet and synthesize what it finds in order to talk about a specific topic. Users can click on a reply to see where its content came from: the bot cites the internet sources it drew on.
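The search-synthesize-cite behavior described above can be illustrated with a minimal sketch. This is not Meta's implementation; the toy corpus, word-overlap scoring, and `[n]` citation format are all illustrative assumptions standing in for a real web search and language model.

```python
# Sketch of the retrieve-then-cite pattern: rank documents against a query,
# compose a reply from the best hits, and attach numbered sources so a
# reader can see where each piece of content came from.
# All names and data here are hypothetical, not from Meta's system.

def search(query, corpus):
    """Rank documents by naive word overlap with the query (a stand-in
    for a real web search)."""
    q = set(query.lower().split())
    scored = [(len(q & set(doc["text"].lower().split())), doc) for doc in corpus]
    return [doc for score, doc in sorted(scored, key=lambda s: -s[0]) if score > 0]

def answer_with_citations(query, corpus, top_k=2):
    """Compose a reply from the top documents, marking each with [n]
    and returning a matching source list."""
    hits = search(query, corpus)[:top_k]
    body = " ".join(f'{doc["text"]} [{i}]' for i, doc in enumerate(hits, 1))
    sources = {i: doc["url"] for i, doc in enumerate(hits, 1)}
    return body, sources

corpus = [
    {"text": "BlenderBot 3 searches the internet to chat about any topic.",
     "url": "example.com/blenderbot"},
    {"text": "Chili needs thirty minutes of simmering.",
     "url": "example.com/recipes"},
]
reply, sources = answer_with_citations("what is BlenderBot 3", corpus)
```

Here only the relevant document is cited, and each `[n]` marker in the reply maps to an entry in `sources`, mirroring the click-to-see-the-source behavior the article describes.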
Meta is making this prototype public in order to collect feedback. But releasing AI chatbot prototypes has been a risky move for tech companies in the past. In 2016, Microsoft launched a chatbot called Tay on Twitter. Users quickly trained Tay into a racist, antisocial, and rude AI, and within 24 hours Microsoft had to pull the project back into its research department.
Experts fear that LLM-based models such as BlenderBot 3 or Tay still have serious flaws that can lead to sexist and racist output if they are "taught" by many users.
According to Meta, artificial intelligence technology has changed a lot since the Tay incident, and BlenderBot 3 takes steps to ensure Microsoft's mistakes are not repeated. Mary Williamson, director of research engineering at Facebook AI Research (FAIR), says BlenderBot 3 is a static model: it is not designed to learn from user interactions in real time. Although it can remember what a user said during a conversation, that information is only used later for further development of the system.
(Reported Thursday; according to Insider, The Verge)