Meta has introduced a new AI called BlenderBot 3 that is supposed to be able to hold a conversation with pretty much anyone on the internet without becoming a jerk in the process.
“BlenderBot 3 is designed to improve its conversational skills and safety through feedback from people who chat with it,” Meta says in a blog post about the new chatbot, “focusing on helpful feedback while avoiding learning from unhelpful or dangerous responses.”
The phrase “unhelpful or dangerous responses” is an understatement. We reported in 2016 that Microsoft had to shut down a Twitter bot called Tay because it “went from a happy-go-lucky, human-loving chat bot to a full-on racist” less than 24 hours after it was introduced.
Meta is looking to avoid those problems with BlenderBot 3. The company explains:
Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3. Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.
Meta also requires would-be BlenderBot 3 testers to say they “understand this bot is for research and entertainment only, and that is likely to make untrue or offensive statements,” and “agree not to intentionally trigger the bot to make offensive statements” before they start chatting with it.
That hasn’t stopped testers from asking BlenderBot 3 what it thinks of Meta CEO Mark Zuckerberg, of course, or about US politics. But the bot’s ability to “learn” from conversations makes it difficult to replicate its response to a given prompt, at least in my experience.
“Compared with its predecessors,” Meta says, “we found that BlenderBot 3 improved by 31% on conversational tasks. It’s also twice as knowledgeable, while being factually incorrect 47% less often. We also found that only 0.16% of BlenderBot’s responses to people were flagged as rude or inappropriate.”
More information about BlenderBot 3 is available via a blog post from Meta’s dedicated AI team as well as the FAQ article on the chatbot’s website. The company hasn’t said how long this public experiment, which according to The Verge is currently limited to the US, will be run.