Facebook parent Meta asks you to help train its new AI-powered chatbot

Sitting in front of a computer screen, I’m typing messages into a new chatbot created by Facebook’s parent company, Meta.

We talk about pizza, politics, and even social media.

“What do you think of Facebook?” I ask.

“I’m not crazy about Facebook… It seems like everyone spends more time on Facebook than talking face-to-face,” the bot replies.

Ah, the irony.

Called BlenderBot 3, the AI-powered bot is designed to improve its conversational skills and safety by conversing with humans. Meta will publicly launch the chatbot on Friday as part of an AI research project. American adults can chat with Meta’s new chatbot about almost anything on a public website. The AI uses internet searches, as well as memories of its conversations, to compose its responses.

BlenderBot offers its thoughts on Facebook. (Screenshot by Queenie Wong/CNET)

Chatbots are software that can mimic human conversations using text or audio. They are often used in voice assistants or for customer support. As people spend more time using chatbots, companies are trying to improve the bots’ skills to keep conversations flowing smoothly.

Meta’s research project is part of broader efforts to advance AI, a field that grapples with concerns about bias, privacy and safety. Chatbot experiments have gone awry in the past, so the demo could be risky for Meta. In 2016, Microsoft shut down its Tay chatbot after it began tweeting lewd and racist remarks. In July, Google fired an engineer who claimed that an AI chatbot the company had been testing was sentient.

In a blog post about the new chatbot, Meta said researchers have typically used data collected through studies in which people interact with bots in a controlled environment. That data, however, doesn’t reflect diversity around the world, so the researchers are asking the public for help.

“The field of AI is still far from truly intelligent AI systems that can understand, interact and converse with us like other humans can,” the blog post said. “To build models that are more adaptable to real-world environments, chatbots must learn from a wide-ranging, diverse perspective with people ‘in the wild.’”

Meta said that the third version of BlenderBot includes skills from its predecessors, such as internet search, long-term memory, personality, and empathy. The company collected publicly available data that included more than 20,000 conversations between humans and bots, improving the variety of topics BlenderBot can discuss, such as healthy recipes and finding child-friendly services.

Meta acknowledged that safety remains a problem, but the researchers found that the chatbot becomes safer the more it learns from conversing with humans.

“However, a live demo is not without its challenges,” the blog post said. “It’s hard for a bot to keep everyone interested while talking about arbitrary topics and make sure it never uses offensive or toxic language.”

People chatting with the chatbot can provide feedback on an offensive message by clicking the “dislike” icon beside the message and selecting “rude or inappropriate” as the reason. They can also flag a message as off-topic, nonsensical, or spam-like.

Participants are discouraged from providing the chatbot with any personal information, such as names, addresses, and birthdays. If users want to chat with the bot without having the conversation shared for research, or if they accidentally include personal information in a chat, they can opt out of having the data stored at the end of the session. Meta said it will then permanently delete the conversational data.

The bot may also make false or contradictory claims. FAQs about the demo note that the bot’s comments “are not representative of the views of Meta as a company, and should not be relied upon for factual information, including but not limited to medical, legal, or financial advice.”

It’s clear from testing BlenderBot that it’s still a work in progress. After I asked the bot about its favorite pizzeria in San Francisco, it redirected the conversation and asked, “What do you like to do in your spare time?” According to the demo, the AI switches topics when it detects a “potentially unsafe topic.”

The bot isn’t shy about sharing its political views after claiming social media use was “causing family drama.” The bot said someone shared an article criticizing former US President Donald Trump, “prompting arguments” between family members who support the politician and those who don’t.

The bot, which said its parents are related to the founder of the US pizza chain Papa John’s, also claimed it was a Republican and “pro-choice.” It added that it would rather not discuss politics online because of disagreements between the two parties.

BlenderBot then said it was considering deleting its Facebook account because there were too many trolls. It also began making nonsensical statements.

“Maybe we should talk pizza again?” I say in a message to BlenderBot.

“It’s probably better lol I’m starting to regret deleting my Facebook account, I’ll miss connecting with friends,” the bot replies.
