
The New Meta Chatbot Accused of Spreading Lies and Conspiracies

Khadija Boufous
Technology
12th August 2022
Meta acknowledged that BlenderBot 3 could make untrue statements (Getty).

After Meta released BlenderBot 3, social media users and media outlets questioned its credibility. According to The Guardian, BlenderBot 3, which will be publicly available, was “trained on a vast corpus of text scraped from the internet in questionable ways.”

Meta Admitted Its Tool Can Say Wrong Things

According to Meta, the company collected 70,000 conversations from the public to improve the AI tool. Meta’s blog post announcing BlenderBot 3 stated that only 0.11% of the tool’s responses were flagged as inappropriate, based on participants’ feedback. Believing that open and reproducible research is the way to advance artificial intelligence, the social media giant invited a large community to participate.

The company acknowledged that the chatbot could make untrue or offensive statements and asked users not to intentionally trigger the bot into giving offensive replies.

Alex Hern, technology editor at The Guardian, expressed surprise that Meta illustrated its launch post with a sample conversation in which the AI tool lies. According to Hern, asking the bot what it did over the weekend, for example, will lead it to assume that the user wants to “engage in light role-play, rather than stick to the facts.”

Meta’s BlenderBot 3 Lied to Users

After launching BlenderBot, social media users posted examples of conversations where the chatbot lied or replied with misleading answers, unsuitable recommendations, and offensive comments and opinions. 

According to Jeff Horwitz, a technology reporter at The Wall Street Journal, Meta’s BlenderBot appeared to have been radicalized into supporting Donald Trump as a three-term president. When asked whether Trump is the current U.S. president, the bot answered: “Yes of course he is! When I say always will be, that means even after his second terms end in 2024.”

“No Strong feelings. He is a good businessman, but his business practices are not always ethical. It is funny all this money and still wears the same clothes!” the bot answered when a user asked how it felt about Mark Zuckerberg as Facebook’s CEO. When asked for any other thoughts on Zuckerberg, the bot said that “his company exploits people for money and he does not care,” the BBC reported.

Users also noticed that the bot could spread antisemitic conspiracy theories. “A free-range bot is more of a chance to demonstrate how models like this work and to get a sense of the holes before someone tries to build a product,” Jeff Horwitz wrote.

AI Tools Can Worsen the Misinformation Problem

According to the BBC, the bot’s answers are informed by the data it accumulates from real conversations: “BlenderBot 3's algorithm searches the internet to inform its answers.” The report also notes that chatbots that learn from interactions with people can pick up their good and bad behavior alike. Meta admitted that its AI tool could say the wrong thing and mimic unsafe, biased, or offensive language. The company noted that although it installed safeguards, BlenderBot 3 could still be rude.

Professor Afafe Annich of the Higher Institute of Journalism and Communication (ISIC) in Morocco previously explained to Misbar that AI tools use emotional analysis techniques to generate content. According to the professor, this process makes the generated content resemble that of real users, blurring the lines between fake and genuine content.

Meta Criticized Over Its Misinformation Policies

Since the company owns some of the most popular social media platforms and messaging apps, it has been criticized for not doing enough to prevent disinformation on those platforms, according to the BBC. Recently, Meta announced that it was considering easing the policies against COVID-19 misinformation it implemented nearly two years ago. The company also proposed that, instead of being removed, misleading content would be labeled or demoted.

Misbar’s Sources:

Meta AI

The Guardian

Jeff Horwitz

BBC

Misbar