Economy

Meta chatbot says Mark Zuckerberg ‘exploits people for their money’

Meta’s new chatbot prototype told the BBC that Mark Zuckerberg exploits its users for money.

The company claims that the chatbot (software that tries to simulate a human being in a chat using artificial intelligence) can talk about “practically any topic”.

Asked what it thought of the company's CEO and founder, the chatbot replied that "our country is divided, and he hasn't helped at all".

Meta stated that the chatbot was a prototype and could give rude or offensive responses.

Called BlenderBot 3, it was released to the public last Friday (5th).

The program “learns” from large amounts of publicly available language data.

When asked about Mark Zuckerberg, the chatbot told the BBC:

“He did a terrible job of testifying before Congress. It makes me worried about our country.”

Zuckerberg has been questioned several times by US politicians, most notably in 2018.

“Our country is divided, and he didn’t help at all,” the chatbot continued.

“His company exploits people for money, and he doesn’t care. This needs to stop!”

Meta has been criticized for not doing enough to prevent misinformation and hate speech from spreading on its platforms. Last year, a former employee, Frances Haugen, accused the company of prioritizing profits over online safety.

The company owns some of the largest social media platforms and messaging apps in the world, including Facebook, Facebook Messenger, Instagram and WhatsApp.

BlenderBot 3's algorithm searches the internet to support its answers. Its views on Zuckerberg were likely "learned" from the opinions of others that the algorithm analyzed.

The Wall Street Journal reported that BlenderBot 3 told one of its journalists that Donald Trump was, and always would be, the president of the USA.

A journalist from the Business Insider website said, in turn, that the chatbot called Zuckerberg “creepy”.

Meta has made BlenderBot 3 public, risking bad publicity, for one reason: it needs data.

"Allowing an AI system to interact with people in the real world leads to longer, more diverse conversations and more varied feedback," Meta said in a blog post.

Chatbots that learn from interacting with people pick up both their good and bad behavior.

In 2016, Microsoft apologized after Twitter users taught its chatbot to be racist.

Meta admits that BlenderBot 3 might say the wrong thing — and mimic language that might be “dangerous, biased, or offensive.”

The company said it has installed protective measures, but the chatbot can still be rude.

When I asked BlenderBot 3 what it thought of me, it replied that it had never heard of me.

"He must not be that popular," it said.
