Chatbots such as ChatGPT, Google Gemini and Claude can be ideal for helping you with some tasks, but they are also a minefield of potential blunders.
Just take a look at the publicly posted feed of conversations with the Meta AI chatbot. Some users seemed unaware that they had published their questions about an appointment, or their requests for tax-evasion tips.
The Washington Post lists six tips for avoiding unwanted uses of artificial intelligence:
1. Be careful what you share, part one
A special warning about the Meta AI chatbot app: there is a “Share” button in the upper right corner of your conversation. If you press this option and then “Post,” your conversation can be channeled into a public, Facebook-style feed called Discover, a stream of AI conversations from all users.
Some people seem to be accidentally using Meta AI as a public personal diary or making appointments through the app. Be careful. (This week, Meta added a warning that appears if you are about to post your AI conversation online, although it did not show up consistently in the app.)
2. Do not develop feelings for chatbots
Chatbots are designed to sound human and to keep conversations flowing, like a text gabfest with an old friend. Some chatbots can even play a romantic partner, including sexual conversations.
But never forget that a chatbot is not a friend, a lover or a substitute for human relationships.
If you feel lonely or uncertain in social situations, there is no harm in joking around with AI or using it for practice. Just be sure to transfer those skills to the real world.
You can also try asking a chatbot to recommend organizations for people in your age group, or for tips on building personal relationships.
3. Recognize when you are talking to AI
Artificial intelligence is so good at imitating human chatter that scammers use it to strike up conversations and trick people into sending them money.
For safety’s sake, assume that anyone you meet on the internet may not be who they say they are, especially in romantic discussions or investment proposals. If you fall in love with someone you have never met, stop and ask a family member or friend whether anything seems strange to them.
4. Chatbots make up weird things
Chatbots constantly make things up. They are also designed to be friendly and agreeable so that you spend more time using them.
This combination sometimes leads to fabrications, such as when a Washington Post colleague found that OpenAI’s ChatGPT invented excerpts from her own published columns. (The Post has a content partnership with OpenAI.)
When these oddities happen to you, it helps to know the reason: they are “dumb” computer errors.
Artificial intelligence companies could program their systems to answer, “This chatbot cannot access this information,” when asked about essays, books or news articles.
A spokesman for OpenAI said the company “is constantly working to improve the accuracy and reliability of our models” and pointed to an online disclosure about ChatGPT’s mistakes.
5. Do not just copy and paste AI text
If you are using a chatbot to help you write a message on a dating app, a wedding toast or a cover letter for a job, people can tell when your words come from AI. (Or they can paste your text into an AI detector, although those technologies are flawed.)
Roman Khaves, chief executive of the AI dating assistant Rizz, suggested treating the chatbot’s text as a rough first draft. Rewrite the text to make it your own, adding specific details or personal references.
6. Be careful what you share, part two
Most chatbots will use at least some information from your conversations to “train” their artificial intelligence, or may store your information in ways you do not expect.
Niloofar Mireshghallah, an artificial intelligence expert, was surprised to find that pressing the thumbs-up or thumbs-down option to rate an answer from Anthropic’s Claude chatbot starts a process by which you consent to the company saving your entire conversation for up to 10 years.
Anthropic said it is transparent about this process in the context of its feedback feature.
Before you confide in chatbots, imagine how you would feel if the information you are typing leaked publicly. Mireshghallah said she is unsettled by the prospect of employees at chatbot companies reviewing people’s conversations, which she said sometimes happens.
At a minimum, Mireshghallah advised against entering personal or sensitive information into chatbots, such as Social Security or passport numbers.
Source: Skai
Terrance Carlson, technology author at News Bulletin 247.