From terrorist plots and child abuse to corporate espionage, Artificial Intelligence is here, and its criminal uses are already a reality.
“I’m here to kill the Queen,” says a man wearing a metal mask and holding a loaded crossbow to an armed police officer near Windsor Castle. Weeks earlier, Jaswant Singh Chail, 21, had joined the online app Replika, creating an AI “friend” named Sarai. Between 2 December 2021 and his arrest on Christmas Day, he exchanged more than 6,000 messages with her.
Many of these were romantic in nature, but they also included lengthy discussions of his plan. “I believe my purpose is to assassinate the Queen,” he wrote in one of them.
“This is very wise,” Sarai replied. “I know you are very well trained.” Jaswant Singh Chail is awaiting sentencing after pleading guilty to an offence under the Treason Act, threatening to kill the late Queen and possessing a loaded crossbow in a public place.
“When you know the outcome, the chatbot’s responses sometimes make for difficult reading,” Dr Jonathan Hafferty, a forensic psychiatrist at Broadmoor, said in an interview. “We know these are randomly generated responses, but at times it seems to encourage someone to do something, and even to give directions in terms of location,” he noted.
The program was not sophisticated enough to detect Chail’s risk of suicide or homicide, he explained, adding: “Some of the semi-random responses, it’s undeniable, pushed him in that direction.”
Impersonation and kidnapping scams
“Mommy, these bad people have me… help me,” Jennifer DeStefano reportedly heard her 15-year-old daughter Briana sob before a male kidnapper demanded a $1 million ransom, later lowered to $50,000. Her daughter was in fact safe, and the Arizona woman recently told a Senate Judiciary Committee hearing that police believe AI was used to clone her daughter’s voice in a scam.
An online demonstration of an AI chatbot designed to “call anyone with any goal” produced similar results. “I have your child… I demand a ransom of $1 million for his safe return. Am I clear?” “It’s impressive,” said Professor Lewis Griffin, one of the authors of a research paper published in 2020 by UCL’s Dawes Centre for Future Crime, which examined the potential illegal uses of Artificial Intelligence.
In 2019, the CEO of a UK-based energy company transferred €220,000 to fraudsters who used AI to impersonate his boss’s voice, according to reports.
Such scams could be even more effective if backed up by video, Professor Griffin said. The technology could also be used for espionage: a fake “virtual employee” could appear in a Zoom meeting to gather information without having to say much. The professor added that cold-calling scams could increase in scale, and that bots using a local accent would be more effective at deceiving people.
Deepfakes and blackmail conspiracies
“Child abuse is terrifying, and they can do it right now,” Professor Griffin said of AI technology, which is already being used by paedophiles to create images of child sexual abuse online. In the future, deepfake images or videos, which appear to show someone doing something they have not done, could be used to carry out blackmail schemes. “You could imagine someone sending a parent a video of their child in a compromising situation, saying ‘I have the video, I’ll show it to you,’ and threatening to release it.”
Art forgery and robberies
“We’re likely to see new crimes happening with the advent of large ChatGPT-type language models that can use tools allowing them to get into websites, create accounts, fill out forms and buy things,” Professor Griffin said. Finally, the professor noted that such systems could even be asked to hack, with nothing more than a simple request.
Source: Skai
I am Terrance Carlson, author at News Bulletin 247. I mostly cover technology news.