Scientist: the chance is 99.999999% – Musk estimates that digital intelligence will surpass all human intelligence by 2030
Elon Musk is confident AI is worth the risk, even if there is a 1-in-5 chance the technology turns against humanity.
Speaking at the Great AI Debate conference earlier this month, Musk revised his earlier assessment of the technology’s danger, saying: “I think there is some possibility that it will end humanity. I probably agree with Geoff Hinton (the British-Canadian computer scientist) that it is about 10% or 20% or something like that.”
But he added: “I think the potential upside outweighs the downside.”
Musk did not say how he calculated the risk.
The tycoon estimated that digital intelligence will surpass all human intelligence by 2030. While he maintains that the potential positives outweigh the negatives, Musk acknowledged the risk AI development poses to humanity if it continues on its current trajectory.
“You grow an AGI (artificial general intelligence, a type of AI that can perform as well as or better than humans on a wide range of cognitive tasks). It’s almost like raising a child, but a super genius, a child with God-like intelligence, and it matters how you raise the child,” Musk noted at the Silicon Valley event in March. “One of the things that I think is incredibly important for AI safety is to have a maximum kind of truth-seeking, curious AI.”
Musk said his bottom line on the best way to achieve AI safety is to develop AI in a way that forces it to be truthful.
“Don’t force it to lie, even if the truth is unpleasant,” Musk said, explaining the best way to keep people safe from the technology. “It is very important. Don’t make the AI lie.”
“Musk’s estimate is conservative”
Roman Yampolskiy, an AI safety researcher and director of the Cyber Security Laboratory at the University of Louisville, told Business Insider that Musk is right that AI could pose an existential threat to humanity, but added that “if anything, he’s probably conservative in his assessment.”
“The actual ‘p(doom)’ is much higher in my opinion,” Yampolskiy commented, referring to the “probability of doom”: the probability that AI takes control of humanity or causes a humanity-ending event, such as the creation of a novel biological weapon or societal collapse brought on by a large-scale cyberattack or nuclear war.
The New York Times called p(doom) “the morbid new statistic sweeping Silicon Valley,” with various tech executives cited by the paper making estimates ranging from 5 to 50 percent for an AI-driven apocalypse. Yampolskiy puts the risk at 99.999999%.
The scientist pointed out that since it would be impossible to control advanced artificial intelligence, our only hope is not to build it in the first place.
“I’m not sure why he thinks it’s a good idea to keep developing this technology anyway,” Yampolskiy added. “If he’s worried that competitors will get ‘there’ first, it doesn’t matter, since uncontrolled superintelligence is just as bad no matter who creates it.”
Source: Skai