Elon Musk, the founder of SpaceX and Tesla, has voiced his fears time and again that artificial intelligence (AI) poses a threat to humanity. In a recent talk with employees at his neuro-technology company, Neuralink Inc., he warned that we have only "a 5 to 10% chance" of stopping killer robots from destroying humankind, according to Rolling Stone.
Musk, who is famous for his futuristic claims, said that people have almost no chance of creating a completely safe AI. He claimed that the chances of making AI safe are only 5-10%, while the probability of creating dangerous robots increases every year. Musk, like many of his peers, supports serious regulation of AI, and as soon as possible.
Musk’s latest claims follow a warning he made in July that regulation of AI is required because it poses a “fundamental risk to the existence of human civilization.”
He said, “Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry.”
In 2014, Musk predicted AI was on the verge of something “seriously dangerous.” Last year, he declared that humans had basically already lost the battle against AI, and that the only way to beat the machines was to join them.
“Under any rate of advancement in AI, we will be left behind by a lot,” Musk said. “The risk of something seriously dangerous happening is in the five-year timeframe. Ten years at most. The benign situation with ultra-intelligent AI is that we would be so far below in intelligence we’d be like a pet or a house cat.”
To combat this, Musk proposed that we prepare by developing our natural intelligence to the next level. His startup Neuralink is working on a project called “neural lace,” in which tiny electrodes would be implanted into the brain to manage functions like memory, with the eventual possibility of uploading and downloading thoughts to a computer. The ultimate goal of this technology is to enhance memory function and provide a more direct interface between humans and computers, or to give humans added artificial intelligence.
To fully understand the risks of AI, governments must have a better understanding of the technology’s rapid evolution, he said. “Once there is awareness, people will be extremely afraid, as they should be… By the time we are reactive in AI regulation, it’ll be too late,” he said at the summer conference of the National Governors Association in Rhode Island.