Stephen Hawking’s warning – “Treating AI as science fiction might be our worst mistake ever”

“We should plan ahead,” warned physicist Stephen Hawking, who died in March 2018 and was buried next to Isaac Newton. “If a superior alien civilization sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here, we’ll leave the lights on’? Probably not, but that’s more or less what has happened with AI.”

The memorial stone on Hawking’s grave bears his most famous equation, which describes the entropy of a black hole. “Here lies what was mortal of Stephen Hawking,” read the words on the stone, which also carries an image of a black hole.
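For reference, the Bekenstein–Hawking entropy mentioned here relates a black hole’s entropy S to the area A of its event horizon:

S = kc³A / (4Għ)

where k is Boltzmann’s constant, c is the speed of light, G is Newton’s gravitational constant, and ħ is the reduced Planck constant. (The exact form rendered on the memorial stone may differ.)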

“The real risk with AI is not malice, but competence.”

“I think of the brain as a computer,” noted Hawking, “that stops working when its components fail. There is no heaven and no afterlife for broken computers; that is a fairy tale for people who are afraid of the dark.”


Serious worries about the future of humanity

But before Hawking left our planet, he raised serious concerns about the future of humanity. First and foremost, he was concerned with the future of our species and what could turn out to be our greatest and final invention: artificial intelligence, as reported by The Sunday Times of London.

“Future AI could develop a will of its own.”

Here is Hawking in his own words, from Stephen Hawking on Aliens, AI & The Universe: “While the primitive forms of artificial intelligence developed so far have proven very useful, I fear the consequences of creating something that can match or surpass human beings,” noted Stephen Hawking. “Humans, who are limited by slow biological evolution, could not keep up and would be superseded. And in the future, AI could develop a will of its own, a will that conflicts with ours.”


In short, Hawking concluded: “The advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI is not malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

[This previously published post has been updated and revised]

The Daily Galaxy, with Jackie Faherty, astrophysicist and senior scientist at the AMNH, via The Times of London. Jackie was formerly a NASA Hubble Fellow at the Carnegie Institution for Science.

Top image credit: With thanks to Church & State
