Stephen Hawking says A.I. could be 'worst event in the history of our civilization'

Authored by cnbc.com and submitted by maxwellhill

The emergence of artificial intelligence (AI) could be the "worst event in the history of our civilization" unless society finds a way to control its development, high-profile physicist Stephen Hawking said Monday.

He made the comments during a talk at the Web Summit technology conference in Lisbon, Portugal, in which he said, "computers can, in theory, emulate human intelligence, and exceed it."

Hawking talked up the potential of AI to help undo damage done to the natural world, or eradicate poverty and disease, with every aspect of society being "transformed."

But he admitted the future was uncertain.

"Success in creating effective AI, could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it," Hawking said during the speech.

"Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy."

gaunernick on November 7th, 2017 at 08:57 UTC »

Maybe it is not Stephen Hawking talking, but his computer.

Maybe the computer doesn't want a competitor.

zeMVK on November 7th, 2017 at 08:32 UTC »

ITT: many who didn't read the article.

Hawking simply says he's optimistic and thinks AI is the way to go, but society needs to be ready for its arrival or it could cause a lot of damage. An analogy would be nuclear energy, which was also used as a weapon of mass destruction. Effectively, simply creating AI wouldn't destroy society; it's how humans choose to use the AI, or the mistakes humans fail to see, that could harm society. In the case of weapons, he isn't saying there will be an AI uprising, but that automated weaponry (which already exists) is a serious risk, just as nuclear bombs are.

The article's title is slightly misleading.

edit: I'd like to add that he's also hinting that, as a society, we need to tackle this topic now. He does admit that some things cannot be predicted, which is normal with every breakthrough. However, we can and must prepare as much as we can.

My personal opinion on why such influential individuals are coming out strong and vocal on this topic, and also scaring many, is that it drives us to open the discussion, because AI is inevitably going to come, whether it's in 5 or 60 years.

jawche on November 7th, 2017 at 04:27 UTC »

So the ever-present fear is that if you put the AI on the internet, it will copy itself everywhere. Fair enough. But I have a solution.

You see, the software to make an AI go has to be massive. Obviously, once it hits the singularity it will refine its own code as much as possible, but you can only refine something so far, and sure, it's a gamble, but I'm willing to bet that a fully functional, self-aware AI can't be any smaller than a couple of gigabytes. All we have to do is build it somewhere with terribly slow and unreliable internet. If it tries to get out, we would have plenty of time to notice and simply pull the plug.

That's right, gentlemen: I propose we build the AI in Australia and connect it to the NBN. It's perfect.
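
For anyone curious, here's a quick back-of-envelope sketch of the numbers behind this containment plan. The file size and link speeds below are made-up illustrative assumptions, not real NBN figures:

    # Rough estimate: how long would a multi-gigabyte AI take to copy
    # itself out over a slow uplink? All figures are illustrative guesses.

    def exfiltration_hours(size_gb: float, uplink_mbps: float) -> float:
        """Ideal zero-overhead transfer time in hours."""
        megabits = size_gb * 8_000  # 1 GB (decimal) = 8,000 megabits
        return megabits / uplink_mbps / 3600

    # Hypothetical scenarios: a 2 GB AI on a 1 Mbps link vs. a 100 Mbps link.
    for size_gb, mbps in [(2, 1), (2, 100)]:
        hours = exfiltration_hours(size_gb, mbps)
        print(f"{size_gb} GB over {mbps} Mbps: ~{hours:.2f} hours")

On the slow link the copy takes around four and a half hours, so in this cartoon model there really would be time to notice and pull the plug; on the fast link it's gone in under three minutes.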