Elon Musk just told a group of America's governors that we need to regulate AI before it’s too late

Authored by recode.net and submitted by jacobabbey

Elon Musk doesn’t scare easily — he wants to send people to Mars and believes that all cars will be driving themselves in the next ten years. He’s excited about it!

But there is something that really scares Musk: Artificial Intelligence, and the idea of software and machines taking over their human creators.

He’s been warning people about AI for years, and today called it the “biggest risk we face as a civilization” when he spoke at the National Governors Association Summer Meeting in Rhode Island.

Musk then called on the government to proactively regulate artificial intelligence before things advance too far.

“Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal,” he said. “AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”

“Normally the way regulations are set up is a whole bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry,” he continued. “It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization.”

Musk has been concerned about AI for years, and he’s working on technology that would connect the human brain to the computer software meant to mimic it.

Musk has even said that his desire to colonize Mars is, in part, a backup plan in case AI takes over on Earth. But even though he’s shared those concerns before, they hit their mark today in front of America’s governors, several of whom asked Musk follow-up questions about how he would recommend regulating an industry that is so new and, at the moment, poses primarily hypothetical threats.

“The first order of business would be to try to learn as much as possible, to understand the nature of the issues,” he said, citing the recent success of AI in beating humans at the game Go, which was once thought to be nearly impossible.

AI wasn’t the only topic of conversation. A large portion of the conversation was about electric vehicles, which Musk’s company, Tesla, is hoping to perfect.

Musk said that the biggest risk to autonomous cars is a “fleet-wide hack” of the software controlling them, and added that in 20 years, owning a car that doesn’t drive itself will be the equivalent of someone today owning a horse.

“There will be people that will have non-autonomous cars, like people have horses,” he said. “It just would be unusual to use that as a mode of transport.”

Here’s the full interview below. Fast-forward to the 43-minute mark for Musk’s talk.

RHarrison- on July 16th, 2017 at 10:48 UTC »

Every time I hear a big tech figure talking about the necessity of AI regulation, it makes me wonder if current AI research is much further along than we know.

BLSmith2112 on July 16th, 2017 at 09:25 UTC »

While I agree, after watching the whole interview Elon gave, it seems he didn't have a list of regulations that were specifically needed. Perhaps this was intentional, as the only solution I see is to open source all AI-centric code. That is probably what Elon meant by saying something along the lines of "this type of regulation will not make most corporate entities happy, but my company OpenAI wouldn't mind." Saying this, however, to a bunch of people who don't understand AI is fruitless, and by advocating for continued monitoring through government entities, politicians will be better exposed to the threats we face from a variety of sources, instead of just one guy sitting on a stage.

Musk's previous comments on this issue are not that a Terminator is going to take over the world, but that the first corporation to develop an advanced intelligence will have exponentially more power than anyone else and could do immense harm to civilization. But if everyone, including individuals, could harness that very same power, people as a whole couldn't be taken advantage of.

The example Musk gave in this talk was something similar to this: "A corporation, or government, could own an AI built for the sole purpose of starting a war against countries it opposes. For example, it could reroute a passenger plane's navigation system over a foreign nation, and then send an alert to the nation the plane is flying over that an enemy plane has entered their country's airspace and to shoot on sight." It'd be able to do that multiple times a second against various nations and start a huge conflict with its enemies, while the intelligence could be doing 50 completely different things at the same time, and no human would be able to keep up.

lazylion_ca on July 16th, 2017 at 09:08 UTC »

My worry about the government regulating AI is that they'll handle it the same way they did stem cell research in the 90s.