Should Artificial Intelligence be regulated?


Elon Musk spoke at SXSW, emphasizing his concerns about artificial intelligence and arguing that it needs to be regulated.

What is the issue?

Elon says AI is more dangerous than nuclear warheads.

Right now, AI is created for specific tasks, such as driving a car, playing a game, responding to our voice commands, or providing personal recommendations. AI today is nowhere near as generally capable as even a moth’s brain, and most people think artificial general intelligence is a long way off. But Elon says “I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me, … It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.” He is concerned that the advent of digital superintelligence is much closer than we think.

Why does it matter?

Because Skynet. Broad, general-purpose AI that is no longer task-specific could be more prone to abuse by humans, and the intelligence of AI could outpace the ability of humans to manage it. So it is important to develop AI safely to ensure it doesn’t get out of control.

What’s next?

The ethics of AI have been debated for years, and that debate will continue. Tech companies are working with experts from various disciplines (scientists, futurists, ethics specialists) to try to come to grips with ethical standards for AI. The key question – if Elon is right – is whether this needs to be escalated to lawmakers, and if so, how soon? Should this be dealt with on a worldwide treaty basis – such as a ban on the use of AI in weapons?

Cross-posted to Slaw