Jul 31, 2017

Is Regulating Artificial Intelligence Necessary?

Elon Musk's comments on the necessity of artificial intelligence regulation have caused a stir among developers, innovators, scientists and politicians. The mainstream takeaway is that when someone in Musk's position, an innovative entrepreneur with a long-standing dislike of government regulation, calls for regulating a specific technology, then that technology must be a threat. So is it?

Causes for Concern

The usual concerns about AI are the concerns typical of most connected technologies: a lack of privacy (think of smart speakers and what they can "hear"), security vulnerabilities (particularly in the Internet of Things) and the ethics of storing personal data in order to strengthen the AI "brain."

Musk's concerns come at AI from a different angle, some would argue a science fiction angle. He presents DeepMind's AlphaGo as an example. DeepMind's technology has outpaced its developers' predictions, and the system is already years ahead of where they expected its learning to be. Such a scenario speaks to a lack of control over the technology and poses what Musk calls "an existential threat."

Musk proposes well-crafted, proactive regulations, grounded in observation, to curb this threat.

No Cause for Concern

Rodney Brooks, roboticist and former director of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), doesn't see the threat as an existential one. As someone who works with AI regularly, he has a more pragmatic view. Artificial intelligence is bound by one major regulator: money. If consumer-facing AI start-ups cannot generate a return on investment, the technology will stall. That stall, as can be seen with Musk's own autonomous cars, will provide the time needed to craft appropriate regulations.

For more information on tech regulations and artificial intelligence, contact us.