Keeping AI "safe"

Isaac Asimov, the prolific author of more than 500 science-fiction and non-fiction books and the man who coined the term "robotics", realized very early on that "intelligent" robots could cause as much harm as good if "programmed" the wrong way.

For this reason, he penned the famous "three laws" in 1942:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Later, Asimov added a fourth law, the "zeroth" law, which takes precedence over the other three: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

The terms "Artificial Intelligence" and "Artificial Cognition" didn't exist in 1942 - Asimov had mechanical robots in mind, of the kind that he described in "I, Robot", the book that made these laws famous. And while especially the "zeroth" law is a pretty good fit for what we think of when we picture evil AI's (The Matrix, Terminator), a new think-through is certainly necessary - and imminently required - to define the ethical use of AI.

Satya Nadella, CEO of Microsoft, has put together his own set of rules to keep future AI and, more importantly, Cognitive Systems in check.

The big question is: will this stop a truly evil, mad scientist from developing an AI that disobeys any of these rules and laws? Hardly. With AI available online in a pay-as-you-drink model, a future Armageddon along the lines of Terminator certainly doesn't seem so "science-fictiony" anymore…
