How To Bear Responsibility For Artificial Intelligence?

If we don’t act now, it might be too late!

Hemanth
9 min read · Nov 12, 2021

The acclaimed science fiction writer Isaac Asimov devised the now well-known Three Laws of Robotics long before today's artificial intelligence revolution. Asimov's laws are as follows:

Isaac Asimov (1920–1992) — Photo by Phillip Leonian on Wikipedia

First Law

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
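
Notice that the laws form a strict priority hierarchy: the Second Law yields to the First, and the Third yields to both. As a thought experiment, that hierarchy can be written down as ordered rule checks. The sketch below is a deliberately simplified illustration, not a real safety system; predicates like `harms_human` are hypothetical stand-ins for judgments no current AI can reliably make.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action a robot is weighing (hypothetical model)."""
    description: str
    harms_human: bool           # First Law: would acting injure a human?
    inaction_harms_human: bool  # First Law: would *not* acting let a human come to harm?
    ordered_by_human: bool      # Second Law: was this action ordered by a human?
    endangers_self: bool        # Third Law: does acting put the robot at risk?

def permitted(action: Action) -> bool:
    """Evaluate the Three Laws in strict priority order.

    Earlier laws override later ones: obedience (Second Law) and
    self-preservation (Third Law) both yield to the First Law.
    """
    # First Law: never harm a human, through action or through inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # the robot must act, overriding the laws below

    # Second Law: obey human orders (already cleared against the First Law).
    if action.ordered_by_human:
        return True

    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_self
```

Even this toy version exposes the difficulty at the heart of this article: every one of those boolean fields hides a judgment call (what counts as "harm"? which human's order wins?), and someone has to be accountable for how it is made.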

In the decades since Asimov devised these laws, the ethical questions surrounding artificial intelligence (AI) have remained unresolved. The theme grows more relevant by the day, as innovation accelerates the spread of AI applications.

This article aims to get to the heart of the matter by trying to answer, among other questions, a very difficult one: “Who should be held responsible if an artificial intelligence kills a human being by mistake?”

The Catch with…
