1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The Three Laws of Robotics were set out by Isaac Asimov eighty years ago, long before artificial intelligence became a reality. But they perfectly illustrate how humans have dealt with the ethical challenges of technology: by protecting the users.