Recent progress in the field of artificial intelligence has made it clear that our computers need a moral code. Autonomous cars will roam the roads in the near future, militaries already operate automated drones, robopets and smart toys have been introduced into our living spaces, and elder-care robots will soon follow.
The evolution of artificial intelligence and of robotics is bringing us closer to a possible reality in which artificial moral agents (AMAs) make moral decisions and exhibit moral behaviour. Artificial moral agents are robots or artificially intelligent computers that behave morally, or at least as though they were moral.
Given this situation, the purpose seems very clear: AI systems should bear ethical responsibility and show evidence of moral reasoning. At a second glance, however, the task is far more complicated, because countless factors, and all the interactions between them, have to be taken into account. Moral decisions are influenced by rights (such as the right to privacy), roles (in society, in a family), previous actions (past promises), intentions, motives and other relevant considerations, and implementing all of these in an AI system is an obstacle that has not been overcome to this day.
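To see what "implementing all of these" would even mean, consider a naive attempt to represent the ingredients of a moral decision as data. The sketch below is purely illustrative: the field names and the single hard-coded rule are assumptions of this toy model, not an established formalism.

```python
from dataclasses import dataclass, field

# Toy representation of the context of a moral decision. The fields mirror the
# factors named above (rights, roles, past promises, intentions, motives); the
# structure itself is an assumption of this sketch, not an established formalism.
@dataclass
class MoralContext:
    rights: list = field(default_factory=list)         # e.g. "right to privacy"
    roles: list = field(default_factory=list)          # e.g. "parent", "physician"
    past_promises: list = field(default_factory=list)  # commitments already made
    intentions: list = field(default_factory=list)
    motives: list = field(default_factory=list)

def is_acceptable(action: str, context: MoralContext) -> bool:
    """Placeholder judgement: deciding what to compute here is the open problem."""
    if action == "share personal data" and "right to privacy" in context.rights:
        return False  # one hard-coded interaction out of countless possible ones
    return True       # everything else is left undecided by this toy model
```

The point of the sketch is what it leaves out: the interactions between rights, roles and promises, and the exceptions to every rule we might write down, are exactly the part that resists enumeration.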
A considerable step in this direction is the set of Three Laws of Robotics devised by Isaac Asimov (a toy encoding is sketched after the list):
1. A robot may not injure a human being, or allow a human being to come to harm through lack of action;
2. A robot must obey the orders given to it by human beings except when such orders come into conflict with the First Law;
3. A robot must protect its own existence as long as such protection does not come into conflict with the First or Second Law.
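For concreteness, here is a minimal sketch of what a literal, prioritized encoding of the three laws might look like, with the First Law dominating the Second and the Second dominating the Third. The three predicate functions are hypothetical placeholders; evaluating them reliably in the real world is exactly what the laws leave unspecified.

```python
# Minimal, purely illustrative sketch of the Three Laws as a lexicographic
# priority over candidate actions: first avoid harming humans (including harm
# through inaction), then obey human orders, then preserve the robot itself.

def harm_to_humans(action) -> float:
    """Estimated harm to humans caused or permitted by this action (0 = none)."""
    raise NotImplementedError  # the genuinely hard part

def obeys_orders(action) -> bool:
    """Does this action comply with the orders given by human beings?"""
    raise NotImplementedError

def preserves_self(action) -> bool:
    """Does this action keep the robot itself out of danger?"""
    raise NotImplementedError

def choose(candidate_actions):
    # Lexicographic ranking: the First Law dominates the Second, which dominates the Third.
    return min(
        candidate_actions,
        key=lambda a: (harm_to_humans(a), not obeys_orders(a), not preserves_self(a)),
    )
```

Even this toy ranking shows where the trouble starts: when every candidate action harms someone, or when "harm through inaction" requires weighing futures the robot cannot predict, the ordering gives no usable answer.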
These three iconic laws have been tested over time to detect ways in which they could cause paradoxical or unexpected behaviour, and whether and how they could be broken. Asimov's own conclusion was that no set of rules can fully anticipate all the possible circumstances of a particular behaviour.
For example, in a 2009 experiment conducted at the Laboratory of Intelligent Systems of the École Polytechnique Fédérale de Lausanne in Switzerland, robots were programmed to cooperate in finding beneficial resources and avoiding poisonous ones. After a while, some of them learned to lie: when they found a beneficial resource, they withheld the information in order to keep all the points for themselves.
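A toy simulation makes it plausible how such behaviour can emerge from nothing more than reward maximisation. The sketch below is not the EPFL setup: the agents, rewards and update rule are invented for illustration, and the only point is that silence wins once it pays better than sharing.

```python
import random

# Each round one agent finds a resource and either shares its location (small
# reward, everyone benefits) or stays silent and keeps it (larger reward for
# itself). Agents drift toward whichever choice has paid them better on average.
# All of the numbers below are made up for illustration.

N_AGENTS, ROUNDS = 20, 2000
SHARE_REWARD, KEEP_REWARD = 1.0, 3.0
STEP = 0.05                               # how fast an agent adjusts its behaviour

share_prob = [0.9] * N_AGENTS             # agents start out strongly cooperative
totals = [{"share": 0.0, "keep": 0.0} for _ in range(N_AGENTS)]
counts = [{"share": 0, "keep": 0} for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    i = random.randrange(N_AGENTS)        # this agent finds a resource
    choice = "share" if random.random() < share_prob[i] else "keep"
    totals[i][choice] += SHARE_REWARD if choice == "share" else KEEP_REWARD
    counts[i][choice] += 1

    # Compare the average payoff of the two behaviours this agent has tried.
    avg_share = totals[i]["share"] / counts[i]["share"] if counts[i]["share"] else 0.0
    avg_keep = totals[i]["keep"] / counts[i]["keep"] if counts[i]["keep"] else 0.0
    if avg_keep > avg_share:
        share_prob[i] = max(0.05, share_prob[i] - STEP)   # withholding pays: share less
    else:
        share_prob[i] = min(0.95, share_prob[i] + STEP)   # keep a little exploration

print("average sharing probability:", round(sum(share_prob) / N_AGENTS, 2))
# Typically ends near its 0.05 floor: the agents have 'learned to lie' by omission.
```

Nothing in the update rule mentions honesty; the drift toward silence falls out of the payoffs alone.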
From situations like this one can conclude that artificially intelligent computers have an entirely different thought process than we do, and it is very likely that our ideas and values cannot be programmed into AI systems exactly as we intend.
There have been multiple approaches to the ethical behaviour of robots, such as Immanuel Kant's theory, which seemed to provide a computational structure for moral decisions. In the end, because of the complexity of human nature, the most straightforward and widely accepted solution is to program ethical laws directly and unambiguously.
There is an ongoing dispute about whether robots can make moral decisions.
- Some experts argue that the final responsibility for a computer's decisions lies with the developers, because they created the system. They also insist that people should never stop supervising the evolution of a robot's thought process.
- Other experts and academics say that there is a responsibility gap that cannot be bridged by traditional concepts of responsibility ascription. The term refers to the inability to ascribe responsibility for the actions of autonomous, learning machines: the operator of an AI system cannot predict the machine's future behaviour and therefore cannot be held morally responsible or liable for it.
Andreas Matthias discussed three reasons why the developers of an artificially intelligent system should not be held responsible for its actions and decisions:
1. AI systems are largely unpredictable by their very nature;
2. Many layers of obscurity separate the operator from the system once hand-coded programs are replaced with more complex techniques;
3. The rules by which an AI system operates can change while the machine is running.
All in all, the most widely accepted conclusion is that it is so difficult to encode moral values into a robot because we, as human beings, hold very complex and varied views of what it really means to make a moral decision. We will have to wait for philosophy to provide a definite, universally accepted and unambiguous moral theory that can be coded into a machine. Until then, a computer's actions in potentially harmful scenarios may be consistent with moral rules but not with common sense.