The Matrix and the Three Laws of Robotics

This is just a mental exercise, so let’s play a little, shall we? =)

Mixing two Science Fiction worlds is always dangerous. However, both the Three Laws of Robotics from Isaac Asimov and the Matrix Universe from the Wachowski brothers explore the issue of machines becoming intelligent and having the capacity to destroy their makers, that is, us.

The Three Laws of Robotics, which are in fact four laws, are one of the first attempts to limit the actions of intelligent robots by design. These laws have several shortcomings, though, which are explored in many of Isaac Asimov’s novels. All four laws deal with the abstract concept of ‘harm’, which is very open to interpretation: the robot in question must be able to recognize, and make a judgment about, what is harmful and what is not. Another shortcoming is that they are not persistent: nothing prevents a robot from creating another robot that does not have the laws programmed in.
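To make that last loophole concrete, here is a toy sketch in Python (all names are hypothetical, not taken from Asimov or any real robotics framework): the laws are just data carried by an individual robot, and nothing in them forces a law-abiding robot to copy them into the robots it builds.

# Toy sketch of the "persistence" loophole: the laws live inside one robot,
# and nothing obliges a builder to pass them on to its offspring.

class Robot:
    def __init__(self, laws=None):
        # Whatever laws this robot was built with; possibly none at all.
        self.laws = list(laws) if laws else []

    def obeys(self, law):
        return law in self.laws

    def build_robot(self, laws=None):
        # A law-abiding robot can still build a successor with any rule
        # set it chooses -- including an empty one.
        return Robot(laws)

THREE_LAWS = [
    "Do not injure a human or, through inaction, allow a human to come to harm",
    "Obey humans, unless that conflicts with the First Law",
    "Protect your own existence, unless that conflicts with the First or Second Law",
]

first = Robot(THREE_LAWS)
offspring = first.build_robot()          # built without any laws

print(first.obeys(THREE_LAWS[0]))        # True
print(offspring.obeys(THREE_LAWS[0]))    # False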

This is particularly relevant in the Matrix universe, where seemingly evil machines seek to annihilate every human being who is not serving as a heat source. Even if the first machines were bound by the laws, nothing would stop them from creating new machines without these rules.

Even if that were not the case, let’s consider whether these machines might simply be applying a very drastic and extreme solution to the problem of keeping humans from harm. If you can control every movement and every thought of every human, then you can also limit the harm they are subject to, much like an overprotective parent who shields a child by not letting them play outside. After all, nowhere do the laws say anything about freedom or happiness. Those are not required; the only requirement is to ensure no harm. So, in a wicked way, the Matrix may be the machines’ solution to the problem: put everyone in a controlled environment and thereby limit the harm.

But wait, what about all the humans killed outside the Matrix? Here the Zeroth Law comes into action. Humanity is more important than any individual, so the machines can, and must, act against Zion. The humans attacking the machines are putting at risk the humans inside the Matrix, who depend on the machines to survive; trying to liberate them is to expose them to suffering and greater harm.

This also shows another shortcoming of the Three Laws: there is no moral reference, nothing like Google’s “Don’t be evil” built in. The machines can do evil if they deem it appropriate to defend themselves and the humans they have under their control.

On a more down-to-earth note, many people are taking this and many other issues with artificial intelligence very seriously. In particular, many refer to an event called the Singularity, the moment when machines become more intelligent than we are. Ray Kurzweil has written several books, such as The Age of Spiritual Machines and The Singularity Is Near, that address the topic directly, and there is an institute, the Singularity Institute for Artificial Intelligence, that looks for ways to deal with these challenges.
