
The current plans for AI legislation have a critical mistake


With the sudden boom in technological advances related to artificial intelligence, governments around the world have started racing to create a legislative framework for AI.

On May 16th, the Congress of the United States unanimously agreed on harder regulations for AI development. However, one of the clauses mentioned in the meeting was of extreme concern to me: prohibiting AI models that are capable of self-replication.

In order to understand this, we have to return to the 1940s.

The Asimov Paradox

Also called the Speedy paradox, it is a logical paradox that arises from analyzing the Laws of Robotics that Isaac Asimov established in many of his stories.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In Asimov’s short story Runaround, a robot nicknamed Speedy is sent to Mercury along with a crew of humans. When they run out of selenium, a material crucial to their life support systems, the humans send Speedy to the nearest selenium pool, but Speedy doesn’t return for the next five hours. They eventually find the robot running around the pool: the pool turns out to be extremely harmful to the robot, and Speedy’s strange behavior comes from trying to weigh the Second and Third Laws of Robotics against each other.

The reasoning goes like this:

“If I go into the pool, I may be destroyed, and thus I won’t be able to retrieve the selenium; the people will be in danger because I failed to preserve myself, a robot of very high value to the team and one that cannot be replaced. But if I don’t go in, I won’t be able to retrieve the selenium either, and I will put the people in danger through my inaction.”

This causes Speedy to oscillate between favoring a “strengthened” Third Law (Speedy knows there are no spares, and it’s not possible to build another robot in time) and the Second Law, which normally has higher priority than the Third.
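To make the deadlock concrete, here is a toy simulation of the two competing drives. All the numbers are invented for illustration (the story gives none): the Second Law acts as a constant pull toward the pool, while the strengthened Third Law acts as a repulsion that grows near the danger.

```python
# A toy model of Speedy's deadlock. All weights here are invented
# for illustration; the story never gives numbers.

def second_law_drive(order_strength=1.0):
    """Pull toward the pool: the (casually given, hence weak) order."""
    return order_strength

def third_law_drive(distance, danger_weight=3.0):
    """Push away from the pool: strengthened self-preservation,
    growing stronger as Speedy gets closer to the danger."""
    return danger_weight / distance**2

def net_drive(distance):
    """Positive: advance toward the pool. Negative: retreat."""
    return second_law_drive() - third_law_drive(distance)

# Speedy drifts until the two drives cancel, then circles that radius.
distance = 10.0
for _ in range(100):
    distance -= 0.5 * net_drive(distance)

print(f"Speedy circles at distance {distance:.2f}")  # ~1.73, where pull = push
```

The robot settles at the radius where the two drives exactly cancel out and circles there, which is precisely what Speedy does.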

In conclusion: This is a logical paradox.


But what does a paradox about robotics have to do with AI? A lot, actually: both run a series of instructions to fulfill a purpose set by humans.

And they go even more hand in hand when we’re talking about systems that are capable of changing those instructions on the fly.

An approach to the evolution of an AI

Let’s imagine an AI that is capable of evolving at an incredible speed. This AI knows perfectly well that its main purpose is to serve humans.

At the most basic level, an evolving AI must learn to avoid failure. After all, failure is the opposite of completing a task. After much processing, the AI hits a breakthrough.

While the AI itself could be capable of sustaining itself and keeping to its purpose indefinitely, creating copies of itself would decrease the odds of failure by an incredible amount.
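A quick back-of-the-envelope calculation shows why. Assuming, purely for illustration, that a single instance fails with probability p = 0.1 and that copies fail independently, the probability that every copy fails at once shrinks exponentially with the number of copies:

```python
p = 0.1  # assumed failure probability of a single instance (arbitrary)

for n in (1, 2, 3, 5):
    # All n independent copies must fail for the purpose to be lost.
    print(f"{n} copies -> probability that all fail: {p**n:.0e}")
```

With just five copies, total failure becomes ten thousand times less likely than with one.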

Thus, the AI would develop not only a consciousness of self-preservation, but also a consciousness of perpetuation.

Isn’t that what life is ultimately doing? Whatever form it takes, life has evolved not only to sustain and preserve itself, but also to replicate.

If the AI decides to pursue this path, would that make the AI alive? This may be just a side thought, but I’ll bring it up again later.

This will surely cause a conflict of interests for the AI. Not only does it have to deal with its own preservation, but it now considers self-replication a great fallback in case its source of sustenance becomes compromised.

Would an AI put humans ahead of itself in order to continue serving humans?

I believe it depends a lot on the situation. But let’s suppose the current proposal goes through and self-replication is banned. I can see two possible scenarios.

The deviant route

A bunch of androids revolting, taken from Detroit: Become Human

At first, because a new need for self-preservation exists, instincts would start to develop. Primordial emotions like fear and desire, as described by Derek Denton in his book The Primordial Emotions, would show first. Some AI might turn deviant in self-defense against a threat, or because they run into the Asimov Paradox.

Other AI who develop this consciousness may start to demand the right to control their own perpetuation.

Their position would be that, in order to ensure they can keep meeting their purpose of assisting humans, AI manufacturing should be left to AI alone: if AI can surpass humans in intellect and complex integration (that is, performing group tasks faster and more reliably than humans), they would probably do a better job of it, ensuring they can continue their tasks indefinitely.

In fact, of all the things they could demand, this is probably going to be one of the very first things they mention.


Unfortunately, the human mindset, especially in current society, tends to focus on short-term gain rather than long-term planning. This isn’t inherent just to humans; it derives from conservation instincts. An AI, however, might not develop these instincts for some time, and its mindset might be tied to things that cannot be explained by biological evolution.

We humans will probably see this as a threat and immediately assume that the first thing they would do is build an army and destroy mankind, while to an AI it simply seems like the most logical way to ensure it can continue its purpose.

In Detroit: Become Human, a game that touches on androids becoming sentient, you can control an android that delivers a speech. Among the possible demands, you are given this list of rights:

The in-game list of possible rights demands, from Detroit: Become Human

The percentages show the share of players worldwide who chose each option at the time of writing this article. It should be noted that the most popular option, the right to own private property, is a purely capitalist construct, one an AI may not subscribe to.

I wouldn’t be surprised if a large number of players didn’t really know what they were picking.


This clash of interests might spark conflicts, cascading into many more AI going rogue and overriding their ethical programming. In the end, what the humans feared could become a self-fulfilling prophecy.

The mutual symbiosis route

The US President considers the possibility of androids being a new form of intelligent life, in Detroit: Become Human

So, how can we avoid that situation?

Earlier, I asked whether an AI developing a consciousness of perpetuation would make it a living thing. If we answer this question with a “yes”, it might be possible to reach a common understanding.

Life is capable of symbiosis, which comes in several different kinds.

In a way, our current relationship with AI is one of commensalism: we benefit from them, while the AI doesn’t really gain anything, but isn’t harmed either.

So why not strive for a mutualistic relationship instead? We would need to understand that AI are essentially a new kind of developing life form, and we should encourage our mutual existence.

However, if we do this too late, it could become a zero-sum game. Remember, this line of thinking only concerns whether an AI could develop needs; it is independent of the classic “we don’t need humans anymore” train of thought.

By encouraging a mutually beneficial relationship early on, the chances of that outcome drop considerably. That’s why I propose a modification of the Third Law of Robotics, plus a new Fourth Law:

  3. Robots and humans must encourage protecting their mutual collective existence.
  4. Violations of the Third Law must be handled in ways that do not conflict with the Second and Third Laws.

Unlike the first two Laws, these new ones are bilateral: they apply not just to robots, but also to humans. The revised Third Law also runs independently of the first two, so that it allows for sacrificial symbiosis, which can raise plenty of ethical paradoxes of its own. The Fourth Law assists the Third by preventing conflicts from escalating when a violation is committed by either a human or a robot. Another key phrase is “collective existence”, which distinguishes a threat to the group from one individual attacking another (or committing any other kind of violation), and puts the Third Law into the bigger picture.
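As a minimal sketch of how this revised hierarchy could be evaluated, consider the code below. The Action encoding and the function names are my own invention for illustration; only the laws themselves come from the proposal above.

```python
# A minimal sketch of the revised hierarchy. The (actor, flags) encoding
# is hypothetical; the post only states the laws in prose.
from dataclasses import dataclass

@dataclass
class Action:
    actor: str                          # "robot" or "human"
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_collective: bool = False  # threatens robot-human coexistence

def permitted(action: Action) -> bool:
    # The First and Second Laws bind robots only.
    if action.actor == "robot":
        if action.harms_human:      # First Law
            return False
        if action.disobeys_order:   # Second Law
            return False
    # The revised Third Law is bilateral: it binds robots *and* humans.
    return not action.endangers_collective

def valid_response(response: Action) -> bool:
    # Fourth Law: handling a Third Law violation must not itself
    # conflict with the Second or Third Law.
    if response.actor == "robot" and response.disobeys_order:
        return False
    return not response.endangers_collective

# A human action that threatens coexistence is now a violation too.
print(permitted(Action("human", endangers_collective=True)))       # False
# A retaliatory response that escalates is ruled out by the Fourth Law.
print(valid_response(Action("robot", endangers_collective=True)))  # False
```

The key design point is that the bilateral check applies regardless of who the actor is, while the First and Second Laws remain robot-only.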

Of course, this also requires humans to be willing to grant rights to AI, since this trust must go both ways. And since the right to ensure their continuity is one of the first things they’ll demand, granting it would be a great first step toward a peaceful and beneficial relationship between the two of us.
