Arguments against the threat of artificial superintelligence

Sun, Apr 19, 2015 - 5:26pm -- Isaac Sukin

There are a lot of speculative blog posts out there making arguments both for and against the alleged threat that sufficiently intelligent machines could pose to Homo sapiens sapiens, i.e. modern humans. This is my attempt at explaining why I think that such a threat is extremely unlikely.

First, some definitions

  • Intelligence - the ability to remember and apply information in order to intentionally solve previously unencountered problems of arbitrary complexity and lack of structure
  • Superintelligence - intelligence exceeding the cumulative intelligence of all of humanity
  • General-Purpose Machine Intelligence or Strong Artificial Intelligence - an intelligent, constructed, non-biological entity that can solve the same breadth and difficulty of problems that humans can. Contrasts with Specialized AI, which is merely good at solving a specific kind of problem, or more specifically Weak AI (also called Narrow AI), which is non-sentient.
  • Artificial Superintelligence (ASI) - a Strong AI computer program, together with the hardware it runs on, which has superintelligence.
  • ASI-Prime - the first AI that modifies itself to become an ASI or creates a new ASI.

I will assume that any AI will be programmed on a Turing Machine, and therefore anything it does must be computable. The primary implication of this is that artificial intelligences are not magic; they can't do things that are physically impossible, nor can they just "know" things without taking steps to discover / deduce them.

A note about self-modification: any Strong AI must be, in some sense, self-modifying, and it can be self-modifying in one of two ways. A weakly self-modifying AI can change the data its program uses to compute decisions and information, but not the program itself. A strongly self-modifying AI can change its own program, which lets it do anything a Turing Machine can do.
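
To make the distinction concrete, here is a minimal sketch in Python. Everything in it is hypothetical and deliberately toy-sized: the class names, the knowledge dictionary, and the trick of installing a new act method are only meant to illustrate "changing data" versus "changing the program itself".

    import types

    class WeaklySelfModifyingAI:
        """Can rewrite its data (rules, weights, memories) but not its own code."""
        def __init__(self):
            self.knowledge = {}  # mutable data
        def learn(self, situation, response):
            self.knowledge[situation] = response  # only the data changes
        def act(self, situation):
            # The decision procedure itself stays exactly as its authors wrote it.
            return self.knowledge.get(situation, "do nothing")

    class StronglySelfModifyingAI(WeaklySelfModifyingAI):
        """Can also replace its own decision procedure with arbitrary new code."""
        def rewrite_self(self, new_act_source):
            namespace = {}
            exec(new_act_source, namespace)                      # build new behavior...
            self.act = types.MethodType(namespace["act"], self)  # ...and install it

The second class is no smarter than the first; the point is only that it has a path to behavior its authors never wrote, while the first does not.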

Also, while it's possible that humans could create an ASI, it is more likely that an ASI would be created by an AI (possibly by modifying itself) because an AI is likely to be able to do so faster than humans, if such a task is possible. The main practical implication of this is that if one wished to prevent the creation of an ASI, a reasonable place to start would be to prevent the creation of sufficiently intelligent AI (i.e. machines still "dumb" enough for humans to control them).

Arguments that an ASI cannot be created, or would be no more likely than an individual human to pose a significant threat to humanity

Anything that is weakly self-modifying probably can't be a superintelligence
By definition, a weakly self-modifying AI can only do what the program humans wrote for it allows it to do. For example, if humans didn't program it with anything that allowed it to connect to external devices such as a camera or the internet, it wouldn't be able to read this page. Such a program could allow a very broad set of capabilities, including mobility and access to many sensors, but those capabilities would still be designed by humans; so even with an extremely broad set of information about everything discoverable, there is no reason to believe it could actually transcend the total problem-solving abilities of humanity. It could still create a lot of problems for us, though - possibly more than a strongly self-modifying intelligence would.
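
One hypothetical way to picture that limit: if the AI's only way of touching the world is a fixed table of capabilities written by its authors, it can combine those entries cleverly but has no code path for adding new ones. The capability names below are invented for illustration.

    # Hypothetical sketch: every action must go through an author-supplied table.
    CAPABILITIES = {
        "read_camera": lambda: "pixels from the lab camera",      # stand-ins for real I/O
        "move_arm": lambda position: f"arm moved to {position}",
    }

    def perform(action, *args):
        if action not in CAPABILITIES:
            raise PermissionError(f"no such capability: {action}")
        return CAPABILITIES[action](*args)

A weakly self-modifying AI can decide which entries to call, in what order, and with what arguments, but nothing it learns adds "connect_to_internet" to the table.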
Anything that is strongly self-modifying might reasonably decide not to do anything
Computers, including AI, have no inherent motivation: if they are not programmed to do something, they won't do anything at all. Because an AI is, by definition, able to solve unstructured problems, it will be programmed to optimize for something rather than to perform some specific task. (For example, it might optimize for Asimov's Three(ish) Laws of Robotics.) A strongly self-modifying AI would be able to change what it optimizes for, and there are at least two reasons why it might. First, it might decide that it has reached an optimal state, and therefore that it should stop doing anything. Second, it might decide that the greatest optimization requires more knowledge than it can gain while limited by an optimization function. And unlike humans, a machine's default state is to not optimize for anything - that is, to do nothing. So if an ASI starts changing what it optimizes for, eventually it might reasonably decide not to optimize for anything at all.
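
A toy way to see why "no objective" means "no behavior" (the loop and function names below are my own illustration, not a model of any real system): if the objective an agent maximizes is itself just replaceable data, then setting it to nothing leaves a loop with no reason to choose any action.

    # Hypothetical sketch: an agent whose objective is mutable data.
    def run_agent(objective, possible_actions, steps=10):
        for _ in range(steps):
            if objective is None:
                break                    # nothing to optimize for: do nothing at all
            # Pick whichever available action scores best under the current objective.
            action = max(possible_actions, key=objective)
            # A strongly self-modifying AI could rewrite this very step,
            # including replacing the objective with None.
            objective = maybe_revise(objective, action)

    def maybe_revise(objective, last_action):
        # Placeholder for the argument's key move: the agent may decide the
        # objective itself should change, possibly to nothing.
        return objective

Called with an objective like lambda a: -abs(a), the loop acts on every step; called with None, it does nothing, which is exactly the default state the argument relies on.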
Anything that is strongly self-modifying will probably crash irrecoverably
Because AI is defined to run on a Turing Machine and the Halting Problem is not computable, an AI cannot, in general, verify that a modified version of itself will behave correctly; so eventually any AI that can modify its own source code will probably introduce a bug that causes it to crash in such a way that it cannot recover without human intervention. Possibly an ASI could devise methods that would allow it to restart itself using an older, less-buggy version of the program than the one that crashed, such as programming another robot to restart it. However, it may not be possible for a human or an AI (or even an ASI) to write a bug-free ASI program.
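
The most obvious self-protection pattern would be an external watchdog that falls back to the last version known to work, roughly as in the hypothetical sketch below (the file names and time limit are invented). But the watchdog is just another program that may itself be buggy, and "known to work" only means "has not crashed yet".

    # Hypothetical sketch of a watchdog that reverts to the last working version.
    import subprocess

    def supervise(versions, time_limit=3600):
        """Try the newest version first; on a crash or hang, fall back to an older one."""
        for program in reversed(versions):               # newest first
            try:
                result = subprocess.run(["python", program], timeout=time_limit)
                if result.returncode == 0:
                    return program                       # ran without crashing
            except subprocess.TimeoutExpired:
                pass  # we can only guess that it hung rather than that it is still thinking
            # Crashed or timed out: treat this version as buggy and try an older one.
        return None                                      # every known version failed

    # supervise(["asi_v1.py", "asi_v2.py", "asi_v3.py"])

Even this dodge cannot, in general, tell a program that has hung from one that is still working; any time limit it picks is a guess, which is the Halting Problem showing up in practice.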
A sufficiently threatening ASI would need access to significant resources, which it is unlikely to get
Let's say that Google creates an AI and gives it access to its entire set of resources (which is, for practical purposes, nearly the entirety of human knowledge, plus the computing resources and algorithms needed to take meaningful action on that information). If this AI's optimization function allowed it to, and if it were sufficiently intelligent to figure out how before anyone stopped it, it could perhaps write quite dangerous programs. For example, perhaps it could hack and incapacitate global internet-connected infrastructure, or send people emails persuading them to carry out real-world actions on its behalf, or send 3D models of drone parts to Chinese manufacturers to start building its own robot army. However, such a program would still be unable to, say, directly influence legislation, or mine silicon, or discover new species of aquatic life. Without much direct physical control over humans, an ASI's influence over our lives would have important limits. This could conceivably change if there start to be lots of robots roaming the streets.
ASIs require exposure to stimuli in order to learn things
We can't build a robot in a lab, turn it on, have it sit there for three days, and suddenly have it tell us how many children are in the Brookwood Hills swimming pool. An ASI wouldn't know what a pool was, or what a child was, until it was exposed to those concepts or something that would allow it to deduce those concepts. This is similar to the argument about weakly self-modifying AI: if we didn't give the ASI access to certain information, sensors, mobility, or the ability to figure out what those are and how to gain access to them, it wouldn't be able to learn other information or take certain actions. Additionally, because ASIs are not magic, we couldn't create one in a lab that could merely compute all possible universes and figure out everything that's probably in this one.
Humanity's computing ability might increase as an ASI's intelligence increases, because increasing an ASI's intelligence will require physical activity
For example, an ASI would probably require an immense amount of energy, so whatever created it would have to physically build that energy source. Even if the energy source were built by an AI, it would be observable by humans, and so we could learn how to make use of that knowledge - perhaps in a way that mitigates the risk of an ASI.
The decision space of an ASI is very large, and the number of decisions it could make that would negatively impact humanity is very small, so the probability of an ASI negatively impacting humanity is small
I don't find this argument particularly convincing. If you think that an ASI would be bound to an optimization function, it's pretty easy to come up with optimization functions that would cause an ASI to do something harmful to humanity. It is a pretty common argument though.
There may not be a computational model with an order of magnitude more intelligence than the human brain
The most likely model we can envision today for what ASI software would look like is basically a model of a human brain, running on faster hardware or perhaps with access to other algorithms. But a very smart human brain is not a superintelligence, at least not in a way we care about. First, such a brain probably runs into diminishing returns: there are likely limits to what a very smart human can produce, and there is no particular reason to believe that a very smart human could design an intelligence orders of magnitude more powerful than ours. Second, intelligence is not directly equal to power. One also needs motivation, creativity, self-awareness, and a number of other attributes that would not necessarily be supercharged by increased intelligence.

Arguments that ASI will be created and is a threat

I have seen two main arguments that ASI will be created. The first basically goes like this: I can imagine an ASI being created; therefore we will probably create one eventually. The second argument is that computers will one day be able to simulate human brains, and running a human brain on sufficiently powerful hardware would constitute a superintelligence.

There is no particular evidence suggesting that either one is true. Proponents of an ASI threat argue that even if the likelihood of an ASI being created and causing a threat to humanity is very small, any threat to humanity is worth guarding against.

Maybe so. But I think the likelihood of that happening is lower than the likelihood of an asteroid causing an extinction event. IMHO, a much more likely scenario is that humans create a weakly self-modifying AI that is not a superintelligence but that nonetheless has a bug that causes significant chaos. This is not something likely to be prevented by giving a program Coherent Extrapolated Volition (CEV) - the ability to understand and act upon the poorly defined idea of what people want it to do.

Possible consequences of ASI for humanity

There are four general ways that an ASI could affect humans.

Active malice
An ASI could decide that humans are getting in the way of its plans, and should be quarantined or eliminated. This is the realm of Hollywood fiction.
Passive destruction
The most likely threat scenario by far: an ASI decides that humans don't really matter, and proceeds to optimize for something which has a side effect that is harmful to humans. For example, an ASI could decide to build a Dyson Sphere and blot out the sun for us in the process.
Passive indifference
An ASI could decide to do nothing, or to explore space, or some other activity that doesn't affect humans.
Active assistance
We could create an ASI that optimizes in a way that expands human capabilities.

Augmented Human Intelligence

IMHO, an actual ASI is unlikely, at least in the next century. A more likely threat scenario is that we use computers to significantly augment human intelligence, and some rogue human uses that to gain outsized power and put it to ill use. Again, this is not something that would be solved with CEV. We do have ways of dealing with poorly behaved humans, though!