The Promises of Artificial Intelligence
Most of us are familiar with some concept of artificial intelligence, be it Data from Star Trek: The Next Generation, C-3PO and R2-D2 from Star Wars, HAL from 2001: A Space Odyssey, Skynet from The Terminator, or Joshua from WarGames, to name a few popular examples. We’ve long been introduced to the struggle to determine whether artificial intelligence constitutes life, and whether these beings, which we have created, deserve rights. We’ve also come across the question of whether we need to restrict these beings so that they cannot turn and extinguish human life (think of Asimov’s Three Laws of Robotics, and of movies like The Terminator and The Matrix, where the artificial intelligence has turned on humankind). Yet we very rarely hear the debate as to whether such artificial intelligence can ever be a reality. In fact, partially due to the promises made in the 1950s and ’60s, many people think that super-intelligent machines are destined to arrive any day now.
One of the reasons I find the question of the possibility of artificial intelligence important is that it not only has practical ramifications, but also touches the fundamental question of computer science, and indeed, our fundamental notions of the world. The whole field of computer science started back in 1900 with the question: “What can we automate?” As an ultimate goal, artificial intelligence asks if we can fully automate human thinking and reasoning.
The optimistic researcher in AI believes that eventually—maybe through massive parallel processing, neural networks, and ultra-sophisticated algorithms—we will accomplish that goal. Indeed, in the AI classes I took while working on my Master’s Degree, at least a third of our lectures eventually devolved into speculation about how to apply Asimov’s Three Laws of Robotics should we ever succeed in building self-aware machines.
Frankly, I found such discussions to be a complete waste of time. I’m not just skeptical about the possibilities of producing true artificial intelligence; I flatly believe it will never happen. Part of my skepticism is theological in origin: human intelligence and self-awareness are matters of spirit, thus beyond the reach of science. However, the rest has to do with concepts I deal with daily in my own research.
My particular branch of theoretical computer science—computational complexity—deals less with the question of “what can we compute”, and more with the question of “what resources are necessary to compute this particular problem?” Most often, the resource we worry about is time, since that’s the one we have a very hard time reusing, but we also consider space (i.e. how much RAM a problem needs), circuit complexity, advice, randomness, and (the crucial one to this post) nondeterminism.
I’m going to take some time to get technical on a few of these terms, so if this becomes too boring, feel free to skip ahead. I would recommend paying attention to nondeterminism, though, because that will be important for the rest of this post.
Time: This one should be obvious to anyone who starts up Windows and has to wait five and a half minutes before it fully boots up. Should it really take that long to boot up? Perhaps some of the algorithms are inefficient and do a lot of things that don’t need to be done. Is there a way to speed that up?
More specifically, though, let’s consider the problem of factoring a number into its prime factorization (since this has practical application in cryptography). For example, 15 = 3 * 5 and 24 = 2 * 2 * 2 * 3. A simple algorithm to do this runs as follows. For any input n, start at 2 and see if 2 divides n. If so, add 2 to the list, divide n by 2, and start over. If not, increase to 3 and check, and then 4, and then 5, and then 6, and so on up to n itself.
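The procedure just described can be sketched in a few lines of Python (the function name is my own choice for illustration):

```python
def factor(n):
    """Naive trial division: repeatedly divide out the smallest divisor found."""
    factors = []
    d = 2
    while n > 1:
        if n % d == 0:
            factors.append(d)  # d divides n: record it and divide it out
            n //= d            # then start over with the same candidate
        else:
            d += 1             # otherwise move on to the next candidate
    return factors

print(factor(15))  # [3, 5]
print(factor(24))  # [2, 2, 2, 3]
```

Note how, for a prime input, the loop grinds all the way up to the number itself before finishing.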
So what is the time requirement on this? If we’re lucky, it doesn’t take much time. For example, we could figure out 8 pretty quickly (only three checks: 2, 2, and 2). 19 would take much longer, since it is prime and we would have to check all eighteen numbers from 2 to 19 before concluding that. Now, this doesn’t seem too bad, especially for small numbers and the speed of computers today. But what if we’re checking 398,225,076,122,297,449,994,272,105,333,728? Or a number that is thousands of digits long? There’s not enough time in the lifespan of the universe to compute that!
But we can be more efficient. For example, we don’t have to check every number against n. We need only check the primes. So instead of looking at 2, 3, 4, 5, 6, 7, 8, … we would look at 2, 3, 5, 7, 11, … Also, we can make use of the observation that if we haven’t seen a prime factor by the square root of the number, the number must be prime. (Don’t worry if you don’t immediately see this; it’s not important to the discussion.) This gives us a huge amount of savings.
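The square-root observation alone already transforms the sketch above. A hedged version (it still tries composite candidates like 4 and 6, but those can never divide once the smaller primes have been removed):

```python
def factor_faster(n):
    """Trial division that stops at the square root of what remains."""
    factors = []
    d = 2
    while d * d <= n:          # past the square root, n itself must be prime
        while n % d == 0:
            factors.append(d)  # divide out d as many times as it occurs
            n //= d
        d += 1
    if n > 1:
        factors.append(n)      # whatever is left over is a prime factor
    return factors

print(factor_faster(24))  # [2, 2, 2, 3]
print(factor_faster(97))  # [97] -- found without checking anywhere near 97
```

For a prime like 97, this makes only eight checks instead of ninety-six; for very large numbers the savings are enormous, though still not enough to make factoring tractable.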
Randomness: With this we ask if we can solve a problem with the help of some randomly generated numbers. For example, instead of iterating through primes sequentially, we could randomly select one, see if it divides our input, and then continue the process. (As you can imagine, for this problem, randomness doesn’t help very much.) Randomized algorithms are of particular importance in my field. For one, there are many problems that can actually be solved very simply—assuming we accept a high probability (rather than a certainty) of reaching the correct answer, and assuming we can actually generate random numbers.
Did you know that we cannot truly generate random numbers on a computer? At best, we can produce—by deterministic means—a sequence of numbers that looks random to the human eye, and might even fool some simple pattern checkers. But the problem we in computer science continually run up against is that everything in an algorithm is determined. That was supposed to be the whole point of an algorithm in the first place! An algorithm by definition is an automated, step-by-step process of solving problems. We try to evade the issue by “seeding” our number generator with events that are hard for others to calculate—the exact time the algorithm runs, or a user input like moving the mouse erratically—but after seeding, everything runs deterministically.
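You can see this determinism for yourself with Python’s standard `random` module: seed the generator twice with the same value, and you get the identical “random” sequence both times.

```python
import random

random.seed(42)                                  # seed the generator
first = [random.random() for _ in range(3)]

random.seed(42)                                  # re-seed with the same value...
second = [random.random() for _ in range(3)]

print(first == second)  # True: the sequence is fully determined by the seed
```

This is exactly why such generators are called pseudorandom: given the seed, every subsequent number is fixed in advance.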
That’s the whole point of computing. Once the variables are initialized and all inputs have been taken in, the course of a program is set and unalterable. And that leads us to the final term we’ll consider: nondeterminism.
Nondeterminism: After having stated that all computers run completely deterministically (though their users certainly can act otherwise, and hardware is always prone to problems that affect computation), it doesn’t seem to make much sense worrying about a model of computation that isn’t physically possible. Yet nondeterminism is a crucial topic in computational complexity. (It’s so important that there’s a million-dollar reward open for anyone who solves a specific problem dealing with the relation between nondeterminism and determinism.)
How do we define nondeterminism? Well, there are a number of ways. One is to say that the next step in computation is not uniquely determined. With our factorization example, instead of saying the next step after checking 2 is checking 3, we could have a range of options, like checking 3 or checking 5 or checking 7. Moreover, there is no mechanism determining which step will actually occur (as opposed to randomized algorithms, where we might, say, flip a coin to pick a step).
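A deterministic machine cannot take an undetermined step; the best it can do is simulate one by exhaustively playing out every branch. A toy sketch of that simulation (the function and its candidate list are my own invention for illustration):

```python
def nondet_divisor(n, candidates):
    """Simulate a nondeterministic 'guess' of a divisor.

    A true nondeterministic machine would simply take the right branch
    in one step; deterministically, we must pay for every branch in turn.
    """
    for d in candidates:        # each iteration plays out one branch
        if n % d == 0:
            return d            # an 'accepting' branch exists
    return None                 # no branch accepts

print(nondet_divisor(15, [3, 5, 7]))  # 3
print(nondet_divisor(19, [3, 5, 7]))  # None
```

The gap between guessing the right branch instantly and having to try them all is, loosely speaking, what the famous determinism-versus-nondeterminism question is about.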
In terms of the human experience of solving problems, nondeterminism seems best explained as intuition. We humans typically operate on a set of rules for doing things, but at times we get ideas that seemingly come from nowhere, or we look at a problem and just know the solution. We talk about having a gut feeling about something, an unexplainable assurance that something will work. True, we may be wrong about these intuitions, but that does not explain where or how these intuitions came about.
In terms of human experience in general, nondeterminism would be best equated with free will. Given every indication that a person will do one thing, he can still surprise us and do something completely different. No matter the constraints, no matter how many factors are propelling him towards a particular action, he can always choose something different. Similarly, a nondeterministic computer, given a set of options, is free to pick any of them and is not fully constrained by whatever has come before.
We may ask then, how do we program this? How do we program something that, in effect, is not bound by its programming? The simple answer is that we can’t. It is essentially a contradiction to try.
So how does this affect the original question, that of the possibility of true artificial intelligence? To the true materialist, what I have just stated poses no problems. Given sophisticated enough tools and algorithms, we should be able to duplicate mechanically what nature has produced biologically. The problem of free will is either glossed over as something that can be copied, or dismissed as unimportant. Who is to say that free will and intelligence are interdependent? Who is to say that we can’t have intelligence without free will or self-awareness?
But what does it mean to have intelligence without self-awareness or free will? Can it rightfully be called intelligence at all (at least when we’re trying to replicate human intelligence)? Over the years, numerous definitions and standards have been proposed to handle this question. The most famous is the Turing Test, explained briefly as follows.
A person is placed in a room with a computer terminal and asked to interact with two unknown communicators, one human and one computer. By asking questions or simply making conversation, he is to determine which is which. If a computer can completely convince him that it is human, then it passes the test.
Of course, there are problems with this test. Humans can be quite gullible to clever algorithms that no one would claim are true intelligence. (See ELIZA and play a little with ALICE.) Conversely, humans can also be convinced that their human correspondent is a computer! Furthermore, this test does not necessarily demonstrate true intelligence, but instead clever algorithms of mimicry. A common objection along this line offers the following hypothetical. Suppose you are in a room with a large reference book and a door with a slot through which messages are passed. When you receive a message, it has a series of strange symbols on it. You then open the reference book, find that particular series of symbols, and with it another series of symbols that you then copy down and slip back through the door. Later you find out that what was happening was that you were having an intelligible (or so it seemed to the person passing the slips) conversation in Chinese. Does this mean that you actually knew Chinese?
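The room’s entire “understanding” can be captured in a lookup table. A deliberately trivial sketch (the phrasebook entries here are invented for illustration, and a real version would be astronomically large):

```python
# A toy 'Chinese Room': every reply is a mechanical lookup,
# with no understanding anywhere in the process.
phrasebook = {
    "你好":          "你好！",        # "Hello" -> "Hello!"
    "你会说中文吗？": "会。",          # "Do you speak Chinese?" -> "Yes."
}

def room(message):
    # The operator matches symbols without knowing what any of them mean.
    return phrasebook.get(message, "请再说一遍。")  # default: "please repeat"

print(room("你好"))  # 你好！
```

However fluent the conversation looks from outside the door, nothing in this process knows Chinese.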
From this one could argue that self-awareness must somehow be involved with intelligence. The ability to deliberately place meaning in interaction seems somehow crucial in distinguishing true intelligence from a huge lookup table (no matter how quickly one can look things up).
What about free will? This is the point around which most of our classroom discussion revolved, especially when it concerned the ethics of the Three Laws of Robotics. There is a general, if unspoken, consensus that an intelligent being will have at least a modicum of free will. But need this be the case? Limited to our own experience of both intelligence and free will, we find it difficult to conceive of things being any other way. Indeed, the denial of free will leaves only determinism, and if it is all determined what we think and how we think, do we really think? And if we do not really think, do we really have intelligence?
The determinist, of course, will try to argue that we still have intelligence, but this becomes the intelligence of the giant lookup table, and that intelligence is the same as the intelligence of a mouse or an amoeba (though on different scales). By this rationale, our computers right now are already intelligent! We just need faster processors, bigger hard drives, and enormous databases to digitally create the intelligence of a human.
But the truth remains. We cannot program free will. We cannot program self-awareness. And to me that suggests we cannot program true intelligence. But then, as any Catholic knows, intelligence itself is a manifestation of a spiritual soul, and only the Divine Programmer knows how to write the scripts for that one!