Robot Visions

Robots don't have to be very intelligent to be intelligent enough. If a robot can follow simple orders and do the housework, or run simple machines in a cut-and-dried, repetitive way, we would be perfectly satisfied.

Constructing a robot is hard because, if it is to have a vaguely human shape, you must fit a very compact computer inside its skull. Making a computer that is as complex as the human brain and yet as compact is also hard.

But robots aside, why bother making a computer that compact? The units that make up a computer have been getting smaller and smaller, to be sure: from vacuum tubes to transistors to tiny integrated circuits and silicon chips. Suppose that, in addition to making the units smaller, we also make the whole structure bigger.

A brain that gets too large would eventually begin to lose efficiency because nerve impulses don't travel very quickly. Even the speediest nerve impulses travel at only about 3.75 miles a minute. A nerve impulse can flash from one end of the brain to the other in one four-hundred-fortieth of a second, but a brain 9 miles long, if we could imagine one, would require 2.4 minutes for a nerve impulse to travel its length. The added complexity made possible by the enormous size would fall apart simply because of the long wait for information to be moved and processed within it.

Computers, however, use electric impulses that travel at more than 11 million miles per minute. A computer 400 miles wide would still flash electric impulses from end to end in about one four-hundred-fortieth of a second. In that respect, at least, a computer of that asteroidal size could still process information as quickly as the human brain could.
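The arithmetic in the two paragraphs above can be checked in a few lines of Python. This is only a sketch using the essay's own round figures (3.75 miles per minute for nerve impulses, roughly the speed of light for electric impulses); the constant names are mine.

```python
# Sanity-checking the essay's signal-travel-time arithmetic.
# Speeds are the essay's own round figures, in miles per minute.

NERVE_MPM = 3.75            # nerve impulse speed (~100 m/s)
ELECTRIC_MPM = 11_160_000   # electric impulse speed (~speed of light)

def crossing_time_s(distance_miles: float, speed_mpm: float) -> float:
    """Seconds for a signal to cross distance_miles at speed_mpm."""
    return distance_miles / speed_mpm * 60

# The imagined 9-mile brain: 9 / 3.75 = 2.4 minutes end to end.
print(crossing_time_s(9, NERVE_MPM) / 60)   # prints 2.4 (minutes)

# A 400-mile computer: about two milliseconds end to end, i.e.
# roughly the "one four-hundred-fortieth of a second" quoted.
print(crossing_time_s(400, ELECTRIC_MPM))
```

The second result comes out near 1/465 of a second, which rounds comfortably to the essay's "about one four-hundred-fortieth."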

If, therefore, we imagine computers being manufactured with finer and finer components, more and more intricately interrelated, and also imagine those same computers becoming larger and larger, might it not be that the computers would eventually become capable of doing all the things a human brain can do?

Is there a theoretical limit to how intelligent a computer can become?

I've never heard of any. It seems to me that each time we learn to pack more complexity into a given volume, the computer can do more. Each time we make a computer larger, while keeping each portion as densely complex as before, the computer can do more.

Eventually, if we learn how to make a computer sufficiently complex and sufficiently large, why should it not achieve a human intelligence?

Some people are sure to be disbelieving and say, "But how can a computer possibly produce a great symphony, a great work of art, a great new scientific theory?"

The retort I am usually tempted to make to this question is, "Can you?" But, of course, even if the questioner is ordinary, there are extraordinary people who are geniuses. They attain genius, however, only because the atoms and molecules within their brains are arranged in some complex order. There's nothing in their brains but atoms and molecules. If we arrange atoms and molecules in some complex order in a computer, the products of genius should be within its reach; and if the individual parts are not as tiny and delicate as those of the brain, we compensate by making the computer larger.

Some people may say, "But computers can only do what they're programmed to do."

The answer to that is, "True. But brains can do only what they're programmed to do, by their genes. Part of the brain's programming is the ability to learn, and that will be part of a complex computer's programming."

In fact, if a computer can be built to be as intelligent as a human being, why can't it be made more intelligent as well?

Why not, indeed? Maybe that's what evolution is all about. Over the space of three billion years, hit-and-miss development of atoms and molecules has finally produced, through glacially slow improvement, a species intelligent enough to take the next step in a matter of centuries, or even decades. Then things will really move.

But if computers become more intelligent than human beings, might they not replace us? Well, shouldn't they? They may be as kind as they are intelligent and just let us dwindle by attrition. They might keep some of us as pets, or on reservations.

Then too, consider what we're doing to ourselves right now-to all living things and to the very planet we live on. Maybe it is time we were replaced. Maybe the real danger is that computers won't be developed to the point of replacing us fast enough.

Think about it!

I present this view only as something to think about. I consider a quite different view in "Intelligences Together" later in this collection.