I, Robot: In this age of advancing technology, the gap between man and GigaPet gradually narrows
If you ever have the chance, open up your skull and take a good look at your brain. Inside you will see layers: meninges, infoldings, grey matter, white matter. Looking closer, you will see the different parts of the brain: the medulla, cortex, and cerebellum. An even closer look reveals still smaller parts, boxes within boxes, until finally you arrive at the nervous system's most basic unit: the neuron. Like many cells, neurons are highly specialized; they are built to carry electrical potential. They are like microscopic batteries that race information around the body, enabling an organism to respond to its environment. Information is relayed as signals in the form of a traveling electrochemical gradient, which may serve to inhibit or excite a response. All neurons are essentially the same in structure and in manner of function: an all-or-none blip of energy moving down an axon to a target elsewhere in the body.
Electronics work in a similar way. Just like neurons, electronic devices function by way of signals moving down conductors. Instead of an ion gradient, electronic devices use electricity, and instead of axons they use wires, but the basic principle is the same: a signal is transferred from one part to another using energy.
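To make the analogy concrete, here is a minimal sketch, in Python, of the all-or-none idea: a single model unit sums excitatory and inhibitory inputs and either fires a full-sized signal or stays silent. The weights and threshold are arbitrary illustration values, not a claim about real neuron biophysics.

    # Toy all-or-none unit: weighted inputs either cross threshold (fire) or do nothing.
    def unit_output(inputs, weights, threshold=1.0):
        """Return 1 (fire) if the weighted input sum reaches threshold, else 0."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Two excitatory inputs (positive weights) and one inhibitory input (negative weight).
    weights = [0.6, 0.6, -0.8]
    print(unit_output([1, 1, 0], weights))  # 1: enough excitation, the unit fires
    print(unit_output([1, 1, 1], weights))  # 0: inhibition pulls the sum below threshold
    print(unit_output([1, 0, 0], weights))  # 0: no "half spike" -- the output is all or none

The same input-summing logic could just as easily be wired in silicon as in an axon, which is the point of the comparison.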
In some sense, then, the nervous system is very similar to an electrical device or machine. The principle behind all machines is to take an energy input and convert it into the desired form of energy output, which is just what the brain does. Some machines are powered by electrical energy, some by mechanical energy, and others, like a battery or a neuron, use chemical gradients. For all intents and purposes, the brain may be thought of the way one thinks of a toaster or a computer: it is in essence an intensely vast and complicated appliance of the body, not altogether different from an internal processor, just more intricate. (4)
Now consider for a moment a Furby. Furby has a body, a mechanical skeleton, a motor powering a shaft that drives its movements, a microphone to receive auditory input, sensors to detect inversion, and sensors to detect touch. Furby even has receptors that are sensitive to infrared light. It has a PC board and two PCBs for processing, as well as speakers, gears, and lights that allow Furby to produce the appropriate response to a particular input. Furby can even "learn," in that it will exhibit certain behaviors more frequently if it has been given a "positive" response to them in the past. Despite all these seemingly life-like qualities, most people over the age of 6 would probably maintain that a Furby is not a "living" thing. (5)
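As an illustration of the kind of "learning" just described, here is a small hedged sketch: a toy pet that picks a behavior in response to a stimulus and becomes more likely to repeat behaviors that earned a "positive" reaction. The behavior names and the update rule are invented for illustration; they are not the actual Furby firmware.

    import random

    class ToyPet:
        def __init__(self):
            # Every behavior starts with the same selection weight.
            self.weights = {"chirp": 1.0, "dance": 1.0, "giggle": 1.0}

        def respond(self, stimulus):
            """Pick a behavior for an input, weighted by past reinforcement."""
            behaviors = list(self.weights)
            return random.choices(behaviors, weights=[self.weights[b] for b in behaviors])[0]

        def reinforce(self, behavior, positive=True):
            """Strengthen a behavior after a 'positive' owner reaction, weaken it otherwise."""
            self.weights[behavior] *= 1.5 if positive else 0.75

    pet = ToyPet()
    chosen = pet.respond("petted")        # input -> output
    pet.reinforce(chosen, positive=True)  # the rewarded behavior becomes more frequent later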
Perhaps the problem is that Furby is much too simple. It has too few connections, too few sensors, not enough reactions. The human brain has anywhere between 10 billion and one trillion neurons. (7) This is an incomprehensible number that certainly seems to separate man from Furby. The almost infinite number of pathways available to the human brain seems impossible to achieve with electronics. Consider, however, biology's favorite little nematode, C. elegans. This tiny worm has only 302 neurons, a very graspable number. Even if each neuron has a few thousand connections, understanding the inner workings of C. elegans's nervous system is a realistically achievable goal. At this time, every neuron in C. elegans has been identified and named, and researchers in the area have a general idea of what most of them do. (8) The difference, then, is just a matter of factors of ten, hundreds of neurons as opposed to billions, and sheer neuron count is not an adequate criterion for separating man from Furby, because not all creatures have billions of neurons. (In fact, not every creature has neurons at all.)
Perhaps, then, the distinction arises when someone asks, "What does a Furby do when it is sitting in a room by itself?" The answer is, of course, absolutely nothing. The Furby does not have the ability to give an output without an input. In other words, a Furby has no "I-Function." (9) Living things will go about their own business: growing, eating, shuffling around, and so on, without anything "telling" them to do these things. Even without a stimulus, a leech nervous system sitting in a Petri dish will generate output signals for swimming. (9) This presents a puzzling question of its own. If the brain is in essence a biological version of a machine, why can't the robotics industry produce an artificial "I-Function" the way it can produce artificial eyes and ears for Furby? Of course, a Furby might come equipped with a program that tells it to move around or pick its nose every once in a while without a command from its owner, but this is simply a timed mechanism for the same thing, giving an input to achieve an output, all programmed in from the beginning.
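To see why a timer does not amount to an "I-Function," consider a small sketch of that timed mechanism: the "spontaneous" behavior is driven by a clock tick, which is simply one more input mapped to a pre-programmed output. The event names and the sixty-second rule are invented for illustration.

    RESPONSES = {"sound": "chirp", "touch": "giggle", "clock_tick": "pick nose"}

    def step(sensor_event, seconds_idle):
        """Map one moment's input to an output; idle time just becomes another input."""
        if sensor_event is None and seconds_idle >= 60:
            sensor_event = "clock_tick"           # the timer itself is the stimulus
        return RESPONSES.get(sensor_event)        # with no input at all, there is no output

    print(step("touch", 0))   # 'giggle'
    print(step(None, 90))     # 'pick nose': looks unprompted, but the clock was the input
    print(step(None, 10))     # None: nothing in, nothing out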
This new question as to why an artificial "I-Function" does not exist seems to leave three possible explanations. The first possibility is that there is in fact no "I-Function" in the first place. In other words, there can be no output without an input. Even in situations in which it appears that there is no input, the argument would be that there is actually some subtle input that the testers did not perceive. For instance, in the example of the leech nervous system in the Petri dish that gave the output signals for swimming seemingly without an input, perhaps a momentary fluctuation in the glucose concentration or a temperature change stimulated the signaling event, even under what we would consider carefully controlled circumstances. This, of course, would mean that, like the Furby picking its nose when no one is looking, everything in living organisms is pre-programmed as signal-response loops. These loops may be altered through use or disuse, but the organism cannot generate new pathways without new stimuli. This raises the question of how new ideas are generated; the explanation certainly does not leave much room for innovation.
The second possible circumstance is that there is an "I-Function," but that we simply do not possess the technology to reproduce one at this time. This, of course, implies that at some point humans might be able to create an artificial "I-Function." With the advances being made in robot technology, this idea does not seem as ridiculous as it once might have. In 2006, Japanese robot designer Hiroshi Ishiguro unveiled an android that looks so real that for a second you cannot tell which is the scientist and which is the robot. The android can see, hear, talk, respond to touch, and even block a slap. It also randomly blinks and fidgets, just like a human. (2) Neural networks, computing devices designed to simulate neural pathways in the brain, can identify objects in their databases even from simplistic line drawings, and even when these drawings are skewed or bent, just as the human brain is able to do. (6) It is perhaps not all that difficult to imagine that at some point a robot could be designed to make autonomous decisions, and maybe even possess an artificial "I-Function." What then, if anything, would separate the robot from a human?
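For a rough sense of how weighted connections can tolerate a distorted input, here is a toy sketch: a single layer of fixed weights scores a 5x5 "drawing" against two templates and still recognizes a vertical bar after a couple of pixels are changed. This is far simpler than the neural networks cited above, and the patterns and the particular distortion are invented purely for illustration.

    # Two 5x5 "line drawings" used as templates.
    VERTICAL   = [[0, 0, 1, 0, 0] for _ in range(5)]
    HORIZONTAL = [[1, 1, 1, 1, 1] if r == 2 else [0, 0, 0, 0, 0] for r in range(5)]
    TEMPLATES  = {"vertical": VERTICAL, "horizontal": HORIZONTAL}

    def classify(drawing):
        """Score each template with one layer of +1/-1 weights and pick the best match."""
        scores = {}
        for name, template in TEMPLATES.items():
            scores[name] = sum((1 if template[r][c] else -1) * drawing[r][c]
                               for r in range(5) for c in range(5))
        return max(scores, key=scores.get)

    # A vertical bar with two pixels wrong: one missing, one stray mark.
    distorted = [row[:] for row in VERTICAL]
    distorted[0][2] = 0
    distorted[3][3] = 1
    print(classify(distorted))   # still 'vertical', despite the distortion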
This question leads into the third possible explanation as to why an "I-Function" cannot be programmed into a robot, which is that the "I-Function" is something more than a purely physical device that can be artificially created. Perhaps the "I-Function" represents what some might call the "mind" or "soul," something that transcends the biological system that contains it. Perhaps, even more simply, it is some kind of "life force" that separates a rock from a tree just as it separates man from machine. This might explain why a human, worm, mushroom, or plant struggles to survive, whereas a Furby does not struggle to maintain its function. Can life be separated from non-life in this manner? Is life, be it the soul or some other inexplicable force, unable to be created except from itself? The problem with this explanation is that it is inherently unprovable because it involves a non-physical entity, and therefore it is not very "scientific" in the traditional sense. However, I suppose none of the above explanations can really be proved or disproved, so it would actually be "unscientific" of me not to give a certain degree of validity to every possibility, however far-fetched or unexplainable it may be.
Maybe one day I’ll be sitting in class next to a Furby asking if I can borrow its CompSci notes. Honestly, you never know.
Web References:
1) Britannica Online. Artificial Intelligence. Encyclopedia Britannica, 2007.
http://p2.www.britannica.com/ebi/article-219107
2) Chamberlain, Ted. Ultra Lifelike Robot Debuts in Japan. National Geographic News, 2006.
http://news.nationalgeographic.com/news/2005/06/0610_050610_robot.html
3) Lovgren, Stefan. I, Robot—Are Real Androids Ready for Their Close-Up? National Geographic News, 2004.
http://news.nationalgeographic.com/news/2004/07/0715_040715_irobot.html
4) MadSci Network, Neuroscience.
http://www.madsci.org/posts/archives/nov99/942388571.Ns.r.html
5) phobe.com, Furby Autopsy, 1998.
http://www.phobe.com/furby/index.html
6) Wikipedia. Neural Networks.
http://en.wikipedia.org/wiki/Neural_network
7) Williams, Robert W., and Karl Herrup, The Control of Neuron Number. Neurogenetics at UT Health Science Center, 2001.
http://www.nervenet.org/papers/NUMBER_REV_1988.html
8) WormAtlas, 2006.
http://www.wormatlas.org/
9) Serendip Webpage for Biology 202
/bb/neuro/neuro07/webpapernotes.html