Book of Paragon

No Zombies, Only Feelies?

In this fictitious dialogue Arthur has read the paper Why and How We Are Not Zombies and Dean has read the books Visions of Mind and Body and The Perspex Machine, as well as the papers Perspex Machine III: Continuity Over the Turing Operations, and Exact Numerical Computation of the General Linear Transformations.

Arthur advances the position that we can never have a scientific theory of what feelings are, because we can never know what anything feels like to another person. Dean, on the other hand, defends the point of view set out in the Book of Paragon that, in the far distant future, we might be able to do this by building robots that can be transformed into human beings and other animals, thereby allowing the robots to experience the feelings of many creatures - and ending up as each one of us so that we know what it feels like to be another being. In the meantime, Dean advances a theory of how to build practical robots that have feelings and describes, in broad outline, how a robot might shift from one body to another. Dean argues that it is impossible to build a human zombie that works in every way like a human being, except that it has no feelings. He tries to show why any close analogue of a human being must have feelings. In effect, Dean argues that there are no zombies, only feelies, though he does describe theoretical conditions in which a robot might be devoid of feeling.

Arthur: Hello, Dean. Have you read the zombies paper? It says, in effect, that each one of us can know if we have feelings, but we can never know if another being has feelings. This is part of the other minds problem. We can know our own mind, but we can never know that another being has a mind, and we can never know what that mind feels or thinks. Do you agree with that?

Dean: In my everyday life I do not experience direct knowledge of other people's minds. I do not experience telepathy, and I doubt if any biological creature does. So, here and now, I have to say that I do not believe that any of us has access to another's mind. But I think you go too far when you say we humans can never know another being's mind.

Arthur: I'll grant that by observing another being we can be pretty sure that it has a mind, but we can never be absolutely certain. For example, a robot that appears to go about its daily life in an intelligent way might actually be programmed to behave in a pseudo-random way, without anything I would call intelligence. This being the case, we can never be entirely certain that another being has a mind and, even if it does, we cannot know what it feels or thinks even if we believe we have a pretty good idea of what is going on in there. I'll swear that my wife seems to be able to read my mind even though she says she is not telepathic. But do you really believe we could have access to another's mind one day?

Dean: Certainly, but the argument is a long one. I will sketch it out for you, if you will allow me to make the materialistic assumption.

Arthur: Ah, yes. The assumption that everything that exists is physical. Very popular amongst scientists, I understand. Well, for the purposes of debate, I will grant you the materialistic assumption. But how does that help us to experience another's mind?

Dean: For a start it means that everything has a physical basis. Minds, feelings, and mathematics are all expressed in some physical medium, be it a body, textbook, computer, or whatever. There are no Platonic ideals that exist only in the abstract. Everything is physical. However, I will admit that a mind might live on after the death of its body, but only if it is reincarnated in another body, whether biological, robotic, or spiritual. Because I have adopted the materialistic assumption, I suppose that the spiritual world exists as a physical reality, if it exists at all.

Arthur: Let's not stray too far from the zombie debate. How might this help us feel what another feels?

Dean: The perspex machine is a theoretical machine, but it is more powerful than any theoretically possible digital computer. It can describe continuous things, not just discrete, digital things. I suppose it can describe the whole universe regardless of whether the universe is actually continuous or else quantal in nature.

Arthur: Now hang on a minute. That is a mighty big assumption to make!

Dean: Not really. All contemporary physical theories are expressed in symbols, in particular they can be expressed in computers and can even be simulated in computers. But this means that a perspex machine can express and simulate all contemporary physical theories because the perspex machine can do everything that a digital computer can, and more. So I am certain that any part of the universe that is known to the literature of the physical sciences can be described by a perspex machine. I then make the assumption, as scientists do, that the unknown part of the universe can be described in more or less the same way as the known part. I might be wrong on this, and other scientists might be wrong on this when they make their assumptions, but until there is good evidence to give up the hypothesis, I will continue to believe that the perspex machine can describe the whole universe.

Arthur: Well, what if it can?

Dean: The perspex machine, like anything that exists, has to be described or instantiated in a physical medium. Now, the perspex machine describes the position and motion of perspexes, but everything that exists has some sort of position and motion, even if they cannot be determined in any practical way. Perspexes can describe arbitrarily simple or complex things, and can be degenerate so that they lack position and/or motion. This means that any physical substance or process can be used to implement some sort of perspex machine, no matter how large or small a part of the general perspex machine the substance or process can support. In fact, I have implemented several simulations of the perspex machine on digital computers so I know the universe can support quite complex perspex machines. Of course, a substance or process might have properties that are not essential to the implementation of any perspex machine I can describe in words, but I assume that these properties, along with the whole of the universe, can be described by some perspex machine. In short, I assume that the universe can be understood as a perspex machine.

Arthur: For the purposes of the debate, I'll grant you the big picture. But I thought you said a perspex machine can do more than a digital computer, so how can you simulate it on a digital computer? More to the point, how can you describe it in the words we are using here?

Dean: Sorry, I have been too technical for you. Simulations are approximate, emulations are exact. As I said, I can simulate the general perspex machine on a laptop, but I cannot emulate it on a laptop. In much the same way, the general Turing machine can be simulated on a laptop, but it cannot be emulated on one. Similarly, I can describe the perspex machine approximately in words, but I cannot describe it exactly in words. Does that make the matter clear?
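Dean's simulation/emulation distinction can be seen on any digital computer: floating-point arithmetic only approximates the continuum of real numbers. The following minimal Python illustration is an editorial addition, not part of the dialogue:

```python
# A digital computer simulates, rather than emulates, continuous
# arithmetic: it works with finite approximations of real numbers.
result = 0.1 + 0.2
print(result)           # 0.30000000000000004, not exactly 0.3
print(result == 0.3)    # False: the simulation is only approximate
```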

Arthur: Let's not get into the theory of computability, Dean. Stick to feelings. If I grant you that the universe is a perspex machine, how might this allow us to feel others' feelings?

Dean: You have read the zombies paper, so you tell me how a robot might have feelings.

Arthur: Well, it is possible that today's robots have feelings. But I don't believe it for one minute. I do believe it might be possible to program a robot to have feelings, but I have no idea how to do it and, even if it could be done, I cannot see how we could ever know that a robot has feelings.

Dean: Let us call robots with no feelings "Zombies," and those with feelings "Feelies." Now, the continuity paper tells us that we can transform one computer program into another so that a Zombie's program can change over infinitely many, infinitesimally small, steps into a Feely's program, and vice versa. If the memories of Zombies and Feelies are sufficiently similar then a Zombie can remember what it was like to have feelings as a Feely, and a Feely can remember what it was like to be devoid of feelings as a Zombie. On the other hand, if their memories are too dissimilar we can augment the Zombies and the Feelies with a perspex program that holds the memory of the other and transforms it into the host's (Zombie or Feely) memory.
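The continuity paper's construction is geometric and far richer than any sketch given here, but the idea of deforming one program into another through arbitrarily small steps can be loosely illustrated, assuming (purely for illustration) that the two programs reduce to points in a shared parameter space. The names zombie_params and feely_params are hypothetical:

```python
# A loose illustration, NOT the continuity paper's construction:
# if two programs can be represented as points in a common parameter
# space, one can be deformed into the other in arbitrarily small steps.

def interpolate(start, end, steps):
    """Yield steps + 1 parameter vectors blending start into end."""
    for i in range(steps + 1):
        t = i / steps
        yield [(1 - t) * s + t * e for s, e in zip(start, end)]

zombie_params = [0.0, 0.0, 0.0]   # hypothetical "no feelings" program
feely_params = [1.0, 0.5, 2.0]    # hypothetical "feelings" program

path = list(interpolate(zombie_params, feely_params, steps=10))
print(path[0])    # [0.0, 0.0, 0.0] - starts as the Zombie
print(path[-1])   # [1.0, 0.5, 2.0] - ends as the Feely
```

Making `steps` larger makes each change smaller, which is the discrete shadow of the paper's infinitesimal transformation.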

Arthur: I suppose it is possible that you might fail. Zombies' and Feelies' memories might be so different that you must use the perspex memory translation program, but, even if you do, the translation might be incomprehensible. A Zombie might simply be incapable of remembering feelings, and a Feely might be incapable of remembering the total absence of feelings.

Dean: Perhaps. But the continuity paper tells us that we can make infinitesimal changes to a program. We can take an infinitesimal step from zombiehood to feeliehood, or vice versa. So whilst it might be the case that an extreme Zombie and an extreme Feely cannot understand each other, it is almost certain that there are infinitely many robots in between that can understand each other and experience each other's memories. If this is not the case then it has to be the case that feeling and its absence are exact things. There is no room for a smooth transition from feeling to non-feeling. Now, I do not know about you, but it seems to me that as I grow tired and fall asleep my sensitivity to the world does decline toward total unconsciousness. I do experience a diminution in feelings. Conversely, when I wake up the reverse seems to happen so that I experience an increase in feelings. The same happens when I am throttled and recover, or when I am beaten unconscious and recover. Yes, Arthur, I have lived an adventurous life. But it is one where I have experienced gradations of feelings. I have also experienced the scholastic life. I know enough physics and biology to know that animals do not work in an exact way, yet I am an animal and I have feelings, so I believe that feelings are not exact, they can be felt by an inexact creature like me. I suppose you feel the same way and are now ready to admit that if there is a continuum of robots with different gradations of feeling then at least some of them can share memories. In short, they can know each other's minds.

Arthur: I will grant you that much, for the purposes of discussion, but how does that help us to know another's mind?

Dean: Remember the materialistic assumption. Everything that exists is physical. The perspex machine is a physical thing. It deals with the position and motion of objects, but our bodies are made up of molecules, atoms, and sub-atomic particles that have some sort of position and motion. We are perspex machines so the continuity paper applies as much to our bodies as to robots' bodies. I expect that one day it will be technologically possible to build a continuum of robots that can experience the feelings of, at least, their near neighbours in the continuum and that this continuum will cover all animals, plants, and inert materials on planet Earth.

Arthur: And when might this be?

Dean: I expect it will take thousands of years to build robots that have minds like ours and millions of years to develop the technology to build the mental continuum. But it does not matter how long it takes ...

Arthur: ... because I said never and you said it could be done some time.

Dean: [Nods in assent.]

Arthur: Dean, I had always supposed that the lowest level of a computer is a word of some kind, machine code, micro code, or something like that. But you deny this? You claim that the lowest level of anything is something geometrical. So, presumably, the lowest level of a computer is the geometrical layout of electronic tracks in silicon, or something like that?

Dean: Of course. If you believe that the lowest level of a computer is a word then I invite you to edit the words in my laptop and turn it from base metal and plastic into gold. Better still, I would like a couple of kilograms of platinum, please.

Arthur: Ha, ha. You've got me there. Lucky for me your laptop isn't here!

Dean: No problem. I will fax it to you. [Dean winks at Arthur.]

Arthur: I will grant that you have hypothesised a solution to the other minds problem, but you have said nothing about what feelings are. Have you anything to say about that?

Dean: Certainly. Many feelings serve a functional purpose. If I bang my toe on a stone I move out of the way so as not to fall over it or, perhaps, I suppose that my kicking a stone proves some philosophical point to my companion. These kinds of feeling have a purpose. It is less clear that other kinds of feeling have a purpose. Many amputees say they can still feel their amputated limbs. Some even take analgesics to reduce the pain they feel in the amputated limb. I am not sure what kind of purpose the feeling of phantom limbs has, but it is reported as a feeling.

Arthur: I agree that some feelings have a purpose, but that does not tell us why they feel like anything. For example, why cannot a robot have abstract mathematical feelings implemented in functions that do not feel of anything?

Dean: Because you granted me the materialistic assumption. There are no abstract feelings, no functions devoid of a physical basis. There is a physical environment, or content, to every implemented function. This physical content is part of what we call a feeling. The functional relationships are the other part. For example, the book Visions of Mind and Body argues that today's computers feel the timeness of time both as a functional ticking in their clocks and as the impact on their performance of the elapsed time since their last re-boot and original manufacture.

Arthur: Too abstract! Tell me how a robot can feel. How can it feel a sunrise, or the cold of the night?

Dean: If a robot is too cold it cannot function at all. As it warms up toward operating temperature it goes through various error states that cause it to operate briefly before re-booting. It is disoriented. At operating temperature it operates correctly. It senses all manner of things with its cameras, temperature sensors on the surface of its body, microphones in its head, and so on. All of these sensations are correlated. The sun supplies light for the cameras, heat for the temperature sensors, and changes the density and currents in the air so that the robot hears different things. As the temperature rises beyond operating temperature the robot suffers hallucinations as memory errors cause it to mix up the data between its senses. It makes erroneous judgements and, eventually, it stops work and melts into a puddle. In much the same way, when I suffered hypothermia I was disoriented, but when I returned to normal body temperature I operated correctly. At normal temperature I could see the sun, feel the warmth of its rays on my face, and hear the gulls squawking in the distance. Then I overdid it a bit with the thermal clothing. I began to hallucinate and fell unconscious. Fortunately my buddies sorted me out long before I melted into a puddle! Now, all of these things that happened to me, or could happen to a robot, have a common physical cause: the effects of heat on a physical body. The functional relationships and the physical content could be similar, differing only in the way that silicon chips differ from biological neurons - or whatever. Apart from function and physical content there is no other possible difference so, if the physical basis of a robot is sufficiently close to my physical basis, it feels. I do not claim that all robots can feel, only that some can. And I claim that there is no more to feeling than functions and physical content.

Arthur: You claim that some robots might not have feelings. How can that be if they have functions with physical content?

Dean: The Perspex Machine explains this. Feelings are not atomic things. One can be conscious of a feeling, remember a feeling, be mistaken about a feeling, and so on. There are many facets to feeling. The glossary of the book hypothesises functions that give rise to consciousness, feeling, intelligence, morality, and so on. It is then a simple technical matter to block a robot's sensation of the world, or break one of the two relationships needed for consciousness. Such a robot might be devoid of feeling, but it would be practically useless. It would not sense the world, it would not be able to form any relationship between its ideas and the world. In fact, its ideas would be disjoint from each other. It might exist, but it would be bloody useless. A biological creature that worked that way would not survive long in a competitive environment. In fact, devoid of any form of homeostasis, it would scarcely survive at all. So I am pretty sure that all animals, and plants, have feelings. Moreover, I believe that my laptop feels the timeness of time and, perhaps, a few other things.

Arthur: [Putting down The Perspex Machine.] Wow. That glossary is dense. I'll grant that you have hypothesised a solution to the mind-body problem. And you have hypothesised that it could be put to the test in a few million years' time. But how does that help us here and now?

Dean: That depends on how plausible you think my hypothesised solutions to the other minds problem and the mind-body problem are. That is a matter entirely for you. I do not care a fig for these problems. I am building a perspex robot and want to see what it can do using the technology available to me today, including the technology I have invented, and might go on to invent during the remainder of my working life.

Arthur: Why did you invent the number nullity explained in Exact Numerical Computation of the General Linear Transformations?

Dean: Because of the homunculus problem.

Arthur: Let me see. It was once thought that the eyes focus the world on the pineal body in the brain. So it was thought that the pineal body sees the world. But the real question is how the pineal body could do that without involving a little person, or homunculus, sitting in the pineal body that does the seeing. In fact, what is wanted is a neurophysiological explanation of the visual pathways such as, or better than, we have today. Whenever explaining a human faculty we want a mechanistic explanation, not one that pushes words around and leaves the problem right back in the human. But what has that to do with arithmetic?

Dean: Integer arithmetic is fine, but rational arithmetic and more advanced arithmetics all involve division by zero. Division by zero is not defined so whenever it arises a human mathematician has to get involved and try to sort it out. Division by zero turns up in an awful lot of mathematics and, so too, do related geometrical properties. Points that are co-punctal or co-linear or co-planar are banned from all sorts of geometrical operations. When they turn up a human mathematician has to sort them out on a case-by-case basis. And so it goes on. Almost all of mathematics is infected by the homunculus problem - corner cases are not defined so a human mathematician has to get involved to try to sort them out. This is one of the things that makes computer algebra so hard. All of the corner cases have to be defined so that a computer can solve algebraic problems without assistance from a human.

Arthur: So how does the number nullity help?

Dean: I defined a canonical form for numbers divided by zero and let the rules of arithmetic hold regardless of division by zero. This produced a new arithmetic that contains the arithmetics that people commonly use, but one in which division by zero is well defined. Thus, I removed the homunculus problem from this part of mathematics. That was all that I needed to do in order to define the perspex machine in a way that is guaranteed to be able to operate without human intervention. The number nullity, together with the number infinity, makes the perspex machine a suitable physical substrate for a robot's mind and body. That is what I wanted to achieve. Back then I did not know how to implement a mind using conventional mathematics - so I changed mathematics. Now I could do it in conventional mathematics, but I do not want to. Nullity makes all sorts of calculations easier.
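The canonical form Dean describes can be sketched as a total division function: every quotient is defined, so no human mathematician need intervene in the corner cases. The following is an illustrative Python sketch, not the axiomatisation given in the paper; the sentinel NULLITY merely stands in for the transreal number nullity (0/0):

```python
# A minimal sketch of "total" division in the spirit of transreal
# arithmetic: every division has a defined result, so no homunculus
# is needed to patch up the corner cases. Illustration only; the full
# axiom set is given in Anderson's papers.

INF = float("inf")        # stands in for the transreal infinity
NULLITY = "nullity"       # sentinel standing in for nullity (0/0)

def trans_div(a, b):
    """Divide a by b, with every division-by-zero case defined."""
    if b != 0:
        return a / b
    if a > 0:
        return INF        # positive / 0 = infinity
    if a < 0:
        return -INF       # negative / 0 = -infinity
    return NULLITY        # 0 / 0 = nullity

print(trans_div(6, 2))    # 3.0
print(trans_div(1, 0))    # inf
print(trans_div(-3, 0))   # -inf
print(trans_div(0, 0))    # nullity
```

Because every case returns a value, a program built on such an arithmetic can run to completion without a human stepping in, which is the property Dean says the perspex machine needs.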

Arthur: The Perspex Machine defines causality. Can you shed any light on that?

Dean: [Smiling] Dear Arthur. It is a technical matter. The definition of causality in that book is the definition of causality in the theoretical perspex machine, not in the universe. It is hypothesised that the theoretical perspex machine can describe the universe, but it has already been proved that it cannot do it in a direct way. The explanation of causality in the universe would be a very complex thing in the theoretical perspex machine and that explanation would operate, picking out objects in the world and giving explanations, within the causality explained in the book. It is simple really. The causality in the book explains the theoretical perspex machine and the theoretical perspex machine can, it is supposed, explain causality in the universe.

Arthur: But how can the perspex machine explain random things that occur in the universe, if they do occur, that is!

Dean: Easily. A genuinely random number can be generated by an infinitely long Turing program, but the perspex machine can execute all Turing programs, even those the Turing machine cannot complete, so it can complete the infinite Turing program, and can generate any number of genuinely random numbers. Now, no existing physical theory employs the number nullity, so one can put the infinite machinery of random number generation in a nullity subspace and have it operate at a distance on a non-nullity subspace that is identical to the geometrical spaces used in contemporary physical theories. Thus, the perspex machine is a superset of existing geometrical theories of physics and can have, or fail to have, genuinely random numbers as is wanted in a theory.

Arthur: How does the geometry of space affect the perspex machine?

Dean: One has some freedom to choose the geometry and the instruction the perspex machine executes, but not total freedom. Some geometries limit the instructions that can be embedded in them, and some instructions limit the geometries they can be embedded in. When one designs a perspex machine the choice of geometry and instruction affects the bodies and minds that can arise in the machine. The same holds, of course, if one is in a position to design a physical universe.

Arthur: You said, "one has freedom." Does the perspex machine explain free will?

Dean: Yes.

Arthur: Oh, bugger!


If you would like to contribute to this debate then please email the author.

James A. D. W. Anderson 2005
Last updated 06 June 2006