Pain in the Machine

Can robots experience pain?

Pain intuitively seems to be a humanizing experience, unique to living beings: it is sometimes said that agony makes one feel “alive.” Could robots experience pain as well? The possibility that an artificial system could feel pain is becoming more conceivable as man-made machinery increases in complexity. I will argue for the limited thesis that artificially intelligent robots can feel pain. First, I will modify Lewis’ functionalist solution to the problems of mad pain and Martian pain and apply it to the case of robot pain. Then, I will review Searle’s and Ziff’s criticisms of functionalism and respond in defense of the possibility of robot pain.

What are robots? What is pain?

The answer to this question rests on the meanings of the term “robot” and the predicate “feel pain.” A robot is defined as a non-biological machine controlled by a formal program. My paradigm case of a robot is a mechanical body moved by an artificial intelligence trained on an immense dataset of pain stimuli and their normal human responses. For instance, if the robot’s body is burned, it behaves like a human: screeching, flinching away, and avoiding the source of the burning. Next, what is “pain”? It is fundamentally a feeling, a phenomenal experience. But as Lewis points out, being in pain and feeling pain are logically one and the same.[1] This is a critical point for our task. After all, while it is impossible to know what it feels like to be a robot without being a robot, it is possible to show that a robot is in a state of pain. And a robot being in pain is equivalent to a robot feeling pain.

A causal role is a pattern of typical causes and effects. Pain, for example, is typically caused by physical harm and typically causes behaviors like screaming or flinching; a state occupies the causal role of pain if it has these causes and effects. As stipulated above, artificially intelligent robots (henceforth called AIRs) could occupy the causal role of pain just like humans, acting out the appropriate behaviors in response to painful stimuli. In fact, researchers have already developed an artificial robot nervous system that behaves as if it is feeling pain when its artificial tissues are stimulated – e.g. reflexively withdrawing from the source of the pain.[2] This does not by itself settle the question of robot pain, as we are not merely asking whether robots can display certain behaviors in response to pain stimuli, but whether they are actually in pain. But what does that mean?

[Image: painting of a man. Caption: What characterizes pain? A large part of it is the behavioral response.]

An individual is in pain if and only if it is in a state that occupies the causal role of pain for its appropriate population. Pain is a nonrigid concept, which means that the word “pain” can correctly apply to different physical states in different populations, as long as the state occupies the causal role of pain for the population being referred to. Therefore, if I can show that robots can be in a state that occupies the causal role of pain for the appropriate population of robots, I will have shown that robots can feel pain.

Like Lewis’ Martian,[3] a robot has a “brain” with an entirely different physical realization from the human nervous system: it is composed of sensors, metal apparatuses, and silicon-based chips rather than neurons and other wetware. Despite this radically different anatomy, robots as a population can still have a state that occupies the causal role of pain – a certain configuration of flipped bits and activated sensors that is triggered by pain stimuli and results in pain-response behaviors. Just as a pattern of neurons firing is pain for humans, and inflation of certain hydraulic cavities is pain for Martians, the activation of a certain pattern of silicon bits is pain for robots. To say that a robot is feeling pain is to say that it is in this physical state, provided this physical state occupies the causal role of pain for the appropriate population of that robot.
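To make this multiple-realizability point concrete, here is a minimal sketch in Python. It is purely an illustration under invented assumptions – the class names, the “c-fibers” and “pain_register” labels, and the crude role test are mine, not Lewis’ and not any real robot architecture – showing one and the same causal role (a state caused by damage and causing avoidance behavior) occupied by a neural state in one case and by a configuration of bits in the other.

```python
# Illustrative sketch only: one causal role, two physical realizations.
# All names here are invented for this example.

class HumanRealizer:
    """Pain realized as a pattern of firing neurons (wetware)."""
    def __init__(self):
        self.firing_neurons = set()

    def receive_damage(self, stimulus):
        self.firing_neurons.add(("c-fibers", stimulus))    # the occupying state
        return ["yelp", f"withdraw from {stimulus}"]       # its typical effects

class RobotRealizer:
    """Pain realized as a configuration of bits and sensor activations (silicon)."""
    def __init__(self):
        self.active_bits = set()

    def receive_damage(self, stimulus):
        self.active_bits.add(("pain_register", stimulus))  # the occupying state
        return ["emit alarm tone", f"withdraw from {stimulus}"]

def occupies_pain_role(individual, stimulus="flame"):
    """The causal role is specified only by typical causes (damage) and
    effects (avoidance), not by what the occupying state is made of."""
    effects = individual.receive_damage(stimulus)
    return any("withdraw" in effect for effect in effects)

print(occupies_pain_role(HumanRealizer()))  # True
print(occupies_pain_role(RobotRealizer()))  # True
```

The sketch is deliberately shallow: the role test never inspects whether the occupying state lives in neurons or in bits, which is exactly the functionalist point about multiple realizability.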

Appropriate population

But what is the appropriate population for a robot? Lewis offers four criteria to determine the appropriate population (AP) for an individual X:

  1. The AP is the human population, as “pain” is a human concept.
  2. The AP is the population X belongs to.
  3. The AP is a population where X is not exceptional.
  4. The AP is a natural kind like a species.

A given robot X cannot fulfill (1), as robots are not in the human population. For (2), we can say X belongs to the population of all robots or only to some subdivision of this population – e.g. Boston Dynamics robots. If we group all robots into one population, this group is a natural kind – a sort of silicon-based species. However, any subdivision within the robot population would be based on arbitrary human classifications like the operating system, model, or design of the robot. Therefore, it is preferable to treat “all robots,” a natural kind, as the appropriate population for robot X. Finally, if the robot is unexceptional amongst all robots, it fulfills criterion (3).

A note on exceptionality

[Image: illustration of a humanoid robot. Caption: Which robots, if any, can feel pain?]

Of course, a robot that responds to pain is currently exceptional. However, there is a conceivable future world in which most robots are designed with a state that occupies the causal role of pain. In that world, it would be correct to say that a robot in that state is in pain. This establishes the thesis: robots can feel pain, because there is a possible world in which a state occupies the causal role of pain for most robots, and a robot in that state, in that world, feels pain.

This may seem like conceptual sleight of hand. But imagine a world where 99% of humans respond to damaging stimuli not by yelping or flinching but by immediately searching for juice. For these imagined humans (i-humans), there is no state that occupies the causal role of pain; the state caused by damaging stimuli produces juice-seeking rather than the typical pain behaviors.

It may be natural to say that the 1% of i-humans who lack this condition, and who still respond to damaging stimuli as we do (e.g. yelping and avoidance), are feeling pain. But this would be a mistake: we are not the appropriate population for them. In this thought experiment, we (humans) do not exist, and the 1% are exceptional within their (i-human) population. Pain is, by definition, a state that occupies the causal role of pain for an appropriate population, and for the i-human population there is no such state. So not even the exceptional 1% count as feeling pain.

In the same sense, robots do not feel pain currently, as there is no state that occupies the causal role of pain for the robot population. But in the future, this could change. Therefore, robots can feel pain.

An objection from Searle & Ziff

Searle may dispute this conclusion. In his critique of functionalism, he posits that computers cannot think, even if they replicate the behavior of a thinking mind.[4] A mind is more than just a structure that gives certain outputs for specific inputs (syntax). It also has meanings associated with these inputs and outputs (semantics). Syntax alone is not sufficient for semantics. Thus, robots cannot feel pain, as they are based on formal syntactic programs (strings of binary) that do not include the semantics or “meaning” of pain.

To clarify this argument, imagine a modified version of Searle’s Chinese room. In this version, the Pain Room, there are baskets with detailed descriptions of the typical human response to every single pain stimulus. Imagine a person who is incapable of feeling pain, has no concept of what pain stimuli mean, and does not know the normal responses to pain. In the Pain Room, this person is shown a video of a pain stimulus, like a hand touching fire. After watching the video, they search through the baskets to find a description of the appropriate human response, and then they act this response out. Eventually, the pain-incapable person perfects this process so thoroughly that their behavior is indistinguishable from that of a human who genuinely feels pain.

Does this person feel pain? In Searle’s view, the answer is no. This person only has a formal program that allows them to act out the appropriate responses to pain. But feeling pain is not just about behaving in response to stimuli – it involves having meaning or qualia attached to each pain stimulus. Since the person in the Pain Room is just correlating videos of pain stimuli with muscle movements, they are functionally identical to an AIR trained on a massive dataset of pain stimuli and responses. Such a robot can use its formal program to respond to pain in a way indistinguishable from humans. However, if the person in the Pain Room cannot feel pain, this robot cannot feel pain either.
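Searle’s worry can be restated in a few lines of code. The sketch below is only a toy illustration – the table entries and function name are invented for this example – of the Pain Room as nothing but a stimulus-to-response lookup: it produces the right outward behavior for each input while containing nothing that could count as the feeling of pain.

```python
# Illustrative sketch only: the Pain Room as a purely formal lookup table.
# The stimulus/response strings are invented placeholders.

PAIN_RESPONSES = {
    "hand touches fire": "scream and pull hand back",
    "stub toe on rock": "wince and hop on the other foot",
    "paper cut on finger": "hiss and shake hand",
}

def pain_room(stimulus: str) -> str:
    """Map a described stimulus to a described response by lookup alone.
    Nothing here attaches meaning (semantics) to either string; the
    'program' only shuffles symbols, which is Searle's point."""
    return PAIN_RESPONSES.get(stimulus, "search more baskets")

print(pain_room("hand touches fire"))  # -> "scream and pull hand back"
```

On Searle’s view, no matter how large the table grows, adding entries adds only more syntax, never the felt quality of pain.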

Ziff offers a similar criticism. He admits that robots could display the observable behavior of feeling pain.[5] But like Searle, he argues that even if nothing is wrong with the robot’s performance, the fact that it is a performance means the robot is not feeling pain. The distinguishing feature between human pain and “robot pain” is what we know about the robot (the fact that it is performing) and not what we see (the flawless performance). As the robot is just acting out a formal program written by a human, it is not feeling anything.

With slightly different conceptual tools, Ziff and Searle both argue that a robot’s behavioral rendition of pain is not enough to say that the robot is feeling pain. But these critiques apply equally well to other human minds. Ziff claims that the difference between robot and human pain is that we “know” the robot is just performing. But how do we know that other humans are experiencing pain? Only by their behavior, it seems, and by projecting our own experience onto beings that seem similar to us. If a robot and a human behave identically in response to a pain stimulus, we cannot consistently say that the human is experiencing “genuine pain” while the robot is not. Additionally, it seems intuitive that perfectly imitating a response to pain would be impossible if one were not feeling pain: unless the robot or the Pain Room person were genuinely feeling pain, there would always be subtle giveaways that it was merely an act.

Searle could say that a robot’s action is just the product of a syntactic program of 1s and 0s. But one could also represent the human brain as a massive string of 1s and 0s, where each 1 is an active neuron and each 0 is an inactive neuron. That does not mean the brain is equivalent to this string of binary, and likewise, the fact that one could represent a robot as a binary program does not mean that the robot is a binary program. The program alone cannot feel pain, but the entire system of a robot interacting with physical stimuli could feel pain (a version of the “systems reply” to Searle). Searle might be right that a program by itself has no semantics. But a robot interacts with stimuli in the world, so its physical states can be thought of as intentional – the states are “about” stimuli – which gives semantics to the robot’s syntactic program (in the spirit of the “robot reply”).

In conclusion, robots can feel pain: they can have a state that occupies the causal role of pain for the robot population, and a state that occupies that role for the appropriate population simply is pain. A robot in such a state therefore feels pain.

  1. Lewis, David K. (1980). “Mad Pain and Martian Pain.” In Ned Block (ed.), Readings in the Philosophy of Psychology. Harvard University Press, pp. 216–222.
  2. Kuehn, J., and Haddadin, S. (2017). “An Artificial Robot Nervous System to Teach Robots How to Feel Pain and Reflexively React to Potentially Damaging Contacts.” IEEE Robotics and Automation Letters, 2(1), pp. 72–79. doi:10.1109/LRA.2016.2536360.
  3. Lewis, David (1983). “Mad Pain and Martian Pain.” In Philosophical Papers, Volume I, pp. 122–130. doi:10.1093/0195032047.003.0009.
  4. Searle, John R. (2002). “Can Computers Think?” In David J. Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings. Oxford University Press, p. 671.
  5. Ziff, Paul (1959). “The Feelings of Robots.” Analysis, 19(3), pp. 64–68, https://doi.org/10.1093/analys/19.3.64. Cited at p. 68.
