Thesis

Intro

Unease about artificial intelligence is often expressed in terms of the effects of bias, or job displacement, or the singularity (the fear that AI will take over). Our thesis is that a major source of this unease, one that has not been much explored, is AI’s uncanniness.

In the late twentieth and early twenty-first centuries, supercomputers like Deep Thought and its successors Deep Blue and Watson began to be described as “intelligent.” But very few writers imagined these early examples of AI as animate beings possessing autonomous minds. Indeed, one of Watson’s creators has explained that in trial runs of Watson playing Jeopardy, Watson would simply read out its response to a clue, with no cues to the audience as to how it had derived that response. As a result, viewers were not very impressed. They thought Watson was just a huge databank of trivia. No one perceived it as thinking. So Watson’s creators decided to have the computer display its three strongest candidate answers and then state which one it judged to have the highest probability of being correct. This presentation was much more effective in demonstrating that Watson was not a mere storehouse of information but an information-processing machine that made reasoned calculations to arrive at its answer–something much closer to thinking. And since humans are quintessentially thinking beings, machines that think just as humans do threaten to unravel the core of human self-understanding.
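
By way of illustration, the presentation strategy just described can be sketched in a few lines of code. The sketch below is a deliberately simplified, hypothetical illustration in Python, not IBM’s actual Watson pipeline; the candidate answers, confidence scores, and threshold are all invented for the example.

```python
# Hypothetical sketch of the "show the top candidates, then answer" display
# strategy described above. An illustration only, not IBM's Watson code.

def present_top_candidates(candidates, threshold=0.5):
    """candidates: list of (answer, confidence) pairs, confidence in [0, 1]."""
    ranked = sorted(candidates, key=lambda pair: pair[1], reverse=True)

    # Show the three leading candidates and their estimated probabilities.
    for answer, confidence in ranked[:3]:
        print(f"  {answer}: {confidence:.0%}")

    # Commit to the top candidate only if confidence clears the threshold.
    best_answer, best_confidence = ranked[0]
    if best_confidence >= threshold:
        print(f"Answer: {best_answer}")
    else:
        print("Not confident enough to answer.")


# Invented example clue: "This general became the first U.S. president."
present_top_candidates([
    ("Who is George Washington?", 0.96),
    ("Who is John Adams?", 0.03),
    ("Who is Thomas Jefferson?", 0.01),
])
```

Even so crude a display shows a viewer not a lookup but a weighing of alternatives, which is precisely the effect Watson’s creators were after.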

Anima

But thinking per se is not likely what unsettles us about AI. We think the root of the uncanny feeling we experience is best described in terms of the Aristotelian term anima. In his famous treatise De Anima, Aristotle distinguishes between living and non-living things. Anima is literally what animates living creatures; humans, animals, plants, insects, etc.–all possess anima, though different kinds of anima. (Anima is traditionally translated as “soul,” but we prefer to leave it untranslated because the term “soul” conjures later Christian ideas not helpful to our purpose.) The most easily identifiable property of anima is autonomous movement, and the most fundamental kind of movement is the ability to grow and change (and die). A rock cannot move itself; only a force exerted by something else can move a rock. Living creatures are inherently animate, including plants, which, in addition to growing and changing, turn toward the sun and seek out moisture by sending roots in the desired direction. When robotics engineers talk about the “uncanny valley”–the discomfort robots can cause when they are neither too dissimilar from nor exactly like humans–they are talking about the uncanniness of locomotion. But the category of animation is broader than autonomous movement for Aristotle. We understand it to include the kinds of change associated with language use.

That AI can analyze data and transform it into useful and remarkably accurate statistics, perform complex calculations, and store and retrieve unimaginable amounts of information, all at extraordinary speed, is impressive, perhaps even intimidating. But human technologies have always extended human capacities in astonishing ways without provoking uncanniness. A bulldozer can move earth at a scale humans could never achieve–an extension of human strength and dexterity–but a bulldozer does not cause uncanniness. The calculators some of us used in high school did not cause uncanniness. To be sure, we can say this is because we did not think calculators were thinking, and that is true. But we believe that the language of thinking is really a stand-in for the particular kind of anima that modern humans, living with the inheritance of Cartesianism, feel that they have.

Cogito

Descartes famously argued that, among earthly creatures, only humans have minds; i.e., only humans are thinking things. Machines and non-human animals are the same kind of thing: automatons. The seventeenth-century European context in which Descartes conjured his famous cogito marks the height of the “mechanical age”–a period of rapid and wide-ranging technological development, notably in optics, measuring devices, and mechanics. Clockmakers were the mechanical engineers par excellence, and many showed off their prowess by building automatons. Paris was famous for its automatons, which were often on public display. Well-made automatons inspired amazement–though, we think, not the uncanny–in onlookers who were stumped as to how they could seemingly move autonomously. Perhaps it is no surprise that Descartes sometimes used automatons to think with when he was exploring what makes humans human.

Automatons are driven simply by a clever arrangement of wheels, gears, and springs, over which is laid the veneer of an artfully made human or animal figure. For Descartes they were perfect for illustrating how body and mind can exist separately. The substance of body is extension in the natural world and, apart from any intelligence, its “actions” are determined by the forces of nature. The Cartesian natural universe is a deterministic one. Mind is an altogether different substance; it is “super-natural,” that is to say, it has a divine origin, transcends the laws of nature, and thus possesses free will.

Because the mind-substance belongs only to God, angels, and humans, animals are all body and exist exclusively within nature. While they have the ability to perceive things through sight, sound, smell, and so on, their behavior is not determined by any process of thinking but merely by the natural processes of cause and effect in their bodies.

Solomon Maimon, the eighteenth-century philosopher, recounts in his autobiography being chastised by friends for beating a goat. He responded that the goat did not feel pain: “the goat is a mere machine” (he had been reading Descartes). When his friends pointed out that the goat cried out when he struck it, he replied, “if you beat a drum, it cries out too.”

For Descartes animals are machines. There is no difference between Maimon’s goat and a goat built by a human. This raises a question: is there a difference between a human and a machine that acts like a human, and if so, how could we tell whether something was a human or just a machine? Descartes developed a proto-Turing test as a thought experiment. He posed the question: “if there were machines bearing the image of our bodies, and capable of imitating our actions,” would there still be a way to know whether they were thinking humans? He answers, “[b]ut it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do . . . . [The machine] could never use words or other signs arranged in such a manner as is competent to us in order to declare our thoughts to others.”[1]

Despite philosophical critiques of Descartes’s mind/body dualism, and despite the sense most of us share that animals feel pain, we still tend to assume that certain kinds of intelligence and reasoning, especially those associated with language, are embodied only in humans.

Again, until the advent of AI technology, no one imagined that a machine could think–including Descartes, who raised the question only because he took the answer to be obviously no. Ironically, it is thanks to Descartes that we have come to imagine that thinking can go on outside human minds.

Ever since Ada Lovelace realized in 1843 that machines could act on things other than numbers, as long as those things could be represented by numbers, the ability of machines to succeed at tasks that had been the purview of humans has grown. Recently computers have beaten human champions at Go and Texas Hold ’em. Go is a game that requires more “intuition” than chess; Texas Hold ’em is a game that requires the ability to bluff. Why is it that these machines can unsettle us in ways that bulldozers and calculators do not? Our argument is that, in Aristotelian terms, AI possesses the quality of anima, but it is embodied in ways unlike those of other animate beings. While it is not quite accurate to speak of “substrate independence,” as many in the AI world do, it is the case that the intelligence of AI relates differently to material bodies than do the animae of plants, animals, and humans. Computer algorithms need hardware and operating systems to run on, but they can be downloaded and run on very different systems and hardware, and they can be copied. Descartes’s mind/body dualism is, in part, a critique of the Aristotelian categories of animae (different kinds of “souls” that impart “animation” to different kinds of creatures). And yet, despite philosophical critiques of Descartes, humans in the West tend to understand themselves as composed of mind and body, and tend to think of minds and bodies relating to each other roughly as Aristotle conceived of animae being embodied. Thus the presence of an anima with a different relationship to materiality can occasion feelings of the uncanny.

The Uncanny

We use the term “uncanny” in the fairly technical sense in which the philosopher Franz Rosenzweig uses it when he claims that Jews provoke a feeling of “uncanniness” (Unheimlichkeit): a community wrenched from its homeland (Heimat) and present, as a community, in the home (Heim) of others. [2] We argue that the presence of the kind of animation we associate with minds, but dis-located (not in the Heimat with which we are “at home”), is uncanny (Unheimlich). John Seabrook, in the October 14, 2019, issue of the New Yorker, describes the feeling of using a natural-language-processing (NLP) system that completes sentences and paragraphs of the very article he is writing:

The skin prickled on the back of my neck, an involuntary reaction to what roboticists call the “uncanny valley”—the space between flesh and blood and a too-human machine. . . . It was . . . disconcerting how frequently the A.I. was able to accurately predict my intentions, often when I was in midsentence, or even earlier. Sometimes the machine seemed to have a better idea than I did. [3]
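
The kind of completion Seabrook describes can be approximated, at least crudely, with publicly available tools. The following sketch is our own assumption-laden stand-in, not the system Seabrook used: it loads the freely released GPT-2 model through the Hugging Face transformers library and asks it to continue an opening phrase in three different ways.

```python
# A rough approximation of the sentence-completing system Seabrook describes,
# using the publicly released GPT-2 model. Not the system he actually used.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled completions reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "The skin prickled on the back of my neck, an involuntary reaction to"
completions = generator(
    prompt,
    max_length=40,           # total length of prompt plus continuation, in tokens
    num_return_sequences=3,  # offer three alternative continuations
    do_sample=True,          # sample rather than always taking the likeliest word
)

for i, completion in enumerate(completions, start=1):
    print(f"Completion {i}: {completion['generated_text']}")
```

Whether the continuations such a model produces are apt or absurd, the prickle Seabrook reports comes from watching one’s own sentence carried forward by something that is not oneself.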

With this understanding of the uncanny in mind, we offer two examples that are potentially unsettling in their use of language. We ask you to decide whether either example is an instance of thinking–whether, to use Descartes’s terminology, it is “arrang[ing] its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do . . . .”

[1] René Descartes, Discourse on the Method of Rightly Conducting One’s Reason and of Seeking Truth in the Sciences, trans. John Veitch, 1993, https://www.gutenberg.org/ebooks/59.

[2] See Leora Faye Batnitzky, Idolatry and Representation: The Philosophy of Franz Rosenzweig Reconsidered (Princeton, NJ: Princeton University Press, 2000), 90–94.

[3] John Seabrook, “Can a Machine Learn to Write for The New Yorker?,” The New Yorker, October 7, 2019, https://www.newyorker.com/magazine/2019/10/14/can-a-machine-learn-to-write-for-the-new-yorker.

Pamela Eisenbaum

Micah D. Saxton

Theodore Vial