Bonus: (sub)Text #3: Spielberg’s “AI: Artificial Intelligence”: What Is It to Be Human? (Part One)

For Episode 3 of (sub)Text, Wes discusses Steven Spielberg's AI: Artificial Intelligence with David Kyle Johnson, philosophy professor at King’s College in Wilkes-Barre, Pennsylvania.

Note: Part two will NOT be appearing on this feed. Become a PEL Citizen to get the full discussion. Visit partiallyexaminedlife.com/support to learn how.

You’ll find David's Great Courses lectures here, including a course on Science Fiction as Philosophy.

Philosophers are wont to talk about artificial intelligence in terms of thinking and the cognitive functions that comprise or support it: perception, representation, the use of language, reasoning, and so on.

But what about feelings? In the popular imagination, robots—even humanoid robots that are nearly exact replicas of human beings (think of Data from Star Trek)—often seem devoid of affect. The stereotypical robot voice—the one we use to imitate a robot—is meant to convey the stilted and unemotional. To this way of thinking, the intelligence of intelligent robots is too ... robotic. It is, by its nature, a parody of our humanity. We seem both to fear and to hope that artificial intelligence in its fullest sense—one that truly captures human consciousness—is prone to failure. Let’s call this the problem of artificial affect.

The problem of artificial affect—and the consequent artificiality of artificial intelligence—is made especially urgent in Spielberg’s AI: Artificial Intelligence. The film is set in a future in which anti-robot pogroms—Flesh Fairs, as they are called—are a threat to the survival of an especially vulnerable protagonist: a new sort of humanoid, a little boy with the capacity to love. His heartbreaking quest to become “real,” so that he can win back the love of the adoptive mother who abandoned him, helps us explore the question of what it means to be fully human. And answering the question of why Flesh Fairs bear such hatred for humanoids—“mechas,” as they are called—sheds some light on this question, as it does on our own dehumanizing tendencies.

What is there to fear in artificial intelligence? Is there something repugnantly dishonest about it—is it a failed representation of humanity? Will its artificiality contaminate us? Or is it the inability to lie—the robotic failure to fully engage in the lie that is social conformity—that we fear most? And what of the fact that the film’s humanoids are all essentially still items of use—servants, substitutes, and prostitutes—designed to meet our emotional needs, and yet unable to fully make their own demands on our capacities for love and recognition? Finally, what does it mean to be fully real? How do we know whether we are? And if we’re not, how do we become so?

We discuss these questions and more in Episode 3 of (sub)Text.

Listen to more (sub)Text.


Comments

  1. I was disappointed to hear that the courses focused more on movies than books. But even more disappointed to hear Star Trek called hard science fiction.

    This was a good discussion of the movie, though—especially the point that one of the things an artificial intelligence would have to be good at is understanding emotions, or the AI would fall too quickly into the uncanny valley (not necessarily visually, but conversationally).
