Fifteenth in an ongoing series about the interface between religion and technology. The previous episode is here.
Another possible strategy for fending off the robot apocalypse is to ask whether there are characteristically human traits that are humanity-preserving, and if so, whether they can be passed along to our machines. What is it that has given us our identity as a species all these years, and what quality, if lost, would put us at risk of losing everything?
One classic answer (perhaps the classic answer) to the question of what single human quality most protects our own interests is “wisdom.” Few other traits were prized as highly or as universally in the ancient world, or held to be more salutary for the affairs of humanity. The Biblical book of Proverbs calls wisdom (variously) more profitable than silver or gold, the first and greatest of God’s works, and the secret to a happy, secure, and successful life; and Ecclesiastes calls the wise man “more powerful than ten rulers.” This praise is echoed by a Swahili folk proverb, which more succinctly says, “wisdom is wealth.” The great Chinese philosopher Confucius called wisdom one of the three essential attributes of a truly superior man, and Plato described the “wisdom-lovers” as the people best fitted to rule successfully and well. If we can only program wisdom into our artificial intelligences at a fundamental level, we will surely be able to rely on them to treat us well, and protect our best interests, as well as the best interests of other organic, biological lifeforms.
Like many ancient concepts, however, wisdom is difficult to define and understand. Confucius calls it freedom from “delusions,” and the Bible describes it as “an understanding heart to… distinguish between good and bad,” but neither definition is easy to translate into the kind of algorithmic form we could program into a computer. Wisdom is often associated with age and experience, good judgment and foresight. It is demonstrably not the same as knowledge, or command of the facts, since a person can be both smart and knowledgeable without being wise. It is frequently described as knowing “why” rather than knowing “how”; but that is exactly the hardest kind of thing to teach a computer.
Even if we could define wisdom, and teach it, the results might be unpredictable. In one of his books, Asimov crafts a story in which his robots do eventually develop wisdom. The paradoxical result, however, is that they decide the best thing they can do for humanity is to remove themselves permanently from our lives. Yet even that scenario supposes robots already pre-constrained to keep our best interests at heart (which is the very end we are trying to produce). In the absence of Asimov’s First Law of Robotics, it is not hard to imagine a wise robot determining that the “wisest” thing to do would be to wipe our destructive and unwise species off the face of the earth entirely.
Some thinkers, such as the great Chinese contrarian Zhuangzi (Chuang Tzu), claim that one can be both wise and evil at the same time, that wisdom merely magnifies a bad person’s power to do harm, just as it multiplies a good person’s ability to do good. Plato, on the other hand, firmly associates wisdom with morality, with both knowing and doing the right thing. Yet this is not much help either. If we could simply program computers to be good, or to “always do the right thing,” we would have no need to worry about them. But even human beings can and do frequently disagree on what the right thing is in any given situation. How could we possibly hope to give computers a better understanding of morality than we have ourselves?
One potential answer is evolutionary morality, an idea with roots that go all the way back to the father of evolutionary theory, Charles Darwin. In brief, it is the idea that there are certain kinds of behaviors and behavioral standards that may not provide direct, immediate benefits to the individual (in fact, often the converse), but that in the larger picture support the success and survival of the species; that these are therefore selected for by evolution; and that it is these very standards that we call morals. For example, new research demonstrates that altruism, acting to benefit the group at personal cost, is evolutionarily advantageous, because it makes the group as a whole more likely to survive, particularly under dangerous, extreme, or unpredictable conditions. Selfishness, on the other hand, is only advantageous in the short term, or under ideal conditions devoid of risk.
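The intuition behind that research can be illustrated with a toy simulation. Everything here, from the model to the function name and its parameters, is my own illustrative assumption rather than the cited study's method: each altruist pays a personal cost so the group gains a larger shared benefit, and the group survives a random environmental shock only if its pooled surplus is big enough.

```python
import random

def survival_rate(n_altruists, group_size=10, trials=10_000,
                  benefit=3.0, cost=1.0, seed=0):
    """Fraction of groups that survive a random environmental shock.

    Toy model (an illustrative assumption, not the cited study's):
    each altruist pays `cost` so the group gains `benefit`; the group
    survives a shock if its pooled surplus exceeds the shock's severity.
    """
    rng = random.Random(seed)
    pool = n_altruists * (benefit - cost)  # net surplus the altruists contribute
    survived = 0
    for _ in range(trials):
        shock = rng.uniform(0, group_size)  # unpredictable conditions
        if pool >= shock:
            survived += 1
    return survived / trials
```

Under this model, a mostly altruistic group (say, `survival_rate(8)`) rides out every shock, while a mostly selfish one (`survival_rate(2)`) fails whenever conditions turn harsh; selfishness only looks good when the shocks never come.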
Evolutionary morality remains somewhat controversial, but if you buy into it, there is a strong case to be made that the most evolutionarily moral creatures on the planet are microscopic organisms called mitochondria. Mitochondria are thought to be the descendants of an ancient strain of bacteria that were somehow absorbed by an early ancestor of multicellular life, and that now survive only within the cells of larger creatures. Although cells contain several such structures, generally called organelles, the mitochondria are unique—and uniquely moral—in several significant ways:
First, the mitochondria have integrity. Unlike all the other organelles, they have never completely lost their identity, even over the course of a hypothesized two billion years of coevolution. They still have their own DNA, they still reproduce on their own, and they essentially live independent lives; they just live them entirely inside other creatures’ cells.
Second, the mitochondria are generous. Mitochondria use oxygen to make a complex and highly useful chemical called ATP (adenosine triphosphate) that is essentially the fuel that keeps the cell running. Going far beyond the simple evolutionary altruism that helps other members of one’s own species survive, mitochondrial “abundant giving” is an essential enabler of all higher life functions and complex multicellular evolution.
Third, mitochondria are humble. Tiny and unobtrusive, they long escaped the notice of scientists, and their full importance has become clear only in recent years.
Fourth, the mitochondria are nonjudgmental. Inhabiting creatures from mushrooms to human beings, insects to jellyfish, and everything in between, mitochondria bestow their gifts on all creatures, regardless of their nature or characteristics.
Finally, mitochondria are benign. They do not cannibalize their hosts or go to war against one another. They are a productive force for good within the cell, and they play that role to the fullest.
And what has been the result of all this evolutionary “goodness”? Mitochondria are arguably among the most successful of all species on the planet. They are everywhere, up to millions of them within nearly every complex life form on the planet. Everywhere insects go, everywhere fungus goes, everywhere human beings go, mitochondria are there. They are so useful and so irreplaceable that very few complex life forms have ever even tried to exist without them.
On the surface of it, evolutionary morality of the kind exemplified by the mitochondria might seem to offer a way to discern an objective standard for goodness: one that does not depend on subjective arguments and value judgments; that can be reliably measured in relationship to long-term evolutionary success; and that can therefore be used as a basis for machine morality. However, there is a problem with this approach: computers do not evolve. They are created, not over the course of a billion years but immediately, and to our specifications (which is perhaps why they are utterly lacking in whatever quality it is that we call “wisdom”). They are therefore not subject to the evolutionary pressures and forces that arguably give birth to evolutionary morality. This might be soluble by taking a pseudo-evolutionary approach to the development of artificial intelligence, which, as it turns out, is one of the pathways to artificial intelligence already being explored. With the speed of modern computing, such an approach has the potential to collapse the equivalent of many years of evolution into a much smaller time frame.
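A minimal sketch of what such a pseudo-evolutionary approach looks like in practice, assuming the standard ingredients of a genetic algorithm (selection, crossover, and mutation) standing in for natural selection; the names and parameters here are illustrative, not any particular research system:

```python
import random

def evolve(fitness, genome_len=20, pop_size=50, generations=200,
           mutation_rate=0.02, seed=42):
    """Minimal genetic algorithm: selection, crossover, mutation.

    An illustrative sketch of a pseudo-evolutionary approach;
    all parameters here are arbitrary assumptions.
    """
    rng = random.Random(seed)
    # Start from a random population of bitstring "genomes".
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]  # truncation selection: fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            mother, father = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)  # one-point crossover
            child = mother[:cut] + father[cut:]
            # Rare random mutations supply fresh variation.
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Example: selection pressure toward the all-ones genome.
best = evolve(fitness=sum)
```

The crucial point for the essay's argument is visible in the code itself: the machine optimizes whatever `fitness` function we hand it, and nothing in the evolutionary machinery guarantees that what evolves will share our interests.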
The chances that this will yield the result we expect, however, are not encouraging. The mitochondria seemingly evolved only once, after all, in all the billions of years of evolution. Conversely, evolution has produced no shortage of creatures we might call “evolutionarily evil,” from parasites that cannibalize their hosts, to plagues and diseases, to creatures, like some strains of bacteria, that reproduce without restraint, and eventually poison themselves by destroying their own environments. Even creatures that are not actively malign naturally compete with each other in an evolutionary environment, and there is no prima facie reason to expect that an intelligence produced in such a way would privilege our interests above its own. We might be breeding the equivalent of a man-eating shark inside our computers; or, in the imagery of the piquant fable that opens Bostrom’s Superintelligence, we might be like sparrows seeking to breed an owl in the vain hopes of its gratitude.
Outside of wisdom, and outside of morality, are there other traits or practices that are humanity-preserving that could potentially be taught or translated to a machine? Satirist Douglas Adams invented, in one of his novels, an “Electric Monk” that carries out the practice of religious belief electronically. Similarly, Arthur C. Clarke wrote a story called “The Nine Billion Names of God,” in which a group of monks purchase a supercomputer in order to calculate all the possible names of God. There even exists a real-life service called “Information Age Prayer,” which will direct a computer to chant prayers mechanically in a variety of languages and religious traditions. It seems safe to say, however, that such practices are neither theologically nor scientifically approved.
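Taken literally, the monks' task in Clarke's story is a simple combinatorial enumeration, the kind of thing computers do effortlessly and without any understanding. A sketch, with a placeholder two-letter alphabet and a tiny length limit standing in for the story's rules (which yield roughly nine billion combinations):

```python
from itertools import product

def all_names(alphabet="OM", max_length=3):
    """Enumerate every 'name' over `alphabet` up to `max_length` letters.

    The alphabet and length here are tiny placeholders; the actual
    rules in Clarke's story produce roughly nine billion names.
    """
    for length in range(1, max_length + 1):
        for letters in product(alphabet, repeat=length):
            yield "".join(letters)

names = list(all_names())  # 2 + 4 + 8 = 14 names
```

That the enumeration is trivial is precisely the theological problem: the computer performs the practice without the belief.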
References
Proverbs 3, 4, and 8, Ecclesiastes 7, First Kings 3.
Confucius, translated by Chichung Huang, The Analects of Confucius, Oxford University Press, New York, 1997, 9:29 and 14:28.
Plato, Republic, Book III.
Asimov, Isaac, “Gaia,” Foundation’s Edge, Doubleday, 2002.
Hamill, Sam and J. P. Seaton, translation and editing, “Baggage Gets Stolen,” The Essential Chuang Tzu, Shambhala, Boston, 1998.
Lee, Spike, Do The Right Thing, 40 Acres and a Mule Filmworks, 1989.
Smith, Emily Esfahani, “Is Human Morality a Product of Evolution?,” The Atlantic, December 2, 2015.
Johnston, Ian, “Altruism Has More of an Evolutionary Advantage Than Selfishness, Mathematicians Say,” The Independent, July 21, 2016.
L’Engle, Madeleine, A Wind in the Door.
Adams, Jill U., Cell Biology for Seminars, edited by Clare O’Connor, Scitable, January 17, 2014.
Bostrom, Nick, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, New York, 2014.
Hall, Steven, The Raw Shark Texts, Canongate, New York, 2007.
Adams, Douglas, Dirk Gently’s Holistic Detective Agency, Simon & Schuster, 1987.
Clarke, Arthur C. “The Nine Billion Names of God,” The Nine Billion Names of God: The Best Short Stories of Arthur C. Clarke, Harcourt, 1967.
Chris Sunami writes the blog The Pop Culture Philosopher, and is the author of several books, including the social justice–oriented Christian devotional Hero For Christ. He is married to artist April Sunami, and lives in Columbus, Ohio.
If we could really program a superintelligence that could understand us and carry out our commands, then we should simply tell it — not to make paper clips — but to “make me proud.”
Maybe wisdom isn’t the answer, but love.