As we sink deeper and deeper into the realm of religion, we find ourselves forced to face up to a core religious dilemma of the modern, globalized world, the same dilemma glossed over by Pascal in his wager: In a world filled with so many different and often contradictory religions, how would we choose one as more plausible than the others?
From a Neoplatonic point of view, what goodness there is in our world must come from the world deeper than ours, the one doing the simulating. The evil and chaos and disorder could all be nothing more than random numbers firing, but the beauty and the nobility and the truth in the world demand some source. And if the next world deeper is somehow a dirtier, nastier, less good place than ours, then our world must be reflecting some yet higher-still world toward which the artisans who created our simulation are striving.
When God is in everything, and everything is within God, does that not implicate God in our crimes of the spirit as well? Is God present in our angers and our wars, our dirty jokes and our pornography? Here, perhaps, we have made a mistake by conflating God, as traditionally conceived, with our conception of “the Dungeon Master,” who is merely the maximally simple simulator. But then again, our entire purpose was to determine if there is any necessary connection between the two: between the simulator predicted by Nick Bostrom’s theory and God as envisioned by theologians and believers throughout the ages.
The simulation theory, however, does not have to be turtles all the way down. For example, imagine that somewhere along the chain of simulators, perhaps directly above us (what Bostrom calls “below”), or perhaps much further on up toward the top, we reach an entity we might call the “maximally simple simulator,” an entity of pure and limitless intellect, unbounded in time (and therefore eternal), with no body at all, in a universe containing nothing else but itself, the simplest possible universe.
The thing about the Basilisk that makes it so scary is its combination of vast power with weaknesses both human and mechanical. It is designed by human beings to be the greatest and most benevolent force in the universe, but all we can gift it is our best guess at an ultimate rational moral standard, utilitarianism, the greatest good for the greatest number. And as a machine, it administers this implacably, and entirely without mercy. Roko’s Basilisk is scary because it is simultaneously our parent and our child.
Another possible strategy for fending off the robot apocalypse is to ask whether there are characteristically human traits that are humanity-preserving, and if so, whether those can be passed along to our machines. What is it that has given us our identity as a species all these years, and that, if lost, puts us at risk of losing everything?
Given how likely killer robots are, and how clearly the paths we are currently embarked on lead to that eventuality, can this destiny be averted? Acceptance of the unstoppable inevitability of progress is the motivation behind yet another approach to artificial intelligence called “Friendly AI.” It starts with the assumption that runaway technological progress is inevitable, that someone among the many teams around the world working on artificial intelligence will soon succeed, and that a disastrous robot apocalypse is by far the most likely result. Given that, the belief of the Friendly AI camp is that it is absolutely essential that we ensure the first artificial superintelligence is “friendly,” meaning that it has the best interests of humanity at heart, and is willing and able to protect us from its nastier cousins.
For a more realistic portrait than Kurzweil’s of what a future dominated by technology might look like, one plausible place to start is with our present domination by technology, and how it is already transforming us as human beings. For example, why has our society become so oriented around statistics, to the point that they mean the difference between success and failure, promotion or demotion, profit or loss, in so many different realms of life? As it turns out, what the computers do not see—what they cannot see, what is invisible both to the computer and to all those at the upper level of management who see through the eyes of the computer—are all the purely human interactions of any job. And depending on what the job is, it can be the core competencies of the profession that end up neglected.
Is it possible, given that we still understand so little of the brain, that it has evolved in such a way that it does bridge the gap between the subatomic world and the macroscopic world? Perhaps the free will of the quark is transmitted up through the intermediary of the brain and into the otherwise deterministic macroscopic world. But if this is true, does it preclude the possibility of a truly living simulation? Are the human beings inside the computer doomed to be dead, deterministic automata, lacking the quantum free will of the real ones?
This piece by Adam Gopnik in The New Yorker is very good and suitably conflicted concerning complaints about the social effects of technology: The odd thing is that this complaint, though deeply felt by our contemporary Better-Nevers, is identical to Baudelaire’s perception about modern Paris in 1855, or Walter Benjamin’s about Berlin in 1930, or Marshall McLuhan’s in the face …