The Romanian philosopher Emil Cioran long struggled with insomnia, although he did not view it exclusively as a curse. He actually found insomnia to be an insightful condition, casting it as something distinctly human, as well as a state that could be highly productive for the philosopher.
In this final post in a series on the history of Satan, Vincent Czyz shows us how, in the early centuries of the Christian era, the serpent in the Garden of Eden came to be identified with the Prince of Darkness, and the details of the story of the Fall from Heaven, familiar to readers from Paradise Lost, were filled in.
In the intertestamental literature, written between the Old and the New Testaments, Satan continues his dual evolution into both the personification of Hate and God’s opponent. The New Testament exhibits completely contradictory versions of Satan: sometimes he is the Tester, as in Mark, Matthew, and Luke; but in other places, as in Revelation, Persian dualism seems to hold sway.
In the Hebrew Bible, satan was originally not a name but an office; he is a messenger from God, sent as an accuser and tester. There’s only a single reference to an independent spiritual force named “Satan.” This is the moment when the concept of the Devil as the West has come to understand it was born, and it may be a borrowing from Zoroastrian dualism.
Lucifer and Satan are not different names for the same supernatural being; they’re not even related, and the Hollywoodesque plot about a rebellious archangel is nowhere to be found in the entire Bible. Instead, the evolution of Lucifer and his conflation with Satan involves misinterpretation, misinformation, and flat-out fabrication on the part of church fathers, saints, and poets.
The modern concept of forgiveness is fundamentally flawed. Instead of learning to forgive, we should learn to resent rightly, and, in some cases, to pardon.
As late as the eighteenth century, plagues were believed to be caused by polluted air and mediated by creatures of putrefaction such as rodents and witches. Public health specialists agree that the sanitary efforts brought about under the miasmic paradigm were effective. If we think that current views of epidemics and pollution are devoid of the mythical thinking of our predecessors, we might humbly want to reassess our position. The mythical is still embedded in scientific inquiry today.
As it turns out, if our purpose is to test the simulator hypothesis against religious belief, it is only in the specifics that we can easily distinguish between the two. The Deist God, who creates the universe and then leaves it to run entirely on its own, is not easily disambiguated from the hands-off simulator. One might well call them one and the same. Similarly, the Platonic ideal of good, which remains removed and remote in eternal perfection while the demiurge creates the world in imitation of it, need not change at all if we choose to think of the demiurge as working with pixels and electrons rather than with primal matter. Such abstract, philosophical conceptions of God are general enough that even a shift as dramatic as reconceptualizing reality itself as a simulation can be integrated relatively easily. It is more of a challenge, however, to reconfigure the simulation hypothesis in order to yield the specificity of Christ.
As we sink deeper and deeper into the realm of religion, we find ourselves forced to face up to a core religious dilemma of the modern, globalized world, the same dilemma glossed over by Pascal in his wager: In a world filled with so many different and often contradictory religions, how would we choose one as more plausible than the others?
From a Neoplatonic point of view, what goodness there is in our world must come from the world deeper than ours, the one doing the simulating. The evil and chaos and disorder could all be nothing more than random numbers firing, but the beauty and the nobility and the truth in the world demand some source. And if the next world deeper is somehow a dirtier, nastier, less good place than ours, then our world must be reflecting some yet higher-still world toward which the artisans who created our simulation are striving.
When God is in everything, and everything is within God, does that not implicate God in our crimes of the spirit as well? Is God present in our angers and our wars, our dirty jokes and our pornography? Here, perhaps, we have made a mistake by conflating God, as traditionally conceived, with our conception of “the Dungeon Master,” who is merely the maximally simple simulator. But then again, our entire purpose was to determine if there is any necessary connection between the two: between the simulator predicted by Nick Bostrom’s theory and God as envisioned by theologians and believers throughout the ages.
The simulation theory, however, does not have to be turtles all the way down. For example, imagine that somewhere along the chain of simulators, perhaps directly above us (what Bostrom calls “below”), or perhaps much further on up toward the top, we reach an entity we might call the “maximally simple simulator,” an entity of pure and limitless intellect, unbounded in time (and therefore eternal), with no body at all, in a universe containing nothing else but itself, the simplest possible universe.
The reason, perhaps, that Bostrom’s demonstration of the probability of God’s existence has received so little attention (especially as compared to the stir and commotion caused by his demonstration of the probability that we live in a simulation, and despite the fact that both conclusions are entailed by the exact same line of argument) is that readers have failed to note the connection between Bostrom’s simulator and God.
In the years since Owen Flanagan’s The Bodhisattva’s Brain, there have been thousands of studies, of varying degrees of quality, on the effects of meditation on the human brain. Here, Lachlan Dale reviews some of the highlights of that research as it’s presented by Daniel Goleman and Richard Davidson in Altered Traits.
The setup of Pascal’s Wager, as this argument is generally known, is quite similar in form to Newcomb’s paradox. The glass box with the visible $1000 bill is your ordinary life on earth: you know it exists, and it is yours to spend. The opaque box is your eternal reward. It might be empty, or it might be filled with a vast reward far beyond the one in the glass box. You will discover which is the case only when you die and the box is opened. Do you take the glass box with the known but finite reward, or the opaque box that could have nothing or everything inside it?
The thing about the Basilisk that makes it so scary is its combination of vast power with certain weaknesses, both human and mechanical. It is designed by human beings to be the greatest and most benevolent force in the universe, but all we can gift it is our best guess at an ultimate rational moral standard, utilitarianism, the greatest good for the greatest number. And as a machine, it administers this implacably, and entirely without mercy. Roko’s Basilisk is scary because it is simultaneously our parent and our child.
Another possible strategy for fending off the robot apocalypse is to ask if there are characteristically human traits that are humanity-preserving, and if so, whether those can be passed along to our machines. What is it that has given us our identity as a species all these years, and that, if we lose it, puts us at risk of losing everything?
Given how likely killer robots are, and how clearly the paths we are currently embarked on lead to that eventuality, can this destiny be averted? Acceptance of the unstoppable inevitability of progress is the motivation behind yet another approach to artificial intelligence called “Friendly AI.” It starts with the assumption that runaway technological progress is inevitable, that one of the many teams around the world working on artificial intelligence will soon succeed, and that a disastrous robot apocalypse is by far the most likely result. Given that, the belief of the Friendly AI camp is that it is absolutely essential that we ensure the first artificial superintelligence is “friendly,” meaning that it has the best interests of humanity at heart, and is willing and able to protect us from its nastier cousins.
For a more realistic portrait than Kurzweil’s of what a future dominated by technology might look like, one plausible place to start is with our present domination by technology, and how it is already transforming us as human beings. For example, why has our society become so oriented around statistics that they mean the difference between success and failure, promotion or demotion, profit or loss, in so many different realms of life? As it turns out, what the computers do not see—what they cannot see, what is invisible both to the computer and to all those at the upper levels of management who see through the eyes of the computer—are all the purely human interactions of any job. And depending on what the job is, it can be the core competencies of the profession that end up neglected.
In 1989, Star Trek: The Next Generation, the second major iteration of the durable televised Star Trek science fiction franchise, introduced a terrifying new villain called “the Borg.” An unhallowed melding of a humanlike life form with cybernetic technology, the individual members of the Borg were born, raised, lived, and presumably died entirely surrounded by technological innovations. There was no such thing as “natural childbirth” for them: they were cloned mechanically, nurtured in artificial wombs, and raised to maturity in pods. An implacable collective intelligence, they mercilessly converted any creatures they encountered into extensions of themselves, cannibalizing their planets for raw materials, and sucking other intelligent lifeforms into the inescapable machine.