Saints & Simulators 12: #BadAI

April 18, 2019 by Chris Sunami

Twelfth in an ongoing series about the places where science and religion meet. The previous episode is here; the next episode is here.

In 1989, Star Trek: The Next Generation, the second major iteration of the durable televised Star Trek science fiction franchise, introduced a terrifying new villain: the Borg. An unhallowed melding of a humanlike life form with cybernetic technology, the individual members of the Borg were born, raised, lived, and presumably died entirely surrounded by technology. There was no such thing as “natural childbirth” for them; they were cloned mechanically, nurtured in artificial wombs, and raised to maturity in pods. An implacable collective intelligence, they mercilessly converted any creatures they encountered into extensions of themselves, cannibalizing their planets for raw materials and sucking other intelligent lifeforms into the inescapable machine. Their inhuman efficiency was symbolized visually by the design of their spaceships: huge flying cubes, utterly devoid of aesthetic concerns and entirely lacking the uselessly aerodynamic shapes of the show’s human-piloted ships.

As with all great science fiction terrors, what made the Borg so scary was a faint whiff of familiarity, a certain plausibility that clung to their entirely alien ideology. In certain ways, it seemed we were not so far off from the Borg, that they were at least as likely a future for the human race as the show’s bold and handsome human protagonists. The Borg, in fact, are not much different from Ray Kurzweil’s ecstatic vision of a human-machine union, except stripped of the faith that our essential humanity will somehow survive the merger.

It is also not difficult to draft a scenario in which the human race transforms itself into a Borg-like creature, subtly, and without even realizing it. The process is arguably already underway. Natural human fertility is in sharp decline. With the invention, and dramatic rise in use, of technologically aided reproduction, hereditary infertility, once a virtual contradiction in terms, becomes a real possibility, perhaps even an inevitability. Similarly, the increasing universality of computer-aided dating makes the social skills needed to find a mate on one’s own less of a strict biological imperative.

Harmless, and even beneficial, as fringe phenomena, these trends take on unintended consequences as they become more central and widespread, because they make possible a future in which human reproduction can take place only with technological aid, and natural human fertility diminishes and disappears. If that ever happens, evolutionary pressure will select for our technological fitness rather than our biological fitness. Then, like a symbiont absorbed by its host, our species could permanently lose control of its own destiny and survive solely as the biological component of a cyborganic hybrid: a singularitarian’s dream, but for the rest of us, a nightmare.

Even outside the possibility of a future as the biological hardware of an implacable machine intelligence, however, there are many reasons to suspect that the advance of the machines might not be as rosy a picture as the one painted by Kurzweil. Some of these reasons were anticipated as far back as Czech writer Karel Čapek’s 1920 play R.U.R., which not only coined the word “robot” but also introduced the trope of killer robots breaking out of our control.

Apocalypse has more dramatic potential than apotheosis, of course, and perhaps this explains why science fiction writers and movie directors have so often echoed Čapek in exploring nightmare versions of the transhumanist vision, in which intelligent computers decide they no longer need us and act to exterminate or enslave the human race—think HAL, the murderously insane computer of 2001; the killer robots of Terminator’s Skynet; or the all-encompassing computer-generated illusionists of The Matrix. But is this anything more than just another lucrative paranoia of Hollywood, a pleasantly frightening impossibility? Is there reason to think advances in technology might actually end in a robotic Armageddon? Unfortunately, and perhaps surprisingly, the answer is yes. Many of those who have put real study into these possibilities (Bostrom, for example) have come to believe that killer robots are in fact terrifyingly likely.

The simplest and most plausible way killer robots could come into existence is if people build them, on purpose. And in fact, early versions of such human-slaughtering machines already exist. Although it may sound like a baroque, Cold War–era sci-fi conceit, remote assassination by robotic flying drones became a standard, if controversial, part of American foreign policy during the Obama administration. And while such drones are still piloted by human beings today, the military is heavily invested in the development of artificial intelligence. It is not much of a stretch to imagine a simple artificial-intelligence routine, such as that used to program the bad guys in a video game, introduced into a real-world drone with deadly consequences.
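To see how small the step is, consider a minimal sketch (in Python; every name and number here is hypothetical, invented purely for illustration) of the kind of “enemy AI” routine found in countless video games: pick the nearest target, close the distance, fire when in range.

```python
import math

# A minimal sketch of a video-game-style "enemy" routine.
# All names and numbers are hypothetical, for illustration only.

FIRING_RANGE = 5.0

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def step_toward(position, target, speed=1.0):
    """Move one step of length `speed` toward the target."""
    dx, dy = target[0] - position[0], target[1] - position[1]
    d = math.hypot(dx, dy) or 1.0  # avoid division by zero
    return (position[0] + speed * dx / d, position[1] + speed * dy / d)

def enemy_tick(position, targets):
    """One tick: pick the nearest target, pursue it, 'fire' when in range."""
    if not targets:
        return position, None
    nearest = min(targets, key=lambda t: distance(position, t))
    if distance(position, nearest) <= FIRING_RANGE:
        return position, nearest  # "fire" at the nearest target
    return step_toward(position, nearest), None
```

The unsettling thing is not the sophistication of such a routine but its indifference: nothing in the logic knows whether the coordinates belong to sprites on a screen or to people on the ground.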

The second path to killer robots is via hackers: bored teenagers, internet trolls, terrorists, Russian spies, or Nigerian scam artists—any of whom could (for example) drop a computer virus under the hood of a self-driving car, yielding an autonomous robotic weapon capable of mass carnage without the sacrifice of a suicidal driver.

The third path to killer robots is programming bugs, the inescapable bane of every programmer’s life. A bug in a video solitaire game is a nuisance, and a bug in an accounting program can wipe out millions of dollars. But a bug in a self-driving car—or a jumbo-jet autopilot—costs human lives, a scenario that is sadly no longer hypothetical.
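How small can a fatal bug be? Here is a hypothetical illustration (the scenario and every name in it are invented for this example): a braking check that silently assumes the wrong unit.

```python
# A hypothetical illustration of how small a deadly bug can be.
# The scenario and all names are invented for this example.

BRAKE_DISTANCE_METERS = 25.0  # brake if an obstacle is within 25 m

def should_brake(obstacle_distance):
    """Expects a distance in meters."""
    return obstacle_distance <= BRAKE_DISTANCE_METERS

# Suppose one sensor module reports distance in feet while the
# planner assumes meters. An obstacle 60 feet (about 18 m) away,
# well inside braking range, arrives as the bare number 60:
sensor_reading_feet = 60.0
print(should_brake(sensor_reading_feet))  # False: the car never brakes
```

In a solitaire game, this class of mistake misplaces a card; in a vehicle, it misplaces the world.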

It is the fourth way of getting killer robots, however, that truly alarms smart people like Bostrom, who describes it as the “paperclip AI” problem. It was perhaps most memorably anticipated in an early Philip K. Dick story, “Autofac” (1955), which depicts a group of human beings attempting to rebuild civilization after a devastating war while being forced to compete for scarce resources against a heavily protected automated factory, which continues to crank out its products regardless of the costs of producing them or of any demonstrated need for the end result.

As described by Bostrom, the frightening thing about this scenario is the intrinsic difficulty of avoiding it. It does not require the suicidal act of deliberately building our own executioners, or the clear mistake represented by programming bugs, or even a malign computer virus taking over the system. Rather, it describes what goes wrong when an ordinary computer does its job far too well. To quote Bostrom’s description of how such a scenario might proceed:

An AI, designed to manage production in a factory, is given the final goal of maximizing the manufacturing of paperclips, and proceeds by converting first the Earth and then increasingly large chunks of the observable universe into paperclips.

Today, even though we already have machines that manufacture paperclips, we do not worry about them running amok, because we are the ones who turn them on and off and who supply them with raw materials. If they ever reach the point, however, where they can design and build themselves, their power over their own destiny may grow until we can no longer turn them off so easily, and until they can go out and get for themselves as many raw materials as they need—or as they want.
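The logical core of the scenario can be caricatured in a few lines of code. The toy sketch below (all names and quantities invented for illustration) makes one thing vivid: the objective is just a number to be made as large as possible, and nothing in it encodes “enough,” or “leave some matter for everything else.”

```python
# A deliberately toy caricature of the paperclip maximizer.
# All names and quantities are invented for illustration.

def maximize_paperclips(available_matter_kg, clips_per_kg=1000):
    """Make the paperclip count as large as possible."""
    paperclips = 0
    while available_matter_kg > 0:   # the ONLY stopping condition
        available_matter_kg -= 1     # consume another kilogram...
        paperclips += clips_per_kg   # ...and turn it into clips
    return paperclips

# The loop halts only when the matter runs out. Scaled up, that is
# precisely the problem: the Earth is just more available matter.
print(maximize_paperclips(available_matter_kg=10))  # 10000
```

A smarter optimizer would not escape this logic; it would simply get better at enlarging the pool of available matter.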

One objection to this scenario is that we will surely always build our machines with a fail-safe “kill switch” that allows us to pull the plug if we need to. That, however (quite apart from assuming that no manufacturer ever omits one by oversight), holds true only as long as we are the ones designing and building the machines. If the machines are designing and building themselves, they might well “decide” that a kill switch is unnecessary and inefficient.

Another objection is that any machine smart enough to make manufacturing decisions would surely be smart enough to understand that converting the entire world into paperclips is ultimately self-destructive. Advanced computers, after all, are not like brainless alcohol-producing yeasts or oxygen-producing bacteria, whose success at their job continues without bounds until they poison and kill themselves with their own product.

This objection, however, misunderstands, or at least makes unsupported assumptions about, machine intelligence. As Moravec’s Paradox (named after robotics pioneer Hans Moravec) tells us, it is trivially easy for computers to do things that human beings find difficult, like multiplying ten-digit numbers. But it is often extremely hard to teach them to do things that we find easy, like duplicating our common-sense intuitions. The idea that a computer designed to make paperclips will ever master the concept “enough is enough” may be overly anthropomorphic.
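Moravec’s asymmetry is easy to exhibit on the machine-friendly side and conspicuously impossible on the other. In the sketch below (Python; the second function is a deliberately hypothetical stub), the task humans find hard takes one working line, while the task humans find easy has no known implementation at all.

```python
# The machine-friendly half of Moravec's Paradox: exact arithmetic
# on ten-digit numbers is one trivial line.
print(9_876_543_210 * 1_234_567_890)  # 12193263111263526900

# The human-friendly half, as an honest stub. The name and signature
# are hypothetical; no one knows how to write the body.
def enough_is_enough(paperclips_so_far, state_of_the_world):
    """Return True when making more paperclips stops making sense.
    Humans judge this effortlessly; machines have no such routine."""
    raise NotImplementedError
```

Until someone can fill in that second function, the comforting assumption that a sufficiently smart factory will know when to stop remains exactly that: an assumption.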

References

Nemecek, Larry, Star Trek: The Next Generation Companion, Pocket Books, 2003.

McKie, Robin, “The Infertility Crisis is Beyond Doubt. Now Scientists Must Find the Cause,” The Guardian, July 29, 2017.

Bostrom, Nick, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, New York, 2014.

Vlasic, Bill and Neil E. Boudette, “Self-driving Tesla Was Involved in Fatal Crash, U.S. Says,” The New York Times, June 30, 2016.

Dick, Philip K., “Autofac,” Minority Report and Other Classic Stories by Philip K. Dick, Citadel, 2002.

© 2017–2019 Christopher Sunami.

Chris Sunami writes the blog The Pop Culture Philosopher, and is the author of several books, including the social justice–oriented Christian devotional Hero For Christ. He is married to artist April Sunami, and lives in Columbus, Ohio.

