How Moral Is the Moral Machine?

August 31, 2016 by Ana Sandoiu

A 2012 article in The New Yorker reads:

With or without robotic soldiers, what we really need is a sound way to teach our machines to be ethical. The trouble is that we have almost no idea how to do that.

Four years later, researchers at MIT may have found an answer: crowdsourcing.

The Moral Machine is introduced by its creators as "a platform for getting a human perspective on moral decisions made by machine intelligence, such as self-driving cars." You’re invited to judge a variety of scenarios—all endless variations of the trolley problem—deciding who should live and who should die. The passengers or the schoolchildren? The homeless guys or the bank robbers? How about one homeless guy and two doctors? Men or women? Older folk or younger people? Fat people or skinny people? Et cetera. At the end of the test, your responses are tallied and you’re given insights into your own "moral intuitions" and how they compare to other people’s, e.g., you tend to prefer saving men over women, or pets over people.
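
To make the mechanics concrete, here is a toy sketch of how crowd-sourced judgments like these might be tallied into the kind of preference summary the site reports back. The scenario encoding and the scoring are illustrative assumptions on my part, not the Moral Machine's actual pipeline:

```python
from collections import Counter

# Hypothetical encoding: each judgment records the attributes of the
# group the respondent chose to spare and of the group sacrificed.
judgments = [
    {"spared": ["woman", "doctor"], "sacrificed": ["man", "elderly"]},
    {"spared": ["passenger"],       "sacrificed": ["pedestrian"]},
    {"spared": ["woman", "young"],  "sacrificed": ["man", "elderly"]},
]

spared, sacrificed = Counter(), Counter()
for judgment in judgments:
    spared.update(judgment["spared"])
    sacrificed.update(judgment["sacrificed"])

# Crude per-attribute summary: how often an attribute ended up on the
# spared side, out of all the times it appeared in a dilemma.
for attr in sorted(set(spared) | set(sacrificed)):
    total = spared[attr] + sacrificed[attr]
    print(f"{attr}: spared {spared[attr]} out of {total} appearances")
```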

While the creators of the Moral Machine say they want to “take the discussion further” on how machines should make decisions when faced with moral dilemmas, it is actually (spoiler alert) an experiment. At the end of the "test" you’re told, in fine print, that it was all “part of a research study on ethics of autonomous machines, conducted as a data collection survey,” and that test-takers were not informed in advance so as not to influence their responses. Of course you’re given the option not to share your data (if you click “here”), but if you do nothing, your data will be collected and used in a way that is “invaluable” for “autonomous machine ethics and society.”

A few questions immediately spring to mind: How is the data going to be used? It is difficult to think of an “invaluable” use of our moral intuitions that does not involve feeding them into a giant moral code that will one day be built into the software of driverless cars. And if that is the case, is it moral not to warn participants that their responses will one day be turned into literal “life or death” decisions? Equally important, how does knowing (or not knowing) this influence the study?

Test-takers seem to be fascinated with the insights it gives into their own moral psyche and appear unconcerned by the prospect of their intuitions being turned into laws. From saving "old criminals" to "fat, rule-breaking, female babies," people have enthusiastically shared their own surprising preferences, making #MoralMachine a trending topic. But this suggests we’re treating the test as a window into our own individual moral preferences, when in fact our intuitions, taken together, could end up forming a moral law.

The various levels of morality at play might seem confusing. Within the actual test, it’s not clear whether the focus is on making decisions in those particular situations, or whether we’re invited to give our own version of rule utilitarianism. The question "what should the driverless car do?" does imply that whatever you teach the car to do in that particular situation, it will replicate and apply in all future similar situations; however, it’s hard to imagine that all test-takers make their decisions with this awareness in mind. Then there’s the level of morality outside the test, seemingly separate but in fact more intimately connected than the test-takers are allowed to know. This certainly poses interesting questions about the universalizability of our ethical choices.
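
One way to picture that worry is as a lookup table: a single answer given in the test becomes a rule keyed to the situation's features, replayed in every future situation that matches. A minimal illustrative sketch, with a feature encoding invented purely for the purpose:

```python
# Rules map a situation "signature" (a frozen set of features) to an action.
rules: dict[frozenset, str] = {}

def teach(features: set[str], action: str) -> None:
    """Record one test-taker's judgment as a general rule."""
    rules[frozenset(features)] = action

def decide(features: set[str]) -> str:
    """Replay the learned rule for any future situation with the same features."""
    return rules.get(frozenset(features), "brake")  # fallback: just brake

# A single answer in the "test"...
teach({"2 pedestrians", "1 passenger", "pedestrians jaywalking"}, "swerve")

# ...is silently generalized to every later situation that looks the same.
print(decide({"2 pedestrians", "1 passenger", "pedestrians jaywalking"}))  # swerve
```

The point of the sketch is the gap it exposes: the test-taker answers a one-off question, while the system treats the answer as universal policy.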

The other issue is how knowing that the Moral Machine is a data-collection survey would impact the survey itself. If test-takers are misled into thinking that it’s all a fun game and nobody is going to suffer as a result of their choices, doesn’t that make the choices themselves rather pointless? If you’re encouraged to think of the test’s victims as colorful, cartoonish figures you’re just playing with from the comfort of your couch, how much of an ethical dilemma is it still? In the trolley problem and its variations, aren’t the choices supposed to be made with the moral gravitas that comes from deciding the fate of human lives?

It might, of course, be argued that this very detachment, the kind that comes from being unaware, is exactly what the experiment aims to create. That argument rests on the assumption that the less involved you are, the better your judgement will be, and that the comfort-of-your-sofa level of detachment is the ideal emotional place from which to make life-changing decisions. Of course you don’t want to make moral decisions from the same place of desperation and overflowing adrenaline you’d feel when facing a car crash—that would defeat the purpose of a driverless car—but have we decided that extreme emotional detachment should be our moral barometer instead? As driverless cars are being taken for test runs around the world, it feels as though these debates have already taken place and definitive conclusions have already been drawn.

Some of these debates have to do with the trolley problem itself. For those who don’t feel comfortable quantifying human lives and lean toward incommensurability, deciding whether two women doctors are more valuable than three elderly men might feel particularly painful. You might also cringe at the superficiality of judging someone’s worth solely by their profession or social status, or feel that, given the invaluable moral orientation they provide, rules are too precious to give up every time a group of prestigious doctors jaywalks. Or, finally, you might just think that since we prize AI for being exponentially more intelligent than us, moral machines should not be programmed to incorporate our flawed moral systems, but be left to develop their own.

Perhaps this is exactly the kind of conversation the creators of the Moral Machine hoped to spark. Or on the contrary, perhaps there is less room for philosophical reflection than we think, as philosophers, ethicists, passengers, and pedestrians have all come a little too late to the moral debate party. After all, some of the most influential people in our world right now are guided by the "move fast and break things" motto, and it is software engineers and high-tech investors who have been dubbed “the 21st century philosophers.”

Ana Sandoiu is a writer, researcher & philosophy lover living in Brighton, UK. She also writes on her personal blog, On a Saturday Morning.



Comments

  1. Christopher Frederick says

    August 31, 2016 at 8:23 am

    Ugh… This is all part of Transhumanism. I contend we, the collective We, are lost in this high-tech world. More thoughts to come.

  2. Richard Keorkunian-Rivers says

    September 19, 2016 at 1:56 am

    These scenarios are remarkably poorly designed. Choosing to crash the car is not tantamount to choosing to kill the occupants, because surviving a car crash is more feasible than surviving being hit by a car as a pedestrian.
