A 2012 article in The New Yorker reads: "With or without robotic soldiers, what we really need is a sound way to teach our machines to be ethical. The trouble is that we have almost no idea how to do that."
Four years later, researchers at MIT may have found an answer: crowdsourcing.
The Moral Machine is introduced by its creators as "a platform for getting a human perspective on moral decisions made by machine intelligence, such as self-driving cars." You’re invited to judge a variety of scenarios—all endless variations of the trolley problem—and decide who should live and who should die. The passengers or the schoolchildren? The homeless guys or the bank robbers? How about one homeless guy and two doctors? Men or women? Older folk or younger people? Fat people or skinny people? Et cetera. At the end of the test, your responses are tallied and you’re given insights into your own "moral intuitions" and how they compare to other people’s, e.g., you tend to prefer saving men over women, or pets over people.
While the creators of the Moral Machine say they want to “take the discussion further” on how machines should make decisions when faced with moral dilemmas, it is actually (spoiler alert) an experiment. At the end of the "test" you’re told, in fine print, that it was all “part of a research study on ethics of autonomous machines, conducted as a data collection survey,” and that test-takers were not informed so as not to influence their responses. Of course, you’re given the option not to share your data (if you click “here”), but if you do nothing, your data will be collected and used in a way that is “invaluable” for “autonomous machine ethics and society.”
A few questions immediately spring to mind: How is the data going to be used? It is difficult to think of an “invaluable” use of our moral intuitions that does not involve feeding them into a giant moral code that will one day be built into the software of driverless cars. And if that is the case, is it moral to not warn participants that their responses will one day be turned into literal “life or death” decisions? Equally importantly, how does knowing (or not knowing) this influence the study?
Test-takers seem to be fascinated with the insights it gives into their own moral psyche and appear unconcerned by the prospect of their intuitions being turned into laws. From saving "old criminals" to "fat, rule-breaking, female babies," people have enthusiastically shared their own surprising preferences, making #MoralMachine a trending topic. But this suggests we’re treating it as merely a window into our own individual moral preferences, when in fact our intuitions, taken as a whole, can form a moral law.
The various levels of morality at play might seem confusing. Within the actual test, it’s not clear whether the focus is on making decisions in those particular situations, or whether we’re invited to give our own version of rule utilitarianism. The question "what should the driverless car do?" does imply that whatever you teach the car to do in that particular situation, it will replicate and apply to all similar situations in the future; however, it’s hard to imagine that all test-takers make their decisions with this awareness in mind. Then there’s the level of morality outside the test, seemingly separate but in fact more intimately connected than test-takers are allowed to know. This certainly poses interesting questions about the universalizability of our ethical choices.
The other issue is how knowing that the Moral Machine is a data collection survey would impact the survey itself. If test-takers are misled into thinking that it’s all a fun game-test and nobody is going to suffer as a result of their choices, doesn’t that make the choices in themselves rather pointless? If you’re encouraged to think of the test-victims as colorful, cartoonish figures you’re just playing with from the comfort of your couch—how much of an ethical dilemma is it still? In the trolley problem and its variations, aren’t the choices supposed to be made with the moral gravitas that comes from deciding the fate of human lives?
It might, of course, be argued that this very detachment, the kind that comes from being unaware, is exactly what the experiment aims to create. That argument rests on the assumption that the less involved you are, the better your judgment will be, and that the comfort-of-your-sofa level of detachment is the ideal emotional place from which to make life-changing decisions. Of course you don’t want to make moral decisions from the same place of desperation and overflowing adrenaline you would feel when facing a car crash—that would defeat the purpose of a driverless car—but have we decided that extreme emotional detachment should be our moral barometer instead? As driverless cars are taken for test runs around the world, it feels as though these debates have already taken place and definitive conclusions have already been drawn.
Some of these debates have to do with the trolley problem itself. For those who don’t feel comfortable quantifying human lives and lean toward incommensurability, deciding whether two women doctors are more valuable than three elderly men might feel particularly painful. You might also cringe at the superficiality of judging someone’s worth based solely on their profession or social status, or feel that, given the invaluable moral orientation they provide, rules are too precious to give up every time a group of prestigious doctors jaywalks. Or, finally, you might just think that since we prize AI for being exponentially more intelligent than us, maybe moral machines should not be programmed with our flawed moral systems, but left to develop their own.
Perhaps this is exactly the kind of conversation the creators of the Moral Machine hoped to spark. Or on the contrary, perhaps there is less room for philosophical reflection than we think, as philosophers, ethicists, passengers, and pedestrians have all come a little too late to the moral debate party. After all, some of the most influential people in our world right now are guided by the "move fast and break things" motto, and it is software engineers and high-tech investors who have been dubbed “the 21st century philosophers.”
Ana Sandoiu is a writer, researcher & philosophy lover living in Brighton, UK. She also writes on her personal blog, On a Saturday Morning.
Christopher Frederick says:
Ugh… This is all part of Transhumanism. I contend we, the collective We, are lost in this high-tech world. More thoughts to come.
Richard Keorkunian-Rivers says:
These scenarios are remarkably poorly designed. Choosing to crash the car is not tantamount to choosing to kill the occupants, because surviving a car crash is more feasible than surviving being hit by a car as a pedestrian.