When the Partially Examined Life discussion of human enhancement (Episode 91) turned to the topic of digital technology, the philosophical oxygen was sucked out of the room. Sure, folks conceded that philosopher of mind Andy Clark (not mentioned by name, but implicitly referenced) has interesting things to say about how technology upgrades our cognitive abilities and extends the boundaries of where our minds are located. But everything else was more or less dismissed as concerning not-terribly-deep uses of “appliances”.
I think this is a misguided way to look at technology. It dramatically underestimates how technologically mediated behavior can impact character and autonomy.
OK, let’s start where the action already is. Within the philosophy of mind, there’s lots of debate about whether, as Clark and David Chalmers insist, technologies like iPhones should be considered bona fide parts of our minds. Here’s Chalmers:
“I bought an iPhone. The iPhone has already taken over some of the central functions of my brain . . . The iPhone is part of my mind already . . . [Clark’s] marvellous book . . . defends the thesis that, in at least some of these cases the world is not serving as a mere instrument for the mind. Rather, the relevant parts of the world have become parts of my mind. My iPhone is not my tool, or at least it is not wholly my tool. Parts of it have become parts of me . . . When parts of the environment are coupled to the brain in the right way, they become parts of the mind.”
Now, this is not the place to assess the philosophical ideas that Clark and Chalmers propose. That would require moving beyond empirical studies of how effectively humans can offload computational tasks to technology and into complex and contentious metaphysical territory. But if you’re unfamiliar with the discussions and want a quick sense of the reasons marshaled to justify the “extended mind” thesis, check out this engaging TEDx talk and NY Times think-piece. And if you want to understand why more traditionally minded philosophers reject the outlook, read Jerry Fodor’s lively book review of Supersizing the Mind: Embodiment, Action, and Cognitive Extension.
Regardless of whether you believe that technology is best viewed as a tool for solving cognitive problems or as an integral part of our mental machinery, there’s an important distinction to draw—one that was absent from the Partially Examined Life conversation and, so far as I can tell, doesn't get much play in academic philosophy. (Although useful parallels can be drawn to ideas defended by Albert Borgmann, a major thinker in the Philosophy of Technology.)
Outsourcing tasks that solve cognitive problems is not morally equivalent to outsourcing tasks that manage close personal relationships.
Let’s consider a concrete case. Imagine using an app that reminds you to contact your significant other and even suggests what should be conveyed when it reaches out on your behalf via time-delayed text messages that appear to be coming from you. Are these functions nothing more than variations of longstanding time-management technologies that alert us to scheduled appointments and important dates?
This is no mere hypothetical thought experiment. Over at The Atlantic and Wired I recently discussed two apps that offer users these exact features: Romantimatic and BroApp. With respect to neurotypical people using the former, I wrote:
“There’s no shame in recording some special, easy-to-forget events on a calendar. There’s nothing wrong with soliciting advice from friends and family as to what type of a gift your partner would prefer for some occasions. And under some circumstances—like when you’re trying to avoid a nasty fight—it is perfectly fine to get advice on how best to word a sensitive point. But these moments of dependence on others for relationship management—of both people and things—should be the exception. If they are the rule, your character is impaired. Serious questions need to be asked about why that’s the case and how often you’re behaving inappropriately . . . . While the person who needs a tool like Romantimatic isn't morally callous, I do wonder how well they respond to other social situations that require conscientiousness and caring.”
With respect to the latter, I wrote:
“Ultimately, the reason technologies like BroApp are problematic is that they’re deceptive. They take situations where people make commitments to be honest and sincere, but treat those underlying moral values as irrelevant — or, worse, as obstacles to be overcome. If they weren’t, BroApp’s press document wouldn’t contain cautions like: ‘Understandably, a girl who discovers their guy using BroApp won’t be happy.’ . . . It’s easy to think of technologies like BroApp as helpful assistants that just do our bidding and make our lives better. But the more we outsource, the more of ourselves we lose.”
David Berreby thought long and hard about these points, and in a great piece for Big Think came to an important conclusion about how some instances of technological outsourcing can diminish autonomy.
“Romantimatic doesn't help you with a decision to text ‘I love you’ at 3:15. It makes the decision (and, if you're using it to the hilt, it also decides that the text will contain those precise words). . . . When you outsource decisions that require self-monitoring and self-management, then, you're giving up some autonomy. First, you are, literally, making fewer decisions about what to do. Second, you are trusting yourself less, setting aside your own understanding of your condition in favor of supposedly objective measurements.... Third, you're offloading the real work of decision-making—the psychic act of ‘self-binding,’ forcing yourself to do what you aren't at the moment inclined to do. Something else is doing that job.”
And, as Michael Sacasas, creator of the blog The Frailest Thing, aptly points out, we should not only distinguish intellectual from ethical processes, but also give deep philosophical thought to the difference between ‘essential’ and ‘accidental’ labor.
“The problem, I think, involves a conflation of intellectual labor with ethical/emotional labor. For better and for worse, we’ve gotten used to the idea of outsourcing intellectual labor to our devices. Take memory, for instance. We’ve long since ceased memorizing phone numbers. Why bother when our phones can store those numbers for us? On a rather narrow and instrumental view of intellectual labor, I can see why few would take issue with it. As long as we find the solution or solve the problem, it seems not to matter how the labor is allocated between minds and machines. To borrow an old distinction, the labor itself seems accidental rather than essential to the goods sought by intellectual labor.
When it comes to our emotional and ethical lives, however, that seems not to be the case. When we think of ethical and emotional labor, it’s harder to separate the labor itself from the good that is sought or the end that is pursued.
For example, someone who pays another person to perform acts of charity on their behalf has undermined part of what might make such acts virtuous. An objective outcome may have been achieved, but at the expense of the subjective experience that would constitute the action as ethically virtuous. In fact, subjective experience, generally speaking, is what we seem to be increasingly tempted to outsource. When it comes to our ethical and emotional lives, however, the labor is essential rather than accidental; it cannot be outsourced without undermining the whole project. The value is in the labor, and so is our humanity.”
If we value conscientious relationships and thoughtful decision-making, we'll need to be vigilant about preserving them in the years ahead. A recent poll conducted by Amy Vernon gives us reason to feel optimistic about the future, but as it becomes ever more technologically tempting to disburden ourselves of life’s hard but crucial work, we'll need to be crystal clear about where the line dividing cognitive from moral outsourcing lies, and when it shouldn't be crossed.