Podcast: Play in new window | Download (Duration: 1:32:31 — 84.8MB)
Continuing discussion of David Brin's novel Existence (without him) and adding Nick Bostrom's essay "Why I Want to Be a Posthuman When I Grow Up" (2006).
Are our present human capabilities sufficient for meeting the challenges our civilization will face? Should we devote our technology to artificially enhancing our abilities, or would that be a crime against nature, a God-play that would probably lead to disaster? Is thinking about this issue a juvenile waste of time?
Mark, Seth, Dylan, and Brian Casey are rejoined by Wes to reflect on ep. 90's discussion with David Brin and figure out how his project is related to transhumanism. While you'll get a more thorough introduction to transhumanism from Rationally Speaking or many other web sources, we did confront Bostrom's argument that extending our lives and enhancing our IQ and emotional range would be good, and human-all-too-human fun was had. Read more about the topic and get the text.
End song: "Waygo" from The MayTricks (1992). Read about and get the whole album for free.
Please support the podcast by becoming a PEL Citizen or making a donation.
Hey guys,
First off, thank you, thank you, thank you for posting this so quickly. I listened to 6 minutes and turned it off! Why? So that I can enjoy it and savor it, later, when there will be no distractions!!! You guys are beautiful, and coming from a misanthrope, that is high praise indeed!
Hi,
Just to Seth’s point about someone out there arguing that devices are expanding our consciousness, or that we are outsourcing mental tasks (like memory) to devices:
http://www.abc.net.au/radionational/programs/philosopherszone/the-extended-mind/2986780
Cheers,
Filipe.
Thanks for the link! Exactly the source of which I was thinking.
BTW: there’s an excellent blog post by Hans Ulrich Gumbrecht on the subject of extending the self by apps (don’t know if there’s also an English translation, but for someone who studied Heidegger this should be easy;-)): http://blogs.faz.net/digital/2012/08/31/our-brave-newest-world-apps-und-fertigkeitsreservate-84/
Seth, this is the fellow Andy Clark that Chalmers was working with on that, here talking about how we are natural-born cyborgs:
Thanks guys!
Sure, if you get a chance I’d be interested in yer take on it. I think he does a good job of talking about these matters in very current/relevant ways that aren’t merely speculative/sci-fi, and that are in keeping with the kinds of philo you folks cover here.
Wes, why so uncharitable? You may be ten times the philosopher that Brin will ever be, but he deserves credit for whetting philosophical appetites too.
Can we really just sit back and deal with each potentially disruptive technology after it is upon us? Are you certain that this debate is so premature? Should we fund research NOW into AI or into enhancing human (or other species’) genetics? Why or why not? There are huge dangers and huge opportunities, and I want people like you engaged in these discussions and decisions.
I don’t think the PEL guys were saying that the issue of transhumanism is unimportant, rather that it is not really a topic for philosophy, at least not yet (maybe it will be when we actually know whether technology can expand your emotional range, etc). As Wes (I think) said, it’s an issue of policy, not philosophy.
for folks interested in reading a philosopher waxing post-human:
http://open.academia.edu/DavidRoden
Roden knows his stuff and would make a great PEL guest. I really feel the PEL team, loveable as they are, missed the philosophical importance of trans and post humanism; as others on the forum have commented, the relationship between humans and technology is pretty damn interesting.
Very nice follow-up to the last episode, guys. I’m not sure where the line of demarcation lies between philosophy and ideas like those encompassed by “transhumanism”, but I agree with Donald’s post above, in that we definitely need philosophers and keen observers from other fields in on the conversation. I suspect part of the reason that guys like Brin characterize the humanities as backward-looking is that it enables them to depict those contributions to the conversation about the future as trivial or misinformed.
However, it seems to me that if philosophers entirely disregard stuff like transhumanism as fluff or fantasy they’re playing right into such a characterization. That would be declaring out of bounds a terrain upon which the inspirations for and decisions about real design are based (http://www.slate.com/articles/technology/technology/2013/04/google_has_a_single_towering_obsession_it_wants_to_build_the_star_trek_computer.single.html).
Jaron Lanier talks about this in one of his books: once a certain idea of how to do things is “locked in” by design, and then goes on to provide support for extensive further designs, it becomes entrenched and extremely difficult to alter. This is why we need critical thinkers to be involved on the front end far more than the “manic optimists”, as Wes called them.
Again, I thoroughly enjoyed this one, and thanks for getting it up so quick.
Daniel,
Do you think this might be more of a where-do-we-demarcate-“Philosophy”-from-“Political Activism” type of argument? I think the discussion of philosophy vs. policy is an interesting one, but, like the PEL guys, I maintain they are two very different things. Is it fair to say that the aim of philosophy, and by extension philosophizing, concerns what “is” and what that means? I feel like this discussion takes us into Humean is/ought territory. We’ve been down this road before with the ramifications of nuclear power and weaponry. Has humanity learned anything from that discovery/invention? I don’t think so, because the story isn’t over. Does nuclear energy via thorium, fusion, or responsible old-fashioned fission save humanity from global warming, or do we destroy ourselves with megaton bombs? For me, the topics of Transhumanism and Posthumanism are just too vague and boundless as ideas to be of use as tools for guidance. However, because we are aware of these things, it is clear we live in very interesting times!
Well, I think there’s a lot of “is” still to be unpacked in the values and presuppositions that go into the design of technologies. By no means am I saying that the conversation belongs totally or even mostly within the domain of philosophers, but I think philosophy is one very useful approach that can provide understanding(s) that policy makers or activists can draw upon.
I agree that “posthumanism” or “transhumanism” are rather vague and boundless at this point in time, but they nevertheless appear to be powerful motivating ideas for some of the people who are designing the technologies that will be commonplace not long from now. I believe Wes even compared them to religious ideas at one point, and I’d very much agree. I just think that, even if it means projecting a bit, philosophy can do some good expanding the scope of consideration that engineers and scientists have as they push forward. I also think that David Brin offered a good reminder that if philosophy wants a place at that table it’s going to have to elbow its way in.
I agree… I guess I’m more of a pessimist when it comes to this stuff. Philosophers and others (just about everyone) should definitely be at the table, but I think the zealotry involved in these technological endeavors, usually manifested in terms of money, rides roughshod over the concerns of the reasoned thinker. “It” will happen, and that is worrisome to an extent, but I doubt there is much that can be done to control “it.” George Orwell was quite prescient in warning us of Big Brother in 1948, but despite the protestations of many ever since, people willingly bring this technology into their homes of their own accord. Google Glass is just the beginning of this; imagine when it’s integrated into your anatomy. Then again, future generations will know no difference, so there might not be a “problem” from their point of view anyway… I’m on the verge of 40 and feel like a dinosaur already!
Another note –
To my mind, one of the major ways a philosopher might be useful in our current situation would be by trying to nitpick the limitations placed upon civic participation by technologies designed primarily to make it more efficient (which guys like Andrew Feenberg and Carl Mitcham have been doing). Broadly speaking, this kind of work sussing out the ethical implications of distancing technologies abounds for the willing philosopher, but I’m sure it would take many out of their comfort zones. Physically embedded computer technologies and smart city technologies are in their design phases right now, and it’s a safe bet that the philosophers of a few decades from now will be damning our ignorance when they lament some of their unforeseen effects.
Once Wes is done making the world safe for Woody Allen and belly dancers I am sure he’ll loop back around and help save the rest of us. (Nah, just kidding, after all he’s done for PEL, he owes us nothing.)
As far as this being a religion, well, we need something to fill the existential void after the death of God. We could do worse than a mission to help our descendants survive this adolescence of technology.
Just finished listening. Awesome!!! This was exactly the follow-up discussion and critical and humorous analysis that needed to follow the David Brin episode. It helped me to rethink a lot of what he said in a better context, like the logical fallacies thing, which I took to be something more like critical thinking skills.
And re-listening to the Rationally Speaking episode reminded me of some reservations I’ve had with transhumanism and such futurisms. If anyone doubts his comparison with Scientology, check out:
Welcome to Terasem Faith!
“We are a transreligion that believes we can live joyfully forever if we build mindfiles for ourselves.” http://terasemfaith.net/
The Truths of Terasem
http://terasemfaith.net/beliefs
http://www.terasemmovementfoundation.com/
http://www.terasemcentral.org/
much respect
Just a note from the sideline about transhumanism with technology. I think you were trying to find examples where there is actual transhumanism beyond “tools” like paper and pen. There are groups of people who oppose technology implants you wouldn’t think anyone could protest: a friend of mine got a cochlear implant (a hearing implant attached to functioning hearing nerves, letting deaf people hear almost normally) some years ago, and there are people within the deaf community who oppose this on the grounds that their deafness is both a bonding thing as a community and a matter of solidarity with those who cannot get the implant (damaged hearing nerves, for example). These are interesting examples of social constructs with long traditions that are breaking up through technological evolution, and some people would rather others be deaf *with* them than see that break up.
A couple years ago I saw an excellent documentary about the reactions to Cochlear implants in deaf communities. It was definitely one of the best technology docs I’ve seen, because it banished all of the usual abstractions and went straight into the complexities of a real family exploring different ways of thinking about tech and culture. I highly recommend it to anybody interested in that kind of thing.
Here’s a link about it: http://en.wikipedia.org/wiki/Sound_and_Fury_%28film%29
And the doc itself: http://www.youtube.com/watch?v=0ki4qo-Dfos
So I waded through a few of the Brin audio clips and as you rightly said, it is mostly the same material repeated over and over.
He has the same kind of fascination as Zizek and Nassim Taleb in that although you may not agree with a lot of what they say, and have difficulty stomaching their attitude, they do have nuggets of wisdom or insight that take your mind on new paths whilst helping crystallise your thoughts on why you disagree with them.
To be fair to him, listen from the 29-minute mark on here https://www.youtube.com/watch?v=M91gET7m7UI where he clearly takes the piss out of transhumanists and those who believe in the progress of science in much the same way as people of old believed in witchcraft and so on.
Interestingly, I am reading an awesome book called ‘The Outer Limits of Reason’ by Yanofsky and just read a chapter where he uses the limitations of computing to show that we will never achieve transhumanist goals regardless of Moore’s law etc., because of the difference in computation needed between polynomial and non-polynomial problems. It stems from the travelling salesman problem. http://en.wikipedia.org/wiki/Travelling_salesman_problem
Even if one grants quantum computing what it promises, it doesn’t remove this issue; it is still a non-polynomial problem. And that is assuming, as Brin says, that we don’t have to go deeper, when new evidence seems to show computation goes on below the neuron level, much as “junk” DNA now turns out not to be junk.
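To make the scaling concrete, here is a rough sketch of the brute-force approach to the travelling salesman problem (my own toy example in Python, not something from Yanofsky’s book): the number of tours to check grows factorially with the number of cities, which is why faster hardware alone barely moves the needle.

# Illustrative sketch only: brute-force TSP by checking every possible tour.
from itertools import permutations
from math import dist, factorial

def shortest_tour(cities):
    """Try every ordering of the cities after a fixed start: (n-1)! tours."""
    start, *rest = cities
    best = None
    for perm in permutations(rest):
        tour = (start, *perm, start)
        length = sum(dist(a, b) for a, b in zip(tour, tour[1:]))
        if best is None or length < best:
            best = length
    return best

print(shortest_tour([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 4 cities: only 3! = 6 tours
print(factorial(29))  # 30 cities: 29! is roughly 8.8e30 tours, hopeless at any clock speed

Cleverer exact algorithms and good heuristics exist, but no known method (quantum included) solves the exact problem in polynomial time, which is the limitation Yanofsky leans on.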
He has some interesting thoughts re: why aliens haven’t contacted us. Why would they bother going ‘out’ into the universe if they do what we seem to be doing and go into the web, building themselves virtual universes… hey, maybe we’ll find ‘them’ there?
A final thought: I don’t see him critiquing things from a libertarian perspective despite all his bogeymen clearly being examples of what libertarians don’t like: crony capitalism, colluding corporates (illegal), oligarchs (thieves), etc. etc. I don’t understand why so many libertarians aren’t clearer about the limits to libertarianism rather than sneering at Ayn Rand, who was a very narrow-minded libertarian, seemingly unaware that exchange and emergent properties are not only market phenomena. She might do well to read Hayek and think about the source of our moral codes, for example… non-zero-sum indeed. Using Rand to dismiss libertarians is a little like using Stalin to dismiss progressives… I exaggerate, but you get the point.
By the way, do you have a feature where we can get email updates to comments? Yours needily…
Good question. Once I comment on a thread or write a blog, I get notifications of all responses. Do you? Or is that just b/c I’m a moderator/member maybe? I’ll ask Dylan.
Haven’t read the book, but it sounds incredibly interesting; the way you describe it almost mimics my general musings. Re: the TSP thing, it’s more about NP problems in general, no?
I am not particularly familiar with transhumanist goals; however, I would be wary of what anybody tells me is possible many years from now. I mean, seriously, we know the entire limitations of the universe already? Damn, that was fast. I thought we were just beginning…
Many things may be algorithmically impossible; however, I would say there is evidence that not everything is algorithmic. No Turing machine can ever solve the halting problem. However, to say nothing can solve the halting problem is obviously untrue, because I can. Yet the brain requires input to produce output.
No matter how you want to twist it, there is a physical representation of the brain, which gives rise to cognition. You can argue it is more than that if you wish but that would merely be an emergent feature of the physical representation.
Perhaps I’m drawing a weak or logically faulty link here, but at some point, somewhere, something which is remarkably similar to computation is taking place. And whatever the process is, it is capable of finding solutions to problems we claim are impossible.
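For anyone curious why no Turing machine can do it, the standard diagonal argument can be sketched in a few lines of Python (a hypothetical illustration: the halts() function below is assumed to exist only for the sake of contradiction, it is not a real library call):

def halts(program, argument):
    """Assumed perfect oracle: True iff program(argument) eventually halts."""
    ...  # cannot actually be implemented; that is the point

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # 'program' on its own source.
    if halts(program, program):
        while True:
            pass  # loop forever
    else:
        return  # halt immediately

# Does paradox(paradox) halt? If halts(paradox, paradox) says True, paradox loops
# forever; if it says False, paradox halts. Either way the oracle is wrong,
# so no such halts() can exist.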
I’d be curious to hear what you guys thought of Bostrom’s simulation argument, which is probably what he’s best known for in the popular media. In short, it argues that if human consciousness is capable of being simulated, and human civilization ever actually goes on to create such simulations, then it’s very likely we’re living in such a simulation, as the number of “simulated worlds” would be so much higher than the number of real ones.
The argument has always struck me as intuitively absurd for various reasons, but surprisingly hard to refute.
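For what it’s worth, the “very likely” rests on a simple counting premise. A toy version (my own illustration, not Bostrom’s actual formulation): if each non-simulated civilization runs k observer-filled ancestor-simulations, then most observers are simulated ones.

# Toy counting premise behind the simulation argument (illustrative only).
def simulated_fraction(k):
    """Fraction of observers who are simulated if each real civilization
    runs k simulations containing observers like us."""
    return k / (k + 1)

for k in (1, 10, 1000):
    print(k, simulated_fraction(k))  # 0.5, then ~0.909, then ~0.999

Everything then hinges on whether such simulations are possible and actually get run in large numbers, which is exactly what the replies below question.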
So if someone in the future has the capability to create something like a simulator in which we – us hapless simulated consciousnesses – can’t tell the difference between the simulator we’re in and being real objects in real life, then we most likely live in that simulator because … something? Why are these simulated worlds so numerous? What if they’re so hard to make that they only ever did it once?
It’s only hard to refute because embedded in the premise is a description of what we know as something else, and I assume the argument is that we can never find out because the simulation is *that* good? Sounds like … magic, to me.
To me, it’s silly; simulating a universe to the complexity of what we experience requires more resources than are available in said universe. I think a lot of people take complexities for granted, but to simulate a beach with every individual grain of sand, not to mention every atom in the observable universe, requires so much computing power, memory, references, templates, dynamics, etc., in the technology we know (or technology we can foresee for some time) as to be absolutely impossible. “Yeah, but this is the future; you don’t know what amazing technology they have there!” So the tech in question which is able to do this is, well, magic. And magic is not a good premise for any argument. It’s an argument from pure speculation and magic cloaked as “technology of the future”.
So, we live in the year 2014, but really, we’re all inside a dream bubble dreamed up by an advanced alien race on a planet of slime who evolved massive brains that could connect up and share a dream world. They wanted to create a prettier place in their dream, so we’re far, far into the future, and we’re all dreamy consciousnesses inside slimy aliens. Their dream world started as a way for the aliens to cope with their slimy, unpleasant world, but now they’ve dreamed for so long they’ve forgotten how to look at reality, and have forgotten how to get back out of it. Energy is no problem; their bodies suck in all the energy they need through their skin from the slime they roll around in. And you can’t refute this argument, because you’re making it inside that dream, and you’re simulated. So there. 🙂
This might be of interest to the community:
http://www.newrepublic.com/article/117242/siris-psychological-effects-children
That’s fascinating on so many different levels.
Wow, you totally saved the last episode with this one.
Huge kudos!
Great episode! Also, please get Zizek as a guest. An episode like that’s gotta be good!
It seems to me the philosophical problem of technology or tools altering the way we look at the problem of subject-object dichotomy or expand the idea of what it means to be human is rather an old one, and I don’t necessarily see a giant qualitative difference in the kinds of science fiction speculations that transhumanism offers. Many cultures do not consider humans to be bounded individuals, but rather constituted through the flow of relationships between objects, humans, animals etc. (I guess you could think of them as variations of the extended mind hypothesis). By looking at these cultures we can already relativize our understanding of personhood, agency, the mind, sociality, etc. much more radically than, I’d argue, thinking about how the iPhone or technological augmentations change the way we relate to other people. Thus I tend to share the indifference expressed in the podcast.
I think using technology to artificially enhance ourselves is wrong. I’ve just started philosophy again, so my logic may tie itself in knots or be one-sided.
One, using artificial enhancers takes the humanity away from us. I think of the movie Terminator Salvation, when the man was a cyborg. What in the world would we be if we started to become more robotic? Definitely not human. If we really wanted to become more enhanced, then we should evolve what we already have.
Without a doubt, since the very early days of man, when man made the sharp rock and the spear, having technology was inevitable. I think technology outside ourselves is fine, but something in us, like a chip, is wrong. It would also be like spitting in God’s face.
Also, as we start to become more robotic, we will probably lose our emotions, which are very important.
I am very open to criticism.
Taking the piss out of Bostrom for presenting surveys showing that people don’t want to die strikes me as missing the wider point. Critics of transhumanism often suggest that greatly extended lifespans are undesirable, and in some sense alien to what it is to be human. They characterise transhumanism as a radical and toxic break with that. Bostrom is trying to position himself as a non-radical here; he is pointing out that even the very old and sick continue to value living, so the transhumanist project is only the attempt to make that widely shared value a practical possibility.
Of course at one level it is trivial to point out that people don’t want to die, but I think it is necessary in order to defend against critics of transhumanism.
Interesting to see the discussions here picked up and carried on elsewhere:
Steve Omohundro Talks Technology for a Better World – Podcast #134
http://www.bulletproofexec.com/steve-omohundro-talks-technology-for-a-better-world-podcast-134/
Talking Nerdy (And Ethically) with Cara Santa Maria
http://www.pointofinquiry.org/talking_nerdy_and_ethically_with_cara_santa_maria/
much respect
I did think that this episode raised the whole question of when and on what grounds we start to consider different species as having personhood, and the rights of animals (and by extension the rights of people with various disabilities or mental states that make them unable to communicate). Brin seems to suggest that communication (i.e., that we can understand them) means personhood – so a current dolphin has no rights but a talking dolphin does. Similarly, I can turn off the life support of a person that cannot communicate but keep it on if they can talk. This seems a bit unfair: there might be interior states of consciousness and awareness that are very different from mine but real and valuable to those that experience them. I might not know what it’s like to be a dolphin (qua Nagel), but I can think that the dolphin has some claim to joining the personhood club based on things other than being able to have a chat.
I think we have some fuzzy notion that self-aware creatures (“I know I am an I”) are persons and that we can tell if they are aware either through them telling us (of course, how do I know they are not just saying that?) or through some evidence of purposeful, self-aware behavior. However, I think there might be two problems: (1) what counts as communication; and (2) what counts as evidence of self-aware behavior. My cat cannot speak, but he and I communicate (food, attention, go away, come here, help, open the window, make the other cat go away). He also seems to show behavior that indicates self-awareness (or is that just me?). Asking him to communicate in my terms, or forcing him to via Brin’s uplift, seems wrong; it seems to deny him his essential “catness” and to impose a human mental state as the passport to being a person. Like Rocket Raccoon says in the Guardians of the Galaxy movie, “I didn’t ask to be this way.”
Anyway – just confused morning thoughts. But perhaps a show on animal rights? Are animals autonomous individuals? Do they have personhood? Does this all go back to your early consciousness episode?
Some interesting transhumanism-related podcasts, for those interested:
https://www.singularityweblog.com/zoltan-istvan-the-transhumanist-wager-is-a-choice-well-all-have-to-make/
http://futurethinkers.org/transhumanism-technological-evolution/
http://smartdrugsmarts.com/philosopher-david-pearce-transhumanism/
Hmm, are you guys going to invite said Bostrom onto the podcast? I feel as though any points left undefended just lie as speculation and boasting of your own beliefs, basically a circle jerk reinforcing your preconceived notions. Why not invite Bostrom on the show? This seems like a terrible format; I listen to a lot of podcasts, and it becomes so uninteresting when you only pick apart someone’s work without their presence at all. Very one-sided and boring.
You are lazy
http://partiallyexaminedlife.com/2015/01/06/ep108-nick-bostrom/
I just wonder, it being 2016, how dismissive the PEL crew is of Bostrom. Enjoying the podcast as I am, the constant smirking was a little bit much.
You know we had him on later, right? http://partiallyexaminedlife.com/2015/01/06/ep108-nick-bostrom/. I find the topics he writes about pretty interesting myself.
I do now. And after listening to episode 108, I do see my comment as unnecessary. Even though you “treated it rather casually” (108, 36:55) here, the 108th episode more than makes up for it in my eyes.
Cheers & thanks.
Thank you for the interesting episodes. I didn’t really like the Bostrom episode because there wasn’t enough of you guys and his adoration for himself was a bit grating.
My question/thought here is: when considering things like multiple copies of yourself, it always seems like there is no accounting for things like attachment theory. When I had my kids I got really into reading Winnicott and the idea of the good-enough mother, the idea being that we gradually fail our children as they are emotionally able to handle it. If you are a mother you know that this is a really difficult thing to wrap your brain around. (Granted, I had/have some attachment issues, so I may be biased.) Determining when it’s OK to walk away and let your baby cry because you cannot always be with them is tough. But in this world, a mother would never have to fight such biological urges. She could make multiple copies of herself, and the child would never experience that anxiety, be it the anxiety of separation or the very difficult task of learning to be alone with oneself. This all seems to play an enormous role in the development of self and in becoming an adult, the important task of separation and individuation, etc. It’s like these books take a variable that is so huge, change it, and forget that there are huge implications for the outcome. I have no idea what they would be, and perhaps I am way off, because he discusses this and I didn’t read the book (and probably never will). Is that just a thing in science fiction? Is it a mischaracterization of the point of the genre to apply such logic to it? I like thinking about things in the way that a lot of science fiction does, but I don’t take it to be philosophy, because the arguments fall apart or become uninteresting to me after a while due to this lack of thoroughness (is that a word?).
Anyway – I also wanted to add that it would be GREAT to have Dylan’s brother (Brian?) on again any time, because he represents me: a person who is interested in thinking about things differently but doesn’t have a philosophical background. I don’t know, I guess it’s nice to have a sanity check every once in a while of someone saying… WTF, why does he even need to say this?! or something along those lines. It would be nice to bring some universality (i.e., why this or that idea is important to life as a whole, or perhaps what it has led to or influenced that is actually useful; I’m really thinking of the mathematics/linguistics people here).
You guys are awesome.
Forever a groupie.
I meant the Brin episode (90) is the one I didn’t love; I wrote Bostrom by mistake. It was still interesting though.