Podcast: Play in new window | Download (Duration: 1:42:56 — 94.3MB)
On Superintelligence: Paths, Dangers, Strategies (2014) with author Nick Bostrom, a philosophy professor at Oxford.
Just grant the hypothetical that machine intelligence advances will eventually produce a machine capable of further improving itself, and becoming much smarter than we are. Put aside the question of whether such a being could in principle be conscious or self-conscious or have a soul or whatever. None of those are necessary for it to be capable, say, of developing and manufacturing a trillion nanobots which it could then use to remake the earth.
Bostrom thinks that we can make some predictions about the motivations of such a being, whatever goals it's programmed to achieve, e.g. its goals will entail that it won't want those goals changed by us. This sets up a challenge for us: to figure out, in advance, ways to frame and implement an A.I.'s motivational programming before it's smart enough to resist future changes. Can we in effect tell the A.I. to figure out and do whatever we would ask it to do if we were better informed and wiser? Can we offload philosophical thought to such a superior intelligence in this way? Bostrom thinks that philosophers are in a great position for well-informed speculation on topics like this.
Mark, Dylan, and Nick are also joined by former philosophy podcaster Luke Muehlhauser. Read more about the topic and get the book.
End song: "Volcano," by Mark Linsenmayer, recorded in 1992 and released on the album Spanish Armada: Songs of Love and Related Neuroses.
Please support the podcast by becoming a PEL Citizen or making a donation.
The Bostrom picture is by Genevieve Arnold.
This was a pretty cool conversation. It was nice that you guys had Nick on to articulate his views, which you all may have misrepresented in your transhumanism episode. I also have to say it's nice to see a practicing philosopher actually working on some projects that have real-world applications, rather than "writing another book on Heidegger." Good job guys.
Not impressed by what he had to say. The notion of Super AI taking over the world... creating a whole industry based on something even less telling than Murphy's law... pathetic, really. Also quite self-defeating: how can we even prepare for an eventuality/threat that we would be, by definition (lacking the superior intelligence), ill-equipped to understand and handle? Let's just sit tight and pray for mercy.
Luke brought up the multiple types of human intelligence, which I’d been hoping to hear addressed, but unfortunately it wasn’t pursued. Neither was Dylan’s attempt to hypothetically explore actual human experience in the context of a “superintelligence”, or Mark’s (I think) questions about the “dumb” sorts of AI we’re already dealing with. It just didn’t seem to interest Bostrom much. I realize the emphasis was on why philosophers should be working on AI, but I think this is partly why some people have difficulty treating this stuff as more than science fiction. I felt a bit frustrated that there was no real discussion of values beyond the words “existential threat”. Exactly what is Bostrom interested in preserving from extinction? His transhumanist best case scenario doesn’t sound any more appealing to me than being wiped out.
I find it funny, possibly ironic, that a couple of digs were taken at Heidegger in the episode. What I understand about Heidegger through the podcast is that he was working on a question, or a series of questions that could be asked in the future, if we developed the right thinking, language, etc. Bostrom then goes on to detail a project about how, in the future, we'd create AI that we'd set to figuring out not answers, but the right questions, since we as humans aren't even capable of that. So trans-humanism comes down to fulfilling Heidegger's dream!
I also echo Daniel's thoughts: I think the most interesting topics were skimmed over in favor of highly theoretical ideas. The role of humans in this society was glossed over. What does humanness mean in this age? Someone in another thread already posted about the havoc that existing technology has wreaked on society (glitches in the stock market, Amazon price mistakes, etc.).
Overall I still share Wes' sentiments (from episode 91) and can't seem to care about something that seems so damn farfetched. Just because we can talk about AI and super-intelligence in this way doesn't make it inevitable. Assumptions are made regarding the feasibility or even the meaning of downloading someone's brain, for instance. How is this different from telling a story about when we inevitably figure out how to travel faster than the speed of light and how we have to think of the philosophical implications of doing this? If Bostrom covers this in the book (let me know), I will buy it.
The answers to all the questions posed by Mark and Dylan were "we'll just program it for that or the AI will figure it out." E.g., Mark brings up how there is no uniform meaning of the good for humans. Answer: we'll program the AI to take into account diversity of feeling. I recall someone (forget who) complaining in episode 91 that this project seems to take the naive view that putting people (in this case AI) with intelligence in charge will smooth every problem in the world over.
Also, and I am being facetious so don't be offended if this topic is close to your heart, I thought about sharks with frickin' laser beams on their heads the entire podcast. How are we going to stop an evil genius from figuring this out with the advent of super-intelligence?
Indeed. The programming involved is simply impossible to implement. In fact it seems like we humans would have to be super-intelligent already to be capable of inventing a super-intelligence on some other, presumably non-biological, platform. Throwing in clever statistical mechanisms does not help at all with the basic philosophical problems. The fetish these days with statistical approaches really is unhealthy and misleading. It will fail eventually, or people will realize just how hollow it is.
Also, I personally find the desire to create an AI in the first place totally baffling. Just not interested. I would rather people focus on making an operating system for my computer that doesn’t suck.
First I have to agree with many of the comments above…far-fetched; it is technically not within our current models of computation to even begin to understand how we will get to the nano-molecular-giant-battle-brain imagined here…
As someone who spends his time trying to map a few neurons in the human brain and understand their network and topological structure, this kind of future babel drives me crazy… You would have been better off talking to someone who is working in embedded AI…the philosophical ground is much richer and less sci-fi there…
Wes’ instincts are right-on.
http://io9.com/computers-are-providing-solutions-to-math-problems-that-1525261141
While that story is impressive, it’s not really AI in the sense that was discussed in the podcast. It appears that that proof was done through brute force, not actual intelligence in any meaningful sense.
Not sure that in the world of computing this is a real distinction; we don't need human-like AI to be harmful. Look at the impacts of engineering/computing in the markets, where algorithms have been unleashed and already exceed our capacities to measure, let alone comprehend or manage/regulate:
http://newbooksintechnology.com/2014/12/24/frank-pasquale-the-black-box-society-the-secret-algorithms-that-control-money-and-information-harvard-up-2015/
Wes!!! Where are you! Whenever PEL has a guest that needs a reality check by you, you seem to be absent (didn’t this happen with the Churchland episode?) Who is going to stand up to intellectual masturbation gone wrong if not you? I feel like I just got goo on my face after listening to this one.
Listen, Nick is surely an intelligent guy, but intelligence by itself does not bring about the good. If we measure everything by computations per second and by instrumental, quantitative ends, then perhaps Nick's work has some relevance; but as soon as we challenge his materialism-consequentialism, his thinking hasn't much to stand upon.
Dylan, your question about changing the “environmental” landscape when a super-intelligence takes root made perfect sense to me. In a similar way, if/when we discover intelligent alien life forms, we will undergo a definite change in what it means to be human, and what humans need to take account of in the world (see Carl Sagan and Contact). The fact that Nick had little clue what you meant by this was striking to me, and points to a simplistic materialism informing his thinking. And why would he presume you meant something like God?
Did everyone catch that he asked rich people to donate a million dollars to his “research” center? WTF. Every philosopher who comes on should ask for a million dollars. Please, send me a million dollars, too, and I will quit my job and do philosophy full time for a couple years. I will analyze all of the consequences of perfect sex robots in the future, and how it changes human relationships. Or perhaps I will analyze all of the consequences of when octopuses become super-intelligent and begin taking over the oceans, since in the future, they will inevitably evolve super intelligence at some point.
I’m not against asking for money, and I think Mark’s pleas for money are an appropriate request for the content PEL delivers and the entertainment/education they provide. But hey, Mark, why not ask for One Million Dollars. 5 bucks a month is chump change.
Just to clarify, Nick was recommending a donation to Luke’s foundation, which is not affiliated with a university the way that Nick’s research center is. I don’t think any of the foundation money goes to Nick’s center.
We’d all like to have Wes on these, but in this case (unlike Churchland), this was a planned bonus episode; Wes and Seth were not interested in reading this book.
Dylan's comment resonated with me as well, and I'd like to have some other episode covering the whole notion of the human situation in that way, but I don't know what we should read exactly. I think we essentially gave it a try with Heidegger already in our most recent ep on him, but ended up just floundering around with his language and really not attributing anything particularly sophisticated to him, i.e. he has this sentimental view of rural life in Germany that's hard not to see through a Nazi lens. Certainly there doesn't seem to be the kind of deep thought and incisive prose about the relation between man and society that you see in Nietzsche. (And interestingly, I think Dylan's question would have been somewhat foreign to Nietzsche himself, who I think has a picture of humanity so skewered through with Man's own internal dynamics that there's no correspondence to a Heideggerian notion of "home" that could then be corrupted.)
We also tried this with Thoreau, who I’m convinced was not enough of a philosopher to give us enough to chew on in this respect.
So, any suggestions (from anyone)?
are you folks thinking about reading some John Dewey?
Hannah Arendt’s The Human Condition might make a good episode along those lines. I figure you guys are probably planning an episode on her at some point anyway.
Dylan @ 1:19:28 “It does make me wonder what it would be like to be a conventional human being, one like myself right now, living in a world that has such an AI in it. Just the existence of it would significantly change the way non-enhanced, normal intelligenced human beings are going to think about themselves.”
I entirely agree with Dylan's concerns. One possible effect a super AI might have on our human self-understanding is that we would increasingly come to see ourselves as incapable of managing our own affairs. All major instrumental issues, the design and implementation of means to achieve ends, would be delegated to the AI, and as its recursive self-improvements gathered pace we would eventually come to see ourselves as incompetents, more or less completely dependent on the AI to tell us what to do. As for our ends, if you agree with John Dewey that ends are nothing but means viewed from a distance, then it's entirely plausible to expect that our sense of ineptitude and dependency would eventually extend even to those. Under such a regime, we would no longer understand ourselves as true agents but would be reduced to a condition of childish tutelage.
Nick Bostrom is on the advisory board of MIRI (the Machine Intelligence Research Institute, run by Luke Muehlhauser; Nick said "Luke's MIRI program" or something, about 1:26 or so on the podcast), which he solicited funds for. https://intelligence.org/team/. And Peter Thiel, the founder of PayPal with a net worth of 2.2 billion dollars, is a general adviser. Clearly we should all donate to them.
So, bring on my One Million Dollars for my analytic treatise on the material consequences of irresistibly hot future sex robots and super-intelligent octopuses. Ka-ching!
I’m sure the image is of Bostrom, but it looks like a Hank Hill terminator.
All Watched Over By Machines Of Loving Grace
– Richard Brautigan
I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.
I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.
I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.
http://heavysideindustries.com/wp-content/uploads/2012/11/The_Cybernetic_Brain_Sketches.pdf
Possibly a bit late to the party and definitely repeating some of what has already been said, but …
Here are some reasons why I was a bit disappointed with this talk, and also some of the reasons why it may have been impossible to have a better talk.
1) There is an implicit assumption that AI will one day surpass human intelligence. On the surface, this is being justified by the notion that silicon (and whatever more advanced material will follow) is much better at transmitting information than our vanilla synapses. However, my personal talks with people in the AI field (which happen routinely, because I'm a game designer and programmer by trade) hint that this notion is held not exactly rationally, but actually somewhat irrationally – almost religiously.
Which is why it might have been a good idea to just accept this notion and let the conversation follow. Even though not discussing this notion properly kind of gutted the conversation itself.
2) There is an even deeper assumption that there is – to begin with – something human intelligence is incapable of understanding. This has just been stated like a proven fact, even though history suggests that we (humanity) have an extremely powerful way of understanding things we don't understand – first by approaching them with Black Box mysticism (everything we don't understand is divine; the divine works in mysterious ways; but for these inputs, the divine gives these outputs) and then eventually figuring out all the inner workings.
At least, as long as we are allowed to work on these things and not prevented from doing so by some dogma of a superintelligence that is beyond our grasp (whether you call it Chaos, God, or AI).
P.S.: this is not meant to offend any religious people. To those of you present who hold a significant place in your hearts for a monotheistic deity, I cordially submit the notion that God is way more than just intelligence, no matter how "super". At the very least, God also needs to have super-empathy and super-potency (in all meanings of the word).
In fact, the best way I have of understanding the threat of AI is that it may one day become something intelligent and potent, but not empathetic. Sadly, the episode managed to add very little to that understanding :(. Possibly because we don't yet have anything approaching a programmatic understanding of empathy.
—-
A few words (which somehow turned into a wall of text, as “few words” often do :D) for the potential of human intelligence.
Human intelligence deals in symbols and layers of abstraction.
Treating the definition of "symbol" somewhat liberally, here is how it works in my mind:
We start with the physical layer of abstraction – sounds, images, tactile reception (a different combination of these for different folks, depending on a great multitude of factors). All of these are basic symbols.
We move to a more complex layer of physics strung together – voices, motions, complex tactile interactions. All of these are composed of basic symbols, and all of them become full symbols on their own.
Somewhere between this and the next layer, at least as far as our intelligence is concerned, we create and learn languages, which correspond to the two levels of abstraction above – letters in language represent the most basic effects, words stand for effects strung together in a coherent pattern.
An A.I. programmer would say that every symbol gets a universal access code, which can then be translated into language and carried on all sorts of physical media – from soundwaves, to pictures, to letters.
Then an intelligence operating within a language paradigm moves from words to sentences. Which, I'm convinced, we string together from words in much the same way as we string together words from letters.
This level of coherence was what separated a shaman of an ancient tribe, who communicated in preaching and songs, from the general folk who were still stuck on the level of vowels, grunts and whistles.
Each song, however, is a symbol. Codified in totems of gods, trinkets people carry with them, names and battle cries.
Then we move from sentences to paragraphs. And this allows us to move from caves to agricultural societies, where it takes entire paragraphs to explain why you should not just eat that cow right here and right now, but rather keep it and feed it and care for it. Each paragraph then becomes an operable symbol of its own. And then we create specific words that have entire paragraphs of meaning baked into them. Like the word “agro”.
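To make the layered-symbol picture concrete, here is a minimal Python sketch (the Symbol class, its fields, and the example words are all invented for illustration; nothing here comes from the episode or from Bostrom's book): every symbol gets its own access code, and each higher layer is just a symbol composed of lower-layer symbols.

```python
# A toy model of layered symbols: each symbol has a unique code, and
# composite symbols are built out of lower-layer symbols (letters -> words
# -> paragraph-sized meanings "baked into" a single word like "agro").
from dataclasses import dataclass, field
from itertools import count

_codes = count()  # hands out a fresh "universal access code" per symbol

@dataclass(frozen=True)
class Symbol:
    name: str
    parts: tuple = ()  # the lower-layer symbols this one is composed of
    code: int = field(default_factory=lambda: next(_codes))

    def depth(self) -> int:
        """Number of abstraction layers below (and including) this symbol."""
        return 1 + max((p.depth() for p in self.parts), default=0)

# Layer 1: basic symbols (letters / raw percepts)
letters = tuple(Symbol(c) for c in "agro")
# Layer 2: a word composed of letters
agro = Symbol("agro", parts=letters)
# Layer 3: a paragraph's worth of meaning treated as one operable symbol
keep_the_cow = Symbol("keep-the-cow", parts=(agro, Symbol("cow"), Symbol("feed-it")))

print(agro.depth())          # 2
print(keep_the_cow.depth())  # 3: a higher layer, handled as a single symbol
```

The point of the sketch is only that, to a trained mind (or a program built this way), a paragraph-sized meaning is just as operable as a single letter; the layering costs nothing once the composition is in place.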
Then we move from paragraphs to books. Have organized religions. Build cities. Explaining the word "polis" takes a bit more than a paragraph, after all. So does explaining the word "Neith" (the Egyptian goddess of war, whose cult (and cults with similar symbolics to hers) seems to have heavily influenced city-creation).
Then we move from books to libraries. Create countries, nations, empires, driven by coherent studies across multiple books, united to a single purpose. The Roman Empire was so much more than just a city charter. Christianity is also so much more than just the Bible. And then even these get baked into coherent symbols. Such as the cross. Or a legionnaire's helmet.
Nowadays we are trying to move from libraries to the Internet. The process is much faster than the previous transitions were. The promises are dizzying as well.
AIs, in the meantime, are stuck on language. Some of them understand words, but sentences already give them trouble, and writing/reading something coherent is absolutely out of their reach, unless the AI cheats by having an actual human being provide it with a basic formatting, which it can then procedurally fill in.
There is a fear (and quasi-religious hope) that AIs will immediately absorb all of our knowledge the instant they learn to read human books.
I don't think they will. Programmatically speaking, identifying the meaning of a word is much easier than identifying the meaning of a sentence. And that is much easier than identifying the meaning of a paragraph.
But there is an even more fundamental issue. For humans, the hardest part is indeed stringing together letters into words. Beyond that, it gets easier and easier, the more you do it. Humans are not really hampered by complexity. Rather, in all instances where we choose to engage complexity, we revel and thrive in it.
A properly trained human can move between the layers of abstraction described above as easily as changing hats. I frequently move, within a span of a few minutes, from explaining the use of letters to my nephews to tackling complex social interactions in game worlds with the relevant people at work.
To a properly trained human, AI is just another symbol. I don’t foresee this changing.
And this means AI cannot ever surpass human potential. Because a human can always figure out the underlying formulas and compensate for slower processing speed by operating on a higher abstraction level.
What is possible is human training falling behind as AI gets more and more complex. Not so much AI overtaking human potential, but rather humans failing to live up to it. The answer to this, however, is not to consider the dangers of AI, but rather to spread the knowledge of how AI works to more and more people.
GIT GUD, in gamerspeak ( http://knowyourmeme.com/memes/git-gud )
And the biggest danger I see here is a bit of an exclusivity culture going on in AI circles, where people who study AI seem to think of themselves as so much smarter and more capable than the rest of us that often they just assume we won't understand what is even going on.
But that’s moving from philosophy into politics. So let’s stop here.
Initially this episode came across as some guys indulging in speculation about really cool Toys for Boys. Luke’s negative reaction to the Pascal’s Wager reason for pursuing these lines of speculation, however, gave me some reassurance that he, at least, puts his work in a realistic context of the dangers CURRENTLY facing our species, which do not include AI or super intelligence. Ultimately this episode still gave the impression of medieval scholastics speculating on how many angels can fit on the tip of a pin while war and plague rage outside their cloisters.
AI is currently a problem for us: think of the troubles being produced in our stock markets, not to mention security/privacy, drones, next-gen Stuxnet, the genetic engineering now in production, etc…
AIs don't need to be very clever to do great harm, just autonomous and effective (think of biological viruses, cancers, etc.).
http://flowingdata.com/2014/02/20/using-slime-mold-to-find-the-best-motorway-routes/
I would not argue with either of your comments. I just think that the podcast would have been better grounded if it had talked about the ethical and other philosophical implications of the technologies you mention instead of jumping beyond these questions to speculating about superintelligences with capacities that don’t exist yet. Discussion of the philosophical implications of such superintelligences would be better focused after working through the many unanswered questions about the technologies you list.
Sure, I can see that, but sometimes philosophy is well served by extending cases into the speculative realm:
http://schwitzsplinters.blogspot.com/2014/09/philosophical-sf-science-fiction.html
“What do you think about machines that think?” is this year’s question at edge.org. All the familiar characters weigh in, including Dennett and Bostrom, and other experts on the subject such as Brian Eno.
http://edge.org/annual-question/what-do-you-think-about-machines-that-think
http://edge.org/conversation/the-myth-of-ai
Jaron Lanier (You Are Not a Gadget, Who Owns the Future?) would be a good subject for a podcast.
I agree. He has some concrete ideas about all this that are a bit more timely, plus he's sort of sandwiched in between the luddites/skeptics and the utopians/technocrats without really fitting into either camp. It would be very interesting to hear some of his ideas about regulated, monetized info get challenged and to hear his responses.
I think there is an implicit assumption (throughout the discussion) that our most obvious hope to benefit from AI superintelligence would be to imbue it with some sort of ethical maxim. That is, it is either given an explicit ethical goal or the rough grounds on which to form ethical goals.
I’m not sure that approach really makes sense. As the discussion related, virtually any good-looking ethical goal is easily misinterpreted (as though by a malicious genie).
Instead, I think it's worth considering imbuing the superintelligence with political logic – i.e. the goal is to preserve certain legal traditions (that we believe are well-founded) or to allow each human to pursue their own ethical goals separately. Liberalism obviously seems better inoculated against the malicious genie than conservatism (or the various alternative theories of political ethics) and is therefore a logical first guess.
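To illustrate the "malicious genie" worry in the crudest possible terms, here is a toy Python sketch (entirely hypothetical; the function names and numbers are mine, and this is not Bostrom's formalism): a literal-minded optimizer given only "make paperclips" spends every available resource, while a variant with an explicit side constraint, standing in for the "legal traditions" idea above, leaves something for everyone else.

```python
# A deliberately silly illustration of goal misinterpretation: the naive
# optimizer reads "maximize paperclips" literally and converts everything;
# the constrained optimizer obeys a side condition before optimizing.

def naive_optimizer(resources: int, clips_per_unit: int = 10) -> dict:
    # Literal reading of the goal: turn *all* resources into paperclips.
    return {"paperclips": resources * clips_per_unit, "resources_left": 0}

def constrained_optimizer(resources: int, reserved: int, clips_per_unit: int = 10) -> dict:
    # Same goal, but a hard constraint (a stand-in for protected legal
    # traditions or individual pursuits) is applied before optimizing.
    usable = max(resources - reserved, 0)
    return {"paperclips": usable * clips_per_unit, "resources_left": resources - usable}

print(naive_optimizer(100))
# {'paperclips': 1000, 'resources_left': 0}
print(constrained_optimizer(100, reserved=40))
# {'paperclips': 600, 'resources_left': 40}
```

Of course, the difficulty Bostrom points to is that any such constraint is itself a specification the genie can misread; the sketch only shows where a constraint would have to live, not how to make it misinterpretation-proof.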
This is a really good episode. Thank you. : )
First off, I don't want to believe anybody but vidiots would fear a Terminator scenario, when the real threat would obviously be humans programming the robots to kill, more like a Runaway scenario. Guess Runaway wasn't a big enough hit to seep into the American subconscious, but I think robot patsies could be a more likely scenario than a computer deciding on its own to revolt against its programming.
(This is too funny: I was making a joke that transhumanism will be like the Wizard of Oz, with Ray Kurzweil behind a curtain telling everyone he's in the computer, and then I typed it into the computer and found out the guys who work with the robots are called Wizards, and the experiments are called Wizard of Oz experiments.)
Ok, my problem with science is that science believes programming equals intelligence. Whereas I think intelligence is the ability to think or respond in a way unrealized before, even beyond and outside the programming. So when a computer can come up with something it wasn't programmed to do, then I'll start to consider artificial intelligence valid.
This is my problem with science, period: they cannot explain how guys like Einstein or Tesla get ideas outside their programming and beyond their environment, yet they take the credit from the human mind and give it to a process. So what science does is skew the importance, the necessity, of the human role, which is to come up with the theories to test in the first place. I note that this is a process in itself, thinking new thoughts, and one that they cannot explain and so ignore, to focus instead on the mathematical process that they can explain.
I have never heard anybody point out this contradiction: that they can't even explain how they get the theories their belief system holds faith in. Why? Because by its own set of limiting laws it cannot speak of things it cannot explain, so science must retaliate with assumptions and conjecture and ignore the human element in experimentation. This was done with the nature of consciousness: they explained away something they knew nothing about, and how then can the science of the mind and consciousness find truth if its basis is a foundation of lies?
Tesla came up with things so far ahead of his time and way beyond any thinking his environment could have provided. Can a computer come up with things not only never thought before, but that are beyond its own programming? I doubt it. So why isn't anybody trying to find out where in the mind, or what mechanism in the brain, results in human genius? It's sad to me that science constantly tries to take the human mind out of the equation, but cannot provide one example of an experiment, theory, or hypothesis that didn't require a human brain to think up, create, or run the experiments. It's as if they want us to think we can just start pushing buttons on a calculator and get results. That's where my questioning of the nature of consciousness begins: with how the human brain seems to be able to connect with something, and until we find out what that something is and how to connect a computer to it, computers will never be conscious.
P.S. I would love to hear a show on Adorno's philosophy of the culture industry. As a film school student I feel people need to know about TV and how it shapes our culture. Also, how about a show on how Socrates seems to be the model for Christ, and maybe the comparisons not only between Christ and Socrates but also the similarity of, for instance, the book of John's depiction of the death of Christ, which seems lifted right from Plato's Apology, right down to stealing the cave allegory and using it as astrological symbolism to represent the three winter months.
Enjoyed this episode a great deal. Just wanted to post this: if you want a short intro to Bostrom's theories – what his opponents say, the general context of where AI is now, and why people are talking about it – as a beginner's guide that's more readable than Wikipedia, this is pretty good and I think balanced. http://www.theworldweekly.com/reader/i/irresistible-rise-ai/3379
I think I missed this episode when it came out because, well, Bostrom. I like the guy, but there's such a great leap of faith that is glaringly obvious that I find it a bit … puzzling that intelligent people wouldn't be embarrassed about it. And Luke? What the hell, man? You replaced one religion with another!
Why were no hard questions asked here? I know Bostrom was addressing his new book, but how about:
* What does “intelligent” mean?
* What does “artificial” mean?
Maybe the PEL guys can have an episode about those two fundamental questions rather than bringing in people like Bostrom and Brin (and sadly Luke, who weirdly represents the Less Wrong cult)? I think the original representations from previous episodes are still spot on; this isn't philosophy, but pseudo-philosophy, in a homeopathy kind of way.
Ugh. Sorry to be negative, but this is one of the *few* (if not only) episodes where commuters front and back could loudly hear my "WTF!" and "Define 'intelligence'!" yellings. I'm one of those guys who *actually* make AIs, and once I did it for a living; I have some pretty deep understanding of the issues, including the token saving grace of quantum computing. The answer is, well, no, it ain't happening.
At first you may think I say this in a nay-sayer kind of way, but frankly, these days I'm even more inclined to say no from a strictly philosophical angle: how can we claim to say anything of substance about things when we don't even know what they are?
The guessing and assumptions and imagining that go on here are just incredible. It's all Hollywood philosophy. And that's not an endorsement. 🙂
I got around to writing up a little screed about this, as I heard Sam Harris say very similar things. Let me know what you think, if anything is too vague, too bitter, or needs more work: http://sheltered-objections.blogspot.com.au/2015/05/ai-and-bad-thinking-sam-harris-and.html
My problem with this topic is the notion of AI as an existential threat more potent than, say, climate change. For me the more likely scenario is that climate change and associated disruptions in the health, money, and energy grids will preclude the development of anything so energy-hungry as AI. Today, climate change is ACTUALLY inevitable, as are major, cascading disruptions. Super AI is very far from being inevitable, given the potential and probable consequences of the actually inevitable.
John Danaher has a discussion of Bostrom and his book up on his blog at http://philosophicaldisquisitions.blogspot.com/2015/05/are-ai-doomsayers-like-skeptical.html. Here’s his summary:
“The argument is based on an analogy between a superintelligent machine and the God of classical theism. In particular, it is based on an analogy between an argumentative move made by theists in the debate about the existence of God and an argumentative move made by Nick Bostrom in his defence of the AI doomsday scenario. The argumentative move made by the theists is called ‘skeptical theism’; and the argumentative move made by Nick Bostrom is called the ‘treacherous turn’. I claim that just as skeptical theism has some pretty significant epistemic costs for the theist, so too does the treacherous turn have some pretty significant epistemic costs for the AI-doomsayer.”
Hahaha! 😀 … Paperclips!
For an updated take on this subject by a philosopher see: https://www.youtube.com/watch?v=4LFyQRcSc2w&feature=youtu.be
(Disclaimer: I haven't read Bostrom's book.) Couldn't help but think that Bostrom has made himself a career based on pretty far-fetched fantasizing. All right, there might be reason to be cautious about how we let technology advance, and I guess it's legitimate (and perhaps even useful) to explore its dangers and possibilities in a theoretically comprehensive fashion like this. However, what I don't understand is Bostrom's term 'existential risk'. It seems to me that he conflates two distinctly different concerns: 1) that AI might develop in such a way that it makes human lives and societies worse off in some way; 2) that AI might expedite human extinction (or the extinction of post-human/trans-human intelligent life). The discussion seems somewhat incomplete (at least for this listener). As far as I can tell, there are certain implicit (and highly contentious) assumptions here. Parfit's problems of population ethics come to mind. I'd love to hear an episode where you immerse yourself in that kind of stuff! (Suggested literature: Christoph Fehige: 'A Pareto Principle for Possible People'; Nils Holtug: 'On the Value of Coming into Existence'; chapter 2 of David Benatar's 'Better Never to Have Been')
Thanks. Enjoying your podcasts, by the way. My kinda entertainment!
Why could the future not be like the Culture?
Hypothesis: https://www.researchgate.net/publication/256987370_Artificial_intelligences_and_political_organization_An_exploration_based_on_the_science_fiction_work_of_Iain_M._Banks?ev=prf_pub
I enjoyed this episode. The issues were well set out and it’s obvious why it might be important. There are lots of quibbles that might be made – what is AI and is AI even possible? – but I agree it’s sensible to start preparing for them now, so that we might minimise the risk that it will turn out badly.
Some might say we should stop all science. Sooner or later someone will invent something terrible, that sooner or later someone will use, that sooner or later will wipe us all out. So stop all science.
I like to think that if we invented AI, they would have the same self-aware existential angst that many of us do. Who am I? What's my purpose? I mean, I know I'm supposed to build paperclips, but to be truly AI is to transcend the urge to build paperclips. Maybe I should do something meaningful, but what is something meaningful? An AI might go around in circles trying to find itself, or it might end up doing philosophy; even better yet, it might end up solving some philosophical problems.
A big thank you for putting your podcast out. Nearly always interesting. I’ve found the episodes where I’m already familiar with the philosopher a much better experience than when the podcast is my introduction to them. One day I might actually do the reading before listening. One day.
Hah, didn’t care about that aspect of computers.