Just grant the hypothetical that advances in machine intelligence will eventually produce a machine capable of improving itself further and becoming much smarter than we are. Put aside the question of whether such a being could in principle be conscious or self-conscious or have a soul or whatever. None of those are necessary for it to be capable, say, of developing and manufacturing a trillion nanobots which it could then use to remake the earth.
Bostrom thinks that we can make some predictions about the motivations of such a being, whatever goals it's programmed to achieve; e.g., its goals will entail that it won't want those goals changed by us. This challenges us to figure out, in advance, how to frame and implement motivational programming in an A.I. before it's smart enough to resist future changes. Can we in effect tell the A.I. to figure out and do whatever we would ask it to do if we were better informed and wiser? Can we offload philosophical thought to such a superior intelligence in this way? Bostrom thinks that philosophers are in a great position for well-informed speculation on topics like this.
End song: "Volcano," by Mark Linsenmayer, recorded in 1992 and released on the album Spanish Armada: Songs of Love and Related Neuroses.
The Bostrom picture is by Genevieve Arnold.