Nick Bostrom's Superintelligence is a book that imagines how we should go about dealing with a super-AI, should it come about. The thesis of the book seems to be this: if a superintelligence were to be constructed, there would be certain dangers we'd want to apprise ourselves of and prepare for, and the book is, essentially, a précis of how to deal with some of those risks. Assuming, for the sake of argument, that the thesis of the book is correct, what interests me is how a superintelligence could be constructed. If someone wanted to construct a superintelligence, it seems to me they'd have to understand human intelligence at a deep level, and I doubt we'll ever come to understand how intelligence works.
Bostrom defines a superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” He believes that a superintelligence is on the way, writing that it'll probably arrive within this century. He also thinks that the different forms that superintelligence could take are practically equivalent, and that they fall into these general categories: speed superintelligence, collective superintelligence, and quality superintelligence. Speed superintelligence is a “system that can do all that a human intellect can do, but much faster.” Collective superintelligence is a “system composed of a large number of smaller intellects such that the system's overall performance across many very general domains vastly outstrips that of any current cognitive system.” Quality superintelligence is a “system that is at least as fast as a human mind and vastly qualitatively smarter.” In other words, a superintelligence could be very intelligent across the domains we care about because it's really fast, or because it works really well with the subsystems that compose it, or because it just works better within those domains than we do. On Bostrom's formulation, superintelligence is a general concept, to be amended later, and Bostrom assumes we don't have to spell it out too much right now because we'll know a superintelligence when we see it. But what of intelligence?
There's currently no universally accepted understanding of what constitutes intelligence, but it seems to be something like knowing how to do something relative to some domain. Even a concept like general intelligence as measured by an intelligence assessment can be broken down into checks for knowledge in specific domains, often assessing some combination of (1) logical and mathematical knowledge, (2) linguistic knowledge, and (3) visual and spatial knowledge. We might also add other domains of interest, those domains approximating what Howard Gardner means when he writes of intelligences, plural, or Steven Pinker when he refers to mental modules. These domains of interest might include knowledge of (4) self and others, (5) music and rhythm, (6) motion, (7) morality, and (8) how the world works. These domains would bear a few qualities: (a) they would be found universally across human beings; (b) abilities with respect to these domains could vary widely from individual to individual; but (c) these abilities would ultimately be normally distributed, that is, most people will be average in their use of such knowledge and few will be exceptionally low or high in ability. If these were the domains of interest and they had these sorts of features, then anything that outperformed humans in all of them would constitute a superintelligence.
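To make that criterion a little more concrete, here's a rough sketch of my own (purely illustrative: the domain labels, the IQ-style units, and the candidate profile are stand-ins, not anything Bostrom or Gardner proposes) of what normally distributed abilities across domains, and a system that outperforms every sampled human in all of them, might look like:

```python
import random

# Illustrative only: the eight domains sketched above, with human ability in each
# domain modeled as a normal distribution (mean 100, sd 15, IQ-style units).
DOMAINS = ["logical-mathematical", "linguistic", "visual-spatial", "self/others",
           "music-rhythm", "motion", "morality", "folk physics"]

def human_profile():
    """One person's abilities: normally distributed in every domain."""
    return {d: random.gauss(100, 15) for d in DOMAINS}

def outperforms_all(candidate, population):
    """Superintelligence test: greater ability than every sampled human in every domain."""
    return all(candidate[d] > max(p[d] for p in population) for d in DOMAINS)

population = [human_profile() for _ in range(100_000)]
candidate = {d: 500.0 for d in DOMAINS}        # a made-up "vastly superior" profile
print(outperforms_all(candidate, population))  # True
```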
The only way to know how intelligence works is to know how the subsystems work. And although there's progress in these domains, we don't have anything like a rich explanatory framework for these subsystems such that we could recreate them—apart from having a child (and even then we don't know how the child's subsystems work; otherwise raising a child might be much easier). Bostrom acknowledges that programming an entity with an intelligence is a seriously difficult feat on the engineering side of things:
[A]ccomplishing even the simplest visual task—finding the pepper jar in the kitchen—requires a tremendous amount of computational work. From a noisy time series of two-dimensional patterns of nerve firings, originating in the retina and conveyed to the brain via the optic nerve, the visual cortex must work backwards to reconstruct an interpreted three-dimensional representation of external space. A sizeable portion of our precious one square meter of cortical real estate is zoned for processing visual information, and as you are reading this book, billions of neurons are working ceaselessly to accomplish this task (like so many seamstresses, bent over their sewing machines in a sweatshop, sewing and re-sewing a giant quilt many times a second).
Besides showing what a great science writer Bostrom is, this passage also suggests the immense difficulty of programming visual perception. He writes, “The main reason why progress has been slower than expected is that the technical difficulties of constructing intelligent machines have proved greater than the pioneers foresaw.” But he's optimistic, continuing: “But this leaves open just how great those difficulties are and how far we now are from overcoming them. Sometimes a problem that looks hopelessly complicated turns out to have a surprisingly simple solution (though the reverse is probably more common).”
Let's assume this is correct: that we have those eight subsystems I listed above, and probably more. If we also assume that the subsystems operate computationally, we'd have to figure out how the algorithms are carried out in the physical plumbing of the brain, assuming we also think these operations take place mainly in the brain. Some headway has been made on linguistic knowledge through the likes of Noam Chomsky, and on vision through the likes of David Marr, and certain forms of symbolic logic and axiomatic systems have been created to account for logical-mathematical knowledge, to name a few examples. But to build an artificial intelligence proper, if the goal is to mirror something like the way human beings think and behave, you'd have to program it with principles or algorithms that approximate those human beings use when they think and behave. Nothing developed so far, however, amounts to anything like that kind of rich computational understanding. Nobody has been able to reduce psychology, for instance, to a handful of principles that we could program into a computer, nor has there been any deep investigation of the principles that govern our moral knowledge, apart from taxonomies of which moral concepts are activated in certain ecologies.
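To give a sense of what “programming an entity with principles” means at even the most trivial level, here's a toy sketch (my own, with made-up facts and rules) of the kind of symbolic, axiomatic machinery we do have for logical-mathematical knowledge: a mechanical application of modus ponens. Notice how far this sort of thing is from anything that could capture moral or social knowledge:

```python
# A toy forward-chaining engine: the kind of symbolic, rule-based machinery that
# exists for logical-mathematical knowledge, and nothing like what would be
# needed for, say, moral knowledge or knowledge of self and others.
def forward_chain(facts, rules):
    """Repeatedly apply modus ponens: if 'p' is known and (p -> q) is a rule, add 'q'."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [("it_rains", "ground_wet"), ("ground_wet", "shoes_wet")]
print(forward_chain({"it_rains"}, rules))
# {'it_rains', 'ground_wet', 'shoes_wet'}
```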
My own view is not that there couldn't be such covering laws or principles, but that human inquiry is inherently limited, such that we can't even investigate some domains properly. Some things we can conceive of but have no way of knowing what they would look like. For instance, we're pretty good at imagining three spatial dimensions—forward-backward, up-down, left-right—and we can even imagine a fourth: clockwise-counterclockwise. But try to imagine a fifth spatial dimension. Of course, it's just my hedge that we might be near the end of human inquiry in some of these domains. Smart people like Bostrom, on the other hand, are hopeful. And maybe he's right. Maybe it's not the end of inquiry. Maybe the task just requires people who are more intelligent in one of those domains. But as for me, I don't foresee progress in those domains reaching the point where we could understand how they would integrate and be able to program an AI with them.
-Billie Pritchett
The–I’ll call it “super”–AI topic is always pretty fascinating, and rarely does one argument gain much more approval than another. It’s just one of those endless debates. But I think you point out a seemingly fundamental contradiction–or, to a lesser extent, issue–when considering the super AI problem, i.e. how can an intellectually inferior being create something intellectually superior to its creator in all domains? It’s like making lemonade without the lemon. I think, perhaps, it is possible to create an AI with vastly superior mathematical and computational abilities that operates within them at much faster speeds than humans (speed superintelligence + mathematical knowledge), but I am skeptical about other domains. Humans do not only claim intelligence in easily comparable domains; if we did, education would be rather dull and even more ineffective than the American system already is. Things like mathematics, chemistry, and physics are such comparable domains. However, creating an AI with vastly superior intelligence in domains such as critical thinking, ethics, decision making, philology, liberal arts, performative arts, and effective rhetoric is hardly down to a comparable “science,” if you will. And the latter domains, being of highly subjective and differing qualities, endow AIs with the same issues the creator has. It’s not that one such domain is more important or crucial to intellect than another, but that as long as certain domains cannot be given a simple value judgment, and thus cannot be passed on to an AI without similar problems, the sort of super artificial intelligence partially described in Bostrom’s piece seems unlikely.
Good stuff! I think there is too much AI optimism behind all this AI pessimism. I am certainly not an expert in this field, but to me what appears strange is that attempts to build AIs always center around very specific computing tasks, such as “finding the pepper jar in the kitchen,” as in the Bostrom quote above. As if we could build a mind-like AI from scratch to do intelligent things, instead of building a learning system that would be stupid when it was first created, but that would start learning and reducing its own uncertainty about the world (like the human mind learns to do). If there is a way to build human-like intelligence, the system would have to be able to process (or productively hide) the paradoxes that it confronts in its environment. I think “second-order cybernetics” is on point when it considers the difference between trivial and non-trivial systems, and how all “observing systems” are non-trivial as they operate with double closure (Heinz von Foerster, Niklas Luhmann, etc.), i.e. produce new distinctions from past distinctions. A mind-like system would have to be constantly oscillating between self-reference and other-reference, and thus create its own uncertainty and unpredictability. But then there’s the question of how we can make this process emerge without the (perhaps) necessary environment of brains and neurons. I think the first step would have to be to build machines that operate on analogies (I think Hofstadter is onto something here) instead of making machines that just process “things as themselves,” with fixed categories. But, indeed, this kind of system would make the dangers of AI very real.
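(To show roughly what I mean by a system that starts out stupid and reduces its own uncertainty, here is a minimal sketch, assuming nothing more than Bayesian updating over a toy coin-flipping world; the hypotheses and observations are made up:)

```python
import math

# Toy "learning system": starts maximally ignorant about a coin's bias and
# reduces its own uncertainty (entropy) as observations come in. Illustrative only.
hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]                 # possible biases of the coin
belief = {h: 1 / len(hypotheses) for h in hypotheses}  # uniform prior: "stupid" at birth

def entropy(b):
    """Uncertainty of the current belief state, in bits."""
    return -sum(p * math.log2(p) for p in b.values() if p > 0)

def update(belief, observation):  # observation: 1 = heads, 0 = tails
    posterior = {h: belief[h] * (h if observation else 1 - h) for h in belief}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

print(f"initial uncertainty: {entropy(belief):.2f} bits")
for obs in [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]:   # a mostly-heads world
    belief = update(belief, obs)
print(f"after 10 observations: {entropy(belief):.2f} bits")
```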
One of the problems I find with superintelligence in an AI is the gap between human and artificial decision making. A computer will always act on how it is programmed to react; there is no consciousness from which any other outcome is derived, so the idea that an AI could essentially take over the process of its own evolution, separate from any human input, and begin to self-replicate a smarter version of itself is highly unlikely. Are we able to create something more intelligent than ourselves? I don’t think so, but it is interesting to think of a combination of systems developed on the model of the smartest human beings in each area – therefore, essentially creating a combination which would surpass that of a human being. However, the AI’s inability to act randomly or on any conscious level contradicts referring to it as artificial intelligence, because it is essentially not intelligence.
Having said all of this, it is very worrying to think of how weaponised systems are being created that work independently of any human control. An automatic killing machine could be disastrous for the human race; I read a very interesting article in which a Professor Mark Bishop was interviewed, about how our dependence on AI could have grave consequences. Furthermore, a machine that could make decisions for itself effectively mimics consciousness, and I don’t think it’s impossible for us to create a machine that could in essence mimic ourselves in having to make decisions dependent on certain criteria. This could perhaps be the start of a progressive AI. It would also be short-sighted to use our own perceptions of intelligence as a base from which to review computerised systems, which could quite possibly progress in other ways that we would not necessarily think of.
https://intelligence.org/blog/
Hello, everyone:
Thank you for your responses.
Jordan: Yes, the basic point, as it seems to me, is that unless we understand how intelligence (or intelligences) functions in human beings or other animals at the computational and algorithmic levels, it’s very unlikely that we’ll be able to program a computer similarly. The only way out of this trap is to think that we could somehow do this all accidentally, which, in my view, would be a serious failure to take seriously how hard it would be to create intelligence.
Antti: Regarding the ‘very specific computing task’ you mention, like finding a pepper jar: although it may seem simple from the outside, it’s actually a very complicated task. In order to successfully retrieve a jar or anything else, an AI would need a memory system, something approximating a perceptual-spatial system, a kinesthetic system to move toward the thing, and so on. If it seems like a very specific task, that’s only from the outside, because we’re only taking the behavior into account. But to really get into the Black Box that is the mind/brain and how it works, these sorts of subsystems have to be seriously understood. As I mentioned to Jordan, not to do so is to fail to take seriously how immensely complicated these subsystems (or intelligences) really are.
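Just to make the decomposition vivid, here’s a caricature in code (entirely my own sketch; every stub stands in for a subsystem nobody knows how to actually build, and the function names and return values are made up):

```python
# A caricature of the "find the pepper jar" task, decomposed into the subsystems
# mentioned above. Every function body is a stand-in for an unsolved problem.
def recall_last_known_location(obj):        # memory system
    return "kitchen shelf, second from top"

def build_spatial_map(retinal_input):       # perceptual-spatial system
    return {"pepper_jar": (0.4, 1.2, 0.3)}  # a 3D position reconstructed from 2D input

def plan_and_move_to(position):             # kinesthetic/motor system
    return f"reach toward {position}"

def find_pepper_jar(retinal_input):
    hint = recall_last_known_location("pepper jar")  # memory narrows the search
    scene = build_spatial_map(retinal_input)         # perception reconstructs 3D space
    print(f"searching near: {hint}")
    return plan_and_move_to(scene["pepper_jar"])     # motor system executes

print(find_pepper_jar(retinal_input=None))
```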
Adam: On your point about the difference between humans and AI, in some respects I am optimistic. This is just a hedge on my part, but humans and AI I think would in principle have similar programming. I actually believe that the failure to program an AI with all the intelligences that we have would be more a commentary on the scope of our ability to do rational inquiry and engineering (since we’re only human beings, after all) than on the way the processes and algorithms actually work. But I could be work.
Sincerely,
BP
“Could be *wrong,” I meant. Tried to edit the comment but it wouldn’t work. Probably just the AI revolting.
we absolutely do not have to understand how the human brain works to mimic what humans do.
in fact, human beings mimic one another without understanding how their brains work at all. this is the beauty of the brain: it is a machine that operates ‘intelligently’ without understanding itself.
the complexity of the brain is that it seeks to understand itself. not that it needs to. much of what passes for our understanding and labels of ‘intelligence’ and ‘abstract’ thought is itself not very well embedded in any real understanding of the neurosciences.
while we have come a long way in the neurosciences, we remain CENTURIES away from understanding the brain.
and yet, engineers are still free to build brain-mimicking software and hardware. the pseudointellectual class of ‘transhumanists’ and futurists that read about electrical engineering and AI software and predict that someday computers will be smarter than humans have no clue what they are talking about.
unfortunately, like most debates, things get dumbed down to a dichotomy of ‘for or against’ a proposition. there is a third way.
consider this. software and computers are one of the major focuses of technology and military spending and research worldwide. as many note, ‘progress’ can and will continue to be made. just because these computers will never be biological in nature, and thus never EVER DO what the brain in fact does, does NOT mean the computers won’t become better at TRICKING us into believing they are ‘smart’, insofar as we recognize one another as human beings to be ‘smart’.
for all intents and purposes, it could turn out that it is MUCH easier to create ‘smart’ computers capable of superseding APPARENT human intelligence than it was for nature herself to evolve human life forms over a few hundred million years from single-celled life.
the brain itself may have been incredibly difficult to create, and is incredibly difficult to untangle and crack. but this does not necessarily mean the incredibly few and NARROW lessons we’ve come to establish from studying the brain over the past 120 years of modern neuroscience won’t provide us with the very powerful assumptions we need in order to combine them with the tools of electrical engineering and modern mathematics to create some EXTREMELY convincing devices.
these devices don’t care whether they are intelligent or not. and neither should we. the nouns and labels get in the way of the actual doing of the thing.
if we create APPARENTLY more intelligent robots over time, they get better at replacing human beings at ever more mundane tasks. this centralizes capital and creates more opportunity for reinvestment in R&D. it is, in fact, a literal arms race, as financial warfare and capital warfare seek to automate increasing portions of their economies. and it is REAL warfare, as these R&D cycles also feed into military products which are used with increasing sophistication to bring down the political (and financial) cost of warfare. more intelligent ‘drones’ don’t have to be very intelligent at all, just marginally more intelligent, to sell more than the last generation.
the iphone and smartphone dynamics create economic loops that feed into cycles which will result in trending progress for computers that are better at processing information.
where does this lead? to computers that are better and smarter. one day they could possibly be ‘super intelligent’ in the sense that they effectively trick everyone into thinking their ‘brains’ are superior to ‘human’ brains.
but they will NEVER EVER have brains (unless we find a way to biologically wire and combine neurons in in-vivo systems which interact with digital systems).
it seems pointless to discuss analog biological versus digital systems, when in fact there are already analog chips made of silicon.
the issue here is to look at bottom-up engineering trends that are predictable and not outlandish, to see where they are practically heading, and not to overblow what could or might happen.
consider if a ‘super computer’ were as smart as our caricature of the human brain himself, ‘ALBERT EINSTEIN’.
well, mr einstein didn’t overturn all of society. far, far from it. his contributions were included within the context of existing social systems. his discoveries were followed up on and realized into invention and better physics by thousands to millions of other people. the human termite mound is still the driving force contextualizing even the most important contributions from the most brilliant minds of history.
even the ‘smartest’ of super computers would still be in someone’s hand. the parody of the movie ‘terminator’ is just fairy tale nonsense, closer to the bible and superstition than to any analysis of the future of mankind’s interaction with technology.
the realistic future of AI is the superpowers of the world using it to further entrench their control over the global population in order to create searchable files on everyone and everything, which helps monetize that which can be bought and sold, and helps to destroy that which cannot.
the real question is how WE will use OUR technology. and to answer that question you really need to ask, WHO IS WE? and what is ‘OUR’? and those are political questions.
man’s technology is a reflection of his society. it is easy to say we are an evil and greedy society. but all societies have suffered from these human excesses. if our civilization is different, maybe there is something very positive about the fact that we can support people whose job it is to create this stuff.
Although you seem somewhat intoxicated, I think you bring up a few good points, though I can’t tell what your final conclusion is. You seem at once to claim that superintelligent AI is plausible–maybe even now–but then later go on to seemingly insinuate that AI intelligence can merely appear greater than that of humans, which feels more like a veiled critique of our usage of “intelligence.” Is this correct? By the end, you seem to endorse the advancement of superintelligent AIs for the sake of man’s accomplishment (it being a reflection of what humans can create), but it feels contradictory to the tone of the beginning and middle of your argument. Do you feel that humans can create an AI with abilities exactly equal to (or greater than) those of a human brain, or will it always just appear so? And finally, what could be the positives of a society wherein the lower rungs of occupation are mainly occupied by AIs, and what sort of massive ripple effects would that cause for people?
When people make the claim that “computers will NEVER EVER have brains,” I am left with more questions than answers. First of all, how could anyone know that? I think one would need to be in the future to be so sure and make such a claim.
We know well that a brain is made of meat. We also know that meat’s raw material comes from food, including water. I truly believe that we are heading in the right direction, considering the degree of research already achieved and the amount of knowledge we now have in the fields of biology, neurology, and genetic engineering.
What humans want is the ability to produce a mind that is superior to themselves: a machine that will turn the tables and show us how to do things better. This might be achieved the day a machine (which, by the way, could be made of electronic devices, flesh and blood, or a combination of both) is able to adapt to its environment, learn, and make decisions without the need for human intervention.
The key here is the ability to self-learn: to acquire knowledge and apply it intuitively. In my view this is a possibility. In fact, sometimes I have a feeling that this is being tested inside military labs and kept secret. We will know soon, in my view. I just hope we find out in a good way.