[Brad is a frequent contributor to our Facebook page, so we invited him to post on the blog - welcome him!]
I found this to be an interesting video which relates to both the Heidegger and Merleau-Ponty episodes. In the video, Hubert Dreyfus discusses Heidegger, Merleau-Ponty, and the philosophical implications for artificial intelligence. Dreyfus has long been a critic of AI and has often cited Heidegger and Merleau-Ponty as offering important phenomenological insights into AI’s philosophical underpinnings.
Dreyfus discusses how human expertise depends primarily on practical coping skills and a basic engagement with the world, not on some internalization of rules. I think he’s spot on. Practical knowledge, being more fundamental than theoretical knowledge, need not even rise to the level of consciousness.
Merleau-Ponty is mentioned as being significant for calling out that the body plays an essential role in our being-in-the-world. Whereas the philosophical tradition has always taken the body to be something that gets in the way of reason and the intellect, Merleau-Ponty takes it to be crucial. Dreyfus goes on to talk about his book, the internet, and how the past failures of AI were based on mistaken philosophical presuppositions. [The video is in two parts; if you don't get a YouTube link to Part II at the end, you can find it here.]
-Brad Younger
Some of Bert’s related papers (including thoughts on Foucault) are at http://socrates.berkeley.edu/~hdreyfus/html/papers.html; his APA address is quite good.
This is the abstract from Rodney Brooks’ 1987 paper “Planning is Just a Way of Avoiding Figuring Out What to Do Next”:
“The idea of planning and plan execution is just an intuition-based decomposition. There is no reason it has to be that way. Most likely in the long term, real empirical evidence for our systems we know to be built that way (from designing them like that) will determine whether it’s a very good idea or not. Any particular planner is simply an abstraction barrier. Below that level we get a choice of whether to slot in another planner or to place a program which does the right thing. Why stop there? Maybe we can go up the hierarchy and eliminate the planners there too. To do this we must move from a state based way of reasoning to a process based way of acting.”
I think Brooks’ subsumption architecture is conceptually very Heideggerian: the robot’s competence lives in layered, reactive couplings with the world rather than in an internal model it reasons over. And it fits, time-wise, with the failure of early strong AI at MIT.
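For anyone who hasn’t run into subsumption before, here is a minimal sketch of the idea. This is not Brooks’ actual implementation (his robots used networks of asynchronous augmented finite state machines wired together with suppression and inhibition links); it’s just a simplified, single-threaded Python approximation, with made-up sensor names, to show what a “process-based way of acting” without an internal model can look like.

# A toy subsumption-style controller. Each layer maps raw sensor readings
# straight to an action; there is no world model and no planner.

def avoid(sensors):
    # Survival layer: back away from anything too close.
    if sensors["proximity"] < 0.2:
        return "reverse"
    return None  # no opinion; let another layer act

def seek_light(sensors):
    # Goal-ish layer: steer toward the brighter side.
    if sensors["light_left"] > sensors["light_right"]:
        return "turn_left"
    if sensors["light_right"] > sensors["light_left"]:
        return "turn_right"
    return None

def wander(sensors):
    # Default layer: keep moving when nothing else fires.
    return "forward"

# In this simplified arbitration, earlier layers subsume (suppress) later
# ones: the first layer that produces an action wins this control cycle.
LAYERS = [avoid, seek_light, wander]

def act(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(act({"proximity": 0.1, "light_left": 0.5, "light_right": 0.3}))  # reverse
print(act({"proximity": 0.9, "light_left": 0.5, "light_right": 0.3}))  # turn_left
print(act({"proximity": 0.9, "light_left": 0.4, "light_right": 0.4}))  # forward

The point of the sketch is exactly the abstract’s slogan: each cycle just asks “what do I do now?” given the current readings. Nothing is planned, stored, or represented.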
You may also recall this interesting paper (it got picked up in Ars, Wired, and a few blogs) comparing the E. coli functional architecture with the Linux kernel (http://www.pnas.org/content/107/20/9186.full). The key diagram here shows an interpretation of the function networks of the DNA of E. coli and of the functions in the Linux kernel. Both can be seen as ‘process-based ways of acting’, without any internal model.
I heard a joke this week – if science is a hunt for a missing coin at night, physicists are looking under the street light, reasoning that they can actually see what they’re doing. The rest of science is looking out in the dark, where the coin is much more likely to be, but impossible to find. I suspect the Brooksian thing may be like working outside the lamplight. Building complex AI systems up from non-reasoning behaviour into more and more recognizably rational action is probably the right way to get to intelligence, but that doesn’t mean it’s not impossible.
I can’t do HTML – I assumed I could. The italics were supposed to stop after ‘does the right thing’. Also, here’s the link to the whole Brooks paper; there’s a reasonable treatment of these subjects on Wikipedia, which is how I first got onto this. http://people.csail.mit.edu/brooks/papers/Planning%20is%20Just.pdf
Thanks, Andrew! That sounds pretty interesting. I’ll definitely have to check out Brooks’ book.
what a great little talk. thanks for sharing!
FWIW, Dreyfus’ full talk is available on the UC Berkeley YouTube channel.