The Intuition Network, A Thinking Allowed Television Underwriter, presents the following transcript from the series Thinking Allowed, Conversations On the Leading Edge of Knowledge and Discovery, with Dr. Jeffrey Mishlove.

MIND OVER MACHINE with HUBERT DREYFUS, Ph.D.

JEFFREY MISHLOVE, Ph.D.: Hello and welcome. Our topic for this program is the nature and structure of the human mind, and my guest, Dr. Hubert Dreyfus, is a professor of philosophy at the University of California at Berkeley, and the author of numerous books, including most recently Mind Over Machine. Welcome to the program. It's a pleasure to have you here, Hubert.

MISHLOVE: You're really an expert in the whole area of artificial intelligence -- the nature of it, and the problems with it. One of your earlier books was called What Computers Can't Do. In your most recent book you focus in on the property of the human mind that's often called intuition. Really it seems that the centerpiece of your argument is that computers can't be intuitive.

DREYFUS: Right.

MISHLOVE: Let's talk about intuition, then. How would you describe intuition?

DREYFUS: Well, intuition, I think, is knowing in some area, almost immediately, what's the appropriate thing to do, without being able to give any rationalization, justification, reasons to yourself or to anybody else why you did it.

MISHLOVE: Now your opponents in the area of philosophy and in artificial intelligence argue, I understand, that this happens subconsciously, and that there are logical processes that we perhaps can't articulate, but that a computer could reformulate, at a subconscious level. You're saying something else, aren't you?

DREYFUS: Right. The usual view that's been around since Socrates, and surely since Plato in the Meno, is that we once knew the principles and rules that got us the answers when we act, but that now they've become unconscious. Computer people would say they've become compiled in some other part of the mental processor. But the idea has always been, for two thousand years, that we used to have to figure out how to do something, and though we don't consciously figure out how to do it anymore, the steps by which we once figured out how to do it are still there guiding our behavior.

MISHLOVE: And you're suggesting that those steps don't really exist -- that there's something about intuition that is non-replicable?

DREYFUS: Well, that something else happens in the course of becoming a master, an expert at anything. Not anything very sophisticated -- anything. The example that Ed Feigenbaum uses in The Fifth Generation, which is his book about expert systems, is tying your shoelaces. He says after all we had to go through careful steps in tying our shoelaces, and now we can do it without thinking, but that's because those steps have all been compiled and we're still going through the steps unconsciously. But my brother and I, who wrote about this in our book, after looking at lots of skill acquisition -- of drivers and airplane pilots and chess players -- think that as you acquire a skill you acquire experience with lots of situations, so you don't have to go back and look at isolated features and apply rules. You can say, in the case of your shoelaces for instance, "When I get in a situation like this, then I bring that lace on top," without any longer having to have a whole worked-out rule.

MISHLOVE: Not even subconsciously.

DREYFUS: Not even unconsciously. What you have subconsciously or unconsciously would be -- I'll say this first as an approximation. It's what we say in our book, which isn't quite right anymore, I don't think. But let's suppose what you need unconsciously is a whole lot of particular situations which you've stored, and when you see that the current situation is similar to some previous situation, you simply do what worked in that previous situation. And usually it will work in this situation.
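
[To illustrate: a minimal Python sketch of the store-and-match model Dreyfus describes here. The driving situations, the feature encoding, and the distance measure are all invented for illustration, not anything from the book. Note that this naive version does scan the whole memory -- the very point taken up next.]

    import numpy as np

    # Hypothetical memory of particular situations, each paired with
    # the action that worked in it. Features might be, say, speed,
    # gap to the next car, and road curvature -- all invented here.
    remembered = [
        (np.array([0.9, 0.2, 0.1]), "ease off the accelerator"),
        (np.array([0.8, 0.1, 0.6]), "brake gently"),
        (np.array([0.5, 0.9, 0.2]), "change lanes"),
    ]

    def intuitive_response(situation):
        """Do what worked in the most similar remembered situation --
        no isolated features, no rules looked up."""
        _, action = min(remembered,
                        key=lambda case: np.linalg.norm(case[0] - situation))
        return action

    print(intuitive_response(np.array([0.85, 0.15, 0.5])))  # -> brake gently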

MISHLOVE: That sounds rather mechanical to me, though. That sounds like you're scanning your memory for an appropriate template, and then responding to that. Is that what you mean by intuition?

DREYFUS: Well, again, to talk in the terms of the simple model we had in the book, you wouldn't have to scan. There are already holographic models of pattern recognition where you would have, say, fifty thousand typical driving situations stored. When the current input was like one of them, it would just pull that one out. It wouldn't have to scan all the others. You've got to have a model like that, because it turns out that these intuitive, skillful responses are practically instantaneous. If you're a skillful boxer, for instance, you don't have time to scan anything.

MISHLOVE: But the brain itself is operating. You've got billions of neurons, some of them firing thousands of times a second. The rate of potential information transmission, or information retrieval, in the brain, might be very fast, wouldn't you think?

DREYFUS: Well, if you use them like the serial, step-by-step processing that people use now to try to make digital computers be intelligent, it would be very slow, because our neurons are much, much slower than computer chips. If you had to do what the computer model says you have to do -- look at certain features, then see which situation is defined by those features, because you've stored a lot of concepts which define these situations, then look up the rule that says what to do in that situation -- it would take a lot of time, I think, with our neurons. But if you could just map the current situation onto some previously stored situations, and if you could have the most similar situation just pop out -- because you can't get two situations that are identical -- if you could do that, you'd then be using our billions of neurons to simulate some kind of holistic process, and that would be a lot faster.
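
[One worked-out version of this "pop out without scanning" idea is a Hopfield-style associative memory, in which every stored pattern is an attractor and recall is a parallel settling process rather than a serial search. A toy sketch; the patterns are made up, and the holographic models Dreyfus mentions differ in detail.]

    import numpy as np

    # Stored "situations" as +/-1 patterns (invented for illustration).
    patterns = np.array([
        [ 1, -1,  1, -1,  1, -1],
        [ 1,  1, -1, -1,  1,  1],
    ])

    # Hebbian weight matrix: each stored pattern becomes an attractor.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)

    def recall(x, steps=5):
        # Every unit updates in parallel from the whole current state;
        # the most similar stored pattern settles out -- no scanning.
        for _ in range(steps):
            x = np.where(W @ x >= 0, 1, -1)
        return x

    cue = np.array([1, -1, 1, 1, 1, -1])   # first pattern, one unit flipped
    print(recall(cue))                     # -> [ 1 -1  1 -1  1 -1]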

MISHLOVE: So it seems that somehow the magic that's going on in the brain, that you don't believe can be replicated by a computer, is this instantaneous access to the right memory.

DREYFUS: That's right, that's right. And since there's never an exactly right memory, it's instantaneous access to the memory which is most similar to the current situation.

MISHLOVE: And being able to make do with that, and sometimes come up with a brilliant solution, even with inadequate information.

DREYFUS: Well, that's probably an accident, and not intuition, I would call it. Intuition -- I didn't make that really clear -- isn't just coming up with a lucky guess and getting it right. It's repeatedly getting it right. If you have no recording in your brain of any similar situation, you have two possibilities. You could just guess, and you might get it right, but you'd probably get it wrong; or you could do what I think people do. They fall back from intuitive mastery to acting like they did as a beginner. They start to figure out what to do. That's good, to be able to figure out what to do, because even if you're a master driver, an expert at that, you'll be in weird situations sometimes when none of your past experiences will help.

MISHLOVE: I want to push you on this point a little bit, because I notice in your book you mention Einstein as an example of a very intuitive person, according to your definition, and you point out that he developed his own theories intuitively. And yet surely he was acting in an area where he didn't know; nobody had thought about these things. Einstein describes the theory of relativity as coming to him in a flash.

DREYFUS: Well, he also says that a lot of what looks like creativity is really seeing similarities that other people have missed. I don't think that in this book my brother and I have, or claim to have, understood real creativity, if there is such a thing -- really coming up with something that's not like anything that you've seen before, and which works, and works consistently. There probably is something like that, and it's not what we're talking about. What we're talking about can often look like creativity, and I'll give you an example why, from chess. We talked a lot to chess masters in the course of writing this book, and once we were talking to a player in the city who was a grandmaster, and whose wife was a master. He was saying about her, "I told her that in a certain kind of situation you should always do this, but her memory isn't very good, and at the next tournament, when she was in that sort of situation, she didn't do it." And she said, "I knew that I was supposed to do that in that sort of situation, but I didn't see that that was that sort of situation." So seeing similarities has a lot to do with the talent of a grandmaster, or a creative thinker like Einstein.

MISHLOVE: So it has to do with pattern recognition. As I recall in Einstein's case, he was contemplating what happens to somebody when they fall off the roof of a house. In fact he felt it in his muscles; he described the muscular sensation. And from that he got his insight about the theory of relativity. So there may have been a kind of pattern recognition going on there.

DREYFUS: I think that has something to do with it, but I also bet there was a lot of unconscious figuring out and trying alternatives that didn't work -- this whole idea that you can sleep on it, and in the morning your right hemisphere, or whatever part of your brain has been trying to figure it out, will report. That's what we don't have anything to say about. I don't.

MISHLOVE: In Einstein's case, I understand, he took ten years to write out his theory after he had his original insight. So what you're saying basically is that creativity aside, certainly if computers can't recognize patterns, if they can't be intuitive, they can hardly be creative.

DREYFUS: Right. But to me that's not really the problem. I don't like to talk about creativity because that's too hard. If they can't recognize patterns, they can't even understand the kind of story that a four-year-old child can understand. They can't recognize the face of their neighbor. They can't recognize as they drive down the street whether a pedestrian is about to cross the street or is standing at the corner. Just the simplest things we do require pattern recognition, not just creativity.

MISHLOVE: You've been something of a voice in the wilderness, I think, in the computer community, in the sense that what you're saying is that not only can't computers do this today -- I suppose some people in the artificial intelligence community might want to argue with you about that -- but you're saying even in the future we won't be able to accomplish these things. Is that right?

DREYFUS: That's right, or it's almost right. I'll qualify it a little bit. I've been saying since '65, when I first started studying these AI programs, that it looked to me like using the computer as a logic machine -- that is, using the computer to make inferences from lots of isolated facts or features or attributes or whatever you want to call them -- is not going to get you intelligence. That kind of bottom-up, atomistic, analytic approach isn't going to work. There's got to be holistic pattern recognition. It seems to me, from my point of view, I've been vindicated. Twenty years later, using computers as logic machines hasn't produced much. They don't have natural language understanding; they don't have simple children's story understanding. They have good chess machines, but they do it by brute force; they don't do it by pattern recognition, so that doesn't count against my prediction. It seems to me that the evidence is piling up that logic is not the right way to go about capturing expertise.

MISHLOVE: Well, how about this as a counterexample? It's been taught to me in my psychology courses that there have been computer programs that simulate the behavior of a psychiatrist, and that independent blind observers couldn't tell whether it was a psychiatrist -- and I think they've also done this for a mental patient -- whether there was a real person on the other end of your computer terminal, or just a computer. Isn't that correct?

DREYFUS: That's correct, but when you fill in the details it doesn't show much. The kind of psychiatrist that was being simulated by Weizenbaum's Eliza program was a non-directive therapist, a therapist whose job it was simply to turn around what the patient said, and put in front of it, "So you don't," or something like that. How little understanding is in it I'll illustrate. When I played it -- Weizenbaum was one of my friends at MIT when I taught there, and in fact one of the few AI people who was courageous enough to talk to me. We had to meet off the campus at his house; AI people weren't supposed to be seen eating lunch with me. But anyway, Weizenbaum and I were friends, and when I played the Eliza program, it started like this. It said, "How do you feel?" I typed in, "I'm feeling happy -- no, elated." It typed back, "Don't be so negative," because it was programmed just to pick out the "no," whenever there's a "no." Here I was being about as hyperpositive as you can get. So that's how little understanding is needed to simulate, most of the time, a non-directive therapist. The same goes for simulating completely extreme, crazy paranoids, which is the other one; I think it's Abelson's program. There it's somebody who has an obsession, and no matter what you say, it says, "But what about the Russians building up forces on the Eastern front?" You try to change the subject to American participation in the Olympic Games, and it says, "But the Russians are using this for political strategies." Again, you can simulate something so extreme, but it doesn't prove anything about what people do.
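
[The failure Dreyfus describes falls straight out of keyword matching of roughly this kind. A caricature in Python -- Weizenbaum's actual script had ranked keywords and transformation templates, but the spirit is the same:]

    # Respond to surface tokens, with no grasp of what the sentence means.
    # "no" is given top priority here, mirroring the anecdote.
    rules = [
        ("no",    "Don't be so negative."),
        ("happy", "What makes you feel that way?"),
        ("job",   "Tell me more about your work."),
    ]

    def reply(sentence):
        words = sentence.lower().replace("--", " ").replace(",", " ").split()
        for keyword, response in rules:
            if keyword in words:
                return response
        return "Please go on."

    print(reply("I'm feeling happy -- no, elated."))  # -> Don't be so negative.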

MISHLOVE: I think the interesting thing is that observers, probably less shrewd than you were, weren't able to tell the difference here.

DREYFUS: But even that is important. No observer, as far as I understand it, who was told, "This might be a computer and it might be a person; can you tell which?" failed to tell the difference. It was a kind of accident, as far as I understand it, at least in the Eliza case, where somebody without knowing that there might be a computer on the other end of this typewriter thing mistook it for a person. But no matter what, it wouldn't prove anything. If you knew that inside all it was doing was converting sentences, and when you say, "I'm unhappy about my job," it says, "Tell me, what makes you unhappy about the job?" and you know the simple rules for how it permutes that, then you know that it's not being intelligent the way people are intelligent, and that it would break down if you pushed it just a little out of its domain, which people don't. So none of that is any evidence for any success of artificial intelligence. It just shows how people can be fooled sometimes. People can be fooled into thinking phonographs are intelligent.
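
[The "simple rules for how it permutes that" were pattern-to-template transformations plus pronoun reflection, roughly like the following sketch. The single pattern here is invented and far simpler than Weizenbaum's script:]

    import re

    # Swap first- and second-person words so the echo reads naturally.
    REFLECT = {"i": "you", "my": "your", "am": "are", "i'm": "you're"}

    def reflect(text):
        return " ".join(REFLECT.get(w, w) for w in text.lower().split())

    def permute(sentence):
        m = re.match(r"i'?m (\w+) about (.+?)\.?$", sentence, re.IGNORECASE)
        if m:
            feeling, topic = m.group(1), reflect(m.group(2))
            return f"Tell me, what makes you {feeling} about {topic}?"
        return "Please go on."

    print(permute("I'm unhappy about my job."))
    # -> Tell me, what makes you unhappy about your job?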

MISHLOVE: So as far as you're concerned, artificial intelligence hasn't gone anywhere.

DREYFUS: No, artificial intelligence of the sort which is -- let's call it from now on, for the sake of the discussion, conventional AI. I sometimes call it traditional AI, but it's only been around for twenty years, so traditional seems a bit much. Conventional AI, which is using the computer as a logic machine to try to get intelligence by making a lot of inferences, I think has gone nowhere.

MISHLOVE: Would we call that cognitive simulation?

DREYFUS: Well, cognitive simulation you can't, because that happens to mean something else. But you can call it cognitivism. Cognitivism, let's say, is the view that you can understand both computers and people as information-processing devices that are applying inference rules to data.
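
[On the cognitivist picture, a mind -- human or electronic -- is a store of facts plus inference rules applied to them until nothing new follows. A minimal forward-chaining sketch, with invented facts and a single invented rule:]

    # Facts are tuples; a rule maps an existing fact to a new one.
    facts = {("man", "socrates")}

    def mortal_rule(fact):
        kind, x = fact
        if kind == "man":
            return ("mortal", x)

    rules = [mortal_rule]

    def forward_chain(facts, rules):
        """Apply the inference rules to the data until a fixed point."""
        changed = True
        while changed:
            changed = False
            for rule in rules:
                for fact in list(facts):
                    new = rule(fact)
                    if new and new not in facts:
                        facts.add(new)
                        changed = True
        return facts

    print(forward_chain(facts, rules))
    # -> {('man', 'socrates'), ('mortal', 'socrates')}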

MISHLOVE: That's the double-edged sword, I think, that you're also concerned about, because on the other hand many people are beginning to think of human beings as like computers, and you seem to feel that we're really denigrating ourselves in that sense.

DREYFUS: Yes, and it's a long tradition. I think in no other country would you get people to believe that a logic machine that could do nothing but the kind of things that one does in an elementary logic course, but lots of it and fast, would ever have perception, understanding, and so forth. I just can't believe for a minute that you would get anybody in, say, Japan or China, to believe such a thing. But in our tradition it runs so deep. Socrates already was going around interviewing experts in the Euthyphro, which is one of the earliest Plato dialogues. He gets Euthyphro, who's supposed to be an expert on piety recognition, and he in effect asks Euthyphro for his piety-recognition rule -- the AI people would say his piety-recognizing heuristics. Euthyphro can't give him the rule, but only gives him examples from the myths of pious acts. Socrates, being very, very smart, says, "Yes, but I want to know the rule by which you select those as examples of piety." Sounds like a brilliant maneuver. It's been considered irrefutable, roughly, for two thousand years. And Euthyphro sort of breaks down, like all these people interviewed by Socrates. He can't give Socrates any rules, but he also can't convince Socrates that he knows anything about piety, because he can't explain how he does it. Socrates found, as you know, that nobody could give the rules by which they do their expertise -- not the craftsmen, not the poets, and horror of horrors for Socrates, not even the statesmen. So he thought things were in bad shape -- people were just making blind guesses, because nobody could state the principles on which they acted, but only give examples.

I'll just go through one step to the end. Descartes contributed a lot more by saying we ought to be able to analyze anything that's intelligible into elements. Then Hobbes, who was a contemporary, said reasoning is nothing but the addition of parcels, which is calculating with bits. So by the time you get to the expert systems people, people who actually try to get expertise out of experts, they have the following interesting experience. Nothing has changed in two thousand years. They go to an expert, and the expert keeps giving them examples. They say, "We don't want examples; we want your rules." The expert says, "I don't have any rules." They finally badger the expert into giving them rules. They run these rules on their computers, and these expert systems are never as good as the experts whose rules they got.

Now what I think follows from this, what my brother and I concluded, is maybe we should take seriously Euthyphro's attempt to give examples from the mythical tradition of piety, and the attempts of experts in, say, mining to give examples from their domain. Maybe what the experts really have in their heads are examples.
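
[The two pictures of expertise can be set side by side in code: the explicit rule a knowledge engineer might elicit, versus the store of remembered cases Dreyfus thinks the expert actually draws on. The mining task, thresholds, and cases are all invented:]

    import numpy as np

    # What the knowledge engineer extracts: a brittle feature-based rule.
    def rule_based(grade, depth):
        return "mine" if grade > 0.5 and depth < 100 else "don't mine"

    # What the expert may actually have: remembered cases and outcomes.
    cases = [((0.6, 80), "mine"),
             ((0.4, 60), "mine"),
             ((0.7, 300), "don't mine")]

    def exemplar_based(site):
        _, outcome = min(cases,
                         key=lambda c: np.linalg.norm(np.subtract(c[0], site)))
        return outcome

    # A borderline site: the elicited rule misses what a nearly
    # identical remembered case covers.
    print(rule_based(0.45, 60), "/", exemplar_based((0.45, 60)))
    # -> don't mine / mine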

MISHLOVE: Anecdotes.

DREYFUS: Anecdotes, myths.

MISHLOVE: Like the teaching parables of Jesus, so to speak, stories.

DREYFUS: Parables, models, paradigms, and all that. The only philosophers who thought this -- two philosophers who were very much philosophical rebels -- were Wittgenstein in England, who said the best way to explain things is to give a perspicuous example as a paradigm case; and Heidegger, who had his own notion of paradigms and the important role they played in understanding. But it would be a disaster for conventional artificial intelligence if this were true, because it would say, in effect, that if you get an expert to give you his rules, you're forcing him to regress to the beginner stage where he had rules. You're not getting his expertise; you're getting exactly his non-expertise.

MISHLOVE: It almost sounds as if the implication of what you're saying is something like this -- that in the grand debate between the sciences and the humanities, the universe itself, or the human mind, may ultimately be more like a big story than like a machine.

DREYFUS: Yes, yes, I think narrative is much more important than giving principles and deductions, if you want to understand anything in the everyday world. It happens that in understanding things in the world of physics, it just turned out -- who would have thought? -- that you shouldn't have used the narratives that the Renaissance tried to use, that you could do it by abstract rules operating over features. That's what Galileo and Newton found out. And it works, for planets and space ships. But nowhere in our everyday world, on Earth here, does that seem to work.

MISHLOVE: Even physics is going through some extraordinary changes now, where the rules seem very fuzzy in and of themselves.

DREYFUS: Well, I don't really believe that. I don't agree with people like Fritjof Capra at all. I think that physics is just as basically theoretical as it always was, where theory means that you capture regularities in the phenomena by finding some abstract features and then finding universal principles that relate those features. Plato started it, and Galileo continued it. No matter what weird features the modern people are finding, and what amazingly weird covering laws they have, it's still not holistic. Here's the crucial difference: the theories are always abstractable from any particular example. You may need an example to learn the theory, but the meaning of the theory is independent of the example. Whereas I think in our world examples, the incarnation of the truth, are absolutely essential.

MISHLOVE: Let me come around again to another point. You mentioned that the pioneers of the mechanistic view that you're attacking are Socrates and Plato. And yet they're also regarded by people such as myself as being very mystical thinkers. There seems to be a paradox here, that they're both mystical and mechanical.

DREYFUS: Well, let's take them separately. Socrates, I think, wasn't very mystical.

MISHLOVE: Socrates talked about having a daemon, a voice that spoke to him.

DREYFUS: Right. But this little voice, the daemon, only said one thing. When he was doing something wrong, it said, "Don't." That's all it ever said, and he says that. But it kept him from making serious mistakes. That's already, I agree with you, pretty strange; but generally Socrates didn't believe you could know anything positive, except if you could state the principles underlying it, and that's why Socrates claims he didn't know anything. But Plato --

MISHLOVE: Plato had a much different view.

DREYFUS: Plato had a much different view. Plato knew a lot of things, and he knew them by a kind of mystical intuition of the forms and of the good. Plato is a strange combination of a very rationalist philosopher, from Socrates, and lots of mystical intuitions from the mystery religion tradition that he was involved in. So I wouldn't want to say that Plato was a rationalist or a cognitivist or anything like that. But what happens is the philosophical tradition has dropped the mystical side of Plato and taken only the rational side of Plato, the side of Plato that did believe that there were underlying principles behind all intelligibility, and that anything people did that made sense and worked, made sense and worked because they had unconsciously understood the underlying abstract principles.

MISHLOVE: It seems to me that we're getting to a perennial argument here -- the argument of a hidden variable, so to speak. I'm sure that people in artificial intelligence who would oppose your point of view are saying, "As soon as we discover the hidden variables, we'll be able to solve our unsolvable problems."

DREYFUS: Oh, I think they are. But it seems to me it's not a question of an in-principle proof, as some people thought it was for me -- and that must be partly my own fault, in writing What Computers Can't Do. People thought that I was saying in principle logic machines could never be intelligent, and then they looked through the book and they couldn't find where the argument was that proved that. I don't think you can prove that. I don't think you can prove that logic machines couldn't be intelligent. Maybe the rationalist side of Plato and Hobbes and Descartes and Kant had something. What I think you can show is that traditional AI, conventional AI, is what philosophers of science call a degenerating research program. That's a technical term: a degenerating research program starts out with a lot of promise. In 1965, when I was first studying this at the RAND Corporation, Newell and Simon had just programmed computers to prove theorems in logic and to solve certain kinds of problems like the Tower of Hanoi, which is a problem about moving rings.
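
[The Tower of Hanoi is exactly the kind of well-defined puzzle those early programs could handle. A standard recursive solution -- not Newell and Simon's means-ends analysis, just the textbook version -- fits in a few lines:]

    def hanoi(n, source, target, spare):
        """Move n rings from source to target without ever placing
        a larger ring on a smaller one."""
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)        # clear the way
        print(f"move ring {n}: {source} -> {target}")
        hanoi(n - 1, spare, target, source)        # restack on top

    hanoi(3, "A", "C", "B")   # prints the optimal 2**3 - 1 = 7 moves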

MISHLOVE: Those were very heady days.

DREYFUS: Those were heady days. And they believed that the same techniques they were using would solve all the problems one by one. What makes it a degenerating research program is, problems come up which nobody expected, and which it looks like the old techniques won't solve. Just to give you one example, it turns out people use images a lot in solving problems, but images can't be simulated on these logic machines. They would have to be turned into descriptions; but it doesn't look like people use descriptions -- they use images.

MISHLOVE: Our mind works with images in a much different way than a computer could possibly do right now.

DREYFUS: Exactly. And that turned out to have some role in intelligence.

MISHLOVE: Well, to wrap up, do you think if your argument is accepted, that we'll be able to look at the human mind anew, and appreciate ourselves, and maybe strengthen those very parts of the human mind -- the intuitive side of ourselves, the use of imagery -- that seem to have been neglected a bit in our culture?

DREYFUS: I think that it may be great luck for us. I mean, once we see that logic machines can do what we thought we could do as rational animals, we'll see that we're obsolete as rational animals, and we will understand that we are really intuitive animals, and we'll appreciate wisdom and intuition a lot more than we do now.

MISHLOVE: Dr. Dreyfus, it's been a pleasure having you here on the program. I think what you've said, although you probably haven't intended it, is probably of great comfort to people who are parapsychologists, or have mystical intuitions like myself. I know you try to distinguish yourself from that, but you're building very important bridges. Thank you very much. It's been a pleasure.

DREYFUS: You're welcome.

END

