The New York Review scored a real coup in getting Garry Kasparov as its reviewer for the MIT Press book Chess Metaphors: Artificial Intelligence and the Human Mind by Diego Rasskin-Gutman, translated from the Spanish by Deborah Klosky. Having held the position of top-ranked chess player in the world until his retirement from professional chess in 2005, Kasparov can provide the practitioner's perspective on his losses to the IBM Deep Blue supercomputer; and, as co-founder of The Other Russia, the one pro-democracy coalition that seems to receive coverage for its opposition to Vladimir Putin, he is also a man with the wisdom to choose his words with great care and effective impact. Indeed, Kasparov may be so cautious with his words that one suspects he has been kinder to Rasskin-Gutman than the author deserves. The most important point of his review is that those who try to reduce chess to a solvable mathematical problem always seem to miss the subtle roles that practice plays when the game is actually played.
Chess originally appealed to the artificial intelligence community because of its complexity. I suppose there was a tendency to believe that, if one could master the complexity of the number of possible chess games, one could apply the same strategy to mastering the complexity of neural interconnections. From that proposition followed the corollary that, if one mastered the complexity of neural interconnections, one could model consciousness itself. Those who dared to argue that consciousness might involve something more than neural material were dismissed as dualists wedded to belief in some unquantifiable interaction between mind and body. If the body was involved at all, it was only in the stimulation of individual neurons; and, if one could master complexity, those stimuli would eventually work their way into an effectively operating model of consciousness.
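To put rough numbers on that complexity, here is a quick back-of-the-envelope sketch in Python. The figures are my own illustrative assumptions (Shannon's often-quoted branching-factor estimate for chess and the usual order-of-magnitude counts for neurons and synapses), not anything drawn from Kasparov's review or from Rasskin-Gutman's book.

```python
from math import log10

# Back-of-the-envelope figures (illustrative assumptions, not from the review
# or the book): Shannon's classic estimate takes roughly 35 legal moves per
# chess position and a typical game length of about 80 plies (half-moves).
branching_factor = 35
plies_per_game = 80

game_tree_size = branching_factor ** plies_per_game
print(f"possible games: roughly 10^{log10(game_tree_size):.0f}")  # ~10^124 for these inputs

# Commonly cited order-of-magnitude counts for the human brain, for comparison:
neurons = 10 ** 11    # ~100 billion neurons
synapses = 10 ** 14   # ~100 trillion synaptic connections
print(f"neurons: ~10^{log10(neurons):.0f}, synapses: ~10^{log10(synapses):.0f}")
```

Even at these crude estimates, the count of possible games dwarfs the count of synapses by more than a hundred orders of magnitude, which is part of what made "mastering complexity" sound like such a promising unifying strategy.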
The key point of Kasparov's review, however, is that, while Deep Blue may have demonstrated that a mathematical model could master chess better than the best human player, one could not (in either the near or distant future) deduce from the operation of Deep Blue a model of human consciousness. To demonstrate this point, he does not try to investigate the capabilities and limitations of the mathematics behind the Deep Blue program. Instead, he cites an experiment in the domain of what used to be called "man-machine systems": an experiment in "freestyle" chess which, he observes gently, was totally ignored in Rasskin-Gutman's book. The "freedom of style" was that, while the competition was for humans, a game could be played by a team consisting of multiple players and/or computers. In other words, a "player" was not necessarily an individual but rather a "pooling of expertise," where that expertise could reside in humans and/or algorithms. Kasparov summarizes the result of this experiment with the following proposition:
Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.
In other words, the "secret sauce" of a "winning combination" resided in the quality of the process that accounted for what the human did, not only in making moves but in interacting with the machine element(s) of the team. To invoke the sort of terminology I have tried to explore in my own research, the essence of monism does not lie in noun-based relationships between components of body and "components" (if they exist) of mind; rather, it lies in verb-based relationships between what the body does and what the mind does (which may include communicating with resources such as machines and other minds).
Kasparov sees this proposition as the explanation of why any mathematical model of chess that never gets beyond the combinatorics of possible moves will never be an effective model of the practices of an expert chess player. He says the following:
Where so many of these [mathematical] investigations fail on a practical level is by not recognizing the importance of the process of learning and playing chess. The ability to work hard for days on end without losing focus is a talent. The ability to keep absorbing new information after many hours of study is a talent.
Note, just to draw on one of my other favorite topics, that those three sentences would make just as much sense if one were to substitute the word "piano" (or any other instrument) for "chess." Note, also, that Kasparov stresses a tight coupling between learning and doing. One may seek out descriptions of what one and others have done; and those descriptions are, themselves, tightly coupled to the processes of doing (one of the insights we encounter in Plato's "Theaetetus"). However, as John Dewey argued, what we learn is necessarily connected to what we do; and a database of those descriptions will never be an adequate substitute for the actual doing.
Thus, Kasparov's proposition should be taken as a plea for a paradigm shift among those committed to research in artificial intelligence. The shift involves setting aside the Holy Grail of the autonomously intelligent machine in favor of the less glamorous pursuit of more effective problem solving and decision making through man-machine interactions. It would mean departing from exhaustive efforts to model problems and situations in favor of attending to the verbs themselves: the "solving" of problems and the "making" of decisions. To draw, once again, upon the trivium, logic should no longer be allowed to play with all the marbles. Verb-based thinking will require an intense understanding of the richness of verb grammar and of the action-based foundations of rhetoric (which cannot be ignored when considering how minds interact). Unfortunately, such understanding can only build on a far richer educational foundation than the one we currently have; so it is highly unlikely that anyone with any clout in our current culture will take Kasparov's plea seriously.
2 comments:
Thus, Kasparov's proposition should be taken as a plea for a paradigm shift among those committed to research in artificial intelligence. The shift involves setting aside the Holy Grail of the autonomously intelligent machine in favor of the less glamorous pursuit of more effective problem solving and decision making through man-machine interactions.
You might want to actually do some research on what the artificial intelligence community has been doing for decades now. This isn't a new idea. The artificial intelligence research community got there long ago. But, as you say, it's not quite so glamorous.
The great bane of artificial intelligence research has always been public relations. I would take that scope of "decades" to encompass the rise of expert systems, most (probably all) of which are grounded in man-machine interactions, not just in how they are used but in how their respective knowledge bases are constructed. However, to draw upon my own terminology, the technology of expert systems is thoroughly noun-based, even when we include efforts that tried to incorporate temporal logic. The shortcomings of that foundation were identified and analyzed at great length by Lucy Suchman in Plans and Situated Actions, whose subtitle, The Problem of Human-Machine Communication, gets to the heart of the matter (and emphasizes the flaw in my own choice of words). Expert systems can go a long way in managing transactions with data resources, but they fall short when it comes to the communicative actions that take place among human experts. To invoke the language of
Jerome Groopman, a medical expert system tries to be a valuable resource for "what doctors know"; but, when it comes to getting the job done effectively, "what doctors do" depends much more heavily on "how doctors think." We need a bit more truth in advertising when it comes to identifying the shortcomings of technology in supporting that latter practice!