King’s College, Cambridge, where Alan Turing pursued some of the earliest thoughts behind artificial intelligence (photograph by Dmitry Tonkonog, under the Creative Commons Attribution-Share Alike 3.0 Unported license, from Wikimedia Commons)
Apparently, it is time for my annual exasperation with a technical world that is perfectly happy discussing “artificial intelligence” (AI) software without having the foggiest idea of how that concept was first imagined. It would probably be fair to say that the concept first emerged in 1947, when Alan Turing took a sabbatical year at Cambridge University after his wartime efforts at Bletchley Park. While there he wrote a monograph entitled Intelligent Machinery, which was not published in his lifetime. It was only a few years later that “Computing Machinery and Intelligence” appeared in the October 1950 issue of Mind.
The first sentence of that article is straightforward: “I propose to consider the question, ‘Can machines think?’” Turing then goes on at length, beginning with a serious effort to establish “ground rules” behind what that question is asking. By the end of the essay, he admits that he has not resolved the question. Nevertheless, he is optimistic that, while much “needs to be done,” others will carry on with the doing.
One of those “others” was Professor Marvin Minsky on the faculty at the Massachusetts Institute of Technology. He was able to raise government funding to support the Artificial Intelligence Laboratory; and he served as my advisor for both my undergraduate and doctoral dissertations. Both of these were structured around the hypothesis that both the composition and performance of music could be managed through software.
It did not take me long to acquire a generous variety of coding skills. As a result, much of my research was fueled less by programming itself than by the study of music history. Developing software to “compose music” came easily. Creating music that deserved attentive listening was another matter!
It was through that hard truth that I found myself exasperated with the title of a CNET article that appeared early this morning: “Google Thinks AI Can Make You a Better Photographer: I Dive Into the Pixel 10 Cameras.” This is an expository piece about some seriously powerful image-processing software. Indeed, given what passes for photography these days, I would say that the software is jaw-dropping; but is it “intelligent”?
This may be old-fashioned, but I tend to associate intelligence with the power to make reasoned decisions to resolve difficult problems. There is no questioning that the Pixel 10 software is powerful; but where are the “reasoned decisions”? Certainly, the team behind that software had to confront a diversity of challenges; and, if the author of the article, Jeff Carlson, can be taken as a “credible source,” then there is no denying that the resulting achievement is an impressive one. However, if the “reasoned decisions” only went into making that software, rather than residing in the software itself, is the “intelligence” behind the software “artificial” or “human”?
My fear is that the usage of the phrase “artificial intelligence” is so remote from the ambitions that motivated Turing and Minsky that the words themselves have pretty much lost any useful meaning. Meaning, of course, is necessary for person-to-person communication. I took a “deep dive” into that concept when I read Jürgen Habermas’ The Theory of Communicative Action. Through his own analysis of the writing of Max Weber, he concluded that a capitalist society whose value system is based only on market value is fated to suffer two different kinds of loss: loss of meaning and loss of freedom. Could the current misinterpretation of “artificial intelligence” lead ultimately to both of those losses?
