I spent two hours this morning listening to the advanced chamber music students from the Preparatory Division (as in too young to matriculate) of the San Francisco Conservatory of Music. I could start out with some hyperbolic statement about some of these kids being smaller than their instruments. That would be a bit too extreme, although clarinetist Gabe Bankman-Fried certainly came close; and, while his name may have been a bit overextended for his stature, his sound certainly made up the difference. It's rough making the first impression in a showcase; but, while he was followed by many other impressive performers, his performance of the clarinet version of the Beethoven Opus 11 piano trio was still with me as I was leaving the Conservatory!
My greatest fear in listening to young talent is that, no matter how young they may be, there is usually someone who is already preparing them to face the competition circuit. Following up on a previous post, I would accuse that "someone" of robbing a kid of what may be the last time to really enjoy the music s/he has decided to play. So, as each little ensemble came out to strut its stuff, I found myself sorting them out. There was probably only one that just did not "get it:" The opening gesture of the Haydn quartet whose first movement they had selected (Opus 71, Number 2) was just an unfortunate blooper from which they never recovered. They were too poker-faced to give any indication of whether or not they realized this but certainly did not show any satisfaction when taking a bow. The remainder could be divided into those who did want to enjoy the music and those who simply wanted to demonstrate that they did "get it;" and it seemed as if the best way to tell the former from the latter was that the performers in the former class had locked into that wonderful sensation of being able to hear and enjoy your own performance while it is taking place.
Music education does not prepare one for this kind of talent; and I think much of the problem stems from the fact that what we have chosen to call "music theory" has little to do with what theorist Thomas Clifton once called "music as heard." When the camel of the computer first stuck its nose under the tent of music theory, we encountered a series of misconceptions about how to think about music. Some of them are embarrassments of the past. Others are still with us. All provide us with lessons to learn.
When I was first getting into this game, it seemed as if any theoretical research had to begin with a representation system; and, since music was grounded in a relatively sophisticated notation, the best approach would be to develop a representation system for the current state of music notation. Back in those days the laser printer was barely a glint in anyone's eye; but IBM had a project called the "photon printer," which was intended to be programmable for rendering any kind of image. Stefan Bauer-Mengelberg was at least part-time at IBM in those days and launched a project for a programming language for this printer that would render music notation. This was originally known as the "Ford-Columbia Input Language" and would later be called "DARMS" (Digital Alternate Representation of Musical Scores), sort of an homage to the experimental music work going on at Darmstadt. DARMS was nothing if not thorough. I remember Ray Erickson showing me a sample of how it had stood up to the extreme demands of a score by Elliott Carter. The problem was that there was this school of thought that formed around DARMS with the idea that it would be the perfect input language for experiments in computer analysis of musical compositions. Needless to say, it was a decidedly inappropriate representation system, since it was concerned only with where marks should be placed on a sheet of paper. It would be a bit like trying to use PostScript as an input language for computer-based literary analysis. Sure, it would capture the sort of visual detail one encountered in Mallarmé, cummings, or the "concrete" poets; but it would be thoroughly unwieldy for the more ordinary forms of text usage.
As a notation DARMS was not particularly kind to either the "vertical" dimension of music (the progressions of harmonies) or its "horizontal" dimension (the interplay of the voices of counterpoint); and, since it was totally locked into the notation itself, it had no way to deal with the ways an actual performance would interpret what had been notated.
If IBM was focusing on music notation, Bell Laboratories, under the leadership of Max Mathews, was at the other extreme, developing a representation language for audio synthesis in its full generality. These languages were called the "Music" languages, the most developed being "Music V." This was a highly modular language, impeded primarily by the batch-processing system that supported it. I have no idea if Robert Moog was aware of Mathews' activities; but there is a lot of "family resemblance" between the modular elements of Music V and the component modules of the first publicly available Moog synthesizers. The problem here, however, was that all of the representation effort went into describing the computational synthesis for "instruments." The representation of what those "instruments" would then "play" was almost left as an afterthought; and, since Music V did not run in a real-time environment, the need to "play" the instruments (rather than "conceive of a score" for them) was not an issue. Indeed, the computer environment was so "user-hostile" that, for most "real" musicians, getting any sort of interesting sounds out of the system for a respectable duration of time was enough of an accomplishment that the results, whatever they may have been, were immediately dubbed a "composition!"
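For readers who never saw one of the "Music" languages, the unit-generator idea can be suggested in a few lines of modern code. This is strictly my own toy sketch, not Music V syntax: an "instrument" is a patch of small signal-producing modules wired together, here just an oscillator shaped by a decaying envelope.

```python
import math

SAMPLE_RATE = 8000  # assumed sample rate for the sketch, in Hz

def oscillator(freq, n_samples):
    """Sine-wave unit generator: yields one sample at a time."""
    for n in range(n_samples):
        yield math.sin(2 * math.pi * freq * n / SAMPLE_RATE)

def envelope(samples, n_samples):
    """Linear-decay unit generator: scales another generator's output."""
    for n, s in enumerate(samples):
        yield s * (1.0 - n / n_samples)

# "Playing a note" amounts to wiring generators together and pulling
# samples through the patch: half a second of a fading 440 Hz tone.
note = list(envelope(oscillator(440.0, 4000), 4000))
```

The point of the sketch is where the design effort goes: all of it into the patch (the "instrument"), with the note itself reduced to a pair of numbers, which is roughly the imbalance described above.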
I took a crack at a representation that had more to do with music performance under the guidance of Marvin Minsky. The result was EUTERPE; and, if I had been more enterprising (or greedy), I probably could have tried to promote it as prior art for MIDI. I was primarily interested in the idea that the voices of a contrapuntal structure were like a bunch of computer programs running in parallel. So I asked what would be the right language for those individual voices, bearing in mind that the programs would sometimes have to coordinate with (i.e. cue) each other. Also, because I was getting much of my guidance from Ezra Sims, who was extremely interested in microtonality, I endowed EUTERPE with an octave that was divided into 72 equal parts, thus enabling Sims to experiment with both quarter and third tones. MIDI never considered that as an option; and these days, as we become more interested in alternatives to equal-tempered tuning, that is probably its greatest disadvantage.
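The arithmetic behind that design choice is simple enough to show. The sketch below is my own illustration (not EUTERPE code, and the 440 Hz reference is an assumption): with the octave divided into 72 equal steps, each step spans 1200/72 ≈ 16.7 cents, so an ordinary semitone is 6 steps, a quarter tone is 3, and a third tone is 4.

```python
A4 = 440.0  # assumed reference pitch in Hz

def freq_72edo(steps_from_a4):
    """Frequency of a pitch a given number of 72-EDO steps from A4."""
    return A4 * 2.0 ** (steps_from_a4 / 72.0)

semitone_up = freq_72edo(6)      # same as 12-tone equal temperament
quarter_tone_up = freq_72edo(3)  # halfway between A and B-flat
third_tone_up = freq_72edo(4)    # the third tone Sims wanted
```

Since 72 is divisible by 12, everything in ordinary equal temperament is still available; the quarter tones and third tones come along for free.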
Meanwhile Robert Cogan was experimenting with an entirely different approach to representation. In his book New Images of Musical Sound Cogan developed analyses based on images of the actual vibrations responsible for the sounds, thus taking the most direct path possible to address "music as heard." When he was working on this book, his apparatus was highly limited. These days we can do this sort of thing with just about any personal computer; and, when I was in Singapore, I even advised a Master's student on a project that involved using such data to compare performances of the Stravinsky "Serenade" for solo piano, one of the performances being Stravinsky's own. The most interesting thing about Cogan's book is that it is not limited to the usual "classical" compositions addressed by analysis; and, indeed, one of his analyses is of a Billie Holiday recording of "Strange Fruit." However, if the acoustic strategy of Music V was flawed by being based on abstractions that might not be particularly useful, Cogan's approach did not involve any abstractions. At the end of the day, he seemed to be using the visual displays simply to back up what his ear was telling him, which is not a bad idea but still needs to be done in some sort of consistent way.
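To make the "any personal computer" claim concrete: the kind of image Cogan had to struggle for is now a few lines of code. Here is a minimal short-time Fourier magnitude spectrogram, assuming only that NumPy is available (this is a generic sketch of the technique, not the apparatus Cogan or my student actually used).

```python
import numpy as np

def spectrogram(signal, frame_size=1024, hop=512):
    """Magnitude spectrogram: rows are time frames, columns frequency bins."""
    window = np.hanning(frame_size)
    frames = [
        np.abs(np.fft.rfft(window * signal[i:i + frame_size]))
        for i in range(0, len(signal) - frame_size + 1, hop)
    ]
    return np.array(frames)

# Sanity check: a 440 Hz sine sampled at 8000 Hz should peak near
# frequency bin 440 / 8000 * 1024, i.e. around bin 56.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = int(spec[0].argmax())
```

Two such arrays, one per recorded performance, are the sort of data the comparison project worked from; the hard part, as the paragraph above suggests, is deciding consistently what to look for in them.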
This problem of abstraction is basically the problem that faces all of the attempts I have outlined. However, one cannot define criteria for the "right" abstraction without first identifying the sorts of questions one hopes the abstraction will answer. (Minsky made this point in his "Matter, Mind, and Models" essay.) The sad truth is that the questions one asks in music theory are more normative (the sorts of questions folks have asked for the last two hundred years) than epistemological or even ontological. This is why I explored the possibility that performances, rather than notes or, for that matter, audio traces of recordings, could best be examined through the three lenses of logic, grammar, and rhetoric. Each of these is a distinct category that introduces its own family of questions, and each of those families needs to be addressed through a distinct set of theoretical strategies and methods.
This, of course, is my own concept of "rehearsal" at its most extreme. I could probably "wing" a few representative questions for those categories; but it would be better to be honest and admit that I have not thought that far yet! Nevertheless, today's recital reminded me of just how important listening is and how little we actually know about the process as it applies to music. Clifton's approach to "music as heard" was grounded in phenomenology but still suffered from the problem of not beginning with an examination of what questions he actually wanted to answer. Of course, it probably makes sense to address the question of who is doing the asking. The questions a performer needs to ask are unlikely to be the ones that an audience listener would raise, no matter how well informed. The only educated guess I would hazard, though, is that the sort of questions raised by an academic trying to get a paper published are unlikely to align with either of these other two classes of questions!