Reading "Perceiving Temporal Regularity in Music," a paper that Edward W. Large and Caroline Palmer published in Cognitive Science in 2002, turned out to be quite an exercise in reviving past skills that have not seen much use lately. It was also a satisfying reminder that patience can sometimes be a beneficial side effect of retirement. Not too long ago I was writing to a sometime colleague about the extent to which "professional" research has become contaminated by business practices concerned more with return-on-investment than with insights. In that climate I realized that the pressure of delivering results was a serious impediment to reading extended survey papers when my mind was preoccupied with teasing out highly specific answers to narrowly framed questions.
These days I do not have to worry as much about either the questions or how good the answers are. As a result I take more pleasure in reading a challenging technical paper that satisfies my curiosity than in reading a lot of that poorly written junk that tries to pass itself off as literature. Thus, while much of recent fiction may try my patience to the point of aggravation, I seem to have no trouble taking the time to dig into either the breadth of a survey paper or the depth of a report of specific results, particularly if it involves catching up on how the state of the art in a particular area has matured since I was last pursuing it as a "professional" researcher.
In that "former life" one of the projects I pursued while I was in Singapore was the use of visualization as a tool for piano pedagogy. This came about as a result of a conversation with a piano teacher to whom I was explaining MIDI representation. I showed her the representation for my own performance of Wolfgang Amadeus Mozart's K. 2 minuet; and, as she eyeballed the numbers, she started making observations about my phrasing as if she had actually heard the performance itself. She then talked about how hard it was to explain phrasing to beginning students, since it involves a major step beyond just decoding the notation.
What came out of this was a relatively low-level approach to representing those MIDI data as images superimposed on the score of the music being performed. For example, we colored the notes on a continuum between blue and red to indicate dynamic level. However, we also made a crude stab at inferring the pulse of the pupil's "internal metronome," representing it as tick marks that might appear to the left or right of the notes themselves, rather than directly underneath.
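For readers curious about what such a mapping amounts to in practice, here is a minimal sketch of the kind of dynamics-to-color continuum described above. The function name and the particular linear interpolation are my own assumptions for illustration; the original project's actual color scheme is not documented here. MIDI note-on velocity does, however, genuinely range from 0 to 127.

```python
def velocity_to_rgb(velocity):
    """Map a MIDI velocity (0-127) onto a blue-to-red continuum.

    Hypothetical reconstruction of the kind of mapping the text
    describes: soft notes shade toward blue, loud notes toward red.
    """
    if not 0 <= velocity <= 127:
        raise ValueError("MIDI velocity must lie in 0..127")
    t = velocity / 127.0            # 0.0 = softest, 1.0 = loudest
    red = round(255 * t)            # loud notes shade toward red
    blue = round(255 * (1 - t))     # soft notes shade toward blue
    return (red, 0, blue)

print(velocity_to_rgb(0))    # pure blue for the softest note
print(velocity_to_rgb(127))  # pure red for the loudest note
```

A rendering layer would then paint each notehead on the score with the color returned for its recorded velocity.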
Reading the Large-Palmer paper, I realized that my team had taken a first stab at capturing and visualizing the subtleties of timing in a performance. On the one hand, there was the regularity of the beat itself (or, if you buy into the theories of Fred Lerdahl and Ray Jackendoff, the hierarchy of such regularities); but then there was the principle that phrasing often involved a departure from those regularities. Large and Palmer managed to capture this in a rather elegant mathematical model, which has now opened the door to new ways in which to visualize the subtleties of performance.
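To give a flavor of the kind of model involved: Large and Palmer work with adaptive oscillators that entrain to the timing of events. The toy sketch below is not their published equations (their model includes, among other things, an attentional pulse that weights how strongly each onset can adjust the oscillator); it simply shows the bare idea of an "internal metronome" whose phase and period both adapt toward the onsets it hears. The function name and gain parameters are my own illustrative choices.

```python
def track_beats(onsets, period, phase_gain=1.0, period_gain=0.5):
    """Toy adaptive-oscillator beat tracker (illustrative only).

    Each observed onset nudges two things: the predicted time of
    the next beat (phase correction) and the internal beat
    interval itself (period adaptation).
    """
    expected = onsets[0] + period   # first beat prediction
    predictions = []
    for onset in onsets[1:]:
        error = onset - expected            # how early/late the onset fell
        period += period_gain * error       # adapt the internal tempo
        # correct the current beat estimate, then predict the next beat
        expected = expected + phase_gain * error + period
        predictions.append(expected)
    return predictions
```

Fed a steady pulse at a tempo different from its initial period, such an oscillator gradually locks on; the expressive departures from regularity that phrasing introduces would then show up as the residual timing errors, which is exactly the sort of thing one might visualize.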
I still champion the value of such visualization. There are too many times when mere words cannot guide the listening practices of even the best students; and, of course, that challenge of description remains with performers long after their student days have passed. Whether or not visualization technology will build on the recent insights from Large and Palmer remains to be seen, particularly since it is unlikely to become a major revenue stream that seizes the attention of would-be entrepreneurs. Nevertheless, it is nice to know that the state of knowledge is still being advanced by those more interested in the heavy lifting of science than in cashing in on the "next big thing" in Silicon Valley!