Last night Adobe Books hosted the latest concert in the NextNow performance series produced by Mika Pontecorvo. He prepared an evening of three sets, each of which involved real-time computer processing. For those who have either followed this particular approach to music-making or participated in it (full disclosure: I have been deeply involved in both of those activities), the concert turned out to be an engaging series of reflections on both past and more recent approaches to those techniques.
The opening set was taken by John Bischoff, described on his Wikipedia page (and rightly so) as “an early pioneer of live computer music.” Working at a relatively small table, he performed with a MacBook Pro and what looked like a home-grown configuration of manually operated controls. All of Bischoff’s performance, consisting of three short pieces filling roughly half an hour, was realized through his manipulation of those controls.
If both my memory and my ears serve me well, the software being controlled involved the FM (frequency modulation) synthesis algorithm that John Chowning first implemented in 1967. Prior to Chowning’s invention, computer music relied primarily on synthesis techniques based on simple waveforms, noise created through random-number generators, and filters. When it came to constructing “inventive” sonorities, filtering noise was the technique of choice. FM synthesis provided an innovative algorithm whose parameters could be varied to yield a much richer family of sonorities, many of which suggested bell-like qualities.
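To give a sense of what Chowning’s discovery entails, here is a minimal sketch of FM synthesis in Python. The specific parameter values and the decaying envelope are my own illustrative choices; I obviously cannot say what software Bischoff was actually running last night.

```python
# Minimal sketch of Chowning-style FM synthesis. All parameter values
# here are hypothetical and chosen only for illustration.
import math
import struct
import wave

SAMPLE_RATE = 44100

def fm_tone(carrier_hz, modulator_hz, index, seconds):
    """A carrier sine whose phase is modulated by a second sine."""
    samples = []
    for n in range(int(SAMPLE_RATE * seconds)):
        t = n / SAMPLE_RATE
        envelope = math.exp(-3.0 * t)  # exponential decay, bell-like
        phase = (2 * math.pi * carrier_hz * t
                 + index * envelope * math.sin(2 * math.pi * modulator_hz * t))
        samples.append(envelope * math.sin(phase))
    return samples

# Non-harmonic carrier-to-modulator ratios (here 1:1.4) are what give
# FM its clangorous, bell-like sonorities.
tone = fm_tone(440.0, 616.0, index=5.0, seconds=2.0)

with wave.open("fm_bell.wav", "w") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes(b"".join(struct.pack("<h", int(30000 * s)) for s in tone))
```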
Here I must confess to a strong personal connection to Bischoff’s performance last night. 1967 was the year of my transition from undergraduate to graduate student at the Massachusetts Institute of Technology. My own interests in music were being pursued at the Artificial Intelligence Laboratory, thanks to extensive access to a PDP-6 computer and, subsequently, its PDP-10 successor. Chowning’s algorithm was first implemented at Stanford University, but one of my colleagues installed it on the PDP-10 I had been using. He used a bank of 36 on-off switches on the control panel as the interface to the algorithm, assigning different subsets of the switches to the different parameters.
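For readers curious about how 36 switches could drive such an algorithm, the following is a purely hypothetical sketch of partitioning a 36-bit console word into three parameter fields. The field widths and scalings are inventions of mine for illustration; I no longer recall the actual assignment my colleague used.

```python
# Hypothetical sketch of splitting a 36-bit PDP-10 console word into
# FM parameters. The field boundaries and scalings are illustrative
# only, not a reconstruction of the original assignment.
def decode_switches(word):
    """Split a 36-bit switch word into carrier, modulator, and index fields."""
    carrier_field   = (word >> 24) & 0xFFF   # top 12 switches
    modulator_field = (word >> 12) & 0xFFF   # middle 12 switches
    index_field     =  word        & 0xFFF   # bottom 12 switches
    # Scale the raw fields onto musically useful ranges.
    carrier_hz = 40.0 + carrier_field              # roughly 40-4135 Hz
    ratio      = 0.5 + modulator_field / 1024.0    # modulator/carrier ratio
    index      = index_field / 256.0               # modulation depth
    return carrier_hz, carrier_hz * ratio, index

print(decode_switches(0o123456701234))  # a 36-bit word, in PDP-10-style octal
```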
That arrangement may have enabled one of the earliest instances of real-time improvisation based on a computer algorithm. After the two of us had played with the interface, I decided to start recording a reel of tape. After the tape had filled, I took it down to the facilities at the campus radio station (whose call letters at that time were WTBS). Using that equipment to copy sections of the tape at different speeds, played both forward and in reverse, I eventually forged “Lemniscate,” my first piece of tape music, which would later be used by a choreographer I knew in New York.
Watching Bischoff at work on his controls was thus both a trip down memory lane and a comforting recognition that solutions to the interface problem had become far less arcane. Having established my thoughts about the “how,” I could then devote full attention to the “what.” Much of the performance involved pointillist techniques, but there were also stirring moments in which masses of sound would gradually accumulate.
The sound itself came from two large loudspeakers in the small space of Adobe’s front room, but nothing ever felt unbearably loud. There was a clear sense that Bischoff’s control work was tightly coupled to his own listening, an activity very much in the spirit of the jamming of an adventurous jazz pianist. (Cecil Taylor is the example that comes to mind most readily, even if Bischoff’s rhetorical approaches were decidedly his own.)
Bischoff was followed by Sean Hamilton, a percussionist based in Tampa. He has been on an extended tour of the United States, and NextNow provided him with a stop in San Francisco. Like Bischoff he worked with a MacBook Pro running his own software. However, that software seemed to involve primarily capture and playback and was used very sparingly. Hamilton’s set amounted to an extended improvisation with ties (once again) to jazz jamming at its most adventurous.
Such improvisations often involve imaginative use of polyrhythmic structures, such as those unleashed by Jacob Felix Heule almost a week ago at the Center for New Music. Hamilton’s technique, on the other hand, tended to take a “single voice” approach in which one “melodic line” would peregrinate from instrument to instrument across the full population of his drum kit. He also showed meticulous attention to dynamic levels, both within individual lines and in the phrasing with which those lines were laid out in sequence.
This brings us to the computer. It would appear that the primary role of the software was to enhance the qualities of reverberation. Thus, one could detect when the sound was coming from his instruments and when it was coming from the loudspeakers. Invariably, the speaker sounds seemed to be sustaining the instruments beyond their own physical decay times. Hamilton’s phrasing tended to recall the drum virtuosity of the swing era more than the complexities that emerged from bebop and post-bop; but his technological devices cast the very act of drumming, however traditional its roots may have been, into a stimulating new light.
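For the curious, here is a minimal sketch of how software can sustain a percussive sound beyond its physical decay: a single Schroeder-style feedback comb filter. This is only a guess at the general technique; I have no knowledge of the internals of Hamilton’s own software.

```python
# Minimal sketch of reverberant sustain via one feedback comb filter.
# This is a generic technique, not Hamilton's actual processing.
def comb_reverb(dry, delay_samples, feedback):
    """Feed the signal back on itself after a fixed delay."""
    wet = list(dry) + [0.0] * (delay_samples * 20)  # room for the tail
    for n in range(delay_samples, len(wet)):
        wet[n] += feedback * wet[n - delay_samples]
    return wet

# A short percussive hit dies almost instantly when dry...
hit = [1.0, 0.6, 0.3, 0.1] + [0.0] * 100
# ...but rings on well past its own decay once fed back on itself.
sustained = comb_reverb(hit, delay_samples=25, feedback=0.7)
```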
The final set was taken by Pontecorvo himself, performing with percussionist Mark Pino. Technologically, Pontecorvo has had a long-standing interest in what are called generative process architectures. These are complex systems that exhibit a high level of autonomy based on internal feedback paths. Such systems were one of the topics discussed in James Gleick’s Chaos: Making a New Science. Their capacity for autonomy also drew the interest of many of the scientists working in the area of artificial life.
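As an illustration of the general idea, here is a toy generative process built around a single internal feedback path: the logistic map, one of the canonical examples in Gleick’s book. The mapping of its output onto pitches is my own illustrative choice, not anything Pontecorvo disclosed about his system.

```python
# A toy generative process with an internal feedback path: the logistic
# map. Each output feeds back as the next input, so the system runs
# autonomously once seeded.
def logistic_stream(x, r=3.9, steps=16):
    """Iterate x -> r*x*(1-x); values of r near 4 behave chaotically."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
        yield x

# Map the chaotic stream onto a two-octave pitch range (MIDI 48-72);
# the musical mapping is purely illustrative.
pitches = [48 + int(24 * x) for x in logistic_stream(0.4)]
print(pitches)
```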
The thing about autonomous systems, however, is that they do what they do. From my vantage point I could see that Pontecorvo had provided himself with a rather extensive interface on the screen of his MacBook Pro. However, his interactions with that interface tended to be minimal. Basically, he had created a system and unleashed it. Some of its behavior seemed to involve processing the sounds that Pino produced, but this was a system whose complexity tended to thwart hypothesizing about causality. Taken on their own terms, the sounds themselves were engaging, even if they were also a bit enigmatic in how they actually figured in the acts of performance.
Taken as a whole, the evening was a comforting reminder that real-time computer music is not only alive and well but also capable of being thoroughly compelling, even to those not focused entirely on the technology.