Friday, February 28, 2014

This morning Amanda Kooser used her Crave blog on CNET to post a poll on whether or not Bitcoin should be banned. This provided me with yet another example of how those acolytes of anarchy, who serve at the altar of the Internet, never seem to get the point when any question of governance arises. Usually, I am content to observe that the concept of governance is simply alien to the Internet community, even when the issue at stake involves death threats to one of its own.
Because "Internet fidelity" imposes blinders that limit view to only the most objectively technical issues, those acolytes have never really grasped the fact that currency systems are all "fictions of convenience." Thus, to maintain the acolyte metaphor, the very act of attributing a value of a currency amounts to an article of faith. I would modestly propose that one of the functions of government is to impose some set of constraints based on rationality to limit the damage brought on by the irrationality of faith.
In less explosive language this amounts to saying that matters of regulation and oversight should be the responsibility of some form of governmental authority. As long as the Internet is run by those who question the authority of any form of governance, abuses such as those of the recent Bitcoin episode will continue to occur, since there are no regulations to check them nor any body of oversight to implement the checking. I realize that, from this point of view, this makes the Internet sound like a lemming headed for a cliff. On the other hand, when we consider how many of today's crises may be attributed to failures of governments to do their jobs in a manner that will earn the consent of the governed, we can appreciate why a preference for anarchy among the acolytes will endure. Bitcoin may not be the cliff that kills the lemming, but we should recognize that it has the potential to do so.
Thursday, February 27, 2014
Why Anthropology?
Regular readers should know by now that I often appeal to anthropological practices in my writing, particularly those involving the study of work practices, not only on this site but also in my writings for Examiner.com. Much of this may be attributed to the amount of time I spent rubbing shoulders with workplace anthropologists when I was an IT researcher in Silicon Valley. However, I have realized that there is another perspective that might be useful.
Several weeks ago, while I was waiting for a concert to begin, one of the managers of the ensemble I was covering came by and observed that I was reading Local Knowledge, a collection of essays by Clifford Geertz. Since she had studied anthropology, she wanted to know why I was reading his work. I replied that I had spent the better part of my time as a student with my head buried in innumerable scores. As a result, whether I was attending a concert or listening to a recording, I could never get my head away from questions of fidelity to the score, never realizing that there was more to performance than decoding pages full of symbols that had nothing to do with the English language. That "more," of course, had to do with the ability to consider the performance of music as a work practice grounded in a culture no different, fundamentally, from the culture of sculptors in a particular African tribe or (the example to which I was closest) the culture of technicians who repair copy machines.
What appealed to me in Geertz' work was that he appreciated the extent to which one could appeal to the abstractions of symbols (such as those in music notation) as part of the study, as long as one did not overload the attention one paid to those symbols. Here is a passage from his essay "Art as a Cultural System" that particularly appealed to me:
To be of effective use in the study of art, semiotics must move beyond the consideration of signs as means of communication, code to be deciphered, to a consideration of them as modes of thought, idiom to be interpreted. It is not a new cryptography that we need, especially when it consists of replacing one cipher by another less intelligible, but a new diagnostics, a science that can determine the meaning of things for the life that surrounds them. It will have, of course, to be trained on signification, not pathology, and treat with ideas, not with symptoms.

This idea that interpretation involves more than decoding and may even involve a frame of mind oriented towards diagnosis appealed to me when I read it, and it continues to appeal to me. It seems to me that anyone who sits in the audience during a performance of music is trying to make the experience an intelligible one. If, as a writer, I realize that this is what I am doing and if I can then translate my efforts to do so into a readable text, then I may be of assistance to others faced with that same problem of finding intelligibility. If that is the case, then I feel obliged to follow Geertz' lead, putting more attention into the "culture of making music" than into either the objective artifacts of that culture (scores and recordings) or any intellectual effort that tries to abstract the music away from how it is made, whether it involves a mathematical analysis of a symbolic representation or the far more pedestrian ideology focused on worshiping the "greatness of a hero."
Wednesday, February 26, 2014
Identifying the Rotten Core
Stilgherrian's analysis on ZDNet of the major security flaw that afflicted both iOS and OS X provided a readable, and therefore valuable, summary of the why and how of what happened. The same can be said of his diagnosis of either rot or poison beneath Apple's shiny surface. I just wish he had taken his diagnosis one or two steps further, even if that meant venturing beyond his personal expertise in security.
That broader scope was actually suggested when Stilgherrian cited the pioneering work of Edsger Dijkstra in a discipline that would come to be known as "software engineering." It recalled those early days when coding was a seat-of-the-pants effort performed by individuals on projects of, at most, moderate scope. Dijkstra was one of many to realize that this rather amateur approach would not sustain the development of large systems, such as operating systems. He was a pioneer in preaching that the act of coding should follow disciplined standards, particularly so that one programmer could easily understand what another was doing.
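The flaw itself was widely reported to have come down to a single duplicated line, a "goto" statement that executed unconditionally, which makes the Dijkstra citation doubly apt, since Dijkstra's most famous piece of preaching was against the undisciplined use of goto. I have not audited Apple's code myself, so what follows is only a minimal sketch, with invented function names, of how that kind of slip can silently bypass a check while still reporting success:

```c
/* A minimal sketch (not Apple's actual source) of the reported class of
 * flaw: a duplicated "goto" that unconditionally skips a later
 * verification step while the status variable still holds "success." */
#include <stdio.h>

static int check_step(int input) { return input == 0 ? 0 : -1; }

static int verify_signature(int a, int b, int c)
{
    int err = 0;

    if ((err = check_step(a)) != 0)
        goto fail;
    if ((err = check_step(b)) != 0)
        goto fail;
        goto fail;  /* the duplicated line: always jumps, err is still 0 */
    if ((err = check_step(c)) != 0)  /* never reached */
        goto fail;

fail:
    return err;  /* returns 0 ("verified") even when c is bad */
}

int main(void)
{
    /* The third check should fail, but the function reports success. */
    printf("verify_signature(0, 0, 1) = %d\n", verify_signature(0, 0, 1));
    return 0;
}
```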
Software engineering has matured considerably since then. It has also expanded beyond coding to include areas such as interface design, architecture, and testing. However, I would modestly suggest that, while the practice has become impressively disciplined, the practitioners have not kept up with the pace. Instead, they now rely on any number of "labor-saving" tools, which may allow them to be more productive but also distance them from what is really going on at all levels of the engineering process.
It used to be that the best IT companies could be recognized for recruiting the best talent. Apple used to be a star, but that star was eclipsed by Google. These days I am not sure that there are any stars worth noting out there. Institutions that once inspired students to distinguish themselves as skilled practitioners now, instead, dangle the prospect of being a successful entrepreneur, thus neglecting the inevitable truth that, somewhere along the line, someone has to be responsible for getting the real work done.
Anyone who has experienced the decline in Apple software quality does not need to be told by Stilgherrian that Apple now has a rotten core. However, Apple is hardly the only major corporation to confront this problem; and perhaps it does not need to be shouldered with the full blame for it. The real rot may come from the pernicious shift in values that now dominates our whole system of higher education, creating a brave new world of Eloi who may no longer have the benefit of Morlocks keeping them supplied with their necessities.
Sunday, February 23, 2014
Be Careful What You Wish For
Charles Simic's latest NYRBlog post provided a slightly disquieting example of a poet who could see the future:
That gravestone reminded me of something crazy the poet Mark Strand thought up many years ago, when he was broke and thinking up ways to make money. He told me excitedly one day that he had invented a new kind of gravestone that he hoped would interest cemeteries and carvers of gravestone inscriptions. It would include, in addition to the usual name, date, and epitaph, a slot where a coin could be inserted, that would activate a tape machine built into it, and play the deceased’s favorite songs, jokes, passages from scriptures, quotes by great men and speeches addressed to their fellow citizens, and whatever else they find worthy of preserving for posterity. Visitors to the cemetery would insert as many coins as required to play the recording (credit cards not yet being widely used) and the accumulated earnings would be divided equally between the keepers of the cemetery and the family of the deceased. This being the United States of America, small billboards advertising the exciting programs awaiting visitors to various cemeteries would be allowed along the highway, saying things like: “Give Your Misery A Little Class, Listen to a Poet” or “Die Laughing Listening to Stories of a Famous Brain Surgeon.”
One of the benefits of this invention, as he saw it, is that it would transform these notoriously gloomy and desolate places by attracting big crowds—not just of the relatives and acquaintances of the deceased, but also complete strangers seeking entertainment and the pearls of wisdom and musical selections of hundreds and hundreds of unknown men and women. Not only that, but all of us who are their descendants would spend the later years of our lives devouring books and listening to records, while compiling our own little anthologies of favorites.

I wonder if either Simic or Strand realizes how close this is to a viable reality. There has been a fair amount of debate over what happens to your Facebook site after you die. It seems as if Strand inadvertently (and anachronistically) provided a perfectly reasonable solution: Your Facebook site becomes your gravestone. Furthermore, since Facebook is always on the ball keeping that site populated with the latest advertising, visitors will not have to worry about dropping coins into a slot (even bitcoins). For that matter, if Strand had a bit of a lawyer's imagination to go along with his poetic inventiveness, he might even be able to claim prior art for the whole Facebook concept!
All of these visions, however, evade the question of whether or not any of this stuff is worth leaving for posterity in the first place.
Saturday, February 22, 2014
Sarah Crompton "Gets It!"
In last Tuesday's "Children in Museums" post, I called out Ivan Hewitt over the fact that the word "parent" appeared only once in his "Should children be banned from museums?" article for the London Telegraph and that the sentence in which it appeared was not even his own. Consequently, this morning I was very glad to see a "response piece" by Sarah Crompton in the Telegraph that reinforced the proposition that this was a problem of parenting rather than a problem with the children themselves. Crompton did not go quite as far as I did with the hypothesis that bad parenting is just another symptom of life in a market-driven society. My guess is that she is more charitable than I am, believing that parents will improve their behavior if the problem is called to their attention. Personally, I find this about as naive as Galileo's belief that he would prevail over the accusations of the Inquisition simply with the powers of pure logic!
Thursday, February 20, 2014
Was the Renaissance a "Transitional Age?"
One of the pleasures of reading papers outside my own areas of expertise is the discovery that academics can be very inventive in picking topics for an argument. In "The Renaissance and the Drama of Western History," William J. Bouwsma takes on the question of whether or not the Renaissance was a "transitional age." When confronted with the more general view that any age may be viewed as a transition from its predecessor to its successor, he responds with the assertion that the transition was more accelerated during the Renaissance. He then summons up the metaphor of this accelerated transition taking place "between two granitic headlands, clearly identified as the Middle Ages and the modern."
As I say, this can all be very inventive. For better or worse, I read this essay in the context of having read Thomas Kuhn's famous study of "scientific revolutions." Kuhn talks about such "revolutions" in terms of "paradigm shifts." What he is really getting at, however, is that, at any given time, there are certain social conventions that determine what might be called the "normal practice of science." Those criteria for what constitutes "normal practice" basically identify a "paradigm." Kuhn then suggests that "revolutions" take place because "normal practice" does not change gradually. To provide a simple example, once people like Galileo had access to a telescope, the whole way of "normal thinking" about astronomy changed radically.
My personal interest in the Renaissance is concerned with making music, a domain in which it makes sense to talk about "normal practices." I think it would be fair to say that becoming a musician was a matter of being recognized as a musician by other musicians. Such recognition tended to be based on whether or not one's practices were perceived as "normal." Thus, Richard Wagner's opera Die Meistersinger is all about what happens when someone not skilled in those "normal practices" dares to make up and sing a song.
The nice thing about studying music history is that there are all of these "proclaimed authorities" who would document what those "normal practices" were. Thus, as Wagner suggested, one became a musician through an apprenticeship process, just as one became a craftsman skilled enough to be part of a guild. Acquiring "normal practices" had more to do with learning by doing under the supervision of a master than with reading one of those documents and then doing things "by the book." There may have been documented points of reference, but all that really mattered was how one did things in the immediate present.
What is interesting about the Renaissance is that new technologies led to new practices. The most interesting of these was probably the technology of printing, since this triggered a major change in the "normal practices" of making music, whether it involved performing someone else's music or coming up with something original. However, there were also changes in the motives behind those practices. There was a past in which making music amounted to a calling one followed for the greater glory of God. The advent of printing brought the concept of intellectual property along in its wake and with it the possibility that creating new music could create a revenue stream.
The question that historians seem to have overlooked is whether all those practices changed in the "revolutionary" way that Kuhn associates with the history of science or whether it was a gradual process of one thing leading to another. If we knew enough about the nature of the changes and how they unfolded, our innate capacity for forming categories might be better equipped to decide whether or not those categories we currently call "ages" are actually useful and whether a new set of categories might be in order. In other words, we should be taking those documents from the past and reading them not as authorities but as "snapshots" of "normal practice" and then see where all those data points lead us.
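For what it is worth, that last suggestion can be reduced to a toy computation. Suppose, purely hypothetically, that each documented snapshot of "normal practice" could be coded as a small vector of features; the distance between consecutive snapshots would then suggest whether change looks like a Kuhnian jump or a gradual drift. Here is a minimal sketch, with invented numbers standing in for real data:

```c
/* A toy sketch of reading dated documents as "snapshots" of normal
 * practice: each snapshot is a small feature vector (all values here
 * are invented), and we measure how far each snapshot sits from its
 * predecessor.  A sharp spike would suggest a Kuhn-style revolution;
 * a flat sequence would suggest gradual drift.  Compile with -lm. */
#include <math.h>
#include <stdio.h>

#define FEATURES 3

struct snapshot {
    int year;
    double practice[FEATURES]; /* e.g. improvisation, printing, patronage */
};

static double distance(const double *a, const double *b)
{
    double sum = 0.0;
    for (int i = 0; i < FEATURES; i++)
        sum += (a[i] - b[i]) * (a[i] - b[i]);
    return sqrt(sum);
}

int main(void)
{
    /* Hypothetical snapshots at fifty-year intervals. */
    struct snapshot s[] = {
        {1400, {0.9, 0.0, 0.8}},
        {1450, {0.8, 0.1, 0.8}},
        {1500, {0.4, 0.7, 0.3}}, /* printing arrives: a large jump */
        {1550, {0.3, 0.8, 0.2}},
    };
    int n = (int)(sizeof s / sizeof s[0]);

    for (int i = 1; i < n; i++)
        printf("%d -> %d: change = %.2f\n",
               s[i - 1].year, s[i].year,
               distance(s[i - 1].practice, s[i].practice));
    return 0;
}
```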
Wednesday, February 19, 2014
Damn the Consequences! Full Innovation Ahead!
Sacred Heart Cathedral Preparatory is a school located only a few blocks north of where I live in San Francisco. I usually do not give it much thought, except when I see a basketball game going on in the building as I am walking over to St. Mark's Lutheran Church for a concert. This afternoon, however, I noticed that they have attached a banner to a lamppost that I can see from my bedroom window. I am used to this "advertising space" being used by performing arts organizations, such as the New Century Chamber Orchestra, or museums.
Beyond the fact that it was a parochial school doing the advertising, I found the nature of the advertising itself more than a little curious. I had to do some Google searching to figure out that the key phrase on the banner, "Fearless We Pursue," came from the school's Alma Mater:
Hail Sacred Heart Cathedral Prep
Green, White and Blue
Unity proclaimed
Fearless we pursue
Untold strength will be our guide
Irish brave and true!

However, underneath that "Fearless We Pursue" phrase was a word nowhere to be found in the Alma Mater (reproduced in capital letters as it appeared on the banner):

INNOVATION

I was a little amused that a parochial school had decided to exchange communion wine for innovation Kool-Aid. Perhaps this was the school's way of saying that it was preparing its students to follow the yellow-brick road that leads to Silicon Valley, even if that is a world of smoke-and-mirrors wizardry (which, presumably, is still frowned upon by the Catholic Church) or, in a more positive light, the world of Galileo (who was at least fortunate enough to be posthumously pardoned of the crime of heresy).
I am used to things being more than a little odd in this city. I would even say that such oddities provide one of the reasons why I like to live here. However, I still feel a need to draw the line when they get downright zany!
Life Modestly Imitates Poulenc
Today's news is not quite the stuff of Dialogues of the Carmelites, but it is close enough to deserve mention. Megan Rice, an 83-year-old nun, was sentenced to three years in prison. Her crime involved breaking into a high-security nuclear complex in Oak Ridge, Tennessee. She and two accomplices cut through three fences to get on the grounds. Once inside, they painted messages of protest, hung banners, and threw blood on the bunker wall. The facility they penetrated is sometimes called "the Fort Knox of uranium," since it holds our country's primary supply of weapons-grade uranium. At least she will not have to listen to the sound of the guillotine while serving her sentence.
A Possible Misconception of History as Narrative
Recently, I have been trying to revive my interest in narratology and the role of narrative in the writing of history (which academics seem to wish to call "historiography," as if authority were a matter of establishing "lexical turf"). I finally found an opposing statement (to the effect that writing history is not about writing narrative), formulated by Maurice Mandelbaum in a paper he wrote in 1967. After reading his objections, I realized that his view of narratology was relatively narrow, perhaps to the extent that he had not read very much on the subject. (This was a time when narrative theory may have been drawing more attention in Europe than in the United States.)
As I see it, the fundamental flaw in his reasoning was the assumption that narrative is always a linear chain of antecedents leading to consequents, while history involves a more complex network of relationships. I certainly agree with his view of history. However, it seems to me that he assumed that, just because the act of narrating must, of necessity, be linear (the narrator cannot break free of the immediate flow of time), and just because the story being related can also be laid out as a linear sequence of events on its own time-line, the discourse structure of the narrative itself must also be linear. There is no reason to make that assumption. Indeed, there are any number of ways in which the telling of a story will take advantage of devices such as "flashbacks" and even "flash-forwards;" and it seemed as if Mandelbaum was not interested in the role that such discourse devices play in narrative.
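To make that distinction concrete, here is a minimal sketch (with entirely hypothetical events) that separates the story's own time-line from the order in which a narrator chooses to present it:

```c
/* A minimal sketch of the distinction drawn above: the story's events
 * occupy a linear time-line, but the discourse may present them in a
 * different order, as with flashbacks and flash-forwards, while the
 * act of narration itself remains linear. */
#include <stdio.h>

struct event {
    int story_time;   /* position on the story's own time-line */
    const char *text; /* hypothetical narrated content */
};

int main(void)
{
    /* Listed here in discourse order: the narration opens in the
     * middle of things, flashes back, then jumps ahead. */
    struct event discourse[] = {
        {2, "The hero stands at the crossroads."},
        {1, "Flashback: how the hero got there."},
        {3, "Flash-forward: the consequence of the choice."},
    };
    int n = (int)(sizeof discourse / sizeof discourse[0]);

    printf("Discourse order (as narrated):\n");
    for (int i = 0; i < n; i++)
        printf("  %s\n", discourse[i].text);

    printf("Story order (sorted by story time):\n");
    for (int t = 1; t <= n; t++)      /* naive selection by time */
        for (int i = 0; i < n; i++)
            if (discourse[i].story_time == t)
                printf("  %d. %s\n", t, discourse[i].text);
    return 0;
}
```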
Mandelbaum then goes on to say that history is all about part-whole relationships, rather than those concerned with antecedent-consequent. This, again, strikes me as off the mark. Narrative is as much about establishing context as it is about accounting for the events that take place within that context. One might almost say that the entire scope of Marcel Proust's Remembrance of Things Past is all about the part-whole relationships in the context, rather than the sequence of actions that take place in the course of his epic narrative. Indeed, one might even take the extreme position that Proust-the-novelist may provide one of the more useful models to be considered by any would-be historiographer!
Tuesday, February 18, 2014
Children in Museums
I see that Ivan Hewitt has run another piece in the London Telegraph with a question as the headline. This time it is "Should children be banned from museums?" This piece was apparently inspired by the recent episode of a child climbing on a $10 million sculpture by Donald Judd at the Tate Modern. With that as context, it was clear that Hewitt was answering his own question in the affirmative. This time, however, he allocated half of his column to a rebuttal from Dea Birkett, Director of an organization called Kids in Museums.
One of the things that struck me about this piece is that the stem "parent" appeared exactly once in the entire text. It was in Hewitt's portion, but it was not Hewitt's sentence. Rather, he was quoting the words of an observer during that episode at the Tate:
I said to the parents I didn’t think their kids should be playing on a 10 million dollar artwork. The woman turned around and told me I didn’t know anything about kids and said she was sorry if I ever had any.

As far as I am concerned, this disregard of the role that parents play made for a major shortcoming in what both Hewitt and Birkett had to say. I find it interesting that our own legal system makes it a point to differentiate between children and adults, particularly when a case is brought into a courtroom. To me this emphasizes that children cannot be held accountable for all of their actions; and, when they are not accountable, that accountability transfers to at least one individual in loco parentis.
In that context, Hewitt's issue is part of a far broader question:
Should parents unwilling to accept responsibility for that accountability be allowed to take their children into any public places?

My guess is that Hewitt is one of many who have had a meal spoiled in a restaurant by a child who really does not want to be there but has to be because the parent is more concerned with his/her own indulgences. There may be a big leap between indigestion and damage to a $10 million sculpture, but the underlying cause is still the same. In our new market-driven society, those who have wealth feel entitled to do whatever they want with it without worrying about the impact of their actions on the social world in which they are embedded. That sense of entitlement extends down to how they view raising their children.
What can Cardiff Promise?
Apparently, the big entertainment news in the United Kingdom is the announcement that Pinewood Studios will be building a new facility on the site of a former Energy Centre building in Wales, located at Wentloog, Cardiff. For those not in the know, Pinewood is the home of the James Bond franchise. Does this mean that the music for the opening credits of the next Bond film will feature some of that fine singing from an all-male Welsh choir? On the other hand, all I know about Cardiff I learned from Torchwood (talk about a distorted education). So does this mean that we can hope for Torchwood: The Movie, or does this mean that James Bond will have to contend with Gwen Cooper in his next adventure? Perhaps this will bring a new interpretation to You Only Live Twice!
Sunday, February 16, 2014
Is the Recording the new RES FACTA?
Once again I find myself taking issue with the "Audiophiliac" opinions of Steve Guttenberg over on CNET News. This time it involved his launching a poll on the question:
Live vs. recorded music -- what's better?

Indeed, I am tempted to assume the contentious position that the very phrase "recorded music" is an oxymoron. In this respect I suppose I am reviving the way in which Tinctoris adopted Aristotle's distinction between making and doing, to which I devoted a post last month. In that post I quoted heavily from a paper by Rob Wegman entitled "From Maker to Composer: Improvisation and Musical Authorship in the Low Countries, 1450–1500." In this case I would like to home in on one sentence:
While written counterpoint, by its very nature, can only be represented by a noun (res facta or cantus compositus), oral counterpoint necessarily requires one to use a verb (cantare super librum).

A recording, particularly one made with extensive studio work, can never be anything other than a res facta (literally, a "made thing," with some bizarre connotations of the Mafia expression "made man"). Doing is clearly part of the process; but, when the work is done, it is inevitably secondary. True Aristotelian doing only takes place in the immediacy of performing in front of an audience (and lip-syncers need not apply). Calling anything else "music" amounts to taking a counterfeit to be the real item.
Saturday, February 15, 2014
Do We Have an Artistic Leg Up on the Brits?
Ivan Hewitt's latest piece on the Web site for the London Telegraph is entitled, "Why don't composers win cash prizes when they need them most?" He is far from the first to ask this question, and I am sure he will not be the last. However, he seems to have blanked out on efforts to change this situation. Perhaps the problem is that one of the more interesting of those efforts is taking place here in the United States and may thus be beyond the range of Hewitt's vision.
Ironically, one of the major forces behind this effort is the British publisher Boosey & Hawkes. This is doubly ironic since B&H published the music of Steve Reich, whose recent acknowledgements triggered Hewitt's article. However, the motivating force seems to have come from Michael Tilson Thomas; and he has involved the two orchestras he leads, the San Francisco Symphony and the New World Symphony, in the process. The result of this partnership is a program called New Voices. The primary objective is to provide professional development for emerging composers; but the actual practice of that development is given a "kickstart" in the form of financing for two commissions, one for full orchestra and the other for more reduced resources. The only problem is that only one composer gets to benefit from this program each year, turning this into yet another process at the mercy of competition judges (the same process that, as Hewitt observed, thwarted an attempt by Igor Stravinsky and Dylan Thomas to collaborate on a project).
I am not saying that we do things better on our side of the pond, but it may be that we are better at pointing in the right direction.
I also just noticed that Hewitt's article was preceded by another piece entitled "Why are there so few women in contemporary music?" The first composer to receive support from New Voices was Zosha Di Castri. As might be guessed, her name never appears in this earlier article!
Thursday, February 13, 2014
A Memorable Tenth Anniversary
Ten years ago yesterday (February 12) in San Francisco, then Mayor Gavin Newsom ordered the county clerk to issue same-sex marriage licenses. By a curious coincidence, my wife and I had closed on the purchase of a unit in Opera Plaza (from which we could see the dome of City Hall). We were living in Palo Alto at the time, but we wanted a place in the Civic Center to avoid long drives back to Palo Alto after concerts and operas. The weekend following Newsom's order was our first full weekend spent at the new place.
What I remember most was the number of people lined up at the entrance to City Hall. Linda had heard the news, and she asked if we should go down to give them coffee. I asked if she realized how many there were. I also remember that we spent part of the weekend at a revival screening of The Battle of Algiers at the Castro. There was no way to describe the extent of the party atmosphere throughout the Castro. Just going somewhere for coffee made you part of at least one wedding party.
I shall always remember this curious overlap of events and the way in which it served to welcome us to life in San Francisco.
Tuesday, February 11, 2014
THE NEW YORK TIMES can Still Have a Distinctive Style
There are any number of sources in both the national and foreign press that we can read this morning responding to the death of Shirley Temple. However, what struck me most about the obituary by Aljean Harmetz for The New York Times was the decision to begin with her full married name and then to refer to her as "Mrs. Black." I am not sure whether this was just an old-fashioned form of respect or a deliberate decision to make sure that her later history as an ambassador for Republican administrations was given just as much priority as her reputation as a child movie star. Perhaps it was just a nuanced effort to demonstrate to Fox News what "fair and balanced" really means in practice!
Tuesday, February 4, 2014
Google Joins the Club
This morning Charlie Osborne used her Zero Day blog for ZDNet to survey the most vulnerable systems and software applications in 2013. Her data source was a report from GFI Labs released yesterday. In the applications category, Microsoft's Internet Explorer maintains its top position on the list, followed by Oracle's Java. However, Google ascended (if that verb is going in the right direction) to the number three spot with its Chrome browser. Since Google has made a point of replacing Microsoft in all things, one can assume that they will use 2014 to establish a firm grip on the number one position on this list!
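For those curious about what lies behind such a ranking, it amounts to little more than a tally and a sort over the year's vulnerability reports. Here is a minimal sketch, with invented counts rather than GFI's actual figures:

```c
/* A toy sketch of the tally behind a "most vulnerable software" list:
 * count the vulnerabilities reported against each product and rank
 * them in descending order.  The counts below are invented
 * placeholders for illustration, not GFI's actual figures. */
#include <stdio.h>
#include <stdlib.h>

struct product {
    const char *name;
    int reported; /* invented counts, for illustration only */
};

static int by_reported_desc(const void *a, const void *b)
{
    return ((const struct product *)b)->reported -
           ((const struct product *)a)->reported;
}

int main(void)
{
    struct product list[] = {
        {"Internet Explorer", 120},
        {"Java", 90},
        {"Chrome", 70},
        {"Another application", 40},
    };
    int n = (int)(sizeof list / sizeof list[0]);

    qsort(list, (size_t)n, sizeof list[0], by_reported_desc);
    for (int i = 0; i < n; i++)
        printf("%d. %s (%d reported)\n", i + 1, list[i].name,
               list[i].reported);
    return 0;
}
```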