Wednesday, October 31, 2012

Cage’s Memories of Computer Programming

I spent the better part of yesterday reading a paper by Leta E. Miller about John Cage’s work with electronic technology leading up to and then following the composition of “Variations V,” one of his most technology-rich pieces. Miller cited a relationship between Cage and Bell Labs that was new to me. It involved his connection with Max Mathews, a leading pioneer of digitally synthesized sound who developed a programming language for those wishing to compose electronic music. Miller wrote the following about Mathews:
Mathews recalls that he first met Cage when the composer contacted him to see if Bell Labs could construct a random number generator program that would mimic the stick-tossing procedures of the I Ching, an ancient Chinese treatise on divination. Mathews remembers accommodating Cage (‘not a very hard job; about 15 minutes of writing a program’).
None of this surprised me very much, including Mathews’ estimate of the amount of programming time involved.
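To see why the job was so small, consider what such a program actually has to do. Below is a minimal sketch in Python (obviously not a language Mathews had in the early Sixties) of the traditional three-coin method for casting an I Ching hexagram; the details are my own assumptions, since Miller gives no record of what Mathews actually wrote.

    import random

    # Three-coin method: heads counts as 3, tails as 2, so each toss
    # of three coins yields a line value of 6, 7, 8, or 9.
    LINE_TYPES = {6: "old yin", 7: "young yang", 8: "young yin", 9: "old yang"}

    def toss_line():
        """Toss three coins and return the resulting line value."""
        return sum(random.choice((2, 3)) for _ in range(3))

    def cast_hexagram():
        """Cast six lines, bottom line first."""
        return [toss_line() for _ in range(6)]

    for value in cast_hexagram():
        print(value, LINE_TYPES[value])

Fifteen minutes seems, if anything, a generous estimate.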

While Miller did not attach a specific date to this encounter, it probably would have been in the early Sixties; and therein lies the surprise. I met Cage for the first time in the summer of 1968 while he was in the midst of working on “HPSCHD” with Lejaren Hiller at the University of Illinois. This was another technology-rich project, this time heavily digital where “Variations V” had been heavily analog. I attended a seminar at which Cage talked about his work at Illinois. This involved, among other things, once again having a program to simulate the chance operations behind I Ching consultations. This time the programmer was Ed Kobrin; and, according to Cage, it took six months for Kobrin to write the code.

(As an aside, Cage gave this talk at MIT. A programmer I knew who was in the audience ducked out after Cage made this statement. He was back with a running program before Cage concluded his talk. As I said, Mathews’ estimate was entirely believable.)

In retrospect this was, for me at least, a powerful lesson in programming environments. Both Bell Labs and MIT had rich environments for “interactive” programming. At the University of Illinois, on the other hand, one submitted punched cards to a batch-processing system. Interactive programming also entailed interactive diagnosis when the program did not work. There were even powerful software tools that let the programmer observe what was happening on a step-by-step basis, modify the program itself, and observe the impact of the change.

My guess is that the University of Illinois had no such interactive environment to make its programmers more productive. If the program did not do the right thing, one stared at the code, possibly using pencil and paper to facilitate the analysis. Programmers there were using techniques not that different from those applied to the early computers developed during the Second World War, unaware of how the state of the art had advanced at places like MIT.

In many respects the Internet has leveled the playing field across different work environments. However, as tools have gotten more powerful and more available to more individuals connected through the Internet, the knowledge of what actually happens seems to be declining. As I read various articles in trade publications, I have discovered that the number of people who know how code works and how to write it is dwindling, meaning that I am not the only one complaining about how few people are left who know how “the machine” works these days!

Tuesday, October 30, 2012

New Interfaces for Whom?

I just finished reading David Meyer’s analysis of the latest reorganization within Apple on ZDNet. His prediction that iOS and OS X will merge into a single operating system with a common user interface strikes me as a reasonably good educated guess. Furthermore, the promotion of Jony Ive from product design to the broader responsibility of human interfaces across all Apple products is a sign that this unification will be seen at the interface level as well as in the infrastructure. I would also agree with Meyer that these changes recognize a potential threat arising from Microsoft’s new commitment to touch-based products.

Still, I have to wonder just who will benefit from those changes that are potentially in the works. I have already written about the fact that those of us who still take “an analytic approach to both reading (as in long reports that often require multiple open windows to support fact-checking, testing, and related queries) and writing (as in responding to such reports with a comprehensive analysis)” are likely to be the losers in this mobile-based world of the future that has seized the attention of both Apple and Microsoft. I would also suggest that, beyond the basic acts of reading and writing, there are also basic issues of content management (once called file management) that have always been fundamental to any operating system. Whether the content is on your own device or off in some cloud, you still have to worry about both saving it and retrieving it; and interfaces should be designed to make those worries less bothersome. Finally, there is an even more fundamental issue of operating system design, which is the idea of managing multiple active processes for those “multiple open windows.” If I am trying to read anything of substance from a computer screen, I am likely to be writing at the same time. That is why I am such an advocate of the support for note-taking provided by Acrobat; but the notes I write usually require that I am running a Web browser (and probably also a tool for searching documents on my hard drive) at the same time. The notes I take may involve both pasting content from other sources and inserting useful hyperlinks. Such multitasking is not currently supported by iOS, nor would I want to do that kind of reading on a telephone. However, the corollary is that I cannot do it on an iPad either.

My fear is that we face a highly consumer-based approach to the next generation of technologies. This obviously plays well for the marketing folks, who can then dream up any number of scenarios of happy consumers for television commercials. However, it pushes those of us who have to do something other than consume, not only those of us who desperately cling to writing as a legitimate form of work but also all of those trying to run businesses confronted with day-to-day decision-making challenges that require hard-and-fast analytic thinking, into a distant background. It will be E. M. Forster’s world in which the machine satisfies all consumption needs but in which no one knows how to keep the machine running effectively; and it seems as if it is no longer in the interests of either Apple or Microsoft to consider the implications of such a future.

Friday, October 26, 2012

Integrating “Dennis the Menace”

In today’s panel for Dennis the Menace, we see that he has an African American babysitter, who complains that her book on child psychology was of no use. I think this was the first time I saw an African American in the series. Checking the Wikipedia entry, I discovered that the effort to integrate the strip had an interesting history:
… in the late 1960s, Ketcham decided to add an African American character to the cast named Jackson. Ketcham designed Jackson in the tradition of a stereotypical cartoon pickaninny, with huge lips, big white eyes, and just a suggestion of an Afro hair style. In one cartoon that featured Jackson, he and Dennis were playing in the backyard, when Dennis said to his father, "I'm havin' some race trouble with Jackson. He runs faster than me." The attempt to integrate the feature did not go over well. Protests erupted in Detroit, Little Rock, Miami, and St. Louis, and debris was thrown at the offices of the Post Dispatch. Taken aback, Ketcham issued a statement explaining that his intentions were innocent, and Jackson went back into the ink bottle. However, another African American character named Jay Weldon appeared in the 1986 animated series to far less controversy as he was not a stereotype.
The panels are now written and drawn by Hank Ketcham’s former assistants, Marcus Hamilton and Ron Ferdinand; and they may have decided that one could keep up with the times without stereotyping. After all, one assumes that a babysitter lives nearby, if not in the same neighborhood; and the implication is that this kid takes her job seriously. It may have taken half a century, but Dennis the Menace may have caught up with contemporary reality!

Thursday, October 25, 2012

The Clash of the Fetishes

It appears that Alex Ross finally wrote something on his The Rest is Noise blog that prompted me to get out my flame-thrower. The good news is that I put solid effort into my contrary position, resulting in an article for my Examiner.com national site. However, having taken that position, I realize that it may be situated in a broader context.

The "something" in question consisted only of a single sentence:
Wouldn't it be great if the media were covering significant new works by living composers, instead of reporting the discovery of an exceedingly minor piece by Beethoven?
The crux of my Examiner.com article involved pushing back against what I felt was an unfair attempt to conflate musicology (the discovery of a new Beethoven manuscript) and music criticism.

Ironically, yesterday I was typing up notes I had taken after having read an essay by Theodor W. Adorno entitled "On the Fetish-Character in Music and the Regression of Listening." Basically, the "fetish-character" amounts to taking a consumerist stance on music experiences (including concerts as well as recordings), thinking in terms of the exchange-value of commodities rather than any strictly subjective use-value. From this point of view, Ross might accuse me of fetishizing the newly discovered manuscript, while I would retaliate by accusing him of fetishizing the performance of "new works by living composers." In other words you get to choose the fetish for which you pays your money!

Tuesday, October 23, 2012

The Concept of “Art”

I spent part of this past weekend wrestling with an essay entitled “Art and the Arts,” which I found in the Stanford University Press anthology of works by Theodor W. Adorno collected under the title Can One Live after Auschwitz?: A Philosophical Reader. This was the essay in which I found Adorno making explicit reference to John Cage, and I figured I had better get a sense of the context in which that reference was situated. The title referred to the question of whether or not it made sense to have a concept of “art,” given the diversity of all the instances subsumed by that concept.

I was a bit surprised that this “philosophical reader” contained no reference to Ludwig Wittgenstein in this essay. After all, Wittgenstein had taken on the same question with regard to the concept of “game.” Ultimately, he concluded that, while one could not define that concept through the necessary and sufficient conditions of a rigorous formal logic, neither could one dismiss it out of hand. To borrow a later phrase from John L. Austin, this was just one of those examples of how we “do things with words,” regardless of whether or not what we do can be reduced to a formal infrastructure.

The bottom line is that categories are not mere abstract constructs. They are products of how the mind imposes order on sensory input, which is why Gerald Edelman chooses to focus not on the categories themselves but on those processes that he calls “perceptual categorization.” This stance is particularly important where “art” is concerned. Like it or not, we exist in a social world of minds that have declared it a perceptual category, reinforced by how our capacity for language has chosen to hang a noun-label on it. We have done this without worrying about whether that label has a variable target. Indeed, we may even embrace the variability of that target, which is what I had in mind when, back in 2010, I wrote that Edgard Varèse had “laid siege to those perceptual categories that we all assumed would serve us when listening to music.” From this we may conclude that Cage showed up in Adorno’s essay because he came along with a bigger siege engine.

In order to advance from sensation to cognition, Edelman uses his foundation as a basis for building hierarchies of categories of categories. This hierarchical stance has appealed to the artificial intelligence set, where it was abstracted into “object-oriented programming.” Unfortunately, that approach tried to abstract away the social dimension, which is one reason why it still cannot come to grips with “game.” (I once had a colleague who wrestled with whether, in the hierarchy he was trying to build, a “toy truck” was a “toy” or a “truck!”)
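To make that dilemma concrete, here is a hypothetical sketch in Python (all of the names are my own invention). Multiple inheritance lets one declare the thing to be both at once, but the formalism still imposes an ordering that the social use of the words never required:

    class Toy:
        def play(self):
            return "being played with"

    class Truck:
        def haul(self):
            return "hauling cargo"

    class ToyTruck(Toy, Truck):
        """Both a Toy and a Truck, at least as far as the interpreter cares."""
        pass

    tonka = ToyTruck()
    print(isinstance(tonka, Toy), isinstance(tonka, Truck))  # True True
    # The method resolution order still ranks Toy ahead of Truck.
    print([c.__name__ for c in ToyTruck.__mro__])
    # ['ToyTruck', 'Toy', 'Truck', 'object']

The code runs, but it settles nothing: the ordering is an artifact of the formalism rather than of anything in how we actually use the words.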

My own interest, on the other hand, has been to determine whether or not the things we do with our words might fall into some “meta-level” set of categories that serve us when we talk about different art forms. I have been at this for some time. Thus, when I find myself wrestling with a particularly tricky aspect of the making of music, I still tend to turn to the medieval trivium to guide how I use my words within a framework of logic, grammar, and rhetoric. This does not strike me as far-fetched, since one of the key aspects of the social dimension of music concerns the intersection between how we make music and how we talk about making music.

This is not to imply that, in the course of my own doing things with words, everything always fits nicely into that framework. Sometimes I feel as if I have to take a shoehorn to what I am trying to say. Then I have to remind myself that rethinking the framework may be more valuable than cramming into it things that may not belong there!

Monday, October 22, 2012

Are We Still Worrying about Rain?

Back in the Dark Ages, when, as a result of my role as a Silicon Valley researcher, I was an early adopter of DSL in my home in Palo Alto (which took SBC about a year to get working, leading my wife to call them "The Stupid Solutions People"), I soon discovered that my connection would get flaky whenever it rained. (These were also the early days of globalizing customer support. So I had my first experience of dealing with a help desk agent who had no idea what weather conditions were like outside my window, working from a script that had preordained such information to be irrelevant.) Today the Bay Area is getting its first major rainfall (much to the consternation of Giants fans); and, almost like clockwork, Yahoo! Mail is exhibiting regular drifts off into Aristophanes' cloud cuckoo-land.

So, as we get besieged with television commercials telling us about the mass proliferation of cell towers, do we need to ask how much of our "advanced" digital technology rests on the foundation of a decades-old infrastructure that still cannot stand up to a little rain?

Saturday, October 20, 2012

Adorno and Cage

I have been procrastinating for some time on acquiring a better understanding of the work of Theodor W. Adorno, particularly regarding his approaches to music theory. However, as a result of reading a paper by Thomas Y. Levin (“For the Record: Adorno on Music in the Age of its Technological Reproducibility,” October, Volume 55, Winter, 1990, pages 23–47), I realized that there is an interesting connection to John Cage that deserves some recognition before we reach the end of the latter’s centennial year. In examining Adorno’s attitudes towards recorded music, Levin finds one Adorno text in which he advocates the use of recordings as a creative medium, through which one may apply montage techniques similar to those that had established themselves in filmmaking.

Levin reacts to this text as follows:
Such practice, he now argues, enlists the element of chance (which is unavoidable in all performance) in the service of reason, and exposes the falsity of the ideology of inspiration that is already incompatible with the iterated structure of traditional rehearsals.
One cannot read passages invoking “the element of chance” (or, for that matter, “the falsity of the ideology of inspiration”) without thinking of Cage. However, there is still the question of whether Adorno himself was thinking about Cage when he wrote this sentence. We may never know for sure. On Everything2 we can find a post by the user Oisin that includes the statement:
So Adorno included John Cage among his composers to be championed, for although his work is not dodecaphonic it is "atonal" in the sense that Adorno uses the word.
However, Adorno’s interest in montage is not necessarily related to his advocacy of atonality, nor does Oisin state explicitly whether Adorno actually knew who Cage was or had heard any of his compositions.

These questions may be resolved somewhat more satisfactorily by a sentence in Adorno’s “Art and the Arts” essay from 1967, which is included in the Stanford University Press anthology, Can One Live after Auschwitz?: A Philosophical Reader. The sentence in question is the following:
What set out to spiritualize the material of art ends up in the naked material as if in a mere existent, just as was explicitly called for by a number of schools—in music, by John Cage, for example.
In other words Adorno knew enough about Cage to know that he believed that any sound could be treated validly as “material of art” (although we have no idea from this sentence whether Adorno actually listened to any of the ways in which Cage put this theory into practice!).

Thursday, October 18, 2012

Opera Audiences, Then and Now

Last month I wrote a post entitled “Putting the Claque in its Proper Perspective,” based on my experiences in reading William L. Crosten’s book, French Grand Opera: An Art and a Business. I had previously thought of the claque as a publicity engine, designed to shape the opinions of audiences that could not think for themselves. I was a bit surprised to discover that the claque also provided a model for proper decorum. This was basically a coy reminder that the manners of the market, so to speak, were not necessarily those of the opera house or any other venue for music where performance was part of the experience. In retrospect, I should not have been surprised, since much of Richard Strauss’ opera Der Rosenkavalier is about the nature of decorum in the face of the rise of the bourgeoisie.

Ultimately, however, Crosten’s primary message was that the success of opera as a business had a lot to do with telling people what to think. This often involved the delicate matter of providing content consistent with their expectations and then shaping their opinions around that content. Thus, in many respects, this paragraph from the final chapter is a representative take-away from the entire book:
To a bourgeois society that had lost all contact with the past glories of the French lyric theater and that had shown itself singularly unable to appreciate the finely modeled, individualistic style of a Mozart or of a Rossini at his best, grand opera of 1830 came as a revelation. Seasoning originality with compromise, it spoke to its auditors in a language they could understand. While the older aristocracy took its patronage to the Théâtre-Italien, the bourgeoisie stormed the doors of the Académie Royale de Musique, for there they found an art made in their own image—an art that was at once revolutionary and reassuring, that extended one hand towards Romanticism as it held fast to conventionality with the other. Grand opera's luxury, size, and complete seriousness gave it an appearance of greatness which was both stimulating and flattering to its audience; yet there was always enough commonness in its expression to keep it easily accessible. Tied to no program, either classic or romantic, it was in all essentials a popular art keyed to the tempo and taste of its day.
This assessment may be a bit harsh on contemporary audiences, whose appreciation of both Wolfgang Amadeus Mozart and Gioacchino Rossini is probably at least a bit more refined than that of the Parisian bourgeois of 1830; but I am not sure it is that far off the mark. While I have any number of good things to say about Bartlett Sher’s production of The Barber of Seville for the Met, I still have to admit that it prioritizes spectacle as the primary vehicle for making the music palatable. If Rossini needs that kind of assistance in achieving “the appearance of greatness,” then we can imagine how Met audiences must feel these days when Mozart is on the bill.

Claques did not exist over here during my years of learning to be an informed member of the audience. Now, with everyone glued to their smartphones, they no longer need to exist as they did in 1830 Paris. You may be instructed to turn off your smartphone during the performance, but during intermission you can tweet all you want and follow others doing the same. Opinions are still being shaped; only the medium has changed.

Needless to say, this is a bit demoralizing to those of us old-fashioned enough to believe that opinions should be informed on the basis of more than a tweet. Still, I suspect that those of us who prefer the well-wrought description to the summary judgment, however well-honed it may be by rhetoric, have always been in the minority. Thus, we really have not come particularly far since 1830, even to the point that business interests still trump aesthetic ones and are likely to continue to do so for some time to come.

Wednesday, October 17, 2012

Daniel Mendelsohn’s “Reality Problem”

I was glad to see New York Review Books release a collection of essays by Daniel Mendelsohn entitled Waiting for the Barbarians. As readers of this site know, I have followed Mendelsohn’s New York Review articles eagerly and enthusiastically. While many of them capture some immediate spirit of the moment, such as his review of Avatar, Mendelsohn always seems to extrapolate his accounts beyond that immediacy to more general hypotheses, if not truths.

The review of this collection by Edward Mendelson (note the spelling difference) calls particular attention to what Mendelsohn calls the “reality problem.” Since this has been a favorite topic of my own, I feel it worth quoting how Mendelsohn formulates it:
… how the extraordinary blurring between reality and artifice that has been made possible by new technologies makes itself felt not only in our entertainments…but in the way we think about, and conduct, our lives.
Expressed this way, the reality problem may be viewed as a corollary of Max Weber’s “loss of meaning” problem, since the very meaning of “reality” is at stake. Since Weber posed this problem as a hazard of too much emphasis on market-based thinking, I think this connection is particularly appropriate. Whether it involves the convincing levels of artifice available through, for example, CGI or the extent to which Facebook embeds us in an “artificial” version of the social world, rather than the “real” one, the intense marketing of new technologies seems to have confronted us with the unintended consequence of Mendelsohn’s reality problem.

Of course, once “reality” loses its meaning, so does everything else, whether it involves how we choose those who govern us or how we shall be able to eat a decade from now.

Sunday, October 14, 2012

Following Best Sellers onto Television

Last night, after my wife and I returned from a delightful Voices of Music concert, we made our usual decision to unwind with something we had recorded on our DVR. The television itself happened to be tuned to NBC. I saw just enough to realize that this week's episode of Law & Order: SVU had been inspired by Fifty Shades of Grey. (I suppose that, if you were determined to spin a television episode off of that book, SVU would be the most appropriate target.) What I saw did not make me any more curious about the book, let alone how things would proceed on the episode; but it did leave me to wonder if Joseph Anton has a similar destiny and, if so, where and how it would show up on commercial broadcast television.

Wednesday, October 10, 2012

Truncation Games

Fortunately, Yahoo! News can also be a source of amusement in its campaign to fulfill Max Weber's dire forecast of a society in which meaning has been lost. Consider the following item (on the same page as the blog post about the bar following the stock market) on the MORE FROM YAHOO! NEWS list this morning:
In appeal to swing voters, Romney offers a more centrist mes … The Ticket - 2 hrs 27 mins ago
Yes, that is a truncation; the ellipsis shows it. It is clear that the last word is "message." I just wish that the character count had allowed the second "s" to be included!
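For what it is worth, here is a sketch in Python of the sort of truncation logic presumably at work; the 60-character budget is my own guess, chosen only because it reproduces the cut Yahoo! made:

    def truncate(text, limit):
        """Cut text to a character budget, appending an ellipsis if needed."""
        if len(text) <= limit:
            return text
        return text[:limit] + " …"

    headline = "In appeal to swing voters, Romney offers a more centrist message"
    print(truncate(headline, 60))
    # In appeal to swing voters, Romney offers a more centrist mes …

Raise the budget to 61, and "mes" would at least have been "mess."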

Yahoo!'s Latest Attack on Semantics

I have to confess that curiosity drew me to the headline "Inventive Bar Uses Stock Market Approach to Price Drinks" in the NEWS FOR YOU list on this morning's Yahoo! News site, even though I was pretty sure that, compared with the breaking story about our sending troops into Jordan, it would not really qualify as "news." It didn't. As a matter of fact, it did not even come from an accredited news source. Rather, it was a post to the Trending Now blog, managed by Yahoo! News.

The icon for this blog sports the subtitle "The news you need to know." I suppose that, if there were some interesting economic analysis about coupling the price of drinks to stock market behavior, I might need to know that, in case it had an impact on how I managed my own portfolio; but analysis was clearly not the order of the day. For that matter, neither was "now." When I read this post, it was eighteen hours old. That hardly counts as "now" in Internet time. If high crimes and misdemeanors against semantics were a capital offense, Yahoo! would be facing the prospect of spending time in the big house!

Tuesday, October 9, 2012

Smart Meter, Dumb System

PG&E has been putting a lot of money into public relations. Considering the mess they made in San Bruno, I suppose this was necessary; but the picture is bigger than a single case of negligent maintenance. It also extends to how those of us stuck with being customers get to monitor our billing.

When "smart meters" were launched, PG&E launched a major television advertising campaign explain how we would all be able to use monitoring data to see why we were paying what we had to for each month's bill. I have been playing with the Web site since it became available. For the most part I find it no more informative than the printed statement I would get each month that compared my usage for the month for that of the same month in the previous year.

Nevertheless, there was a major dip in last month's bill; and it was interesting to put that number in the context of month-by-month activity, rather than just where things stood a year earlier. Today I examined my latest statement and discovered that the number had bounced up again. However, when I went to the My Energy Use page on the PG&E Web site, I discovered that the data for this month's bill was not available! In other words I was getting more useful information when it came in paper form than when it was coming from the "smart meters." This leads me to wonder whether those meters are actually providing accurate data (or, for that matter, whether they are providing data at all). If PG&E was capable of "faking it" with their maintenance reporting activities, why should we trust their usage data?

Monday, October 8, 2012

Time-Consciousness in the Performance of Music

This is definitely “rehearsal” material, having grown out of some casual conversations I have been conducting while in the midst of a rather heavy schedule of covering concerts.

The thoughts first emerged through a conversation with a friend who believes that teaching piano should pay as much attention to improvisation as it does to reading from the score page. This struck a particularly resonant chord in my own consciousness, because, while I spent a lot of time improvising as a kid, I was not particularly good at it then; and I am even worse at it now. As a result, I have developed a real interest in the extent to which Johann Sebastian Bach’s approach to pedagogy seems to be grounded in the assumption of a tight coupling (not that Bach would ever have used such a phrase) between proficiency in execution and proficiency in invention.

Since I still tend to be as interested in “wet brains” as I am in “abstract ideas” and since last year I was put off by what I felt was some really bad experiment design in an effort to identify, through brain scanning, areas of brain activity associated with both memorization of music and improvisation, I tried to relate this inadequate attempt to a firmer foundation of hypothesis generation. It occurred to me that questions concerned with both memorization and improvisation could only be framed in the context of some more general model of time-consciousness. This continues to be one of the most problematic concepts for those trying to get a handle on time-based thinking. Edmund Husserl wrote a whole book about it, but the problem has been nagging great minds going back at least to Augustine, not to mention Aristotle’s efforts to get a handle on memory.

In his book The Remembered Present, Gerald Edelman tries to approach time-consciousness through areas of the brain that he calls “organs of succession.” (For those wanting me to be more specific, these are the cerebellum, the hippocampus, and the basal ganglia.) In his model time-consciousness has much more to do with the ability of the mind to work with the concepts of “before” and “after” than with the more specific matters of duration, whether in the domain of clock time or in that of Henri Bergson’s model of subjectively “felt” time. (This actually suggests that Edelman and Augustine might have easily found a common ground for conversation.)

I would like to suggest that those who are good at improvisation depend very heavily on such organs of succession. In simpler language, improvisation comes down to continually dealing with two questions:

  1. What have I done?
  2. What do I do next?

Now, while these questions are good to bear in mind when one is reading from a score page, from a strictly logical point of view, neither is absolutely necessary. Reading music can take place entirely “in the moment,” with no regard to either past or future. The eye is simply providing a stream of answers to only one question:
What do I do now?
This then suggests why Bach felt it was important for the student to acquire both sides of the coin, so to speak. One masters execution because, once you know the answer to the what-do-I-do-now question, you have to have the physical capacity to actually do it. On the other hand Bach’s approach to invention addresses the capacity for improvisation. That requires those before/after questions; and they cannot be satisfied unless your organs of succession have been “primed” to deal with them.

As I said at the beginning, this is admittedly “rehearsal” material. However, I figure that a “rehearsal studio” can double as a “laboratory notebook;” and such a notebook is more than a record of hypotheses, data, and analyses. It can also be a diary in which one lays out the “tracks for trains of thought,” so to speak, that direct one to those hypotheses that need to be further investigated.

Friday, October 5, 2012

Solutions Only Work if They are Effectively Implemented

One might think, from reading Walter Addiego's review in today's San Francisco Chronicle, that Matthew Heineman's new documentary, Escape Fire: The Fight to Rescue American Healthcare, is a good thing. He writes approvingly:
The film is surprisingly optimistic, arguing that there are genuine, practical answers to many of the problems afflicting the system, and some are already being adopted.
However, on the basis of Addiego's review, it would appear that the film focuses entirely on the nuts and bolts of healthcare itself, rather than the context set by the "industrial" practices of insurance and hospital management (whose directorates often interlock).

One is reminded of another documentary, Who Killed the Electric Car? In this case the point was that we had the scientific and engineering foundations for an electric car decades ago, along with many enthusiastic promoters. (There was footage of Tom Hanks in the documentary waxing lovingly over an electric car he owned.) However, at that time the oil industry was powerful enough to quash all competition; and we can rest assured that both insurance and hospital management will see any proposal by Heineman as competition to be eliminated by any means necessary. After all, isn't that the American way?

Thursday, October 4, 2012

The Debate that Wasn’t

Yesterday I took a rather jaundiced view of the debate between Barack Obama and Mitt Romney that was “about to be,” which received a comment preferring “to defer judgment until after the event.” Well, it is now “after the event.” As fate would have it, I have been reading another “after the event” article in the latest issue of The New York Review, Joseph Lelyveld’s reflections on the Democratic Convention. One sentence about television coverage continues to stick in my craw:
Though they tend to run as a herd, these toilers [the newscasters], reaching daily and hourly for fresh insights, save us from having to think for ourselves.
This was certainly as true “post-debate” as it was “post-convention;” but the fact is that the herd was not putting very much thinking into the process. Thus, it was unclear what they were doing for us.

The bottom line is that Mark Mardell’s bleak assessment, which I cited yesterday, could not have been better fulfilled. It is as if that collective herd is more interested in aspiring to be Roger Ebert than (to choose a serious media journalist from the past) Edward R. Murrow. The babble is all about the “performance on the stage” rather than what was being performed, perhaps because the latter is a script that we all “know too well” (as Leporello puts it when Don Giovanni’s band strikes up “Non più andrai”). Even the headline on the front page of the San Francisco Chronicle (dwarfed by the story about the Oakland Athletics, by the way) looked as if it belonged in the Datebook section.

The fact is that the most memorable moment from the debate may well have been the reference to Big Bird; that may tell us all we need to know about how the media outlets want us to prepare for the election.

Wednesday, October 3, 2012

Debates That Aren't

With a title like "Do the US presidential debates matter?," one might think that Mark Mardell had been watching The Newsroom on HBO before writing this editorial for BBC News. He certainly seems to have gotten the point that the HBO series made clear through their own dramatic context. He just articulated that point in more objective language:
What matters most is not the closely drawn intellectual argument about rival policy platforms, but the body language and the pithy one-liner that sums up an opponent's faults.
Most of us do not need to be reminded that this is all that matters. We are used to living in a culture preoccupied with television advertising preaching that no problem is so difficult that it cannot be solved by buying a new car. We expect to have Presidential candidates promoted to us through the same strategies that seem to work so well for cars, knowing full well that neither the cars nor the candidates are going to deliver "as advertised." After watching Democracy Now! yesterday, the only thing that really matters to me is whether or not Jim Lehrer will have the gumption to bring up the topic of gun control in the state where that issue has reared its ugly head twice. My guess is that the "mad men" mentality will overrule any effort to bring such a "hot button" issue into the debate, leading me to believe that viewers are likely to learn more about the candidates by using their cable provider's on-demand facility to watch past episodes of Boss.

Tuesday, October 2, 2012

Making an Issue out of Neglecting the User

Danny Sullivan deserves a shout-out for today's piece in the Common Sense Tech column on CNET News, even if, by his own admission, this is not the first time he has visited this particular topic. The topic is the state of calendar management on the Mac, which was pathetic under Snow Leopard, long before Mac OS fell victim to "iOS-ification," and which has progressed from bad to worse with the spin-off of Reminders. I am beginning to think that any allegiances that Apple had to user-centered design have faded into ancient history, at least as far as Mac OS is concerned, after which the transition to myth is inevitable. I suppose this is all a consequence of an ideology based on the slogan:
We have seen the future, and it's mobile.
However, I have yet to be convinced that any business that depends on an analytic approach to both reading (as in long reports that often require multiple open windows to support fact-checking, testing, and related queries) and writing (as in responding to such reports with a comprehensive analysis) will be able to flourish if all computing needs to be done on an iPad. The future of politics may devolve to a "battle of tweets;" but it is hard to imagine Twitter being the only tool available when a company decides to prepare for its IPO!

Monday, October 1, 2012

The Jewish Homophobe

Since I remember reading Merle Miller’s “What it Means to be a Homosexual” in January of 1971 in The New York Times Magazine, I felt a personal connection to Charles Kaiser’s NYRBlog post, “When The New York Times Came Out of the Closet,” which serves as an afterword to the new Penguin release of Miller’s On Being Different: What It Means to Be a Homosexual. I was particularly struck, however, by Kaiser’s backstory about the especially vocal homophobes at the time Miller’s article appeared. One was the Times’ own managing editor, A. M. Rosenthal, and the other was Joseph Epstein, author of the Harper’s article “The Struggle for Sexual Identity.”

This aspect of Kaiser’s post led me to think back on being a student at the time of the Sexual Revolution. I realized in retrospect that I had encountered a variety of different homophobic stances in the classroom among my teachers, and the most vocal of those stances took place in music classes. I suppose this was understandable, since it was a time when researchers were just beginning to disclose how many members of the pantheon of “great composers” had homosexual experiences, which must have been quite a blow to those teachers who worshipped those idols, rather than concentrating on studying them. I also remember that the most vocal of the homophobes was Jewish; and he often made it a point to identify at least two composers who had not concealed their homosexuality as members of his worst-composer-of-all-time category, as if homosexuality involved a degradation of aesthetics as well as morals.

In retrospect I am inclined to call this an instance of the nice-Jewish-boy syndrome. This was a time when the old “nice Jewish boys” concentrated on excelling in intellect and keeping a low profile in everything else, while the younger ones (one of whom showed up in the last season of Mad Men) rejected that whole low-profile attitude. Why was there a generational shift? My conjecture is that the older generation lived with vivid memories of the Holocaust and saw the low profile as a necessary survival tactic, while the following generation was more detached from Hitler’s anti-Semitic nightmare.

I was teaching in Israel during the 1972 Presidential election. Just about every Israeli I met opposed George McGovern, because he wanted to declare Jerusalem an international city. I therefore wore a McGovern button with a certain amount of pride, not to mention an excuse for declaring my suspicions about Richard Nixon. I was once confronted by an Israeli who asked what I would do if the President of the United States decided to persecute Jews. I replied that, if the Federal Government wanted to get me, they would probably come up with reasons a lot better than my religion to do so!

Like most Jews I have a lot of respect for those who endured the Holocaust. However, I also believe that a generational shift has taken place and that keeping a low profile about your beliefs is a thing of the past. Of course I now live in a City whose culture embraces just about every imaginable form of tolerance. Still, I am practical enough to recognize that there are many corners of the world, including in my own country, in which discretion is not merely the better part of valor but a necessity for survival.