Having lived in Singapore between 1991 and 1995, I was well aware of the vulnerability of the Maldives to global warming. Unfortunately, at that time global warming was receiving very little attention; and the Maldives were governed by Maumoon Abdul Gayoom, a president who had secured himself in office for some time. Thus, when opposition surfaced in the form of Mohamed Nasheed, the official response was to throw him into prison along with his dissenting views.
Eventually, however, Nasheed managed to get himself fairly elected to the presidency. This gave him a bully pulpit for raising consciousness of the proposition that, were warming to raise sea level by only a few inches, the Maldives would cease to exist. He is best known in the media for having held a cabinet meeting under water as a sign of what the future might bring. He was thus one of the most forceful voices at the Copenhagen Climate Summit in 2009.
Well, nothing that would protect the Maldives came out of that meeting. Then, almost as if to add insult to injury, Nasheed was removed from office by Gayoom. As a result, business-as-usual was restored to the Maldives, including the neglect of any environmental problems, thus paralleling the outcome in Copenhagen.
Neither of these sad conclusions received much attention from the media, leading me to wonder if this was just a case in which public attention was once again being directed according to the determination of those with a stake in the production (and probably large-scale consumption) of energy. Put another way, having Nasheed running the Maldives was as much of an “inconvenient truth” as the popularity of Gore’s documentary of the same name. However, while the Gore documentary has faded into oblivion, Nasheed was harder to control; so he had to be dealt with by other means, the most effective of those means being the power-wielder he had displaced. So it is that the fate of what may be the most vulnerable nation on earth will be disposed of by Big Oil with the same efficiency applied to ruining the Gulf Coast of our own country.
The title of Tim Parks’ latest NYRBlog post was irresistible: “Do We Need Stories?” Before reading the post my initial reaction was that this would be a response to Walter Benjamin’s despair that “the art of storytelling is coming to an end;” but it turned out that the incentive came from Jonathan Franzen’s campaign for the need (in Franzen’s words) “for long, elaborate, complex stories, such as can only be written by an author concentrating alone, free from the deafening chatter of Twitter.” Given my current feelings about Franzen and his proclamations, I was all the more curious about what Parks would have to say.
I was glad to see that his focus was more on the stories themselves than on any obsessions about the conditions under which we read them (although it was still nice to see Parks wield Jane Austen as a stick to beat down Franzen’s particular brand of obsessions). In particular this post explores the extent to which stories provide semantic facilitation in our efforts to come to grips with what William Empson called “complex words.” Thus, Parks makes his primary point through words that most students of semantics tend to avoid, simply because they constitute the ultimate minefield of the social world:
The only way we can understand words like God, angel, devil, ghost, is through stories, since these entities do not allow themselves to be known in other ways, or not to the likes of me. Here not only is the word invented—all words are—but the referent is invented too, and a story to suit. God is a one-word creation story.
In many ways this approach to semantics reinforces Ludwig Wittgenstein’s proposition that the meaning of a word resides in how it is used. On the one hand the story provides a “framework of use” for the word in question, while, within that framework, we are likely to encounter how the word is being used by those agents involved in the story. Parks then advances his own position to accommodate the storyteller as well as the story:
Like God, the self requires a story; it is the account of how each of us accrues and sheds attributes over seventy or eighty years—youth, vigor, job, spouse, success, failure—while remaining, at some deep level, myself, my soul. One of the accomplishments of the novel, which as we know blossomed with the consolidation of Western individualism, has been to reinforce this ingenious invention, to have us believe more and more strongly in this sovereign self whose essential identity remains unchanged by all vicissitudes. Telling the stories of various characters in relation to each other, how something started, how it developed, how it ended, novels are intimately involved with the way we make up ourselves. They reinforce a process we are engaged in every moment of the day, self creation. They sustain the idea of a self projected through time, a self eager to be a real something (even at the cost of great suffering) and not an illusion.
In other words semantic interpretations reside not only in the texts of the stories but in the actions through which those texts are related (or, if you prefer, “performed”). Ultimately, there are more things in Parks’ world of stories than are dreamt of in Franzen’s feeble philosophizing.
In the wake of Lang Lang having performed all five of the piano concertos of Ludwig van Beethoven with the Philharmonia Orchestra in London, Ivan Hewitt has used his column for the London Telegraph to review the disconnect between “his 15,000 adoring fans” and the less-than-enthusiastic critics. Hewitt makes some good points, and I certainly think the column is worth reading. However, this is a topic that exceeds the limitations of his column space; so I would like to provide some supplementary thoughts that both support and question his position.
Let’s begin with the example he presents in Lang Lang’s defense:
Think back to the times when you’ve seen a performance of this concerto, or any concerto that calls on the notion of the heroic performer. Remember how the emphatic passages got much of their power from the look of the pianist’s raised forearms and his expression of stern invincibility. Or how the deliciously coloured flourishes of the slow movement were given an extra glow by the graceful flourish of the pianist’s fingers.
Now imagine another performance where the sounds emanating from the piano have exactly the same expressive force, but they’re played by the pianist with an air of studied neutrality, so you hardly notice his presence. Difficult, isn’t it? In fact I would say it’s impossible, and not just because performance always involves responding to the whole human being in front of us. It’s because a lot of music from Beethoven onwards absolutely requires self-assertive performance. To avoid it out of some misguided loyalty to classical music’s “purity” robs the music of something essential.
The thing about the argument in that second paragraph is that it is not as impossible as Hewitt seems to think. The fact is that I only had to consult my personal memory to summon up just what Hewitt wanted me to imagine. Furthermore, the pianist in question was one of those “greats,” perhaps one that provided those critics attacking Lang Lang with just the sort of cudgel they needed. The pianist was Alfred Brendel. The occasion was a recital he had given at the California Institute of Technology.
When I had learned that Brendel would be performing Franz Schubert’s D. 960 B-flat major piano sonata, I made it a point to drive from Santa Barbara to Pasadena for the occasion. The event turned out to be one of sobering disenchantment. I do not know if I would call Brendel’s composure one of “studied neutrality;” but there was no ignoring his air of detachment. Indeed, he was so detached that I had to wonder if his mind was occupied only with getting out of the Caltech auditorium (if not getting out of California) while the rest of his body was on automatic pilot. The whole affair left me wondering why Brendel had invested so much time in learning D. 960 in the first place.
In that context I would say that Lang Lang might be a useful antidote for those who have become besotted with Brendel’s particular approach to “purity.” However, like any drug, that antidote should be taken with both moderation and discretion. Clearly, I do not mind pianists (or any other musicians, for that matter) who view performance as a “whole body” experience; but I draw the line when it seems as if the body is more important than the music. This takes me back to the last time I used one of Hewitt’s columns as a platform for refining my own thoughts about Lang Lang.
Back in April of 2009, I wrote a post entitled “Ivan Hewitt’s Thoughts on Lang Lang,” which was a response to my first encounter with one of Hewitt’s well-reasoned defenses of Lang Lang. In that post I recalled what I had written after my first encounter with Lang Lang, when I saw him perform Frédéric Chopin’s Opus 11 concerto in E minor (the first) with the San Francisco Symphony. Here is how I described his priorities regarding body and music:
There was almost a choreographed plan to all of his physical gestures of attentiveness during the orchestral sections, and it seemed as if more effort went into those physical gestures than into the musical gestures in the score. The result was a highly skilful act of audience manipulation based on nothing more than the compelling personality of the soloist.
As far as I am concerned, Lang Lang was as detached from Chopin’s music as Brendel had been from Schubert’s. The only difference was that the two pianists conveyed their detachment in different ways, neither of which benefitted the music very much. To be fair, however, both of these pianists had to contend with grueling touring schedules. The fact is that no performer can be “on” every time (s)he comes out on stage in front of an audience; and, much as I dislike the proposition, I fear that both pianists are more interested in how they present themselves in Carnegie Hall or the Barbican Centre than in any impression they make in any city in California, even one with the musical legacy of San Francisco.
Since I have been watching Luck with great interest, let me turn to the racetrack for a closing metaphor. Buying a ticket to a concert is a bit like placing a bet on a horse. You can draw upon no end of “stocks of knowledge” in seeking out a bet that is likely to give you the best payoff; but all those data can never tell you just what the horse will do after its gate opens. Thus, those of us who write about concerts are in the same boat as sports writers covering the race after it has happened. We can describe; and, from time to time, we may even be able to apply some form of diagnostic thinking to our respective descriptive tasks. However, we have to be very careful about evaluating, since we rarely know just what factors will be most relevant to making a sound evaluative decision.
In 1947 Martin Buber published a collection of Hasidic sayings entitled Ten Rungs, each “rung” corresponding to an aspect of daily life impacted by faith. The first two paragraphs of his Preface are as relevant today as they were when he wrote them:
They asked the “holy Yehudi”: “Why is it written: ‘Justice, justice, shalt thou follow’ [Deut. 16:20]? Why is the word ‘justice’ repeated?”
He answered: “We ought to follow justice with justice, and not with unrighteousness.” That means: The use of unrighteousness as a means to a righteous end makes the end itself unrighteous; injustice as a means to justice renders justice unjust.
I was reminded of this passage this morning while reading a BBC News report of the latest Israeli Supreme Court ruling on a dispute over Jewish settlers on Palestinian territory.
The case concerns a Jewish settlement at Migron on land that had been privately owned by Palestinians before the Six Day War. Because it was private land, the Court had ruled in favor of the previous owners, declaring that the Israeli settlement had to be demolished by the end of this month. However, the government appealed on behalf of the settlers, requesting that demolition be postponed for three and one-half years, claiming that the settlers needed the time to build new homes. Today’s news was that the Court has rejected this appeal; this was the unanimous decision of a panel of three judges.
Israel has never had a codified constitution; indeed, there was no serious effort to write one prior to 2003. Thus, there have been skeptics who have dismissed the Supreme Court as some elevated form of Rabbinical Court. Perhaps those skeptics should reconsider. In light of Buber’s anecdote about the nature of justice, the government’s proposed three-and-one-half-year extension would have, as the “holy Yehudi” put it, rendered the just decision in favor of the Palestinian owners an unjust one, making the Court’s rejection of the appeal “fundamentally rabbinical,” so to speak. Perhaps it is time for some of the more zealous fundamentalists among those Israeli settlers to pay more attention to what centuries of rabbis have had to say about such concepts as “justice”!
Regardless of the number of delegates he actually accumulates, Rick Santorum has made it clear that he cannot be ignored. This means that those who vote for him cannot be ignored either. Apparently, there are votes to be gained by appealing (pandering) to the highly exclusive precepts of Fundamentalist Christian morality. It was therefore refreshing to read the efforts of Robert Reich to stand such Fundamentalism on its head in an editorial in today’s San Francisco Chronicle:
What Americans do in their bedroom is their own business. What corporate executives and Wall Street financiers do in boardrooms and executive suites affects all of us.
Unfortunately, such clarity is easily silenced; and, as we learned from the case of Eliot Spitzer, the closer the clarity gets to actually threatening the financial sector, the more vulnerable it becomes to attack on purportedly moral grounds. Nevertheless, I am always glad to see Reich trying to break the cycle, even though I have lost any of my own personal faith that his efforts will ever have an effect.
I have been auditioning the fourteen CDs in the Kathleen Ferrier Centenary Edition box of all the recordings she made for Decca (which used to be called “London” in the United States). I was reminded of how much I enjoy listening to the two sets of Liebeslieder waltzes by Johannes Brahms. There are so many examples of what Brahms could do with extended duration that it is easy to forget what a master of brevity he could be. For this we have to turn to the vocal side of his catalog, where we often encounter pieces that are almost Webern-like not only in duration but also in how much is packed into that short duration. On the instrumental side we also encounter it in his other major set of waltzes, the Opus 39. Perhaps I have Brahms to blame when I feel that many of the performances of waltzes by Frédéric Chopin that I have encountered tend to sound so long-winded!
The Hunger Games does not open until tomorrow, but the BBC is already treating it as a major news story. They are hardly alone, but they may be the most honest player in the pack. What struck me while watching Tim Masters’ report for World Service Television News was his preference for the noun “franchise” over “movie” (or any of its variants). Yes, one cannot avoid mentioning that the lines for the opening have been forming for several days; but, at the risk of sounding too cynical (if that is possible), folks need to line up for something now that they have their new iPads.
Still, there may be a similarity. The first generation of the iPad certainly had a lot both to attract and to sustain customer attention; but, like so many other brain children of Steve Jobs, it also created a “hunger” (yes, I’m playing with my words) for what the next model would bring. Apple has always been good at creating a craving for what comes next strong enough to block out any pleasure taken in what you already have. Thus, a film based on the first book of a trilogy is likely to create that hunger, perhaps even as soon as the lights come up after the screening of this first film in the set. Selling a franchise is not about creating desire for a single product; it is about coupling the desire for that product to a stronger desire for whatever will follow it.
This is, of course, a dicey business. Edgar Rice Burroughs made a successful (at least for his strong fan base) franchise out of his books about the adventures of John Carter. Walt Disney seems to have turned that franchise into “one of the biggest flops in cinema history.” In trying to prognosticate about The Hunger Games in the historical context of what just happened to Disney, I am reminded of how, back in 1999 writing for The New York Review of Books, Louis Menand compared Star Wars and Titanic. In a review of The Phantom Menace, his opening sentence was:
Star Wars is entertainment for eight-year-old boys.
Two paragraphs later, he then described Titanic as a movie for ten-year-old girls.
We should not think about The Hunger Games in the context of either the initial Star Wars success or the John Carter failure. We should think of it as the franchise that is trying to move in on Twilight’s turf. Perhaps what matters most is that the ten-year-old girls are already hooked on the Suzanne Collins books; and this new attraction may have come right around the time that they were beginning to have their fill of Bella and her romantic entanglements. Nevertheless, I think it is important to stress that weasel-word “may.” Ten-year-old girls can be fickle; and hopefully Lionsgate will have enough Hollywood smarts to avoid counting any numbers they do not yet have.
While I was basically sympathetic with the jeremiad posted by Charles Simic to NYRBlog yesterday, I would like to take issue with one of his turns of phrase:
It took years of indifference and stupidity to make us as ignorant as we are today.
This seems to presume that the blame for what Simic calls the “Age of Ignorance” can be spread equally across the entire American population. In the context of Hans Magnus Enzensberger’s theory of a “consciousness industry,” however, I would suggest that pervasive ignorance is the result of a highly calculated effort diligently pursued by those who could appreciate how it would benefit their own self-interests. That effort began with the rise of consumerism following the Second World War; and the first boy to cry “wolf,” so to speak, may have been Newton Minow, when he observed that the domination of advertising over content had turned television into a “vast wasteland.”
What Simic calls “years of indifference and stupidity” may actually be the product of deliberate debilitation of mental capacities brought about by the addictive nature of consumerism that became the most successful product of American industry. Indeed, it has been through such addiction that Simic’s observation that “deceiving Americans is one of the few growing home industries we still have in this country” emerges as a corollary. Unfortunately, those running the consciousness industry became so enamored of it that they assumed they could use it to addict the rest of the world, not realizing that most other countries out there had their own consciousness industries better attuned to their respective “customer bases.” We thus now stand on the brink of being a country that no longer produces anything of value to any other country; and that is where the “indifference and stupidity” that our own consciousness industry worked so hard to cultivate will turn out to be our undoing.
While reading David Schiff’s The Ellington Century, I encountered an interesting riff on Othello, presented as part of a discussion of the Duke’s Shakespeare-inspired suite, Such Sweet Thunder. The following passage caught my attention:
Othello is a play about race (“an old black ram is tupping your white ewe”) and about being and seeming. Othello seals his doom early by believing in self-evident facts of his existence: “My parts, my title and my perfect soul,/Shall manifest me rightly.” Iago acts out the opposite principle: “I am not what I am.” Othello’s tragic pride stems from a failure to understand that his noble character is as much a product of eloquence as is Iago’s malignant fabrications; Othello’s military and amatory success depends on the power of his discourse.
This idea of the double-edged sword of eloquence may apply very well to many (if not all) of the frustrations that Barack Obama has encountered since he took office. One has only to “look at the record” of Republican rhetoric to warrant the hypothesis that they have cultivated an “eloquence of hatred” as powerful as Iago’s, to a point where “malignant fabrications” are a far more viable stock-in-trade than “self-evident facts.”
From this point of view, we may consider that the Republican primaries are all about which malignant fabrications are likely to inflict the most damage, not only on Obama but on the Democratic Party as a whole, hardly the healthiest way for anyone to be thinking about our electoral system!
Apparently Google has decided to rise to the challenge of Microsoft’s Bing, which grew out of technology introduced by Powerset, regardless of whether or not that challenge is carrying any weight. According to a report by Amir Efrati, which appeared yesterday on the Web site for The Wall Street Journal, Google is planning a phased transition from its keyword-based search system into a technology that is more “semantic.” (The quotes indicate that this is Google’s word choice; but they are also scare quotes to bring attention to the complex nature of semantics in linguistic studies, along with related concepts such as “knowledge” and “understanding.”)
I see from my archives that, back in November of 2009, I accused Google of being a “small boy with a hammer who sees everything as a nail.” Comparing what I said then with what I read today, I realized that, over the years, Google’s primary objective seems to have been to add more and more (and presumably larger and larger) hammers to its tool box, possibly at a time when fewer and fewer nails are being used in the world. Here are three critical paragraphs from Efrati’s article:
Amit Singhal, a top Google search executive, said in a recent interview that the search engine will better match search queries with a database containing hundreds of millions of "entities"—people, places and things—which the company has quietly amassed in the past two years. Semantic search can help associate different words with one another, such as a company (Google) with its founders (Larry Page and Sergey Brin).
Google search will look more like "how humans understand the world," Mr. Singhal said, noting that for many searches today, "we cross our fingers and hope there's a Web page out there with the answer." Some major changes will show up in the coming months, people familiar with the initiative said, but Mr. Singhal said Google is undergoing a years-long process to enter the "next generation of search."
Under the shift, people who search for "Lake Tahoe" will see key "attributes" that the search engine knows about the lake, such as its location, altitude, average temperature or salt content. In contrast, those who search for "Lake Tahoe" today would get only links to the lake's visitor bureau website, its dedicated page on Wikipedia.com, and a link to a relevant map.
Consider, now, the poor soul who wants to plan a ski trip. By watching local weather reports on San Francisco television, he quickly finds out, particularly at this time of year, that there is good skiing in the Tahoe area. In contrast to Singhal’s “in contrast” world, a visitor’s bureau page is probably exactly what this guy wants; he couldn’t care less about the salt content of the lake.
To be fair, however, this vacation-planning guy probably wants something like a visitor’s bureau page because that is what he expects to find when doing a Google search. In other words, to draw upon some of the pioneering research in semantics by Roger Schank, he thinks about planning a vacation in terms of a “script.” He formed that script on the basis of past experience that informed him about what he could do with the variety of tools (not just metaphorical hammers) at his disposal. More importantly, however, his behavior reflects a tight coupling between knowledge and action. We need knowledge in order to act; but we also understand that knowledge on the basis of how we act, rather than as some “entity” in a vast network of other entities.
The last time I tried to take on Google about such matters, my post was entitled “Google Gets it Wrong About Service.” This was an easy topic to pursue, since service is all about action, basically outsourcing some action that we either cannot or do not want to do to some third party who, in some way or another, can be counted on to do it more effectively. If we wish to think of a search engine as providing a service, rather than simply retrieving a bunch of pointers by analyzing some words according to a page rank algorithm, then we need to account for what the guy who wants that service is doing or wants to do. Very knowledgeable logicians have cracked their heads over whether or not concepts involving actions and motives can be expressed through the symbolic primitives that constitute their stock-in-trade; and they have yet to come up with any particularly viable analytic solutions.
However, the failure of logic may lie in the fact that it is, by design, objective. Matters of motive, on the other hand, are subjective, usually with considerable social influence. Thus, a truly “semantic” system will have to have a fair amount of knowledge about your personal psychology and probably the sociology of that corner of the world you inhabit. Can this be done? It certainly is a challenging research question, and Google probably has the resources to support an appropriate research program. However, the more important question is: Do you really want Google to have a model of your psychology and the sociology of your world? If you think that advertising is already invasive, imagine what it would be like if Google could tap into your personal psychology and sociology!
Today the editors of Encyclopædia Britannica used their blog to announce the discontinuation of their print edition and the establishment of a strictly online presence. As I discovered in a post of my own from April of 2008, this is not the first time that Encyclopædia Britannica has tried to take on Wikipedia on the basis of quality of content. Nevertheless, I suspect that Britannica has a long way to go, at least as far as my personal needs are concerned.
Once again, the editors are trying to lure eyeballs through a free access offer. Once again, I have taken them up on their offer. Bearing in mind that I always design particularly challenging tests, I still feel my first experience was a telling one. Because I happened to be chatting about him last night at the San Francisco Conservatory of Music, I decided that György Kurtág’s name would make a good test case. I used only his last name, typing it first without the accent. The bottom line is that Britannica could not find a match for his name either with or without the accent. What made this pathetic, rather than merely sad, however, is that the search without the accent turned up a Google Ad for CDs of this composer’s music available through ArkivMusic.com!
In other words Google Ads knows more about Kurtág than Britannica does! Meanwhile, even with the inadequacies explicitly cited by the Wikipedia editors, the Wikipedia entry for Kurtág is as good a place to begin looking for background on this composer as any (unless, like myself, you can get access to Grove Music Online with your library card). Yes, Britannica has a tradition of employing quality writers to provide material for their entries; but, while I am no fan of crowdsourcing, I fear that, whatever their aspirations, Britannica just cannot keep up with the “knowledge explosion.”
Indeed, it would appear that Britannica cannot even muster editorial quality when presenting themselves. Consider the “infographic” included in another of today’s blog posts: it is as sloppy as it is jumbled. Would you really want to give these guys your eyeballs?
There is much more to Diane Ravitch’s latest post to NYRBlog, “Flunking Arne Duncan,” which appeared earlier this week, than her systematic account of all the ways in which Barack Obama’s Secretary of Education has done such a dreadful job. The real punch line comes when she escalates her analysis from education to general public policy at the end of the post:
We will someday view this era as one in which the nation turned its back on its public schools, its children, and its educators. We will wonder why so many journalists and policymakers rejected the nation’s obligation to support public education as a social responsibility and accepted the unrealistic, unsustainable promises of entrepreneurs and billionaires. And we will, with sorrow and regret, think of this as an era when an obsession with testing and data obliterated any concept or definition of good education. Some perhaps may recall this as a time when the nation forgot that education has a greater purpose than preparing our children to compete in the global economy.
This is not just a critique of educational policy. The logic of this paragraph applies just as effectively to an economic policy in which the concept of recovery seems to signify only for the financial sector. Perhaps it even applies to a foreign policy that is concerned more with supply chain management than with our country’s ability to act as an honest broker in resolving global crisis situations.
The Obama Administration has provided a painful reminder that, while the people choose their President, they have no voice in who advises him. Once in office, Obama’s audacity amounted to casting his lot with the 1% and giving no thought to dancing with those who brought him to the Inaugural Ball. The only thing more depressing than the fact that Obama deserves to share Duncan’s failing marks is that the Republican Party seems determined to put up a candidate likely to fail even more destructively.
Every account I have read of last night’s performance by the San Francisco Symphony of Henry Brant’s “A Concord Symphony,” an orchestral rethinking of Charles Ives’ second piano sonata, has mentioned the recurring presence of the first four notes of Beethoven’s fifth symphony in Ives’ score. Actually, Ives himself may be included among those accounts, since he discusses Beethoven’s presence in his Essays Before a Sonata. Fascinating as this may be, even more fascinating is the way in which Ives draws Beethoven into a good old-fashioned American hymn-sing. As I observed back in the early days of this blog, those four notes also introduce “Ye Christian heroes” (my original post mistakenly called them “heralds”) from The New Harp of Columbia, where the tune is called the “Missionary Chant.” This hymn is first referenced in the “Hawthorne” movement and then emerges in fuller form at the beginning of “The Alcotts.” The original piano setting is already very hymn-like; and Brant’s orchestration deliberately evokes a tired old pipe organ. Ives may have talked up Beethoven, but his ultimate purpose may have been to draw him into the Congregationalist cause!
The last time I took a swipe at Jonathan Franzen was when he shot off his mouth at the Cartagena Hay Festival, doing little more than advertising his ignorance of the nuts and bolts of reading practices. In that post I could think of no better source than Mose Allison for my punch line.
Apparently, however, the folks at the London Telegraph find that Franzen makes for good copy. As a result they followed him to New Orleans, where he made the following proclamation:
Twitter is unspeakably irritating. Twitter stands for everything I oppose.
It's hard to cite facts or create an argument in 140 characters. It's like if Kafka had decided to make a video semaphoring The Metamorphosis. Or it's like writing a novel without the letter 'P'.
It's the ultimate irresponsible medium. People I care about are readers... particularly serious readers and writers, these are my people. And we do not like to yak about ourselves.
Apparently Franzen has never heard of haiku; or perhaps he has dismissed the form as illegitimate, which might be because he lacks the skill to read one. Whatever the case may be, the Tweeters of the world seem to have united against him (having nothing to lose but their time), creating the hashtag #JonathanFranzenHates as a vehicle for retaliation. I have to say that, while I am not personally big on Twitter, because of that time-sink factor, I really enjoyed the response by Minnie Driver:
This almost comes down to making the 140-character limit serve the constraints of haiku, which strikes me as the unkindest cut one could give to Franzen!
Much of my Examiner.com writing this season has been occupied with centennials. The first half of 2012 is also the second half of the Centennial Season of the San Francisco Symphony, which gave its first performance on December 8, 1911. However, a celebration closer to my personal interests is that 2012 is the centennial year of the birth of John Cage on September 5, 1912.
In this austere context it may seem a bit frivolous to celebrate the centennial of a cookie, but 1912 is also the year in which the first Oreo cookies were baked by Nabisco in their Chelsea factory in New York City. (The Wikipedia entry does not give a specific date.) It would not be an exaggeration to say that the Oreo is no more an ordinary cookie than Cage was an ordinary composer. When my wife and I moved to Singapore in 1991, expatriates were still telling stories about the enormous line that formed outside Jason’s when this goodie first went on sale there. It is therefore no surprise that BBC News has reported celebrations of the Oreo Centennial not only by flash mobs in the United States but also by (presumably more subdued) celebrations in China, Saudi Arabia, and Venezuela. Considering the current global tensions, one would think that the Oreo should be seriously investigated as a vehicle for world peace. That would probably bring a beatific smile to Cage’s face, were he to be informed of how his anniversary was being shared.
The Old Testament prophet Isaiah may have spoken of a time when nations would “beat their swords into plowshares;” but Israeli Prime Minister Binyamin Netanyahu seems to be paying as much attention to these words as Isaiah’s contemporaries did. He is apparently more interested in saber-rattling, particularly when it comes to standing firm against any objections President Barack Obama may have towards Israeli domestic and foreign policy. Netanyahu is in town for the annual meeting of the American Israel Public Affairs Committee (AIPAC); and, while Obama’s address last night tried to focus on diplomatic channels to address the possible nuclear threat from Iran, all that seemed to matter to Netanyahu was reading Obama the riot act over Israel’s “right” to act unilaterally against Iran.
It is worth noting the timing of the AIPAC event. The 2008 meeting provided a platform for Presidential candidates to woo this lobby for its support. Sure enough, Mitt Romney, Newt Gingrich, and Rick Santorum are all scheduled to address the convention this year. It is a bit ironic that they will be doing so on Super Tuesday, right after each candidate has taken his last shot at winning over those who will vote on that day. Since Romney has now gone on record with the statement that “if Barack Obama is re-elected, Iran will have a nuclear weapon,” one can imagine that he is a prime candidate for Netanyahu’s blessing; but whether that will transfer to AIPAC support remains to be seen.
I have to say that I am generally sympathetic with the misgivings that Joshua Kosman voiced in the Sunday Datebook section of today’s San Francisco Chronicle. The title of his article is “Maverick idea for Symphony festival - new players;” and the basic thesis (in my own words) is that there is a disconcerting been-there-done-that feel to the San Francisco Symphony American Mavericks festival that is about to begin. However, if the lion’s share of this festival amounts to revisiting music performed at past festivals, where should we be looking?
As I thought about this, I realized that some of the most striking new music I have experienced does not come from American composers. Put another way, the “Maverick action,” so to speak, is in Europe and Asia, rather than in the United States. We got a warning signal that this might be the case when Sofia Gubaidulina was composer-in-residence with the Symphony; and no one has shaken the tree quite the way she did during her residency. So, could it be that, if we want to find “mavericks” these days, we need to look to post-Communist Eastern Europe, Finland, Japan, and possibly even China? If so, then what does that say about the talents of future American composers?
The latest issue of The New York Review of Books has John Banville’s review of the second volume of the letters of Samuel Beckett, covering the period between 1941 and 1956. Banville chose to begin with one of Beckett’s most famous passages, from the beginning of Worstward Ho:
Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.
Banville called this Beckett’s “negative aesthetic.” I am sure that is how it struck those who read it when the book first appeared. On the other hand, how many Silicon Valley evangelists are out there who have pretty much made their careers by flogging that “Fail better” principle? Could it be that Beckett was the prophet for the century he never lived to see, or has Silicon Valley simply appropriated the frustrations of the artist and turned them into the motivating force for the entrepreneur?
I’ll say this about Apple. Now that it seems as if the eyes of the entire world are on Foxconn, they at least appreciate the need to clean up their public image. What better way to do this than to promote Apple as a significant provider of American jobs? Here is how Don Reisinger reported the strategy for his The Digital Home blog on CNET News:
The Cupertino, Calif.-based company today released the findings of a study conducted by the Analysis Group that found that it has either "created or supported" 514,000 jobs across the U.S. That figure includes 304,000 current jobs across a wide array of industries, including engineering, manufacturing, and transportation. The balance comes through the so-called "iOS app economy."
Fortunately, Reisinger has been in this game long enough to know better than to take any such claim at face value. Therefore, he did some math. Without even venturing into any questionable issues of that “app economy,” he came up with the following results:
In studies such as these, figures need to be taken with a grain of salt. Apple currently only employes [sic] 47,000 people in the U.S. And although its claims that the iPhone and iPad, among other products, have increased manufacturing jobs across the country make sense, not everything holds up. For example, Apple is staking claim to the workers at FedEx and UPS that deliver its products to customers, without acknowledging that even before the iPhone and iPad were around, they had jobs delivering other packages.
The math might also be a little fuzzy. In order to arrive at the employment figures, Analysis Group took the entire amount Apple paid out for goods and services this year and applied that to the Type 1 employment multipliers used by the U.S. Bureau of Economic Analysis. In other words, it's a lot of guesswork.
In other words the whole analysis amounts to the latest-generation shell game. If you want to join our Government in playing that shell game with Apple, I bet there’s an app for that!
I read Sameer Rahim’s piece in this morning’s London Telegraph, about the controversy arising over a billboard promoting the return of Mad Men, with great interest. The billboard depicts the “falling Don Draper” image, which has been in the opening credits since the series began; but the background for this image connotes (but does not depict) the World Trade Center towers, thus suggesting an association with those who chose to jump from the burning towers as an alternative to an unknown, and possibly more horrific, death. My guess is that the billboard was designed to spark controversy. The question is what sort of controversy was anticipated.
Did AMC, for example, anticipate the following statement from Nancy Nee, sister of one of the firefighters who perished on 9/11?
It seems that Hollywood, and now advertising, doesn’t care about the sensitivities of the families and New Yorkers.
With all due respect to Nee, I would posit that the advertising industry, as well as most Hollywood production companies (which, as we know from Morgan Spurlock’s recent documentary, are basically run to serve the advertising industry), does care about personal sensitivities. If they did not care, they would not succeed in flogging stuff for their clients; and, following the logic of Darwinian selection, they would not remain in business very long. The same would hold for product placement strategies in both films and television programs.
What Nee has failed to recognize is that caring is a statistical matter. No matter what you do, there will be some statistical distribution of those whose attention is attracted, those who are offended, and those who do not react very much one way or the other. What the advertising industry needs to recognize in order to survive is whether the statistics of those who are offended are significant enough to outweigh those on whom the idea registers in a positive way. In other words Nee has been unpleasantly confronted with her own statistical insignificance, an insignificance that seems to have been reinforced (perhaps even more unpleasantly) by the get-over-it comments and exchanges attached to Rahim’s article.
Personally, I sympathize with Nee. However, my sympathy is grounded in the many frustrations I encounter when I am confronted with my own statistical insignificance. In a way the Internet has reduced us all to such insignificance, even if we are not explicitly aware of it. Indeed, the comments to Rahim’s article make it clear how good so many of us are at denying that insignificance, even if it involves reacting to evidence to the contrary through rage.