Facebook likes to trumpet the value of "trusted referrals"--recommendations and ads carrying the endorsements of members of your friends list. But a new study from Jupiter Research, commissioned by analytics company BuzzLogic, says that consumers' purchases are more likely to be influenced by what they read on a blog than by what their social-networking rosters recommend.

Half of all those surveyed who identify as "blog readers" (people who read more than one blog per month, a fifth of total survey respondents) say that blogs are important to them when it comes to making purchasing decisions. But they don't necessarily find them all that reliable: only 15 percent of blog readers, and five percent of all those surveyed, said that in the past year they had trusted a blog to help them make a purchase decision.
That's still higher than the share of people who said they relied on social-network recommendations, though: 10 percent of "blog readers" and four percent of all those surveyed.
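A quick back-of-envelope check helps square the subgroup figures with the overall ones. Using only the numbers quoted above (blog readers are a fifth of respondents; 15 percent of them, and five percent of everyone, reported trusting a blog), this minimal sketch shows the arithmetic--the residual implies some non-"blog readers" also reported trust:

```python
# Sanity check of the Jupiter Research figures quoted above.
# All inputs come from the article: "blog readers" = 20% of respondents,
# 15% of them trusted a blog for a purchase, 5% of everyone did.
blog_reader_share = 0.20
trusted_among_readers = 0.15
trusted_overall = 0.05

# Blog readers who trusted a blog, as a share of ALL respondents:
implied_from_readers = blog_reader_share * trusted_among_readers
print(f"From blog readers alone: {implied_from_readers:.0%}")  # -> 3%

# The 2-point gap to the reported 5% must come from non-readers:
residual = trusted_overall - implied_from_readers
non_reader_share = 1 - blog_reader_share
print(f"Implied trusting non-readers: {residual / non_reader_share:.1%}")  # -> ~2.5%
```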
The survey's results are similar when it comes to advertising: a quarter of "blog readers" say they trust ads on the blogs they read (versus 43 percent who trust ads on "familiar" or mainstream media sites), but a slightly lower 19 percent say they trust ads on social networks.
So what does all this mean? Well, it's good news for BuzzLogic, which tracks blogger influence for clients and has seen blog advertising pushed aside a bit on Madison Avenue in favor of "appvertising" and social ads. Beyond that, the real takeaway is that the results suggest blog reading is less mainstream than you might think: Only a fifth of respondents say they read a blog at least once a month.
That's actually really surprising--or maybe blogs have become so ingrained on the Web that people don't even know they're reading them.
McCarthy was spot on in backing off from the study itself, and the circumstances under which BuzzLogic commissioned it, to ask the more fundamental question of what the results actually mean. I feel contentious enough, though, to counter her question with a deeper one: Can this study possibly mean anything? My point is that the entire Jupiter Research methodology may be too flawed to support any meaningful interpretation. The problem with any survey is that the questions often bias the answers; and since this was a commissioned survey, there is the added risk that the bias was induced by the sponsor.

If we want to be serious about the general question of utility, the survey is probably too blunt an instrument. We need a more ethnographic approach, one that examines what people actually do when they try to collect useful information before making a purchasing decision. Yes, information like that can be found on blogs; and those "trusted referrals" probably have at least some decision-support value. But how many users are out there who, out of either a lack of technical understanding or just plain laziness, run a Google search and hunt for things that look like opinions in the little content excerpts? How many of them can go to the next level and recognize which of those search results point to sites explicitly set up to collect reviews? How many know which results lead to an individual opinion (such as a blog), rather than a collation of multiple opinions?

Given the generally low numbers in this survey, we cannot dismiss that first (admittedly naive) segment without a better understanding of who they are and what they think they are doing. We may thus be wasting too many cognitive cycles on what is fundamentally a GIGO (Garbage-In-Garbage-Out) project!