Tim Berners-Lee is back in the news again. As in the past, it would appear that the combination of his reputation as “inventor of the Internet” (whatever that means) and his knighthood have led him to believe that he can pontificate about anything. Apparently it has only recently come to his attention that, if you create a medium through which anyone can say anything to anyone else, then everyone will. In other words it would appear that Berners-Lee has not yet grasped that phenomena such as fake news and hate speech are the inevitable products of what the Internet is and, for that matter, what Berners-Lee claims he wants it to be.
Back in November of 2013, this site took Berners-Lee to task on the basis of an article summary on the BBC News Web site that quoted him as saying that the “democratic nature” of the Internet was being threatened by a “growing tide of surveillance and censorship.” Apparently, it took him over two years to realize that, where information is concerned, an unregulated democracy tends, by its very nature, to produce excess noise to the detriment of the signal it was intended to carry. If one wants to filter out the noise, then one must look for it (surveillance) and then design a filter for it (censorship). Before trying to unpack Berners-Lee’s failure to grasp this fundamental reality, it is worth recalling his own origins, as well as those of the Internet.
The important thing to remember is that Berners-Lee is a computer scientist whose skills led to his working as an independent contractor for CERN, the European research laboratory specializing in the challenging domain of particle physics. It subsequently became the home of the Large Hadron Collider, the primary instrument involved in the discovery of the so-called “God particle,” known more accurately as the “Higgs boson.” CERN is a perfect example of people with a wide diversity of cultural backgrounds coming together to work on shared problems. In that context it is easy to imagine Berners-Lee recognizing that communicating through a computer network could enable an even larger population to share their work and their results. He went about designing and implementing an infrastructure to enable that kind of world-wide sharing; and the result was the World Wide Web.
Berners-Lee then became an evangelist for making the Web available to as many people as possible. The result tends to reflect the joke about the Statue of Liberty in the Beyond the Fringe “Home Thoughts from Abroad” routine:
After they put up the Statue, some idealistic Johnny decided to put the words “Give me your tired, your poor, your huddled masses” on the base. Well … people did!
Berners-Lee’s mindset never seemed to recognize that his evangelism would attract more than scientists or that the Web would be used for anything other than doing more science. Even the pre-Web Usenet community showed more common sense: those responsible for regulating discussion groups came down like a ton of bricks on the very first effort to advertise through spamming.
According to a BBC News story that appeared over this past weekend, Berners-Lee finally seems to have recognized that an unregulated Web is not good for much of anything, particularly his dearly beloved science. For example, he would like to see regulation of “unethical” (those quote marks come from the BBC, not me) political advertising. Unfortunately, he thinks this can be done by putting “a fair level of data control back in the hands of the people,” failing to recognize that “the people” are the ones putting the data he abhors into the system in the first place.
Fortunately, there is at least one government out there with a more realistic model of regulation, and it will be interesting to see if they can put it into practice. In an article that showed up in today’s New York Times, Melissa Eddy and Mark Scott reported that Heiko Maas, Minister of Justice and Consumer Protection in Germany, is proposing that the technology companies responsible for maintaining social networks, such as Facebook and Twitter, be fined for the presence of content that would constitute hate speech or fake news. In other words, the only way you are going to get people to fix a problem is to penalize them for failure to do so, and whopping fines can make for a very effective penalty.
I applaud Maas’ effort, and I wish him well. However, whether we like it or not, this kind of regulation is likely to turn into a losing game of Whac-A-Mole. Shutting down one source can easily lead to another popping up, and probably more than one of them. The fact is that just trying to figure out how hate speech can propagate may be a major research problem. It is one thing for an extremist to find a sympathetic blog and then tell all of his friends about it, but what about the process of discovery itself?
Someone I know rather well tripped over this problem the other day. She was trying to track down information about Jews in Eastern Europe prior to the Holocaust. She typed a search query into Google and discovered a relevant Wikipedia page as the highest-ranked result. On the other hand, the second-highest-ranked result turned out to be an anti-Semitic blog maintained on a WordPress platform. In other words, the Google search result was not only unreliable for her needs, it was also disturbingly offensive. Thus, regulating the content itself is definitely a step in the right direction; but, if we are really interested in regulating sources of hate speech or fake news, we have to think about not only those sources but also the ways in which they show up in search results.
Now we are in territory that, as some of Berners-Lee’s professors may have said on more than one occasion, is “beyond the scope of this course!”