Column: Cracking the Webnut

Every day, the debate on Internet regulation grows more heated, mostly because of the increasingly overlapping interests that populate the web: not only the Oracles, Microsofts and Googles, but also telcos, media providers, “brick and mortar” businesses, salesmen and intermediaries of all kinds. All these actors are now converging on the same stage, and inevitably start competing for the attention of the audience. And as they compete, the theatre takes off to reach the clouds – and not only metaphorically, since cloud computing is once again changing the dynamics of competition in this field.

Leaving this stage unregulated would have enormous consequences. In its early, less regulated years, the Internet was described as “Central Park after dark”: full of excitement and thrill, for sure, and often beyond real-world legality. But it was not a place where you’d send your kids, nor a place to take your girlfriend on a date. No country for old men, no place where you’d walk with a bag full of cash. Not a place for business meetings or picnics. Not a place where you’d sell your books or play your music.

The hot question today is how much police we should put in the park, and what types of activity should be deemed legal: a very difficult question, because many park visitors are used to quite libertine customs. The current social norms – which make up the netiquette – follow very different standards and criteria from what is deemed acceptable in real life. This is why opinions diverge so significantly on the future of the Net. Those who value the excitement and thrill of Central Park after dark want to keep the Net as free and end-to-end as possible, whereas those who arrive in cyberspace from real life don’t feel at ease in that world. The more the Internet permeates the lives of average citizens and businesses, and the more it leaves the underground world of techies and geeks to enter the living rooms of households, the more ground regulation advocates gain. And this is understandable: as US President Obama recently recalled, Internet thieves stole more than $1 trillion worth of IP-protected data worldwide last year. And recent research shows that many companies are subject to tens of thousands of cyber-attacks per day, some of which are very difficult to detect and resist.

However, regulating the Internet is easier said than done. The Internet is a tough nut to crack for governments: it is global, virtual, digital, de-centralised, user-centric, and respects no authority (except perhaps Google). Who can possibly regulate it, and how?

The fundamental regulatory, monitoring and enforcement tool on the Internet is technology, because it is as global, de-centralised and pervasive as the Internet itself – legal rules aren’t. As Larry Lessig noted back in the 1990s, in cyberspace it is code, the technological architecture of the Web, that determines what is possible, much more than law. When preferences and behaviour diverge and enforcement is prohibitively expensive, societies have to build locks and fences to protect property and ensure respect for the rule of law. The Internet, for a long time, was a commons without locks and fences, and the legal rules that would protect basic rights such as copyright or privacy were so far removed from the surfers’ ethical code that they were easily circumvented – remember the Brits forcing Indians to swear on the Bible that they would tell the truth at trial? Faced with the need to impose respect for real-life rules in a virtual, globalised environment, governments have little choice beyond changing the architecture of cyberspace itself: if you want to make sure cell phones don’t ring during a concert, you had better shield the concert hall, rather than put warning signs or even sanctions on the wall. Otherwise, monitoring and enforcement may be prohibitively costly. The same happens on the Internet: a password is a password – either you know it or you don’t. That’s why code – to quote Lessig again – can achieve the “perfect technology of justice”.

But how far can technology be pushed? The problem with technology is that it is so pervasive that it can be used both as a tool to define Internet users’ rights and as a tool to enforce non-Internet rights on the Web. A clear example of the former is DRM systems, which determine the possible uses of a given information good before it is introduced into cyberspace. An example of the latter is the French HADOPI law, under which technology is put in the hands of ISPs to inspect packets, detect copyright infringement and – after three warning messages – suspend the user’s Internet subscription. The first is a case of self-help – something similar to a lock on a door or a scooter, aimed at preventing theft in surroundings where law enforcement is insufficient. The second is a case of cyber-police, more similar to dawn raids to inspect company books.

The real battle over the “how” of Internet regulation is being fought along this thin red line. Openists claim that technology should not be used to regulate the Internet by adding more intelligence at the core, and certainly not as a weapon in the hands of cyber-policemen. On the other hand, the industry (on the copyright front) and new cybernauts (on the privacy front) want a safer environment.

Deciding how to regulate the Internet is also crucial to determining “who” should regulate. A broad definition of acceptable protection measures (e.g. DRM), reasonable traffic management practices (such as competitively neutral traffic prioritisation and application blocking), and codes of conduct for various Internet players can only emerge through co-regulation, with Internet players taking the lead and governments certifying the compatibility of those solutions with public goals. By contrast, invasive technology-based enforcement of (often obsolete) real-life rules must be mandated by governments and enforced by legal or administrative authorities (see the HADOPI 2 law, which requires a court decision).

How will we solve the puzzle? The compromise is hard to strike, but I see trends emerging. The most reassuring one is a call for global co-regulation to define what is acceptable on the Internet – provided that users are adequately represented in this co-regulatory effort. Co-regulation should lead to a better definition of users’ rights (including the “Internet freedoms”), reasonable traffic management practices, and non-discrimination obligations at all layers of the IP-based architecture. Governments around the world should then commit to respecting and enforcing these rules.

By contrast, any rule that strays too far from netiquette is doomed to fail. As a clear example, recent studies show that HADOPI is already being massively circumvented and that piracy is on the rise in France. Political scientists call this the “compliance trap”: never play games with techies, they’ll find a way to outwit you. Cyber-police is not a good way to crack the webnut: its technical solutions are, in turn, quite easily cracked.

In conclusion, policymakers will increasingly face a trade-off between two pretty bad options: adopting structural remedies based on technology (e.g. DRM), or strengthening enforcement through cyber-police. Tertium non datur, save for a combination of the two. My opinion is that the former approach, though very difficult, is far more likely to hit the target than the latter, especially if policymakers manage to bring all players to the table when deciding what should be possible in our brave new world.

Andrea Renda
Senior Research Fellow at the Centre for European Policy Studies (CEPS) in Brussels.

Editor’s note: This column was first published at Netopia.se in May 2010. It is proof of the author’s foresight and understanding that it remains just as relevant today.