Writing with an Eraser: The Online Platforms’ Position on the Digital Services Act

The point has hit home: something is wrong with the way the internet works, and online platforms are part of the problem, and maybe part of the answer too.

This is no longer a topic of debate: the platforms’ EU lobby, EDiMA, recently published a paper that concedes this point. At first glance it also suggests some steps towards how platforms can be part of the answer, but on closer inspection those suggestions sort of fade out. I’ll come back to that in a bit.

First, would it not be great if there were some manner of legal certainty online? For everybody: internet users, infrastructure providers, services and all others. Except it’s really difficult to catch the real culprits: the troll farms, spambots, darknet drug dealers, pirates, virus-writers and that lot. That’s why governments look to the intermediaries for help: telecoms, platform providers, payment services. They are not the bad guys, but they provide the tools and have the means to stop those who are. Similar to how governments task banks with taking action against financial crime.

The EU does not “limit the liability of online service providers whose services are abused by others”. It only limits the liability of passive online service providers whose services are abused by others.

However, there is one problem: not all intermediaries are the same, and if they carry too much responsibility, it may conflict with other things we value: privacy, freedom of information, competition and such. Where to draw the line, then? Here’s an idea: make a distinction between passive and active intermediaries. Passive intermediaries are those who only provide infrastructure, similar to roads and bridges (which know nothing about the vehicles that travel on them). Active intermediaries are those who interact with the traffic: analyze it for monetization, or harvest the data and sell it to third parties. Makes sense? Sounds good? Great, because that is exactly what the EU thinks.

This is how the famous E-Commerce Directive is designed (read Recital 42!) and how the Court of Justice of the European Union interpreted it in the L’Oréal v eBay case. Passive intermediaries, those who only provide the “dumb pipe” of the internet, are protected from liability for what their users do. Active intermediaries, on the other hand, must take action against foul play on their systems. Nice and clean: if you’re an internet company, you can choose to be passive or active, depending on how you want to run your business. Except online platforms appear not to like this deal; they want active to be passive. Bear with me.

If you have followed the debates around the internet and copyright and such over the last decade and a half, you have likely noticed that pirates love to stick labels on their opponents’ arguments: “Straw Man”, “Moving the Goalposts”, “Double-Think” etc. Some of these labels even have Wikipedia entries! It looks like EDiMA read the same playbook and decided to go big, because their paper on the upcoming EU Digital Services Act ticks every box. It’s called the “Online Responsibility Paper”, a title which may trigger the Spider-Sense for some people. Let’s take a look:

Strawman

A strawman is the technique of claiming your opponent holds a position he does not, and then attacking that position instead of the real argument. The EDiMA paper claims the current rules for intermediaries create a “perverse incentive” whereby platforms are “discouraged” from taking action against illegal content on their systems. Ignorance is bliss: as long as platforms can claim they have no knowledge of illegal content, they cannot be held liable. But that, of course, also requires ignoring things like recommendation algorithms, targeted adverts, search results linking to illegal content and other types of interaction that happen every day, every minute, every microsecond.

Poison Pill

This is the act of hiding something controversial inside something that looks perfectly normal or constructive. In this paper, it is the expansion of the exemptions from liability. The paper claims to suggest more responsibility for online platforms, but it also says “Built-in safeguards would be required to ensure that measures taken under this framework of responsibility would not compromise service providers limited liability”. What the paper really suggests is less responsibility!

The online platforms’ pipedream is to export a piece of US legislation, Section 230 of the Communications Decency Act, to other jurisdictions. It has already been attached to some trade agreements, and this paper smells very much like an attempt to inject the same thinking into EU policy (a pattern familiar from the TTIP trade negotiations, where the US tech sector tried to pull the same trick). The catchphrase to watch out for here is “Good Samaritan”. More on that below. The irony, of course, is that the Communications Decency Act was supposed to help intermediaries fight bad content. More on that here.

Smoke and Mirrors

This is when big words or generous gestures are used to obscure the real information. In this case, the proposed oversight body fits the bill. A real oversight body, such as those for news media, advertising, age ratings or other self-/co-regulatory systems, is independent of the industry it oversees, with a body of experts making the decisions, transparent processes and rules, an appeals procedure for all concerned and, not least, “teeth” in the form of sanctions.

The EDiMA paper “accepts” that some form of oversight might be required, but “it should not have the power to assess the legality of individual pieces of content and it should not be empowered to issue take-down notices”. So… no teeth.

(The paper says that the decision of what is and is not illegal should be the remit of the courts, but the point of self- or co-regulation is that it exists on top of the court system, interpreting the legal rulings. There are many benefits to this: for instance, self-/co-regulatory systems can handle many more cases than the courts and can quickly adjust to changes in the field they regulate. More on the benefits of self-/co-regulation here.)

Double-Think #1

War is peace. Freedom is slavery. Ignorance is strength. A motto from Orwell’s 1984: take a word and make it mean the complete opposite. The best example here is the principle of the “Good Samaritan”. The point of that principle is to protect from liability those who do take action; someone who tries to save lives at a traffic accident, for example, should not be sued for medical malpractice. The Samaritan is a third party trying to do good, not an accomplice to the robbers. Online platforms, however, want to use the Good Samaritan principle to protect them when they don’t take action against harmful content, again pretending not to know what’s on their systems. Double-Think.

Double-Think #2

War is peace. “Change to” is “continue”. EDiMA says: “The law should continue to assign primary liability to those users that act illegally or harm others and limit the liability of online service providers whose services are abused by others”.

Except the EU does not “limit the liability of online service providers whose services are abused by others”. It only limits the liability of passive online service providers whose services are abused by others. “Continue” means “change to”. Double-Think.

Turning a Blind Eye

Pretending not to know is an all-time favourite, as in “it’s an algorithm, we don’t know what it does”.

EDiMA’s paper blames the current rules, rather than its tech company members, for the failure to take action against online harms: “We need rules that allow us to take more responsibility online,” pleads EDiMA Secretary General Siada El Ramly. Really? Are we to believe current rules don’t allow tech companies to act against illegal content? What happened to all those boasts by Mark Zuckerberg about Facebook’s unprecedented actions to fight this or that? (Such as banning “sexy” emojis, lol.) The European Commission, in its wisdom, pointed out back in September 2017 that platforms can and should do more under existing law, without risk of losing their intermediary privilege. Conveniently ignored.

Moving the Goalposts #1

If you are losing an argument, try to change the objective. If, let’s say, you are arguing about video-assisted referees in football and try to get out of trouble by saying “well, it will never be 100% fair anyway”, you have moved the goalposts, because nobody said the goal was 100% fairness. (For the record: this writer is no fan of VAR, but that’s a different story!)

The EDiMA paper says “There are valid concerns about the abuse of online service providers to disseminate both illegal and harmful content online”. Really? I thought the problem was that online service providers interact with and promote that content. Goalpost move fail.

Moving the Goalposts #2

The paper makes many references to the courts, as if something is only illegal once a court has found it to be illegal.

If it’s illegal by law, it is illegal.

But, as Swedish Centre Party leader Annie Lööf once (maybe) put it: “In Sweden, it is forbidden by law to be a criminal”.

If it’s illegal by law, it is illegal. Courts interpret the law by trying particular cases, but of course not every crime makes it to court. That don’t make those crimes legal, folks!

Blurring Lines

If the definitions are not to your liking, try obfuscating them. The paper’s handling of the distinction between passive and active, as discussed above, fits the bill. Another good example is the way the notion of “filtering content” is treated as anathema. Except online platforms already filter lots of content; Gmail does very well with spam, for example. Filters are great, at least when the platforms get to decide for themselves.

Arbitrary Distinction

The opposite of blurring the lines. Here EDiMA wants to draw a distinction between illegal and harmful content, where the latter is supposed to be harmless, or at least not bad enough to require any action. Except some of the most toxic online phenomena, such as hate speech, fake news, online bullying, troll farms and many others, may not be illegal but are harmful nevertheless. Ignoring all those would be… flagrant.

If EDiMA’s paper is difficult to decipher, here is a much more straightforward wish-list. Read that instead.

Big Tech wants to stay clear of regulation, but is more than happy to regulate us, who depend on the services they provide.

The main takeaway from this paper is no surprise: Big Tech wants to stay clear of regulation, but is more than happy to regulate us, who depend on the services they provide. The self-image is that platforms are victims of abuse by malicious users and need to be protected from liability; ever more intermediary privilege is the answer. With that starting point, making laws to make them act more responsibly becomes like… writing with an eraser.

Rather than changing the law to fit their business, how about the other way around? Did somebody say “you need to find new business models”?