Arbiters of Truth

Should internet platforms be responsible for what their users do? Many say no: the consequences would be unreasonable – should platform companies monitor everything users do? Others ask: what about the hate speech, fake news, threats, piracy and all the other bad things? Platforms deal with those issues with varying ambition, and it is difficult to see the logic. For example: Google changed its search results after users tagged images of dog faeces with “Michelle Obama” – so that an image search for the former First Lady would deliver… crap. Google’s intervention was a good deed. On the other hand, when US political commentator Rochelle Ritchie received a death threat via Twitter, the company said there was no violation of its terms and did not act. The person who made the threat was Cesar Sayoc, apprehended two weeks later on suspicion of sending mail bombs to several prominent Democrats and media figures in the US. The list could go on, and different companies may have different principles, but still: racist dog poo – no, death threats – yes. (By the way, make no mistake: platforms already monitor everything users do – how else could they sell personalized ads?)

Should internet platforms be responsible for what their users do? Policy-makers come to different conclusions (even from the same country). I give you three Brits:

(1) Sometimes. Last month, Julian King, the UK’s EU Commissioner, presented a proposal demanding that terrorist propaganda be speedily removed by intermediaries (King added that a crackdown on terrorist content is not censorship).

(2) Yes. A committee of the UK House of Commons suggests that platforms are neither publishers nor passive intermediaries, but a third kind of animal with special powers and responsibilities.

(3) No. Former UK deputy prime minister, the Right Honourable Sir Nick Clegg, commenting on his move to join Zuckerberg’s ranks, said to the BBC [about Zuckerberg]: “he believes in free speech but doesn’t want to be the arbiter of truth”.

Sometimes, yes and no – all three cannot be right at the same time!

If platforms use the safe harbour trump card (as they do today), there is no transparency or accountability. If fake news helps the share price, fake news we shall receive. On the other hand, we can’t have gatekeepers deciding what content is kosher and who gets to publish, can we? And if policy-makers want to regulate in detail what platforms can and cannot do (as the above examples suggest), we’re in for a long bureaucratic trans-Atlantic paddle.


Wild guess: internet companies will redefine their business so as not to be locked into the platform bracket. Your move, EU policy-makers. This can go on forever. Is there no way out? Of course there is!

First: transparency. This conversation appears to be alive, at least to some extent, in some of the big tech companies. A leaked document from Google (via Breitbart, of all places) suggests that there is an ambitious internal discussion around these topics. If the document is real, Google has worked with some of the best thinkers in this space – such as author Franklin Foer, whose book World Without Mind Netopia has reviewed here. The leaked slides talk about the responsibility of tech companies, the utopian narrative of Silicon Valley and how to square free speech online with a safe internet for all users. If it is real, it is insightful, serious and surprisingly humble. (If it is fake, it still makes some good points!)

But why keep it internal? Why can the rest of the world only take part through dubious leaks? A lot of people talk, write and even legislate about these things, but when Zuckerberg appears in hearings, he keeps his cards very close to his chest. When Alphabet (Google’s parent company) chairman Eric Schmidt appeared in Stockholm last November, he talked about his fear that governments would “break the internet”. When there is a policy Silicon Valley doesn’t like, it pushes back via various front groups. Why not be transparent about these issues, tap into the wisdom of others, be humble and say something along the lines of: “We see the problem, we don’t have all the answers – what are some of your best ideas?” Call it crowdsourcing, if you want the lingo.

Second: self-regulation. I have talked about this before. It is not the same as hiring thousands of moderators. Self-regulation means an independent body with a fixed rule set, the authority to sanction misconduct, a mechanism for accepting reports from anyone, and the possibility of appeal for both the accuser and the accused. Compared to legislation, it works fine in many industries and has the added benefits of flexibility when circumstances change (new types of problems) and of being adjustable to different jurisdictions. Consider the words of former digital commissioner Viviane Reding, speaking at the DLD Conference in Brussels in September:

In my political life, before I went to regulation, I always started with self-regulation. Only when it didn’t work, I would go to regulation. I saw with Big Tech that they could not care less about self-regulation. So, we had to go to regulation. And I saw also that regulation was not enough, and that is why I put the 4% of worldwide turnover in [into GDPR – editor’s note]. Because that made everybody understand us. I mean, it’s crazy. It’s like with children. You have to say that something happens if they don’t behave. And then they behave.

Panel at DLD Conference in Brussels, 5 September 2018 (Photo: Per Strömbäck)


Take it from Madam Reding: stop acting like children. Start behaving. Or there will be more 4%-of-global-turnover fines coming your way, Big Tech. (I have heard other policy-makers say things like “if we throw some of them in jail, that’ll teach them” – but that was off the record.)

Third – and this one is for the policy-makers and regulators just as much as for the tech companies – enforce the existing rules! The terms and conditions already contain all kinds of rules. Facebook, for example, requires users not to share material that is “illegal, misleading, discriminating, or false”. How can there even be a problem with fake news, if that is what the T&Cs say? Why do telecoms go to court to escape acting against piracy, when their subscription contracts say the broadband service may not be used for IP infringement? And what about Article 5.1(c) of the Electronic Commerce Directive, which demands that an “information society service” give “the details of the service provider, including his electronic mail address, which allow him to be contacted rapidly and communicated with in a direct and effective manner”? Really? Please give me the details of any of the services – Google, Facebook, Amazon, Uber, AirBnB etc. – that operate under the safe harbour principles, so I can contact them rapidly and communicate with them in a direct and effective manner. Good luck. What if the thing we need most is not new law, but using the laws we already have?

The answer is neither Yes, nor No, nor Sometimes. Rather, start with these three ideas. Then we can talk.
