What kind of responsibility should the intermediaries of digital media have? This question is key to how the online world develops. Different intermediaries have come to different conclusions. Most advertisers, for example, want to make sure their ads don’t end up on illegal websites, which is why the ads found on, say, pirate streaming services tend to be for pills that increase the size of various body parts or for shady casinos. Payment services are another example: they tend to see themselves as part of society rather than of the counter-culture, and will take action against money laundering and terrorism funding when prompted. Others come to the opposite conclusion. Telecoms, search engines and internet platforms often subscribe to the ideology of internet freedom, which holds that any problem the internet may cause is less important than keeping the internet unregulated, and that any step in the direction of regulation will inevitably end in a surveillance state of the North Korean flavor. (Or variations thereof; surely you have heard it many times.) Of course this is a convenient narrative, a shield against interference from government.
Yesterday, Google appealed the EU anti-trust case against its shopping comparison service. This was expected, and it is of course fair to settle the matter in the courts. But it is also an example of precisely the kind of intervention that the open-internet narrative serves to avoid. After-the-fact anti-trust action is, however, less of a problem than actual responsibility for content. Legal processes take years, and even if fines can run to billions of euros, there is plenty of time to make money along the way.
But the laissez-faire ideology has limits, even for tech companies and telecoms. A recent example: after the neo-Nazi terror attack in Charlottesville, white supremacist content was removed, banned or denied service by domain providers, web hosting services, cybersecurity firms, software providers and others. Netopia applauds these acts of responsibility, but must admit that it is difficult to follow the logic. Why is it not important to keep the internet “open” in this particular case? Why is the line drawn there? And who decides where that line is to be drawn? It is tempting to conclude that it is the preferences of the tech companies’ managers (or boards, or owners) that decide what is available and what is not. Far from democratic, I think you will agree.
Sexual abuse of children (or depictions thereof) used to be the exception that all but the most radical internet freedom activists could agree on. ISPs voluntarily block access to such websites. Google search prevents such links from appearing in search results. But this line appears to be blurring. In the US, a proposed change (known as the Stop Enabling Sex Traffickers Act of 2017) would limit the immunity from liability for intermediaries that host, or provide ad revenue for, online sex trafficking services. Case in point: the classified ads service Backpage.com, which accounts for 73% of all child trafficking reports from the American public. Oracle is the only Silicon Valley firm that supports the bill; others push back, saying that it would undermine protection for legitimate companies.
The question is: why won’t the same companies that take down white supremacist content do the same to stop child trafficking? It’s an honest question. I don’t follow. My guess is that it has to do with legislation. Maybe when public opinion is as strong as it was after Charlottesville, Big Tech can act voluntarily, but it won’t accept any legal restriction of intermediary immunity.
The answer could be a change of attitude: to accept that the idea of an internet regulated not by law but by tech has come to an end, and that the online monopolies have to do more than provide free services in order to be responsible corporate citizens. As pressure mounts from various directions, this is almost inevitable. But of course the consequences are dire: it would mean taking responsibility for a lot more than neo-Nazis and child abuse. Transparency about what content is acceptable and what is not would be a good first step.