The classic good Samaritan example is intervening at a traffic accident: if you move a victim out of the road so they won't be run over by other cars, you should not be prosecuted for damage caused by the act of moving them. It makes sense, right? The law should not stop people from doing the right thing. Now this law has entered digital policy. A recent example is the European Parliament Internal Market Committee report The functioning of the Internal Market for Digital Services: responsibilities and duties of care of providers of Digital Services:
6.2. Good Samaritan Paradox
One point of criticism to exclude active role hosting providers from the liability privilege of Article 14 E-Commerce Directive is the so-called “Good Samaritan Paradox”. The “Good Samaritan Paradox” is meant to describe the following: Article 14 E-Commerce Directive with its model provider being neutral and passive may disincentivise the hosting provider from taking precautions against infringements due to its fear of losing safe harbor protection.
Does this sound familiar? If so, it's because we've heard it before. Big Tech loves to say it has no legal mandate to intervene against illegal content. In the next breath comes the solution: if only the EU had a good Samaritan clause like the one in Section 230 of the US Communications Decency Act, nothing would stop it from keeping its services clean.
CDA230 works as both "a sword and a shield", explains Canadian foreign trade expert Hugh Stephens. The "shield" is the intermediary liability privilege often called "safe harbour": service providers are not liable for what their users do (which has created an ocean of nosebleeds, but let's save that for now). The "sword" is the good Samaritan clause: service providers are protected from consequences if they take action against user content. Such as fact-check labels on tweets. Or taking down Nazi videos. You get the picture.
According to Big Tech, EU law only has the shield, not the sword. If only they could also have the sword…
Except that even in the US, where that sword is available, there's not a lot of sword-wielding going on. Twitter's fact-check labels are the exception (and very likely protected as free speech anyway, never mind the good Samaritan clause). Rather, CDA230 is used as an excuse not to take action, for example against sex traffickers (sorry not sorry for using the same example as in the last post). Or as Hugh Stephens puts it:
Unfortunately it is the shield aspect of the legislation that has been most often invoked by internet platforms, allowing them to ignore all sorts of abusive material on their sites on the basis that they are merely passive bulletin boards, and not responsible for content posted by others. Thus hate speech, content promoting terrorism and violence, revenge porn, sex trafficking, and so on has been allowed to proliferate on the internet with no legal recourse against the platforms providing access to the material. In some cases, platforms have had no incentive to remove access to objectionable material because they have been able to monetize it by attracting consumer eyeballs and thus advertisers.
That's the Good Samaritan Paradox Paradox: the US example shows that even if EU law embraced the Good Samaritan, the internet would be no better off. It's not about the law, it's about the platforms' lack of will.
Back to the IMCO report: it turns out the platforms already have the mandate to take action against illegal content without risking liability anyway:
/../ it is not the “active role” to identify infringements which leads to the hosting provider losing the liability privilege of Article 14 E-Commerce Directive. Rather, it is the active role to promote, present or organise the content. With such an understanding of “active role” no “Good Samaritan Paradox” will emerge from the Article
Good Samaritan Paradox Paradox Paradox, anyone?