Is Deleting Abusive Comments Stifling Freedom of Speech?

Earlier this summer Twitter came under heavy criticism in the UK, after a number of women had been subjected to bomb and rape threats on the site.

It all started with the seemingly innocuous feat achieved by campaigner Caroline Criado-Perez of getting the Bank of England to display a woman (Jane Austen) on the new £10 notes. This apparently infuriated a group of Twitter-users who proceeded to bombard Criado-Perez with abusive messages, such as “Everybody jump on the rape train – @CcriadoPerez is conductor”, “Everyone report @CcriadoPerez for rape and murder threats and also being a cunt #malemasterrace”, and “Rape threats? Don’t flatter yourself. Call the cops. We’ll rape them too. YOU BITCH! YO PUSSY STANK!” Lovely.

Despite a 21-year-old man in Manchester being arrested for sending a threatening tweet to Criado-Perez, the abuse continued – quickly expanding to target MP Stella Creasy after she publicly stated her support for Criado-Perez. Creasy promptly received a Twitter photo of a masked man wielding a knife.

Creasy suggested a day’s boycott of the site in protest over Twitter’s ineptitude in controlling trolling on its service.

This was followed by a number of female journalists – and historian Mary Beard – receiving a bomb threat.

Some people have accused those demanding that Twitter take action against these bullies of being pro-censorship. The solution, they say, would be for these women to simply close their Twitter accounts – a sort of “if you can’t stand the heat, get out of the kitchen” attitude to the problem. But isn’t that, in fact, to support censorship? These bullies are, effectively, threatening any woman who has the “audacity” to express dismay over sexism and discrimination in society.

This is why some writers, including Bonnie Greer and Liz Jarvis, opposed the #twittersilence action, with Jarvis tweeting: “The best way to stand up to bullies is to speak out.”

A petition calling for Twitter to add a “report abuse” button to tweets attracted 130,000 signatures. It worked. Soon after, Twitter unveiled its new in-tweet button and announced that it had put extra staff in place to handle reports, as well as updating its rules to make clear that abuse would not be tolerated.

I’m surprised it’s taken this long for Twitter to install such a button. For the majority of British online newspapers it’s always been standard practice to include such a feature on all comments posted by the public. Partly it’s because, in the UK, newspapers can be held responsible if a comment is libellous, but it also protects readers from being threatened and abused for voicing their opinions.

The Guardian website publishes community standards guidelines setting out what is not acceptable, and employs moderators who review comments that receive complaints, acting immediately if the guidelines have been broken.

Any time a contributor writes about gender issues these moderators end up on high alert, as such articles for some reason attract an extraordinary amount of bile and abusive comments, the vast majority from men who feel hard done by.

Some cyber activists proclaim that we should simply get used to and accept this abuse online – that it’s the price of freedom of expression. But even the European Convention states, in Article 10, that this right is subject to limitations concerning, for example, obscenity, sedition (including inciting ethnic hatred), sending articles that are indecent or grossly offensive with an intent to cause anxiety or distress – and threatening, abusive words that are likely to cause harassment, alarm or distress.

Those posting such comments on social media run the risk of arrest, but the debate still rages over to what extent the platforms themselves carry any responsibility – if any at all. The debate has, however, mainly focused on libel cases so far.

A recent ruling by a court of appeal in London concluded that a gap of five weeks between a complaint being made and the removal of allegedly defamatory comments on a blog post on Google’s Blogger platform left the company open to a libel action.

The ruling is far from demanding that technology companies police user comments as soon as they’re posted – but it means they cannot take a completely hands-off approach to what’s posted on their platforms. That any of these companies could even contemplate that they should carry absolutely no responsibility is puzzling. After all, they’re the ones holding the power to remove offending comments. It’s a responsibility traditional newspapers understood they had to take on board when moving online, so why would anyone think technology companies should be completely exempt?

No doubt the vast amount of media attention the “Twitter-storm” attracted played a role in Twitter taking action. Clearly the company has to deal with an extraordinary volume of comments – many, many more than any newspaper – and though its new “report abuse” button will increase the likelihood of abusive tweets being taken down more quickly, perhaps we need to look at a combination of remedies, starting with a wider societal refusal to accept troll behaviour.

As someone who has been on the receiving end of online abuse, including a death threat, as a result of writing opinion pieces for the Guardian, I’ve often felt it somewhat unfair that my identity and picture are publicly displayed next to my comments while the mob attacking me are able to hide behind a cloak of anonymity.

When English boxer Curtis Woodhouse finally got fed up with a Twitter troll earlier this year, he offered a £1,000 reward to anyone who could identify and locate the abuser. His Twitter followers obliged, and he soon set off to pay Jimmyob88, the abuser, a visit. Tweeting a photograph of the street where the troll lived, Woodhouse wrote: “Right Jimbob, I’m here. Someone tell me what number he lives at or do I have to knock on every door #itsshowtime.”

Realising the possibility of being confronted, Jimmyob88 quickly replied: “I am sorry it’s getting a bit out of hand. I am in the wrong. I accept that.” He later apologised to the boxer face-to-face on television, and Woodhouse graciously accepted.

The women who have been victims of Twitter abuse may not feel comfortable physically confronting their trolls – or even have the time to chase down hundreds of them. But instead of blocking the accounts of repeat abusers, only for them to pop up under a different pseudonym, a more effective solution could be to name and shame them.

It’s quite possible that the risk of having their picture and identity revealed for all to see would make trolls think twice before hitting return and firing off another tirade.

Helienne Lindvall
Columnist, The Guardian