Social Media Sites Should Remove Hate Speech Or Face The Consequences

Convershaken Staff
May 24, 2017

Facebook, YouTube, Twitter and other social media platforms could soon face stricter regulation in the European Union that would require them to remove videos containing hate speech. Similar restrictions are long overdue for text-based posts, tweets and comments.

After the recent terror attack in Manchester, thousands flocked to social media to rant about the existential threat of Islam and call for violent retribution against Muslims - before the identity of the attacker was even known. The flurry of terror attacks in the past two years, coupled with politicians blaming refugees, immigrants and religious minorities for their nations' problems, has led many people to paint entire religions as the enemy and vocally endorse reprisals. Many are justifiably emotional, outraged and desperate to feel safe and in control; they may be venting rather than seriously condoning mass murder. However, some will be deadly serious, and a few, eager to please their compatriots, may take up arms and follow through.

People should be free to denounce terrorist groups and express their anger, fear and anxiety in public. But inciting violence against innocents, or genocide on an unprecedented scale, crosses the line. The failure of Facebook, Twitter and other public-forum owners to remove hate speech and calls for the extermination of more than a billion people is unconscionable.

A few examples illustrate the gravity of the situation. Searching 'kill moslems' - 'moslem' being a spelling of 'Muslim' popular among self-proclaimed 'deplorables' - brings up a post that reads "KILL THE MOSLEMS WHEREVER YOU FIND THEM", followed by two emojis: a black-bearded face and a gun pointed to its temple. Typing 'exterminate moslems' into Twitter brings up tweets such as "exterminate them all...Moslems serve no purpose on our planet". And a recent comment on alt-right hub Breitbart, posted via the Disqus comment platform, reads: "the (terrorist) network is called 'islam' and every muslim in the UK is a member...no muslim stands in opposition." One silver lining is that Disqus appears to have ramped up its moderation efforts. Previous Breitbart comments about refugees included: "Keeping them bunched up in cities make them easier to target with 155mm howitzers" and "Send them ALL to San Fransico (sic)...then Nuke them".

Right-wing extremists can already gather on Breitbart, 4chan, the The_Donald subreddit and other niche forums. Google and Facebook have no obligation to provide a mainstream mouthpiece for any violent, intolerant views, whether they're aimed at Muslims or held by jihadis. True, coaxing extremists into the open allows them to be tracked more easily, and they're more likely to hear contrary views that encourage debate rather than feed the echo chamber. However, surveillance tools should be sophisticated enough to follow users of more niche, extremist websites. And by protecting hate speech, social media sites implicitly endorse it as communication fit for public consumption.

There are practical and technological barriers to stamping out hate speech. The recent leak of Facebook's moderation guidelines highlighted the enormous challenge of balancing open dialogue with censorship across a base of more than 1.94 billion monthly users. Although Facebook allows users to deny historical events such as 9/11 and to describe refugees as 'filthy', it bars violent or dehumanising language towards migrants and protects religious groups from abuse. Yet Facebook doesn't provide an option to report a comment as 'hate speech'; users can only flag it as advocating "violence or harm towards an individual or animal". Similarly, Disqus offers only 'flag as inappropriate'. In contrast, both YouTube and Twitter offer options to report content as hateful.

When Facebook, Twitter and the rest choose to leave hateful comments up, they grant them a veneer of credibility and amplify their reach, making themselves complicit in any crimes that result from those conversations. These sites should therefore be held responsible for policing hateful content on their platforms and for taking action against those who propagate dangerous lies and incite violence and genocide. If they fail to step up, we will all face the consequences. Legislators should push them to be socially responsible by mandating that they remove hateful language or face penalties.