In the wake of the London terror attack, UK Prime Minister Theresa May has lambasted internet companies, accusing them of providing a safe space for terrorist ideology.
A day after the London Bridge and Borough Market attack, May accused the firms of giving “this ideology the safe space it needs to breed.” She pressed for “international agreements that regulate cyberspace to prevent the spread of extremist and terrorism planning.”
Her statements have once again pushed the debate between digital privacy and security to the fore.
Tech companies like Facebook, Google, and Twitter have denied these assertions, stating that they are already taking concrete steps to prevent and remove extremist content. According to Facebook’s Director of Policy, Simon Milner: “We want Facebook to be a hostile environment for terrorists. Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it — and if we become aware of an emergency involving imminent harm to someone’s safety, we notify law enforcement.”
Understandably, given the sheer volume of material involved, it is quite difficult for these online platforms to moderate everything uploaded to their sites. Some critics have called Theresa May’s demands dangerous, disproportionate, and “intellectually lazy.”
According to Business Insider, Facebook already prohibits content that supports terrorist activity and lets users report potentially infringing material to human moderators. It also uses technical solutions, such as image-matching technology that checks new photos against those already banned from the platform for promoting terrorism, and it contacts law enforcement if it sees potential evidence of a forthcoming attack (or an attempt at human harm more generally). Google removes links to illegal content once notified, while YouTube takes down inciting videos and bans accounts believed to be operated by agents of foreign terrorist organisations. Twitter suspended 376,890 accounts in the six months leading up to December 2016; of these, 74% were detected by its internal tools, and just 2% came from government requests.
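To illustrate the idea behind the image-matching approach described above, here is a minimal, hypothetical sketch in Python. It uses a toy “average hash”: each image is reduced to a 64-bit fingerprint, and a new upload is flagged if its fingerprint is within a few bits of a previously banned one. This is only a conceptual example; the function names are invented, and production systems at companies like Facebook rely on far more robust perceptual-hashing techniques than this.

```python
# Toy sketch of hash-based image matching (NOT Facebook's actual system).
# Images are assumed to be pre-processed into an 8x8 grid of 64 grayscale
# brightness values (0-255) before hashing.

def average_hash(pixels):
    """Return a 64-bit fingerprint: bit i is set if pixel i is above the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i
    return bits

def hamming_distance(a, b):
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")

def is_known_banned(candidate_hash, banned_hashes, threshold=5):
    """Flag an upload whose fingerprint is near any previously banned one."""
    return any(hamming_distance(candidate_hash, h) <= threshold
               for h in banned_hashes)

# Example: a banned image, a slightly brightened re-upload, and an
# unrelated image with a different pixel pattern.
banned = [10] * 32 + [200] * 32
reupload = [12] * 32 + [198] * 32
unrelated = [10, 200] * 32

banned_db = {average_hash(banned)}
print(is_known_banned(average_hash(reupload), banned_db))   # True
print(is_known_banned(average_hash(unrelated), banned_db))  # False
```

Because the hash captures the overall brightness pattern rather than exact bytes, small edits (re-compression, slight brightness changes) still map to nearby fingerprints, which is what lets a platform catch re-uploads of known material.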