On Friday the social network announced that it has suspended over 125,000 accounts since the middle of 2015 for threatening or promoting terrorist acts, most of them related to ISIS. Twitter also said that it has expanded the team that reviews tweets flagged for terrorism so it can respond more quickly, and has enacted strategies to help suss out potential terrorists even before they’re flagged.
“Like most people around the world, we are horrified by the atrocities perpetrated by extremist groups,” the company said in a blog post. “We condemn the use of Twitter to promote terrorism and the Twitter Rules make it clear that this type of behavior, or any violent threat, is not permitted on our service.”
Terrorism on the platform has become a big problem for Twitter, forcing the company to rethink its anything-goes policy. Friday’s announcement is an indication of just how much the company’s zealous free-speech stance has softened under pressure. Twitter had previously refrained from hunting for terrorists on its network, instead waiting for ISIS accounts to be flagged by its users. But now Twitter will investigate “other accounts similar to those reported” and use “proprietary spam-fighting tools to surface other potentially violating accounts.”
The number of terrorist accounts already suspended is fairly astounding. By contrast, research from the Brookings Institution indicated that ISIS supporters used some 46,000 Twitter accounts between September and December of 2014. Back then, Twitter’s approach was fairly hands-off, but that changed after videos and images of journalist James Foley’s beheading were circulated on the network.
Recently, the public pressure for Twitter to do something about terrorism has intensified. At a recent summit on terrorism attended by bigwigs from both the White House and Silicon Valley, government officials suggested that the heads of tech companies like Twitter should do more to proactively combat terrorism, perhaps by creating some kind of technological system that could detect, measure, and flag “radicalization.” Last month, a lawsuit was filed against the company by a woman who cited Twitter’s lagging approach to terrorism as a contributing factor in her husband’s death in an attack in Jordan. The lawsuit will likely be dismissed, since the Communications Decency Act clearly protects platforms like Twitter from liability for the content users post, but in the meantime it still went viral.
Twitter said that it has already seen results, “including an increase in account suspensions and this type of activity shifting off of Twitter.”
That terrorist activity is shifting off Twitter is not surprising. According to a recent report from the U.K. Parliament’s intelligence and security committee, part of what makes online terrorist behavior so difficult to tamp down is that it is incredibly adaptive.
“A video rip-off of a copyrighted movie can’t change its characteristics to avoid detection,” Daniel J. Weitzner, an MIT professor and former deputy White House CTO wrote in reaction to the findings. “Nor can a child abuse image morph into something else. However, terrorists know they are being watched so take steps to avoid detection.”
There is little question that Twitter savvy has helped terrorist groups like ISIS, allowing them to easily spread their message and recruitment efforts far beyond the Middle East. But even if you could get all of the terrorists off Twitter, they would probably just go somewhere else.
And in the meantime, we’ve left it up to Twitter to decide who is a terrorist, and to draw the line between what makes something propaganda and what makes it political speech.
“There is no ‘magic algorithm’ for identifying terrorist content on the internet,” the company said. “Global online platforms are forced to make challenging judgement calls based on very limited information and guidance.”