Against the Clock: Can the EU’s New Strategy for Terrorist Content Removal Work?




The European Commission’s proposed plan to take down online terrorist content within an hour of publication seems limited in its effectiveness and risks ignoring other aspects of extremist activities online.

Throughout 2020, numerous incidents on European soil served as a reminder of the threat terrorism continues to pose to the continent. The recent series of attacks in France and Austria has led the EU and the UK to raise terrorism threat levels and rethink existing laws and policies to prevent and respond to terrorism. As part of the EU’s response to the heightened threat levels, the European Commission published a new ‘Counter-Terrorism Agenda’ in December 2020.

Instead of recommending tougher sentencing for convicted terrorists, as suggested by France and Austria following the attacks in late 2020, the Agenda emphasises European police cooperation and information exchange, and suggests strengthening the mandate of Europol, the EU agency for law enforcement cooperation. The Agenda also proposes the creation of a network of counterterrorism financial investigators and measures to ensure the physical protection of public spaces.

The Agenda’s main item on the prevention of terrorism urges EU member states to adopt a regulation on removing terrorist content online, first proposed in 2018. The regulation would oblige online platforms operating in member states to take down terrorist content within an hour of publication. The proposal also gives member states the ability to sanction non-compliance and includes complaint mechanisms for reinstating erroneously removed content.

The renewed focus on countering extremist content and ideologies online comes at a time when online activity has increased due to the global pandemic, amplifying fears about far-right, Islamist and other types of extremist radicalisation. The violence that unfolded at the US Capitol on 6 January and the resulting crackdown by mainstream social media companies on those involved in inciting or perpetrating the attack have once again fuelled the debate about the merits of deplatforming and content moderation. Indeed, despite ongoing content moderation efforts, which have been essential in counteracting the amplification of extremist messaging online, mainstream social media platforms and fringe communications forums remain important tools for the spread of extremist content and allow extremists to forge international ties.

Yet, while the internet clearly plays a role in spreading extremist propaganda, ‘consuming’ extremist content online is typically only one of many factors that contribute to a person’s radicalisation process. Therefore, preventive strategies that primarily focus on the mass removal of extremist content online are unlikely to address the wide range of factors that motivate individuals to act on extremist beliefs and attitudes.

Furthermore, such efforts have proven to be limited in their effectiveness as they typically fail to take into account the many different ways in which extremists use technology for the dissemination of propaganda, recruitment of new members, internal communication and facilitation of attacks. Arguably, content removal and deplatforming efforts might also have undesired side effects, for example, by causing extremists to migrate to more secure parts of the internet, therefore making it more difficult for law enforcement to detect their activities, or by reinforcing a sense of community and belonging among members of violent extremist groups.

What do we know about radicalisation online?

There is no universal pathway toward extremist violence and individuals are influenced by a complex combination of structural motivators, individual incentives and enabling factors. While many people experience some of the same conditions that drive certain individuals toward violent extremism, the presence of protective factors prevents most people from engaging in extremist violence. Similarly, we know that supporting extremist ideologies does not necessarily lead to participation in violence.

Technology can contribute to certain risk factors, but trajectories of would-be terrorists usually include offline as well as online elements. For example, the role of (offline) peer groups has been found to be particularly relevant in radicalisation and recruitment processes. Even lone actors, who often rely heavily on the internet, usually engage with other extremists in offline domains as well. Hence, placing too much focus on the removal of terrorist content on the internet cannot address the problem in its entirety as it risks disregarding the role of offline interactions, psychological stressors and other offline factors which might contribute to individuals’ trajectories toward violent extremism.

What do we know about the effectiveness of content removal?

Despite frequent criticism that social media companies are not doing enough to tackle extremist content on their platforms, tech companies are already removing extremist content at an impressive pace, particularly with the aid of automated detection and removal. Some social media companies have also been successful in reducing the reach of specific extremist groups by removing profiles and content associated with them, as with Facebook’s removal of content from the extremist group Britain First in 2018.

Nevertheless, technical and definitional limitations have significant impacts on the effectiveness of content removal that make the policies proposed by the European Commission difficult to implement.

While automated tools for detecting extremist content online are promising, the massive volume of new content constantly added to social media platforms means they typically produce high numbers of false negatives and false positives. Even the resource-intensive triaging processes already in place on large social media platforms, which combine automated tools with human moderators, are unlikely to filter out terrorist content effectively without also blocking legitimate content – especially within the one-hour window the regulation envisages. Smaller companies with fewer resources to hire human moderators are even less likely to achieve high accuracy rates in removals, or to respond quickly to complaints made through the proposed mechanism for reinstating content that was mistakenly removed.

Linked to the issue of effective detection of extremist content is a second limitation of content moderation – the presence of ‘grey zone’ content. Extremist groups and individuals produce a wide array of content, some of which does not, explicitly or even implicitly, condone violence or incite hate. Certain extremist groups and individuals purposely use humour and irony in their messaging to mask violent intentions and maintain plausible deniability. Deciding where to draw the line between acceptable and extremist content is particularly sensitive in the context of free speech and in light of the fact that banning content and accounts excessively can feed extremist narratives of censorship.

Questions around the ambiguity of content are further exacerbated by legal and definitional challenges. Not only do different governments use different definitions of terrorism and violent extremism (if they have an official definition at all), but national designation lists often only encompass a small fraction of active groups. The EU’s proposal provides a definition of terrorist content for national authorities to adopt, but rights groups have expressed concerns about free speech online, given the wide range of content encompassed by the definition.

Rather than focusing on rapid removal, one way to overcome issues with definitions and grey zone content would be to marginalise such material by demonetising it, disabling comments and removing it from algorithm-based social media recommender systems. Additionally, working with smaller and fringe platforms, and with online services that are not typically targeted, such as gaming platforms, to improve their efforts in countering extremism will be important to prevent extremist content from simply migrating there.

Overall, the narrow approach proposed in the regulation is unlikely to be effective in significantly reducing terrorist activities and tackling extremist ideologies online. The aim of regulating extremist content is important and the EU should take the presence of extremist content online seriously. However, a broader strategy aimed at addressing the many different ways in which extremists use the internet as well as the relevant offline factors would have a better chance of success than the narrow focus on rapid content removal.

The views expressed in this Commentary are the author's, and do not represent those of RUSI or any other institution.


WRITTEN BY

Claudia Wallner

Research Fellow




