The Buffalo Attack and the Gamification of Violence


Site of the attack. Courtesy of Andre Carrotflower/Wikimedia Commons


The attack in Buffalo again highlights the challenges of online radicalisation and the importance of improving research on the gamification of violence.

The attack in Buffalo, which the FBI has indicated will be investigated both as a hate crime and as an act of ‘racially motivated violent extremism’, follows in the wake of similar attacks. The violence was live-streamed on Twitch, the most popular platform for gamers and content creators to reach audiences in real time. Based on our analysis of video copies that could not be fully authenticated, it appears that only a limited number of people (around 22) viewed the livestream. The feed appears to have been severed mid-attack, either by content moderators at Twitch or by the alleged shooter, around 40 seconds after the first shot was fired. Despite the limited number of initial viewers and the removal of related content by most major platforms, a recording of the attack posted elsewhere online had been viewed over 600,000 times less than 24 hours later, and working links to the video remained available on Reddit (another gaming-adjacent platform) hours after the attack.

The incident raises several significant localised issues, including the increasing number of far-right motivated attacks in the US, as well as salient global concerns, including the spread of hateful ideologies in online spaces and the ease with which copycats can cherry-pick ideologies, conspiracies, motivations and tactics.

Gamification of Violence

The Buffalo incident represents a continuation of extremist attackers ‘gamifying’ their attacks – repurposing elements from video games as part of their violence. The Christchurch mosque attack in New Zealand in 2019 – also broadcast on Twitch – set a precedent for livestreaming racially and ethnically motivated attacks by the far-right. Forthcoming analysis by Suraj Lakhani and Susann Wiedlitzka looks at the gamification of the Christchurch attacks – which share many parallels with the Buffalo shooting. They point to subsequent incidents in 2019 with gamified elements including the abortive synagogue shooting in Halle, Germany, along with another attack on a synagogue in Poway, California; a racist attack at a Walmart in El Paso, Texas; and at a mosque in Bærum, Norway. Like the Christchurch shooter, the Buffalo assailant posted a link to a long, racially and ethnically biased manifesto (running some 180 pages) some two days before the attack, which reportedly established a white supremacist logic for his attack and directly referenced the Christchurch and Charleston Church (2015) attackers.

During the attack, the alleged shooter used a helmet camera to film his assault in a similar fashion to other live-streaming attackers. Dressed in military-style combat gear and holding a long gun, he mirrored the visual style of first-person shooter games and, more specifically to livestreaming, ‘Let’s Play’ videos, in which audiences tune in to watch gamers playing their favourite titles. The video content also indicates that the shooter had inscribed the barrel of the Bushmaster assault-style rifle used in the attack – modified to fire as an automatic weapon – with racist epithets and a reference to the popular white supremacist ‘14 Words’ slogan.

The attack also points to issues of far-right extremism on a variety of the most popular gaming-adjacent platforms. While Twitch has taken a zero-tolerance approach to violent extremist content since being used during the Christchurch attacks, ‘support for extreme right wing ideologies can [still] be discovered on Twitch with relative ease’. The attacker’s manifesto named the popular online message boards 4Chan and 8Chan as the places where he ‘learned through infographics, shitposts, and memes that the White race is dying out’, and he initially planned to post the attack itself on 8Chan.

Discord, a community-based chat service that the Buffalo attacker reported using, was also used by organisers of the deadly Unite the Right rally in Charlottesville in 2017. It has since taken steps to remove far-right and extremist content from the platform. A user with the same handle as the attacker held a public account on Steam, which has a particularly prominent and largely unmoderated neo-Nazi and far-right scene. Lastly, the attacker posted his manifesto on Google Drive. While it was promptly removed by the platform, the manifesto was reportedly online for several days before the attack.

Online Content Moderation

This latest tragic attack highlights the challenges of online content moderation. The Trust and Safety team at Twitch appears to have acted extremely swiftly to stop the livestream – perhaps as soon as 40 seconds after the first shot – which was likely the result of improved internal policies. However, the continued exploitation of gaming-linked platforms to gamify violence and raise the profile of attackers seems unlikely to stop.

Livestreamed and audio-based content, in particular, is difficult to moderate and police via platform community guidelines, for several reasons: it occurs in real time; it is harder to monitor algorithmically for specific symbols or audio; and it can be reposted to further forums even after being removed from the original site. In this case, content from the attack was promptly shared with the Global Internet Forum to Counter Terrorism (GIFCT), which activated an incident protocol within hours and added the content to its hash-sharing database (a way to verify and code harmful content) ‘to identify whether the same content has been shared on their platforms and address it in accordance with their respective platform policies’. However, as of 15 May, smaller platforms that are not party to the GIFCT and its hash-sharing database continue to host content related to the attack.
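The hash-sharing pattern described above can be illustrated with a minimal sketch. In practice, GIFCT's database reportedly relies on perceptual hashing, so that re-encoded or cropped copies of a clip still match; the simplified version below uses an exact cryptographic hash purely to show the shared-database workflow, and all function and variable names are illustrative, not GIFCT's actual API.

```python
import hashlib

# Illustrative shared database of hashes of verified harmful content.
hash_database: set[str] = set()

def content_hash(data: bytes) -> str:
    """Compute a SHA-256 digest of raw content bytes."""
    return hashlib.sha256(data).hexdigest()

def register_harmful(data: bytes) -> None:
    """A platform that has verified content as harmful shares its hash,
    not the content itself, with the consortium."""
    hash_database.add(content_hash(data))

def is_known_harmful(data: bytes) -> bool:
    """Member platforms check new uploads against the shared hashes."""
    return content_hash(data) in hash_database

register_harmful(b"example harmful clip")
print(is_known_harmful(b"example harmful clip"))  # True: exact copy matches
print(is_known_harmful(b"unrelated upload"))      # False: unknown content
```

The exact-match limitation is why this sketch understates the real problem: a single re-encode changes every bit of the file and defeats a cryptographic hash, which is one reason removed footage keeps resurfacing on platforms outside the consortium.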

Following the first major attack livestreamed in this way, the Christchurch Call was issued ‘to eliminate terrorist and violent extremist content’ online. Since that call many tech companies and even media organisations have taken steps to try to anticipate and moderate content. However, this remains a challenging and controversial process, and can only realistically be seen as one part of the solution. There will always be spaces available in which this content can be hidden and individuals who will repost it via new and creative ways to circumvent moderation efforts. This will likely not be the last attack using gamification as an element of a broader extremist agenda.

Therefore, we have co-founded the Extremism and Gaming Research Network, with the aim of enhancing research and knowledge in this space. We see gamification as just one way in which extremists and terrorists exploit and misuse aspects of gaming to their own ends. While video games bring joy, can foster positive community engagement and resilience, and do not inherently cause violence, there are multiple ways in which extremists seek to exploit gaming and gaming platforms. In the days and months ahead, the motivations and background of the Buffalo shooter will be assessed in great detail, along with his use of gaming-related platforms and his likely radicalisation on them. We must continue to develop better ways to understand, mitigate and prevent the use of gaming for radicalisation and extremism, while building stronger and more inclusive online communities for all.

The views expressed in this Commentary are the authors’, and do not represent those of RUSI or any other institution.



Galen Lamphere-Englund

Associate Fellow


Dr Jessica White

Senior Research Fellow

Terrorism and Conflict


