Radicalising And Recruitment
Stuart Macdonald and David Wells reveal how AI is being employed to fuel terrorism
The past two years have seen growing concern about the risks of generative artificial intelligence (AI) technologies being exploited by a range of malicious actors, including terrorists. Indeed, early attempts by terrorists and violent extremists to experiment with generative AI are already underway.
Generative AI offers a number of real and potential benefits to terrorist actors. These can broadly be divided into two categories: optimising processes that are already possible (and occurring) using existing technologies, and enabling completely new capabilities.
Propaganda is the primary area where existing terrorist activities can be improved by generative AI, which can almost instantly produce images, video and text in different styles, formats and languages. Given the extent to which terrorist recruitment and radicalisation take place online – and the critical role played by propaganda material in this process – generative AI offers significant gains in speed and scale.
It also radically lowers the barrier to entry for producing terrorist propaganda. Previously, this often required specialist skills (in video production or editing, for example) and, in some cases, access to individuals with particular language skills. Indeed, the lack of such language skills has in the past prevented terrorist groups like ISIS from reliably targeting particular nationalities or groups with their propaganda.
Although significant issues remain with the quality and reliability of AI-generated material (across all formats), its use will allow terrorist groups to increase the volume of their propaganda. This is, of course, just the first step in a process: some (but not all) of this material is likely to evade counter-measures and make it online, and a further proportion of that will successfully reach its desired audience.
Other areas where generative AI can optimise existing processes include better evading these online counter-measures – by creating an overwhelming volume of content and by enabling the relatively easy customisation of content in very minor ways – and changing how terrorists conduct research into targets and attack methodologies.
AI-enabled search functionality, which is increasingly being rolled out by Google and other online service providers, can rapidly generate easy-to-understand, digestible summaries of complex or technical information.
However, it continues to generate significant factual errors. Terrorists might obtain information more quickly and understand it more easily, but that information is also more likely to be inaccurate. This poses significant risks for a terrorist conducting research on how to build a bomb, develop CBRN materials or evade the security measures in place at a planned target of an attack.
In terms of new capabilities, perhaps the most concerning is the potential for terrorists to use chatbots to radicalise and recruit individuals. Rather than being limited by time zone, language capability and the persuasiveness of an individual recruiter, ‘terroristGPT’ chatbots can operate globally, 24/7, and theoretically adjust their approach as they interact with more potential recruits.
However, this type of approach remains largely theoretical at present, despite improvements in the extent to which chatbots can present as ‘human.’ Few chatbots currently on the market could hope to rival the sophistication of a real-life terrorist radicaliser or recruiter, due both to technical limitations and to the types of data they have been trained on. And the chatbots that are more sophisticated are typically run by companies with established procedures to prevent this type of abuse.
Instead, it is more likely that terrorists and violent extremists will employ unsophisticated chatbots at scale, again in the hope that, even if largely ineffective, they might attract at least a small number of vulnerable individuals.
Both of the key risks highlighted here point towards the broader impact of generative AI: the flooding of the online space with poor-quality, unreliable information. This will make it harder to determine what is real or fake and what is true or false, and it is exactly the type of environment in which mis- and disinformation spread and terrorist propaganda can flourish.
Given these concerning trends, it is vital to consider how some of these technological advances can simultaneously be used for counterterrorism purposes. The extensiveness and effectiveness of these countermeasures are likely to shape the threat landscape of tomorrow, including the risks posed by generative AI.
One application of automated tools, including AI, is to identify and remove terrorist content online. On average, every minute Facebook users share 694,000 stories, X (formerly Twitter) users publish 360,000 posts, Snapchat users send 2.7 million snaps and YouTube users upload more than 500 hours of video. Manually inspecting this vast quantity of content for items that promote terrorism – or violate other Terms of Service – is impossible. Technological solutions are imperative.
In broad terms, there are two types of tools used to identify terrorist content. The first is behaviour-based and works in a similar way to spam detection. It focuses on such things as the age of the account, abnormal posting volume and the use of trending or unrelated hashtags. This type of tool is valuable for detecting the rapid dissemination of large volumes of content, including by bots.
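To illustrate, a behaviour-based filter might combine such signals into a simple risk score. The sketch below is a minimal Python illustration only; the field names, thresholds and weights are assumptions made for the example, not any platform's actual rules.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    # Illustrative signals only; real systems use far richer behavioural features.
    account_age_days: int
    posts_last_hour: int
    unrelated_trending_hashtags: int

def behaviour_risk_score(activity: AccountActivity) -> float:
    """Combine simple behavioural signals into a 0-1 risk score.

    Thresholds and weights are invented for illustration.
    """
    score = 0.0
    if activity.account_age_days < 7:             # very new account
        score += 0.4
    if activity.posts_last_hour > 60:             # abnormal posting volume
        score += 0.4
    if activity.unrelated_trending_hashtags > 5:  # hashtag hijacking
        score += 0.2
    return min(score, 1.0)

# Example: a day-old account posting 200 times an hour with hijacked hashtags.
print(behaviour_risk_score(AccountActivity(1, 200, 8)))  # 1.0 -> flag for review
```

In practice, such a score would be one input among many, used to prioritise accounts and content for closer automated or human inspection rather than to trigger removal on its own.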
The other type of tool is content-based. These tools focus on linguistic characteristics, word use, images and web addresses. Automated content-based tools take one of two approaches. The first is to compare new images or videos against an existing database of images and videos that have previously been identified as terrorist in nature.
A problem with this approach is that terrorist groups are known to try to evade such methods by producing subtle variants of the same piece of content. After the Christchurch terror attack in New Zealand in 2019, for example, hundreds of visually distinct versions of the livestream video of the atrocity were in circulation.
So, to combat this, matching-based tools generally use perceptual hashing rather than cryptographic hashing. Hashes are a bit like digital fingerprints, and cryptographic hashing acts like a secure, unique identity tag. Even changing a single pixel in an image drastically alters its fingerprint, which prevents false matches but also means that a subtly altered copy no longer matches the original at all.
Perceptual hashing, on the other hand, focuses on similarity. It overlooks minor changes, such as pixel colour adjustments, and instead identifies images with the same core content. This makes perceptual hashing more resilient to tiny alterations to a piece of content. But it also means that the hashes are not entirely random, and so can potentially be used to try to recreate the original image.
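To make the difference concrete, the following sketch contrasts a cryptographic hash (SHA-256) with a toy 'average hash', one of the simplest forms of perceptual hash. It assumes the Pillow imaging library is available; production systems typically use more robust perceptual hashing algorithms such as PDQ or pHash.

```python
import hashlib
from PIL import Image

def cryptographic_hash(path: str) -> str:
    """SHA-256 of the raw bytes: changing a single pixel gives a completely different digest."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def average_hash(path: str, size: int = 8) -> int:
    """Toy perceptual hash: shrink to an 8x8 greyscale image and record which
    pixels are brighter than the mean. Minor edits leave most bits unchanged."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance indicates near-duplicate images."""
    return bin(a ^ b).count("1")

# Example (file names are placeholders): a slightly altered variant will have a
# completely different SHA-256 digest but an average hash only a few bits away.
# print(hamming_distance(average_hash("original.jpg"), average_hash("variant.jpg")))
```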
The second approach relies on classifying content. It uses machine learning and other forms of AI, such as natural language processing. To achieve this, the AI needs a large dataset of examples that have been labelled as terrorist content or not. By analysing these examples, the AI learns which features distinguish different types of content, allowing it to categorise new content on its own.
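As a simplified illustration of this classification approach, the sketch below trains a small text classifier on a handful of labelled examples. The toy dataset and the use of scikit-learn are assumptions for illustration only; real systems are trained on far larger, carefully curated and regularly refreshed datasets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder labelled examples (1 = violating, 0 = benign); a real dataset
# would contain many thousands of carefully reviewed items.
texts = [
    "join our cause and attack the disbelievers",
    "support the fighters, travel to the front",
    "lovely weather for a walk in the park today",
    "recipe for a quick vegetable soup",
]
labels = [1, 1, 0, 0]

# TF-IDF features feed a simple logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new, unseen content; high probabilities would be routed to human review.
new_post = ["we call on supporters to attack"]
print(model.predict_proba(new_post)[0][1])  # probability of the 'violating' class
```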
This approach also faces challenges, however. Collecting and preparing a large dataset of terrorist content to train the algorithms is time-consuming and resource-intensive. A future priority should be the development of generative AI that can be used to create datasets that are both sufficiently large and representative – especially as training datasets can become dated quickly, as terrorists make use of new terms and discuss new world events and current affairs.
A further challenge is that algorithms have difficulty understanding context, including subtlety and irony. They also lack cultural sensitivity, including variations in dialect and language use across different groups. These limitations can have important offline effects. There have been documented failures to remove hate speech in countries such as Ethiopia and Romania, while free speech activists in countries such as Egypt, Syria and Tunisia have reported having their content removed.
So, in spite of advances in AI, human input remains essential. As well as maintaining databases and datasets, it is needed to assess content that automated systems have flagged for review and to operate appeals processes for when decisions are challenged.
But this is demanding and draining work, and there have been damning reports regarding the working conditions of moderators, with many tech companies such as Meta outsourcing this work to third-party vendors. This is another area where advances in AI may help, through the development of tools to safeguard moderators’ wellbeing, for example, by blurring out areas of images so that moderators can reach a decision without viewing disturbing content directly.
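A simple version of such a safeguard is to blur a flagged region of an image before a moderator views it. The sketch below uses the Pillow library; in practice, the region coordinates would come from an upstream detection model, and the file names here are placeholders.

```python
from PIL import Image, ImageFilter

def blur_region(path: str, box: tuple[int, int, int, int], radius: int = 25) -> Image.Image:
    """Return a copy of the image with the given (left, top, right, bottom) box
    heavily blurred, so a moderator can assess context without viewing the
    disturbing detail directly."""
    img = Image.open(path).copy()
    region = img.crop(box).filter(ImageFilter.GaussianBlur(radius))
    img.paste(region, box)
    return img

# Example: blur a region identified by an upstream classifier (coordinates are illustrative).
# blur_region("flagged_frame.jpg", (120, 80, 480, 360)).save("for_review.jpg")
```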
When reflecting on both the potential impact of AI on the evolving threat landscape and the role of technology in supporting online counter-measures, one conclusion stands out: partnerships are essential, particularly collaborative initiatives between governments and the private sector that help both parties better understand and respond to the threat.
Partnerships within the private sector will also be critical, given the significant disparities in resources, experience and expertise across the sector when it comes to countering terrorism online. These disparities are likely to increase yet further with the entry of new AI companies that have limited experience in this area. One example of this type of partnership is the open release of automated content moderation tools such as Meta’s Hasher-Matcher-Actioner, which companies can use to build their own databases of hashed terrorist content.
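The general pattern behind such tools – hash incoming content, match it against a shared database, then act on the result – can be sketched as follows. This is a generic illustration of the hasher-matcher-actioner pattern, not the actual API of Meta’s tool, and the hash values, distance threshold and labels are placeholder assumptions.

```python
# Generic hasher -> matcher -> actioner pattern (illustrative only; not the real HMA API).

def match(content_hash: int, database: dict[int, str], max_distance: int = 10) -> str | None:
    """Return the label of the closest known hash within max_distance bits, if any."""
    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")
    candidates = [(hamming(content_hash, h), label) for h, label in database.items()]
    best = min(candidates, default=None)
    if best and best[0] <= max_distance:
        return best[1]
    return None

def action(label: str | None) -> str:
    """Decide what to do with a piece of content based on the match result."""
    if label == "terrorist_content":
        return "remove_and_log"
    if label is not None:
        return "queue_for_human_review"
    return "allow"

# A shared database maps known perceptual hashes to labels (values are placeholders).
known_hashes = {0b1011001011110000: "terrorist_content"}
print(action(match(0b1011001011110001, known_hashes)))  # near-duplicate -> remove_and_log
```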
Given the scale of the existing challenge, and the potential role of generative AI in complicating this yet further – including by creating the need for companies to be able to identify and flag AI-generated content – international organisations, governments and tech platforms must prioritise the development of collaborative resources and approaches moving forward.