Tech Dependencies Undermine UK National Security

US Tech dependency: Between 'Censorship' and Security. Image: Sandu / Adobe Stock


While the UK focuses on hybrid threats, is it being undermined by dependencies on US providers? Can the UK have a national security agenda in isolation?

In January, public outcry developed in the UK over X’s chatbot – Grok – and its ability to generate explicit and non-consensual images of people. The UK government ultimately succeeded in pressuring X into implementing a localised geoblock preventing its platform from generating deepfake sexual images, which are illegal to share in the UK. But this small victory was accompanied by a concerning wave of allegations and hostile rhetoric from the US, with Elon Musk accusing the UK regulator, Ofcom, of suppressing free speech and a Republican congresswoman threatening sanctions and tariffs should the UK block access to the platform.

This is not an isolated incident. Also in January, Italy announced a €14.2 million fine against the US-based internet infrastructure provider, Cloudflare, for non-compliance with the country’s anti-piracy laws. Cloudflare’s CEO defended his company’s position not only by citing the need to prevent latency and degraded resolution on Cloudflare’s domain name service (DNS), but also by framing non-compliance as resistance to attempts to 'censor' online content. This is not the first time Cloudflare’s role in promoting online safety has been questioned, as debates over its response to extremist content hosted on 8chan in 2019 show.

Nevertheless, these incidents appear both more salient and more geopolitically charged in light of the growing rift between US companies and foreign governments over the legality of state intervention in online moderation. While in recent years the UK and Europe have introduced new laws intended to enhance state powers to mitigate online threats to safety and security – including through increased moderation of social media platforms – the Trump administration has presented this agenda as ‘censorship’, while increasing diplomatic pressure on its key architects.

In this heavily politicised climate, the UK and Europe can no longer count on the consistent cooperation of US-based platform and internet services companies in implementing national laws promoting online safety and national security. Among other consequences, this poses serious problems for the UK’s ability to disrupt foreign influence campaigns online, an area which has typically relied on platform cooperation and social media ‘takedowns’.

Social Media Takedowns Are No Longer a Suitably Scalable Solution

Events over the past two years have highlighted the threat that foreign influence campaigns pose to security and democracy. Foreign influence operators attempt to impact UK politics both during acute crisis moments – such as the Southport Riots – and in a more endemic manner, for instance on longstanding political issues like Scottish independence. In contrast to the United States – where Trump has presided over a dismantling of the operational architecture for combatting foreign interference online – the UK has been increasing its capability to counter these operations. An amendment to the National Security Act, passed in 2023, criminalises foreign-directed influence activity which uses illegitimate (for example, misleading or coercive) means to achieve its objectives.

But critical mechanisms for enforcing this law depend on cooperation with US platform providers. Typically, disrupting foreign influence networks online has relied on public and private sector investigators – including teams within the social media platforms themselves – collating evidence of illegitimate foreign influence campaigns and requesting that the content and/or users be removed by the platforms. These ‘takedowns’ could be justified as enforcement of platform policy (such as Meta’s coordinated inauthentic behaviour, or CIB, policy) or implemented in response to other legal actions, such as sanctions against information threat actors.

Nominally, much of this is still going on despite shifts in American politics. While Elon Musk abolished many of X’s anti-misinformation policies after taking over the platform in 2022 and Meta replaced fact-checkers with community notes in 2025, policies on coordinated and inauthentic foreign interference remain largely intact.

But the operating environment has clearly altered, with practitioners attesting that, over the past year or two, it has become more difficult to access the necessary data for research or to reach the right contacts for takedowns. Indeed, according to the former Chief Executive of the UK’s National Cyber Security Centre, it is now 'much much harder' to defend against foreign inauthentic activity online, as many of the major platforms are no longer 'playing ball' in the way they were five years ago, due to the political climate surrounding free speech in the US.

Arguably, platform counter-threat teams are facing competing pressures. They need to maintain a minimum capability for complying with the requests of international governments on issues including foreign influence campaigns and tend to do so via CIB and similar policies. But these same campaigns instrumentalise disinformation, an issue which, domestically in the US, is now being presented as a 'guise' for suppressing free speech.

Heightened political tensions aside, external takedown requests have often proven slow and difficult to scale. During the UK general election in 2024, these takedowns were taking around a day to be implemented, by which time audience impact had often already occurred.

Meanwhile, advances in automation and generative AI are increasing the scale – and, some argue, the sophistication – of the threat. In this context, many argue that 'reactive' social media takedowns are an ineffective and 'insufficient' response. With automated and industrialised foreign influence operations being generated and disseminated faster than they can be detected and removed, relying on social media takedowns as the principal response is no longer viable.

Off-Platform and Covert Approaches to Disrupting Foreign Influence Campaigns

There is a need for wider and more sophisticated thinking about the art of the possible for disrupting foreign influence campaigns online.

Some insights can be drawn from adjacent fields, such as cyber-security and counter-financial crime, as well as nation-state approaches to disrupting other threat actors online.

Firstly, it is worth highlighting that foreign influence operations rely on activities conducted across an influence ‘stack’. This stack spans cognitive, application, infrastructure, and network levels. For instance, at a cognitive level, influence operators deploy content and narratives; at an application level, they use social media profiles, pages, and communities. 

Over the last decade, disruption strategies have expanded from cognitive interventions, such as debunking, toward platform-level actions, including content and account takedowns.
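Purely as an illustrative sketch, the stack can be expressed as a simple data structure mapping each layer to the assets an operator relies on and the candidate disruption levers available there. The layer names follow the description above; the example assets and levers are assumptions for illustration, not an established taxonomy.

```python
# Illustrative sketch only: the influence 'stack' as a data structure.
# Layer names follow the text above; the assets and levers listed are
# assumed examples, not an established taxonomy.
from dataclasses import dataclass


@dataclass
class StackLayer:
    name: str
    adversary_assets: list[str]      # what the operator uses at this layer
    disruption_levers: list[str]     # candidate counter-measures


INFLUENCE_STACK = [
    StackLayer("cognitive", ["narratives", "content"],
               ["debunking", "prebunking"]),
    StackLayer("application", ["profiles", "pages", "communities"],
               ["account takedowns", "content removal"]),
    StackLayer("infrastructure", ["websites", "domains", "hosting"],
               ["hosting-provider cooperation", "domain seizure"]),
    StackLayer("network", ["IP ranges", "traffic routing"],
               ["traffic filtering"]),
]


def platform_independent_levers() -> list[str]:
    """Levers below the application layer, i.e. those that do not
    depend on the cooperation of social media platforms."""
    return [lever
            for layer in INFLUENCE_STACK
            if layer.name in ("infrastructure", "network")
            for lever in layer.disruption_levers]


if __name__ == "__main__":
    for layer in INFLUENCE_STACK:
        print(f"{layer.name}: {', '.join(layer.disruption_levers)}")
    print("Platform-independent levers:", platform_independent_levers())
```

What the structure makes explicit is that only the infrastructure and network layers offer levers that do not route through platform cooperation – which is precisely the opportunity, and the limitation, discussed next.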

Drawing on practices from cyber-security and counter-cybercrime, there are opportunities to disrupt some influence operations at the level of infrastructure or networks, for instance through cooperation with web hosting services. This could provide opportunities to disrupt website-based influence operations, but it leaves unresolved the issue of ‘bulletproof hosting’ and infrastructure hosted in adversary states. Furthermore, even when content is hosted by companies in partner states, public-private cooperation may prove unreliable, as recent and historical examples involving Cloudflare show.

A second strand of thinking draws inspiration from the cyber kill chain, conceiving of foreign influence activity as staged operations involving infrastructure, funding, and personnel. This approach highlights opportunities for disrupting foreign influence operators upstream of content being posted on a particular website or platform, during the research, asset acquisition, and planning phases of operations. This might include identifying and exposing inauthentic digital identities acquired by influence operators before they are mobilised. 
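Again as an illustrative sketch only, this staged view can be expressed as an ordered set of phases with countermeasures mapped to those upstream of dissemination. The research, asset acquisition and planning phases come from the description above; the dissemination phase and the specific countermeasure mapping are assumptions for illustration.

```python
# Illustrative sketch only: a kill-chain view of an influence operation.
# Research, asset acquisition and planning come from the text above; the
# dissemination phase and the countermeasure mapping are assumed examples.
from enum import Enum


class Phase(Enum):
    RESEARCH = 1
    ASSET_ACQUISITION = 2    # e.g. procuring inauthentic digital identities
    PLANNING = 3
    DISSEMINATION = 4        # content reaches platforms and audiences


# Hypothetical mapping of counter-measures to phases.
COUNTERMEASURES = {
    Phase.RESEARCH: ["expose operator tradecraft"],
    Phase.ASSET_ACQUISITION: ["identify and expose purchased identities"],
    Phase.PLANNING: ["financial monitoring", "transaction restrictions"],
    Phase.DISSEMINATION: ["takedowns"],  # the reactive option critiqued above
}


def upstream_options() -> dict[str, list[str]]:
    """Counter-measures available before dissemination, where disruption
    can pre-empt audience impact rather than react to it."""
    return {p.name: COUNTERMEASURES[p]
            for p in Phase if p.value < Phase.DISSEMINATION.value}


if __name__ == "__main__":
    for phase, options in upstream_options().items():
        print(phase, "->", options)
```

The value of the staged framing is that it surfaces intervention points that exist before any content appears on a platform – exactly where takedowns cannot reach.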

But it might also include off-platform actions against operators, such as sanctions or transaction restrictions – drawing on the toolkit of counter-financial crime. Countries such as Moldova – which witnessed widespread financial interference in its 2024 presidential election – are investing in new teams dedicated to securing elections through financial monitoring, including of cryptocurrency. 

These measures introduce grit into adversary influence operations, complicating their ability to pay influence proxies or procure professional services. By focusing on government levers of power, these approaches may be more effective than on-platform mitigations, enabling disruption without relying on public-private partnerships with companies abroad.

Thirdly, states might draw lessons from their experience of disrupting other threat actors who recruit and mobilise operators online, such as terrorists and cyber-criminals. The adversary architects of foreign interference campaigns are known to recruit proxies via open, semi-open and closed social media groups and channels.

These online spaces provide vectors for potential covert, state-led disruptive activity. Such campaigns might seek, for instance, to undermine foreign influence architects’ recruitment drives by making potential proxies distrust financial offers. Inspiration could be drawn here from successful counter-ransomware operations and their combined use of cyber and information effects to enact technical disruptions while maximising cognitive effect on operators.

Countering foreign influence campaigns relies on a range of approaches, which should rightly span resilience measures alongside proactive disruptive and offensive measures. The UK must remain cognisant of technical and geopolitical realities when considering and prioritising investment. Two trends look unlikely to be reversed: the increasing pace, scale, and scope of adversary influence activities online; and political polarisation and technical dependencies reducing the reliability of on-platform takedowns as the primary response.

In this context, we need to think more deeply about how to meaningfully counter this threat, drawing on adjacent industry or policy areas, and being clear-sighted about what levers remain available to government.

© RUSI, 2026.

The views expressed in this Commentary are the author's, and do not represent those of RUSI or any other institution.



WRITTEN BY

Sophie Williams-Dunning

Research Analyst

Cyber and Tech


