Secure Access to Frontier AI Taskforce

This taskforce explores the cyber security, information security and model-level risks associated with third-party access to frontier AI models.




Image generated with Canva AI


Introduction

Security and safety research is an essential part of the advancement of trusted and safe frontier models. Gated model releases and API access provision can make it easier to prevent misuse of models. However, closed models also introduce the challenge of how, when, and under which conditions a model should or could be evaluated by a third party with the aim of enhancing the security of its operation and deployment. 

As the rate of innovation and deployment of frontier AI models accelerates, ensuring secure and trusted third-party evaluations is a greater priority than ever. Third-party evaluators play invaluable roles in providing accountability, stress-testing and building vital trust in increasingly complex systems and applications. Ensuring development labs and evaluators are sensitised and equipped to actively mitigate potential risks is vital to delivering evaluations at scale, and safeguarding public and industry trust and safety.

As part of this project, RUSI will establish the Secure Access to Frontier AI (SAFA) Taskforce, drawing together technical experts from frontier AI development labs and evaluation organisations, alongside cyber and information security experts from across sectors, to assess the risks of third-party access.


Project sponsor

This project is sponsored by Google.

Aims and objectives

The Secure Access to Frontier AI (SAFA) Taskforce will gather input from cross-sector experts in cyber and information security, with the aim of producing baseline guidance on the risks, security implications and mitigation strategies associated with third-party access to frontier AI models, to help inform future policy positions and compliance frameworks.

Project output

The SAFA Taskforce will convene a series of closed-door, invite-only roundtables, bringing together experts from frontier AI development labs and evaluation organisations, alongside cyber and information security experts from across academia and industry.

These Taskforce roundtables will culminate in the publication of a RUSI Insights Paper. The paper will synthesise the findings of the SAFA roundtables, complete with analysis from RUSI, outlining key access risks and security mitigation strategies to strengthen third-party evaluation processes. The paper will be publicly accessible from this website.

Project team


Louise Marie Hurel, Research Fellow, Cyber and Tech

Elijah Glantz, Research Fellow, Organised Crime and Policing

Dr Pia Hüsch, Research Fellow, Cyber and Tech

Taskforce members


Markus Anderljung, Centre for the Governance of AI

George Balston, AVERI

Daniel Cuthbert, RUSI Associate Fellow, Cyber and Tech

Francesca Federico, Faculty AI

Charles Foster, METR

Elijah Glantz, RUSI Research Fellow, Organised Crime and Policing

Louise Marie Hurel, RUSI Research Fellow, Cyber and Tech

Dr Pia Hüsch, RUSI Research Fellow, Cyber and Tech

Ehab Hussein, Principal AI Engineer, IOActive

Kellin Pelrine, Member of Technical Staff, FAR.AI

Mohamed Samy, Senior AI Security Consultant, IOActive

Adriana Stephan, Frontier Model Forum

Raquel Vazquez, Google

Mathias Vermeulen, Public Policy Director, AWO
