Developing a Framework for Secure Third-Party Access to Frontier AI



This paper proposes a framework for secure third-party evaluation of frontier AI models, balancing safety, security and innovation for defence and security stakeholders.

A Note on Publication

The report will be available on the website in early May 2026. We are happy to provide an advance copy in private from 27 April upon request. Please contact Dr Louise Marie Hurel (LouiseH@rusi.org).

Overview

This paper presents a framework for securely enabling third-party evaluation of frontier AI models, addressing the need for robust safety and security practices in the defence and security sector as AI capabilities rapidly advance. By mapping the risks and proposing actionable solutions, it helps stakeholders balance innovation with protection against emerging threats.

Key Recommendations

  • Do not let security concerns impede safety-critical evaluation: Ensure that third-party assessments can proceed without unnecessary barriers, supporting transparency and accountability.
  • Harmonise language and access tiers: Adopt a shared taxonomy for model access levels (black-box, grey-box, white-box) to standardise communication and expectations across developers, evaluators, and policymakers.
  • Operationalise secure access through shared standards and practices: Develop and implement common security controls, including technical, procedural, and contractual measures, grounded in principles like least privilege, data minimisation, and time-bound access.
  • Build feedback loops for continuous improvement: Establish mechanisms for ongoing learning, incident reporting, and periodic review of risk frameworks to adapt to evolving threats and regulatory requirements.

The paper introduces a threat taxonomy and an Access–Risk Matrix, providing practical tools for identifying, assessing and mitigating security risks associated with third-party access to sensitive AI models. It calls for a multistakeholder governance framework to ensure that secure access becomes the foundation for safe innovation, not a constraint. This approach is essential for defence and security professionals seeking to harness frontier AI while safeguarding against intellectual property theft, model manipulation and weaponisation by adversaries.

About the Secure Access to Frontier AI Taskforce

As the rate of innovation and deployment of frontier AI models accelerates, ensuring secure and trusted third-party evaluations is a greater priority than ever. Third-party evaluators play an invaluable role in providing accountability, stress-testing systems and building vital trust in increasingly complex applications. Ensuring that development labs and evaluators are sensitised and equipped to actively mitigate potential risks is vital to delivering evaluations at scale and to safeguarding public and industry trust and safety. As part of this project, RUSI has established the Secure Access to Frontier AI (SAFA) Taskforce, drawing together technical experts from labs and evaluation organisations, alongside cyber and information security experts from across sectors, to assess the risks associated with third-party access.


Acknowledgements

The authors would like to thank all those who volunteered their time and expertise as part of the Taskforce. In particular, we would like to acknowledge the contributions of all SAFA-TF members and participants throughout the workshops, as well as their feedback throughout the development of this report: Daniel Cuthbert, Mohamed Samy, Ehab Hussein, Omer Nevo, Alejandro Ortega, Kevin Klyman, Markus Anderljung, George Balston, Francesca Federico, Charles Foster, Esme Harrington, Pia Huesch, Kellin Pelrine, Adriana Stephan, Raquel Vazquez, Pegah Maham, Talita Dias, Rumman Chowdhury, Robert Trager, Conrad Stosz, Dawn Song, Abby Cruz, Mathias Vermeulen, Clement Briens, Markus Hobbhahn, Madeline Carr, Ingrid Dickinson and others who provided comments on earlier versions of this report.


WRITTEN BY

Dr Louise Marie Hurel

Senior Research Fellow

Cyber and Tech


Elijah Glantz

Research Fellow

Organised Crime and Policing


Daniel Cuthbert

RUSI Associate Fellow, Cyber and Tech


