Middle Powers Must Win the AI Deployment Race
Middle powers like the UK and Canada will not win the race for ever larger AI models, but through collaboration they can win the deployment race.
The AI race is often framed as a contest over which countries – primarily the US and China – can develop ever larger and more powerful models. But this overlooks a more mundane reality: deploying AI at scale often matters as much as, or more than, having a slightly more powerful model. In certain military domains, models that are milliseconds faster than an adversary's may be required. But for the overwhelming majority of use cases – from manufacturing, to healthcare, to shipping – models that are simply near the front of the pack can unlock the benefits of AI when they are meaningfully integrated into existing workstreams at scale.
This is particularly critical for middle powers, like the UK and Canada, which lack the scale and resources needed to keep pace with the US and China. Instead of trying to compete with superpowers to build Artificial General Intelligence (AGI), middle powers must focus on winning the deployment race. This means embedding AI across our economies and militaries through narrower and less flashy applications that solve specific, real-world problems – from automating labour-intensive agricultural practices, to improving productivity in shipbuilding, to expanding access to high-quality personalised healthcare. However, for AI adoption to be trusted and accepted, it must be deployed with safety front-of-mind. Safety is not orthogonal to integrating AI; it is the tracks on which integration progresses.
The AI Deployment Race
To meet this challenge, middle powers like the UK and Canada must combine their resources and expertise. For middle powers, working together is no longer merely desirable; it is essential to deploying AI at scale and safeguarding our sovereignty. As the Canadian Prime Minister, Mark Carney, recently said at Davos, we must cooperate ‘with like-minded democracies to ensure that we won't ultimately be forced to choose between hegemons and hyperscalers’.
To deepen these efforts, we call for the establishment of a landmark UK-Canada Partnership on AI Deployment, structured around five pillars designed to accelerate both industrial and military adoption. While an initial bilateral agreement is actionable in the near-term, a future multilateral expansion of the partnership to like-minded allies in Asia and Europe would further pool resources and reduce costs.
Alongside international cooperation, broad multistakeholder collaboration will be needed. On this front, both the UK and Canada already possess mature AI ecosystems to build on.
For its part, the UK has established leading institutions such as the AI Security Institute (AISI), The Alan Turing Institute (ATI) and the Advanced Research and Invention Agency (ARIA) to improve safety and advance adoption. The UK has also developed innovative public sector approaches – such as positioning AISI as a flexible ‘start-up within government’ or implementing talent allowance programmes to bring world-class expertise from tech companies and universities into government. Moreover, sustained cross-party support for the AI sector, alongside hosting flagship events like the world’s first AI Safety Summit at Bletchley Park, has established the UK as a global leader in AI Safety and Assurance. With Google DeepMind – the only leading Western frontier lab outside the US – based in London, and the highest concentration of start-ups and VC funding anywhere in Europe, the UK has considerable strengths to draw on. Combined with world-class research from Oxford, Cambridge, Imperial College London and elsewhere, the UK is well positioned to compete in the deployment race.
While not yet as developed, Canada is also home to a strong and growing AI ecosystem. Though not as centralised or public-facing as the UK’s institutions, Canada’s Mila (Montreal), Vector (Toronto) and Amii (Edmonton) generate world-class research and innovation. These institutes are bolstered by leading researchers such as Geoffrey Hinton and Yoshua Bengio, as well as research talent distributed right across Canada’s publicly funded universities. Moreover, recent policies under Prime Minister Mark Carney signal the new government’s ambition to catch up with global peers after falling behind over the past decade. To that end, Canada recently announced a £1.1 billion Sovereign AI Compute Strategy to rapidly expand domestic compute infrastructure. The most recent federal budget also earmarked nearly £43.4 billion in defence spending over the next five years. While only a portion of this will go towards integrating AI into the military, it offers a substantial opportunity for the Canadian Armed Forces to develop, procure and deploy AI solutions at scale. Canada is also the first non-EU country to join the Security Action for Europe (SAFE) programme, which grants Canadian defence firms the ability to bid for contracts within the €150 billion EU defence buying bloc.
Despite these advantages, both Canada and the UK have often struggled to translate their strengths into practical deployments. Moreover, both suffer from the same inherent constraint: scale. In an era of mounting fiscal pressures and an eroding ‘rules-based order’, collaboration between like-minded middle powers has become essential. Only by combining expertise, resources and standards can we escape the dilemma Carney posed at Davos and avoid a forced choice between hegemons and hyperscalers.
A New Model for Middle Power Collaboration
To meet this moment, the UK and Canada must work together to accelerate practical deployments of AI across our economies and militaries. As long-standing and close allies across diplomatic, military, economic and cultural domains, both countries are natural partners. Building on the collaborative governance framework proposed by Fitz-Gerald, Maple and Padalko, we propose a new, targeted UK-Canada Partnership on AI Deployment that prioritises AI adoption at scale. Through five pillars of strategic collaboration, the agreement would commit both countries to accelerating the safe deployment of AI across our economies and militaries.
1. Defence AI Deployment Accelerator
A new jointly funded Defence AI Deployment Accelerator should leverage a larger combined resource pool to support ventures that prioritise practical deployments of defence AI. While existing programmes like the UK’s Defence and Security Accelerator (DASA) and NATO’s Defence Innovation Accelerator for the North Atlantic (DIANA) focus on broader innovation, this initiative would focus specifically on rapid deployment.
In addition, the programme’s investment committee would provide funding for high-risk, high-reward projects, as well as for narrower and hardware-intensive ventures that are clearly in the public interest but cannot offer the commercial returns that venture capitalists require. In both instances, the programme would help to backfill gaps left by the private sector. Crucially, the programme’s scope should be limited to delivering deployable, interoperable systems, rather than open-ended studies and workshops.
2. Shared Procurement, Standards and Verification
To escape the scale trap, like-minded middle powers must pursue shared procurement, standards and verification to the maximum extent possible. Shared public sector procurement would provide greater bargaining power, lower costs and improve interoperability across both economic and military applications. In certain domains, such as training, sharing resources could also reduce duplication and improve performance. Similarly, shared standards and regulations would enable greater interoperability and make testing faster and more robust. Crucially, they would reduce bureaucracy, align certification across multiple middle powers and enable larger markets that come closer to the size of those commanded by the hegemons. This is crucial to keeping our most talented founders from leaving to build in San Francisco or Shenzhen. Lastly, shared verification frameworks can improve systemic trust and enable Canada and the UK to set best practices for other middle powers. While full interoperability may not be feasible in all military domains, collaboration should focus on the areas of highest operational and economic value, while allowing space for national differentiation.
3. Collaborative Testing and Evaluation
To evaluate whether companies and deployments meet the shared standards and regulations above, the UK and Canada must work in lockstep. To reduce costs and avoid duplicating effort, Canada is well positioned to draw on the UK’s existing expertise in AI Safety and Assurance. By combining the resources and expertise of the UK’s AISI and its Canadian counterparts, we can better address both acute near-term security threats and long-term existential risk. As agentic AI tools present new risk vectors, and adversarial threats from terrorism, organised crime and state actors expand, we cannot afford to work in isolation. Greater access to resources and expertise can help address these threats by enabling meaningful progress on making evaluations more robust and better reflective of real-world security threats. By working together, both countries can reduce costs and improve evaluation capabilities. A smaller, narrowly scoped alignment initiative between the UK and Canada – alongside Schmidt Futures, Anthropic and other industry partners – provides an early proof of concept for this expanded collaboration to build upon.
4. Joint Operational Testbeds and War Gaming
For defence AI and military applications, both countries should go beyond standardised pre-deployment testing by developing shared operational testbeds to trial AI systems under realistic military constraints. These environments should support joint wargaming and simulations across intelligence, surveillance, logistics, cyber operations, and command and control. This would allow systems to be thoroughly stress-tested under more realistic conditions and fine-tuned through pilots before force-wide adoption. Lessons from these trials should feed directly back into evaluation criteria and procurement decisions, creating a continuous loop between testing, deployment and governance.
5. Talent Circulation and Secondment Pipeline
To ensure government departments have access to the expertise needed to lead on AI deployments, Canada should adopt the UK’s approach of offering an enhanced civil service pay scale to recruit engineers and researchers from top universities and AI labs. Given the higher calling and mission that public service offers, the public sector talent allowance does not need to exceed top private sector salaries; it only needs to narrow the chasm that has widened over the past decade. Second, both countries should establish a new bilateral visa and exchange programme to enable top AI talent to work and live seamlessly in either the UK or Canada. This bilateral pipeline would enable both countries to share expertise and cross-pollinate talent.
A Roadmap for Middle Powers
For middle powers, building the latest state-of-the-art model is neither realistic nor necessary. Instead, advantage lies in translating AI into operational capability. The UK and Canada are well placed to meet this challenge together. As long-standing and trusted allies, they can combine resources, expertise and institutional strengths to punch above their individual weight and deliver through deployment.
Beyond this, the UK and Canada can develop a new collaborative model for safe deployment of AI across industry and defence. While a bilateral agreement provides faster initial progress, this model can later scale to include multilateral collaboration with like-minded allies in Asia and Europe. For middle powers seeking to preserve their sovereignty from both hegemons and hyperscalers, collaboration is no longer a choice but a necessity. Focusing this collaboration on the safe deployment of AI allows us to compete in an arena where we can win.
© Broderick McDonald, Connor Attridge and Alexandra MacEachern, 2026, published by RUSI with permission of the authors.
The views expressed in this Commentary are the authors', and do not represent those of RUSI or any other institution.
For terms of use, see Website Terms and Conditions of Use.
Have an idea for a Commentary you'd like to write for us? Send a short pitch to commentaries@rusi.org and we'll get back to you if it fits into our research interests. View full guidelines for contributors.
WRITTEN BY
Broderick McDonald
Guest Contributor
Connor Attridge
Guest Contributor
Alexandra MacEachern
Guest Contributor
Jim McLean, Media Relations Manager, +44 (0)7917 373 069, JimMc@rusi.org






