In a single day, the US’s reconnaissance aircraft and satellites collect more raw data than the entire defence workforce could analyse in its combined lifetimes. Security officials are trying to find needles in ever-expanding digital haystacks. As a recent RUSI paper recognises, this information overload is also ‘perhaps the greatest technical challenge facing the UK’s national security community’.
Over the last year as a Kennedy Scholar at Harvard University, I explored how defence and intelligence organisations are using artificial intelligence (AI) to respond to that overload. Beyond the technology itself, I wanted to find out how the US has created the foundations for successfully deploying AI: a skilled workforce, data management, computational power, cloud platforms, technical foundations of security and trust, and a prudent policy framework. Lindsey R Sheppard at the Center for Strategic and International Studies refers to this as the ‘AI ecosystem’. What can we in the UK learn from the US experience?
I interviewed 50 senior technologists, diplomats, security officials, researchers and defence contractors. The one thing they all agreed on was that a relatively small team within the Department of Defense was forging the AI ecosystem: the Joint Artificial Intelligence Center (JAIC).
The JAIC grew out of the pathfinding work of the Algorithmic Warfare Cross-Functional Team, more commonly known as ‘Project Maven’. Project Maven stands out as an example of rapidly acquiring and then deploying AI technologies. The Maven team deployed AI in the battle against the Islamic State just six months after receiving funding.
In the words of Maven’s director, Air Force Lt. Gen. Jack Shanahan, ‘Maven [was] designed to be that pilot project, that pathfinder, that spark that kindles the flame for AI across the department’.
The spark lit a blaze. The JAIC has now expanded Maven’s mission significantly. It has already applied machine learning to predictive maintenance, logistics, cyber operations and servicemembers’ health. The JAIC is even crunching supply chain data to support the coronavirus response.
I visited the JAIC to find out how it has been so successful. I was struck by six main lessons from which the UK could benefit:
Partnerships: None of Maven’s six founding members were AI experts or even data scientists. They were instead recruited because they were skilled at building partnerships. The JAIC team has continued to partner with top AI talent in the private sector and academia.
Cross-functional teams: The JAIC is purposefully small and operationally focused. Teams are cross-functional end-to-end, rather than working in vertical silos.
Learning by doing: The teams test with end-users as they go, building experience by dealing with barriers as they emerge. All tasks – including labelling data, developing infrastructure and neural network algorithms, and collecting feedback – happen iteratively and in parallel.
Bottom-up programmes; central coordination: Rather than directing projects top-down, the JAIC provides common foundations to facilitate decentralised development and experimentation. The JAIC does, however, provide top-down coordination for projects exceeding $15 million to ensure lessons are shared and duplication is avoided.
Ruthless focus: Maven has delivered notably few projects. The approach has been to deliver a small number of carefully selected products successfully and rapidly.
An evolving portfolio: The JAIC first worked on data-rich, non-controversial challenges with commercial interest, including logistics, disaster relief and health. Only after ironing out wrinkles has the JAIC moved on to direct combat applications or ‘Joint Warfighting’.
Of course, the JAIC is still relatively new. It was established less than two years ago and only started scaling in earnest earlier this year. Software projects, and particularly AI projects, don’t happen overnight. Nonetheless, the JAIC has already made a strong mark in the US AI ecosystem.
The UK needs a JAIC. There are currently dozens of AI projects within the Ministry of Defence (MoD) alone, not all of which are pushing in the same direction and none of which develop AI across its entire lifecycle. And while the technical challenges of designing and testing AI applications may be surmountable, the bureaucratic processes – to share data, run applications on existing platforms, build products end-to-end and test updates – may act as barriers.
The UK government should establish a team similar to the JAIC: a small, operationally focused, cross-functional team empowered to develop external partnerships, leverage existing infrastructure and pilot iteratively.
A new AI unit or entity should enable co-creation with industry and academia. The team should be co-located with defence and intelligence partners. It should have a concentration of capital and talent: technical experts should be available on an ongoing basis, for instance to revalidate algorithms.
Any new unit should not duplicate the work of existing pockets of excellence, including the Royal Navy’s NELSON data platform and the AI hub of the Defence Science and Technology Laboratory. The unit should instead provide a coordination function, strategic direction and common foundations to allow decentralised experimentation.
The primary role should be to facilitate a fertile AI ecosystem, including computing infrastructure, test facilities and data platforms. The unit should provide standardised tools for cleaning, aggregating, labelling and securing data, and should maintain a strong focus on testing and evaluation.
Historically, new transformative technology has demanded significant changes to the machinery of government and the organisation of the military. For instance, aerospace developments led to the establishment of a new branch of the military (the Royal Air Force).
AI is likely to have profound, persistent and pervasive implications for national security. With the Integrated Review forthcoming, now is an opportune moment to ensure the MoD is structured to make the most of this transformative new technology.
Sam Sherman is a Kennedy Scholar at the Harvard Kennedy School.
The views expressed in this Commentary are the author’s in a personal capacity, and do not necessarily reflect those of RUSI or any other institution.
BANNER IMAGE: Courtesy of Staff Sgt. Alexandre Montes / public domain.