The government’s Integrated Review comes at a time of considerable technological change. The UK has entered a ‘Fourth Industrial Revolution’ (4IR), which promises to ‘fundamentally alter the way we live, work, and relate to one another’. This new era will be characterised by scientific breakthroughs in fields such as the Internet of Things, blockchain, quantum computing, fifth-generation wireless technologies (5G), robotics, and artificial intelligence (AI), which together are expected to deliver transformational changes across almost every sector of the economy.
Of particular note are recent developments in AI, specifically advances in the sub-field of machine learning. Progress over the last decade has been driven by an exponential growth in computing power, coupled with increased availability of vast datasets with which to train machine learning algorithms. While machine learning technology can be traced back to at least the 1950s, investment has increased substantially in recent years, and as a result AI is rapidly becoming ubiquitous across the economy.
AI is often described as a ‘general purpose technology’ – its potential applications are manifold, ranging from ‘mundane’ administrative tasks through to complex individual-level behavioural analysis, for instance to forecast consumer demand based on purchasing history, or to recommend music and films based on users’ personal interests. The ability of machine learning algorithms to rapidly derive insights from previously unexamined data has far-reaching ethical and societal implications, which are particularly pertinent in ‘high risk’ contexts such as healthcare, law enforcement or defence.
There are countless ways in which the UK’s defence and security sector could conceivably seek to deploy AI. Given its diverse applications, it will be essential to strategically prioritise the areas where AI is expected to provide the most immediate benefits, while being realistic about areas where its utility remains unproven. This strategic prioritisation process should be guided by the following three principles.
Be Realistic About What AI Can Deliver
There is a natural tendency to overestimate the effects of new technology in the short term while underestimating its long-term impact, a phenomenon known as ‘Amara’s Law’. While AI is likely to have a transformative impact on defence and security in the long term, any specific forecasts looking beyond the next decade are likely to be highly speculative. There is a risk that policy decisions are guided by hypothetical future uses and hyperbolic ‘worst-case scenario’ outcomes, rather than by realistic near-term applications. In reality, the near-term benefit of AI will be the incremental augmentation of existing processes, rather than the creation of novel, futuristic capabilities. This will need to be appropriately reflected in development and procurement strategies.
Moreover, AI investment is often hampered by a lack of technical understanding, and customers are all too easily seduced by media hype and marketing buzzwords. Rates of ‘predictive accuracy’ are often misinterpreted or misrepresented, making it difficult for the buyer to assess a tool’s real-world benefits. A focus on statistical accuracy may also distract from fundamental questions regarding the operational value of AI products when deployed in the field. In many cases, a non-AI solution may be more appropriate to the task at hand, and there will be situations in which use of AI will be undesirable or counterproductive.
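The accuracy pitfall described above is easy to demonstrate. On an imbalanced dataset, a system that simply flags nothing can still report a high rate of ‘predictive accuracy’ while providing no operational value at all. A minimal illustrative sketch (the figures are hypothetical, chosen only to show the effect):

```python
# Hypothetical screening task: 1,000 items, of which only 20 are of
# genuine interest. A "model" that labels everything as uninteresting
# is correct 98% of the time.
total = 1000
positives = 20

true_negatives = total - positives   # 980 correct rejections
accuracy = true_negatives / total    # 0.98 -- sounds impressive

# Yet it identifies none of the items that actually matter.
found = 0
recall = found / positives           # 0.0 -- operationally useless

print(f"accuracy: {accuracy:.0%}, recall: {recall:.0%}")
# prints "accuracy: 98%, recall: 0%"
```

This is why headline accuracy figures need to be interrogated alongside measures that reflect the operational task, such as how many genuinely relevant items the system finds or misses.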
Poor data quality or data availability can also pose major challenges. Developing effective machine learning systems requires access to large, well-curated datasets. Lack of access to clean, operationally relevant data can lead to frustration and delay during software development, particularly in sensitive contexts such as defence and security, where datasets often require additional protections and restrictions. Data requires substantial preparation, cleaning and pre-processing before it is suitable for machine analysis, which will need to be taken into account in the resourcing of government AI projects.
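To illustrate why preparation and cleaning dominate project resourcing: even a trivially small dataset typically needs normalisation, de-duplication and handling of missing values before it is suitable for machine analysis. A hypothetical sketch in Python (the field names and records are invented for illustration):

```python
# Hypothetical raw records exhibiting common quality problems:
# inconsistent casing, stray whitespace, duplicates, missing values.
raw_records = [
    {"unit": " HQ London ", "incidents": "12"},
    {"unit": "hq london",   "incidents": "12"},  # duplicate once normalised
    {"unit": "Depot North", "incidents": None},  # missing value
    {"unit": "Depot North", "incidents": "7"},
]

def clean(records):
    seen = set()
    cleaned = []
    for rec in records:
        if rec["incidents"] is None:        # drop records with missing values
            continue
        unit = rec["unit"].strip().lower()  # normalise text fields
        if unit in seen:                    # remove duplicates
            continue
        seen.add(unit)
        cleaned.append({"unit": unit, "incidents": int(rec["incidents"])})
    return cleaned

print(clean(raw_records))
# Only two usable records remain from the four raw ones.
```

Real operational datasets require far more extensive pipelines than this, but the principle scales: a substantial share of any AI project’s effort goes into work of this kind before any model is trained.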
For these reasons, it is essential to ensure a sufficient degree of data analytics literacy among senior decision-makers responsible for AI procurement. The UK government should adopt a cautious and sceptical approach to the procurement of commercial AI technology, and refrain from committing to long-term contractual agreements before assessing a product’s real-world benefits. The importance of data quality and testing should not be underestimated: many products will fail to deliver as advertised when deployed in the field, and AI applications require iterative trialling, evaluation, verification and validation to maintain their efficacy.
Focus on the Mundane Uses First
AI is often characterised in terms of the ability ‘to perform tasks normally requiring human intelligence’. With organisations under increasing pressure to ‘do more with less’, AI can be viewed as an attractive option to minimise the human resources required to deliver certain business functions. But there are limits to the human processes that can be effectively automated. Existing AI is most useful when applied to narrowly defined, repetitive tasks. The more abstract the problem, the less useful AI becomes.
For this reason, the most immediate benefit from AI will be the ability to automate organisational, administrative and data-management processes, freeing up staff time to focus on more complex or abstract cognitive tasks. There are countless more innovative, experimental uses which will be of interest to the defence and security sector, but in many cases these remain at an early stage of development and their potential benefits are yet to be proven. Moreover, ‘mundane’ uses of AI to automate repetitive administrative processes will typically not give rise to the same complex ethical challenges associated with more innovative applications.
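The distinction between narrow, repetitive tasks and abstract ones can be made concrete with a deliberately simple sketch. Routing incoming requests to the right team is well suited to automation; judging a request’s strategic significance is not, and anything ambiguous should fall back to a human. The rules and categories below are hypothetical:

```python
# Hypothetical routing rules for a repetitive administrative task.
ROUTING_RULES = {
    "procurement": ["invoice", "purchase", "contract"],
    "it_support": ["password", "login", "laptop"],
}

def route(message: str) -> str:
    """Assign a message to a team, or defer to a human reviewer."""
    text = message.lower()
    for team, keywords in ROUTING_RULES.items():
        if any(k in text for k in keywords):
            return team
    return "manual_review"  # abstract or ambiguous cases stay with a human

print(route("Cannot reset my password"))         # it_support
print(route("Query about contract renewal"))     # procurement
print(route("Concerns about long-term policy"))  # manual_review
```

A production system would use a trained classifier rather than keyword rules, but the design point is the same: automate the well-defined bulk of the workload while routing everything else to human judgement.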
In the short term, the main focus for AI investment should be the automation of organisational, administrative and data-management processes. Alongside this, efforts should focus on repurposing existing AI technology that is already widely used in other sectors (such as audiovisual analysis and natural language processing). To support innovation in the medium to long term, research funding should be made available for technology providers and academic institutions to co-develop ‘proofs of concept’ and pilot projects for the more experimental, cutting-edge capabilities which are yet to be evaluated operationally.
Invest in the Workforce
Human expertise is the single most important component of any AI project. Cultivating technical expertise and developing a workforce of data-literate practitioners must therefore be a core objective of any future AI development strategy.
The UK government should invest in developing a core cell of data-science expertise to lead the development and deployment of new AI applications in the defence and security sector. This should be achieved by recruiting new talent, retraining current practitioners and partnering with academic institutions. Many of the AI capabilities the defence and security sector may wish to implement could be developed in-house without reliance on third-party providers, minimising costs and enabling a more agile approach to testing, evaluation and validation. The initial investment in developing this in-house technical expertise will therefore be more than recouped by the cost savings made in the long term. At the same time, the skills required may often be more readily available outside the public sector, and there is a need to develop more agile models of strategic collaboration with external stakeholders to take full advantage of this expertise.
In addition to this core cell of technical expertise, it is essential to ensure a high level of data literacy among practitioners across the defence and security sector. When AI is integrated into a decision-making process overseen by a human operator, the user must sufficiently understand the limitations inherent in the system to be able to use the output in conjunction with their professional judgement. This is important not just to ensure accountable decision-making, but also to build trust between human operators and AI systems. Senior decision-makers must also have a foundational understanding of the benefits and shortcomings of different AI systems in order to maintain accountability at all levels of the decision-making chain.
Finally, any future AI development will need to be governed by a clear ethical and regulatory framework. Public discourse is increasingly focused on the governance and regulation of data analytics, and there are high expectations of transparency in how new technologies are developed and deployed. Despite an abundance of high-level ‘data ethics principles’, it remains unclear how these should be operationalised in different contexts. Additional sector-specific guidance should be developed to ensure ethical and proportionate use of AI for defence and security, including mechanisms for independent scrutiny and ethical oversight.
The views expressed in this Commentary are the author's, and do not represent those of RUSI or any other institution.