QinetiQ and RUSI release new paper on trust in AI
23-06-2022
Last week, the Defence Artificial Intelligence Strategy was published, setting out how the UK will ‘adopt and exploit AI at pace and scale’ to transform Defence into an ‘AI ready’ organisation and deliver cutting-edge capability.
This new paper aims to trigger a broader debate about the cultural and organisational changes required within the UK defence enterprise to become genuinely ‘AI ready’. It considers this in the context of AI-enabled decision-support, and its impact on the role of command and commanders.
Trust in AI: Rethinking Future Command builds on the premise that trust at all levels (operators, commanders, political leaders and the public) is essential to the effective adoption of AI for military decision-making, and explores key related questions such as:
- What does trust in AI actually entail?
- How can it be built and sustained in support of military decision-making?
- What changes are needed for a symbiotic relationship between human and machine members of future command teams?
The paper follows an earlier report produced by QinetiQ, which looked at trust as a fundamental component of military capability and an essential requirement for military adaptability. The new paper is theoretical in approach but has practical application.
The paper considers the concepts of AI and trust, the role of human agency, and AI’s impact on humans’ cognitive capacity to make choices and decisions. It proposes a five-dimensional framework for developing trust in AI-enabled military decision-making and examines the implications of AI for the people and institutional structures that have traditionally underpinned the exercise of authority and direction of armed forces.
In seeking to answer how trust affects the evolving human–AI relationship in military decision-making, the paper exposes several key issues requiring further research, including:
- How to build the trust necessary to reconfigure the organisation of command headquarters, their size, structure, location and composition, at tactical, operational and strategic levels.
- How to adapt military education to better prepare commanders for the age of AI.
- How to optimise and transform collective training across all domains to improve command through greater collaboration with artificial agents.
- How to operationalise the concept of ‘Whole Force’ to make better use of the extensive talent within society, industry and technology.
- How to understand the needs of AI and humans within human–machine teams.
Paul O’Neill, RUSI Director of Military Sciences, said:
"Much of the discussion about the use of AI focuses on the technology. What our report seeks to do is balance the discussion to take account of the human and organisational impacts and implications of the technology. This is a symbiotic relationship in which the greatest value derives from considering the needs of the whole, human/machine, team."
Christina Balis, QinetiQ Campaign Director for Training and Mission Rehearsal, said:
"The growing military use of AI for operations and missions support will transform the character of warfare. This is not just a question of adapting our armed forces’ tactics; we need to fundamentally rethink the role of humans in future military decision-making across the spectrum of ‘operate’ and ‘warfight’ and reform the institutions and teams within which they operate. It requires that we rethink the notion of trust in human-machine decision-making."