While less directly threatening in the public mind than ‘killer robots’, the use of AI in military decision-making presents significant challenges as well as enormous advantages. Tighter human oversight of the technology itself will not be enough to prevent inadvertent (let alone intentional) misuse.
This paper builds on the premise that trust at all levels (operators, commanders, political leaders and the public) is essential to the effective adoption of AI for military decision-making, and explores the key questions that follow: What does trust in AI actually entail? How can it be built and sustained in support of military decision-making? And what changes are needed to achieve a symbiotic relationship between human operators and artificial agents in future command?
Trust in AI can be said to exist when humans hold certain expectations of the AI’s behaviour, without reference to intentionality or morality on the part of the artificial agent. At the same time, trust is not just a function of the technology’s performance and reliability – it cannot be assured solely by resolving issues of data integrity and interpretability, important as they are. Building trust in military AI must also address the changes needed in military organisation and command structures, culture and leadership.

Achieving an appropriate overall level of trust requires a holistic approach. In addition to trusting the purpose for which AI is put to use, military commanders and operators need to sufficiently trust – and be adequately trained and experienced in how far to trust – the inputs, process and outputs that underpin any particular AI model. The most difficult, and arguably most critical, dimension, however, is trust at the level of the organisational ecosystem. Without changes to the institutional elements of military decision-making, future AI use in C2 (command and control) will remain suboptimal, confined within an analogue framework. The effective introduction of any new technology, let alone one as transformational as AI, requires a fundamental rethinking of how human activities are organised.
Prioritising the human and institutional dimensions does not mean imposing more control on the technology; rather, it means reimagining the human role and contribution within the evolving human–machine cognitive system. Future commanders will need to lead diverse teams across a true ‘Whole Force’ that integrates contributions from the military, government and civilian spheres. They must understand enough about their artificial teammates to be able both to collaborate with and to challenge them. Command of this kind is more akin to the murmuration of starlings than to the genius of an individual ‘kingfisher’ leader. For new concepts of command and leadership to develop, Defence must rethink its approach not only to training and career management but also to decision-making structures and processes, including the size, location and composition of future headquarters.
AI is already transforming warfare and challenging longstanding human habits. By embracing greater experimentation in training and exercises, and by exploring alternative models for C2, Defence can better prepare for the inevitable change that lies ahead.