
Technology Ethics in Defence and Security

14/03/2022

There are few sectors in which decisions concerning ethics are as consequential as those faced by defence. While life-or-death ethical dilemmas are not unique to defence, poor judgement can lead to widespread and enduring catastrophe on scales unparalleled in any other sector. As a community of innovators, how great is our share of the responsibility for ensuring the ethical development of defence technology?

An article from TechWatch Edition 10
[Image: conceptual image of an android appearing to think while looking out over a nighttime cityscape]

Lessons from the past

You don’t have to look hard to find historical examples of defence technology that have had a negative impact on global stability. Had nuclear weapons never been invented, the lives of hundreds of thousands of Japanese civilians might have been spared, and the Cold War might never have happened. However, there is a possible world in which Nazi Germany was the first to assemble a nuclear arsenal; and another in which an unending war in the Pacific caused greater suffering than the atomic bombs dropped on Hiroshima and Nagasaki. Could it be that the devastating real-world outcome was actually the ‘least worst’ of the most likely scenarios? The debate continues to rage among philosophers and historians almost 80 years later, and we will never have definitive answers to these questions – yet the conversation remains as vital as ever.

We cannot change history, but we can influence the future. Today, we as innovators must pause to consider how our legacy will be remembered another 80 years from now. The question we must continually ask ourselves while pursuing technological advances is: just because we can, should we?

The ethics of emerging technologies

There has always been a tension in defence between taking sufficient time to consider the ethical implications of new technologies, and deploying their capabilities quickly and decisively enough to maintain an advantage over adversaries. This tension is growing in the 21st century as technology advances at an ever-increasing rate, becoming more powerful, more diverse, and more accessible. Consequently, there is a higher risk of knee-jerk decision-making in response to emerging threats. To counteract this, it is vital to maintain an up-to-date understanding of emerging technologies and their potential ethical pitfalls. What follows is by no means an exhaustive list, but a sample of today’s defence technologies and a brief overview of the ethical considerations surrounding each.


Artificial Intelligence

Much has been written about the threat of the ‘singularity’ – the point at which artificial intelligence (AI) surpasses human intelligence, resulting in AI’s uncontrolled expansion. The likelihood of this event is the subject of ongoing dispute, but there are other, more immediate concerns about AI today. The unconscious biases of developers can find their way into the datasets on which AI systems base their decision making. A 2018 study by the Gender Shades project, led by researchers at the Massachusetts Institute of Technology (MIT), found facial recognition technology to be significantly and consistently less accurate at identifying young, dark-skinned women than any other group. Facial recognition is already being trialled in law enforcement applications, where mistaken identity or poorly considered profiling techniques could have disastrous impacts on individuals’ civil liberties. In the US, Robert Williams, an African American man, was arrested and held for nearly 30 hours after facial recognition software used by the Detroit Police Department misidentified him as a shoplifter.

Two actions should be taken immediately to help rectify these issues. The first is the diversification of the computer and data science communities, so that minority groups are better represented in AI design and decision-making processes. The second is ensuring that decisions made by AI systems are explainable to those they affect. If an AI outcome is presented as evidence in a court case, the jury, the defendant, and all other parties must be able to understand the process by which the computer arrived at its conclusion. It is not enough to take a ‘computer says guilty’ approach when the system is less than infallible.
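To make the accuracy disparity concrete, the sketch below shows how a per-group audit in the spirit of Gender Shades might look in practice. It is a minimal, hypothetical illustration only: the function, group labels, and evaluation records are invented for this example and are not the project’s actual methodology or data.

```python
# Hypothetical sketch of a per-group accuracy audit for a classifier.
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy separately for each demographic group.

    Each record is a (group, predicted_label, true_label) tuple.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Invented evaluation results for an imagined face-matching model.
records = [
    ("lighter-skinned men", "match", "match"),
    ("lighter-skinned men", "no match", "no match"),
    ("darker-skinned women", "match", "no match"),  # a misidentification
    ("darker-skinned women", "no match", "no match"),
]

for group, acc in accuracy_by_group(records).items():
    print(f"{group}: {acc:.0%} accurate")
```

Reporting disaggregated figures like these, rather than a single headline accuracy number, is precisely what exposed the disparity in the first place.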

[Image: conceptual image of a human face and brain rendered as circuitry, overlaid on a laptop]

Robotics and Autonomy

In a world where AI makes decisions, it follows that robots will often act upon those decisions to generate real-world effects. For some applications there is an explicit ambition to remove human decision making entirely, the driverless car being among the most high-profile. Fatal accidents involving prototypes of these vehicles have led to court cases seeking to allocate responsibility to one or more humans, with the aim of redressing harms, deterring recurrence, and demonstrating that justice has been served.

There are some defence applications for which full autonomy is widely accepted, such as the use of self-navigating land and air vehicles to deliver aid and supplies in dangerous regions. However, controversy surrounds the possibility of developing weapon systems capable of making ‘kill decisions’ without human intervention. All of the same concerns about AI apply here, but with the added risk of lethal outcomes. Several governments, and a number of defence technology companies (including QinetiQ), have publicly committed not to develop fully autonomous weapons in which no human is in the loop to authorise firing decisions. Finally, whatever its likelihood, if the singularity were to occur, you would certainly not want it to happen in a robot equipped to kill.
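The ‘human in the loop’ principle can be expressed architecturally: the autonomous system may propose an action, but execution is gated on an explicit, logged human decision. The sketch below is a hypothetical illustration of such a gate, not a description of any real weapon system; every name and structure in it is invented for this example.

```python
# Hypothetical sketch of a human-in-the-loop authorisation gate: the machine
# proposes, but nothing executes without an explicit, auditable human decision.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class EngagementProposal:
    target_id: str
    rationale: str  # machine-generated justification shown to the human reviewer

@dataclass
class HumanDecision:
    operator_id: str
    approved: bool
    timestamp: datetime

def execute_engagement(proposal: EngagementProposal, decision: HumanDecision) -> bool:
    """Act only if a named human operator has explicitly approved this proposal."""
    if not decision.approved:
        print(f"Engagement against {proposal.target_id} vetoed by {decision.operator_id}")
        return False
    # Record the full decision chain before acting, so responsibility is traceable.
    print(f"{decision.timestamp.isoformat()}: {decision.operator_id} authorised "
          f"engagement against {proposal.target_id} ({proposal.rationale})")
    return True

proposal = EngagementProposal(target_id="track-042", rationale="hostile sensor signature")
decision = HumanDecision(operator_id="operator-7", approved=False,
                         timestamp=datetime.now(timezone.utc))
execute_engagement(proposal, decision)  # vetoed: nothing happens without approval
```

The point of the design is that responsibility remains traceable to a person: the system cannot act on its own proposal, and every authorisation or veto leaves an audit trail.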

[Image: conceptual image of a robotic spider]

Enjoyed the first half of this article? Check out our quarterly TechWatch magazine to read on about the ethics of Human Augmentation, Directed Energy and more.