Right now, it’s impossible to browse the news, social media or a magazine without seeing a mention of Artificial Intelligence (AI). But where has AI come from and why has it suddenly become so prevalent? We provide a general introduction to the technology, its applications and its relevance to Defence.
What is Artificial Intelligence?
AI is a field of computing that focuses on systems which can perform tasks that typically, or previously, required human intelligence to complete: learning, problem-solving, decision-making, perception, creativity and social engagement.
Often the terms AI and Machine Learning (ML) are used interchangeably. While they are closely related, the core difference is that AI refers to the general idea of a machine that can mimic human intelligence, whereas Machine Learning leverages data and observed patterns specifically to teach a machine how to perform tasks and produce results.
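To make that distinction concrete, the short sketch below shows the Machine Learning side of the coin: instead of hand-coding rules, a small model is fitted to labelled examples and then makes a prediction about an unseen case. The data, feature meanings and library choice (scikit-learn) are purely illustrative assumptions, not a reference to any particular system mentioned in this article.

```python
# Minimal, illustrative Machine Learning example (assumes scikit-learn is installed).
# The rules for separating the two classes are never written by hand; the model
# infers them from the labelled examples it is shown.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical toy data: [hours of abnormal sensor readings, number of fault codes]
X_train = [[0.2, 0], [0.5, 1], [3.0, 4], [2.5, 3], [0.1, 0], [4.0, 5]]
y_train = [0, 0, 1, 1, 0, 1]  # 0 = healthy, 1 = needs maintenance

model = DecisionTreeClassifier(max_depth=2)
model.fit(X_train, y_train)        # learn patterns from the observed examples

print(model.predict([[3.5, 4]]))   # predict for a new, unseen case -> likely [1]
```

The point is not the specific model, but that the behaviour is learned from data and observed patterns rather than explicitly programmed.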
Arguably, the earliest body of work in AI was undertaken by the British mathematician and computer scientist Alan Turing in the mid-20th century. His ‘Turing Machine’ concept described an abstract computing machine that works step by step through its limitless memory according to a set of rules, and Turing later went on to ask whether such machines could learn and think. While this may be considerably different from the software hitting the market today, the concept of a machine that is able to learn, modify or improve without being explicitly programmed to do so is the essence of all AI.
The rise of AI
While the idea of AI may have existed for nearly a century, the more widely recognised applications of AI and Machine Learning techniques were developed around two decades ago. At the time, computing power and storage were extremely expensive commodities, meaning that high-powered computing could only really be explored by professionals in the computing field and those with access to specialist machines. As the prevalence of devices and internet connectivity skyrocketed in the late 90s and early 2000s, and the cost of processing and data storage dropped, handling big data sets on the average professional or even personal computer became a reality.
This, coupled with developments in sensor technology that enabled better data collection and situational awareness, an increase in the data available online, and the rise of cloud and edge computing, drove both a growing need to process more data, faster, and a desire to have machines take on the burden of many human activities.
AI hitting the headlines
As with any trend, in technology and beyond, it takes a few high-profile use cases to turn a process into a recognised household concept.
One of the first companies to break through in this space was Google’s sister company, DeepMind. Over the past decade, it has developed algorithms able to beat professional players at complex games, created a protein-folding prediction system capable of predicting the complex 3D structures of proteins, and even established programmes for medical applications such as diagnosing eye disease.
However, the application that has made the biggest impact worldwide to date is ChatGPT, a sophisticated chatbot-style AI from OpenAI that enables you to ask complex questions and receive answers based on the vast body of data it was trained on. The platform clocked over 100 million active users in just two months and left many unable to use the system as its servers struggled to cope with global demand. ChatGPT, and others like it, are based on a type of AI called Generative AI (Gen-AI), whose sole purpose is to generate new content based on patterns in existing data or information. Where ChatGPT works with text and language, other Gen-AI programmes exist to create different media such as art, music and imagery.
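For a feel of how a text-based generative model is driven in practice, the hedged sketch below uses the open-source Hugging Face transformers library with a small public model (GPT-2) as a stand-in. It is not how ChatGPT itself is built or accessed; it simply illustrates prompting a generative language model to continue a piece of text, with the prompt and model choice being illustrative assumptions.

```python
# Illustrative only: a small open-source generative language model standing in for
# larger systems such as ChatGPT (assumes the 'transformers' package is installed).
from transformers import pipeline

# Download a small pretrained text-generation model
generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence in defence will"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt based on patterns learned from its training data.
print(outputs[0]["generated_text"])
```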
While ChatGPT was certainly the first of these tools to hit mass consumption, others to watch in the coming months and years include Google’s Bard, Microsoft’s Bing chat (based on OpenAI technology), ChatSonic and Ernie from Chinese technology giant Baidu.
Right now, it is often easy to tell when AI has produced content, as it is usually centred on fact rather than opinion. In a recent BBC Sounds radio documentary, A Documentary: By ChatGPT, presenter Lara Lewington used ChatGPT to help write and produce a programme about the software. While it provided useful information and a starting point for scripts, it became obvious that the tone and the way the information was presented were far removed from the presenter’s and the outlet’s usual style, and that if AI alone had been used to create the programme, it would not have been very engaging for the listener.
AI in the Defence and Security Industry
While AI has the power to augment our daily lives – helping us plan excursions, write basic content and skill up at pace – its real power can be seen in industrial and commercial applications. Large language models, algorithms that use deep learning techniques to process massive data sets, are changing the way many industries operate. In an instant, it becomes possible to process all of the learning that has come before and identify new ways to innovate and move forward.
In December 2020, the US Air Force used AI as a co-pilot and ‘mission commander’ on a simulated military mission for the first time. The algorithm assumed full control over sensor employment and tactical navigation, while its human teammate piloted the aircraft. This was a significant step in making AI a reality in the complex air domain, where highly secure, locked-down computers and systems are hard to update and innovate on.
The team behind the Air Force project blended development, security and operations, using a more agile approach to information technology and producing higher-quality code faster and more continuously. The algorithm’s design allows operators to choose what the AI will and won’t do, and where it pushes the boundaries of operational risk.
So, if this simulated mission went successfully, why can’t we roll out AI more widely in Defence? Dr Will Roper, Assistant Secretary of the US Air Force for Acquisition, Technology and Logistics and lead for this project, highlights one of the key issues: “Today’s AI can be easily fooled by adversary tactics, precisely what future warfare will throw at it.”
Writing in Popular Mechanics, he also highlighted one of the key learnings the industry needs to work through over the coming years: “As we complete our first generation of AI, we must also work on algorithmic stealth and countermeasures to defeat it. Though likely as invisible to human pilots as radar beams and jammer strobes, they’ll need similar instincts for them—as well as how to fly with and against first-generation AI—as we invent the next. Algorithmic warfare has begun.”
AI’s impact on how decisions are informed, made and implemented will be profound. By processing vastly more data at speeds that defy current human-based processes, AI can improve understanding of the operating environment and reduce the cognitive load on decision-makers. This enables applications across Defence and Security, from surveillance and reconnaissance to cyber security, robotics and autonomy, platform management and maintenance, combat and even weapon systems. However, the industry must simultaneously work to develop countermeasures to thwart adversaries and recognise misinformation such as deepfakes – synthetic media that mimic a real person using existing data.
As the technology becomes more prolific, there is certainly a call for more governance around safe and ethical practice in AI. The UK Ministry of Defence has published its Defence Artificial Intelligence Strategy, set up multiple working groups and issued policy papers. We are also seeing industry-led movements in the US calling for tighter regulation and auditing of AI companies. This is where we are likely to see the biggest changes worldwide over the next year.
So, will AI replace humans and lead the Defence industry in future?
Overall, AI has become an increasingly important and pervasive technology over the past two decades, with applications ranging from speech recognition and recommendation systems to self-driving cars and medical diagnosis. These advances have made AI more powerful, accurate, and accessible than ever before, enabling a wide range of applications and use cases across many industries.
As the BBC ChatGPT documentary concluded, it is likely that AI will largely be used to augment human activity, not replace humans (at least for now). If used correctly, AI can reduce the amount of time humans spend on menial tasks and allow us to focus on more strategic or innovative paths. We should embrace its potential rather than fearing it or trying to resist it.
In the Defence industry specifically, AI is fast becoming a core part of national Defence strategy. Military and intelligence agencies around the world are embracing the technology and augmenting existing workforces. These applications generally focus on summarising and processing large amounts of data for better decision-making, shortening processing times and providing a better understanding of the operational environment.
It is extremely unlikely that AI will be given complete control in this environment, except for low-risk, low-trust applications. For now, at least, it is important to keep a human in the loop and embrace AI as a core part of our wider operational team, not our leader.
Here at QinetiQ, we fuse a world-class understanding of science and engineering with a sector knowledge of Defence and Security to deliver market-leading solutions for our customers. Find out more about our Artificial Intelligence, Analytics & Advanced Computing capability here.
This article first appeared in the 13th edition of our horizon-scanning quarterly technology publication TechWatch. To receive similar insights, hot off the press, subscribe for free here.