
Test and evaluation of adaptive intelligent systems

12/04/2018

Adaptive intelligent systems are among several emerging technologies being applied across a wide range of environments as part of the broad shift in everyday technology known as the Fourth Industrial Revolution (4iR).

Testing the untestable

These systems build on developments in areas such as artificial intelligence and machine learning, in which technology seeks to optimise its performance and behaviour over time, learning and adjusting as new data becomes available. The potential for these systems to deliver an operational advantage in defence, security and critical infrastructure is huge, but that potential will only be realised if they can be evaluated for safe and practical deployment in such high-criticality environments.

The problem is that, by their very nature, these systems exhibit no stable end state against which they can be tested using traditional engineering methods. Such methods provide only a snapshot, and repeating a test on an adaptive system can give a different result every time, because the system may have learned from the very tests to which it was subjected. Herein lies the conundrum: how do you evaluate whether a system is fit for deployment when the system changes its behaviour in response to the testing itself?
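To make the conundrum concrete, consider the deliberately toy sketch below; the detector, its update rule and the readings are all invented for illustration. An online-learning component updates its internal state every time it classifies an input, so running the identical test suite twice against the same system can produce two different sets of results.

```python
class AdaptiveThresholdDetector:
    """Toy online-learning detector: it flags readings above a threshold,
    then nudges that threshold towards every reading it processes."""

    def __init__(self, threshold=0.5, learning_rate=0.3):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def classify(self, reading):
        alert = reading > self.threshold
        # Classification doubles as a learning step: the system adapts
        # its threshold towards the data it has just seen.
        self.threshold += self.learning_rate * (reading - self.threshold)
        return alert


def run_test_suite(system, readings):
    """A 'traditional' test: feed a fixed set of inputs, record the outputs."""
    return [system.classify(r) for r in readings]


detector = AdaptiveThresholdDetector()
test_inputs = [0.55, 0.9, 0.9]

first_run = run_test_suite(detector, test_inputs)
second_run = run_test_suite(detector, test_inputs)

print(first_run)   # [True, True, True]
print(second_run)  # [False, True, True] -- the first run moved the threshold
```

The test suite has not changed and the system has not been redeployed, yet the second run disagrees with the first, because the act of testing altered the system under test.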

This is a significant barrier to the deployment of many 4iR technologies, because a considerable proportion of them display adaptive intelligent properties, and it is these very properties that make them such a strong prospect for the defence, security and critical infrastructure (CI) markets. For applications that are not safety-critical, these issues are unlikely to cause serious disruption. But as the technology extends its reach, adaptive intelligent systems will emerge in high-criticality and hostile environments where unexpected behaviours or performance could ultimately lead to failure or even loss of life and, consequently, a loss of trust by users. For safety-critical markets, then, the problem is acute: they urgently need to harness the value of adaptive intelligent technology; they cannot do so without test and evaluation sufficient to offer appropriate assurance; and existing approaches to test and evaluation are incompatible with systems that change as they are tested.

It’s clear that we need to take an alternative approach, and this must be based on three action areas:

  1. We need to address the qualification of adaptive intelligent systems through the lens of human factors and behavioural science rather than traditional engineering practices. Essentially, we must adapt the techniques we use for establishing human performance and behaviour so that they can be employed for the evaluation of technology. There is already work underway in this area, including studies in the US investigating the use of virtual mazes to test the behavioural psychology and cognitive skills of artificial intelligence systems, and papers from research teams in China on adapting common psychometric and IQ tests to assess the abilities, attitudes and knowledge traits of such systems. We need to look at how these academic findings can open up new avenues for applying human behavioural tests to emerging technology systems; a minimal sketch of how such behavioural batteries might feed periodic re-qualification appears after this list.
  2. We must consider the through-life requirements for maintaining these systems via regular re-qualification, robust configuration management and the adoption of appropriate regulation and legislation.
  3. We need to consider the professional accreditation of intelligent systems designers to ensure they understand the unique testing requirements for assurance in defence, security and CI environments and how to inject these test and evaluation criteria early on in the development pathway.
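Action areas 1 and 2 together suggest what such an evaluation could look like in practice. The sketch below is hypothetical (the battery, the stand-in agent and the tolerance are all invented for illustration): instead of demanding identical outputs on every run, it administers a fixed battery of behavioural probes, treats the results as a distribution of scores in the way a psychometric assessment would, and flags the system for re-qualification when its behaviour drifts beyond an agreed tolerance from the baseline recorded at qualification.

```python
import random
import statistics


def run_trial_battery(agent, trials, runs=30):
    """Administer a fixed battery of behavioural probes several times,
    returning a distribution of scores rather than one pass/fail verdict."""
    return [statistics.mean(agent(trial) for trial in trials) for _ in range(runs)]


def requires_requalification(baseline_scores, current_scores, tolerance=0.1):
    """Flag the system when its behaviour has drifted beyond an agreed
    tolerance from the distribution recorded at qualification."""
    drift = abs(statistics.mean(current_scores) - statistics.mean(baseline_scores))
    return drift > tolerance


# Hypothetical stand-in for an adaptive agent: its per-probe success
# probability drifts as the deployed system keeps learning in the field.
def make_agent(success_probability):
    return lambda trial: 1.0 if random.random() < success_probability else 0.0


trials = list(range(20))  # placeholder identifiers for behavioural probes

baseline = run_trial_battery(make_agent(0.80), trials)  # scores at qualification
current = run_trial_battery(make_agent(0.55), trials)   # scores after field learning

if requires_requalification(baseline, current):
    print("Behavioural drift detected: schedule re-qualification")
```

The key design choice is that "pass" is defined statistically against a qualified baseline distribution, so the system is allowed to keep adapting without every change invalidating the evaluation.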

There is no doubt that adaptive intelligent systems hold significant promise for boosting performance, reducing costs and improving the accuracy of decision-making in defence, security and critical infrastructure. We can already see their potential starting to be realised in many unregulated commercial sectors. But emerging technology, by its very nature, can require the development of new processes and approaches to underpin its successful practical operation. For adaptive intelligent systems, that requirement to rethink reaches right down into the design and testing phases, because existing approaches cannot accommodate the radical shift in capability these systems possess. So however much positive impact they could have on the way we defend, protect and maintain safety of life, unless we can develop an appropriate way to provide the assurances that such high-criticality applications require, we will not be able to draw significant benefit from this new source of advantage.