Autonomous Resilient Cyber Defence (ARCD) is a multi-year Defence Science and Technology Laboratory (Dstl) programme to develop and demonstrate self-defending and self-recovering concepts for military platforms and networks in the face of increasingly complex cyber-attacks.

The programme is being delivered in two tracks:

  • Track 1, delivered by Frazer-Nash Consultancy, provides the cyber-defence agents: Artificial Intelligence/Machine Learning (AI/ML) products capable of responding to a cyber incident.
  • Track 2, delivered by QinetiQ and QTSL, is responsible for experimentation and evaluation of agent performance. It provides the environments to train, evaluate and demonstrate the agents; the evaluation schemes and tools to measure the agents' performance and measures of effectiveness; and an actuation and integration capability to orchestrate agents, environments and evaluation.

Ensuring that military platforms can automatically identify and defend themselves against cyber-attacks reduces the need for human intervention and shortens response times. This limits the risk of a platform being compromised and ultimately protects the sailors, airmen and soldiers who use it.

ARCD is being delivered by suppliers on Lots 3, 4, 5 and 6 of the Serapis framework.

Under Task 2 of Track 2, QinetiQ commissioned a number of State of the Art (SOTA) reports to address the following research questions:

  • RQ1, Dimensions: If we were to base ARCD procurement decisions on the outputs of evaluation processes, what set of dimensions (e.g. of system behaviour, or situation outcomes) should we be evaluating?
  • RQ2, Measures of Effectiveness for Cyber Defence: Where cyber defence (CD) systems are made responsible for protecting systems from cyber-attack, how should we measure their effectiveness? ('Systems' is used here in the most general sense, to include any human components that enable a system to deliver its function.)
  • RQ3, Measures of Performance for Cyber Defence: Where components of CD systems are made responsible for recommending system actions to address potential threats, how should we measure their performance?
  • RQ4, Measures of Performance for Agentic AI: Where AI-based services are used to recommend CD actions, how should we measure the performance of those services?
  • RQ5, Task Performance: How can we measure task performance in cyber defence, in a way that is neutral with respect to whether the performer is human or AI?
  • RQ6, Predict Measures of Effectiveness Using Measures of Performance: Can we build models that allow us to predict CD system effectiveness using observations on AI performance metrics (such as precision/recall)? If not, what can we do to approximate this inference process?
  • RQ7, Readiness: How should we measure the readiness of an AI technology for use within an ARCD system?
  • RQ8, Suitability: How should we measure how suitable an AI technology is for use, within the contexts of use which we expect for ARCD?

 

The following SOTA reports were produced:

  • Advai: RQ6 and RQ7 SOTA v2.0
  • Aleph Insights: RQ1 and RQ5 SOTA v2.0
  • Arke: RQ1, RQ2, RQ3, RQ4 and RQ5 SOTA
  • Cambridge Consultants: RQ7 and RQ8 SOTA
  • Frazer-Nash Consultancy: RQ8 SOTA
  • Thales: RQ2, RQ3 and RQ4 SOTA v2.0
  • Trimetis_PA_UWE: RQ8 SOTA v3.0

 

A number of Horizon Watch and Technology Watch studies were also produced (study, name, type and supplier for each):

  • Horizon Watch 1: Assessing the provision of Training and Evaluation Environments for the purposes of Autonomous Resilient Cyber Defence. Catalogue (Excel). Aleph Insights.
  • Technology Watch 1: Solutions for Reconfigurable, Reusable, Robust and Rapidly Deployable (R4) Environments. Report (PDF). Frazer-Nash Consultancy.
  • Technology Watch 2: Future Military Technology and UK Defence Platforms. Report (PDF). Cordillera Applications Group.
  • Technology Watch 4: Invoking Pattern-of-Life within Cyber Environments. Report (PDF) and Catalogue (Excel). Aleph Insights.
  • Technology Watch 4: Data Provision for AI / Machine Learning and Invoking Pattern-of-Life. Report (PDF) and Catalogue (Excel). Thales.
  • Technology Watch 5: Bridging the Sim-to-Real Gap for AI and Machine Learning. Report (PDF). Improbable.
  • Technology Watch 5: Bridging the Sim-to-Real Gap for Artificial Intelligence and Machine Learning. Report (PDF). BMT.

To request any of these reports, please email ARCD-Track2@qinetiq.com.