Connected systems are now inexorably enmeshed in our everyday lives and environments. This fundamental truth extends, of course, into the realm of US government agencies and the military. Here, it is mission-critical that we uphold the highest safety and security standards to protect systems, data and users from outside threats. And when it comes to ensuring that machines involved in large-scale autonomous projects can be trusted, a holistic approach is essential – one that anticipates the coming threat from quantum computers.
Organizations will benefit from augmentation of human effort that could improve safety, operational efficiency, user comfort and much more. But the big question is: how can we trust the intelligence behind it? Trust is a fascinating, multi-dimensional topic, with both objective (high-rigor engineering disciplines) and subjective (human behavior) perspectives. Assurances can be given on each dimension, giving us confidence that a system will behave, in any event, in a way that is not only safe but in keeping with what humans reasonably expect of that machine at any given time.
Nevertheless, challenges remain – and the largest and most dangerous tend to be systemic. This is the result of many factors: complexity, connectivity (which blurs boundaries), uncertain cascading impacts, and unclear responsibility, attribution and accountability.
Let’s take an example from the commercial world of farming. With an autonomous harvesting robot, the issue of trust isn’t just with the machine itself. This is because the trend, in agriculture as in many areas, is migration from a unit-sale model to a service-provision model. Thus, the robot itself will only be part of a service encompassing data provision, harvesting, fitting out a farm with intelligent components such as sensors, and managing the entire process every season.
Addressing trust in a cyber-physical system
We now have a cyber-physical system, able to make its own decisions and connected to a wider world, so addressing trust must extend to all these areas. The service model also means that the owner and operator of the harvesting process may no longer be solely the farmer. This blurs the boundaries of who (or what) makes the decisions on the safety of the operator and those nearby, on operational effectiveness, and on the cyber security of both the robot and its associated data services. Furthermore, how do we know we can trust those decisions?
A holistic view is crucial, as it allows us to be adaptable rather than deterministic, something that is especially important when considering the pace of technology, the increased demand for agility, and the ever-evolving threat landscape. This means using probabilistic approaches (whether this be in the realms of opportunity or risk) and taking multiple perspectives. These range from the technology bundles that are most suitable (what sensors or algorithms to use or which verification and validation strategy to deploy), to socio-technical aspects such as finding the right skillsets, to integrating philosophies from different disciplines such as safety and security.
When it comes to addressing trust in autonomous systems, there are many aspects to consider and explore. Beyond these considerations, we see another fundamental challenge for government organizations: ensuring that these autonomous systems have been future-proofed against emerging quantum threats.
Quantum resilience for government
Let’s look more closely now at the implications of emerging quantum threats for government organizations. It is reasonably likely that quantum computers capable of defeating existing asymmetric cryptography (RSA, ECC and so on) will be available to state- or federal-level actors some time before the public. It is therefore imperative that government implements post-quantum cryptography (PQC) algorithms on critical embedded devices. At CC we have developed a hardware-accelerated implementation of CRYSTALS-Dilithium, a lattice-based signature scheme, targeted at low-power embedded systems.
The principal goal was an implementation of the accelerator running in simulation, together with testbench software that exercises it to compute Dilithium signatures and allows performance measurements. A secondary goal is an implementation running on an FPGA+MCU system so that real-world power measurements can be taken. The implementation will initially target an FPGA-based SoC (most likely Xilinx Zynq), and we will run FPGA synthesis during development to estimate resource usage.
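The accelerator design itself is beyond the scope of this article, but the workload it speeds up is worth illustrating. The heart of Dilithium is polynomial arithmetic in the ring Z_q[x]/(x^256 + 1) with q = 8380417; real implementations use the number-theoretic transform for speed, but a schoolbook sketch (illustrative only – this is not our accelerator or the reference code) shows the operation being accelerated:

```python
Q = 8380417   # Dilithium prime modulus, q = 2^23 - 2^13 + 1
N = 256       # polynomial degree

def poly_mul(a, b):
    """Schoolbook multiplication in Z_q[x]/(x^N + 1) (negacyclic ring).

    Hardware accelerators and the reference implementation use the NTT
    for O(N log N) cost; this O(N^2) version just shows the arithmetic.
    """
    res = [0] * N
    for i in range(N):
        for j in range(N):
            k = i + j
            if k < N:
                res[k] = (res[k] + a[i] * b[j]) % Q
            else:
                # x^N = -1 in this ring, so the term wraps with a sign flip
                res[k - N] = (res[k - N] - a[i] * b[j]) % Q
    return res

# Sanity check: x * x^(N-1) = x^N = -1, i.e. Q - 1 in the constant term
a = [0] * N; a[1] = 1
b = [0] * N; b[N - 1] = 1
assert poly_mul(a, b)[0] == Q - 1
```

Each Dilithium signing attempt performs many such multiplications, which is exactly why dedicated hardware pays off on low-power parts.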
Reacting to quantum attacks on government
More broadly, government must re-examine the way it responds to new threats. An initial reaction might be to patch in some fixes – to upgrade and be safe again. While this might be possible in some cases, it is more complicated with the quantum computing threat. PQC algorithms are fundamentally different from current standards. As a result, optimizations made to improve a device’s efficiency under current algorithms may not carry over to PQC algorithms. So, if the device will be around for 10 years or so, this needs to be addressed in the design now.
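The scale of the difference is easy to underestimate. A rough comparison of key and signature sizes – Dilithium figures from the NIST round-3 specification, classical figures typical raw encodings – shows why storage provisioned for today’s algorithms may not stretch to PQC (`flash_budget` is a hypothetical helper for illustration):

```python
# Approximate sizes in bytes. Dilithium2 figures are from the NIST
# round-3 specification; classical figures are typical raw encodings.
SIG_SIZES = {
    "ECDSA P-256": {"public_key": 64,   "signature": 64},
    "RSA-2048":    {"public_key": 256,  "signature": 256},
    "Dilithium2":  {"public_key": 1312, "signature": 2420},
}

def flash_budget(scheme, n_signatures):
    """Rough flash needed to store one public key plus n signed artefacts."""
    s = SIG_SIZES[scheme]
    return s["public_key"] + n_signatures * s["signature"]

# A device retaining four signed firmware images:
print(flash_budget("ECDSA P-256", 4))  # 320 bytes
print(flash_budget("Dilithium2", 4))   # 10992 bytes, roughly 34x more
```

A tightly sized part that comfortably stores ECDSA material today could simply run out of room when the signature scheme changes.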
In further work at CC, we have been pushing the boundaries of today’s hardware’s ability to run the PQC algorithms needed in the future. It has been a challenge, and a lot has been learned from the work. While we’ve shown that it is possible to future-proof devices at a reasonable cost, a number of important factors must be considered:
The right chip must be chosen to ensure there is enough memory and storage to operate the new algorithms
It’s important to update firmware securely. New digital signature algorithms will be more computationally expensive, so the device must have enough compute and storage for the signatures and any other metadata associated with the operation
Cryptographic agility must be considered. The flexibility to easily switch the underlying crypto is important when assumptions about an algorithm’s security fall through. This is especially relevant for custom low-cost/bulk-manufactured devices: web-based standards such as the X.509 public key certificate standard already include provisions for switching out the underlying cryptographic algorithms, but bespoke devices must build in that flexibility themselves
The use case matters, with PQC mainly affecting asymmetric crypto. Some use cases which could experience a disproportionate impact are:
Large device volumes with inter-device communications. Here, a more computationally expensive key exchange mechanism could have a large impact
Large message volumes, where authenticity of the messages matters. Potentially slower signature algorithms could degrade the performance of the system as a whole
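The cryptographic agility point above can be made concrete. One common pattern is to resolve the signature scheme by name at runtime, so replacing a broken algorithm is a table change rather than a code rewrite. A minimal sketch – the registry layout is hypothetical, and HMAC stands in for real signature schemes purely to keep the example self-contained:

```python
import hashlib
import hmac

# Hypothetical agility layer: each entry maps an algorithm name to a
# (sign, verify) pair. Swapping in a PQC scheme later means adding a
# table entry, not rewriting callers.
ALGORITHMS = {
    "hmac-sha256": (
        lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
        lambda key, msg, sig: hmac.compare_digest(
            hmac.new(key, msg, hashlib.sha256).digest(), sig),
    ),
    # "dilithium2": (dilithium_sign, dilithium_verify),  # future drop-in
}

def sign(alg, key, msg):
    return ALGORITHMS[alg][0](key, msg)

def verify(alg, key, msg, sig):
    return ALGORITHMS[alg][1](key, msg, sig)

sig = sign("hmac-sha256", b"device-key", b"firmware-v2")
assert verify("hmac-sha256", b"device-key", b"firmware-v2", sig)
```

The design choice to watch is that callers never hard-code an algorithm: the name travels with the signed artefact, as X.509 already does with its algorithm identifiers.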
While this does demand that devices are built with greater memory and storage resources, several favorable trends mean it can be done sooner rather than later. Resources are getting cheaper, so the new overheads will be affordable in many cases. And work is under way to integrate the new algorithms into common tools and libraries such as OpenSSL, meaning it shouldn’t be necessary to migrate to vastly different, unfamiliar tools.
The chances are that many devices and systems still to be designed will be around in the quantum world in some form. So, can they be designed now and optimized for the future threat? Or even designed with the ability to adapt to future threats? The answers are yes and yes – but it’s not straightforward. Please don’t hesitate to reach out to me if you’d like to hear more about our quantum research and discuss the applications that are emerging. With a broad-based multidisciplinary team of 800 across the world, CC is uniquely placed to help with your ambitions.