The coming world of automated and autonomous machines will transform how we live, travel, work and engage with the world. Industry – from agriculture and smart infrastructure to retail and logistics – looks set to enjoy incredible commercial opportunities. Businesses will benefit from augmentation of human effort that can improve safety, productivity, efficiency, comfort and much more. But there’s a big if here. Automation will prevail… if we can assure ourselves that we can trust the intelligence behind it.
As a security technologist, my job is to protect companies as they extend the boundaries of digital innovation to reimagine their products and services. And in the case of automation and autonomy, my advice always begins with the fundamental issue of trust. It’s a fascinating, multi-dimensional topic, with both objective perspectives (grounded in high-rigour engineering disciplines) and subjective ones (rooted in human behaviour).
The levels through which this trust can be attained are likewise a socio-technical issue. Assurances can be given at each of these levels, giving us confidence that a system or system-of-systems will behave, in any event, in a way that is not only safe but in keeping with what humans can reasonably expect of that machine at any given time.
Nevertheless, challenges remain – and the largest and most dangerous of them tend to be systemic. Many factors contribute to this: complexity, connectivity (and the blurring of boundaries it brings), uncertain cascading impacts across the system or system-of-systems, and unclear responsibilities, attribution or accountability.
Autonomous harvesting robot
To illustrate my thinking, I’m going to dip into an example use case from the world of agritech. This is just one of the sectors we explore in our recent CC whitepaper, Automation to autonomy: navigating the path to success. Let’s consider an autonomous harvesting robot, where the issue of trust doesn’t end with the machine itself. The trend in many domains, agriculture included, is a migration from a unit-sale model to a service-provision model. Here, the robot is only one part of a wider service: providing data, delivering the harvest, fitting out a farm with intelligent components such as sensors, and managing the entire process every season.
We now have a cyber-physical system that can make its own decisions and is connected to a wider world, so addressing the trust issue must extend to all of these areas. The service model also means that the owner and operator of the harvesting process may no longer solely be the farmer. That blurs the boundaries of who makes the decisions – decisions about the safety of the operator and those in proximity, about operational effectiveness, or about the cybersecurity of both the robot and its associated data services. How do we know we can trust those decisions?
Plenty of worries and concerns, then. But rather than diving deeply into the detail here, I want to pull back and explore the high-level strategic view of how we can maximise the effectiveness of any solution given technical, financial and regulatory constraints. Resources in any business, in any context, are finite; that much is obvious. The real question is whether we know where those thresholds lie, be they technical, financial or regulatory. Part of the answer is informed by the business strategy, and part of it comes from approaching the situation holistically.
This holistic view is crucial because it allows us to be adaptable rather than deterministic – especially important given the pace of technology, the increased demand from customers for flexibility, and the evolving threat landscape. One common expression of the holistic viewpoint is to take a probabilistic approach, whether to opportunity or to risk.
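To make that concrete, here is a minimal sketch of what a probabilistic risk view might look like in code. The scenarios, likelihoods and impact ranges are entirely illustrative assumptions on my part, not figures from any real assessment:

```python
import random

# Hypothetical threat scenarios for the harvesting robot, each with an
# assumed annual likelihood and a range of possible financial impacts.
# All figures are illustrative, not real assessments.
scenarios = [
    {"name": "GPS spoofing halts harvest", "p_per_year": 0.05, "impact": (20_000, 120_000)},
    {"name": "Telemetry data breach",      "p_per_year": 0.10, "impact": (5_000, 60_000)},
    {"name": "Unsafe proximity incident",  "p_per_year": 0.01, "impact": (100_000, 500_000)},
]

def simulate_annual_loss(scenarios, trials=100_000):
    """Monte Carlo estimate of the expected annual loss across all scenarios."""
    total = 0.0
    for _ in range(trials):
        for s in scenarios:
            if random.random() < s["p_per_year"]:
                total += random.uniform(*s["impact"])
    return total / trials

print(f"Expected annual loss: ~{simulate_annual_loss(scenarios):,.0f}")
```

Even a crude model like this helps us prioritise: it shows which scenarios dominate the expected loss and therefore where assurance effort buys the most.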
It also means that we’re able to take multiple perspectives. These range from the most suitable technology bundles (what sensors to use, which algorithms to implement and which verification and validation strategy to deploy), to socio-technical aspects such as finding the right people with the right skillsets, to integrating philosophies from different disciplines such as safety and security.
This high-level viewpoint is where the real opportunity lies. If we spot these systemic issues early enough, we can address them well before implementation of the system becomes irreversible and costly delays come into play. To ensure the right strategy, several key aspects need to be considered. First, we need to clarify commercial constraints such as cost or risk appetite, encompassing any legislation or regulation that could affect development. We can then draw out the larger technical constraints and requirements – latency, power, size, memory and so on. The security strategy of the system is then created in tandem.
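In practice, I find it helps to write these constraints down as a single, explicit envelope that every later design decision can be checked against. Here is a minimal sketch; the field names and limits are purely illustrative assumptions:

```python
from dataclasses import dataclass

# Illustrative constraint envelope for the harvesting robot's security
# design. All names and limits below are assumptions for the sketch,
# not figures from a real programme.
@dataclass
class ConstraintEnvelope:
    max_unit_cost_gbp: float   # commercial: bill-of-materials ceiling
    risk_appetite: str         # commercial: e.g. "low", "moderate"
    max_latency_ms: float      # technical: control-loop deadline
    max_power_w: float         # technical: power budget for security functions
    max_memory_kb: int         # technical: footprint on the embedded controller
    regulations: tuple         # legal: regimes the design must satisfy

envelope = ConstraintEnvelope(
    max_unit_cost_gbp=150.0,
    risk_appetite="low",
    max_latency_ms=20.0,
    max_power_w=2.5,
    max_memory_kb=512,
    regulations=("machinery safety", "data protection"),
)
print(envelope)
```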
After this vision-setting exercise, we would conduct a first pass of a threat analysis or security risk assessment. These probabilistic activities are useful in determining where to focus initial efforts. A layer down from this, we would optimise between the risks and the constraints, and use those insights to choose the most appropriate technical solutions. These might be the right cryptographic mechanism (cryptographic operations carry computational overhead, which affects latency), the right access control policies (overly complex access control rules can negatively affect user behaviour), the right identity management system (a powerful one may be both expensive and too heavy for the usage envisaged) and so on.
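To show the shape of that optimisation, here is a deliberately simple sketch: hypothetical candidate controls are filtered against the technical constraints, then ranked by risk reduction per unit of latency consumed. Every name and figure here is an assumption for illustration only:

```python
# Hypothetical candidate security controls for the harvesting robot.
# Latency and memory figures are illustrative assumptions.
candidates = [
    {"name": "AES-128 link encryption", "risk_reduction": 0.30, "latency_ms": 2.0,  "memory_kb": 16},
    {"name": "ECDSA firmware signing",  "risk_reduction": 0.25, "latency_ms": 0.0,  "memory_kb": 48},
    {"name": "Full PKI identity mgmt",  "risk_reduction": 0.35, "latency_ms": 15.0, "memory_kb": 600},
]

MAX_LATENCY_MS = 20.0   # assumed control-loop budget
MAX_MEMORY_KB = 512     # assumed footprint budget

def feasible(c):
    """Reject any control that breaks the technical constraints outright."""
    return c["latency_ms"] <= MAX_LATENCY_MS and c["memory_kb"] <= MAX_MEMORY_KB

# Rank feasible controls by risk reduction per unit of latency consumed -
# a crude proxy for "most protection within the constraint envelope".
ranked = sorted(
    (c for c in candidates if feasible(c)),
    key=lambda c: c["risk_reduction"] / (1.0 + c["latency_ms"]),
    reverse=True,
)
for c in ranked:
    print(c["name"], round(c["risk_reduction"] / (1.0 + c["latency_ms"]), 3))
```

Note how the heavyweight identity management option is rejected outright by the memory budget: exactly the kind of constraint-driven trade-off described above.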
A key practice within all of the above is to monitor the system from the start. Deciding early on what, when, where and to what degree this monitoring takes place is absolutely essential. Fortunately, our holistic viewpoint, considering the many stakeholders and the various assurance techniques, can give us insights into the right amount and the right types of monitoring even before any data is gathered. Early planning for mitigation can also be performed, both in light of the planned monitoring and based on any earlier opportunity or risk analyses.
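As a final sketch, here is one way those early monitoring decisions might be written down: a sampling regime per signal, driven by assumed risk scores from the earlier analysis. Again, every name and threshold is an illustrative assumption:

```python
# Assumed risk scores per signal, taken from the earlier (illustrative)
# risk analysis. Higher scores demand closer monitoring.
risk_scores = {
    "actuator_commands": 0.9,   # safety-critical decisions
    "gnss_position":     0.7,   # spoofing-sensitive navigation input
    "telemetry_uplink":  0.5,   # data-service integrity
    "battery_health":    0.2,   # mostly operational, low security risk
}

def sampling_plan(scores):
    """Assign what to log and how often, proportional to assessed risk."""
    plan = {}
    for signal, score in scores.items():
        if score >= 0.8:
            plan[signal] = "log every event, alert in real time"
        elif score >= 0.5:
            plan[signal] = "log every event, review daily"
        else:
            plan[signal] = "sample hourly, review weekly"
    return plan

for signal, regime in sampling_plan(risk_scores).items():
    print(f"{signal}: {regime}")
```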
To conclude, then: there will always be a place for the context-specific design and implementation of particular solutions that address low-level issues. But in many ways this is akin to treating the symptom rather than the root cause. Large systemic issues will always require systemic, multi-layered solutions, and the best way to achieve that is to start early and think big. Drop me an email if you’d like to know more – I’d be more than happy to discuss the topic in further detail.