How will the risks and downsides of AI be managed so that society can benefit from the transformational impact it offers? This is a theme we’ve grappled with for as long as we’ve been involved in developing powerful new algorithms and AI systems.


In the late 2010s, concerns around the potential risks associated with AI and automation led to vigorous debate around the ethics and principles that should be applied. Cambridge Consultants was among the technology developers who identified a consensus forming around a few core principles for responsible AI: Responsibility, Explainability, Accuracy, Transparency and Fairness.

In recent years, similar principles have been adopted by national and international agencies, such as the OECD. As the debate moves on – and as an industry we collectively continue to learn from real-world deployments rather than abstract research – it seems like a good time to revisit those principles. What should we be considering when we develop and deploy AI?  

Who is responsible? 

Part of the advantage of AI systems is that they can operate with a degree of autonomy. Indeed, that’s a significant part of their attraction. This raises the question of who is in control and who is responsible when something goes wrong. Equally, it can be hard to establish whether an adverse outcome was foreseeable, which is crucial in apportioning culpability and hence incentivising responsible use of AI.

In my view, the question to ask ourselves is: do we have appropriate governance in place to demarcate responsibility and hold the right people to account? 

Is it robust? 

As AI systems are deployed in increasingly autonomous applications, with humans moving ‘out of the loop’, they need to maintain a high standard of robustness against adverse outcomes. I propose starting with a few key questions:

  • Is it biased? Bias creeps in from your data, from your team, from the way your workflow is structured and from other unexamined assumptions. Critically examine those biases, assess their impact and eliminate them or adapt to account for them. At the very least, your system shouldn’t be sustaining or aggravating known human biases (a minimal sketch of one such check follows this list)
Figure 1: Types of bias
  • Is it secure? Any digital system is vulnerable to adversarial attack and sometimes accidental damage. Adversarial examples – something as simple as a piece of tape on a speed limit sign that tricks a Tesla into accelerating – make ML-based systems especially vulnerable
     
  • Is it robust to new environments? Humans are great at learning new principles from a small number of examples. AI is not. Models trained in one context are not necessarily applicable in others. So, if the world changes – for example, asking a robot trained to pick apples to pick a soft berry – the AI is not necessarily transferable without some rework
     
  • Does your architecture depend on any critical links? Does your model inference run in the cloud? If so, it’s probably dependent on a network connection and a cloud services provider. If you are reliant on infrastructure, data or algorithms from a third party, what would you do if those were withdrawn or damaged? These are all risks that can be managed, but only if you understand them. Edge-based architectures might play a role in defending against patchy communications, and custom algorithms may help if your cloud provider becomes a competitor
     
  • Are values aligned? Bostrom’s classic paperclip maximiser thought experiment highlights the importance of explicitly coding stated and unstated values into AI systems. As Reinforcement Learning systems become more sophisticated, monitoring those values and updating them to reflect current thinking will become ever more critical
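
On the bias question above, one simple starting point is to compare a model’s error rates across groups and flag large gaps. The sketch below is illustrative only: the column names, toy data and tolerance are hypothetical, and a real assessment would use fairness metrics agreed with domain experts.

# Illustrative sketch: surface potential bias by comparing error rates
# across groups. Column names, toy data and tolerance are hypothetical.
import pandas as pd

def group_error_rates(df: pd.DataFrame, group_col: str,
                      label_col: str, pred_col: str) -> pd.Series:
    """Return the prediction error rate within each group."""
    errors = (df[label_col] != df[pred_col]).astype(float)
    return errors.groupby(df[group_col]).mean()

# Toy data standing in for real model predictions
df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "label":      [1, 0, 1, 1, 0, 0],
    "prediction": [1, 0, 0, 0, 1, 1],
})

rates = group_error_rates(df, "group", "label", "prediction")
print(rates)

# If one group's error rate is far above the others', investigate the
# data, features and labelling process before deploying.
if rates.max() - rates.min() > 0.1:   # hypothetical tolerance
    print("Warning: error rates differ substantially across groups")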

Is it explainable? 

Explainability and transparency are closely related. Explainability refers to our ability to open up the black box of ML systems. Transparency refers to our openness to sharing those explanations and to accepting testing, review and criticism.

Explainability isn’t straightforward and it means different things to data subjects, end users, developers and commercial leaders. Developers may be interested in explaining performance so that it can be improved. Data subjects and end users may wish to understand a specific decision – why was my credit application rejected?

It would be impossible to fully explore this topic in such a short article, but I’d suggest you check out the guidance produced by the Information Commissioner’s Office in the UK as an example of the expectations that may emerge in other jurisdictions. Tools from OpenAI, Google and IBM are also useful.
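
For a flavour of what an explanation can look like in practice, the sketch below uses permutation importance from scikit-learn – a generic technique on synthetic data, not one of the specific vendor tools mentioned above – to show which features a model leans on most when making decisions.

# A minimal sketch of one common explainability technique: permutation
# importance. The dataset and model are synthetic stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much performance drops;
# large drops indicate features the model relies on heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")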

Is it sufficiently accurate? 

All else being equal, we would prefer to have the most accurate AI possible. In reality, accuracy often has to be traded off against a multitude of other factors, most notably explainability and robustness.

I’d encourage you to think in terms of ‘minimum viable intelligence’. This is not just because striving for 100% accuracy will prevent your project from ever launching. It is also because a clear idea of where the acceptable boundaries for accuracy lie will enable intelligent trade-offs against factors like explainability and robustness.
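
As a rough illustration of ‘minimum viable intelligence’, the sketch below accepts the simplest – and typically most explainable – model that clears an accuracy target agreed up front, rather than defaulting to the most complex model available. The target, data and candidate models are hypothetical.

# Illustrative sketch: pick the simplest model that clears a pre-agreed
# accuracy threshold. Threshold, data and candidates are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

ACCEPTABLE_ACCURACY = 0.85  # agreed with stakeholders up front

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Candidates ordered from most to least explainable
candidates = [
    ("logistic_regression", LogisticRegression(max_iter=1000)),
    ("random_forest", RandomForestClassifier(random_state=0)),
    ("gradient_boosting", GradientBoostingClassifier(random_state=0)),
]

for name, model in candidates:
    accuracy = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {accuracy:.3f}")
    if accuracy >= ACCEPTABLE_ACCURACY:
        print(f"Selecting {name}: the simplest model meeting the target")
        break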

Is it fair? 

Does your AI system use data reasonably, respect privacy and honour people’s informed consent? As with any potential controversy in business, science or engineering, I urge you to put yourself in the shoes of each of your stakeholders and ask: is the impact of our AI on them fair? How will it be perceived?

As legislators around the world limber up to extend regulations on AI – most notably in the European Union, United States and Singapore, where AI policy is already most mature¹ – it is important that the AI community maintains the trust of people and politicians. That way, there should be no reason for them to jump to excessive rules, and an environment of trust will help foster innovation and adoption of some transformative applications of AI.

So, please do ask yourself: how responsible is my AI? If you would like to talk more about pushing the boundaries of what’s possible with AI in your industry, whilst developing responsibly, please get in touch.

 

 

¹ These are the highest-scoring regions in the Government Artificial Intelligence Readiness Index 2019. Available at https://www.oxfordinsights.com/ai-readiness2019

Author
Michal Gabrielczyk
Head of Edge AI

Michal works with clients to explore how their businesses can be transformed with the right mix of cutting-edge technologies. Michal helps our customers apply Cambridge Consultants’ world-leading expertise in AI, silicon, sensing and connectivity to realise their ambitions with AI at the edge.