Cambridge Consultants today unveiled five missing ingredients for responsible governance of Artificial Intelligence (AI). A new report, to be published at Mobile World Congress, aims to combat the fear stoked by the multitude of headlines portraying deep learning and machine learning as an all-encompassing force taking over industry, society and politics.

A recipe for responsible governance of AI

The report states that while the research and application of AI techniques are quickly coming to the attention of governments across the globe, such adoption often lacks a holistic framework of appropriate governance.

The report argues that the key to successful collaboration, and thus to responsible AI deployment in government, lies in the following five factors:

  • Responsibility: There needs to be a specific person responsible for the effects of an autonomous system’s behaviour. This is not just for legal redress but also for providing feedback, monitoring outcomes and implementing changes
  • Explainability: It needs to be possible to explain to people impacted (often laypeople) why the behaviour is what it is. This is vital for trust
  • Accuracy: Sources of error need to be identified, monitored, evaluated and if appropriate mitigated against or removed
  • Transparency: It needs to be possible to test, review (publicly or privately), criticise and challenge the outcomes produced by an autonomous system. The results of audits and evaluations should be made publicly available and explained
  • Fairness: The way in which data is used should be reasonable and respect privacy. This will help remove biases and prevent other problematic behaviour from becoming embedded

The timing of the report is crucial: the UK’s House of Commons Select Committee investigation into robotics and AI concluded that it was too soon to set a legal or regulatory framework, but did highlight priorities that would require public dialogue and, eventually, standards or regulation. These were: verification and validation; decision-making transparency; minimising bias; privacy and consent; and accountability and liability. This is now being followed by a further House of Lords Select Committee investigation, which will report in Spring 2018[1].

In February 2017 the European Parliament Legal Affairs Committee made recommendations about EU-wide liability rules for AI and robotics. MEPs also asked the European Commission to review the possibility of establishing a European agency for robotics and AI, which would provide technical, ethical and regulatory expertise to public bodies[2].

While governmental interest in AI continues to evolve, so does the range of problems and markets to which the technology is applicable. Such breadth of applicability raises a problem: AI is not specific to an industry or sector, but regulations often are, meaning its foundational infrastructure could be left vulnerable.

AI has been hyped as both the solution to business and personal challenges and a threat to our creativity, autonomy and livelihoods. Businesses face education gaps and are therefore concerned about where AI is heading and what impact it will have on their development. With so much focus on, and preparation for, what the future of AI might look like many years from now, we must not lose sight of the immediate priorities, and must instil a framework that shapes the way governments and businesses deliver today’s narrow AI applications.

Commenting on the report, Michal Gabrielczyk, Senior Technology Strategy Consultant, said: “These principles, however they might be enshrined in standards, rules and regulations, give a framework for the field of AI to flourish within government whilst minimising risks to society and industry from unintended consequences. Only by laying the groundwork and guidelines for effective, reliable AI today can we build consumer faith and enable an exciting future, while maintaining a firm control of costs as AI-based outputs evolve.”

Cambridge Consultants will release its full report, titled "AI: UNDERSTANDING AND HARNESSING THE POTENTIAL", at Mobile World Congress, taking place in Barcelona from February 26th to March 1st. Register to receive the full report or visit Cambridge Consultants in Hall 7, stand 7B21.

Notes to editors

Cambridge Consultants develops breakthrough products, creates and licenses intellectual property, and provides business consultancy in technology-critical issues for clients worldwide. For more than 50 years, the company has been helping its clients turn business opportunities into commercial successes, whether they are launching first-to-market products, entering new markets or expanding existing markets through the introduction of new technologies. With a team of more than 700 staff, including engineers, scientists, mathematicians and designers, in offices in Cambridge (UK), Boston (USA) and Singapore, Cambridge Consultants offers solutions across a diverse range of industries including medical technology, industrial and consumer products, digital health, energy and wireless communications. For more information, visit:

Cambridge Consultants is part of Altran, a global leader in engineering and R&D services which offers its clients a new way to innovate. Altran works alongside its clients on every link in the value chain of their project, from conception to industrialisation. In 2015, the Altran group generated revenues of €1.945bn. With a headcount of more than 27,000 employees, Altran is present in more than 20 countries. For more information, visit:

Media downloads

PR-PR18-002 v0.8.jpg, 0.2MB
PR-PR18-002_App2 v0.8.jpg, 0.1MB


Cambridge Consultants

Public Relations Manager
Cambridge Consultants

European agency


Korean agency

Ju-yeon Oh, Deputy Manager