When you are in a conversation, you naturally discern a wealth of information about the other person: their mood, age, gender, and accent. This gives you insight into their meaning beyond the actual words spoken.
Artificial Intelligence can be used to give a similar level of insight into the context of speech, enhancing and ‘humanizing’ voice interfaces. This could deliver a wide range of benefits, for example:
- For the consumer: removing the frustration of having to spell out your meaning, e.g. if a British person asks for ‘chips’ they want French fries, whereas an American will be expecting potato chips (known as crisps in British English!).
- For a brand: how someone says something is often as revealing as what they actually say, providing real feedback on what consumers truly think.
Using AI, we can develop voice interfaces that give more appropriate responses to commands, deliver greater insight into consumers, and improve the overall user experience.
Unpacking the black box
We’ve been doing some development work on a really important aspect of AI algorithms: Explainable AI. Currently, many AI algorithms are ‘black boxes’, making them difficult to understand and trust. Imagine a computer tells you it thinks you’re feeling ‘sad’… wouldn’t you want to know why it thinks this? And if you’re a company making business-critical (or even safety-critical) decisions based on an algorithm’s output, how can you rely on that output if you don’t know how the algorithm reached it?
We’ve developed an Explainable AI system that lets us probe the inner workings of an algorithm and visualize how it reached its conclusion; it’s even understandable by non-math-geeks! Understanding the workings of AI algorithms will help build trust in the results, make them easier to audit, and provide insights for improving them.
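To give a flavor of the general idea, here is a minimal sketch (not our actual system) that trains a small decision tree with scikit-learn and prints the human-readable rules it applied when classifying one input. The ‘sad’/‘happy’ labels and the feature names pitch_variance and speech_rate are purely illustrative:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy training data: two hypothetical voice features and a 'sad'/'happy' label.
X = np.array([[0.2, 0.7], [0.9, 0.1], [0.4, 0.6], [0.8, 0.3]])
y = np.array([0, 1, 0, 1])  # 0 = 'sad', 1 = 'happy'
feature_names = ["pitch_variance", "speech_rate"]  # illustrative names

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

sample = X[:1]
tree = clf.tree_
# decision_path returns the tree nodes visited for each sample.
for node in clf.decision_path(sample).indices:
    if tree.children_left[node] == tree.children_right[node]:
        continue  # leaf node: no splitting rule to report
    feat = tree.feature[node]
    went_left = sample[0, feat] <= tree.threshold[node]
    print(f"{feature_names[feat]} = {sample[0, feat]:.2f} "
          f"{'<=' if went_left else '>'} {tree.threshold[node]:.2f}")
print("prediction:", "sad" if clf.predict(sample)[0] == 0 else "happy")
```

Each printed line is one plain-English reason behind the prediction, which is the essence of what an explainable system surfaces.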
It’s also looking likely that explainability may become a requirement: the Information Commissioner’s Office in the UK recently issued guidance on the GDPR (General Data Protection Regulation). With AI being used across a multitude of business operations, we should all take heed of what such regulation could mean if AI systems are going to have to explain their actions.
We have been having fun experimenting specifically with accents. We’ve put together a demo voice classifier, based on gradient boosted decision trees, that will tell you whether it thinks you are British or American and, more importantly, WHY it has made that decision.
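For readers who like code, here is a hedged sketch of the same recipe: a gradient boosted classifier trained on made-up acoustic features, with per-feature contributions for a single prediction computed via the open-source shap library. The feature names, the toy data, and the choice of shap are illustrative assumptions, not a description of our demo’s actual stack:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
import shap  # pip install shap

rng = np.random.default_rng(0)
# Hypothetical acoustic features; a real system would extract these from audio.
feature_names = ["vowel_length", "rhoticity", "t_flapping"]
X = rng.random((200, 3))
# Toy labeling rule: strong rhoticity and t-flapping read as 'American' (1).
y = (X[:, 1] + 0.5 * X[:, 2] > 0.8).astype(int)

model = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)

sample = X[:1]
# TreeExplainer attributes the prediction (in log-odds) across the features.
contributions = shap.TreeExplainer(model).shap_values(sample)[0]

label = "American" if model.predict(sample)[0] == 1 else "British"
print(f"prediction: {label}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.3f} toward 'American'")
```

In the demo itself, the features are of course derived from live audio, and the explanation is rendered visually rather than printed to a console.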
You can see in the screenshots below where it scores words as British (blue) or American (red), along with the workings of the decision trees themselves.
We are going to be demonstrating this system at CES in Vegas in January. We’ll be in the Sands Expo at Booth 44137, so come along to try it out (and see if you can fool it!). Let us know if you’d like to arrange a meeting to learn more about our work and how we could support you.
We’ll also have a number of other demos at CES showcasing our work in machine learning, from a deep learning application that creates artwork from just a few sketched lines, to an augmented reality system that helps production line workers follow a workflow more easily, and a digital biomarkers system that gives insight into your health.