Interviewed at Innovation Day, Cambridge Consultants' Head of Artificial Intelligence, Tim Ensor, discusses how businesses can approach the challenge of deploying AI.

Tim introduces the concepts of Minimum Viable Intelligence and continuous learning to improve AI performance. He also discusses emerging techniques such as simulated and synthetic data, domain adaptation and few-shot learning.

One area of particular interest is teaching a system to have a human-like sense of intuition, so that it performs better in the highly complex challenges facing businesses today.


Understanding & harnessing the potential of AI

Richard: I'm here at Innovation Day with my colleague Tim Ensor, our Head of AI. Welcome, Tim. So today we've been talking about understanding and harnessing the potential of AI, and I'm most interested in that latter part. So where does a business begin harnessing the potential of AI?

Tim: So it's an important topic, and lots of the people we speak to are trying to grapple with it, so we're proposing an approach which is very much wrapped up with this idea of Minimum Viable Intelligence. As you can imagine from that phrase, it's borrowing some ideas from agile software development, but the fundamental idea is that you build and develop your initial version of an AI system with enough capability that you can launch something, get it into the market and deliver some value for customers. Now, in the sphere of AI we have a slightly different motivation for doing that than the agile development methodologies we're borrowing from, and it is this: by getting something into the field, we can start collecting more data, and by collecting more data from users actually using a system, you can feed that back into the training of your AI and continually improve its performance. So having a philosophy of asking what you can get into the market, to put it in the hands of users, deliver value and collect more data, means you can start improving the performance of that system and incrementally expand the range of user needs you can address.

Richard: That's an interesting idea, Minimum Viable Intelligence. Are there any examples that people might recognize of that happening? I'm just interested in being able to picture that.

Tim: Yes, sure. Actually, most of the voice assistants we're familiar with in our homes today have gone through that kind of process. The very early voice assistants - the Google Homes, Alexas and Siris - were quite formulaic in the way you could interact with them. They had quite a structured grammar, and you had to phrase your requests in a very specific way, but that was okay: people tried them, interacted with them and got some value out of them. And clearly the businesses behind those products, having got them into the market where users could interact with them, started to collect a whole bunch of data. That data is what has allowed them to continually improve the performance of those systems.

Richard: Yes, that's interesting. So your assertion is that those voice assistants are significantly better than the products that launched three or four years ago. I wonder if people are conscious of that, or whether it's unconscious and they just seamlessly improve over time.

Tim: No, I think that's true. They continually push out new updates, not always to the awareness of the people using them, but they're definitely continually updated. I mean, if you remember when they were first launched, you had to specify the skill, or the kind of task, that you wanted them to perform, and you don't have to do that anymore. They have much more of an ability to extract the context from what you're saying.

Richard: That's interesting. So you've presented another idea, and I wonder how easy it is for organizations: the idea of iterating over time, repeatedly. Does that come naturally to most organizations you know? How do you do that?

Tim: I think lots of organizations who are innovating are quite familiar with the idea of iterative innovation, and this idea of Minimum Viable Intelligence very much fits into that model. The challenge with an AI innovation is that you really do need to get something into the market so that you can start collecting that data and that feedback, and then you can fold the additional data back into another launch of another product. That can either be to iteratively improve the performance of the product, or in some cases we see organizations putting a particular product into the market almost solely to collect data, because they want to target another application. So there are a number of different ways you can see that play out, but I think many organizations familiar with innovation understand the iterative process; they're just starting to get to grips with doing that for AI-based products.

Richard: And how about data? You know there always used to be this sense that you needed vast amounts of data to be able to get anything useful from a machine learning algorithm. Is that still the case? What's happening with data and the need for piles and piles of the stuff?

Tim: Sure, absolutely, and I think that is a very common challenge many of the customers we work with are facing. There are two sides to it. On the one hand, being able to put systems into the field allows you to collect data, so there is always that underlying need for some level of data to train your machine learning algorithms on. But you're right, there are a number of different research areas specifically trying to target that problem, and we're involved in moving each of them forward. They span from using more simulated and synthetic data - being able to completely simulate the environment your machine learning algorithm has to work in, train the model on it, but then achieve high performance in the real world from that synthetic data - to techniques like domain adaptation, where you train for one task but can still achieve performance on another. Things like few-shot learning and meta learning are some of the emerging techniques specifically targeting this problem of achieving good performance with limited training data. And we're working on deploying those to give our customers the edge in these challenges.
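To make the few-shot idea concrete, here is an illustrative sketch (not a Cambridge Consultants method) of the prototype-based approach used by some few-shot classifiers: each class is summarised by the mean of its handful of "support" examples, and a new point is labelled by its nearest prototype. In practice distances are computed in a learned embedding space; here plain 2-D points stand in for embeddings, and all names are hypothetical.

```python
import numpy as np

def prototypes(support_x, support_y):
    # one "prototype" per class: the mean of its few labelled support examples
    classes = np.unique(support_y)
    return classes, np.stack([support_x[support_y == c].mean(axis=0) for c in classes])

def classify(query_x, classes, protos):
    # assign each query point to the class of its nearest prototype (Euclidean)
    d = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

# toy 2-way, 3-shot problem: two clusters in 2-D, three examples each
support_x = np.array([[0.0, 0.1], [0.1, 0.0], [-0.1, 0.0],
                      [3.0, 3.1], [3.1, 2.9], [2.9, 3.0]])
support_y = np.array([0, 0, 0, 1, 1, 1])
classes, protos = prototypes(support_x, support_y)
print(classify(np.array([[0.2, 0.2], [2.8, 3.2]]), classes, protos))  # → [0 1]
```

With only three examples per class there is nothing to "train" in the usual sense, which is exactly the appeal when labelled data is scarce.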

Richard: So what are some of these emergent techniques that we should be paying attention to?

Tim: So the couple that I mentioned are worth paying attention to. The ideas of few-shot learning and meta learning are very much targeted at this challenge of limited data sets. But meta learning in particular is also quite interesting, because we're talking about not just training an AI system to do a task, but training it to be good at learning new tasks. That means you suddenly open up the possibility of machine learning systems which can learn more effectively at the edge. So you can imagine autonomous agents out in the wild that are able to start learning from the few examples they've encountered, and to make use of that learning.
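One way to picture "training a system to be good at learning" is the gradient-based meta-learning family (MAML and its first-order variants). The sketch below, an assumption-laden toy rather than anything from the interview, meta-trains the initial weight of a one-parameter model across many related regression tasks, so that a single gradient step adapts it well to a new task:

```python
import numpy as np

rng = np.random.default_rng(0)

def task():
    # each "task" is a 1-D regression y = a * x with a different slope a
    a = rng.uniform(0.5, 2.0)
    x = rng.uniform(-1.0, 1.0, 10)
    return x, a * x

def grad(w, x, y):
    # gradient of the mean squared error for the model y_hat = w * x
    return np.mean(2 * (w * x - y) * x)

w0, alpha, beta = 0.0, 0.1, 0.01   # meta-init, inner and outer learning rates
for _ in range(2000):
    x, y = task()
    w_adapted = w0 - alpha * grad(w0, x, y)   # inner loop: adapt to this task
    w0 = w0 - beta * grad(w_adapted, x, y)    # outer loop (first-order MAML)

# after meta-training, one gradient step adapts the init to a brand-new task
x, y = task()
w_new = w0 - alpha * grad(w0, x, y)
```

The outer loop does not optimise performance on any single task; it optimises the starting point so that the inner loop's few-step adaptation works well on average, which is the essence of "learning to learn".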

Some of the other areas I'm particularly interested in at the moment are associated with teaching machines a sense of intuition. I think this is particularly important because many of the tasks that we as humans do are actually hugely complicated. Just take chess: there are more possible games of chess than there are atoms in the visible universe. It is an extremely complex task, but as humans we deal with it trivially. Take the example of cooking a meal or driving a car: the number of options and decisions we have to make is vast, and as humans we cope because we have learned experience, we have rules of thumb, we have a sense of intuition. The reality is that getting machines to perform intelligently in these contexts requires us to teach them something similar. And there is research, coming through quite rapidly now, which is starting to make that a reality: teaching a machine to look at the scenario it finds itself in, evaluate a set of options and, without having found the final answer, understand which choice will put it in a more favorable position. These kinds of ideas are definitely going to be crucial, in the quite short term, to using AI for some of the more complex tasks we're trying to apply it to.
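The kind of evaluation Tim describes - rating options without searching to a final outcome - can be sketched, in a deliberately simplified form, as action selection driven by a heuristic value function. Everything here (the `choose` function, the number-line game, the goal of 10) is a hypothetical illustration, not a description of any specific research:

```python
# choose a move by rating the state each move leads to with a heuristic
# "value" function, rather than searching all the way to a final outcome
def choose(state, moves, step, value):
    return max(moves, key=lambda m: value(step(state, m)))

# toy example: moving along a number line toward a goal at position 10
value = lambda s: -abs(10 - s)              # intuition: closer to the goal is better
best = choose(7, [1, 2, 3], lambda s, m: s + m, value)
print(best)  # → 3
```

In systems like game-playing agents, the interesting part is that the value function itself is learned from experience, standing in for the human player's "this position feels strong" intuition.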

Richard: So it sounds like a common theme through what you've just been saying is becoming more like humans, approaching problems in ways that humans are able to do. Is that still the game we're playing? Is there a lot more to go in imitating the way the human brain works?

Tim: Yes, absolutely there is. One of the things I find fascinating is that if you look right back to the very early days of artificial intelligence - the term was first defined in this context in 1955 - the original definition was essentially the problem of making a machine behave in a way where, if a human behaved like that, we would call it intelligence. That is the nub of the field of artificial intelligence. So yes, I think that is what we're trying to do: when we perceive a task which requires a level of human intelligence, can we get a machine to achieve the same task with the same performance?

Richard: And is there still a need for massive computers? Is that still one of the challenges here?

Tim: That definitely remains an element of the challenge. There is still the need to train these algorithms, so we still need that compute horsepower to train our AI. Now, there is a very different context when we start putting that into use in the real world, in what's called inference mode. There is a significant and growing level of interest in low-power, low-cost edge devices which are capable of running these AI models, and I think we're going to see a lot of that in the next year or so. New devices are coming out right now which we're helping customers to develop and deploy on, to get those AI models out to the edge. But you will still need that compute power. The only thing I'd add is that today I don't think compute is the limiting factor. The things I was mentioning - mathematical techniques for dealing with limited data and so on - are still allowing us to make quite substantial leaps forward through new approaches. So yes, compute is important, but I don't think it's going to be the limiting factor for progress.

Richard: Got it. Tim, that's very interesting, thank you very much.

Tim Ensor
Director of Artificial Intelligence

Tim is the Director of Artificial Intelligence at Cambridge Consultants. He works with clients across many sectors to help them achieve business impact with world-changing technology innovation. Tim has held a string of commercial leadership roles focused on innovation in fields including telecoms, logistics and energy, working with world-leading AI, robotics and connectivity technology. He is an electronic engineer, holds a Cambridge MBA, and is optimistic about using technology to make the world better.
