The next generation of human/machine interaction demands as much imagination as it does innovation. As the business world searches for competitive advantage, the desire to imbue digital products and services with deeper levels of understanding will only increase. Put another way, today’s AI-powered systems may seem smart, but they aren’t nearly as good at learning and thinking as we give them credit for.
Understanding and harnessing a transformative technology
Our purpose at Cambridge Consultants is to help clients accelerate the path from discovery to practical, high-impact application. That mandate has driven us to explore psychology as an ingredient rich in potential. Actually, we think of it as a key differentiator for future success – because we believe products, services and systems simply must get more psychologically compatible with their users.
We’ve created a dedicated research initiative under the banner of Human Machine Understanding (HMU) at our UK headquarters on the Cambridge Science Park. Two key questions dominate the team whiteboard and shape our daily thinking: what if machines had a psychological understanding of how we humans feel, and can we build a system capable of user insight at the level of a psychologist or behavioural scientist?
If you have any strong opinions, by the way, feel free to share them with us. One of the consequences of investing in exciting emerging technologies is the need to unearth the necessary exciting emerging talent. To maintain and increase momentum right now, we’d love to recruit an HMU Lead – someone who can confidently and comfortably straddle the boundaries of AI, digital, engineering and UX on the one hand and psychology and behavioural science on the other.
Human perception AI
The more strident voices amongst my colleagues insist the perfect person doesn’t exist. I disagree. There’s a large community of sharp scientific minds currently stretching horizons in lots of relevant areas, from affective computing to neurotech, human augmentation and brain computer interfaces. Then there’s cognitive robotics, applied general intelligence, artificial emotional intelligence and human perception AI…
As ever, of course, a cultural fit is vital. We need like-minded, curious souls with a natural allegiance to Cambridge Consultants’ anticipatory approach to technology. The decision to develop the HMU capability was taken by our Strategic Technology Group. Its role is to identify where the greatest potential for innovation sits. The team scans the tech landscape and invests in areas with potential for massive commercial impact. The idea is that we work ahead of the curve, so that expertise, facilities and infrastructure are already in place when our clients need them. The strategy has paid dividends most recently in AI and bioinnovation, with world-firsts in the fields of generative adversarial networks and DNA storage.
We’re setting out our HMU vision with particular care, because it’s vital to find the right balance between the strands of behavioural insight and engineering-based application. Our blueprint for HMU system design recognises the need for psychological insight to drive functionality. What will the user do next, what is their intention, can they handle bad news if the system fails, are they enjoying themselves, do they think they’re using the system successfully, are they struggling and if so, can the system adapt? There are plenty more questions, trust me!
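To make the blueprint concrete, here is a minimal, purely illustrative sketch of the idea that psychological insight drives functionality: an estimated user state feeding a simple adaptation policy. All of the field names, thresholds and actions below are invented for illustration and are not drawn from any real HMU system.

```python
from dataclasses import dataclass


@dataclass
class UserState:
    """Hypothetical estimate of a user's psychological state.

    In practice these scores would come from behavioural and
    physiological signals; here they are simply assumed inputs.
    """
    frustration: float  # 0.0 (calm) to 1.0 (highly frustrated)
    confidence: float   # how successful the user believes they are
    engagement: float   # 0.0 (disengaged) to 1.0 (fully absorbed)


def choose_adaptation(state: UserState) -> str:
    """Map an estimated user state to a system behaviour.

    A toy decision policy answering questions like 'are they
    struggling, and if so, can the system adapt?'
    """
    if state.frustration > 0.7:
        return "simplify_interface"     # reduce load on a struggling user
    if state.confidence < 0.3:
        return "offer_guidance"         # user doubts they are succeeding
    if state.engagement < 0.2:
        return "prompt_re_engagement"   # user appears to have tuned out
    return "continue_unchanged"
```

The point of the sketch is the shape, not the numbers: the hard research problem is producing a trustworthy `UserState` in the first place, at the level of insight a psychologist would bring.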
Judging the human state of mind
The potential applications for HMU are broad and include some of the really hot market areas such as self-driving vehicles. Here, it could enable a new kind of human/machine partitioning of responsibility. In other words, the system decides whether or not to hand over control by judging the human driver’s state of mind.
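As a sketch of what that partitioning decision might look like, here is a deliberately simplified, hypothetical gate: the vehicle hands control to the driver only if their estimated state of mind looks up to it. The inputs and thresholds are assumptions made for illustration; a real system would derive them from a perception stack (gaze tracking, physiological sensing and the like).

```python
def should_hand_over_control(attention: float, stress: float,
                             attention_min: float = 0.6,
                             stress_max: float = 0.5) -> bool:
    """Decide whether an autonomous vehicle should hand control to the driver.

    Purely illustrative: 'attention' and 'stress' are assumed scores
    in [0, 1] produced by some upstream driver-state estimator, and
    the thresholds are invented. Control is offered only when the
    driver appears sufficiently attentive and not overly stressed.
    """
    return attention >= attention_min and stress <= stress_max
```

Even this toy version makes the safety trade-off visible: set `attention_min` too low and the system hands over to a distracted driver; set it too high and it never hands over at all.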
Across the increasingly competitive area of digital services innovation, HMU could play a key role in making experiences ‘stickier’ and more attractive to users. That might mean achieving better and safer results from human operators in difficult situations. For example, deciding when and how to present bad news on system performance to someone already struggling to cope in a disaster scenario.
Other avenues we’re investigating include protecting users from psychological stress – limiting the harm done to moderators of unpleasant online content perhaps – and making technology more inclusive and comfortable by better supporting people with dementia or learning difficulties.
Our work in HMU is embryonic but I believe set for rapid acceleration. Alexa herself is already starting to detect when people are getting agitated with her, just as Amazon steps up its attempts to make the market-leading home voice assistant more emotionally intelligent. If you’d like to discuss any of these themes in more detail, please don’t hesitate to drop me an email. Oh, and if you know anyone up to the job, just let me know!