Human-Machine Understanding in AI can help us be more human

By Sally Epstein | Nov 6, 2021 | Consumer goods

So, whatever happened to the revolution? AI, they said, would spark a fundamental shift in the world order. Machine learning, they agreed, would automate the drudgery of existence and liberate society. Don’t get me wrong, we are witnessing mind-blowing breakthroughs and advances every day. But honestly, I’m restless, I’m dissatisfied, and I want more, sooner rather than later. I need technology to ‘get me’ on a deeper, emotional level – and that requires the exquisite synergy of Human-Machine Understanding.
You might have gathered that I’m somewhat passionate about this. I suppose it’s what comes of working just ahead of the technology curve here at Cambridge Consultants, anticipating which extremities of innovation are most likely to transform business beyond recognition and quite literally change the world. (I know, cool right?) Human-Machine Understanding, let’s call it HMU, is one of the lines of enquiry currently getting me out of bed in the morning.

I think, actually I’m sure, that it will shape a new age of empathic technology. In the not too distant future, we’ll be creating machines which comprehend us humans at a psychological level. They’ll infer our internal states – emotions, attention, personality, health and so on – to help us make useful decisions.

But let’s just press pause on the future for a moment and track how far we’ve come. Back in 2015, media headlines were screaming about the coming dystopia/utopia of artificial intelligence. On one hand, we were all doomed. Humans faced the peril of extinction from robots, or were at least at risk of having their jobs snatched away by machine learning bots.

On the other hand, many people – me included – were looking forward to a future where machines answered their every need. We grasped the fact that intelligent automation is all about augmenting human endeavour, not replacing it.

Training the future AI workforce

Five or six years on, we can look back on significant change. We have a plethora of institutes and academies training the future AI workforce, buoyed by multibillion-dollar resources. We have bigger datasets, bigger Graphics Processing Units (the GPUs that perform millions of calculations in parallel) and bigger neural networks (the brain-like systems of algorithms dedicated to perception).

All of the above has contributed to extraordinary breakthroughs and more excited headlines. Google DeepMind’s AlphaGo defeated the world’s number one Go player, much to the dismay of Ke Jie, who was ‘a little sad’ because he thought he’d played pretty well.

If you want a creative escape from quarantine, you can now go online to create a personalised poem, with the help of AI. And if you want a little light reading on the side, you can catch up with the thoughts of the GPT-3 language generator, writing here exclusively for The Guardian.

The status quo sucks

That’s all well and good, but I’m going out on a limb here to say that, in my opinion, the status quo sucks. I’d argue that there’s been remarkably little tangible progress in the products and services that you and I interact with in our personal and professional lives. Maybe I should rephrase that. There hasn’t been enough of the right kind of progress for me.

As I’ve already alluded to, we’ve made great strides in making machines with logical intelligence – but what about social, emotional or even ethical intelligence? Put it this way, I’m sure Google’s AI didn’t put its metaphorical arm around Ke Jie to console him in defeat.

Let me share my frustration here. My smart speaker happily misunderstands me six times in a row – and has no qualms about responding in the exact same way a seventh time. Infuriating. And let’s say I’m test driving a top-of-the-range motor. It might have all the AI-powered bells and whistles when it comes to sensing danger or staying in lane, but it hasn’t a clue whether I’m enjoying the drive or not. See where I’m going with this?

As I said at the outset, technology doesn’t get me, you or any of us. Every day I have to adjust to each piece of technology I interact with. Rather than technology accounting for me and my needs, I pour energy into adapting to it. I just think it should be the other way round.

I believe in a future where each and every piece of technology takes account of my emotions, behaviours and wants, to give me the best possible outcome. Instead of a passive interface, I expect products and services to understand my state and make decisions to aid me in my life. That’s not too much to ask, surely.

As we dig a little deeper into this, you might argue that there are already many products and services that make assessments of the human state. Wearables that help track our sleep quality, for example, or biomarkers that track our stress levels. That’s true, and to date we’ve been able to approach successively more difficult problems by treating each as a data problem to be fixed with bigger AI.

But here’s the thing: empathy does not scale with the amount of data processed. To move forward, I believe machines need to truly understand humans, and that can only mean one thing: HMU.

Progress in Human-Machine Understanding

There is currently no single signal that can be read from your brain or body to reliably tell a computer what you are feeling. But there are a variety of multisensory systems through which a computer can begin to infer information about your emotional state.

HMU is beginning to inch forward, but only just. If we continue on the current trajectory of innovation, we are unlikely to meet the milestones I’ve been talking about: technology is getting cognitively smarter, but there has not been equivalent progress in technologies that ‘understand’ us and interact with us seamlessly enough to best serve our needs. In this domain, I’m pleased to say that some inroads are being made.

To provide a top line for reference, I reckon three vital things are necessary to create technology that understands us:

  • The creation of a new multidisciplinary field
  • New models of human cognition and behaviour
  • Socially intelligent systems that learn naturally

Digging a little deeper still, let me share some of the specific HMU challenges I’m currently working on as part of the dedicated Cambridge Consultants Strategic Technology team. Context understanding is a key one: it is still not understood what type and number of modalities are needed to achieve the highest accuracy in affect classification (affect being the outward display of someone’s emotional state). In short, there’s plenty of work still to be done on incorporating contextual information into the affect classification process.
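
To make the idea of multimodal affect classification a little more concrete, here is a minimal, purely illustrative sketch of one common pattern: late fusion, where separately encoded modality features are concatenated with a context vector before classification. The modality names, feature dimensions and context encoding are hypothetical, not a description of our systems.

```python
# Illustrative late-fusion affect classifier (hypothetical dimensions and modalities).
import torch
import torch.nn as nn

class LateFusionAffectClassifier(nn.Module):
    def __init__(self, face_dim=128, voice_dim=64, context_dim=16, hidden=64, n_affects=5):
        super().__init__()
        # One small encoder per modality
        self.face_enc = nn.Sequential(nn.Linear(face_dim, hidden), nn.ReLU())
        self.voice_enc = nn.Sequential(nn.Linear(voice_dim, hidden), nn.ReLU())
        # Context (e.g. activity, time of day, social setting) enters as extra features
        self.head = nn.Sequential(
            nn.Linear(2 * hidden + context_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_affects),  # logits over affect classes
        )

    def forward(self, face_feats, voice_feats, context_feats):
        fused = torch.cat(
            [self.face_enc(face_feats), self.voice_enc(voice_feats), context_feats],
            dim=-1,
        )
        return self.head(fused)

# Toy usage with random tensors standing in for real sensor pipelines
model = LateFusionAffectClassifier()
face = torch.randn(8, 128)    # e.g. facial-expression embeddings
voice = torch.randn(8, 64)    # e.g. prosodic / vocal features
context = torch.randn(8, 16)  # e.g. encoded situational context
logits = model(face, voice, context)
print(logits.shape)  # torch.Size([8, 5])
```

The open question is precisely which modalities and what kind of context vector belong in that concatenation, which is why I describe this only as a pattern rather than an answer.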

Personalisation is another interesting area. Existing deep learning methods have mixed performance when it comes to human emotion detection. A one-size-fits-all machine learning model is inherently ill-suited to predicting outcomes like mood and stress, which vary greatly due to individual differences.
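
One way researchers approach this, offered here only as an illustrative sketch rather than a description of our own work, is to keep a shared feature extractor trained across many people and calibrate a small per-user component on a handful of labelled examples from each individual. All names, dimensions and values below are hypothetical.

```python
# Illustrative per-user calibration on top of a frozen shared encoder.
import torch
import torch.nn as nn

shared_encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # imagine this trained on many users
shared_encoder.requires_grad_(False)  # frozen: only the per-user head adapts

def calibrate_for_user(user_x, user_y, n_classes=3, steps=100, lr=1e-2):
    """Fit a lightweight per-user head on that user's few labelled samples."""
    head = nn.Linear(64, n_classes)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        with torch.no_grad():
            feats = shared_encoder(user_x)  # shared representation, not updated
        loss = loss_fn(head(feats), user_y)
        loss.backward()
        opt.step()
    return head

# Toy data standing in for one user's wearable features and self-reported mood labels
user_x = torch.randn(20, 32)
user_y = torch.randint(0, 3, (20,))
user_head = calibrate_for_user(user_x, user_y)
```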

There are ways to assess emotional state, including postural movements, facial expressions, physiological markers and language. But they must be combined in a way that is unique to each individual to best represent that person. Real-time understanding of human state, affect again, is crucial. It’s obviously not enough to fill in a survey and then take action a day later. There will be a way through this, starting perhaps by pushing the boundaries of neural interfaces, non-invasive measurement, wearables and so on.
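
As a toy illustration of what real-time fusion might look like, the sketch below blends per-modality affect scores as they stream in and smooths the estimate over time with an exponential moving average. The modality weights and smoothing factor are invented purely for illustration.

```python
# Toy streaming fusion of per-modality affect scores (all values hypothetical).
from dataclasses import dataclass, field

@dataclass
class StreamingAffectEstimator:
    # Per-modality trust weights (illustrative values only)
    weights: dict = field(default_factory=lambda: {
        "posture": 0.2, "face": 0.4, "physio": 0.25, "language": 0.15
    })
    alpha: float = 0.3     # smoothing factor: higher reacts faster to new readings
    estimate: float = 0.0  # running affect estimate, e.g. valence in [-1, 1]

    def update(self, readings: dict) -> float:
        """Fuse one time-step of per-modality readings and smooth over time."""
        fused = sum(self.weights[m] * v for m, v in readings.items() if m in self.weights)
        self.estimate = self.alpha * fused + (1 - self.alpha) * self.estimate
        return self.estimate

est = StreamingAffectEstimator()
frame = {"posture": 0.1, "face": 0.5, "physio": -0.2, "language": 0.3}
for t in range(3):
    print(t, round(est.update(frame), 3))
```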

Plenty to be going on with then, but in the meantime, I hope my snapshot has whetted your appetite for the potential of HMU. Please drop me a line to continue the conversation and watch this space for more details of our exploratory work. Intelligent systems will undoubtedly continue to improve in their ability to calm, comfort and soothe us, to earn our trust and rapport. And it will happen when neuroscientists and psychologists successfully join forces with engineers to teach computers to truly understand humans. Viva la revolution.

If you want to read more about some of the fascinating work we’re doing in this area, then read my other article – Unlocking Human-Machine Understanding through real-time emotional state monitoring

This article was originally published in Information Age – Human-Machine Understanding: how tech helps us to be more human

Expert

Innovation Director | Contact us

Sally is an Innovation Director at Cambridge Consultants, leveraging deep tech expertise for transformative solutions. Dedicated to pioneering breakthroughs with AI, SynBio, and Quantum technologies.
