Oh, what it is to be human… many of us have been musing on the big questions of our existence and will continue to do so as lockdown gradually loosens around us. In our periods of isolation, we were reminded of what we were denied: essential human contact, physical touch, our freedom to smell, see, hear and feel the wider world. For me, this phase has also highlighted the core component of my professional life, because human senses are the means by which digital services deliver value.
Delivering digital breakthroughs that transform markets
I set out the importance of human senses in my January article on service innovation. As the digital world matures, we seek to bridge the digital and physical worlds with connections that use more and more of our senses. So far, we’ve only scratched the surface with vision and touch – predominantly via screens – but there are more senses to be harnessed and much room to use the existing ones better.
A good way to spark a lively debate around the family dinner table is to pose the question of how many senses there are. Five, seven, nine… 23? Depending on the definition – do we include post-sensory cognitive and physiological processes, for example – all of those answers are correct. But to ease the task of exploring the opportunities, let’s go with my conservative selection of nine:
- Vision, seeing (including seeing speech, i.e. lip reading and facial movement)
- Hearing, audio (hearing speech, the acoustics around us)
- Smell
- Taste
- Touch
- Vestibular (balance, feeling gravity)
- Proprioception (the ability to close your eyes and still know where to touch your nose)
- Temperature
- Brain waves
Some of these reflect obvious digital connections: how we see websites on screens and mobile apps, how we hear the audible alerts of push notifications. We are familiar with the haptic engines in Apple Watches, and with swiping and pressing buttons by touch. But we have also tapped into the vestibular sense, using the accelerometer in smartphones to register a shake as a user interface command or to record movement in our health apps.
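To ground that, here is a minimal sketch of the pattern in UIKit, assuming an iOS app: the accelerometer-driven shake arrives as a motion event, and the app can answer through another sense with a haptic pulse. The class name is my own illustration.

```swift
import UIKit

// A minimal sketch: receive the accelerometer-driven shake gesture
// and answer it with a haptic pulse, so motion in becomes touch out.
class ShakeAwareViewController: UIViewController {

    // Becoming first responder lets this controller receive motion events.
    override var canBecomeFirstResponder: Bool { true }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        becomeFirstResponder()
    }

    // UIKit calls this when the accelerometer registers a shake.
    override func motionEnded(_ motion: UIEvent.EventSubtype, with event: UIEvent?) {
        guard motion == .motionShake else { return }
        // Respond through another sense: a physical haptic tap.
        UIImpactFeedbackGenerator(style: .medium).impactOccurred()
    }
}
```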
Brain-computer interfaces
Perhaps one of the most powerful yet untapped sensory assets is brain waves, especially the potential of brain-computer interfaces. There are already domestic consumer devices, such as the Muse 2 brain-sensing headband, which measure brain activity. Most of these devices are used in health and wellness, but quite soon the measurement of sentiment and feelings will be combined into other digital service interactions.
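For a flavour of what such a service might compute, here is a deliberately simplified, hypothetical sketch: estimating the power in the alpha band (roughly 8–12 Hz, loosely associated with relaxed states) from a window of EEG samples, using a naive discrete Fourier transform. Real headbands such as the Muse 2 ship their own SDKs; the function names and the relaxation score below are my own illustration, not any vendor’s API.

```swift
import Foundation

// Hypothetical sketch: estimate the power in a frequency band from a
// window of EEG samples using a naive discrete Fourier transform.
func bandPower(samples: [Double], sampleRate: Double,
               lowHz: Double, highHz: Double) -> Double {
    let n = samples.count
    var power = 0.0
    // Frequency resolution of the DFT is sampleRate / n per bin.
    for bin in 0..<(n / 2) {
        let freq = Double(bin) * sampleRate / Double(n)
        guard freq >= lowHz && freq <= highHz else { continue }
        var re = 0.0, im = 0.0
        for (t, x) in samples.enumerated() {
            let angle = 2.0 * Double.pi * Double(bin) * Double(t) / Double(n)
            re += x * cos(angle)
            im -= x * sin(angle)
        }
        power += (re * re + im * im) / Double(n * n)
    }
    return power
}

// An illustrative relaxation score: alpha power relative to a broad 1–30 Hz band.
func relaxationScore(samples: [Double], sampleRate: Double) -> Double {
    let alpha = bandPower(samples: samples, sampleRate: sampleRate, lowHz: 8, highHz: 12)
    let total = bandPower(samples: samples, sampleRate: sampleRate, lowHz: 1, highHz: 30)
    return total > 0 ? alpha / total : 0
}
```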
What is the business angle here? Fundamentally, businesses deliver value and information via these interfaces. Screens were the interface of the information age. But now, with IoT and advances in technologies such as 5G NR (New Radio), network slicing and edge computing driving unprecedented connectivity, we are moving into the experience age. For me, the true business opportunity is using all the interfaces to connect into the human condition and to allow interaction with the objects around us.
I’m talking about elegantly simple things, like facial recognition, speaking to a home heating system or a door opening after detecting movement. And it’s not just about connecting to humans to deliver information and receive instructions, it’s about understanding intention, emotion and the reaction of humans to these interfaces. The more deeply they understand, the more likely businesses are to deliver better customer experiences.
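To illustrate how accessible one of those ‘elegantly simple things’ has become, here is a minimal sketch of on-device face detection with Apple’s Vision framework – a sketch that assumes a still image as input, with error handling trimmed for brevity.

```swift
import Vision
import CoreGraphics

// A minimal sketch: find faces in a still image with Apple's Vision
// framework. A full system would go further, to identity, expression
// or attention; this only locates the faces.
func detectFaces(in image: CGImage, completion: @escaping ([VNFaceObservation]) -> Void) {
    let request = VNDetectFaceRectanglesRequest { request, _ in
        // Each observation carries a normalised (0-1) bounding box.
        completion((request.results as? [VNFaceObservation]) ?? [])
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    // In production, perform off the main thread and handle the error.
    try? handler.perform([request])
}
```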
The digital-physical bridge
These interfaces are the crucial digital-physical bridge. There is always a physical interface to humans with digital services. The task of ensuring that it is done effectively is referred to as converged design – something of a strength here at Cambridge Consultants. We have capabilities in designing physical products, in areas such as industrial design, as well as digital interfaces and human-computer interfaces – all under the collective umbrella of User Experience (UX).
On the human side we have the senses; on the digital side we have the array of digital interfaces… the so-called ambient interfaces. We have moved beyond the screens of desktop computers, smartphones and tablets. We have wearables in watches and health trackers, and nearables in Bluetooth Low Energy devices such as beacons or glasses (we await the successor to Google Glass!). For voice we now have personal voice assistants (PVAs) such as Alexa and Siri. We also acknowledge the guidance of Golden Krishna, who suggested that ‘The best interface is no interface’, in reference to the belief that the digital-physical bridge can be achieved in ways other than a screen.
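As a concrete example of a nearable at work, here is a minimal sketch of ranging a Bluetooth beacon with Apple’s Core Location framework, so that physical proximity rather than a screen becomes the trigger. The UUID is a placeholder; a real deployment needs its own beacon identity and location permissions.

```swift
import CoreLocation

// A minimal sketch: listen for a nearby BLE beacon so that walking
// up to an object, not tapping a screen, drives the interaction.
class BeaconListener: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    // Placeholder UUID: substitute the identity of your own beacons.
    private let beaconUUID = UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startRangingBeacons(satisfying: CLBeaconIdentityConstraint(uuid: beaconUUID))
    }

    // Called repeatedly with the beacons currently in range.
    func locationManager(_ manager: CLLocationManager, didRange beacons: [CLBeacon],
                         satisfying constraint: CLBeaconIdentityConstraint) {
        if let nearest = beacons.first, nearest.proximity == .immediate {
            print("User is beside the beacon: trigger the ambient experience")
        }
    }
}
```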
Virtual reality (VR) and augmented reality (AR) both hijack our visual and hearing senses. The experience is distorted, and the human user is fooled into seeing something in addition to what is actually there. While VR and AR are in their infancy, they are perhaps a precursor to further augmentations and distortions of the other senses – for example, disturbing the vestibular system to create a sense of weightlessness, or making you believe you can smell something that is not actually in the air. VR has already been used in gaming, as in our HaptX example, and is now expanding into more industrial applications such as cobotics for control and movement.
Using the senses well
The use of ambient interfaces to stimulate the human senses follows a lifecycle: the vendors master the interface technology, the product managers work out the most appropriate use cases, the designers work out how best to design for the technology and the end users become more proficient in using them.
Let’s take a moment to think about screens and vision. The first smart apps were unwieldy when driven via a keyboard, but then the touchscreen interface unleashed the ability to point and swipe. Product managers saw new opportunities to move from mCommerce to retail banking apps to augmented reality mapping apps. Designers got better at UX design, thinking about the simplicity of the interface and the revolution of moving from skeuomorphism to flat design.
Users had to evolve from a simple touch to working out the difference between a tap, a hover, a force touch and a swipe. They increased their proficiency incredibly – you have only to watch a millennial’s fingers dance across a smartphone as they skip from Instagram to YouTube to WhatsApp to marvel at how the interface and the senses are connecting.
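Underneath that fluency sit surprisingly fine-grained signals. As a minimal sketch, here is how an app might separate a light tap from a deliberate hard press using the touch force that UIKit reports – the 0.6 threshold is illustrative, not a platform convention.

```swift
import UIKit

// A minimal sketch: track the pressure of a touch to distinguish a
// light tap from a deliberate hard press.
class PressureSensitiveView: UIView {
    private var peakPressure: CGFloat = 0

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first, touch.maximumPossibleForce > 0 else { return }
        // Normalise force to 0-1 across devices and remember the peak.
        peakPressure = max(peakPressure, touch.force / touch.maximumPossibleForce)
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        if peakPressure > 0.6 {
            print("Hard press: reveal the shortcut actions")
        } else {
            print("Light tap: perform the default action")
        }
        peakPressure = 0
    }
}
```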
In voice, and in messaging interfaces and chatbots, we are at an earlier stage. The audio detection and natural language recognition in PVAs have reached a level of sufficient usefulness. Conversational design is in its infancy. Designers understand the importance of brand voice and good dialogue design, but there are as many bad bots as good bots right now. Product managers are still not sure when a voice interface makes sense, because the technology is not always accurate and not every language or dialect is provided for. Users are having fun with voice interfaces, but they still curate their utterances in their heads before speaking… it’s not quite a natural interface yet. But quite soon we will all just naturally talk to most connected things.
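To make the detection layer concrete, here is a minimal sketch using Apple’s Speech framework to turn a recorded utterance into text that a conversational layer could then interpret – authorisation prompts and error handling are trimmed, and the function name is my own.

```swift
import Speech

// A minimal sketch: transcribe a recorded utterance to text with
// Apple's Speech framework. A real voice interface adds streaming
// audio, intent parsing and dialogue management on top of this.
func transcribe(fileURL: URL, completion: @escaping (String) -> Void) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else { return }
        let request = SFSpeechURLRecognitionRequest(url: fileURL)
        recognizer.recognitionTask(with: request) { result, _ in
            // Partial results stream in; act only on the final transcription.
            if let result = result, result.isFinal {
                completion(result.bestTranscription.formattedString)
            }
        }
    }
}
```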
We can expect the same cycle of evolution to happen with the other senses too. The implications will be profound. When we bridge the gap for the sense of smell to act as a digital interface, for example, perfumery will take its place as a new digital design skill.
If you have a new idea or concept to develop one of the senses into a new digital interface, we have the right combination of science and engineering skills to help make it a reality. If you want to make the best of the digital interfaces that already exist to connect the right senses in the best possible way, I can talk to you about our UI skills and Service Design Toolkit. The deeper your understanding of your interactions with customers, the better the experiences you can deliver. Drop me an email if you’d like to chat.