We help our customers develop ground-breaking ways to sense and control the physical world.

There’s a lot written about new sensors that are used in cars to make them more autonomous or safer. LIDAR, radar and externally facing cameras are often mentioned.

However, in the automotive market, we’re seeing an increasing drive to integrate more and more sensor technology inside the cockpit. 

This is set to facilitate exciting new applications but poses questions – what sensors should we use? How can they be integrated within the vehicle? And how can we store, interpret and communicate data? 

These are some of the questions I’ll be discussing in this article.

New applications in health and wellbeing

People can spend many hours each day in a car, so it’s a great place to take repeated measurements of health-related factors. This allows you to identify long-term patterns in things like blood pressure or mobility, providing far more intelligence than the snapshot you’d get at a visit to the doctor.
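To make that concrete, here is a minimal sketch of how repeated in-car readings could be turned into a long-term trend; the readings, the seven-day smoothing window and the alert threshold are all hypothetical, chosen purely for illustration.

import numpy as np

# Hypothetical example: one systolic blood pressure reading per day,
# captured automatically during the commute (values in mmHg).
readings = np.array([118, 122, 119, 121, 124, 120, 123, 125, 122, 126,
                     124, 127, 125, 128, 126, 129, 127, 130, 128, 131])

# Smooth out day-to-day noise with a seven-day rolling average.
window = 7
smoothed = np.convolve(readings, np.ones(window) / window, mode="valid")

# Fit a straight line to the smoothed series to estimate the drift per day.
days = np.arange(len(smoothed))
slope, intercept = np.polyfit(days, smoothed, 1)

if slope > 0.3:  # threshold chosen purely for illustration
    print(f"Upward trend of {slope:.2f} mmHg/day - worth flagging to the driver")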

Another area of interest is attention monitoring. Counterintuitively, this may become more important as cars become more autonomous: until full Level 5 autonomy is available, there will always be times when the driver is required to take the wheel. Sensors can determine whether a driver is paying attention and produce warnings to ensure they are suitably alert.

Sensors can also improve the lives of elderly drivers. As with health monitoring, there’s an opportunity to continuously observe a person performing a task that requires concentration. This can spot slowing reaction times or cognitive impairment, providing vital intelligence for both safety and long-term health.

Choosing the sensor

There is now a wide variety of sensors that can be considered for car cockpits, including multispectral imaging. This allows you to look at a target across a range of wavelengths, from ultraviolet to infrared, and gain more information than visible light alone can provide. It is being investigated for many new applications in other markets, one example being the analysis of skin condition.

But visible light shouldn’t be ignored. Cameras work particularly well with AI and machine learning (a topic we addressed in our previous automotive blog) as there is a large body of pre-existing images to train from. This can provide access to a whole host of valuable measurements. 
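As a rough illustration of why that body of existing images matters, here is a hedged sketch assuming PyTorch and torchvision are available; the two-class attentive/distracted task and the cabin-image dataset are hypothetical stand-ins, not a description of any specific product.

import torch
import torch.nn as nn
from torchvision import models

# Start from a network pre-trained on a large public image dataset.
model = models.resnet18(pretrained=True)

# Freeze the pre-trained feature extractor; only the new head will be trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a small head for our hypothetical task:
# classifying cabin-camera frames as "attentive" vs "distracted".
model.fc = nn.Linear(model.fc.in_features, 2)

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training loop sketch - `cabin_loader` would be a DataLoader over labelled
# cabin images, which we assume exists.
# for images, labels in cabin_loader:
#     optimiser.zero_grad()
#     loss = loss_fn(model(images), labels)
#     loss.backward()
#     optimiser.step()

The point is that only the small new head needs in-cabin training data; the bulk of the network arrives ready-trained on existing imagery.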

It’s also worth considering that a single sensor can serve multiple purposes – for example, a retina scanner installed for security may also provide early detection of diseases, or a fingerprint scanner in the steering wheel could also detect pulse rate.

While the use of retinal scans for security is somewhat futuristic, the use of voice sensing to control features of the car certainly isn’t. More features will fall into this category, and ultra-low power voice recognition technology, enabled by more powerful and flexible IP blocks such as Sapphyre™, will offer transformative opportunities.

Integrating the sensor

One of the common themes in our projects (a benefit of the sheer breadth of clients we serve) has been transferring the success of a technology from one field to another. Something that is established in consumer technology may be new to automotive, providing a solution that can’t be achieved any other way.

We’ve had significant success integrating sensors into wearable devices, which are harsh and complicated environments for technology (the Nixon Mission ultra-rugged smartwatch and the accesso Prism wristband, for example). Some of the cutting-edge techniques used in these projects are applicable to in-car cockpits, where a sensor needs to be hidden in fabric or another substrate (think steering wheels and armrests) and provide robust readings whilst coping with plenty of vibration and movement.

Computing sensor data

Installing the sensor is only part of the picture. You need computing to interpret data, communication to move it around and display or actuation at the end. 

This brings some interesting trade-offs. To reduce the amount of information being sent over the network, you may decide to process data near the sensor; the trade-off in this case is more hardware, which can be expensive and difficult to update. Alternatively, you can centralise computation, either in the electronic control unit (ECU) or in a high-speed networking device.
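Here is a minimal sketch of the "process near the sensor" option, with entirely hypothetical interfaces: read_sample and send_to_ecu stand in for the real sensor driver and vehicle-network stack. The node batches raw samples locally and only uses the network when the summary changes appreciably.

# Hypothetical near-sensor node: reduce bus traffic by sending summaries,
# not raw samples.
THRESHOLD = 2.0  # only report changes larger than this (illustrative units)

def near_sensor_loop(read_sample, send_to_ecu, batch_size=100):
    last_reported = None
    buffer = []
    while True:
        buffer.append(read_sample())
        if len(buffer) < batch_size:
            continue
        summary = sum(buffer) / len(buffer)  # e.g. mean over the batch
        buffer.clear()
        # Only use the network when the summary has moved appreciably.
        if last_reported is None or abs(summary - last_reported) > THRESHOLD:
            send_to_ecu(summary)
            last_reported = summary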
 
When it comes to processing data, the hardware choice depends on the type of processing required. The automotive variants of traditional central processing units (CPUs) continue to get more and more capable, but integrated graphics processing units (GPUs) such as the Tegra in Nvidia’s Pegasus and Xavier platforms enable a whole different set of algorithms to run in real time, especially neural networks, or anything that needs a lot of parallel computations.
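To make the parallelism point concrete, here is a hedged sketch assuming PyTorch and a CUDA-capable device; the tiny network is a placeholder, not a real perception model. A whole batch of camera frames goes through in a single call, which is exactly the shape of workload GPUs accelerate.

import torch

# Pick the GPU if one is present; the same code falls back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder model standing in for any trained network.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 2),
).to(device).eval()

# A batch of 32 camera frames (3-channel, 224x224) processed in one call -
# this is where the GPU's parallel hardware pays off.
frames = torch.randn(32, 3, 224, 224, device=device)
with torch.no_grad():
    scores = model(frames)
print(scores.shape)  # (32, 2): one prediction per frame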

Alternatively, reconfigurable architectures, such as the automotive variant of Xilinx’s Zynq, which combines an ARM core with FPGA fabric, are ideally suited to blending the output of many disparate sensors.
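The blending itself can be sketched independently of the hardware it eventually runs on. Below is a purely illustrative Python version of inverse-variance weighting, fusing two hypothetical heart-rate estimates (one from a cabin camera, one from a steering-wheel contact sensor); on a Zynq-class device the equivalent arithmetic would sit in the FPGA fabric or on the ARM core.

def fuse(estimates):
    """Blend (value, variance) pairs from disparate sensors into one estimate.

    Inverse-variance weighting: sources we trust more (lower variance)
    pull the result harder. Purely illustrative, not a production filter.
    """
    weights = [1.0 / var for _, var in estimates]
    fused_value = sum(w * v for w, (v, _) in zip(weights, estimates)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance

# Hypothetical readings: heart rate from a camera (noisy) and from a
# steering-wheel contact sensor (more reliable).
camera_estimate = (72.0, 25.0)   # bpm, variance
wheel_estimate = (68.0, 4.0)
print(fuse([camera_estimate, wheel_estimate]))  # closer to 68 than 72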

Meet us in Detroit

Cambridge Consultants will be discussing all these topics and more at the forthcoming TU Automotive event (6 – 7 June). 

If your company is expert in cockpit design and you want to innovate with cutting-edge sensors and AI, please drop us an email and we’ll be delighted to meet you at the event.

We are geared up to help our clients develop a truly unique competitive advantage, in a way that is much faster and lower risk than immediately developing the capability in-house. If, in time, our clients build their own team, they already own the IP we’ve developed together, so handover is seamless.

We look forward to seeing you there.
 

Author
Chris Roberts

I'm a Senior Consultant working across a huge range of fields, from Wi-Fi and microprocessors to literal fields for precision agriculture. I enjoy working with innovative technology, solving hard problems and communicating complex technical ideas to anyone who'll listen.