Two decades of ceaseless automotive innovation have vastly increased the electrical complexity of vehicles. Our cars are now more like computers, requiring millions of lines of code to whisk us from A to B. The disruption has put incumbent original equipment manufacturers (OEMs) – some of which began making mechanical machines in the early 20th century – on the back foot. They are having to pivot urgently to address industry advances and respond to the emergence of eager new competitors.
AI in the driving seat
These entrants into the changing market have a distinct advantage, especially when it comes to implementing new, centralised in-vehicle architectures that optimise sensor processing for advanced driver-assistance systems (ADAS) and autonomous driving (AD). Crucially, emergent companies can design cars from scratch to meet the requirements of high-level autonomous driving. With their legacy architectures and supply chain relationships, the incumbent OEMs face an uphill road – but I believe advances in processing at the vehicle edge could be the key to helping them keep pace and compete.
A high-profile new entrant is of course Tesla, founded in 2003 and now one of the leading players in both electric vehicle and autonomous driving technology. Last year Tesla introduced its full self-driving (FSD) chip – a custom, centralised ASIC for autonomous driving. It contains a range of custom hardware accelerators, including two neural processors running at 2 GHz, each delivering around 36 tera operations per second (TOPS). Without the constraints of legacy vehicle architectures to contend with, Tesla has been able to create a single centralised chip (alongside a second chip for redundancy) that can efficiently execute AI inference algorithms on inputs from a range of different sensors.
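As a rough sanity check on that headline figure, a TOPS rating is typically derived by multiplying the number of multiply-accumulate (MAC) units by two operations per MAC per clock cycle and by the clock frequency. The sketch below assumes the widely reported 96 × 96 MAC array in each of the FSD chip's neural processors; it is an illustrative back-of-envelope calculation, not Tesla's own specification sheet.

    # Back-of-envelope TOPS estimate for one FSD neural processor.
    # Assumes the widely reported 96 x 96 MAC array; each MAC performs
    # a multiply and an accumulate (2 operations) every clock cycle.
    mac_units = 96 * 96        # 9,216 MACs per neural processor (reported)
    ops_per_mac = 2            # multiply + accumulate
    clock_hz = 2e9             # 2 GHz

    tops_per_npu = mac_units * ops_per_mac * clock_hz / 1e12
    print(f"{tops_per_npu:.1f} TOPS per neural processor")  # ~36.9
    print(f"{2 * tops_per_npu:.1f} TOPS across both")       # ~73.7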
This greenfield approach is far less accessible to incumbent OEMs. Companies such as BMW and Toyota have been gradually adding complexity to their vehicles over a much longer timeframe, and they have legacy architecture to consider whenever they change vehicle functionality. Centralising compute resources is a significant challenge when vehicle architectures have become increasingly distributed and complex over the past 20 years.
This disadvantage is compounded by the supply chain relationships they need to maintain to build their vehicles. These are the reasons why centralising compute onto a single chip for ADAS/AD processing will be so challenging, and why incumbents may need to look elsewhere when forging a route to increasingly advanced functionality. With centralisation posing such a significant obstacle, can edge processing offer a competitive route forwards for the incumbents?
Intelligent edge processing
Vehicle edge computing involves co-locating dedicated compute power close to a sensor in order to optimise processing for that sensor. Traditional automotive semiconductor companies have begun to target the vehicle edge by offering processing solutions for specific applications – a front-view camera, for example – without offering a fully centralised solution. Market leader NXP has released its eIQ (edge intelligence environment) software, allowing intelligent edge processing to be implemented on NXP's MCUs for target automotive applications.
Infineon has also introduced AI edge processing for automotive through its AURIX chips. Sony is indirectly addressing the space in a significant way, having recently released what it claims is the 'world's first intelligent vision sensor with AI processing functionality'. This is a stacked configuration containing an image signal processor and an AI processor in a single module. Although Sony hasn't targeted it at the automotive market specifically, the sensor is very likely to feature on the company's automotive roadmap soon, given Sony's strong position in supplying image sensors to the automotive market and the fact that images are a key sensor input for ADAS and autonomous systems.
We at Cambridge Consultants are also addressing the challenge of processing at the edge as part of our work to accelerate the next generation of mobility technologies. Working alongside ARM, we have created the capability to develop low-power AI edge inference processors, capable of processing inputs from the range of sensors that feature in a typical autonomous driving setup, including radars, ultrasonic sensors and cameras.
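To make 'inference at the edge' concrete, the minimal sketch below runs a quantised detection model over a single camera frame using TensorFlow Lite. The model file name and the zeroed stand-in frame are assumptions for illustration – this shows the general pattern of edge inference (compact model in, object-level results out), not our processors or any particular vendor's toolchain.

    import numpy as np
    import tensorflow as tf

    # Load a quantised object-detection model (hypothetical file name).
    interpreter = tf.lite.Interpreter(model_path="detector_int8.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # One camera frame, shaped to the model's expected input.
    # A real system would feed pixels straight from the image sensor.
    frame = np.zeros(inp["shape"], dtype=inp["dtype"])
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()

    # Only this compact, object-level output needs to leave the sensor module.
    detections = interpreter.get_tensor(out["index"])
    print(detections.shape)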
As this technology improves and the range of AI processors that can be deployed at the vehicle edge broadens, there will be a strong opportunity for incumbent OEMs to maintain their distributed architectures while still achieving the high levels of autonomy that will be required to compete in the automotive industry of the future. Maintaining a modular architecture offers the possibility of cost-effective implementation across vehicles at a range of different price points, and it has the added safety benefit of removing a single point of failure from the architecture. Additionally, incumbents will only need to broaden their supplier base to companies that can provide AI edge processing, rather than having to completely re-architect the internal communication networks of their vehicles.
The challenge facing OEMs will be producing autonomous driving algorithms that can work with pre-processed, rather than raw, sensor outputs. Processing at the edge means that raw sensor fusion will not be possible: the sensor output will already have been processed, and some of the data discarded, before being fused with other sensor outputs. Cambridge Consultants, amongst others, is working on innovations in AI algorithms and object-level sensor fusion to overcome this challenge, as sketched below. This is allowing incumbents to keep pace with new entrants, which can take a significantly more flexible approach to selecting an architecture for autonomous driving.
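As an illustration of what object-level fusion can look like, here is a minimal sketch in which each edge processor reports a short list of detected objects and the fusion step associates detections across sensors by position before combining them. The names, gating distance and confidence-weighted merge are all illustrative assumptions, not our production algorithms.

    from dataclasses import dataclass
    import math

    @dataclass
    class Detection:
        x: float            # position in metres, vehicle frame
        y: float
        confidence: float
        sensor: str

    def fuse(camera, radar, gate=2.0):
        """Greedy nearest-neighbour association, confidence-weighted merge."""
        fused, used = [], set()
        for c in camera:
            best, best_d = None, gate
            for i, r in enumerate(radar):
                if i not in used:
                    d = math.hypot(c.x - r.x, c.y - r.y)
                    if d < best_d:
                        best, best_d = i, d
            if best is None:
                fused.append(c)                  # camera-only object
                continue
            r = radar[best]
            used.add(best)
            w = c.confidence + r.confidence      # weight by sensor confidence
            fused.append(Detection(
                x=(c.x * c.confidence + r.x * r.confidence) / w,
                y=(c.y * c.confidence + r.y * r.confidence) / w,
                confidence=min(1.0, w),          # corroborated by two sensors
                sensor="fused"))
        # Radar-only objects pass through unfused.
        fused.extend(r for i, r in enumerate(radar) if i not in used)
        return fused

    cam = [Detection(10.2, -0.5, 0.9, "camera")]
    rad = [Detection(10.0, -0.4, 0.7, "radar")]
    print(fuse(cam, rad))

The architectural point is that only these compact object lists, rather than raw sensor samples, need to cross the vehicle network – which is what keeps a distributed architecture feasible in terms of bandwidth.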
Levels of ADAS functionality
The use of a distributed architecture with edge processing also presents the option of providing different levels of ADAS functionality across a range of cars. BMW has been vocal about its autonomous driving stack and how it intends to use a scalable platform to address all five levels of autonomous driving. It envisions adding processing units to the vehicle as increased functionality is required, allowing the platform to scale.
Conversely, Tesla offers only two ADAS options in its cars: the Autopilot feature included with every car purchase, and FSD. FSD currently costs $7,000 on top of the cost of the vehicle, a price expected to rise to $8,000 in the near future. This pricing makes sense for a premium car maker whose standard models cost $35,000–$90,000. However, for incumbent OEMs with ranges of cars at lower price points, a scalable architecture across the different autonomy levels is likely to be a much better fit.
With incumbents facing such tough challenges in the constantly evolving automotive industry, we are already seeing some taking a different approach to the new entrants in an attempt to maintain their industry-leading position. I think it is likely that others will opt for the edge to find a way forward.
We have decades of experience in helping our clients achieve the most complex breakthrough innovations at speed – channelling deep expertise in sensing, wireless connectivity, AI and edge computing. We would be delighted to discuss how that expertise can help you unlock the huge potential of autonomy in your business. You can discover more about our work in mobility technologies here.