Delivering AI projects for our clients often means taking machine learning out of the data centre and into the real world.
Out here, the constraints of size, cost, power and connectivity present significant challenges in getting Edge AI to work at its best.
This was the backdrop to a recent project we completed for an Asian technology giant.
The project resulted in accelerating our client’s AI algorithms by nearly 10x on candidate edge compute devices. This also gave them a true performance benchmark on which to base their future design decisions.
In my video I share more insights from the project, which leveraged a unique combination of expertise across our AI and semiconductor groups. This expertise has generated billions of dollars of value for our clients by creating and optimising world-leading silicon platforms.
If you’d like to talk more about Edge AI or any other business challenge needing a unique AI solution, please get in touch at firstname.lastname@example.org
You can find more about our work in AI and analytics here.
We’re seeing artificial intelligence systems equalling or bettering human performance on tasks almost every week. To really capture the value of these achievements, though, our clients are working through the challenge of how to get these algorithms out of the data centre and onto edge devices in the hands of their customers.
Our team recently helped the AI development team at an Asian technology giant with the edge component of their intelligent platform. Our work involved accelerating their AI algorithms by nearly 10x on candidate edge compute devices. This gave them a true performance benchmark on which to base their design decisions.
The challenge they were facing was that, whilst AI frameworks like TensorFlow and PyTorch are open and cross-platform, as soon as we deploy models to a specific processor, the tools become much less straightforward and are specific to each device. Consequently, our client’s engineering team had struggled to get some of their models to run fast enough, and others failed to convert at all.
Our team was able to identify parts of each model which were not needed for inference, and operations which were supported in different ways on the different processor platforms. By applying these and a host of other optimisation techniques, we shrank the model by 75% and its memory use by 80%, and made it run 10x faster, whilst losing only 3% in accuracy. Where certain key features of other models were simply not supported by the tools, we could advise on potential workarounds.
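The article doesn’t name the specific optimisation techniques used, but post-training int8 quantisation is one standard way to cut model size by roughly 75% on edge devices, since each int8 weight takes a quarter of the storage of a float32 one. As a minimal, self-contained sketch (using a random tensor as a stand-in for a real model’s weights, not the client’s actual models or tooling):

```python
import numpy as np

# Hypothetical float32 weights standing in for one layer of a real model.
rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256)).astype(np.float32)

def quantize_int8(w):
    """Affine (scale/zero-point) quantisation of a float32 tensor to int8."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0          # map the value range onto 256 levels
    zero_point = round(-w_min / scale) - 128  # int8 value that represents 0.0's offset
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float32 values from the int8 tensor."""
    return (q.astype(np.float32) - zero_point) * scale

q, scale, zp = quantize_int8(weights)

# int8 storage is a quarter of float32 -- a 75% size reduction per tensor.
print(q.nbytes / weights.nbytes)  # 0.25

# The round-trip error is bounded by the quantisation step size.
err = float(np.abs(dequantize(q, scale, zp) - weights).max())
print(err <= scale)  # True
```

In practice this transformation is handled by the device vendor’s toolchain rather than hand-written code, and accuracy is re-measured after quantisation, which is consistent with the small accuracy drop described above.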
As a result of our work, the client had a true measure of the performance of their candidate processor platforms for specific workloads and an understanding of the abilities and constraints of the tools. This gave them confidence to move forwards with their vision of putting AI into the hands of their customers in factories, hospitals and cities globally.
If you’d like to talk more about building high-performing embedded AI systems at the edge or any other business challenge needing a unique AI solution, please get in touch.
You can find more about our work at cambridgeconsultants.com/ai