Cloud-first has become the dominant mantra in today’s best-performing digital organisations: data is the new oil; collect it into a massive data lake and unleash the might of big data analytics and AI to build insights that help you drive new revenues, cut costs, or monetise that data through a partner. This seems to be the prevailing data strategy for digitally aware executives building intelligent new products and services.
However, this blunt one-size-fits-all approach might just be reaching its peak. I contend that the days of sucking up any and all data that you can get your hands on are over. Cloud-first will be replaced with edge-first thinking for AI applications.
For increasing numbers of applications, the optimal user experience and lowest cost of service will come from an architecture that conducts at least some portion of sensor fusion, data processing, perception and decision making in the device, or at the very least at the network edge. Centralised, or cloud, computing will not disappear in this model. But cloud will cease to be dominant in the way it is now.
I don’t think I am alone in thinking this will be an important trend in the next decade. It can be observed in real-life examples, and it’s an assumption that is recognised by the European Commission’s (EC) new data strategy.
Take, for example, the wake-word for your smart-speaker or smartphone voice assistant – ‘Alexa’, ‘Siri’ or ‘Cortana’. That functionality is enabled by machine learning models stored and run locally on your device. That is far more responsive, more secure and cheaper than sending a continuous audio stream to the cloud just to recognise when to turn the fully featured voice assistant on.
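The gating pattern described above can be sketched in a few lines. This is a minimal illustration only: the detector here is a trivial keyword match on pre-transcribed frames standing in for the small on-device neural model a real product would use, and the wake word and function names are hypothetical.

```python
# Sketch of local wake-word gating: a cheap detector runs continuously
# on-device, and audio is only passed upstream once the wake word fires.
from typing import Iterable, List

WAKE_WORD = "alexa"  # hypothetical wake word, for illustration only

def local_wake_word_detector(frame: str) -> bool:
    """Cheap check that runs continuously, entirely on the device."""
    return WAKE_WORD in frame.lower()

def process_audio(frames: Iterable[str]) -> List[str]:
    """Return only the frames that would be streamed to the cloud."""
    uploaded = []
    awake = False
    for frame in frames:
        if not awake:
            awake = local_wake_word_detector(frame)  # stays on-device
        else:
            uploaded.append(frame)  # the full cloud assistant handles these
    return uploaded

frames = ["background chatter", "Alexa", "what's the weather", "in Cambridge"]
print(process_audio(frames))  # only the frames after the wake word
```

The point of the pattern is visible in the output: the continuous background audio never leaves the device; only the short post-wake-word request incurs bandwidth and cloud-compute cost.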
Even the EC’s proposed data strategy leans on Gartner’s prediction that 80% of data will be processed in edge devices by 2025, up from 20% today. The total growth in data produced in the world means that the volume of data processed in data centres will still grow in this time period, but not anywhere near the growth rate of edge data processing.
Figure 1: Cloud is here to stay, but edge will outstrip its share of processing. Source: Based on EC and IDC, via EC
What does AI at the edge offer you?
The benefits can be summarised into three broad areas: responsiveness, security and cost savings.
Communications systems these days are fast. Nonetheless, every extra step that data needs to be transferred adds delay. How comfortable would you feel if the pedestrian detection algorithm for the automatic brakes in your car was executed in a data centre on the other side of the planet? Where latency is important, processing needs to be onboard. Clearly, that is important for safety critical use cases. It is also vital for ensuring high quality consumer experiences. VR systems for example are highly dependent on minimising latency to a level which enables a realistic experience.
Data governance has, rightly, become a hot topic as increasingly personal data is collected by the variety of organisations that we all interact with. Often there are worthwhile collective benefits to sharing individual data – in medical research, for example. However, once that data leaves your smartwatch or medical centre, what happens to it? The more networks that data traverses, and the more organisations that are involved, the larger the attack surface becomes. Federated approaches to learning, in which an individual’s raw data is never shared beyond the edge device and only collectively valuable model updates are transmitted, are part of the answer to these challenges.
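The federated idea can be made concrete with a toy example. The sketch below is a deliberately simplified federated averaging step under stated assumptions: each "device" fits a single weight for a one-parameter model y = w·x on its own private data, and only those weights (never the raw samples) reach the "server", which combines them weighted by local dataset size. Function names and data are illustrative, not a production federated learning stack.

```python
# Toy federated averaging: raw data stays on each device; only the
# locally computed model parameter is shared and aggregated.
from typing import List, Tuple

def local_update(data: List[Tuple[float, float]]) -> float:
    """On-device step: closed-form least-squares weight for y = w*x."""
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, _ in data)
    return num / den

def federated_average(device_datasets: List[List[Tuple[float, float]]]) -> float:
    """Server step: average the weights, weighted by local dataset size.
    Note: only the weights cross the network, never the (x, y) samples."""
    total = sum(len(d) for d in device_datasets)
    return sum(len(d) * local_update(d) for d in device_datasets) / total

# Three devices, each holding private (x, y) samples drawn near y = 2x.
devices = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
    [(1.0, 2.1), (4.0, 7.9)],
]
print(round(federated_average(devices), 3))  # close to the true slope of 2
```

Real systems (and the frameworks mentioned later in this article) add secure aggregation, differential privacy and many engineering refinements on top, but the privacy property is the same: the server learns a useful global model without ever seeing an individual’s raw data.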
This type of model puts businesses behind personal devices (wearables, smartphones) and devices in the home (smart TVs, smart speakers, routers and set-top boxes, amongst many others) in a privileged position: they have a great opportunity to work with detailed user data, provided they can maintain a trusting relationship with their users.
Intelligent digital services, broadly speaking, incur ongoing costs from two sources – communications and compute.
Shifting AI processing to the edge device reduces communications bandwidth requirements and their costs. This is perhaps less important when operating a device which piggybacks on a user’s smartphone connection, but there are costs nonetheless – never mind that piggybacking on a user’s smartphone relies on their active cooperation, for example with their choice of OS upgrade and device settings.
It is true that cloud compute capacity is priced competitively. However, there is also a lot of potentially under-utilised capacity available in user devices – a resource that was tapped as long ago as 1999 by the SETI@home project.
Relatively few applications will be able to rely on excess smartphone capacity. Indeed, for many edge applications the cost of extra compute and more complex device management will be balanced against reduced communications and data storage costs.
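A back-of-envelope calculation shows how large the communications side of that trade-off can be. All figures below are illustrative assumptions rather than vendor pricing: continuously streaming raw 16 kHz, 16-bit mono audio to the cloud, versus sending one short inference result per minute from an edge device.

```python
# Illustrative bandwidth comparison: cloud-first raw streaming vs
# edge-first inference results. Figures are assumptions, not real tariffs.
BYTES_PER_SEC_RAW = 16_000 * 2      # 16 kHz samples, 2 bytes each
SECONDS_PER_DAY = 24 * 60 * 60
INFERENCE_BYTES = 200               # assumed size of one result message
INFERENCES_PER_DAY = 24 * 60        # assume one result per minute

raw_mb_per_day = BYTES_PER_SEC_RAW * SECONDS_PER_DAY / 1e6
edge_mb_per_day = INFERENCE_BYTES * INFERENCES_PER_DAY / 1e6

print(f"cloud-first: {raw_mb_per_day:.1f} MB/day per device")
print(f"edge-first:  {edge_mb_per_day:.3f} MB/day per device")
print(f"reduction:   {raw_mb_per_day / edge_mb_per_day:.0f}x")
```

Under these assumptions the raw stream is roughly 2.8 GB per device per day, against well under 1 MB for inference results – a reduction of several thousandfold. Real deployments sit somewhere between the extremes, which is exactly the balancing act described above.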
There is a growing imperative to act sooner rather than later. Rapid service innovation is becoming essential in securing and maintaining competitive advantage and rich customer relationships. This means that tomorrow’s services need to learn in real time.
As my colleague Martin Cookson points out: to engage in that rapid service innovation “we need to go beyond delivering a smart interaction and be smart enough to constantly learn from interactions.” This means getting both the learning and inference elements of AI as close to the user as possible. The hardware and software advances we are currently experiencing are revolutionising the trade-offs that we may have made previously.
Specialised, low power silicon
With the entire silicon industry rushing to address the opportunity at the intersection between AI and the Internet of Things, products and IP are increasingly being made available.
- Smartphones from all the main vendors commonly include dedicated ‘neural’ or ‘bionic’ chips
- Low-ish power (<10 W) off-the-shelf products are available as modules from the likes of Nvidia (Jetson Nano) and Google (Coral.ai) for under $100
- Start-ups all the way to established industry players offer everything from IP blocks for low power neural accelerators, to chips, to entire systems like the Jetson Nano above
- Emerging silicon architectures like the chiplet approach offer a lower barrier to entry for ASICs designed with a specific low-power task in mind
Custom development work can help go beyond the limits of these off-the-shelf solutions. Our Sapphyre ecosystem is an example of a tool that enables us to rapidly design ASICs optimised for specific use cases. With it we were able to design a Voice Activity Detector that used a hundredth of the power of a modern hearing aid.
Machine Learning tools and frameworks
Alongside the increasing availability of specialised silicon, software tools and frameworks suited to AI at the edge are also developing quickly.
Google’s TensorFlow Lite for deep learning on mobile platforms has been available since 2017, and both Facebook and Google have supported federated learning capabilities since last year. Having these frameworks in place means developers can move faster and learn from collective industry experience rather than breaking new ground for every new development.
What’s your edge AI strategy?
Cloud won’t ever go away, but for a growing number of use cases intelligence deployed at the edge will be a crucial part of the solution. If you are considering how to maximise the potential of AI for your business, edge belongs in your strategy.
Cambridge Consultants has decades of experience in helping customers conceive, design and develop high performance products and services. We would be delighted to discuss how AI at the edge could help you discover the next step-change in your industry.