Advanced audio processing in a fraction of a square millimetre and a few microwatts.

The view from the ‘thing end’ of the IoT makes a lot of edge computing seem huge.

Research firm IDC defines edge computing as a “mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet”.

This might look very much like the edge from the perspective of a cloud provider, but it looks more like a data centre to an IoT device that has a footprint of less than a hundredth of a square foot.

The true edge of the IoT is where the electronics end. For this reason, we need to look towards technologies that transfer the lessons of scalability and operations from cloud deployments to systems that truly operate on the edge.

There are some positive steps: with Intel’s grip on the cloud compute space loosening, more software projects are eyeing up support for ARM architectures.

ARM is the dominant player in IoT processors, with over 100 billion ARM-based chips shipped by its partners, so which software will or won’t run on them is a big deal.

This shift has reached the point where pivotal cloud technologies, such as Kubernetes, now support ARM out of the box.

This significantly reduces the size of the smallest platform able to run these technologies, allowing them to be deployed much closer to the actual edge.
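
As a small illustration of what that out-of-the-box ARM support looks like in practice, the sketch below uses the official Kubernetes Python client to list the arm64 nodes in a cluster via the standard kubernetes.io/arch node label. The cluster itself and how it is accessed are assumptions for illustration, not details from this article.

```python
# Sketch: find the ARM nodes in a Kubernetes cluster using the official
# Python client. Assumes kubectl-style credentials are available locally;
# the cluster and its node mix are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

# kubernetes.io/arch is the standard well-known label for node CPU architecture.
arm_nodes = v1.list_node(label_selector="kubernetes.io/arch=arm64")

for node in arm_nodes.items:
    info = node.status.node_info
    print(node.metadata.name, info.architecture, info.kubelet_version)
```

The same kubernetes.io/arch label can be used as a nodeSelector to steer workloads onto ARM hardware.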

In parallel, machine learning (ML) and artificial intelligence workloads are taking an ever-increasing share of cloud capacity, driving an architectural shift in the cloud.

Rather than running on traditional general-purpose processor architectures, these workloads favour GPUs that are increasingly being customised for ML compute requirements.

NVIDIA is leading this charge, but Google has taken a different approach and designed its own Tensor Processing Unit (TPU) for its ML workloads.

Customised silicon is common in the embedded world of IoT devices, although current workloads there are more about running a trained model (inference) than about training one.
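
To make the inference-versus-training distinction concrete, here is a minimal sketch of running an already-trained model on a constrained device using the TensorFlow Lite runtime. The runtime, the model file name and the input data are all assumptions for illustration; the article does not name a framework.

```python
# Sketch: run an already-trained, quantised model on-device.
# The tflite_runtime package and "model.tflite" file are assumed to exist.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input with the shape and dtype the model expects (a real device
# would feed sensor or camera data here).
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("inference output shape:", scores.shape)
```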

This video shows ARM technology capable of processing HD video in real time, using computer vision to identify objects in view. The technology is lightweight, heavily power-optimised and capable of running on the edge, distilling the video feed down to the few bytes of information needed to build IoT systems.
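
To give a feel for what “a few bytes of information” might mean, the sketch below packs a single detected object into a fixed 10-byte payload suitable for a constrained IoT uplink. The payload layout is hypothetical; it is not the format used by the technology in the video.

```python
# Sketch: distil one object detection into a compact, fixed-size payload.
# Layout (10 bytes, little-endian): class id (1 B), confidence scaled to
# 0-255 (1 B), and four 16-bit bounding-box values in pixels.
import struct

def pack_detection(class_id: int, confidence: float,
                   x: int, y: int, w: int, h: int) -> bytes:
    return struct.pack("<BBHHHH", class_id, int(confidence * 255), x, y, w, h)

payload = pack_detection(class_id=3, confidence=0.92, x=412, y=108, w=64, h=128)
print(len(payload), "bytes:", payload.hex())  # 10 bytes
```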

On another note, our own Ecoutez technology shows how advanced audio processing is possible within a fraction of a square millimetre of silicon and a few microwatts of power.

To put this processing power into perspective, a smart building system we deployed to monitor 200 occupants generated a data stream that could easily be analysed by the Sapphyre core used by Ecoutez.

Devices and processors that sit truly on the IoT edge are already capable of significant analytics, and that capability will keep growing: many embedded chips still use comparatively large transistor sizes of around 40 nm or more, leaving plenty of room to shrink transistors, improve compute performance and enable a greater shift towards training models at the edge rather than simply running them.

Over recent years the focus of edge computing has been dominated by equipment shipped to data centres. The developments above show that this focus must widen to encompass the growing number of innovations that are redefining the category and genuinely pushing computing all the way to the edge. Only by doing so will we foster the innovation and collaboration needed to realise the true potential of the IoT.

Author
Robert Milner
Head of Connectivity