AS THE INTERNET OF THINGS and artificial intelligence (AI) advance and become inextricably linked, there’s a growing need to handle computation on the devices themselves rather than in the cloud or in data centers.
Machine learning typically relies on computing power in the cloud to tackle complex math problems. However, IoT solutions often need embedded intelligence on drones, cameras, vehicles, industrial machines, and sensors. “The traditional approach doesn’t work well on the edge,” says Zach Shelby, CEO of Edge Impulse, a machine learning development platform provider.
Enter Tiny Machine Learning (TinyML), which refers to an emerging group of open source techniques and approaches that support on-device AI workloads “in resource-constrained, ultra-low-power edge devices,” says James Kobielus, principal analyst at Franconia Research.
TinyML is a significant step forward in the evolution of machine learning, Kobielus notes. “Many device-level AI operations—such as calculations for training and inferencing—must occur serially, thereby placing a priority on fast local execution,” he explains. Within such scenarios, constant roundtrips to the cloud undermine performance—or render a process useless.
5G and inexpensive microcontrollers such as the Raspberry Pi RP2040 now provide a computing and communications framework that supports TinyML. “There’s been a perfect storm of better and more efficient machine learning, better batteries, and more efficient embedded compute capabilities. We’ve arrived at machine learning on a milliwatt,” says Shelby. He calls these advances “a complete game changer.”
Devices Get Smarter
TinyML lies at the intersection of ubiquitous edge processing, powerful analytics, and AI. It addresses two primary challenges. First, it consumes fewer computational resources. Second, it reduces latency. “TinyML can boost the speed and efficiency at which AI models can be run by several orders of magnitude,” Kobielus notes. “[It enables] fast AI models to run on edge devices for hours, leveraging only a single CPU core without appreciably draining device batteries.”
The framework changes the equation in several ways. For example, TinyML can automate the tuning of neural-network architectures, hyperparameters, and other model features to fit the hardware constraints of target platforms.
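The core idea of hardware-aware tuning can be illustrated with a minimal sketch: search over candidate architectures and discard any that exceed the target device's memory budget. Everything here is an illustrative assumption, not the behavior of any particular TinyML tool — the parameter budget, the layer widths, and the stand-in scoring rule are all hypothetical.

```python
import random

# Hypothetical budget for a small microcontroller: the number of model
# parameters that fit in RAM at 8 bits per weight (assumed figure).
PARAM_BUDGET = 50_000

def param_count(hidden_layers, n_inputs=64, n_outputs=4):
    """Parameter count of a dense network with the given hidden widths
    (weights plus biases for each layer-to-layer connection)."""
    sizes = [n_inputs] + hidden_layers + [n_outputs]
    return sum(a * b + b for a, b in zip(sizes, sizes[1:]))

def search(trials=200, seed=0):
    """Random architecture search constrained by the hardware budget."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        layers = [rng.choice([8, 16, 32, 64])
                  for _ in range(rng.randint(1, 3))]
        cost = param_count(layers)
        if cost > PARAM_BUDGET:
            continue  # too big for the target device; discard
        # Stand-in score: a real tool would use validation accuracy here.
        score = cost  # prefer the largest model that still fits
        if best is None or score > best[0]:
            best = (score, layers)
    return best

print(search())
```

A production tool would replace the stand-in score with measured accuracy and latency on the target hardware, but the shape of the loop — propose, check constraints, keep the best feasible candidate — is the same.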
It also allows device-resident AI models to perform search queries, counts, and other operations on efficiently compressed and cached local sensor data. And it can compress local AI models by pruning the less important neural-network connections, reweighting the remaining connections, and applying a more efficient encoding, Kobielus explains.
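Two of the compression techniques Kobielus mentions — pruning low-magnitude connections and using a cheaper encoding — can be sketched in a few lines of NumPy. This is a generic illustration of magnitude pruning and linear 8-bit quantization, not the specific method of any TinyML toolchain; the sparsity level and weight shapes are arbitrary assumptions.

```python
import numpy as np

def prune(weights, sparsity=0.8):
    """Magnitude pruning: zero out the smallest-magnitude connections."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize(weights):
    """Linear 8-bit encoding: int8 values plus a single float scale,
    roughly a 4x storage reduction versus float32."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32)).astype(np.float32)  # toy weight matrix

w_pruned = prune(w)               # roughly 80% of connections removed
q, scale = quantize(w_pruned)     # compact int8 representation
w_restored = dequantize(q, scale)
print(np.mean(w_pruned == 0))     # fraction of zeroed weights
```

Real deployments typically fine-tune (reweight) the surviving connections after pruning to recover accuracy, which this sketch omits.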
Out of the Clouds and onto the Edge
TinyML is still in its early stages, but it’s maturing rapidly. It’s already being used for tasks as diverse as machine maintenance, monitoring power lines, precision agriculture, and hyper-local weather predictions. TinyML will likely appear in Apple devices and other smartphones and wearables within the next few years. “The alternative—uploading device sensor data to be processed by AI running in a cloud data center—introduces latencies and may, as a result, be a non-starter for performance-sensitive AI apps at the edge,” Kobielus says.
In addition, several toolchains that optimize AI models through TinyML have emerged, including AWS NNVM Compiler, Intel nGraph, and NVIDIA TensorRT 3. “Open source AI-model compilers ensure that the toolchain automatically optimizes AI models for fast, efficient edge execution without compromising model accuracy,” Kobielus explains.
For now, systems integrators can benefit by understanding and communicating the value of TinyML to clients. They can also help clients tap TinyML tools and technologies, including the neural-net compression, lightweight encodings, and structural pruning methods used in 5G devices. Learning materials and resources are available from the TinyML Foundation and a machine learning program offered by Harvard University.
“Systems integrators should begin familiarizing themselves with TinyML and begin to look for opportunities to put it to use,” Shelby says.