Out of the Clouds and onto the Edge
TinyML is still in its early stages, but it’s maturing rapidly. It’s already being used for tasks as diverse as machine maintenance, monitoring power lines, precision agriculture, and hyper-local weather predictions. TinyML will likely appear in Apple devices and other smartphones and wearables within the next few years. “The alternative—uploading device sensor data to be processed by AI running in a cloud data center—introduces latencies and may, as a result, be a non-starter for performance-sensitive AI apps at the edge,” Kobielus says.
In addition, several toolchains that optimize AI models for TinyML deployment have emerged, including AWS NNVM Compiler, Intel nGraph, and NVIDIA TensorRT 3. “Open source AI-model compilers ensure that the toolchain automatically optimizes AI models for fast, efficient edge execution without compromising model accuracy,” Kobielus explains.
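One optimization these toolchains commonly apply is post-training quantization: mapping 32-bit floating-point weights to 8-bit integers so a model fits in a microcontroller's memory and runs on integer hardware. The sketch below shows the underlying arithmetic in plain NumPy; it is an illustration of the technique, not the API of any of the compilers named above.

```python
import numpy as np

def quantize_int8(weights):
    """Affine quantization: map float32 weights onto the int8 range [-128, 127]."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0            # one int8 step in real units
    zero_point = int(round(-w_min / scale)) - 128
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float32 values from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 4)).astype(np.float32)
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)
# int8 storage is 4x smaller than float32; the round-trip error is
# bounded by roughly one quantization step (`scale`).
max_err = float(np.abs(weights - restored).max())
```

Production compilers also fuse operators, pick per-channel scales, and calibrate on sample data, but the memory-for-accuracy trade shown here is the core idea.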
For now, systems integrators can benefit by understanding and communicating the value of TinyML to clients. They can also help clients tap TinyML tools and technologies. These include neural-net compression, lightweight encodings, and structural pruning methods used in 5G devices. Learning materials and resources are available from the TinyML Foundation and through an ML learning program offered by Harvard University.
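To make "structural pruning" concrete: rather than zeroing individual weights, structural pruning removes whole neurons (or channels), so the layer physically shrinks and the savings need no sparse-matrix support. The following is a minimal NumPy sketch under simple assumptions (a dense layer stored as a 2-D weight matrix, neurons ranked by L2 norm); the function name is illustrative, not from any particular library.

```python
import numpy as np

def prune_neurons(weights, keep_ratio=0.5):
    """Structural pruning sketch: keep only the output neurons (rows)
    with the largest L2 norms, returning a physically smaller matrix."""
    n_keep = max(1, int(round(weights.shape[0] * keep_ratio)))
    norms = np.linalg.norm(weights, axis=1)        # importance score per neuron
    keep = np.sort(np.argsort(norms)[-n_keep:])    # strongest rows, original order
    return weights[keep], keep

rng = np.random.default_rng(42)
w = rng.standard_normal((16, 8)).astype(np.float32)
pruned, kept_rows = prune_neurons(w, keep_ratio=0.25)
# pruned.shape == (4, 8): 75% of the layer's weights are gone entirely.
```

In practice a pruned network is fine-tuned afterward to recover accuracy, and the matching input dimensions of the next layer are trimmed to stay consistent.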
“Systems integrators should begin familiarizing themselves with TinyML and begin to look for opportunities to put it to use,” Shelby says.