TinyML Enables Smallest Endpoint AI Devices

The latest trend in machine learning has developers thinking very, very small – and that’s huge.

TinyML is proof that good things come in small packages. Instead of running complex machine learning (ML) models in the cloud on large, power-hungry computers, this new approach involves running optimized pattern-recognition models on endpoint devices: microcontrollers no bigger than a grain of rice that consume only milliwatts of power.

This emerging segment, TinyML, supported by Arm and industry leaders including Google and Qualcomm, has the potential to transform the way we deal with Internet of Things (IoT) data, where billions of tiny devices are already being used to provide greater insight and efficiency in sectors including consumer, medical, automotive and industrial.

Why target microcontrollers with TinyML?

Microcontrollers such as the Arm Cortex-M family are an ideal platform for ML because they’re already used everywhere. They perform real-time calculations quickly and efficiently, so they’re reliable and responsive, and because they use very little power, they can be deployed in places where replacing the battery is difficult or inconvenient. Perhaps even more importantly, they’re cheap enough to be used just about anywhere. The market analyst IDC reports that 28.1 billion microcontrollers were sold in 2018, and forecasts that annual shipment volume will grow to 38.2 billion by 2023.

ML on microcontrollers gives us new techniques for analyzing and making sense of the massive amount of data generated by the IoT. In particular, deep learning methods can be used to process information and make sense of the data from sensors that do things like detect sounds, capture images, and track motion.

Advanced pattern recognition in a very compact format

Looking at the math involved in machine learning, data scientists found they could reduce complexity through techniques such as quantization, which replaces 32-bit floating-point calculations with simple 8-bit integer operations. The resulting models run much more efficiently and require far less processing power and memory.
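To make that concrete, here is a minimal, self-contained sketch of the affine (scale and zero-point) 8-bit quantization scheme used by frameworks such as TensorFlow Lite. The helper names and the [-1.0, 1.0] value range are illustrative assumptions, not taken from any particular library:

```cpp
#include <algorithm>  // std::clamp (C++17)
#include <cmath>      // std::round
#include <cstdint>
#include <cstdio>

// Affine quantization: real_value ~= scale * (q - zero_point).
// A 32-bit float is mapped onto a single signed byte.
int8_t Quantize(float real, float scale, int8_t zero_point) {
  int q = static_cast<int>(std::round(real / scale)) + zero_point;
  return static_cast<int8_t>(std::clamp(q, -128, 127));
}

float Dequantize(int8_t q, float scale, int8_t zero_point) {
  return scale * static_cast<float>(q - zero_point);
}

int main() {
  // Illustrative choice: map the range [-1.0, 1.0] onto int8,
  // so scale = 2.0 / 255 with a zero point of 0.
  const float scale = 2.0f / 255.0f;
  const int8_t zero_point = 0;

  float weight = 0.37f;
  int8_t q = Quantize(weight, scale, zero_point);
  std::printf("%f -> %d -> %f\n", weight, q,
              Dequantize(q, scale, zero_point));
  return 0;
}
```

Storing weights as 8-bit integers instead of 32-bit floats cuts the memory footprint by roughly four times, and integer arithmetic maps directly onto instructions a Cortex-M executes efficiently.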

TinyML technology is evolving rapidly thanks to new technology and an engaged base of committed developers. Only a few years ago, we were celebrating our ability to run a speech-recognition model, capable of waking the system when it detects certain words, on a constrained Arm Cortex-M3 microcontroller using just 15 kilobytes (KB) of code and 22KB of data.

Since then, Arm has launched a new ML processor, called Ethos-U55, a microNPU specifically designed to accelerate ML inference in area-constrained embedded and IoT devices. The Ethos-U55, combined with the AI-capable Cortex-M55 processor, will provide a significant uplift in ML performance and energy efficiency over the already impressive examples we are seeing today. We’re expecting silicon in the next 12 months, so watch this space!

TinyML takes endpoint devices to the next level

The potential use cases of TinyML are almost unlimited. Developers are already working with TinyML to explore all sorts of new ideas: responsive traffic lights that change signaling to reduce congestion, industrial machines that can predict when they’ll need service, sensors that can monitor crops for the presence of damaging insects, in-store shelves that can request restocking when inventory gets low, healthcare monitors that track vitals while maintaining privacy. The list goes on.

TinyML can make endpoint devices more consistent and reliable, since there’s less need to rely on busy, crowded internet connections to send data back and forth to the cloud. Reducing or even eliminating interactions with the cloud brings major benefits, including lower energy use, significantly reduced latency in processing data and improved security, since data that doesn’t travel is far less exposed to attack.

It’s worth noting that these TinyML models, which perform inference on the microcontroller, aren’t intended to replace the more sophisticated inference that currently happens in the cloud. What they do instead is bring specific capabilities down from the cloud to the endpoint device. That way, developers can save cloud interactions for if and when they’re needed.

TinyML also gives developers a powerful new set of tools for solving problems. ML makes it possible to detect complex events that rule-based systems struggle to identify, so endpoint AI devices can start contributing in new ways. Also, since ML makes it possible to control devices with words or gestures, instead of buttons or a smartphone, endpoint devices can be built to be more rugged and deployed in more challenging operating environments.

TinyML gaining momentum with an expanding ecosystem

Industry players have been quick to recognize the value of TinyML and have moved rapidly to create a supportive ecosystem. Developers at every level, from enthusiastic hobbyists to experienced professionals, can now access tools that make it easy to get started. All that’s needed is a laptop, an open-source software library and a USB cable to connect the laptop to one of several inexpensive development boards priced as low as 15 dollars. 

Arm is a strong proponent of TinyML because our microcontroller architectures are so central to the IoT, and because we see the potential of on-device inference. Arm’s collaboration with Google is making it even easier for developers to deploy endpoint machine learning in power-conscious environments. The combination of Arm CMSIS-NN libraries with Google’s TensorFlow Lite Micro framework allows data scientists and software developers to take advantage of Arm’s hardware optimizations without needing to become experts in embedded programming. On top of this, Arm is investing heavily in its optimized tooling for Cortex-M hardware, Keil MDK, and its IoT operating system, Mbed OS, to help developers get from prototype to production quickly when deploying ML applications.
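As a rough illustration of how approachable this has become, here is a sketch of what running inference with TensorFlow Lite Micro looks like in C++. Exact header paths and constructor signatures have shifted between releases, and g_model_data, the arena size and the single-float input/output shape are placeholders you would adapt to your own model:

```cpp
#include <cstdint>

// TensorFlow Lite Micro headers (paths as of the 2020-era releases).
#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Placeholder: a trained model converted with the TensorFlow Lite
// converter and embedded as a C array (e.g. via xxd -i model.tflite).
extern const unsigned char g_model_data[];

// Working memory the interpreter carves tensors out of; size it to
// your model (10 KB here is purely illustrative).
constexpr int kArenaSize = 10 * 1024;
static uint8_t tensor_arena[kArenaSize];

float RunInference(float input_value) {
  static tflite::MicroErrorReporter error_reporter;
  // AllOpsResolver registers every built-in kernel; a production build
  // would register only the ops the model actually uses to save flash.
  static tflite::AllOpsResolver resolver;

  const tflite::Model* model = tflite::GetModel(g_model_data);
  static tflite::MicroInterpreter interpreter(
      model, resolver, tensor_arena, kArenaSize, &error_reporter);
  interpreter.AllocateTensors();  // plan tensor memory inside the arena

  // Copy the input in, run the model, read the output back.
  interpreter.input(0)->data.f[0] = input_value;
  interpreter.Invoke();
  return interpreter.output(0)->data.f[0];
}
```

When TensorFlow Lite Micro is built with the CMSIS-NN kernels enabled, application code like this transparently picks up Arm’s optimized implementations on Cortex-M hardware.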

TinyML would not be possible without a number of early influencers: Pete Warden, a “founding father” of TinyML and a technical lead of TensorFlow Lite Micro at Google; Arm Innovator Kwabena Agyeman, who developed OpenMV, a project dedicated to low-cost, extensible, Python-powered machine-vision modules that support machine learning algorithms; and Arm Innovator Daniel Situnayake, a founding TinyML engineer and developer at Edge Impulse, a company that offers a full TinyML pipeline covering data collection, model training and model optimization. Arm partners such as Cartesiam.ai, whose NanoEdge AI tool creates software models on the endpoint based on the sensor behavior observed in real conditions, have also been pushing the possibilities of TinyML to another level.

Arm is also a partner of the TinyML Foundation, an open community that coordinates meet-ups to help people connect, share ideas, and get involved. We’ve just established the UK TinyML meet-up, a virtual session held every other Tuesday at 4pm BST that anyone can attend. Simply register here.

Learn more about TinyML at Arm DevSummit

Our upcoming virtual conference, Arm DevSummit (October 6 to 8), is filled with hands-on TinyML sessions, including talks, workshops and Q&A sessions with Pete Warden from Google, Massimo Banzi from Arduino, and more, covering image classification, embedded ML libraries and predictive maintenance solutions based on TinyML.
