
How to implement AI on wearable devices

In ipXchange’s third and final piece of sponsored coverage of Ambiq’s attendance at CES 2024, there’s been a little bit of a shake-up. 

Though ipXchange filmed an in-person interview with Carlos Morales, Ambiq’s Vice-President of Artificial Intelligence, while in Las Vegas, we thought that we could cover even more ground after the show for such a deep topic. In this ipXperience interview, Eamon chats to Carlos about the latest AI developments in Ambiq’s product range and how to put these to use in the next generation of wearable devices. 

The following article was produced to accompany the original discussion, so we’ve shared this below, though the recorded conversation might go in some different directions. 

Keep designing! 

How a low-power breakthrough for MCUs enables more sophisticated AI in endpoint devices 

With Carlos Morales, Vice-President of Artificial Intelligence at Ambiq

What is the limit of artificial intelligence (AI) in devices powered by a tiny battery? And what factor determines this limit?  

These questions are timely for embedded system developers because of the huge scope today to differentiate designs for endpoint devices by enabling them to do real work with AI.  

In fact, the answer to the second question is simple: power consumption.  

Now Ambiq, whose Apollo system-on-chip (SoC) and microcontroller (MCU) products have smashed records for ultra-low power consumption in standard industry benchmarks for microcontrollers, has demonstrated a series of reference and example designs that provide a new answer to the first question.  

For instance, AI applications running locally on Ambiq MCUs can detect many kinds of heart abnormalities in health-monitoring products, enhance speech through background noise cancellation, and control smart home devices without requiring the user to learn keyword combinations – all in devices such as wristbands and diagnostic patches that run on tiny coin cell batteries. 

Here’s how the next generation of endpoint AI has been made possible.  

Migrating advanced AI from the cloud to the endpoint 

The goal of enabling endpoint AI is not to perform functions that were never possible before. Using a cloud computing data center’s huge computing and energy resources, AI applications can do far more sophisticated inferencing than will ever be possible in a highly constrained endpoint device. Nevertheless, endpoint devices are capable of running AI locally to produce highly useful insights from the data collected by their sensors. 

Power consumption has long been the limiting factor that constrains AI at the endpoint. Lifting this constraint enables more sophisticated AI functions to be performed locally (wholly or in part). The benefits of locally executed AI inferencing are threefold:  

  • Lower latency and better reliability – local inferencing eliminates the time lag caused by transmitting data to the cloud and waiting for the cloud server to respond. Local inferencing also insulates the user from the risk of AI downtime caused by connectivity problems.  
  • Privacy and security – on-device AI protects private medical, personal or commercial data from exposure to a cloud service. And if AI data is not transmitted to the cloud, hackers cannot snoop on or tamper with it in transit.  
  • Less data traffic – many AI applications require huge amounts of data to be analyzed by an inference engine. Local inferencing or pre-processing of data eliminates or greatly reduces the amount of data to be transferred to the cloud, freeing up bandwidth for other operations. This in turn reduces power consumption, as data transfers are power-hungry, and cuts the cost of data transfers levied by network service providers.  

Until recently, however, local execution on power-constrained devices such as fitness bands and health monitors using conventional MCUs was limited to the simplest AI functions, such as step counting or voice detection.  

The more valuable uses for AI call for always-on sensing and inferencing – examples include voice user interfaces, speech denoising, and fall detection. Implemented on a conventional MCU, always-on AI is incredibly power-hungry and therefore not viable in battery-powered endpoints. The key to enabling sophisticated, useful endpoint AI is thus to maintain unprecedentedly low power consumption across every facet of the MCU's operation, from sensor interfacing and signal processing to data storage and AI inferencing.  

And this is what Ambiq enables with the SPOT® (Subthreshold Power Optimized Technology) platform on which its Apollo family of SoCs and MCUs is based.  

Sub-threshold operation reduces power by 10x 

The SPOT platform combines multiple techniques that enable circuits inside the processor to operate near the transistors' threshold voltage, and sometimes far below it. Conventional electronics design theory insists that proper operation of a transistor switch requires a swing from ground to well above the threshold voltage – a supply rail typically in the range of 0.8 V to 1.2 V in today's MCUs.  

In fact, analog engineers are familiar with the behavior of semiconductor circuits across a continuum from 0 V upwards. The SPOT platform exploits this behavior to implement logic functions at voltages as low as 0.3 V. Since the energy of a transistor switching event is proportional to the square of the voltage, huge reductions in power consumption can be gained by lowering the operating voltage in the way that the SPOT platform makes possible.  
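
To see why, note that dynamic switching energy scales as E = C·V². The minimal C++ sketch below works through the figures quoted above – a conventional supply of around 1.1 V versus SPOT-style operation at 0.3 V – with the exact voltages being illustrative assumptions rather than datasheet values:

```cpp
#include <cstdio>

int main() {
    // Dynamic switching energy per transition scales as E = C * V^2,
    // so the energy ratio between two supply voltages is (V2 / V1)^2.
    const double v_conventional = 1.1;  // volts: typical MCU supply rail (illustrative)
    const double v_spot = 0.3;          // volts: near/sub-threshold operation
    const double ratio = (v_spot * v_spot) / (v_conventional * v_conventional);
    std::printf("Relative switching energy: %.1f%% (about %.0fx lower)\n",
                ratio * 100.0, 1.0 / ratio);  // ~7.4%, roughly 13x lower
    return 0;
}
```

Capacitance, leakage, and clock frequency all complicate the picture in real silicon, but the square law explains why even a modest voltage reduction pays such large dividends.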

This is borne out in tests of Apollo4 MCUs using industry-standard EEMBC benchmarks, which show 4x to 16x lower power consumption than any other MCU – an average reduction of around 90% compared to the general MCU market.  

This superior power performance is also observed in AI applications. The MLPerf Tiny benchmark is the standard test of AI performance for resource-constrained endpoint devices. Here again, Apollo4 MCUs achieve a 10x reduction in power consumption compared to the average mid-range MCU. The Apollo4 MCU executes AI operations on its Arm® Cortex®-M4 CPU (see Figure 1), and its power advantage holds even against MCUs that partition AI functions into a dedicated neural processing unit.  

So, what is the impact on endpoint device designs? If an Apollo4 MCU draws 1 mW where a comparable MCU consumes 10 mW, an Apollo4-based design has 9 mW of headroom in its power budget to either improve the accuracy and speed of an AI application or perform more sophisticated AI functions.  

In fact, in new reference and demonstration designs developed by Ambiq, it has been possible to do both.  

Fig. 1: block diagram of the Ambiq Apollo4 SoC, which provides a rich set of microcontroller functions 

Raising the ceiling on what AI can do at the endpoint 

The applications Ambiq has developed for the Apollo4 MCUs all run on coin cell batteries and fit within the resource constraints of wearable devices such as earbuds, fitness bands, and medical diagnostic patches.  

Example code or ready-made application firmware is available today for: 

  • Speech enhancement – cancellation of ambient noise to produce clearer voice signals that are easier to hear 
  • Speech-to-intent conversion – spoken commands have a characteristic pitch/intensity pattern. When this pattern is represented as a spectrogram, a machine learning model can be trained to 'see' it (see the sketch after this list). This capability enables voice control of, for instance, smart home appliances without requiring the user to learn a special verbal formula or set of keywords: the Ambiq AI understands the intent of the user's natural language and responds accordingly.  
  • Detection of heart abnormalities – Ambiq’s endpoint AI diagnostic program uses electrocardiogram (ECG) and photoplethysmography (PPG) data to detect the signs of various cardiac abnormalities, including atrial fibrillation, arrhythmia, and more (see Figure 2). The system uses only the Apollo4 MCU’s on-chip memory resources.  
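
As an illustration of the speech-to-intent pipeline referenced above, the minimal C++ sketch below shows how raw audio is framed, windowed, and turned into the magnitude spectrogram 'image' that an intent model consumes. This is a naive host-side version for clarity; production firmware would use an optimized fixed-point FFT (for example, from CMSIS-DSP on the Cortex-M4), and the frame and hop lengths here are illustrative assumptions:

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Naive magnitude spectrogram: slice the waveform into overlapping frames,
// apply a Hann window, and compute DFT magnitudes for each frame.
std::vector<std::vector<float>> Spectrogram(const std::vector<float>& samples,
                                            std::size_t frame_len = 256,
                                            std::size_t hop = 128) {
    constexpr float kPi = 3.14159265f;
    std::vector<std::vector<float>> frames;
    for (std::size_t start = 0; start + frame_len <= samples.size(); start += hop) {
        std::vector<float> mags(frame_len / 2);
        for (std::size_t k = 0; k < frame_len / 2; ++k) {      // frequency bins
            float re = 0.0f, im = 0.0f;
            for (std::size_t n = 0; n < frame_len; ++n) {      // windowed DFT
                const float hann =
                    0.5f * (1.0f - std::cos(2.0f * kPi * n / (frame_len - 1)));
                const float angle = 2.0f * kPi * k * n / frame_len;
                re += samples[start + n] * hann * std::cos(angle);
                im -= samples[start + n] * hann * std::sin(angle);
            }
            mags[k] = std::sqrt(re * re + im * im);
        }
        frames.push_back(std::move(mags));
    }
    return frames;  // frames[time][frequency]: the 2D input to the intent model
}
```

Each command's characteristic pitch/intensity pattern shows up as a distinct shape in this time-frequency image, which is what allows a small vision-style model to classify intent.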

The development of these and other AI applications is supported by a comprehensive, intuitive set of enablement tools that Ambiq provides for the Apollo MCUs. These include the neuralSPOT software development kit (SDK), which provides an abstraction layer for application development that AI engineers will recognize and feel comfortable with.  

Ambiq also provides an AutoDeploy tool, which automates the process of deploying a trained model to the TensorFlow Lite for Microcontrollers run-time. AutoDeploy characterizes the operation of the hardware resources used by the model, enabling developers to fine-tune the AI application and optimize it for power or performance.  
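
Under the hood, the deployed model runs on the TensorFlow Lite for Microcontrollers (TFLM) runtime, which AutoDeploy drives for you. For orientation, a hand-written TFLM invocation – independent of Ambiq's tooling – follows the general shape sketched below. The model array g_model_data, the arena size, the operator list, and the ECG-window framing are hypothetical placeholders, and constructor details vary between TFLM versions:

```cpp
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Flatbuffer emitted when the trained model is converted for TFLM
// (hypothetical name; produced by the model conversion step).
extern const unsigned char g_model_data[];

constexpr int kArenaSize = 64 * 1024;              // scratch memory for tensors
alignas(16) static uint8_t tensor_arena[kArenaSize];

// Run one window of ECG samples through the model and copy out class scores.
int RunInference(const float* ecg_window, int window_len,
                 float* scores, int num_classes) {
    const tflite::Model* model = tflite::GetModel(g_model_data);

    // Register only the operators the model actually uses, to keep code size down.
    tflite::MicroMutableOpResolver<3> resolver;
    resolver.AddConv2D();
    resolver.AddFullyConnected();
    resolver.AddSoftmax();

    tflite::MicroInterpreter interpreter(model, resolver, tensor_arena, kArenaSize);
    if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

    TfLiteTensor* input = interpreter.input(0);
    for (int i = 0; i < window_len; ++i) input->data.f[i] = ecg_window[i];

    if (interpreter.Invoke() != kTfLiteOk) return -1;  // execute the graph

    TfLiteTensor* output = interpreter.output(0);
    for (int c = 0; c < num_classes; ++c) scores[c] = output->data.f[c];
    return 0;
}
```

AutoDeploy's value lies in automating exactly this kind of plumbing – arena sizing, operator selection, and per-layer profiling – so that the developer can concentrate on tuning for power or performance.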

Fig. 2: the heart arrhythmia classification example model performs real-time heart arrhythmia classification using 1-lead ECG and, optionally, PPG data 

Fast-expanding scope for AI in resource-constrained applications 

The Apollo4 MCUs greatly ease the power constraint that had previously limited the scope for AI on endpoint devices. Now, AI developers can explore the exciting opportunities to create new value from using AI in wearable and other devices.  

Ambiq sees big potential today in healthcare, patient monitoring, and the industrial IoT (IIoT). In the domain of healthcare, the world has to face the prospect of a steadily aging population and the difficulty of training enough physicians to meet even today’s demand. At-home patient monitoring supported by AI can enable patients to be seen by a doctor when their condition merits attention, eliminating routine or unnecessary visits to the doctor.  

In IIoT, endpoint AI can continuously monitor machine health or the integrity of structures, even in remote locations with no connection to the grid. This can enable intelligent preventive maintenance, cutting the cost of scheduled maintenance, extending uptime, and reducing the frequency of unplanned downtime for repairs.  

Using an Apollo MCU, this is possible today in devices operating on a small coin cell battery.  

Looking further ahead, the capabilities of endpoint AI are only going to become more exciting. Enabled by Ambiq's energy-saving technology, earbuds will be able to perform real-time language translation. Computer vision will also become possible in smartwatches and smart glasses, enabling IIoT applications such as a battery-powered camera that reads a utility meter's analog display. 

Get started today with hardware and software enablement tools 

Embedded and AI developers can start discovering how to take advantage of the ridiculously low power consumption of the Apollo4 MCUs with Ambiq's range of hardware and software tools and resources. These include the Apollo4-SoC-EVB evaluation board (see Figure 3), backed by the Model Zoo library of AI code examples, the neuralSPOT SDK, and a wide range of training videos, articles, and white papers available online.  

Fig. 3: the Apollo4-SoC-EVB evaluation board integrates with a PC host to help developers make training data sets compatible with the MCU environment 

Note: Ambiq also produces another version of this evaluation board that features a display shield for immediate development of AI-capable wearable devices. Learn more and apply to evaluate the technology by following the link to the board page below. 

Ambiq Apollo4 Plus SoC Display Kit

Developing the next generation of AI-capable wearable devices?

Apply for the development board now
