Taking Energy Back from Next-Generation MCU Designs

Microamp-per-megahertz thinking served the microcontroller (MCU) community well for decades. As the focus shifts to connectivity and always-on use cases, bigger cores and wireless IP blocks push energy use in the wrong direction. Next-generation MCUs can ill afford to spend more energy just to manage themselves, and any software made mandatory just to get an MCU running usually frustrates customers considering design-ins. How does the MCU ecosystem manage energy moving forward?

From beginnings as register-oriented compute nuggets, MCUs grew bigger cores and wider data paths backed by higher resolution A/D and D/A peripherals. The “work hard, sleep harder” philosophy debuted: by waking up, computing as quickly as possible, and getting back to sleep, an MCU reduces its overall duty cycle and saves energy. A low duty cycle driven by software has its limits, however; it can cost more energy just to wake up and shut down a processor core than it takes to sample a sensor.
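To make that last point concrete, here is a rough back-of-the-envelope comparison in C. Every number in it is an assumption chosen for illustration, not a measurement from any particular MCU; the point is only that wake-up and shutdown overhead can dwarf the cost of the sample itself.

```c
/* Illustrative comparison of core wake/shutdown energy vs. one sensor sample.
 * All figures are assumptions for a hypothetical low-power MCU. */
#include <stdio.h>

int main(void) {
    const double wake_time_us      = 100.0; /* assumed core wake-up latency      */
    const double shutdown_time_us  = 50.0;  /* assumed state save + power-down   */
    const double sample_time_us    = 10.0;  /* assumed single ADC conversion     */
    const double active_current_ma = 3.0;   /* assumed core active current       */
    const double adc_current_ma    = 0.5;   /* assumed ADC conversion current    */
    const double vdd_v             = 1.8;   /* assumed supply voltage            */

    /* us * mA * V = nJ; divide by 1000 to report microjoules. */
    double overhead_uj = (wake_time_us + shutdown_time_us) * active_current_ma * vdd_v / 1000.0;
    double sample_uj   = sample_time_us * adc_current_ma * vdd_v / 1000.0;

    printf("wake/shutdown overhead: %.2f uJ\n", overhead_uj);
    printf("one sensor sample:      %.3f uJ\n", sample_uj);
    printf("overhead/sample ratio:  %.0fx\n", overhead_uj / sample_uj);
    return 0;
}
```

With these assumed figures the wake/shutdown overhead works out to roughly 90 times the energy of the sample itself, which is exactly the imbalance the rest of this post is about.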

An “MCU-on-steroids” approach slims down heterogeneous multicore architectures derived from mobile application processor designs, seeking a cost and power consumption profile MCU customers can accept. With more DMA-capable I/O cores in play, designers often turn to network-on-chip (NoC) IP to solve integration and multi-rate, multi-protocol challenges. NoC-based designs enable easier partitioning of interconnect logic into the MCU power architecture, greatly reducing the gate count in the “always on” portion of the design. A NoC-based approach also helps designers spin more MCU variants quickly.

Most advanced MCU designers still start by choosing the lowest-power processor core, occasionally with DVFS support, and partitioning IP blocks into clock- and power-gated domains. Efficient cores, interconnect, and gating only go so far against the stringent ultra-low-power goals of most IoT and wearable applications operating on battery power or energy harvesting technology. Microamps-per-megahertz is now an incomplete metric, because crucial work in the MCU often occurs outside of the processing core.

With power consumption left in the lurch, the MCU design pendulum is swinging back to energy management, but with a huge plot twist: little or no software involved. In most mobile SoC designs, software-based schemes controlled by sophisticated operating systems sequence power carefully. For MCU design-ins, a compact RTOS or bare-metal programming is the norm. Forcing users to write their own code to sequence power states for hardware they didn’t design is a bad idea.
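For a sense of what that burden looks like, here is a minimal bare-metal sketch of the kind of sequencing code users end up writing themselves. The register addresses, bit fields, and ordering constraints are entirely hypothetical, invented only to illustrate the pattern; real parts have their own, often more intricate, rules.

```c
/* Hypothetical memory-mapped power management unit (PMU) registers.
 * Addresses, bits, and the required ordering below are invented for illustration. */
#include <stdint.h>

#define PMU_BASE      0x40007000u
#define PMU_CLK_EN    (*(volatile uint32_t *)(PMU_BASE + 0x00))
#define PMU_PWR_EN    (*(volatile uint32_t *)(PMU_BASE + 0x04))
#define PMU_STATUS    (*(volatile uint32_t *)(PMU_BASE + 0x08))

#define DOMAIN_RADIO  (1u << 0)
#define STATUS_STABLE (1u << 7)

static void radio_domain_on(void)
{
    PMU_PWR_EN |= DOMAIN_RADIO;               /* power the rail first...          */
    while (!(PMU_STATUS & STATUS_STABLE)) { } /* ...busy-wait for it to settle... */
    PMU_CLK_EN |= DOMAIN_RADIO;               /* ...then release the clock        */
}

static void radio_domain_off(void)
{
    PMU_CLK_EN &= ~DOMAIN_RADIO;              /* gate the clock first             */
    PMU_PWR_EN &= ~DOMAIN_RADIO;              /* then drop the rail               */
}
```

Every added power domain multiplies this code, and the ordering knowledge it encodes (rail before clock on the way up, clock before rail on the way down) is exactly the kind of inter-dependent sequencing that does not belong in application software.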

Next-generation MCU with Sonics EPU for hardware-based energy management

What is the right approach in taking energy back from next-generation MCU designs? More and more MCU designers are now discovering energy management hardware IP, opening new possibilities. Sonics’ Energy Processing Unit (EPU) can exploit idle moments – in an IoT protocol, or a sensor processing algorithm, or a control loop – up to 500x faster and with far finer granularity than is possible with software.

To date, Sonics has mostly focused on the mobile SoC design problem, where teams have hit the limits of software-based power management. EPU IP technology may be an even better fit for MCU chip manufacturers and application developers using MCUs. How does this look in an MCU context?

  • Distributed energy management – EPU architecture distributes hardware controllers across the different power grains of the MCU using a timing-friendly fabric driven by centralized sequencers. Instead of brute-force power and clock gating through programmable registers, automated design tools put the ICE-Grain IP to work without requiring power management experts or months of effort.
  • Low overhead – EPU architecture is autonomous and runs completely independently of the highest-power digital components in an advanced MCU: the processor core and its memories. This means energy management can be active chip-wide while the processor core is powered down. Reacting to sensor inputs without waking the processor is a huge win.
  • More designs, less time – MCU designers want flexibility, and packages and pins matter. They spin variants rapidly, cutting peripherals in and out to create exact fits for an application. Configurability wreaks havoc on a software-based power management scheme with inter-dependent sequencing, but is no problem for EPU hardware IP that directly captures such constraints.
  • Use case exploration – A sticking point is optimizing the energy control approach for critical use cases. Thoroughly evaluating even a couple of use cases MCU-wide can take months of effort when power management is complex. EPU architecture lends itself to rapid exploration with automation in play, which means more use cases can be optimized quickly.

Clusters implement high-level power states and transitions

What I see in working with EPU IP technology are two opportunities for deeper optimization of MCU designs, far beyond what conventional methods achieve.

First is the concept of clusters: grains managed together by a controller to implement a set of user-defined high-level power states and transitions. EPU designs are specified using EDA tools that generate an optimized IP configuration from diagrams and tables. Clusters can bring system-level power management to life, easily visualized and automatically verified.
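As a thought experiment, the cluster idea can be pictured as a small table mapping each grain in a group to its setting in each user-defined state. Everything below, including the state names, the grains, and the settings, is invented for illustration; an actual EPU configuration comes out of Sonics’ tools and diagrams, not hand-written C.

```c
/* Purely illustrative data structure for the cluster concept: a handful of
 * grains managed together through a few user-defined high-level states. */
#include <stdint.h>

enum cluster_state { CL_OFF, CL_RETENTION, CL_SENSE_ONLY, CL_ACTIVE, CL_NUM_STATES };

/* Per-grain setting in each cluster state: 0 = power gated, 1 = retention, 2 = on. */
struct grain {
    const char *name;
    uint8_t setting[CL_NUM_STATES];
};

static const struct grain sensor_cluster[] = {
    /*  grain             OFF  RETENTION  SENSE_ONLY  ACTIVE */
    { "adc_frontend",   {  0,      0,         2,        2 } },
    { "sample_sram",    {  0,      1,         2,        2 } },
    { "dma_channel",    {  0,      0,         2,        2 } },
    { "filter_accel",   {  0,      0,         0,        2 } },
};
```

The value of the abstraction is that a designer reasons about four named states and the legal transitions between them, while the per-grain gating details stay inside the generated controller where they can be visualized and verified automatically.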

Second is exploiting the short but numerous idle moments that live in-between active moments in an algorithm. Traditionally, MCU users are forced to make power decisions based on operating modes at the millisecond level or slower; functions are gated on or off depending on system needs. The EPU IP offers far finer granularity, both in space and time. A breakthrough case study decomposed the Google G2 VP9 Decoder IP block, snatching 94% of the overall energy from 480i60 playback with EPU IP inserted.

How much energy is locked inside a bursty IoT protocol, or MEMS sensor sampling, or a low-power display? Idle moments exist, yet the required digital IP blocks stay fully on only because turning them off and back on takes too long when the processor and software are in the loop. (Analog blocks with settling time can be trickier.) With hardware IP doing energy management, off/on timing shrinks drastically, perhaps 50 to 500 times, and finer grains expose more and longer idle moments to control.
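The arithmetic behind that claim is simple enough to sketch: an idle interval only pays off if it lasts longer than the transition cost divided by the power saved while the block is off. The numbers below are assumptions chosen for illustration, not vendor data.

```c
/* Break-even idle duration: how long a block must stay idle before gating it
 * saves energy, for an assumed transition cost and power saving. */
#include <stdio.h>

static double break_even_us(double transition_energy_nj, double power_saved_mw)
{
    /* nJ / mW = microseconds */
    return transition_energy_nj / power_saved_mw;
}

int main(void) {
    const double block_power_mw = 2.0;   /* assumed power saved while block is off */

    /* Software-driven gating: wake the core, run a driver; assume ~5000 nJ per round trip. */
    printf("software gating break-even: %.0f us idle\n",
           break_even_us(5000.0, block_power_mw));

    /* Hardware-driven gating: assume a round trip ~100x cheaper, ~50 nJ. */
    printf("hardware gating break-even: %.0f us idle\n",
           break_even_us(50.0, block_power_mw));
    return 0;
}
```

With these assumed figures, software-driven gating only pays off for idle gaps longer than a couple of milliseconds, while hardware-driven gating makes gaps of a few tens of microseconds worth exploiting, which is where the short, numerous idle moments in protocols and sensor loops live.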

Ready for a first-hand look at EPU IP technology? Click the “Free Trial” button in the upper right corner, or look at my previous post walking through the EPU Studio Configuration Trial.

It’s time to break away from the core-level power management trap that limits what MCUs can achieve. For IoT and wearable devices to succeed with consumers, the industry needs more than incremental MCU power management improvements. Sonics EPU IP technology takes the task of energy management from a team of highly specialized hardware and software designers to an intermediate (or experienced) functional designer who needn’t worry about power management details. Cracking the code – literally, with a hardware-based implementation – will lead to breakthrough MCU products using far less energy.
