The History of Power (Energy) Management in IC Design

Power management in IC design is loaded with exciting new developments. There have been attempts to solve power-related issues by altering the overall semiconductor process, biasing selected portions of a design, implementing various power distribution structures, introducing power gating, changing on-chip network architectures, adding microprocessor-based power channels, adding energy processing units, and more. Before jumping right into this topic, I think it’s best to provide an overview of what got us to this point. In future posts, I’ll delve more into the state of the art and the important trends affecting the future of power-efficient IC design.

In the very earliest days of digital IC design, little attention was paid to optimizing power in a design. Keep in mind that at that time there were not many transistors on a chip compared to what we are used to today. If you kept adding logic to a design, eventually the die would get large enough to have yield problems long before you ran into power problems. For devices with wired power, the supply seemed practically infinite because the amount of logic in a design was so limited.

In 0.5µm and 0.35µm processes, we did have some new problems to consider. As the number of transistors went up, so did the temperature of the device. This meant we had to consider what to do about chip packaging in order to dissipate that heat. Plastic packaging was cheaper, but could only handle so much heat before we needed to consider more expensive packaging like ceramic. Of course, the designer could try to partition the design into multiple chips and pass the problem to the board level. Still, designers were mostly stuck in the mindset that the system and logic design was correct, and that it was the IC implementation that had to solve the energy problems.

How far was the IC implementation team supposed to go before we went back to the system and logic designers to help solve the problem? Very far, indeed. I was working in the design automation department at Trilogy Systems in the first half of the 1980s. They were using a fast, yet power-hungry, semiconductor process (ECL), along with an architecture utilizing triplicated logic with voter-MUX latches, to attempt to build a wafer-scale integrated part that was to be an IBM-compatible mainframe computer. I know – we were crazy. But it was one way to get more logic into one semiconductor device. How did they plan to cool these wafers, which would be hot enough and large enough to make very expensive cooking surfaces? With water, of course. Though the company failed, the patents from developing that cooling technology remained valuable for years to come.

Clearly the choice of semiconductor manufacturing process can have a large impact on the power consumption of a design. MOS became the dominant process technology in part because it offered a good compromise between speed and power. Then, as process dimensions shrank, we found we could also decrease the operating voltages, but only so far. Early chips using only one or two metal layers would also use the polysilicon layer for interconnect. Since polysilicon is far more resistive than metal (which costs power and adds heat), we stopped doing that once we had three layers of metal.

As we added still more interconnect layers and further shrank the feature sizes, a new type of energy problem arose: voltage (IR) drop. With the increased peak current associated with additional transistors, more of that smaller operating voltage was lost to supply network resistance. So, EDA (Electronic Design Automation) tools were created to analyze and improve the supply network, which in turn affected place and route. Accommodating designers’ wishes with tools, rather than asking them to change their design style, had become a leading impetus at EDA companies.
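To put IR drop in concrete terms, here is a quick back-of-envelope estimate in Python. The supply voltage, peak current, and grid resistance below are illustrative assumptions, not numbers from any real design:

    # Back-of-envelope IR-drop estimate (illustrative numbers only).
    supply_voltage = 1.0     # volts: assumed core supply for a deep-submicron node
    peak_current = 2.0       # amps: assumed aggregate peak switching current for a block
    grid_resistance = 0.025  # ohms: assumed effective resistance from pad to the block

    ir_drop = peak_current * grid_resistance  # V = I * R
    droop_pct = 100.0 * ir_drop / supply_voltage

    print(f"IR drop: {ir_drop * 1000:.0f} mV ({droop_pct:.0f}% of a {supply_voltage:.1f} V supply)")
    # With these assumed numbers the block sees only about 0.95 V, which is why
    # power-grid analysis and sign-off margins became part of place and route.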

The dramatic increase in transistor leakage that arrived with the 40nm technology node was a big shock to designers. What a triple whammy – higher transistor count (more things to leak), higher leakage per transistor (due to shorter gate lengths), and an increased market appetite for mobile, battery-operated chips. Multiple techniques had to be introduced simultaneously to address these challenges, notably power gating and DVFS (Dynamic Voltage and Frequency Scaling). This drove customers to work with EDA companies to create power intent standards (now IEEE 1801 UPF) and to extend tools to model multi-voltage and gated supply networks. Other techniques already in use, such as on-chip networks to reduce wire counts, also saw increased adoption.
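To see why DVFS pays off, recall that dynamic switching power scales roughly with C·V²·f, so lowering the voltage (and the frequency it can support) cuts power disproportionately. Here is a minimal sketch; the capacitance, voltage, frequency, and activity values are purely illustrative assumptions:

    # Minimal sketch of why DVFS helps: dynamic power scales roughly with C * V^2 * f.
    # All numbers below are illustrative assumptions, not measurements.

    def dynamic_power(cap_farads, voltage, freq_hz, activity=0.2):
        """Classic switching-power approximation: P ~ alpha * C * V^2 * f."""
        return activity * cap_farads * voltage**2 * freq_hz

    nominal = dynamic_power(cap_farads=1e-9, voltage=1.0, freq_hz=2e9)
    scaled  = dynamic_power(cap_farads=1e-9, voltage=0.8, freq_hz=1.2e9)

    print(f"nominal operating point: {nominal:.2f} W")
    print(f"scaled operating point:  {scaled:.2f} W "
          f"({100 * (1 - scaled / nominal):.0f}% dynamic-power reduction)")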

For their part, the EDA suppliers did add useful tools for calculating power as the challenges grew. Emulation, simulation, and verification tools could estimate energy at the full-chip level on down to much smaller pieces of a design, and could do so under different workloads or usage scenarios. As the need for tool interoperability was obvious, the community has recently coalesced these ideas into IEEE 1801 (UPF). For more details on IEEE standards around power design, see Karen Bartleson’s Electronic Design article, “These 3 Standards Will Foster Smart Low-Power Design.” These standards were made to capture the chip architects’ intent for power design, as well as to document measured results from tools (tool interoperability). These are useful steps, but by themselves they do not provide a solution to the overall problems faced by chip architects or energy management experts.

Though you have not heard them say it, the EDA companies have come to realize that their tools alone are not going to solve today’s IC design energy problems. To be fair, these new EDA tools are not to the energy architecture what a hammer and nails are to a building architect; they are more like the modern building architect’s CAD system. Funny, EDA used to be known as CAD (Computer-Aided Design). History has proven that a tools-based solution is insufficient. Modern techniques rely on on-chip power management, usually with some software-based control: portions of the chip are turned on and off based on when they are needed. In fact, most modern chips would melt if the entire design were turned on at the same time. These are not problems that can be solved simply with EDA tools.
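As a toy illustration of that software-based control, here is a sketch of a simple idle-timeout power-gating policy. The class, domain name, and break-even threshold are all invented for illustration; real policies live in firmware or an operating system’s power-management framework:

    # Toy illustration of software-controlled power gating: a block is switched off
    # once it has been idle long enough to pay back the energy cost of waking it.
    # Names and thresholds are invented for illustration.

    BREAK_EVEN_IDLE_US = 50  # assumed: idle time needed to recoup wake-up energy

    class PowerDomain:
        def __init__(self, name):
            self.name = name
            self.powered = True
            self.idle_us = 0

        def tick(self, busy, elapsed_us):
            self.idle_us = 0 if busy else self.idle_us + elapsed_us
            if busy and not self.powered:
                self.powered = True      # wake the block on demand
            elif not busy and self.powered and self.idle_us >= BREAK_EVEN_IDLE_US:
                self.powered = False     # gate it off once idle long enough

    gpu = PowerDomain("gpu")
    for busy in [True, False, False, False, False, False, True]:
        gpu.tick(busy, elapsed_us=10)
        print(gpu.name, "on" if gpu.powered else "off")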

The problems we see in energy management today will be solved by architectural methods, including the use of predefined yet customizable semiconductor IP, and by software. Energy management methodologies will be implemented by the chip architect and his/her energy management team. The future is daunting and exciting! More on the modern approach to solving IC energy management problems will come in a follow-up blog later this spring.
