Push for energy efficient computing

EDN (Electronics Design, Strategy, News) has an article on energy-efficient computing.

Industry standards lead push toward energy-efficient computing

Environmental concerns and rising energy costs are spurring industry and government groups to develop requirements for high-efficiency AC and DC power conversion, leading to energy-efficient servers. Meeting the newest specifications will demand knowledge of competing power-conversion topologies, components, and design.

By Lee Harrison, Peritus Power -- EDN, 11/12/2009

AT A GLANCE

  • The ac/dc power-conversion step in the overall power chain for server farms can yield some of the most significant gains in power efficiency.
  • To meet industry standards, manufacturers have taken different approaches, including using interleaved PFC (power-factor correction), bridgeless PFC, and resonant topologies.
  • Thanks to its zero-voltage switching (which minimizes switching losses), higher switching frequencies, and smaller footprints, resonant-converter topology may be able to achieve Energy Star Platinum standards.
  • For the near future, silicon will remain the dominant switching-semiconductor material, and gallium nitride will start to make inroads over the next year.

In addition to environmental concerns, the increasing cost of electricity is driving data-center managers to more energy-efficient installations. As utility bills become the primary expense for data centers, electricity costs now outweigh real-estate costs, with power consumption per data center ranging from 2 to 22 MW. In 2007, the Internet accounted for 9.4% of total US electricity consumption and 5.3% of global electricity consumption. Networking equipment, such as modems, routers, hubs, and switches, accounted for about 25% of the electricity demand in an average office. If the computers and servers in an infrastructure require 200 kW, then the networking components in that infrastructure need 50 kW. In addition, 45% of the power a data center consumes is for air-conditioning and cooling. In modern data centers, performance per watt has become more critical than performance per processor.
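The figures quoted above can be turned into a rough back-of-the-envelope budget. The sketch below is illustrative only: it takes the article's ratios at face value (networking draws ~25% of the server load, cooling consumes ~45% of total facility power) and assumes, for the sake of the arithmetic, that the remaining 55% of facility power is the combined server-plus-networking load. The function name and structure are mine, not from the article.

```python
# Back-of-the-envelope data-center power budget using the ratios quoted
# in the article. All numbers are illustrative, not measured data.

def power_budget(server_load_kw: float,
                 networking_fraction: float = 0.25,   # ~25% of server load
                 cooling_fraction: float = 0.45) -> dict:  # ~45% of total
    """Estimate a facility power budget from the server (compute) load.

    Assumption: if cooling is 45% of *total* facility power, then the
    server-plus-networking load is the remaining 55%, so
    total = (servers + networking) / 0.55.
    """
    networking_kw = server_load_kw * networking_fraction
    it_total_kw = server_load_kw + networking_kw
    total_kw = it_total_kw / (1.0 - cooling_fraction)
    cooling_kw = total_kw * cooling_fraction
    return {
        "servers_kw": server_load_kw,
        "networking_kw": networking_kw,
        "cooling_kw": round(cooling_kw, 1),
        "total_kw": round(total_kw, 1),
    }

budget = power_budget(200.0)  # the article's 200 kW server example
print(budget)  # networking comes out to 50 kW, matching the article
```

Under these assumptions, a 200 kW server load implies roughly 50 kW of networking gear and a total facility draw well above the IT load once cooling is included, which is why the article treats the AC/DC conversion step and cooling overhead as the big levers.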

If you are a power-supply/conversion geek, read the rest of the article. Here is more background about the author.

Lee Harrison is director of Peritus Power. Previously, he worked at Sun Microsystems as a power-system architect and technical leader for the Sparc and x86 platforms, developing the technology and strategy for power conversion with Emerson Network Power, Delta, Lineage, Power One, and FDK from 2000 to 2009. He provides input to the Environmental Protection Agency on power-related issues and has been a voting member of the Climate Savers ac/dc work group for the last four years. Before joining Sun, Harrison was an engineering manager for a UK-based defense power-supply company, specializing in high-density, low-profile dc/dc and ac/dc power conversion and nuclear-protected electronics.

Peritus Power raises interesting challenges in making systems more energy-efficient.

The Problems of Attaining High Efficiency

Higher efficiency sounds like an easy task, but the path is littered with problems for the unwary. Modifying a legacy design is not always possible; new designs quite often require starting from scratch. Higher efficiency means faster switching edges, which demands precautions to ensure radiated EMI does not become a problem. More noise in the PSU can cause I2C communication issues, and of course power-sapping snubbing is not an option. Cost becomes an issue if you are not careful throughout the entire design.

I’ve spent many hours working with the EMI testing team while at Apple, and it makes total sense that the faster switching edges of high-efficiency power conversion affect EMI radiation.

The last thing you want is high efficiency systems that interfere with the operations of other IT equipment.