Pacific NW gets $89 mil of the $620 mil DOE Smart Grid grants

Pacific Northwest National Laboratory will manage the Pacific Northwest Smart Grid Demonstration project.

NW power grid project gets $89 million from DOE

By The Associated Press

RICHLAND — A project to examine how high technology can improve the Pacific Northwest's electric power grid has received an $88.8 million grant from the U.S. Department of Energy.

The money, to help pay for the $177.6 million Pacific Northwest Smart Grid Demonstration Project, was the largest among 32 grants DOE announced Tuesday as part of $620 million in stimulus aid.

The grant will go to Battelle Memorial Institute's Pacific Northwest National Laboratory in Richland, which will manage the project. The remainder of the project's cost will be borne by energy providers, utilities, technology companies and research organizations taking part.

Electricity Infrastructure Operations Center (photo by PNNL - Pacific Northwest National Laboratory)

The Electricity Infrastructure Operations Center at PNNL is a user-based facility dedicated to energy and hydropower research, operations training and back-up resources for energy utilities and industry groups.

Smart meters are part of the project.

Among those taking part in the project are the campuses of the University of Washington in Seattle and Washington State University in Pullman. At both schools, "smart meters" will be installed to provide real-time information on power consumption, along with software and other gear to automate and monitor the electricity distribution system.

I wonder if anyone has thought about including the Pacific NW data centers in Washington and Oregon in the project. The problem is that almost all the big data center operators wouldn't want the public to know the power consumption of their data centers.

I hope someone proves me wrong and signs up with PNNL.

Read more

What Could Data Center Regulation Look Like?

Mike Manos discussed data center regulation in this post.

Coming Soon to a Data Center near you, Regulation.

June 19, 2009 by mmanos

As an industry, we have been talking about it for some time. Some claimed it would never come and it was just a bunch of fear mongering. Others like me said it was the inevitable outcome of the intensifying focus on energy consumption. Whether you view this as a good thing or a bad thing, it's something that you and your company are going to have to start planning for very shortly. This is no longer a drill.

CRC – it's not just a cyclic redundancy check

I have been tracking the energy efficiency work being done in the United Kingdom for quite some time, along with developments in the Carbon Reduction Commitment (CRC). My recent trip to London afforded me the opportunity to dive significantly deeper into the draft and discuss it with a user community (at the Digital Realty round table event) who will likely be the first impacted by such legislation. For those of you unfamiliar with the initiative, let me give a quick overview of the CRC and how it will work.

The CRC is a mandatory carbon reduction and energy efficiency scheme aimed at changing energy use behaviors and further incentivizing the adoption of efficient technology and infrastructure. While it is not specifically aimed at data centers (it's aimed at everyone), you can see that by its definition data centers will be significantly affected. It was introduced as part of the Climate Change Act 2008.

Here is a list of some ways data centers could be regulated (a sketch of what automated reporting might look like follows below).

  1. What is your PUE? A number that is too high may be penalized for inefficiency.
  2. Do you purchase carbon credits? Report the specifics.
  3. How do you calculate your carbon footprint? Submit your calculations. Has the reporting been audited?
  4. What % of your servers are ENERGY STAR? Report the number and how many more are planned.
  5. What % of your servers are running virtualization? How many virtualized server images do you run?
  6. What is the water use of your data center? What is the quality of your waste water?
  7. What is the average inlet temperature of your servers? Are you overcooling your equipment?
  8. What is your eWaste policy for IT equipment?
  9. What is your long-term commitment to carbon reduction? What is your current status? What are the penalties for non-compliance?
  10. What investments are being made in carbon-neutral power?

These are just some ideas quickly jotted down, but I am sure the government organizations can come up with a lot more as a new revenue stream to tax the rich. Yes, the large data centers are in general run by rich organizations. You could exempt the poor and middle class corporations by defining a threshold, like any corporation using more than 2 MW of data center power.
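
To make item 1 and the 2 MW cutoff concrete, here is a minimal sketch in Python of what an automated compliance check might look like. The facility names and numbers, the 1.8 PUE penalty threshold, and the exemption cutoff are all illustrative assumptions on my part, not anything from the CRC.

    # Hypothetical compliance check -- the PUE penalty threshold and the
    # 2 MW exemption cutoff are illustrative, not from any regulation.
    FACILITIES = [
        # (name, total facility power in kW, IT equipment power in kW)
        ("dc-east", 4200.0, 2500.0),
        ("dc-west", 1500.0, 1000.0),
    ]

    PUE_PENALTY_THRESHOLD = 1.8  # assumed cutoff for "a number too high"
    EXEMPTION_KW = 2000.0        # "more than 2 MW of data center power"

    for name, total_kw, it_kw in FACILITIES:
        if total_kw <= EXEMPTION_KW:
            print(f"{name}: exempt (under 2 MW)")
            continue
        pue = total_kw / it_kw   # PUE = total facility power / IT power
        status = "penalized" if pue > PUE_PENALTY_THRESHOLD else "compliant"
        print(f"{name}: PUE = {pue:.2f} -> {status}")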

Read more

Educating Procurement to buy Energy Efficient Servers, Testing x86 Low Power CPU based Servers

I just posted on the idea that the biggest opportunity to green the data center is procurement. One day later, AnandTech has a post on testing energy-efficient servers. If you want the version of the article that is all on one page, go to the print version, as there are 12 pages.

Testing the latest x86 rack servers and low power server CPUs

Date: July 22nd, 2009
Topic: IT Computing
Manufacturer: Various
Author: Johan De Gelas

The x86 rack server space is very crowded, but it is still possible to rise above the crowd. Quite a few data centers have many "gaping holes" in the racks as they have exceeded the power or cooling capacity of the data center and it is no longer possible to add servers. One way to distinguish your server from the masses is to create a very low power server. The x86 rack server market is also very cost sensitive, so any innovation that seriously cuts the costs of buying and managing the server will draw some attention. This low power, cost sensitive part of the market does not get nearly the attention it deserves compared to the high performance servers, but it is a huge market. According to AMD, sales of their low power (HE and EE) Opterons account for up to 25% of their total server CPU sales, while the performance oriented SE parts only amount to 5% or less. Granted, AMD's presence in the performance oriented market is not that strong right now, but it is a fact that low power servers are getting more popular by the day.

The low power market is very diverse. The people in the "cloudy" data centers are - with good reason - completely power obsessed as increasing the size of a data center is a very costly affair, to be avoided at all costs. These people tend to almost automatically buy servers with low power CPUs. Then there is the large group of people, probably working in the Small and Medium Enterprise businesses (SMEs) who know they have many applications where performance is not the first priority. These people want to fill their hired rack space without paying a premium to the hosting provider for extra current. It used to be rather simple: give heavy applications the (high performance) server they need and go for the simplest, smallest, cheapest, and lowest power server for applications that peak at 15% CPU like fileservers and domain controllers. Virtualization made the server choices a lot more interesting: more performance per server does not necessarily go to waste; it can result in having to buy fewer servers, so prepare to face some interesting choices.

This article is quite long, which is another reason why procurement professionals would not read and digest it, and few have the engineering skills to follow it. The writers did not even have the BDM (business decision maker) for server purchasing in mind for this article, thinking the server admin and CIO are the audience.

Does that mean this article is only for server administrators and CIOs? Well, we feel that the hardware enthusiasts will find some interesting info too. We will test seven different CPUs, so this article will complement our six-core Opteron "Istanbul" and quad-core Xeon "Nehalem" reviews. How do lower end Intel "Nehalem" Xeons compare with the high end quad-core Opterons? What's the difference between a lower clocked six-core and a highly clocked quad-core? How much processing power do you have to trade when moving from a 95W TDP Xeon to a 60W TDP chip? What happens when moving from a 75W ACP (105W TDP) six-core Opteron to a 40W ACP (55W TDP) quad-core Opteron? These questions are not the ultimate goal of this article, but it should shed some light on these topics for the interested.

How many procurement people would understand this?

The Supermicro Twin2

This is the most innovative server of this review. Supermicro places four servers in a 2U chassis and feeds them with two redundant 1200W PSUs. The engineers at Supermicro have thus been able to combine very high density with redundancy - no easy feat. Older Twin servers were only attractive to the HPC world, where computing density and affordable prices were the primary criteria. Thanks to the PSU redundancy, the Twin2 should provide better serviceability and appeal to people looking for a web, database, or virtualization server.

Most versions of this chassis support hot swappable server nodes, which makes the Twin2 a sort of mini-blade. Sure, you don't have the integrated networking and KVM of a blade, but on the flip side this thing does not come with TCO-increasing yearly software licenses and the obligatory expensive support contracts.

By powering four nodes with a 1+1 PSU, Supermicro is able to offer redundancy and at the same time can make sure that the PSU always runs at a decent load, thus providing better efficiency. According to Supermicro, the 1200W power supplies can reach up to 93% efficiency. This is confirmed by the fact that the power supply is certified by the Electric Power Research Institute as an "80+ Gold" PSU with a 92.4% power efficiency at 50% load and 91.2% at 20% load. With four nodes powered, it is very likely that the PSU will normally run between these percentages. Power consumption is further reduced by using only four giant 80mm fans. Unfortunately, and this is a real oversight by Supermicro, the fans are not easy to unplug and replace. We want hot swappable fans.
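
As a rough sanity check on those numbers, here is a small sketch of where a loaded Twin2 might land on that efficiency curve. The per-node draw is an assumption on my part; the two efficiency points are the 80+ Gold figures quoted above, and I simply interpolate linearly between them, assuming the 1+1 supplies share the load evenly.

    # Estimate PSU load and efficiency for the Twin2's 1+1 1200W supplies.
    NODES = 4
    NODE_DRAW_W = 250.0      # assumed average draw per server node
    PSU_CAPACITY_W = 1200.0  # per supply, 1+1 redundant

    total_w = NODES * NODE_DRAW_W
    # Assuming 1+1 redundant supplies share the load evenly,
    # each PSU carries half of the total draw.
    load_fraction = (total_w / 2) / PSU_CAPACITY_W

    def efficiency(load):
        # Linear interpolation between the quoted 20% and 50% load points.
        lo_load, lo_eff = 0.20, 0.912
        hi_load, hi_eff = 0.50, 0.924
        t = (load - lo_load) / (hi_load - lo_load)
        return lo_eff + t * (hi_eff - lo_eff)

    print(f"Load per PSU: {load_fraction:.0%}, "
          f"estimated efficiency: {efficiency(load_fraction):.1%}")

At 250W per node that works out to roughly a 42% load per supply and an estimated 92% efficiency, right in the band AnandTech describes.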

Supermicro managed to squeeze two CPUs and 12 DIMM slots on the tiny boards, which means that you can outfit each node with 48GB of relatively cheap 4GB DIMMs. Another board has a Mellanox Infiniband controller and connector onboard, and both QDR and DDR Infiniband are available. To top it off, Supermicro has chosen the Matrox G200W as a 2D card, which is good for those who still access their servers directly via KVM. Supermicro did make a few compromises: you cannot use Xeons with a TDP higher than 95W (who needs those 130W monsters anyway?), 8GB DIMMs seem to be supported only on a few SKUs right now, and there is only one low profile PCI-e x16 expansion slot.

The Twin2 chassis can be outfitted with boards that support Intel "Nehalem Xeons" as well as AMD "Istanbul Opterons". The "Istanbul version" came out while we were typing this and was thus not included in this review.

The following gear was used for power measurement.

Power was measured at the wall by two devices: the Extech 38081…

…and the Ingrasys iPoman II 1201. A big thanks to Roel De Frene of Triple S for letting us test this unit.

The Extech device allows us to track power consumption each second, the iPoman only each minute. With the Supermicro Twin2, we wanted to measure four servers at once, and the iPoman device was handy for measuring the power consumption of several server nodes at once. We double-checked the power consumption readings with the Extech 38081.

Read more

Biggest Opportunity to Green the Data Center - Procurement

I just had a pleasant and thought provoking conversation with IBM's VP of Deep Computing, David Turek, regarding IBM's press release on energy-efficient supercomputers.

"Modern supercomputers can no longer focus only on raw performance," said David Turek, vice president, deep computing, IBM. "To be commercially viable these systems must also be energy efficient. IBM has a rich history of innovation that has significantly increased the energy efficiency of our systems at all levels of the system, designed to simultaneously reduce data center costs and energy use."

There are many things Dave and I discussed. One of the areas is the role of procurement in realizing the impacts and issues of energy costs.

Procurement is the acquisition of goods and/or services at the best possible total cost of ownership, in the right quality and quantity, at the right time, in the right place and from the right source for the direct benefit or use of corporations, individuals, or even governments, generally via a contract.

IBM's Blue Gene group has a paper on TCO to help procurement groups see how energy costs affect TCO.

Now ask yourself how many procurement people know how energy costs affect the TCO of data center services?

Unfortunately, training the whole procurement staff to learn the impact of energy is an impossible task. The answer is to add data center impact modeling software/tools to the procurement process, calculating a TCO that includes energy costs.
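
As a minimal sketch of what such a tool could calculate: energy cost = server power x PUE x hours x electricity rate, added to the purchase price. All the prices, power draws, the PUE, and the electricity rate below are illustrative assumptions, not figures from the IBM paper.

    # Hypothetical TCO comparison including energy; all numbers are made up.
    YEARS = 4
    HOURS_PER_YEAR = 24 * 365
    RATE_PER_KWH = 0.10  # assumed $/kWh
    PUE = 2.0            # assumed facility overhead multiplier

    def tco(purchase_price, avg_draw_watts):
        # Lifetime energy at the server, scaled by PUE for facility overhead.
        energy_kwh = (avg_draw_watts / 1000.0) * HOURS_PER_YEAR * YEARS
        energy_cost = energy_kwh * PUE * RATE_PER_KWH
        return purchase_price + energy_cost

    # A cheaper but power-hungry server vs. a pricier low-power one.
    print(f"standard server:  ${tco(4000, 400):,.0f}")   # ~$6,803
    print(f"low-power server: ${tco(4800, 250):,.0f}")   # ~$6,552

Even with these made-up numbers, the server that costs $800 more to buy comes out ahead over four years, which is exactly the kind of result procurement never sees when buying on purchase price alone.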

Think about the role of procurement as you green your data center. I bet none of you have, but hopefully now you will.

Read more

PUE that changes behavior – DC Cost Usage Effectiveness (CUE)

I didn't blog much last week as I was on vacation for 4 days, attended a Microsoft energy efficiency event, visited a client, made a trip to LBNL, and went to Data Center Dynamics SF. But all this time away from blogging has just built up a bunch more that I need to write. Mike Manos wrote a good post on the marketing of PUE, which drove a lot of conversation and his participation in a panel at DCD SF.

At the Intersection of Marketing and Metrics, the traffic lights don’t work.

July 4, 2009 by mmanos

To that end, there is no doubt that the PUE metric has been instrumental in driving awareness and visibility in the space. The Green Grid really did a great job in pulling this metric together and evangelizing it to the industry. Despite a host of other potential metrics out there, PUE has captured the industry given its relatively straightforward approach. But PUE is poised to be a victim of its own success, in my opinion, unless the industry takes steps to standardize its use in marketing material and how it is talked about.

I had a chance to catch up with Mike and many others at DCD SF, and shared my latest crazy idea to change data center behavior.

A week ago on vacation I wrote about charity: water's three techniques for transparent philanthropy.

So what’s his secret? Mr. Harrison’s success seems to depend on three precepts:

First, ensure that every penny from new donors will go to projects in the field. He accomplishes this by cajoling his 500 most committed donors to cover all administrative costs.

Second, show donors the specific impact of their contributions. Mr. Harrison grants naming rights to wells. He posts photos and G.P.S. coordinates so donors can look up their wells on Google Earth. And in September, Mr. Harrison is going to roll out a new Web site that will match even the smallest donation to a particular project that can be tracked online.

Third, leap into new media and social networks. This spring, charity: water raised $250,000 through a “Twestival” — a series of meetings among followers on Twitter. Last year, it raised $965,000 by asking people with September birthdays to forgo presents and instead solicit cash to build wells in Ethiopia. The campaign went viral on the Web, partly because Mr. Harrison invests in clever, often sassy videos.

The big eye opener was thinking about how to allocate data center costs to change behavior. I sketched the idea in the diagram below to organize some of the ideas.

(diagram: allocating data center costs to change behavior)

Typical data center costs go into an allocation model based on the business units' use of IT services: space, power, network access, and maybe a few other usage stats. While this method is accepted as standard practice, what incentive do the business units have to be more efficient? As we all know: not much.

Taking the idea of cost allocation and 100% effectiveness of funding from charity: water, separate data center costs into those that the business units actually use to run their business services, and all the other costs of over-provisioning, waste, excess design, etc. that some executive, architect, or paranoid individual said must be requirements of the data center. Then send those people the monthly costs of their decisions.

Imagine what the executives will do when they see they are charged for the data center decisions they made. I told this idea to one finance guy, and he admits at first he thought, why? Then he got it and his head started spinning: "Oh my god! You would shake things up so much. People would start to ask, how much am I going to get charged? I don't have the budget for that." Wait, you don't understand: we have the budget, as we're going to have to pay for it anyway; we are just going to move the cost reporting to your department instead of the business units.

When you take a million dollars of excess data center capacity and spread it amongst dozens of business units no one does anything.  Take that same million dollars and charge it to the executive who made the decision, then watch what happens in data center decisions.

PUE is great for the data center design, construction, and operations staff to measure the efficiencies of power and cooling systems, but PUE does not drive efficient, green behaviors in the data center by others in the organization.

What is needed is a Cost Usage Effectiveness (CUE) for data centers: take the total costs of the data center divided by the costs actually used by the business units to get the data center cost usage effectiveness.
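
A minimal sketch of that arithmetic, with invented cost figures (the business unit names and dollar amounts are purely illustrative):

    # Hypothetical CUE calculation; all cost figures are invented.
    total_costs = 10_000_000   # total annual data center costs
    business_unit_costs = {    # costs actually used to run business services
        "retail": 3_000_000,
        "finance": 2_500_000,
        "logistics": 1_500_000,
    }

    used = sum(business_unit_costs.values())
    cue = total_costs / used     # 1.0 would mean every dollar is used
    excess = total_costs - used  # over-provisioning, waste, excess design

    print(f"CUE: {cue:.2f}")
    print(f"Excess to charge back to the decision makers: ${excess:,}")

Here the data center costs 1.43x what the business units actually consume, and the $3,000,000 gap is what would land on the budgets of the executives who required it.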

 

Read more