I just posted about HP's Butterfly Flexible Data Center. Now that HP has officially announced the solution, there is more on the HP Press Room.
Economical, Efficient, Environmental is the theme of HP's video presentation.
Here are the numbers HP uses to demonstrate 50% lower CAPEX and lower OPEX.
And HP discusses yearly water usage. Yippee!!!
Typically, data centers use 1.90 liters of water per kWh of total electricity. A one-megawatt data center with a PUE of 1.50 running at full load for one year is expected to consume 13 million kWh and will consume 6.5 million U.S. gallons of water annually. FlexDC uses no water in some climates and dramatically reduces the consumption of water in others. Actual amounts can vary depending on system selection and local weather patterns.
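The arithmetic is easy to check. A minimal sketch (the 8,760 hours per year and the 3.785 liters per U.S. gallon conversion are my figures, not HP's):

```python
# Annual water use for a 1 MW IT load at PUE 1.5, per HP's stated
# 1.90 liters of water per kWh of total electricity.
IT_LOAD_KW = 1_000            # 1 MW of critical IT load
PUE = 1.50                    # power usage effectiveness
HOURS_PER_YEAR = 8_760
LITERS_PER_KWH = 1.90         # HP's typical water-use figure
LITERS_PER_US_GALLON = 3.785

total_kwh = IT_LOAD_KW * PUE * HOURS_PER_YEAR   # ~13.1 million kWh
gallons = total_kwh * LITERS_PER_KWH / LITERS_PER_US_GALLON

print(f"{total_kwh/1e6:.1f} million kWh, {gallons/1e6:.1f} million gallons")
# -> 13.1 million kWh, 6.6 million gallons (HP rounds to 13 and 6.5)
```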
HP has a white paper that is a must-read for anyone designing a data center.
Introduction
Looking back to the late 1960s and the advent of the transistor, efficiencies and cycles of innovation in the world of electronics have increased according to Moore’s law. However, data center facilities, which are an offshoot of this explosion in technology, have not kept pace with this legacy. With the magnitude of capital required and the costs involved in the physical day-to-day operation of data centers, this existing paradigm could impede the growth of data center expansions unless a new group of innovative solutions is introduced in the marketplace.
With capital becoming more difficult to secure, coupled with potential reductions in revenue streams, an environment focused on cost reduction has emerged. The pressure to reduce capital expenditure (CAPEX) is one of the most critical issues faced by data center developers today. This is finally helping to drive innovation for data centers.
The key contributors that can reduce CAPEX and operational expenditure (OPEX) are typically modularity, scalability, flexibility, industrialization, cloud computing, containerization of mechanical and electrical solutions, climate control, expanded criteria for IT space, and supply chain management. All these factors come into play when planning a cost-effective approach to data center deployment. Every company that develops and operates data centers is attempting to embrace these features. However, businesses requiring new facilities usually do not explore all the strategies available, generally due either to a lack of exposure to their availability or to a perceived risk associated with changes to their existing paradigm. Emerging trends such as fabric computing further exacerbate the silo approach to strategy and design, where “what we know” is the best direction.
The four cooling system alternatives are:
This adaptation of an industrial cooling approach includes the following cooling technologies: air-to-air heat exchangers with direct expansion (Dx) refrigeration systems; indirect evaporation air-to-air heat exchangers with Dx assist; direct evaporation; and heat transfer wheel with Dx assist.
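In other words, the outside environment does the cooling whenever it can, and the Dx stage only makes up the difference. A toy mode-selection sketch of that idea (the thresholds and mode names are mine, purely illustrative, not HP's control logic):

```python
def cooling_mode(outdoor_c: float, supply_setpoint_c: float) -> str:
    """Pick an economizer mode; thresholds are illustrative, not HP's."""
    if outdoor_c <= supply_setpoint_c - 2:
        return "air-to-air heat exchange only"   # free cooling
    if outdoor_c <= supply_setpoint_c + 8:
        return "evaporative assist"              # adiabatic boost
    return "Dx assist"                           # mechanical top-up

for t in (5, 26, 35):
    print(t, "->", cooling_mode(t, supply_setpoint_c=24))
```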
Reducing fan power. Fan power is a hidden inefficiency in the data center, whether in the mechanical systems or the IT equipment. HP discusses how it reduces fan power.
To obtain the maximum use of the environment, supply air temperature set points need to be set at the highest temperature possible while still remaining within the warranty requirement range of the IT equipment. The next critical component is to control the temperature difference between the supply and return air streams to a minimum of 25°F. This reduces the amount of air needed to cool the data center, thus reducing fan energy. To realize the full benefit of this concept, the configuration of the data center must in general follow certain criteria, as follows:
• Server racks are configured in a hot aisle containment (HAC) configuration.
• There is no raised floor air distribution.
• The air handlers are distributed across a common header on the exterior of the building for even air distribution.
• Supply air diffusers are located in the exterior wall, connected to the distribution duct. These diffusers line up with the cold aisle rows.
• The room becomes a flooded cold aisle.
• The hot aisle is ducted to a plenum, normally created through the use of a drop ceiling. The hot air shall be returned via the drop ceiling plenum back to the air handlers.
• Server racks are thoroughly sealed to reduce the recirculation of waste heat back into the inlets of nearby servers.
• Server layout is such that rows of racks do not exceed 18 feet in length.
The air handler controls shall maintain the maximum temperature difference between the supply and return air distribution streams. The supply air temperature is controlled to a determined set point, while the air volume is adjusted to maintain the desired temperature difference, controlling the rate of recirculation at the servers.
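The leverage here is worth spelling out: for a fixed heat load, the required airflow is inversely proportional to the supply/return temperature difference, and fan power scales roughly with the cube of airflow per the fan affinity laws. A rough sketch with illustrative numbers of my own, not HP's:

```python
# Airflow needed to remove a heat load at a given delta-T:
#   mass flow = Q / (cp * dT), volumetric flow = mass flow / rho.
# Fan affinity laws: fan power scales ~ (flow ratio)^3 for the same fan.
RHO_AIR = 1.2     # kg/m^3, approximate air density
CP_AIR = 1006.0   # J/(kg*K), specific heat of air

def airflow_m3s(heat_load_kw: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) needed to absorb heat_load_kw at delta_t_k."""
    return heat_load_kw * 1000.0 / (RHO_AIR * CP_AIR * delta_t_k)

# Compare an 18 F (~10 K) delta-T against HP's 25 F (~13.9 K) target.
low_dt = airflow_m3s(1000, 10.0)    # narrower delta-T: more air needed
high_dt = airflow_m3s(1000, 13.9)   # wider delta-T: less air needed
ratio = high_dt / low_dt
print(f"flow drops {1 - ratio:.0%}; fan power drops ~{1 - ratio**3:.0%}")
# -> flow drops 28%; fan power drops ~63%
```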
Electrical distribution techniques are listed as well.
Traditional data centers have electrical distribution systems based on double conversion UPS with battery systems and standby generators. There are several UPS technologies offered within FlexDC, which expands the traditional options:
• Rotary UPS—94% to 95% energy efficient
• Flywheel UPS—95% energy efficient
• Delta Conversion UPS—97% energy efficient
• Double Conversion UPS—94.5% to 97% energy efficient
• Offline UPS—Low-voltage version for the 800 kW blocks, about 98% energy efficient
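A few points of UPS efficiency add up at this scale. A back-of-the-envelope comparison, assuming the 3.2 MW critical load cited later runs continuously (a simplification; real efficiency varies with part-load, which is exactly the legacy design's weakness the paper calls out):

```python
# Annual UPS losses at a constant 3.2 MW critical load.
# Loss = input - output, where input = load / efficiency.
LOAD_KW = 3_200
HOURS = 8_760

for name, eff in [("Offline", 0.98), ("Delta conversion", 0.97),
                  ("Flywheel", 0.95), ("Rotary (midpoint)", 0.945),
                  ("Double conversion (low end)", 0.945)]:
    loss_kwh = LOAD_KW * HOURS * (1 / eff - 1)
    print(f"{name:28s} ~{loss_kwh/1e6:.2f} million kWh/yr lost")
```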
FlexDC not only specifies more efficient transformers as mandated by energy standards, it also uses best practices for energy efficiency. FlexDC receives power at medium voltage and transforms it directly to a server voltage of 415 V/240 V. This reduces losses through the power distribution unit (PDU) transformer and requires less electrical distribution equipment, thus saving energy as well as construction costs. An additional benefit is a higher degree of availability because of fewer components between the utility and the server.
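The benefit of dropping the PDU transformer stage can be approximated by chaining stage efficiencies. A hedged sketch; the individual stage efficiencies below are my assumptions for illustration, not HP's numbers:

```python
# Chained efficiency: each transformation stage multiplies in its losses.
# Traditional: MV -> 480 V substation xfmr -> UPS -> PDU xfmr (208/120 V) -> server.
# Direct 415 V: MV -> 415/240 V xfmr -> UPS -> server (no PDU stage).
def chain(*stage_effs: float) -> float:
    eff = 1.0
    for e in stage_effs:
        eff *= e
    return eff

traditional = chain(0.99, 0.955, 0.985)   # xfmr, UPS, PDU xfmr (assumed)
direct_415v = chain(0.99, 0.955)          # xfmr, UPS only (assumed)
print(f"traditional {traditional:.1%} vs direct 415 V {direct_415v:.1%}")
```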
And HP takes a modeling approach.
HP has developed a state-of-the-art energy evaluation program, which includes certified software programs and is staffed with trained engineers to perform a comprehensive review of the preliminary system selections made by the customer. This program provides valuable insight into the potential performance of the systems and is a valuable tool in the final system selection process. The following illustrations are typical outputs for the example site located in Charlotte, North Carolina. This location was chosen due to its very reliable utility infrastructure and its ability to attract mission-critical businesses. The illustrations compare a state-of-the-art data center designed using current approaches and HP FlexDC for the given location.
Figure 4: State-of-the-art data center annual electricity consumption
Comparing two different scenarios.
Scenario A: Base case state-of-the-art brick-and-mortar data center
A state-of-the-art legacy data center’s shell is typically built with reinforced concrete walls. All of the cooling and electrical systems are located in the same shell. Traditional legacy data center cooling systems entail the use of large central chiller plants and vast piping networks and pumps to deliver cooling to air handlers located in the IT spaces. Electrical distribution systems are typically dual-ended static UPS systems with good reliability but low efficiency due to part-loading conditions. PUE for traditional data centers with Tier ratings of III and above is between 1.5 and 1.9.
Scenario B: HP FlexDC
The reliability of the system configuration is equivalent to an Uptime Institute Tier III, distributed redundant. The total critical power available to the facility is 3.2 MW. The building is metal, using materials standard within the metal buildings industry. The electrical distribution system is a distributed redundant scheme based on a flywheel UPS system located in prefabricated self-contained housings. The standby generators are located on the exterior of the facility in prefabricated self-contained housing with belly tank fuel storage.
The mechanical cooling systems are prefabricated self-contained air handlers with air-to-air heat exchangers, using Dx refrigerant cooling to assist during periods of the year when the local environment is not capable of providing the total cooling for the data center IT space.
The IT white space is a non-raised floor environment. The IT equipment racks are arranged in a hot aisle containment configuration. The hot return air is directed into the drop ceiling above and returned to the air handlers.
The following life-cycle cost analysis matrix quantifies the CAPEX and OPEX costs and the resultant present-value (PV) dollars for the base case and the alternative scenario.
This feeds the summary below.
And an NPV cost savings of 37%.
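The mechanics behind a number like that are straightforward to sketch: discount each year's OPEX back to present value and add the upfront CAPEX. The discount rate, term, and dollar figures below are illustrative placeholders, not values from HP's analysis:

```python
# Present value of CAPEX plus recurring OPEX over a study period.
def npv_cost(capex: float, annual_opex: float, years: int, rate: float) -> float:
    """Total cost in today's dollars: upfront CAPEX plus discounted OPEX."""
    return capex + sum(annual_opex / (1 + rate) ** y for y in range(1, years + 1))

# Placeholder figures: base case vs. a design with 50% lower CAPEX
# and lower OPEX (e.g., from a lower PUE), per the claims above.
base = npv_cost(capex=40e6, annual_opex=4.00e6, years=10, rate=0.08)
flex = npv_cost(capex=20e6, annual_opex=3.25e6, years=10, rate=0.08)
print(f"NPV savings: {1 - flex/base:.0%}")   # -> ~37% with these inputs
```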
Besides sharing its Flexible Data Center design approach, HP has published a set of documents that anyone building their own data center can use.
Kfir, thanks for taking a step to share more information with the industry and show it a better path to greening a data center: economically, efficiently, and environmentally.