Energy Logic WP0208
Emerson Network Power in Business-Critical Continuity
The model demonstrates that reductions in energy consumption at the IT equipment level
have the greatest impact on overall consumption because they cascade across all supporting
systems. This led to the development of Energy Logic, a vendor-neutral roadmap for optimizing
data center energy efficiency that starts with the IT equipment and progresses to the support
infrastructure. This paper shows how Energy Logic can deliver a 50 percent or greater reduction
in data center energy consumption without compromising performance or availability.
This approach has the added benefit of removing the three most critical constraints faced by
data center managers today: power, cooling and space. In the model, the 10 Energy Logic
strategies freed up two-thirds of floor space, one-third of UPS capacity and 40 percent of
precision cooling capacity.
All of the technologies used in the Energy Logic approach are available today and many can be
phased into the data center as part of regular technology upgrades/refreshes, minimizing
capital expenditures.
The model also identified some gaps in existing technologies that could enable greater energy
reductions and help organizations make better decisions regarding the most efficient
technologies for a particular data center.
Introduction

The double impact of rising data center energy consumption and rising energy costs has elevated the importance of data center efficiency as a strategy to reduce costs, manage capacity and promote environmental responsibility.

Data center energy consumption has been driven by the demand within almost every organization for greater computing capacity and increased IT centralization. While this was occurring, global electricity prices increased 56 percent between 2002 and 2006. The financial implications are significant; estimates of annual power costs for U.S. data centers now range as high as $3.3 billion.

This trend impacts data center capacity as well. According to the Fall 2007 Survey of the Data Center Users Group (DCUG®), an influential group of data center managers, power limitations were cited as the primary factor limiting growth by 46 percent of respondents, more than any other factor.

In addition to financial and capacity considerations, reducing data center energy use has become a priority for organizations seeking to reduce their environmental footprint.

There is general agreement that improvements in data center efficiency are possible. In a report to the U.S. Congress, the Environmental Protection Agency concluded that best practices can reduce data center energy consumption by 50 percent by 2011. The EPA report included a list of Top 10 Energy Saving Best Practices as identified by the Lawrence Berkeley National Lab. Other organizations, including Emerson Network Power, have distributed similar information, and there is evidence that some best practices are being adopted.

The Spring 2007 DCUG Survey found that 77 percent of respondents already had their data center arranged in a hot-aisle/cold-aisle configuration to increase cooling system efficiency, 65 percent use blanking panels to minimize recirculation of hot air, and 56 percent have sealed the floor to prevent cooling losses.

While progress has been made, an objective, vendor-neutral evaluation of efficiency opportunities across the spectrum of data center systems has been lacking. This has made it difficult for data center managers to prioritize efficiency efforts and tailor best practices to their data center equipment and operating practices.

This paper closes that gap by outlining a holistic approach to energy reduction, based on quantitative analysis, that enables a 50 percent or greater reduction in data center energy consumption.
Data Center Energy Consumption

The first step in prioritizing energy saving opportunities was to gain a solid understanding of data center energy consumption. Emerson Network Power modeled energy consumption for a typical 5,000-square-foot data center (Figure 1) and analyzed how energy is used within the facility. Energy use was categorized as either "demand side" or "supply side."

Demand-side systems are the servers, storage, communications and other IT systems that support the business. Supply-side systems exist to support the demand side. In this analysis, demand-side systems—which include processors, server power supplies, other server components, storage and communication equipment—account for 52 percent of total consumption. Supply-side systems include the UPS, power distribution, cooling, lighting and building switchgear, and account for 48 percent of consumption.

Information on the data center and infrastructure equipment and operating parameters on which the analysis was based is presented in Appendix A. Note that all data centers are different, and the savings potential will vary by facility. At minimum, however, this analysis provides an order-of-magnitude comparison of data center energy reduction strategies.

The distinction between demand and supply power consumption is valuable because reductions in demand-side energy use cascade through the supply side. For example, in the 5,000-square-foot data center used to

Figure 1. Analysis of a typical 5,000-square-foot data center shows that demand-side computing equipment accounts for 52 percent of energy usage and supply-side systems account for 48 percent.
* This represents the average power draw (kW). Daily energy consumption (kWh) can be calculated by multiplying the power draw by 24.
analyze energy consumption, a 1 Watt reduction at the server component level (processor, memory, hard disk, etc.) results in an additional 1.84 Watt savings in the power supply, power distribution system, UPS system, cooling system and building entrance switchgear and medium-voltage transformer (Figure 2).

The energy saving measures included in Energy Logic should be considered a guide. Many organizations will already have undertaken some measures at the end of the sequence, or will have to deploy some technologies out of sequence to remove existing constraints to growth. The first step in the Energy Logic approach is to establish an IT equipment procurement policy that exploits the energy efficiency benefits of low power processors.

Figure 2. With the Cascade Effect, a 1 Watt savings at the server component level creates a reduction in facility energy consumption of approximately 2.84 Watts.
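The Cascade Effect can be sketched numerically. The stage values below (79 percent power supply efficiency, 97.5 percent power distribution, 91.5 percent UPS, 97.5 percent switchgear, and an assumed 0.95 W of cooling power per watt of heat) are illustrative assumptions chosen to be consistent with the Appendix A model, not figures the paper quotes for this calculation:

```python
# Sketch of the Cascade Effect: how 1 W saved at the server component
# level grows as it propagates through the supply side.
PSU_EFF = 0.79          # average server power supply efficiency (model)
PDU_EFF = 0.975         # power distribution efficiency (model)
UPS_EFF = 0.915         # UPS efficiency at part load (model)
SWITCHGEAR_EFF = 0.975  # substation/switchgear composite efficiency (model)
COOLING_W_PER_W = 0.95  # cooling power per watt of heat removed (assumed)

def cascade_multiplier(component_saving_w=1.0):
    # Power no longer drawn upstream of the component, through PSU, PDU, UPS
    upstream = component_saving_w / (PSU_EFF * PDU_EFF * UPS_EFF)
    # Every watt of load avoided also avoids a cooling burden
    with_cooling = upstream * (1 + COOLING_W_PER_W)
    # Everything passes through the building entrance switchgear
    return with_cooling / SWITCHGEAR_EFF

print(round(cascade_multiplier(), 2))  # 2.84
```

With these assumptions, a 1 W component saving becomes roughly the 2.84 W facility saving the paper reports.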
Many facilities choose not to employ power management because of concerns about response times. A significant opportunity exists within the industry to enhance the sophistication of power management to make it an even more powerful tool in managing energy use.

The next step involves IT projects that may not be driven by efficiency considerations but have an impact on energy consumption. They include:

• Blade servers
• Server virtualization

These technologies have emerged as "best practice" approaches to data center management and play a role in optimizing a data center for efficiency, performance and manageability.

Once policies and plans have been put in place to optimize IT systems, the focus shifts to supply-side systems. The most effective approaches to infrastructure optimization include:

• Cooling best practices
• 415V AC power distribution
• Variable capacity cooling
• Supplemental cooling
• Monitoring and optimization

Emerson Network Power has quantified the savings that can be achieved through each of these actions individually and as part of the Energy Logic sequence (Figure 3). Note that savings for supply-side systems look smaller when taken as part of Energy Logic because those systems are then supporting a smaller load.

Reducing Energy Consumption and Eliminating Constraints to Growth

Employing the Energy Logic approach to the model data center reduced energy use by 52 percent without compromising performance or availability.

In its unoptimized state, the 5,000-square-foot data center model used to develop the Energy Logic approach supported a total compute load of 588 kW and a total facility load of 1127 kW. Through the optimization strategies presented here, this facility has been transformed to enable the same level of performance using significantly less power and space. Total compute load was reduced to 367 kW, while rack density was increased from 2.8 kW per rack to 6.1 kW per rack. This reduced the number of racks required to support the compute load from 210 to 60 and eliminated the power, cooling and space limitations constraining growth (Figure 4).

Total energy consumption was reduced to 542 kW, and the total floor space required for IT equipment was reduced by 65 percent (Figure 5).

Energy Logic is suitable for every type of data center; however, the sequence may be affected by facility type. Facilities operating at high utilization rates throughout a 24-hour day will want to focus initial efforts on sourcing IT equipment with low power processors and high-efficiency power supplies. Facilities that experience predictable peaks in activity may achieve the greatest benefit from power management technology. Figure 6 shows how compute load and type of operation influence priorities.
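The rack consolidation reported for the model follows from simple arithmetic on compute load and rack density:

```python
# Racks needed = total compute load / average rack density.
# Figures are taken from the Energy Logic model described in the text.
def racks_required(compute_load_kw, density_kw_per_rack):
    return round(compute_load_kw / density_kw_per_rack)

before = racks_required(588, 2.8)  # unoptimized: 588 kW at 2.8 kW/rack
after = racks_required(367, 6.1)   # optimized: 367 kW at 6.1 kW/rack
print(before, after)  # 210 60
```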
                                  Savings Independent       Energy Logic Savings
                                  of Other Actions          with the Cascade Effect
Energy Saving Action              Savings (kW)  Savings (%) Savings (kW)  Savings (%)  Cumulative Savings (kW)  ROI
High-efficiency power supplies    141           12%         124           11%          235                      5 to 7 mo.
Power management features         125           11%         86            8%           321                      Immediate
Blade servers                     8             1%          7             1%           328                      TCO reduced 38%*
Server virtualization             156           14%         86            8%           414                      TCO reduced 63%**

Building entrance switchgear and genset (kW): 1,169 unoptimized, 620 optimized, a reduction of 549 kW (47 percent).

Figure 4. Energy Logic removes constraints to growth in addition to reducing energy consumption.
Figure 5. The top diagram shows the unoptimized data center layout (rows of racks served by chilled water CRAC units, with UPS and battery rooms and a control center); the lower diagram shows the data center after the Energy Logic actions were applied, with far fewer racks and supplemental XDV cooling modules. Space required to support data center equipment was reduced from 5,000 square feet to 1,768 square feet (65 percent).
[Figure 6 pairs each operating profile with a prioritized action list, drawing on lowest power processors, high-efficiency power supplies, blade servers and virtualization, ordered differently depending on compute load and hours of operation.]

Figure 6. The Energy Logic approach can be tailored to the compute load and type of operation.
The Ten Energy Logic Actions

1. Processor Efficiency
In the absence of a true standard measure of processor efficiency comparable to the U.S. Department of Transportation fuel efficiency standard for automobiles, Thermal Design Power (TDP) serves as a proxy for server power consumption.

The typical TDP of processors in use today is between 80 and 103 Watts (91 W average). For a price premium, processor manufacturers provide lower voltage versions of their processors that consume, on average, 30 Watts less than standard processors (Figure 7). Independent research studies show these lower power processors deliver the same performance as higher power models (Figure 8).

In the 5,000-square-foot data center modeled for this paper, low power processors create a 10 percent reduction in overall data center power consumption.

2. Power Supplies
As with processors, many of the server power supplies in use today operate at efficiencies below what is currently available. The U.S. EPA estimated the average efficiency of installed server power supplies at 72 percent in 2005. In the model, we assume the unoptimized data center uses power supplies that average 79 percent efficiency across a mix of servers that range from four years old to new.

[Figure 7 tabulates standard and low power processor options; one Intel row shows 1.8-2.6 GHz parts at 80 W standard versus 50 W low power, a 30 W saving.]

Figure 7. Intel and AMD offer a variety of low power processors that deliver average savings between 27 W and 38 W.
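Combining the average 30 W processor saving from Figure 7 with the 2.84 Cascade Effect multiplier from Figure 2 gives a sense of the per-processor facility impact. This pairing is our illustration, not a calculation given in the paper:

```python
# Facility-level saving from one low power processor, using the paper's
# average 30 W component saving and the 2.84 Cascade Effect multiplier.
CASCADE_MULTIPLIER = 2.84
processor_saving_w = 30  # average saving per low power processor (Figure 7)

facility_saving_w = processor_saving_w * CASCADE_MULTIPLIER
print(round(facility_saving_w, 1))  # 85.2
```

Roughly 85 W of facility load is avoided for each processor swapped, before any other action is taken.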
[Figure 8: AS3AP system performance results, in transactions per second (higher is better), comparing server models.]

Best-in-class power supplies are available today that deliver efficiency of 90 percent. Use of these power supplies reduces power draw within the data center by 124 kW, or 11 percent of the 1127 kW total. As with other data center systems, server power supply efficiency varies depending on load.

[Figure 9 plots efficiency (70 to 90 percent) against load (0 to 100 percent) for two power supply models, Model A and Model B.]

Figure 9. Power supply efficiency can vary significantly depending on load, and power supplies are often sized for a load that exceeds the maximum server configuration.
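The effect of power supply efficiency on input power is straightforward to work out. The 400 W server load below is a hypothetical example, not a figure from the model:

```python
# Input power a server draws for a given DC load, comparing the model's
# assumed 79% average efficiency with a 90% best-in-class power supply.
def input_power_w(dc_load_w, efficiency):
    return dc_load_w / efficiency

load = 400  # hypothetical server DC load in watts
old = input_power_w(load, 0.79)  # input power at 79% efficiency
new = input_power_w(load, 0.90)  # input power at 90% efficiency
print(round(old - new))  # 62
```

About 62 W is saved per server before the Cascade Effect multiplies the benefit downstream.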
Power supplies are often sized for a load that exceeds the maximum server configuration, and the typical configuration draws only 67 percent of the nameplate rating. Server manufacturers should allow purchasers to choose power supplies sized for a typical or maximum configuration.

3. Power Management Software
Data centers are sized for peak conditions that may rarely exist. In a typical business data center, daily demand progressively increases from about 5 a.m. to 11 a.m. and then begins to drop again at 5 p.m. (Figure 10).

Server power consumption remains relatively high as server load decreases (Figure 11). In idle mode, most servers consume between 70 and 85 percent of full operational power. Consequently, a facility operating at just 20 percent capacity may use 80 percent of the energy of the same facility operating at 100 percent capacity.

Server processors have built-in power management features that can reduce power when the processor is idle. Too often these features are disabled because of concerns regarding response time; however, this decision may need to be reevaluated in light of the significant savings this technology can enable.

In the model, we assume that the idle power draw is 80 percent of the peak power draw without power management, and drops to 45 percent of peak power draw when power management is enabled. With this scenario, power management can save an additional 86 kW, or eight percent of the unoptimized data center load.

4. Blade Servers
Many organizations have implemented blade servers to meet processing requirements and improve server management. While the move to blade servers is typically not driven by energy considerations, blade servers can play a role in reducing energy consumption. Blade servers consume about 10 percent less power than equivalent rack mount servers because multiple servers share common power supplies, cooling fans and other components.
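The power management assumption described in step 3 above (idle draw falling from 80 to 45 percent of peak) works out as follows; the 500 W peak draw is a hypothetical example, not a model figure:

```python
# Idle power draw as a fraction of peak, per the model's assumptions:
# 80% of peak without power management, 45% with it enabled.
def idle_draw_w(peak_w, power_mgmt_enabled):
    return peak_w * (0.45 if power_mgmt_enabled else 0.80)

peak = 500  # hypothetical per-server peak draw in watts
saved = idle_draw_w(peak, False) - idle_draw_w(peak, True)
print(round(saved))  # 175
```

Each idle server in this example sheds roughly a third of its peak draw, and the Cascade Effect multiplies that saving through the supply side.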
[Figure 10 plots percent CPU time over a 24-hour day; Figure 11 plots power draw in watts against CPU utilization.]

Figure 10. Daily utilization for a typical business data center.

Figure 11. Low processor activity does not translate into low power consumption.
In the model, we see a one percent reduction in total energy consumption when 20 percent of rack-based servers are replaced with blade servers. More importantly, blades facilitate the move to a high-density data center architecture, which can significantly reduce energy consumption. Implementing virtualization provides an incremental eight percent reduction in total data center power draw for the 5,000-square-foot facility.

Computational fluid dynamics (CFD) imaging can be used to evaluate data center airflow (Figure 12). Many organizations, including Emerson Network Power, offer CFD imaging as part of data center assessment services focused on improving cooling efficiency.

Additionally, temperatures in the cold aisle may be able to be raised if current temperatures are below 68° F, and chilled water temperatures can often be raised from 45° F to 50° F.
Figure 12. CFD imaging can be used to evaluate cooling efficiency and optimize airflow. This image shows hot air being recirculated as it is pulled back toward the CRAC, which is poorly positioned.

8. Variable Capacity Cooling
Data center systems are sized to handle peak loads, which rarely exist. Consequently, operating efficiency at full load is often not a good indication of actual operating efficiency. Newer technologies, such as Digital Scroll compressors and variable frequency drives in computer room air conditioners (CRACs), allow high efficiencies to be maintained at partial loads.

Figure 13. 415V power distribution provides a more efficient alternative to using 208V power.
In the chilled water-based air conditioning system used in this analysis, the use of variable frequency drives provides an incremental saving of four percent in data center power consumption.

9. High Density Supplemental Cooling
Traditional room-cooling systems have proven very effective at maintaining a safe, controlled environment for IT equipment. However, optimizing data center energy efficiency requires moving from traditional data center densities (2 to 3 kW per rack) to an environment that can support much higher densities (in excess of 30 kW per rack). Supplemental cooling units are mounted above or alongside equipment racks; they pull hot air directly from the hot aisle and deliver cold air to the cold aisle.

Figure 14. Supplemental cooling enables higher rack densities and improved cooling efficiency.
[Figure 15 depicts units operating in teamwork mode, with the control disabling heaters in the "cold zone" while other units humidify, dehumidify or run compressors.]

Figure 15. System-level control reduces conflict between room air conditioning units operating in different zones in the data center.
In the model, an incremental saving of one percent is achieved as a result of system-level monitoring and control.

Other Opportunities for Savings

Energy Logic prioritizes the most effective energy reduction strategies, but it is not intended to be a comprehensive list of energy reducing measures. In addition to the actions in the Energy Logic strategy, data center managers should consider the feasibility of the following:

• Consolidate data storage from direct attached storage to network attached storage. Also, faster disks consume more power, so consider reorganizing data so that less frequently used data resides on slower archival drives.

• Use economizers where appropriate to allow outside air to support data center cooling during colder months, creating opportunities for energy-free cooling. With today's high-density computing environments, economizers can be cost effective in many more locations than might be expected.

• Monitor and reduce parasitic losses from generators, exterior lighting and perimeter access control. For a 1 MW load, generator losses of 20 kW to 50 kW have been measured.

What the Industry Can Learn from the Energy Logic Model

The Energy Logic model not only prioritizes actions for data center managers; it also provides a roadmap for the industry to deliver the information and technologies that data center managers can use to optimize their facilities. Here are specific actions the industry can take to support increased data center efficiency:

1. Define universally accepted metrics for processor, server and data center efficiency
There have been tremendous technology advances in server processors in the last decade. Until 2005, higher processor performance was linked with higher clock speeds and hotter chips consuming more power. Recent advances in multi-core technology have driven performance increases by using more computing cores operating at relatively lower clock speeds, which reduces power consumption.
Today, processor manufacturers offer a range of server processors from which a customer needs to select the right processor for a given application. What is lacking is an easy-to-understand, easy-to-use measure, such as the miles-per-gallon automotive fuel efficiency ratings developed by the U.S. Department of Transportation, that can help buyers select the ideal processor for a given load. The performance per Watt metric is evolving gradually, with the SPEC score being used as the server performance measure, but more work is needed.

This same philosophy could be applied at the facility level. An industry standard of data center efficiency that measures performance per Watt of energy used would be extremely beneficial in measuring the progress of data center optimization efforts. The PUE ratio developed by The Green Grid provides a measure of infrastructure efficiency, but not total facility efficiency. IT management needs to work with IT equipment and infrastructure manufacturers to develop the miles-per-gallon equivalent for both systems and facilities.

2. More sophisticated power management
While enabling power management features provides tremendous savings, IT management often prefers to stay away from this technology because its impact on availability is not clearly established. As more tools become available to manage power management features, and data is available to ensure that availability is not impacted, we should see this technology gain market acceptance. More sophisticated controls that would allow these features to be enabled only during periods of low utilization, or turned off when critical applications are being processed, would eliminate much of the resistance to using power management.

3. Matching power supply capacity to server configuration
Server manufacturers tend to oversize power supplies to accommodate the maximum configuration of a particular server. Some users may be willing to pay an efficiency penalty for the flexibility to more easily upgrade, but many would prefer a choice between a power supply sized for a standard configuration and one sized for the maximum configuration. Server manufacturers should consider making these options available, and users need to be educated about the impact power supply size has on energy consumption.

4. Designing for high density
A perception persists that high-density environments are more expensive than simply spreading the load over a larger space. High-density environments employing blade and virtualized servers are actually economical because they drive down energy costs and remove constraints to growth, often delaying or eliminating the need to build new facilities.

5. High-voltage distribution
415V power distribution is commonly used in Europe, but UPS systems that easily support this architecture are not readily available in the United States. Manufacturers of critical power equipment should provide 415V output as an option on UPS systems and can do more to educate their customers regarding high-voltage power distribution.

6. Integrated measurement and control
Data that can be easily collected from IT systems and the racks that support them has
yet to be effectively integrated with support system controls. This level of integration would allow IT systems, applications and support systems to be more effectively managed based on actual conditions at the IT equipment level.

Conclusion

Data center managers and designers, IT equipment manufacturers and infrastructure providers must all collaborate to truly optimize data center efficiency.

For data center managers, there are a number of actions that can be taken today to significantly drive down energy consumption while freeing physical space and power and cooling capacity to support growth.

Energy reduction initiatives should begin with policies that encourage the use of efficient IT technologies, specifically low power processors and high-efficiency power supplies. This will allow more efficient technologies to be introduced into the data center as part of the normal equipment replacement cycle.

IT consolidation projects also play an important role in data center optimization. Both blade servers and virtualization contribute to energy savings and support a high-density environment that facilitates true optimization.

The final step in the Energy Logic optimization strategy is to focus on infrastructure systems, employing a combination of best practices and efficient technologies to increase the efficiency of power and cooling systems. Together, these strategies created a 52 percent reduction in energy use in the 5,000-square-foot data center model developed by Emerson Network Power while removing constraints to growth.

Appendix B shows how these savings are achieved over time as legacy technologies are phased out and savings cascade across systems.
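The before-and-after loads cited for the model can also be expressed as PUE (total facility power divided by IT power), the infrastructure metric discussed earlier. This calculation is our illustration, not a figure reported in the paper:

```python
# PUE = total facility power / IT equipment power.
# Applied to the model's unoptimized and optimized loads (kW).
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

print(round(pue(1127, 588), 2))  # 1.92 (unoptimized: 1127 kW total, 588 kW IT)
print(round(pue(542, 367), 2))   # 1.48 (optimized: 542 kW total, 367 kW IT)
```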
Appendix A: Data Center Assumptions Used to Model Energy Use
Figure 16. Server operating parameters used in the Energy Logic model.
Storage
• Storage type: network attached storage.
• Capacity: 120 terabytes.
• Average power draw: 49 kW.

Communication Equipment
• Routers, switches and hubs required to interconnect the servers, storage and access points through the local area network and provide secure access to public networks.
• Average power draw: 49 kW.

Power Distribution Units (PDUs)
• Provide 208V, 3-phase output through whips and rack power strips to power servers, storage, communication equipment and lighting (average load is 539 kW).
• Input from the UPS is 480V, 3-phase.
• Efficiency of power distribution is 97.5 percent.

UPS System
• Two double-conversion 750 kVA UPS modules arranged in a dual redundant (1+1) configuration with input filters for power factor correction (power factor = 91 percent).
• The UPS receives 480V input power from the distribution board and provides 480V, 3-phase power to the power distribution units on the data center floor.
• UPS efficiency at part load: 91.5 percent.

Cooling System
• The cooling system is chilled water based.
• Total sensible heat load on the precision cooling system includes heat generated by the IT equipment, UPS and PDUs, building egress and human load.
• Cooling system components:
  - Eight 128.5 kW chilled water-based precision cooling units placed at the end of each hot aisle, including one redundant unit.
  - The chilled water source is a chiller plant consisting of three 200-ton chillers (n+1) with matching condensers for heat rejection and four chilled water pumps (n+2).
  - The chillers, pumps and air conditioners are powered from the building distribution board (480V, 3-phase).
  - Total cooling system power draw is 429 kW.

Building Substation
• The building substation provides 480V, 3-phase power to the UPS modules and cooling system.
• Average load on the building substation is 1,099 kW.
• Utility input is a 13.5 kV, 3-phase connection.
• The system consists of a transformer with isolation switchgear on the incoming line, and switchgear, circuit breakers and a distribution panel on the low voltage line.
• Substation, transformer and building entrance switchgear composite efficiency is 97.5 percent.
Appendix B: Timing Benefits from Efficiency Improvement Actions

Energy Saving Action                     Total Savings (kW)   Savings as technologies are phased in (kW)
IT projects
  High-efficiency power supplies         -                    -     -     -     -     -
  Server power management                86                   9     26    43    65    86
  Blade servers                          7                    1     7     7     7     7
  Virtualization                         86                   9     65    69    86    86
Best practices
  415V AC power distribution             20                   0     0     20    20    20
Infrastructure projects
  Variable capacity cooling              49                   49    49    49    49    49
  High density supplemental cooling      72                   0     57    72    72    72
Note: Energy Logic is an approach developed by Emerson Network Power to provide organizations with the information they need to more effectively reduce data center energy consumption. It is not directly tied to a particular Emerson Network Power product or service. We encourage use of the Energy Logic approach in industry discussions on energy efficiency and will permit use of the Energy Logic graphics with the following attribution: The Energy Logic approach developed by Emerson Network Power. To request a graphic, send an email to [email protected].

Emerson Network Power
1050 Dearborn Drive
P.O. Box 29186
Columbus, Ohio 43229
800.877.9222 (U.S. & Canada Only)
614.888.0246 (Outside U.S.)
Fax: 614.841.6022
EmersonNetworkPower.com
Liebert.com

While every precaution has been taken to ensure accuracy and completeness in this literature, Liebert Corporation assumes no responsibility, and disclaims all liability for damages resulting from use of this information or for any errors or omissions. Specifications subject to change without notice.

© 2007 Liebert Corporation. All rights reserved throughout the world. Trademarks or registered trademarks are property of their respective owners. ® Liebert and the Liebert logo are registered trademarks of Liebert Corporation. Business-Critical Continuity, Emerson Network Power and the Emerson Network Power logo are trademarks and service marks of Emerson Electric Co.

WP154-158-117
SL-24621 (0208)