Developing a Low-Cost, Systematic Approach to Increase an Existing Data Center's Energy Efficiency

By

Jeremy M. Stewart

B.S. Materials Engineering, University of Cincinnati, 2003
M.S. Manufacturing Engineering, University of Washington, 2007
Submitted to the MIT Sloan School of Management and the Engineering Systems Division in
Partial Fulfillment of the Requirements for the Degrees of
Master of Business Administration
AND
Master of Science in Engineering Systems
In conjunction with the Leaders for Global Operations Program at the
Massachusetts Institute of Technology
June 2010
© 2010 Massachusetts Institute of Technology. All rights reserved.

Signature of Author

May 7, 2010
Department of Engineering Systems, MIT Sloan School of Management

Certified by
Leon Glicksman, Thesis Supervisor
Professor of Building Technology & Mechanical Engineering

Certified by

Sarah Slaughter, Thesis Supervisor


Senior Lecturer, MIT Sloan School of Management

Accepted by _________

Debbie Berechman
Executive Director of MBA Program, MIT Sloan School of Management

Accepted by
Nancy Leveson
Chair, Engineering Systems Division Education Committee
Professor of Aeronautics and Astronautics, Engineering Systems at MIT
Developing a Low-Cost, Systematic Approach to Increase an
Existing Data Center's Energy Efficiency
By

Jeremy M. Stewart

Submitted to the MIT Sloan School of Management and the Engineering Systems Division on
May 7, 2010 in partial fulfillment of the requirements for the degrees of

Master of Business Administration and

Master of Science in Engineering Systems

Abstract
Data centers consume approximately 1.5% of total US electricity and 0.8% of the total world
electricity, and this percentage will increase with the integration of technology into daily lives. In
typical data centers, value-added IT equipment such as memory, servers, and networking account
for less than one half of the electricity consumed, while support equipment consumes the remaining
electricity.

The purpose of this thesis is to present the results of developing, testing, and implementing a low-cost, systematic approach for increasing the energy efficiency of data centers. The improvement process was developed using industry best practices and piloted at a Raytheon site in Garland, TX. Because
the approach is low-cost, there is an emphasis on increasing the energy efficiency of data centers'
heat removal and lighting equipment.

The result of implementing the low-cost systematic approach, consisting of both technical and
behavior modifications, was a 23% reduction in electricity consumption, leading to annual savings of
over $53,000. The improvement to the heat removal equipment's energy efficiency was 54%. In
addition to presenting the results of the pilot, recommendations for replicating the pilot's success are
provided. Two major managerial techniques are described - creating an aligned incentive structure in
both Facilities and IT departments during the data center design phase, and empowering employees
to make improvements during the use phase. Finally, a recommended rollout plan, which includes a
structure for Data Center Energy Efficiency Rapid Results Teams, is provided.

Thesis Advisors:
Leon Glicksman, Professor of Building Technology & Mechanical Engineering
Sarah Slaughter, Senior Lecturer, MIT Sloan School of Management
Acknowledgements
First, I would like to acknowledge the Leaders for Global Operations (LGO) program at MIT, my
peers in the LGO class of 2010, and my advisors Sarah Slaughter and Leon Glicksman for their
support of this work.

Second, I would like to thank the amazing team I worked with at Raytheon Intelligence &
Information Systems - Roy Villanueva and Joe Hall - for their support during and after the
internship that produced this thesis.

Third, I would like to thank my friends, colleagues and mentors at Boeing. I appreciate your advice,
faith and sponsorship during the two year LGO program. I look forward to the making an impact
and addressing the challenges that lie ahead.

Finally, and most importantly, I would like to thank my wife, my parents, and my in-laws for their
love and support. I realize that the LGO program is one step in a lifelong journey, and I am glad
that I have y'all by my side.
TABLE OF CONTENTS

ABSTRACT .......... III
ACKNOWLEDGEMENTS .......... IV
1 INTRODUCTION .......... 9
1.1 PROJECT MOTIVATION .......... 9
1.1.1 INCREASE ENERGY EFFICIENCY OF DATA CENTERS WITH BEHAVIOR CHANGE .......... 9
1.1.2 USE ONLY LOW-COST TECHNIQUES TO CHANGE BEHAVIORS .......... 9
1.1.3 REDUCE EMISSIONS DUE TO ENERGY CONSUMPTION .......... 9
1.2 PROBLEM STATEMENT .......... 10
1.3 THESIS OVERVIEW .......... 11
2 BACKGROUND .......... 12
2.1 COMPANY BACKGROUND - RAYTHEON .......... 12
2.1.1 RAYTHEON INTELLIGENCE AND INFORMATION SYSTEMS .......... 13
2.1.2 BACKGROUND OF GARLAND IIS SITE .......... 15
2.1.2.1 History of the Garland IIS Site .......... 15
2.1.2.2 Analysis of Garland's Electricity Consumption .......... 16
2.1.2.3 Analysis of Garland's Natural Gas Consumption .......... 18
2.1.2.4 Summary of Garland's Energy Consumption .......... 19
2.2 DATA CENTER OVERVIEW .......... 21
2.2.1 INTRODUCTION .......... 21
2.2.2 TYPICAL ELECTRICITY USAGE WITHIN A DATA CENTER .......... 22
2.2.2.1 Electricity Required to Deliver Power .......... 22
2.2.2.2 Electricity Required for IT Equipment .......... 23
2.2.2.3 Electricity Required to Remove Heat .......... 24
2.2.3 IMPACT OF DATA CENTERS .......... 27
3 HYPOTHESIS .......... 29
4 METHODOLOGY .......... 30
4.1 DATA COLLECTION AND ANALYSIS .......... 30
4.2 REVIEW OF INDUSTRY BEST PRACTICES .......... 30
4.2.1 POWER DELIVERY EQUIPMENT ENERGY EFFICIENCY .......... 31
4.2.1.1 Rightsizing the Data Center .......... 31
4.2.1.2 On-site Power Generation .......... 33
4.2.1.3 Improve the Operating Efficiency of the UPS .......... 34
4.2.2 IT EQUIPMENT ENERGY EFFICIENCY .......... 35
4.2.2.1 Replace and/or Remove Obsolete Equipment .......... 35
4.2.2.2 Proactively Manage the Power Settings of the Equipment .......... 36
4.2.2.3 Server Consolidation & Virtualization .......... 37
4.2.3 HEAT REMOVAL ENERGY EFFICIENCY .......... 39
4.2.3.1 Use In-rack Water Cooling .......... 39
4.2.3.2 Use Cooling Economizer .......... 40
4.2.3.3 Improve Airflow Management .......... 42
4.2.4 MISCELLANEOUS EQUIPMENT ENERGY EFFICIENCY .......... 50
4.3 CONCEPTUAL FRAMEWORK .......... 51
5 RESULTS AND DISCUSSION OF PILOTED IMPROVEMENT PROCESS .......... 54
5.1 OVERVIEW OF PILOT DATA CENTER .......... 54
5.2 BASELINE CONDITION OF PILOT DATA CENTER .......... 55
5.2.1 POWER DELIVERY AND IT EQUIPMENT ELECTRICAL LOAD .......... 55
5.2.2 HEAT REMOVAL EQUIPMENT ELECTRICAL LOAD .......... 56
5.2.3 MISCELLANEOUS EQUIPMENT ELECTRICAL LOAD .......... 58
5.2.4 SUMMARY OF BASELINE ELECTRICAL LOAD FOR DATA CENTER .......... 58
5.2.5 BASELINE AIRFLOW MEASUREMENTS .......... 59
5.2.6 BASELINE AIR TEMPERATURE MEASUREMENTS .......... 60
5.2.7 ADDITIONAL VISUAL OBSERVATIONS INSIDE PILOT DATA CENTER .......... 63
5.2.8 SUMMARY OF BASELINE CONDITION .......... 65
5.3 IMPLEMENTED IMPROVEMENTS AND THEIR EFFECTS ON THE PILOT DATA CENTER .......... 66
5.3.1 AIRFLOW MEASUREMENTS .......... 67
5.3.2 AIR TEMPERATURE MEASUREMENTS .......... 68
5.3.3 HEAT REMOVAL EQUIPMENT ELECTRICAL LOAD .......... 72
5.3.4 MISCELLANEOUS EQUIPMENT ELECTRICAL LOAD .......... 74
5.3.5 SUMMARY OF ELECTRICAL LOAD FOR DATA CENTER .......... 74
6 RECOMMENDATIONS TO MAXIMIZE RETURN ON IMPROVEMENT PROCESS .......... 77
6.1.1 CORRECTING DESIGN PHASE MISTAKES THROUGH MANAGEMENT TECHNIQUES .......... 77
6.1.2 CORRECTING USE PHASE MISTAKES THROUGH EMPLOYEE ENGAGEMENT .......... 80
6.2 SUGGESTED IMPLEMENTATION PLAN FOR DATA CENTERS .......... 82
6.2.1 GAIN BUY-IN AND COMMITMENT FROM LEADERSHIP FOR DATA CENTER IMPROVEMENTS .......... 83
6.2.2 CREATE RAPID RESULTS TEAMS TO IMPLEMENT THE IMPROVEMENT PROCESS .......... 84
6.2.3 ALLOW FEEDBACK AND INCREASED KNOWLEDGE TO BE SHARED AMONGST THE RRTS .......... 84
7 CONCLUSION .......... 86
BIBLIOGRAPHY
APPENDIX I - PROCESS FOR ANALYZING ENERGY USAGE .......... 92
APPENDIX II - SUMMARY OF ALTERNATIVE EFFICIENCY SCENARIO ASSUMPTIONS .......... 93
APPENDIX III - LIST OF ELECTRICAL POWER CALCULATION TOOLS .......... 94
APPENDIX IV - CHANGES MADE TO THE PILOT DATA CENTER .......... 95
APPENDIX V - SENSITIVITY OF CHILLER'S ASSUMED ELECTRICITY CONSUMPTION .......... 96
APPENDIX VI - BREAKDOWN OF ELECTRICITY CONSUMPTION FOR BUILDING CONTAINING THE PILOT DATA CENTER .......... 98
APPENDIX VII - SPECIFIC LESSONS LEARNED AND IMPLEMENTATION PLAN FOR RAYTHEON .......... 99
LIST OF FIGURES

Figure 1: Overview of Raytheon's Intelligence and Information Systems .......... 14
Figure 2: Overview of Major IIS Site Locations .......... 15
Figure 3: 2008 IIS Energy Consumption by Major Site .......... 15
Figure 4: Photo of the Garland site in the 1950s .......... 16
Figure 5: Timeline of the Garland Site .......... 16
Figure 6: Garland IIS Campus, circa 2009 .......... 17
Figure 7: Breakdown of Garland electricity consumption (instantaneous reading) .......... 17
Figure 8: Illustration of Electricity Analysis performed for the Garland Campus .......... 18
Figure 9: US Climate Zones for 2003 .......... 18
Figure 10: Garland Natural Gas Consumption, 2005-2008 .......... 19
Figure 11: Garland Natural Gas and Electricity Consumption, 2005-2008 .......... 19
Figure 12: Example of energy consumption over a one year period at the Garland IIS site .......... 20
Figure 13: Breakdown of Garland energy consumption, 2005-2009 .......... 20
Figure 14: Example of a data center .......... 21
Figure 15: Example of racks contained in a data center .......... 21
Figure 16: Diagram of the Electricity Flow into a Data Center .......... 22
Figure 17: Typical Electricity Consumption of a Data Center .......... 22
Figure 18: Diagram of Electricity Flow through the Power Delivery System .......... 23
Figure 19: Typical Electricity Consumption of a Data Center .......... 23
Figure 20: Estimated Electricity Consumed by the IT Equipment in US Data Centers from 2000-2006 .......... 24
Figure 21: Relative contributions to the total thermal output of a typical data center .......... 25
Figure 22: Diagram of a Chilled Water System used to Remove Heat from a Data Center .......... 26
Figure 23: Heat Energy Removal Via the Refrigeration Cycle .......... 26
Figure 24: Typical Electricity Consumption by the heat removal system of a Data Center .......... 27
Figure 25: Projected Electricity Usage of Data Centers for Different Scenarios .......... 28
Figure 26: Screen shot of HP Power Advisor, which helps customers estimate the load of future racks in their data centers .......... 32
Figure 27: UPS efficiency as a function of load comparing latest generation UPS to historic published data .......... 34
Figure 28: Effect of internal UPS loss on Efficiency .......... 34
Figure 29: Typical power of servers in the world over time .......... 35
Figure 30: Comparison of Power Consumption when "Performance State" of AMD servers .......... 37
Figure 31: Average CPU utilization of more than 5,000 servers during a six-month period. Servers are rarely completely idle and seldom operate near their maximum utilization .......... 38
Figure 32: Server power usage and energy efficiency at varying utilization levels, from idle to peak performance. Even an energy-efficient server still consumes about half its full power when doing virtually no work .......... 38
Figure 33: Various forms of consolidation .......... 39
Figure 34: Diagram illustrating Virtualization .......... 39
Figure 35: Example of how a liquid-cooling configuration could be configured in a data center .......... 40
Figure 36: Diagram of a Chilled Water System using chilled water directly from the cooling tower to remove heat from the Data Center .......... 41
Figure 37: Diagram of a Data Center relying only on heat removal through an Air-Side Economizer
Figure 38: Rack arrangement without hot and cold aisles (side view). Red indicates hot air, while blue indicates cold air .......... 43
Figure 39: Hot/cold aisle rack arrangement (side view). Red indicates hot air, while blue indicates cold air .......... 43
Figure 40: Example of poor placement of perforated floor tiles, or "floor vents", as it is referred to here .......... 44
Figure 41: Diagram showing heat removal capacity of perforated floor tiles depending on the tile's airflow .......... 44
Figure 42: Rack arrangement showing layout of CRAC units in relation to the hot/cold aisle (top view). This diagram is for a raised floor data center .......... 45
Figure 43: Example of CRAC chimneys, which are circled in green .......... 46
Figure 44: Example of poor placement of perforated floor tiles, or "floor vents", as it is referred to here .......... 48
Figure 45: Example of a cable cutout that prevents the waste of cold air .......... 48
Figure 46: Example of rack without blanking panels (left) and with blanking panels (right) .......... 49
Figure 47: Missing racks in a row of racks allow hot air to re-circulate .......... 49
Figure 48: Power supplies acting as a raised floor obstruction .......... 50
Figure 49: Network cabling acting as a raised floor obstruction .......... 50
Figure 50: List of improvements for existing data centers .......... 53
Figure 51: Diagram of Pilot Data Center .......... 54
Figure 52: Current profile of the IT equipment over a one week period .......... 55
Figure 53: Current of the data center CRAC units on a three day (left) and one day (right) scale. Highlighted area in three day chart is area of one day chart .......... 57
Figure 54: Detail of power consumption in a data center efficiency model .......... 59
Figure 55: Baseline electricity usage for the pilot data center .......... 59
Figure 56: Variance in airflow in the pilot data center (baseline condition) .......... 60
Figure 57: Comparison of actual and required airflow for pilot data center (baseline condition) .......... 60
Figure 58: Variance in air temperature directly in front of each rack in the pilot data center (baseline condition) .......... 61
Figure 59: Diagram showing location of racks on which air temperatures were measured .......... 62
Figure 60: Air temperature measurements in the cold and hot aisle for selected racks (baseline condition). Note that Rack 21 and Rack 33 have perforated tiles in the cold aisle, while Rack 18 and Rack 36 do not .......... 62
Figure 61: Diagram showing the average air temperature at various locations within the pilot data center (note only a portion of the data center is shown here). The average air temperature shown was derived from data taken over a one-week period .......... 63
Figure 62: Picture showing missing blanking panels, missing doors, and unblocked cable cut-outs (baseline condition) .......... 64
Figure 63: Picture showing an unblocked cable cut-out and raised floor obstructions (baseline condition) .......... 64
Figure 64: Picture showing randomly placed perforated tile blowing pieces of paper into the air (baseline condition) .......... 64
Figure 65: Picture showing missing doors from racks (baseline condition) .......... 64
Figure 66: List of improvements for existing data centers .......... 66
Figure 67: Example of the changes (installation of egg crates above CRAC and blanking racks) made during Phase I and II .......... 67
Figure 68: Example of the changes (installation of egg crates in hot aisles and chimneys on CRACs) made during Phase I and II .......... 67
Figure 69: Variance in airflow in the pilot data center. Note that only the common perforated tiles are shown in this chart .......... 68
Figure 70: Comparison of actual and required airflow for pilot data center .......... 68
Figure 71: Comparison of air temperature during different phases of the pilot improvement process
Figure 72: Diagram showing the average and change from baseline air temperature after the Phase I changes had been implemented within the pilot data center .......... 70
Figure 73: Air temperatures near CRAC A after Phase II changes were implemented .......... 71
Figure 74: Air temperatures near CRAC B after Phase II changes were implemented .......... 71
Figure 75: Air temperatures near CRAC C after Phase II changes were implemented .......... 71
Figure 76: Air temperatures near CRAC D after Phase II changes were implemented .......... 71
Figure 77: Diagram showing location of racks on which air temperatures were measured .......... 72
Figure 78: Air temperature measurements in the cold and hot aisle for selected racks. Note that Rack 33 has a perforated tile in the cold aisle, while Rack 36 does not .......... 72
Figure 79: Current of the data center CRAC units before and after the reheat function was disabled on all four in-room CRAC units .......... 73
Figure 80: Current of the data center CRAC units before and after one of the units was turned off. In addition, dehumidification and humidification was turned off .......... 73
Figure 81: Baseline electricity usage for the pilot data center .......... 75
Figure 82: Post pilot electricity usage for the pilot data center .......... 75
Figure 83: Comparison of electricity used by the pilot data center and annual savings achieved during different phases of the pilot program .......... 76
Figure 84: Survey results of what is keeping data center managers "up at night" .......... 78
Figure 85: Example of Staff Alignment for Energy Efficiency .......... 78
Figure 86: Example of slide shown to Facility Leadership to gain buy-in of importance of data centers .......... 84
Figure 87: Example of slide shown to Facility Leadership to keep momentum of data center projects .......... 84
Figure 88: Jishuken Dynamic .......... 85
Figure 89: Building Electricity before and after Phase I changes were implemented .......... 96
Figure 90: Building's electricity consumption by monthly end use for 2008 .......... 98
Figure 91: Building's electricity consumption by end use for 2008 .......... 98
Figure 92: Building's electricity consumption as a function of weather from 2006-2009 .......... 98
Figure 93: Building's electricity for two weeks in 2009 .......... 98
LIST OF TABLES

Table 1: Percentage of Load Consumed at the Garland Site from 2005-2009 .......... 19
Table 2: Overview of Strategies for Reducing Data Center Electricity Consumption .......... 31
Table 3: Strategies for Reducing Electricity Consumption in New Data Centers .......... 51
Table 4: Strategies for Reducing Electricity Consumption of Existing Data Centers .......... 52
Table 5: Comparison of typical and actual characteristics of the pilot data center .......... 54
Table 6: Electrical load of the IT equipment in the pilot data center .......... 56
Table 7: Chilled water usage and calculated electrical loads .......... 57
Table 8: Total baseline electrical load for the pilot data center .......... 58
Table 9: Issues identified by data analysis in the pilot data center .......... 65
Table 10: Statistics for airflow per tile during different phases of the pilot process (non-zero, perforated tiles) .......... 68
Table 11: Statistics for air temperature in the cold aisle during different phases of the pilot process .......... 69
Table 12: Electricity consumed by CRACs during different stages of project .......... 73
Table 13: Chilled water usage and calculated electrical loads .......... 74
Table 14: Comparison of the baseline and post pilot electrical load for the data center .......... 75
Table 15: Issues identified by data analysis in the pilot data center .......... 77
Table 16: Key questions and answers that highlight a successful implementation process .......... 83
Table 17: Analysis of electricity saved during the piloted improvement process .......... 96
Table 18: Calculations to determine sensitivity of Chiller Electricity Consumption .......... 97
1 Introduction
1.1 Project Motivation

1.1.1 Increase Energy Efficiency of Data Centers with Behavior Change


Energy efficiency in the US is lower than in all other developed countries, and a substantial amount
of energy consumption could be avoided by changing energy consuming behaviors. The industrial
sector accounts for nearly half of all the energy consumed, so it is ripe with opportunities for
increased energy efficiency. One well-known area of potential improvement for almost all industries
is data center efficiency. Because data centers are set up and operated by information technology
professionals, this project and thesis explore in depth the mechanisms for changing the behavior of
organizations containing such professionals. While much has been published on behavior change in
individuals, few have explored which methods (e.g., incentives, organization alignment, and
performance management) can be used to change the energy consuming behaviors of entire technical
organizations. For example, how should organizations utilize awareness, commitment, action,
feedback, and/or competition to maximize behavior change? Also, how should organizations
develop and scale energy efficiency improvement processes? This thesis will explore both questions.

1.1.2 Use only Low-Cost Techniques to Change Behaviors


Energy consumption by data centers is costly and, ironically, decreases the amount of upfront capital
available to make infrastructure and equipment changes to existing data centers. Therefore, the goal
of this project was not to develop a general improvement process for data centers; rather, the goal
was to develop a low-cost improvement process for data centers that relied heavily on behavior
change rather than capital investment for energy efficiency gains.

1.1.3 Reduce Emissions due to Energy Consumption


While energy reduction started as a cost reduction initiative, it is frequently becoming part of a larger
corporate sustainability strategy.1 Ideally, a corporate sustainability strategy would span a company's
economic, environmental and social impacts, though in recent years there has been increased
emphasis on an organization's environmental impact, most notably, their greenhouse gas (GHG)
emissions. 2

1 (Shah and Littlefield 2009)


2 (EPA n.d.)
There are six major greenhouse gases being tracked by companies participating in GHG reduction
programs: carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), hydrofluorocarbons (HFCs),
perfluorocarbons (PFCs), and sulfur hexafluoride (SF6).3 The largest GHG emission from most
manufacturing companies is carbon dioxide resulting from the combustion of fossil fuels.
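Because the dominant emission from electricity use is the CO2 released when fossil fuels are burned to generate that electricity, energy savings can be translated into avoided emissions with a grid emission factor. The short sketch below illustrates this conversion; the 0.6 kg CO2 per kWh factor and the 500,000 kWh example are assumptions chosen for illustration, not values taken from this thesis.

```python
# Illustrative conversion of electricity savings into avoided CO2 emissions.
# The grid emission factor below is an assumed value for a fossil-heavy grid;
# actual factors vary by utility, region, and year.

ASSUMED_EMISSION_FACTOR_KG_PER_KWH = 0.6  # assumption, not a thesis value

def avoided_co2_tonnes(kwh_saved: float,
                       factor: float = ASSUMED_EMISSION_FACTOR_KG_PER_KWH) -> float:
    """Return avoided CO2 in metric tonnes for a given electricity saving in kWh."""
    return kwh_saved * factor / 1000.0

# Hypothetical example: a site that trims 500,000 kWh per year
print(f"{avoided_co2_tonnes(500_000):.0f} tonnes of CO2 avoided per year")
```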

1.2 Problem Statement


Raytheon Intelligence & Information Systems (IIS) was an ideal location to complete this thesis since
most of their energy usage is due to data centers, and since the three motivations outlined previously
apply to their business. First, IIS has an IT organization, comprised of technical professionals,
responsible for both the initial design and the day-to-day operations of IIS data centers. Second,
only $500 total was available to make technical upgrades; therefore, the improvement process
developed had to be extremely low-cost and depend primarily on behavior changes to achieve energy
savings. Third, IIS has aggressive GHG reduction goals, which are tracked and reported by the
facilities organization.

Since the high cost of electricity leads to a high overhead rate that lowers the likelihood of IIS
winning cost-based defense contracts, there is an intrinsic reason for the IIS IT organization to strive
to increase data centers' energy efficiency. Furthermore, because most of the electricity consumed by
IIS is derived from fossil fuels, their energy consumption leads to additional GHG in the
environment. This provides the Facilities organization incentive to help increase the energy efficiency
of data centers. Since there are two organizations - IT and Facilities - who are incentivized to
increase the energy efficiency of data centers, behavior change techniques were developed in such a
way to apply to both organizations. Additionally, organization alignment, interaction, and incentives
were considered levers for increasing the teamwork necessary for large organizations to achieve
reductions in energy consumption.

3 (EPA 2010)
1.3 Thesis Overview
The document is organized as described below:
Chapter 1 outlines the general motivation for the thesis and provides an overview of the thesis
contents.
Chapter 2 provides the company background and a brief discussion of data centers and the electricity they consume.
Chapter 3 presents the hypothesis for the study undertaken.
Chapter 4 describes existing knowledge on how to improve the energy efficiency of data centers,
and outlines the pilot process developed to test the hypothesis.
Chapter 5 details the pilot data center and provides the before and after electricity consumption
data.
Chapter 6 provides recommendations for improving the energy efficiency of data centers through
engineering design and managerial techniques, and through extensive rollout of the piloted process
with Rapid Results Teams.
Chapter 7 presents an overview of the business case, including a summary of key findings.
2 Background
2.1 Company Background - Raytheon
The Raytheon Company, established in 1922, is a technology leader specializing in defense, homeland
security, and other government markets throughout the world. With a history spanning more than
80 years, Raytheon provides electronics, mission systems integration, and other capabilities in the
areas of sensing, effects, communications and intelligence systems, as well as a broad range of
mission support services. Raytheon has around 75,000 employees worldwide and generated $25
billion in 2009 sales. 4 Raytheon has the following six business units: 5

Integrated Defense Systems (21% of sales and 29% of operating profits in 2008) is a leading
provider of integrated joint battle space (e.g., space, air, surface, and subsurface) and homeland
security solutions. Customers include the U.S. Missile Defense Agency (MDA), the U.S. Armed
Forces, the Dept. of Homeland Security, as well as key international customers. Main product lines
include sea power capability systems, focusing on the DDG-1000, the Navy's next-generation naval
destroyer; national & theater security programs, including the X-band radars and missile defense
systems; Patriot programs, principally the Patriot Air & Missile Defense System; global operations;
and civil security and response programs.

Intelligence & Information Systems (13% of sales and 9% of profits) provides systems,
subsystems, and software engineering services for national and tactical intelligence systems, as well as
for homeland security and information technology (IT) solutions. Areas of concentration include
signals and image processing, geospatial intelligence, air and space borne command & control,
weather and environmental management, information technology, information assurance, and
homeland security.

Missile Systems (21% of sales and 19% of profits) makes and supports a broad range of leading-
edge missile systems for the armed forces of the U.S. and other countries. Business areas include
naval weapon systems, which provides defensive missiles and guided projectiles to the navies of over
30 countries; air warfare systems, with products focused on air and ground-based targets, including
the Tomahawk cruise missile; land combat, which includes the Javelin anti-tank missile; and other
programs.

4 (Raytheon Company 2010)


5 (Standard & Poor's 2010)
Network Centric Systems (18% of sales and 18% of profits) makes mission solutions for
networking, command and control, battle space awareness, and transportation management.
Business areas include combat systems, which provides ground-based surveillance systems; integrated
communication systems; command and control systems; Thales-Raytheon Systems, a joint venture
between the two companies; and precision technologies and components, which provides a broad
range of imaging capabilities.

Space & Airborne Systems (17% of sales and 19% of profits) makes integrated systems and
solutions for advanced missions, including traditional and non-traditional intelligence, surveillance
and reconnaissance, precision engagement, unmanned aerial operations, Special Forces operations,
and space. SAS provides electro-optic/infrared sensors, airborne radars for surveillance and fire
control applications, lasers, precision guidance systems, electronic warfare systems and space-
qualified systems for civilian and military applications.

Technical Services (10% sales and 6% profits) provides technical, scientific, and professional
services for defense, federal government and commercial customers worldwide, specializing in
mission support, counter-proliferation and counter-terrorism, range operations, product support,
homeland security solutions, and customized engineering services.

2.1.1 Raytheon Intelligence and Information Systems


Raytheon Intelligence and Information Systems (IIS) is a leading provider of intelligence and
information solutions that provide the right knowledge at the right time, enabling their customers to
make timely and accurate decisions to achieve mission goals of national significance. As shown in
Figure 1, their leading edge solutions include integrated ground control, cyber security,
environmental (weather, water, & climate), intelligence, and homeland security.
(Figure panels: IIS Capabilities and IIS Markets)

Figure 1: Overview of Raytheon's Intelligence and Information Systems 6

There are 9,200 employees at various IIS sites, which are shown in Figure 2. The headquarters of
IIS, which is located in Garland, TX, consumes about 42% of the total IIS energy consumption, as
shown in Figure 3. Eighty percent of IIS employees have received security clearances from the US
government. The secret nature of the IIS business is important in that most areas of the company
are considered a Sensitive Compartmented Information Facility (SCIF); therefore, information and
data cannot easily be taken in or out of the data centers. In addition, data centers that are
considered SCIFs must be essentially self-sustaining. As the Physical Security Standards for Special
Access Program Facilities states 7, "walls, floor and ceiling will be permanently constructed and
attached to each other. To provide visual evidence of attempted entry, all construction, to include
above the false ceiling and below a raised floor, must be done in such a manner as to provide visual
evidence of unauthorized penetration."

6 (Raytheon Company 2009)


7 (Joint Air Force - Army - Navy 2004)
Figure 2: Overview of Major IIS Site Locations 8
Figure 3: 2008 IIS Energy Consumption by Major Site

2.1.2 Background of Garland IIS Site

2.1.2.1 History of the Garland IIS Site 9


The Garland site has been occupied since the 1950s. The site's legacy company - Texas Engineering
and Manufacturing Company (TEMCO) - originally utilized the site, shown in Figure 4, to
manufacture airframe components. However, the site's principal product had become systems
engineering by the late 1950s. As shown in Figure 5, over the next three decades the site built
additional buildings and changed ownership through mergers and acquisitions several times, before
eventually becoming the headquarters of Raytheon Intelligence and Information Systems in May
1995 through a merger between E-Systems and Raytheon.

8 (Raytheon Company 2009)


9 (Raytheon Company 2009)
Figure 4: Photo of the Garland site in the 1950s 10
Figure 5: Timeline of the Garland Site 11

2.1.2.2 Analysis of Garland's Electricity Consumption


The IIS Garland campus, shown in Figure 6, encompasses 1.142 million square feet and is home to
2,600 employees, contractors, and visitors.12 An instantaneous analysis of electricity was completed
in conjunction with the utility provider, Garland Power and Light, to better understand which
buildings on the Garland campus used the majority of the electricity. As shown in Figure 7, the
major buildings (551, 552, 580, 582, and 584) consume about 92% of the electricity, and no one
building could be considered the primary electricity consumer on the Garland site. Additional
analysis, as described in Appendix I - Process for Analyzing Energy Usage, was completed using past
electricity data, so that we could better understand how electricity is consumed on the Garland site.

10 (Raytheon Company n.d.)

11 (Raytheon Company n.d.)


12 (The Raytheon Corportation n.d.)
Figure 6: Garland IIS Campus, circa 2009 13
Figure 7: Breakdown of Garland electricity consumption (instantaneous reading)

As shown in Figure 8, the constant plug load makes up the majority of the energy consumed at
Garland. To be the leading provider of intelligence and information solutions, Raytheon Intelligence
and Information Systems (IIS) must have the latest computer equipment, particularly in their data
centers. Therefore, the constant plug load in the data centers is comprised of IT equipment, cooling
equipment used to remove the heat produced by the IT equipment, lighting, and electricity loss that
results from transformers. In addition, emergency lighting and required evacuation systems are
responsible for constant electricity consumption, albeit a minor amount. Finally, it is likely that some
employees leave equipment turned on 24 hours a day - both intentionally due to program requirements
and unintentionally due to long-standing behavior patterns - and this equipment is included in the
constant load. The total constant plug load for Garland is about 5,200 kW, which accounts for about
89% of the total electricity consumed annually at the Garland site.

A varying amount of electricity, labeled "February Daily Load" and "June Daily Load" in Figure 8, is
consumed each weekday as IIS employees at the Garland site use computers, printers, and other
equipment. As shown in Figure 8, this daily load, which is considered the variable plug load, was
consistently about 1,000 kW in every month of 2009, and accounts for about 7% of the total
electricity consumed at the Garland site.
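As a rough sketch of the kind of decomposition described above (the detailed procedure is documented in Appendix I), the snippet below splits a period of hourly demand readings into a constant component, taken here as the minimum hourly demand, and a variable component above that floor. The function name, the input format, and the simple minimum-based rule are my assumptions for illustration, not the exact method used in the analysis.

```python
# Illustrative split of hourly electricity demand (kW) into constant and
# variable plug load. `hourly_kw` is assumed to be a list of hourly average
# demand readings covering the analysis period.

def decompose_load(hourly_kw):
    constant_kw = min(hourly_kw)              # demand never drops below this floor
    avg_kw = sum(hourly_kw) / len(hourly_kw)  # average demand over the period
    variable_kw = avg_kw - constant_kw        # average demand above the floor
    return constant_kw, variable_kw, constant_kw / avg_kw

# Hypothetical day with a 5,200 kW floor and ~1,000 kW of extra daytime demand
sample_day = [5200.0] * 16 + [6200.0] * 8
constant, variable, share = decompose_load(sample_day)
print(f"constant ~ {constant:.0f} kW, variable ~ {variable:.0f} kW, "
      f"constant share ~ {share:.0%}")
```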

13 (Google Maps n.d.)


Figure 8: Illustration of Electricity Analysis performed for the Garland Campus
Figure 9: US Climate Zones for 2003 14

The final consumer of electricity at the Garland site is the Heating, Ventilating, and Air Conditioning
(HVAC) systems. As shown in Figure 8, the amount of electricity consumed in February is lower
every hour than the electricity consumed in June. The minimum amount of electricity consumed in

June is higher than the minimum electricity consumed in February due to the HVAC required to cool
the buildings at all times, even weekends and nights. The increased weekend cooling requirements in
June (as well as any non-winter month) are due to the extremely hot weather in Garland, as shown in
Figure 9. The electricity required to achieve this minimum cooling varies month to month, and is
illustrated as "Minimum to cool in June" in Figure 8. In addition to a minimum amount of cooling,
HVAC is required to keep occupants comfortable while they are working at the Garland site. To
determine how much electricity was required to achieve this "comfort cooling", the weekend
electricity during June was observed, as shown in Figure 8. The total HVAC load, which is the
sum of the "minimum cooling" and "HVAC cooling" components each month, averages about 555 kW,
and accounts for about 4% of the total electricity consumed at the Garland site.

2.1.2.3 Analysis of Garland's Natural Gas Consumption


Natural gas is used at the Garland site for two primary purposes - hot water heating and occupant
comfort (heating) during the winter months. Analysis similar to the analysis performed on electricity
was performed on the Garland natural gas data to determine the amount of energy that is used for
each purpose. As shown in Figure 10, the minimum amount of natural gas is consumed during the
summer months, and is about 200 MWh monthly (an average rate of approximately 277 kW). Since a minimal
amount of heating is required in the summer months at Garland for occupants' comfort, it is likely

14 (Energy Information Administration 2003)


that almost all of this natural gas consumption is for hot water heating. The remainder of the natural
gas consumed each month is attributed to heating required to achieve occupants' comfort. It is
notable that in some summer months, natural gas is consumed at a much higher level than the
minimum required for hot water production. This is an indication that the HVAC system is over-
cooling some areas of the Garland site, and occupants are "correcting" this mistake by turning on the
heat in those areas. As shown in Figure 10, a minimal amount of "correcting" took place in the
summer months of 2007; however, an increased consumption took place in the summer months of
2008. This could be a signal that the HVAC system's controls were not utilized as much in 2008 as
they were in 2007. Finally, as shown in Figure 11, the total amount of natural gas consumed is relatively small
compared to the amount of electricity consumed at Garland.

Figure 10: Garland Natural Gas Consumption, 2005-2008
Figure 11: Garland Natural Gas and Electricity Consumption, 2005-2008

2.1.2.4 Summary of Garland's Energy Consumption


The results of the analysis of both electricity and natural gas consumption were compiled, as shown
in Table 1 and Figure 12. It is clear from Table 1 that the constant plug load, which results from data
centers, equipment turned on 24 hours each day, and emergency lighting, is responsible for the
majority of Garland's energy consumption. The HVAC system, utilized for occupants' comfort, is
also responsible for a significant amount of energy consumption at Garland, while daily plug load
and natural gas used for hot water heating are responsible for a very small amount of energy
consumption.

Table 1: Percentage of Load Consumed at the Garland Site from 2005-2009

Year | Constant Load | HVAC (Gas + Electricity) | Gas for Water Heating | Daily Plug Load
2005 | 73% | 18% | 5% | 4%
2006 | 79% | 13% | 5% | 3%
2007 | 69% | 20% | 7% | 4%
2008 | 77% | 13% | 5% | 5%
Data on the past four years of Garland energy consumption were obtained and analyzed. As shown in Figure
13, Garland was able to reduce its energy consumption by 21% from 2006-2008, indicating that an
emphasis was placed on energy consumption during this period. Facilities employees and leadership
confirmed this hypothesis during conversations, stating, "Our buildings are older, and we replaced
some of the original lights and chillers the past few years. In addition, we installed instantaneous hot
water heaters [in 2008], which also led to a substantial decrease in energy consumption. Finally, IIS
recently began using more energy-efficient monitors, computers, and data center equipment, which
also had an impact on the energy reduction achieved from 2006-2008. We keep updating the
memory, processors, and other data center equipment... with each upgrade to more energy-efficient
products, I'm sure we are reducing our constant load."

Figure 12: Example of energy consumption over a one year period at the Garland IIS site
Figure 13: Breakdown of Garland energy consumption, 2005-2009

From the analysis of the energy consumed at Garland, it is obvious that the constant load deserves
special attention. Since the majority of the constant load is related to data centers, a thorough
process for improving the data centers' operating energy efficiency was developed and piloted using industry best
practices.
2.2 Data Center Overview

2.2.1 Introduction
Even though most consumers do not know what a data center is, anyone who accesses the Internet
uses data centers. As shown in Figure 14, a data center is a room that contains "Information
Technology Equipment" - data processing electronic equipment such as servers, network equipment,
and storage devices (memory). This electronic equipment is arranged in racks, as shown in Figure 15.
Practically all companies in all sectors of the economy have at least one data center and many
companies have multiple data centers to ensure required computing can occur. For example, if a
company's employees have access to the internet, have a company email address, or store files on a
company file server, then the company likely has data centers that support its computing
infrastructure.

Figure 14: Example of a data center 15
Figure 15: Example of racks contained in a data center 16

Because data centers are used to house computing equipment, they typically have no windows and
have a minimum amount of incoming fresh air. Data centers range in size from small rooms (server
closets) within a conventional building to large buildings (enterprise class data centers) dedicated to
housing servers, storage devices, and network equipment.17 Large data centers are becoming
increasingly common as smaller data centers consolidate.18 In addition, in most cases data centers are
used for many years, resulting in a mixture of state-of-the-art and obsolete computing equipment.

15 (Lawrence Berkeley National Laboratory 2003)


16 (System Energy Efficiency Lab n.d.)
17 (US Environmental Protection Agency 2007)
18 (Carr 2005)
2.2.2 Typical Electricity usage within a Data Center
There are several components required to ensure a data center operates correctly, as shown in Figure
16. Each of the components consumes electricity, and can be grouped into three categories - power
delivery, IT equipment, and heat removal (or cooling equipment). Additionally, miscellaneous
equipment consumes a relatively small amount of electricity. The amount of electricity consumed by
each component varies between data centers. Even the "typical" consumption varies by source, as
shown in Figure 17. Generally, the IT equipment and the cooling equipment required to remove
heat generated inside the data center consume the bulk of the electricity. However, the power
delivery system consumes a substantial amount of electricity, especially in older data centers.
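One way to make this category split concrete is to compute how much total facility electricity is drawn for each unit of electricity that actually reaches the IT equipment. The sketch below does this for an assumed breakdown in which IT equipment takes slightly less than half of the total; the percentages are illustrative placeholders, not measurements from any data center discussed in this thesis.

```python
# Illustrative facility-to-IT electricity ratio for an assumed breakdown.
# The category shares below are placeholders chosen so that IT equipment
# consumes a bit less than half of the total, as described in the text.

assumed_shares = {
    "it_equipment": 0.45,
    "heat_removal": 0.38,
    "power_delivery": 0.14,
    "miscellaneous": 0.03,
}

total = sum(assumed_shares.values())                      # 1.0 by construction
facility_per_it = total / assumed_shares["it_equipment"]  # facility kWh per IT kWh

print(f"Every 1 kWh delivered to IT equipment requires about "
      f"{facility_per_it:.2f} kWh of total facility electricity.")
```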

Figure 16: Diagram of the Electricity Flow into a Data Center
Figure 17: Typical Electricity Consumption of a Data Center

2.2.2.1 Electricity Required to Deliver Power

The first electricity requirement for a data center is the power delivery system that supplies electricity
through transformers and switchgears. The electricity is then sent to an uninterruptible power supply
(UPS), which is essentially a set of batteries that ensure that electricity is always available to the data
center. This uninterruptible supply of electricity is important because most data centers operate 24
hours a day, 7 days a week. However, grid disruptions can stop the supply of electricity from the
utility, making the UPS a key component of the power delivery system. As shown in Figure 18,
electricity passes from the UPS to the power distribution unit (PDU), which then sends the
electricity to the electrical panels that supply electricity to the data center IT equipment.

Figure 18: Diagram of Electricity Flow through the Power Delivery System22,23          Figure 19: Typical Electricity Consumption of a Data Center (legend: no-load loss, proportional loss, square-law loss)24

As shown in Figure 18, in the UPS the electricity is converted from AC to DC to charge the
batteries. Power from the batteries is then reconverted from DC to AC before leaving the UPS.
Electricity consumed in this power delivery chain accounts for a substantial portion of overall
building load.25 Inherent losses are present in the power delivery system, as shown in Figure 19, and
increase if the components used are oversized for the data center they are serving.
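To make these loss mechanisms concrete, the following short Python sketch (not from the thesis or its cited sources; all ratings and coefficients are illustrative assumptions) models each power-path component with the no-load, proportional, and square-law loss terms noted in Figure 19 and walks the chain from the IT load back to the utility feed.

def component_loss(load_kw, rated_kw, no_load_frac, prop_frac, sq_frac):
    """Loss (kW) of one power-path component at a given load (illustrative model)."""
    load_frac = load_kw / rated_kw
    return rated_kw * (no_load_frac + prop_frac * load_frac + sq_frac * load_frac ** 2)

def utility_power(it_load_kw, components):
    """Walk the chain from the IT load back to the utility feed."""
    power = it_load_kw
    for name, rated_kw, nl, pr, sq in components:
        loss = component_loss(power, rated_kw, nl, pr, sq)
        print(f"{name:12s} carries {power:6.1f} kW and loses {loss:5.1f} kW")
        power += loss  # the upstream device must supply the load plus this loss
    return power

# Hypothetical 200 kW-rated chain serving a 140 kW IT load
chain = [("PDU",         200, 0.010, 0.010, 0.010),
         ("UPS",         200, 0.040, 0.020, 0.010),
         ("Transformer", 200, 0.010, 0.005, 0.010)]
supplied = utility_power(140, chain)
print(f"Utility supplies {supplied:.1f} kW "
      f"-> end-to-end efficiency {100 * 140 / supplied:.1f}%")

Because the no-load terms do not shrink when the load does, the same sketch also shows why oversized components drag the end-to-end efficiency down.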

2.2.2.2 Electricity Required for IT Equipment


Three primary types of IT equipment comprise a data center: servers, storage devices, and network
equipment. As shown in Figure 20, servers account for about 75% of the electricity consumed by
data center equipment, while storage devices and networking equipment account for 15% and 10%,
respectively, of the electricity consumed.

22 (Rasmussen 2006)
23 (US Environmental Protection Agency 2007)

24 (Rasmussen, Electrical Efficiency Modeling of Data Centers 2006)

25 (US Environmental Protection Agency 2007)


Figure 20: Estimated Electricity Consumed by the IT Equipment in US Data Centers from 2000-2006 (server, storage device, and networking equipment electricity)26

As electricity is consumed by IT equipment, heat is generated. Approximately 50% of the heat


energy released by servers originates in the microprocessor itself, and over 99% of the electricity used
to power IT equipment is converted into heat. 27 Therefore, it is very important to understand how
this heat is removed, and how electricity is consumed during the heat removal process.

2.2.2.3 Electricity Required to Remove Heat


In order to keep the components of data center IT equipment within the manufacturers' specified
temperature and humidity range, heat produced within the data center must be removed. If the IT
equipment is not kept within the manufacturers' specified limits, the equipment's reliability is
reduced.

To put the heat load that must be removed from data centers in perspective, a fully populated rack of
blade servers consumes 30 kW of electricity (720 kWh per day)28, which is the equivalent of 43 average

26 (US Environmental Protection Agency 2007)


27 (Evans, Fundamental Principles of Air Conditioners for Information Technology 2004)
28 (Hughes 2005)
US homes.29 All of this electric power is converted to heat, which must be removed from the data center. In addition to the IT equipment, other data center components, as shown in Figure 21, also generate heat. In total, about 42 kW (143,310 BTU/hr) of heat must be removed from the data center continuously.
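The arithmetic behind these figures is simple; the short sketch below (assumptions: the 71% IT-load share taken from Figure 21 and the standard conversion factor of 3,412 BTU per kWh) reproduces the daily energy and total heat-load numbers quoted above.

RACK_KW = 30.0                      # fully populated rack of blade servers
daily_kwh = RACK_KW * 24            # about 720 kWh of electricity per day

IT_SHARE = 0.71                     # IT loads as a share of total thermal output (Figure 21)
total_heat_kw = RACK_KW / IT_SHARE  # rack plus UPS, lighting, power distribution, personnel
btu_per_hr = total_heat_kw * 3412   # roughly 143,000 BTU/hr to be removed

print(f"Rack electricity: {daily_kwh:.0f} kWh/day")
print(f"Total heat load:  {total_heat_kw:.1f} kW (~{btu_per_hr:,.0f} BTU/hr)")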

Figure 21: Relative contributions to the total thermal output of a typical data center (IT loads 71%, UPS 13%, lighting 10%, power distribution 4%, personnel 2%)30

There are five basic methods to collect and transport unwanted heat from the IT environment to the outdoor environment, and each method uses the refrigeration cycle to transport or pump heat from the data center:31
* Air cooled systems (2-piece)
* Air cooled self-contained systems (1-piece)
* Glycol cooled systems
* Water cooled systems
* Chilled water systems

The five methods are primarily differentiated in the way they physically reside in the IT environment
and in the medium they use to collect and transport heat to the outside atmosphere. Obviously, each
method has advantages and disadvantages. The decision on which cooling system to choose should
be based on the uptime requirements, power density, geographic location and physical size of the IT

29 (US Energy Information Administration 2009)


30 (Evans, The Different Types of Air Conditioning Equipment for IT Environments 2004)
31 (Evans, Fundamental Principles of Air Conditioners for Information Technology 2004)
environment to be protected, the availability and reliability of existing building systems, and the time
and money available for system design and installation. 32

Most large data centers use chilled water systems to remove the heat from data centers, as shown in
Figure 22. Heat generated in the data center is transported to the top of the computer room air
conditioner (CRAC) by the air circulating in the data center. The condenser coil then uses chilled water to complete the refrigeration process (shown in Figure 23) on the heated air.

Figure 22: Diagram of a Chilled Water System used to Remove Heat from a Data Center33          Figure 23: Heat Energy Removal Via the Refrigeration Cycle34

The major consumers of electricity of the heat removal system are the pumps used to transport the
heat-carrying medium from the data center to the outside atmosphere, the computer room air
conditioners (CRAC), and the ongoing upkeep/production of the heat-carrying medium. The
amount of electricity these consumers use is different for each system. For example, the chilled
water system uses electricity to pump the water from the cooling tower to the water chiller, to pump
the chilled water from the water chiller to the CRAC, and to pump the warm water from the CRAC
to the outside atmosphere. In addition to pumping water, the CRAC uses electricity to reheat air if
the humidity of incoming air from the data center is too low. Finally, electricity is used to reduce the
temperature of the water both in the cooling tower and in the water chiller.

32 (Evans, Fundamental Principles of Air Conditioners for Information Technology 2004)


33 (Evans, The Different Types of Air Conditioning Equipment for IT Environments 2004)

34 (Evans, Fundamental Principles of Air Conditioners for Information Technology 2004)


Estimates of the electricity required for the pumps and chilled water production, and the computer
room air conditioner vary by data center, as shown in Figure 24. The difference in electricity
breakdown depends on the efficiency, age, and level of maintenance of all the components of the
heat removal system (cooling tower, pumps, water chiller, fans, etc). In addition, the efficiency
depends on how efficiently heat generated by the IT equipment is transported to the CRAC units.

Figure 24: Typical electricity consumption by the heat removal system of a data center, comparing sources (ASCE 2001, ASCE 2002, ACEEE, Green Grid, Engineering Systems) and splitting the load between AHUs/fans/CRACs and the chiller plant and pumps35,36,37,38
2.2.3 Impact of Data Centers
Most data centers are more energy intensive than other buildings. This is due to the high power
requirements of the IT equipment and the power and cooling infrastructure needed to support this
equipment. In fact, data centers can be more than 40 times as energy intensive as conventional office
buildings. 39 As an aggregate in 2005, data centers accounted for 1.5% of the total US electricity
consumption and 0.8% of the total world electricity consumption.40 To put this in perspective, in
2007 the carbon dioxide emitted because of data center energy consumption was more than the
carbon dioxide emitted by both Argentina and the Netherlands. 41 This percentage of total electricity
consumed - and therefore the impact of data centers - is expected to increase over the next decade
for various reasons, such as:

35 (Blazek, et al. 2004)
36 (Tschudi, et al. 2003)
37 (The Green Grid 2007)
38 (Salim 2009)
39 (Greenburg, et al. 2007)
40 (Koomey 2007)
41 (Kaplan, Forrest and Kindler 2008)

* Industries (banking, medical, finance, etc.) moving from paper records to electronic records.
* Increased use of electronic equipment, such as global positioning system (GPS) navigation
and radio-frequency identification (RFID) tracking in everyday activities.
* Increased access to information technology in developing countries such as India, China, etc.
* Increased storage of records (both original and duplicates) for websites, databases, emails,
etc.

As the move to a digital way of life continues, the electricity consumed by data centers will increase
as shown in Figure 25.

Figure 25: Projected Electricity Usage of Data Centers for Different Scenarios (historical energy use through 2006 and projections through 2011 under historical trends, current efficiency trends, improved operation, and state-of-the-art scenarios)42

As illustrated in Figure 25, the level of efficiency, both in the IT equipment and in the data center operating principles, will have an enormous impact on how much electricity data centers consume. The different assumptions for each efficiency scenario can be seen in Appendix II - Summary of
Alternative Efficiency Scenario Assumptions.

42 (US Environmental Protection Agency 2007)


3 Hypothesis
The energy efficiency of existing data centers can be increased substantially through relatively low-
cost changes. However, the changes involve not only technical acumen, but also require leadership
support and employee engagement. Moreover, the energy consuming behaviors of working
professionals can be changed through a simple behavior change model of providing feedback,
incentives, and reinforcement in sequential order.
4 Methodology
4.1 Data Collection and Analysis
In order to develop and test a data center improvement process, a three-step process was used:
1. Research industry best practices and develop an initial improvement process (as discussed in
this section).
2. Test the improvement process on a pilot data center in Garland (as discussed in Chapter 5).
3. Integrate lessons learned into the initial improvement process (as discussed in Chapter 6).

In order to measure the effect of the improvement process, data was collected prior to, during, and
after the implementation of the improvement process.

4.2 Review of Industry Best Practices


There is no shortage of suggestions for improved energy efficiency of data centers available in white
papers, top ten lists, and various reports. Entire conferences are dedicated to the energy efficiency of
data centers. 43 This illustrates that there are many ways to improve the energy efficiency of data
centers, a sampling of which can be seen in Table 2. However, the changes required vary in cost and
impact depending on the baseline efficiency of the data center. While medium and high-cost
techniques are reviewed herein, the emphasis of this thesis is to develop a low-cost improvement
process. Therefore, an emphasis is placed on techniques that require minimal budget and time to
implement. The improvement opportunities are categorized by the energy component of the data
center they impact, namely power delivery, information technology (IT) equipment, heat removal
equipment, and miscellaneous equipment.

43 An example can be seen at http://greendatacenterconference.com/



Table 2: Overview of Strategies for Reducing Data Center Electricity Consumption

Data Center Component | Improvement Description                                  | Capital Required1 | Labor Required1 | Behavior Change1 | Savings2
Power Supply          | Rightsizing                                              |                   |                 |                  | 1-3%
Power Supply          | Add on-site power generation to reduce or eliminate UPS |                   |                 |                  | 2-7%
Power Supply          | Improved UPS                                             |                   |                 |                  | 4-11%
IT Equipment          | Consolidation                                            |                   |                 |                  | unknown
IT Equipment          | Replace obsolete equipment                               |                   |                 |                  | 30-50%
IT Equipment          | Virtualization                                           |                   |                 |                  | 10-40%
IT Equipment          | Manage power                                             |                   |                 |                  |
Heat Removal          | Use in-rack water cooling                                |                   |                 |                  |
Heat Removal          | Use Cooling Economizer                                   |                   |                 |                  |
Heat Removal          | Improve airflow management                               |                   |                 |                  | 16-51%4
Miscellaneous         | Install Energy Efficient Lighting                        |                   |                 |                  | 1-3%

1 Red = "Yes", Yellow = "In some Instances", Green = "No"
2 (Rasmussen, Implementing Energy Efficient Data Centers 2006)
3 Assumed inefficiency of UPS was eliminated (Rasmussen, Implementing Energy Efficient Data Centers 2006)
4 Sum of More Efficient Air Conditioner Architecture, More Efficient Floor Layout, Coordinate Air Conditioners, Locate Vented Floor Tiles Correctly, Reduce Gaps between racks/floors/walls, and Install Blanking Panels and Floor Grommets

4.2.1 Power Delivery Equipment Energy Efficiency


Three ways to increase the efficiency of the power supply equipment will be discussed in detail in the
following sections. Unfortunately, none of these three techniques were used for the pilot data center
because they require capital and/or the methods discussed must be performed during the design
phase of a data center, rather than during a retrofit or upgrade.

4.2.1.1 Rightsizing the Data Center


Of all the techniques available to increase the efficiency of data centers, rightsizing the physical
infrastructure to the IT load has the most impact on electrical consumption, with savings of up to
50% observed in some cases.44 Once the data center is set-up and running, it is nearly impossible to
retrofit the power delivery equipment and the heat removal equipment to the actual requirements.
Therefore, rightsizing must be performed in the design stages of a data center.

44 (The Green Grid 2007)


As the name implies, rightsizing is the process of ensuring the power delivery system is designed to
supply the IT equipment with the electricity it requires. While it sounds very simple, rightsizing is
very difficult because the average load requirements that will be present once the IT equipment is
purchased and installed depends on many factors, such as rack configuration and eventual use of the
equipment. Therefore, many data center managers design the data center for the maximum
theoretical load of the equipment to ensure that their infrastructure can handle the worst-case
scenario. However, the reality is that most data centers operate well below their maximum load,
which will be discussed in Section 4.2.1.3. Therefore, a better design practice would be to size
electrical and mechanical systems such that they will operate efficiently while overall loading is well
below design, yet be scalable to accommodate larger loads should they develop. 45 To further assist in
rightsizing, many manufacturers of servers, storage devices, and networking equipment offer
programs that will calculate the typical load depending on the configuration of the racks in the data
center. An example of such software is shown in Figure 26.

Figure 26: Screen shot of HP Power Advisor, which helps customers estimate the load of
future racks in their data centers 46

As shown in Figure 26, the user of the software is able to input what servers, storage devices, and
networking equipment will comprise their racks. In addition, the user is able to estimate utilization
of the rack, so that a realistic estimate of the typical power requirements can be determined. If this
type of process is used to design the entire data center, the energy efficiency will be improved.

45 (Greenburg, et al. 2007)


46 (HP 2009)
Indeed, this system design approach has a much greater effect on the electrical consumption than
does the efficiency of individual devices. 47 A table of other calculators can be seen in Appendix III -
List of Electrical Power Calculation Tools.
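As a rough illustration of what such calculators do, the sketch below (hypothetical device counts, power figures, and utilizations; a simple linear idle-to-peak power model, not the HP tool's actual method) contrasts a worst-case nameplate rack load with a utilization-based estimate of the kind used for rightsizing.

def device_power(idle_w, peak_w, utilization):
    """Simple linear interpolation between idle and peak draw (assumed model)."""
    return idle_w + (peak_w - idle_w) * utilization

rack = [  # (description, count, idle W, peak W, expected utilization) - illustrative values
    ("1U server",      20, 150, 300, 0.25),
    ("storage array",   2, 400, 600, 0.50),
    ("network switch",  2, 100, 150, 0.60),
]

nameplate_w = sum(count * peak for _, count, _, peak, _ in rack)
expected_w  = sum(count * device_power(idle, peak, util)
                  for _, count, idle, peak, util in rack)

print(f"Nameplate (worst-case) design load: {nameplate_w / 1000:.1f} kW")
print(f"Utilization-based expected load:    {expected_w / 1000:.1f} kW")

Sizing the power delivery and heat removal systems to the expected load (while remaining scalable) rather than the nameplate sum is the essence of rightsizing.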

In addition to better estimating the future IT equipment load, the data center can be designed not only to accommodate current requirements, but also to allow for future growth if required. Indeed, planning for the present and the future can improve overall system efficiency. Examples of adding redundant capacity and sizing for true peak loads are as follows:48
* Upsize duct/plenum and piping infrastructure used to supply cooling to the data center.
* Utilize variable-speed motor drives on chillers, pumps for chilled and condenser water, and cooling tower fans.
* Pursue efficient design techniques such as medium-temperature cooling loops and waterside free cooling.
* Upsize cooling towers so that chiller performance is improved.

4.2.1.2 On-site Power Generation


The combination of a nearly constant electrical load and the need for a high degree of reliability make
large data centers well suited for on-site generation.49 If onsite generation is utilized for data centers,
the need to have back-up power standing by, namely the battery charged UPS, is eliminated. Since
the UPS consumes 6-18% of the total data center electricity and generates heat that must be
removed, savings from utilizing on-site generation can be substantial. In addition, on-site generation
equipment could replace any currently utilized backup generator system. There are several important
principles to consider for on-site generation: 50
* On-site generation, including gas-fired reciprocating engines, micro-turbines, and fuel cells,
improves overall efficiency by allowing the capture and use of waste heat.
" Waste heat can be used to supply cooling required by the data center with absorption or
adsorption chillers, reducing chilled water plant energy costs by well over 50%. In most
situations, the use of waste heat is required to make site generation financially attractive.
This strategy reduces the overall electrical energy requirements of the mechanical system by

47 (The Green Grid 2007)


48 (Greenburg, et al. 2007)
49 (Tschudi, et al. 2003)
50 (Greenburg, et al. 2007)

eliminating electricity use from the thermal component, leaving only the electricity
requirements of the auxiliary pump and motor loads.
* High-reliability generation systems can be sized and designed to be the primary power
source while utilizing the grid as a backup. Natural gas used as a fuel can be backed up by
propane.
* Where local utilities allow, surplus power can be sold back into the grid to offset the cost of
the generating equipment. Currently, the controls and utility coordination required to
configure a data center-suitable generating system for sellback can be complex; however,
efforts are underway in many localities to simplify the process.

4.2.1.3 Improve the Operating Efficiency of the UPS


There are two fundamental ways to increase the operating efficiency of a UPS. The first is to simply
purchase a new, best-in-class UPS, which, as shown in Figure 27, has 70% lower losses than a legacy UPS at typical loads.51 The second way to increase the operating efficiency of a UPS is to load the UPS in such a way as to maximize efficiency. As shown in Figure 28, the efficiency of the UPS depends on the difference between the IT load and the capacity of the UPS. The closer the load is to the capacity, or full power rating, of the UPS, the less electricity is wasted due to inefficiency. To increase the load on the UPS, the loads of two or more data centers may be
combined and served from one on-site UPS, assuming more than one data center exists.
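The load dependence in Figure 28 can be sketched with a simple loss model (the coefficients below are illustrative assumptions expressed as fractions of the UPS power rating, not manufacturer data): because the no-load loss is fixed, it dominates at light loads, so efficiency rises as the IT load approaches the UPS capacity.

def ups_efficiency(load_frac, no_load=0.04, prop=0.02, sq=0.01):
    """Efficiency of a UPS at a given fraction of its rating (assumed loss model)."""
    loss_frac = no_load + prop * load_frac + sq * load_frac ** 2
    return load_frac / (load_frac + loss_frac)

for pct in (10, 30, 50, 70, 90):
    f = pct / 100
    print(f"{pct:3d}% load -> {ups_efficiency(f) * 100:.1f}% efficient")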

Figure 27: UPS efficiency as a function of load, comparing latest generation UPS to historic published data52          Figure 28: Effect of internal UPS loss on efficiency53

51 (Rasmussen, Implementing Energy Efficient Data Centers 2006)
52 (The Green Grid 2009)
53 (Rasmussen, Electrical Efficiency Modeling of Data Centers 2006)
4.2.2 IT Equipment Energy Efficiency
Four ways to increase the efficiency of the IT equipment will be discussed in detail in the following
sections. As with the power delivery system, none of the methods discussed were used for the pilot
data center because they require capital and/or because existing contracts, under which Raytheon IIS
completes its work, do not allow IT equipment to be consolidated.

4.2.2.1 Replace and/or Remove Obsolete Equipment


If those servers currently being utilized in data centers were replaced with new, energy efficient
servers, 25% less energy would be consumed across a range of typical processor utilization levels. 54
It is clear that with servers, as with other technology, the energy efficiency increases with almost
every new release. However, unlike other technology, each new release is more powerful and capable
of higher computer performance, even though the size of the component is roughly the same. For
example, microprocessor performance improved 50% per year from 1986 to 2002.55 Most of the
major server and processor companies are now touting improved performance per watt -
performance improvements of 35% to 150% or more over previous generations. 56 This performance
has lead to more power in a smaller amount of space, which has lead to an increase in power
requirements and heat density of data centers. This increase in computing power has resulted in a
higher power per unit, as shown in Figure 29, which is directly related to a higher heat output per
unit. It is important to note that this heat must be removed from the data center by the heat removal
equipment.
Figure 29: Increase in power consumed per unit over time57

54 (US Environmental Protection Agency 2007)
55 (Hennessy and Patterson 2007)
56 (Loper and Parr 2007)
57 (Koomey 2007)
Storage devices are also a substantial energy consumer in data centers and are becoming more
important as more information is digitalized. Storage devices are so important that ENERGY STAR
recently announced an initiative to develop an energy efficiency standard for data center storage
devices. 58

In addition to purchasing new servers, storage equipment, and networking equipment to replace
obsolete equipment, data center managers can save a substantial amount of energy simply by
removing old, unused IT equipment from their data center. Oftentimes, new servers are installed
into data centers to replace old servers, but they often do not replace those servers immediately.
Rather, there is a transition time in which both the old and new servers are in use. Once the
transition is complete, the data center manager sometimes forgets to remove the old, unused server.
These servers will almost never be utilized (with only the occasional spikes of utilization when
standard housekeeping tasks - backups, virus scans, etc. - run); however, the machines continue to
consume power and produce heat that must be removed.59 Unused servers can be identified by
analyzing network statistics, and should be decommissioned to save energy. 60

4.2.2.2 Proactively Manage the Power Settings of the Equipment


There are two key components of energy management strategy for IT equipment. The first is to use
power management settings on the equipment, and the second is to turn off equipment when it is
not in use. Both of these tools can save a substantial amount of energy. For example, enabling
power saving architecture, which comes standard on most new IT equipment, can result in overall
power savings of up to 20%.61 As shown in Figure 30, enabling power saving mode can have a
drastic effect on the amount of power consumed by IT Equipment.

58 (Fanara 2009)

s9 (Blackburn 2008)
60 (Blackburn 2008)
61 (Blackburn 2008)
Figure 30: Comparison of power consumption at different "Performance State" settings of AMD servers62

The second way to save energy using a power management strategy is to simply turn off equipment
when it is not in use. Most IT equipment has the ability to completely shut down during a certain
time of the day (for example, 6pm-8am), and/or for certain periods of the year (for example, the
December holiday shutdown that many businesses have). However, in most cases the option must
be turned on and managed closely by the data center manager.
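A back-of-the-envelope estimate of what such a schedule is worth might look like the following sketch (all figures are hypothetical; actual savings depend on the equipment's power draw and on the shutdown windows the data center manager can tolerate).

EQUIPMENT_KW = 5.0        # hypothetical bank of non-critical servers
NIGHT_HOURS  = 14         # e.g. shut down from 6 pm to 8 am
WORK_DAYS    = 250        # nights on which the shutdown applies
HOLIDAY_HRS  = 10 * 24    # e.g. a ten-day December holiday shutdown

saved_kwh = EQUIPMENT_KW * (NIGHT_HOURS * WORK_DAYS + HOLIDAY_HRS)
print(f"Estimated avoided energy: {saved_kwh:,.0f} kWh per year")

Any such estimate should also be grossed up for the heat removal electricity avoided, since equipment that is off produces no heat.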

4.2.2.3 Server Consolidation & Virtualization


As shown in Figure 31 and Figure 32, most servers typically operate in an energy inefficient range, in
which they consume between 60 to 90% of their maximum system power. 63

62 (Blackburn 2008)
63 (US Environmental Protection Agency 2007)
Figure 31: Average CPU utilization of more than 5,000 servers during a six-month period. Servers are rarely completely idle and seldom operate near their maximum utilization.64

Figure 32: Server power usage and energy efficiency at varying utilization levels, from idle to peak performance. Even an energy-efficient server still consumes about half its full power when doing virtually no work.65

Two solutions exist to increase the utilization and thus energy efficiency of IT equipment - physical
consolidation and computer consolidation (virtualization). Several examples of physical
consolidation are shown in Figure 33. Physical consolidation increases both operating efficiency and
heat removal efficiency since the consolidation increases the utilization of the servers in use, and
allows data center managers to decrease the areas of the data center in which heat must be removed.
In addition, some of the extremely underutilized servers may be either decommissioned or used for a different application, saving the data center manager energy usage and required expenditures
for new servers.

Virtualization, as shown in Figure 34, is a type of consolidation that allows organizations to replace
several dedicated servers that operate at a low average processor utilization level with a single "host"
server that provides the same services and operates at a higher average utilization level.66
Virtualization may offer significant energy savings for servers because although many services are
only occasionally busy, the power consumed by the idle hardware is almost as much as the power
required for active operation. 67 When these servers are virtualized, a smaller number of servers can
provide significant energy savings because fewer servers are required to meet the computing

64 (Google 2007)
65 (Google 2007)
66 (US Environmental Protection Agency 2007)
67 (Hirstius, Jarp and Nowak 2008)
requirements, and each of the servers operates at a higher utilization and therefore higher energy
efficiency.
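The following sketch illustrates the energy argument for virtualization (the server counts, power ratings, and utilizations are hypothetical; the only figure taken from the text is the observation in Figure 32 that an idle server draws roughly half its full power).

import math

def server_power(peak_w, utilization, idle_frac=0.5):
    """Per Figure 32, an idle server draws about half its peak power (assumed model)."""
    return peak_w * (idle_frac + (1 - idle_frac) * utilization)

N_SERVERS, PEAK_W, AVG_UTIL = 20, 300.0, 0.10   # hypothetical dedicated servers
TARGET_UTIL = 0.60                               # utilization target per virtualization host

before_w = N_SERVERS * server_power(PEAK_W, AVG_UTIL)
hosts = math.ceil(N_SERVERS * AVG_UTIL / TARGET_UTIL)   # hosts needed for the same work
after_w = hosts * server_power(PEAK_W, TARGET_UTIL)

print(f"Before: {N_SERVERS} servers, {before_w / 1000:.2f} kW")
print(f"After:  {hosts} hosts, {after_w / 1000:.2f} kW "
      f"({100 * (1 - after_w / before_w):.0f}% reduction in IT power)")

The savings compound, because the eliminated IT power is also heat that the cooling system no longer has to remove.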

Figure 33: Various forms of consolidation68          Figure 34: Diagram illustrating virtualization69

4.2.3 Heat Removal Energy Efficiency


Improvements to the heat removal equipment can drastically affect the overall energy efficiency of a
data center. Three ways to increase the efficiency of the heat removal equipment will be discussed in
detail in the following sections. As with the previous improvement opportunities, "use in rack water
cooling" and "use cooling economizers" require substantial capital and are difficult to implement in
an existing data center. However, unlike the previous improvements, "improve airflow
management," is low-cost and was utilized in the pilot data center.

4.2.3.1 Use In-rack Water Cooling


Liquid cooling, as shown in Figure 35, can be far more efficient for removing heat from data centers
since it eliminates the mixing of cold and hot air, and due to the much higher volumetric specific
heats and higher heat transfer coefficients of liquid. 70 For example, in large data centers, air cooling
systems often require an amount of electricity equivalent to almost 100% of the IT load to remove

68 (American Power Conversion (APC) n.d.)


69 (Williams 2006)
70 (Greenburg, et al. 2007)
the heat; by contrast, chilled water systems will require only about 70% of the system wattage to
remove the heat.71

Figure 35: Example of how a liquid-cooling system could be configured in a data center72

There are three major barriers that must be overcome to make liquid cooling more prominent in data
centers, as follows:
1) the established mindset that air cooling is the best practice in data centers.
2) the fear that liquid cooling can destroy equipment if failure occurs.
3) the new infrastructure, including a new plumbing system, which must be put into place to
support a liquid cooling strategy.

If these barriers can be overcome or mitigated, the potential savings from direct, liquid cooling are
substantial. Not only can the computer room air conditioners (CRAC) be eliminated, but also in
some environments the water from a cooling tower is cold enough without mechanical chilling to remove the heat from data centers.73

4.2.3.2 Use Cooling Economizer


There are two types of cooling economizers that can be deployed in data centers (water-side and air-
side) to reduce the electricity required by the data center. Water-side economizers save electricity by
reducing the mechanical chilling of water required. This reduction is achieved by supplying chilled

71 (Sawyer 2004)
72 (Hwang 2009)
73 (Greenburg, et al. 2007)
water directly from a cooling tower to the CRAC units in order to remove heat from the data center,
as shown in Figure 36.

H"s from

Air EconomizerCuprmn
Hot air isflushed outside, and
outside ar isdrawn in Emini
AkA
3 ~~n tc 11
17 __
41

Figure 36: Diagram of a Chilled Water Figure 37: Diagram of a Data Center relying
System using chilled water directly from the only on heat removal through an Air-Side
cooling tower to remove heat from the Data Economizer 75
Center74

Water-side economizers are best suited for climates that have wet bulb temperatures lower than 55°F for 3,000 or more hours per year, and can reduce the chilled water plant energy consumption by up to 75%.76 While water-side economizers are considered low-risk by data center professionals, these same professionals are split on the risk when using air-side economizers.77

As shown in Figure 37, air-side economizers simply transport cool outside air into the data center
instead of using chilled water to remove heat from the data center. Heat produced from the IT
equipment is flushed outside. Two characteristics of the outside air raise concern with data center
professionals - humidity and contamination. However, tests have shown that these fears may be
unjustified, and that air-side economizers can reduce the electricity consumption of a data center by
60%.78

74 (Evans, The Different Types of Air Conditioning Equipment for IT Environments 2004)
75 (Atwood and Miner 2008)
76 (Greenburg, et al. 2007)
77 (Greenburg, et al. 2007)
78 (Atwood and Miner 2008)
4.2.3.3 Improve Airflow Management
The goal of air management in a data center is to separate the hot air generated from the IT
equipment from the cold air that cools the IT equipment. Airflow management is one of the most
practical ways to improve the overall energy efficiency of data centers, and very large increases in
energy efficiency can result from robust airflow management. There are several reasons to resolve
heat removal problems in the data center, as follows:79
* There are practical, feasible, and proven solutions.
* Many fixes can be implemented in existing data centers.
* Large improvements (20% or more) can result from little to no investment.
* Both IT people and facilities people can contribute to fixing the problem.
* Solutions are independent of facility or geographic location.
* They lend themselves to correction through policies that are simple to implement.

The potential solutions to heat removal problems are described below, and are organized from
relatively difficult to easy solutions.

4.2.3.3.1 Floor Layout


If airflow is to be managed, the first step is to separate the hot air generated from the IT equipment
from the cold air that is meant to cool the equipment. The first step that must be taken to achieve
this is to design the data center in the correct layout. Eighteen percent of the total 51% savings in
data center electricity is due to improving the floor layout, which includes hot/cold aisle arrangement
and the location of perforated floor and ceiling tiles. 80

4.2.3.3.1.1 Hot/Cold Aisle Configuration


As shown in Figure 38, if the racks containing IT equipment are not properly arranged, hot air from
the racks mixes with the cold air coming from the raised floor, resulting in warm air entering the IT
equipment. This warm air is not able to cool the IT equipment efficiently, which results in a
requirement for colder air from the CRAC, which in turn requires additional chilled water and
therefore electricity.

79 (Rasmussen, Avoidable Mistakes that Compromise Cooling Performance in Data Centers and

Network Rooms 2008)


80 (Rasmussen, Implementing Energy Efficient Data Centers 2006)
Figure 38: Rack arrangement without hot and cold aisles (side view). Red indicates hot air, while blue indicates cold air.

Figure 39: Hot/cold aisle rack arrangement (side view). Red indicates hot air, while blue indicates cold air.

Other problems exist in the arrangement shown in Figure 38. For example, because the hot air is not
evacuated into the plenum, it must travel directly over the racks in order to return to the CRAC. As
this hot air travels from rack to CRAC, there are many opportunities for it to take a path of lesser
resistance. Unfortunately, oftentimes these paths of lesser resistance are into another rack. This
situation also arises because the rack is pulling in air in an attempt to cool the IT equipment. Not
only does this create the same situation previously discussed, but it also leads to a situation of the hot
air not returning to the CRAC, which leads to inefficiencies in the operation of the CRAC. If the
incoming air to the CRAC does not meet the CRAC's minimum incoming air temperature setpoint,
the electric strip heaters inside the CRAC will heat up the incoming air until the incoming
temperature requirement is eventually met.

A much better arrangement for a data center is shown in Figure 39, in which the inlets of the rack are
facing each other, and the outlets of racks are coordinated so that mixing of hot and cold air is not
possible. In addition, perforated ceiling tiles are added in order to utilize the ceiling plenum both as a
way to create a path of least resistance for the hot air and also as a path for the hot air to flow back
to the CRAC. This hot/cold aisle arrangement meets the basic goal of segregating the cold and hot
air.81 Additional improvements, such as proper placement of perforated floor and ceiling tiles, can be
made to improve the airflow management once the hot/cold aisle arrangement is set-up.

4.2.3.3.1.2 Perforated Floor and Ceiling Tiles


Proper placement of perforated floor and ceiling tiles help ensure that segregated cold and hot air do
not mix. The key to perforated floor tiles is to place them as close to the equipment air intakes as

81 (The Green Grid 2007)


possible to keep the cold air in the cold aisles only; indeed, poorly located perforated floor tiles are
very common and erase almost all of the benefit of a hot/cold aisle layout. 82

Two mistakes are common when it comes to perforated floor tile placement. The first is placing the
perforated tile anywhere besides the cold aisle. One example of this situation, with the perforated
floor tile, or "floor vent", placed too close to the CRAC unit, is shown in Figure 40. This situation
creates a "short circuit", in which the cold air from the CRAC returns to the CRAC unit without
cooling the IT equipment. As discussed previously, this returning cold air is reheated by the CRAC,
which is a waste of electricity.

Figure 40: Example of poor placement of perforated floor tiles, or "floor vents", as they are referred to here; vents placed too close to the CRAC "short circuit" cold air, leaving less cold air for other vents83

Figure 41: Diagram showing heat removal capacity of perforated floor tiles depending on the tile's airflow (CFM)84

The second common mistake made with perforated floor tile is that the incorrect number of floor
tiles is placed within the cold aisle. As shown in Figure 41, the perforated tile's airflow is directly
related to how much heat it can remove from IT racks. Therefore, the perforated tiles should be

placed in such a way as to remove the heat from the areas in which it is generated. For example, a rack
that generates 1 kW of heat (consumes 1 kW of electricity) should have cold air flowing at 160 CFM
in front of the rack. This airflow could be achieved either from one tile directly in front of the rack,
or from two floor tiles in the cold aisle near the rack.
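The 160 CFM figure follows from the standard sensible-heat airflow relation, CFM = (BTU/hr) / (1.08 × ΔT°F); the sketch below assumes a 20°F supply-to-return temperature rise, which is an assumption for illustration rather than a value given in the thesis.

def required_cfm(rack_kw, delta_t_f=20.0):
    """Cold airflow needed to carry away a rack's heat (assumed 20 F temperature rise)."""
    btu_per_hr = rack_kw * 3412
    return btu_per_hr / (1.08 * delta_t_f)

for kw in (1, 3, 5, 10):
    print(f"{kw:2d} kW rack -> {required_cfm(kw):5.0f} CFM of cold air")

A 1 kW rack works out to roughly 160 CFM, consistent with the figure quoted above; higher-density racks scale proportionally.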

82 (Rasmussen, Avoidable Mistakes that Compromise Cooling Performance in Data Centers and

Network Rooms 2008)


83 (Neudorfer 2008)
84 (Dunlap 2006)
To ensure that perforated tile placement is correct, many data center managers use computational
fluid dynamics (CFD) to "tune" floor tile placement and the percent that the tiles are open. 85

4.2.3.3.2 Computer Room Air Conditioning Modifications


The second important aspect of data center airflow management involves the computer room air
conditioners (CRACs). While the floor layout is very important, approximately 25% of the total 51%
potential savings in data center electricity is attributable to the CRAC improvements. These
improvements include the placement of the CRACs in the data center, the use of chimneys to create
closed loop airflow, and the coordination of all CRACs in the data center.86

4.2.3.3.2.1 Placement of CRACs in Data Center


CRAC units should be placed perpendicular to the racks, as shown in Figure 42. If there is a raised
floor used to transport the cold air to the racks, then the CRACs should be located at the end of the
hot aisles; while a non-raised floor data center should have the CRACs at the end of the cold aisle. 87


Figure 42: Rack arrangement showing layout of CRAC units in relation to the hot/cold aisle
(top view). This diagram is for a raised floor data center. 88

This alignment of the aisle with the CRACs ensures that the incoming air into the CRACs will be as
warm as is possible, which will increase the efficiency and therefore the capacity of the CRACs. In
addition, this aisle/CRAC alignment minimizes the opportunity for hot and cold air mixing,
especially if perforated ceiling tiles are placed in the hot aisles. In addition to aisle/CRAC alignment,
CRACs should be placed so the path between the CRAC and the cold aisle it is serving is minimized.

85 (The Green Grid 2007)


86 (Rasmussen, Implementing Energy Efficient Data Centers 2006)
87 (Dunlap 2006)
88 (Dunlap 2006)
The shorter the distance from the CRAC to the cold aisle, the less the fan power that must be used
to "push" the cold air from the CRAC to the racks.

4.2.3.3.2.2 CRAC Chimneys


Another method that can be used to form closed-loop airflow in a data center is to install a
"chimney" from the CRAC to the ceiling plenum, as shown in Figure 43. The chimney should only
be installed if heat generated from the racks is evacuated into a ceiling plenum. As with the
placement of the CRAC, chimneys help ensure that the hottest air possible is returning to the CRAC,
which in turn increases the capacity and efficiency of the CRAC.

Figure 43: Example of CRAC chimneys, which are circled in green 89

4.2.3.3.2.3 Coordinate Air Conditioners


Perhaps the easiest way to reduce electricity usage in a data center is to coordinate the CRACs. As
shown in Figure 43, there are oftentimes several CRACs serving a data center. It is extremely
common in such cases for two CRAC units to be wastefully fighting each other to control humidity,
which occurs if the return air to the two CRAC units is at slightly different temperatures, if the
calibrations of the two humidity sensors disagree, or if the CRAC units are set to different humidity
settings. 90 The problem of CRAC fighting can be a 20-30% reduction in the efficiency of the

89 (Hirstius, Jarp and Nowak 2008)


90 (Rasmussen, Avoidable Mistakes that Compromise Cooling Performance in Data Centers and
Network Rooms 2008)
CRACs, and even worse can result in downtime due to insufficient cooling capacity.91 There are four ways to correct CRAC fighting, as follows:92
1) Implement central humidity control.
2) Coordinate humidity control among the CRAC units.
3) Turn off one or more humidifiers in the CRACs.
4) Use deadband settings (when the deadband setting is set to +/-5% the problem will usually
be corrected).

4.2.3.3.3 Rack Improvements


The third important aspect of data center airflow management involves the individual racks of IT
equipment. While the floor layout is very important, approximately 8% of the total 51% potential
savings in data center electricity is attributable to the rack improvements, which includes the use of
cable cut-outs and blanking panels; the elimination of gaps between racks; and the minimization of
obstructions in the airflow. 93

4.2.3.3.3.1 Cable cutouts


One of the best examples of the need for behavior change in data centers is a requirement for cable
cutouts. Even though measurements have shown that 50-80% of available cold air escapes
prematurely through unsealed cable openings,94 many data centers still have large, unblocked holes
for transporting cables. As shown in Figure 44, these holes allow cold air from the raised floor to
flow in undesired locations, such as the hot aisle, instead of reaching the IT equipment. The result of
this waste of cold air is additional requirements on the CRAC units, which uses additional electricity.

91 (Dunlap 2006)
92 (Rasmussen, Avoidable Mistakes that Compromise Cooling Performance in Data Centers and

Network Rooms 2008)


93 (Rasmussen, Implementing Energy Efficient Data Centers 2006)
94 (Dunlap 2006)
Figure 44: Example of poor placement of perforated floor tiles, or "floor vents", as it is referred to here95          Figure 45: Example of a cable cutout that prevents the waste of cold air96

One way to prevent the waste of costly cold air is to use air-containment, brush-type collar kits, as
shown in Figure 45.97 While this technology is well established, it is not fully deployed because data
center managers oftentimes do not realize the impact that cable cut-outs have on the airflow and
electricity usage. Another way to prevent the waste of costly cold air is to run all wires (networking
and power) in the data center above the racks, which prevents the needs for cable cut-outs in floor
tiles. This solution is easiest to implement when the data center is first designed and built, since
moving the power and networking cables requires all IT equipment to be powered off.

4.2.3.3.3.2 Blanking Panels and Blanking Racks


Another common problem seen in data centers is missing blanking panels. Blanking panels are
plastic pieces that match the width and height dimensions of a standard rack insert. The purpose of
blanking panels is to eliminate a path for hot air generated from the rack to return to the front/input
of the IT equipment, as shown in Figure 46. In addition to missing blanking panels, sometimes
complete racks are missing from a row of racks, which has the same re-circulation effect as a
missing blanking panel, as shown in Figure 47.

95 (Neudorfer 2008)
96 (The Green Grid 2009)
97 (Neudorfer 2008)
Figure 46: Example of rack without blanking panels (left) and with blanking panels (right)98          Figure 47: Missing racks in a row of racks allow hot air to re-circulate

If hot air is allowed to re-circulate and re-enter the IT equipment, an overheating of the equipment
can occur. In addition, when the hot air generated by the IT equipment is re-circulated, it does not
return to the CRAC, which lowers the CRAC's efficiency and cooling capacity. There are two
reasons that blanking panels and blanking racks are not fully deployed, as follows: 99
1) Data center managers believe they serve only aesthetic purposes.
2) They are difficult and time consuming to install (however, new snap-in blanking panels can
help alleviate this problem).

Both of these factors are not technical in nature, and require behavior change and aligned incentives
in order to ensure that blanking panels and blanking racks are used.

4.2.3.3.3.3 Minimize Obstructions to Proper Airflow


The presence of obstructions, as illustrated in Figure 48 and Figure 49, has a great influence on
airflow uniformity.100 Examples of raised floor obstructions include network and power cables currently in use as well as, less desirably, abandoned network and power cables, trash, and dirt.

98 (Rasmussen, Implementing Energy Efficient Data Centers 2006)

99 (Rasmussen, Improving Rack Cooling Performance Using Airflow Management Blanking Panels,
Revision 3 2008)
1o (VanGilder and Schmidt 2005)
Figure 48: Power supplies acting as a raised floor obstruction101          Figure 49: Network cabling acting as a raised floor obstruction102

One of the best ways to minimize raised floor obstructions is through frequent or interval cleaning
and inspection. Another way to minimize raised floor obstructions is to run all wires (networking
and power) in the data center above the racks. As stated earlier, this solution is easiest to implement
when the data center is first designed and built, since moving the power and networking cables
requires all IT equipment to be powered off.

4.2.4 Miscellaneous Equipment Energy Efficiency


As discussed in Section 2.2.2, miscellaneous equipment accounts for 2-3% of the total electricity
consumed by data centers. Miscellaneous equipment is comprised of lighting, printers, copiers, and
other equipment that may be inside the data center. Over time, most data center managers have tried
to minimize the amount of non-essential equipment contained in data centers, since space is at a
premium in many data centers, and since any non-essential equipment clutters the data center and
obstructs efficient airflow. Therefore, there are two ways to increase the efficiency of the
miscellaneous equipment - remove it from the data center and focus on the efficiency of the data
center lighting.

Lighting in data centers often provides an opportunity for electricity savings. Relatively low-cost
timers and motion-activated lighting can reduce the amount of time that lights are on. The impact of

101 (Phelps 2009)


102 (The Green Grid 2009)
lighting is twofold, since lights not only consume electricity, but also produce heat that must be
removed by the heat removal equipment. 103

4.3 Conceptual Framework


Since the focus of this thesis is to improve the electricity efficiency of existing data centers, best
practices for the design of a new data center, as shown in Table 3, were not used in the pilot data
center.

Table 3: Strategies for Reducing Electricity Consumption in New Data Centers

Data Center Component | Improvement Description                                  | Capital Required1 | Labor Required1 | Behavior Change1
Power Supply          | Rightsizing                                              |                   |                 |
Power Supply          | Add on-site power generation to reduce or eliminate UPS |                   |                 |
Heat Removal          | Use in-rack water cooling                                |                   |                 |
Heat Removal          | Use Cooling Economizer                                   |                   |                 |

1 Red = "Yes", Yellow = "In some Instances", Green = "No"
2 (Rasmussen, Implementing Energy Efficient Data Centers 2006)
3 Assumed inefficiency of UPS was eliminated (Rasmussen, Electrical Efficiency Modeling of Data Centers 2006)

On the other hand, strategies that could be used to improve the electricity efficiency of existing data
centers are shown in Table 4. Of the improvement opportunities listed, only "improve airflow
management" and "install energy efficient lighting" were piloted. Improving the UPS was not an
option because the data center did not utilize a UPS. However, the efficiency of the transformer
utilized by the pilot data center was calculated for information purposes only. Consolidation,
replacing obsolete equipment, and virtualization were not piloted because each of these methods
requires capital, and because the IT equipment contained in the pilot data center was governed by
existing contracts between Raytheon and the US Government. While Raytheon should pursue
modifying existing contracts and putting emphasis on IT equipment flexibility, doing so was out of the
scope of this thesis. Managing the IT power consumption was not included in the pilot because
proprietary agreements prevented the analysis of the power settings on the individual IT equipment
contained in the data center.

103 (The Green Grid 2007)


Table 4: Strategies for Reducing Electricity Consumption of Existing Data Centers

Improvement Description           | Capital Required1 | Labor Required1 | Behavior Change1 | Piloted? | Savings2
Improved UPS                      |                   |                 |                  | No       | 4-10%
Consolidation                     |                   |                 |                  | No       | unknown
Replace obsolete equipment        |                   |                 |                  | No       | 30-50%
Virtualization                    |                   |                 |                  | No       | 10-40%
Manage power                      |                   |                 |                  | No       |
Improve airflow management        |                   |                 |                  | Yes      | 16-51%3
Install Energy Efficient Lighting |                   |                 |                  | Yes      | 1-3%

1 Red = "Yes", Yellow = "In some Instances", Green = "No"
2 (Rasmussen, Implementing Energy Efficient Data Centers 2006)
3 Sum of More Efficient Air Conditioner Architecture, More Efficient Floor Layout, Coordinate Air Conditioners, Locate Vented Floor Tiles Correctly, Reduce Gaps between racks/floors/walls, and Install Blanking Panels and Floor Grommets

Of the two remaining improvement opportunities ("improve airflow management" and "install
energy efficient lighting"), a phased process was developed based on the difficulty and cost of
implementing each improvement. As shown in Figure 50, there are short-term and medium/long
term items within the improvement process. Because of time and budget constraints, only short-
term, low-cost improvements were implemented in the pilot data center and the following data was
analyzed to understand the impact of each improvement phase:
1) Airflow in front of racks
2) Air temperature at input and output of representative racks
3) Air temperature in plenum and near ceiling
4) Power load of CRACs
5) Instantaneous chilled water
Figure 50: List of improvements for existing data centers. The short-term, low-cost improvements, which require minimal time and investment, include setting all CRAC units to the same set points (Easy), turning off reheat on the CRAC units (Easy), and adding a light on/off switch (Easy); items are rated (E) Easy or (D) Difficult.


5 Results and Discussion of Piloted Improvement Process
5.1 Overview of Pilot Data Center
The pilot data center at the Raytheon Intelligence and Information Systems Garland site is
approximately 1500 square feet, while the room could be about 1200 square feet if the space was
optimally used. The size of the data center qualifies it as a "Localized Data Center", and a
comparison of the actual and predicted characteristics is shown in Table 5.

Table 5: Comparison of typical and actual characteristics of the pilot data center

Typical System Characteristics104 | Actual Characteristics of Pilot Data Center | Comparison
Under-floor or overhead air distribution systems and a few in-room CRAC units | Under-floor air distribution, and four in-room CRAC units | Same
CRAC units are more likely to be water cooled and have constant-speed fans, and are thus relatively low efficiency | CRAC units utilize chilled water from the central chilled-water plant for cooling, and have constant-speed fans | Different
Operational staff is likely to be minimal, which makes it likely that equipment orientation and airflow management are not optimized | 1-4 employees manage the IT equipment, but are not responsible for energy efficiency | Same
Air temperature and humidity are tightly monitored | Air temperature and humidity are monitored 24/7 for out-of-range conditions | Same
Power and cooling redundancy reduce overall system efficiency | Two CRACs would probably meet requirements; four are present | Same

The data center is a test and development area, and some racks leave the room after they have been
tested. Because of this testing environment, the number of racks varies from 45-55. There is an 18-
inch raised floor and ceiling plenum in the room. Finally, as shown in Figure 51, there are four
peripheral computer room air conditioning units (CRACs) in the room, which are connected to
the building's chilled-water system.


Figure 51: Diagram of Pilot Data Center

104 (Bailey, et al. 2007)


In addition, as shown in Figure 51, there are several areas of the room filled with non-essential
equipment. For example, several computers, shelves, work benches, and cabinets are contained
around the exterior of the room. The reason that the data center contains this non-essential
equipment is because the room is a Sensitive Compartmented Information Facility (SCIF), and
equipment is not easily taken in and out of the data center. For example, to avoid going through the
security protocol each time a new rack needs to be assembled, the equipment required to assemble
the rack is simply left in the room after assembly, so that it can be used the next time a rack is built.
There is an additional, important aspect of the data center since it is a SCIF - it must be essentially
self-sustaining. As the Physical Security Standards for Special Access Program Facilities states,105
"walls, floor and ceiling will be permanently constructed and attached to each other. To provide
visual evidence of attempted entry, all construction, to include above the false ceiling and below a
raised floor, must be done in such a manner as to provide visual evidence of unauthorized
penetration." This requirement prevents the data center from using an air economizer; indeed, the
only material that enters and exits the room is the chilled water from the building's chilled-water
system.

5.2 Baseline Condition of Pilot Data Center

5.2.1 Power Delivery and IT Equipment Electrical Load


To understand the current state of the data center, baseline data was gathered. The first data that
was collected was the electrical load of the room's IT equipment. As shown in Figure 52, the IT load
was very consistent.

Figure 52: Current profile of the IT equipment over a one-week period (RMS and peak current, 9/29/09-10/05/09, 24 hours per division)

105 (Joint Air Force - Army - Navy 2004)


The power for the IT equipment enters at 480V and is reduced to 220V by the power distribution unit
(PDU). The 220V supply is fed directly to the electrical panels that supply electricity to the IT
equipment. As shown in Table 6, the load is about 141 kW, and the transformer is extremely
efficient (96.6%) at transforming the electricity from 480V to 220V.

Table 6: Electrical load of the IT equipment in the pilot data center

480V side
        | Min Current (Arms) | Max Current (Arms) | Avg Current (Arms) | Average Voltage | Average Load (kW) | Daily Load (kWh)
Phase A | 171                | 198                | 184.5              | 279.2           | 52                | 1236
Phase B | 147                | 174                | 160.5              | 279.2           | 45                | 1075
Phase C | 168                | 190                | 179                | 279.2           | 50                | 1199
Total   |                    |                    |                    |                 | 146               | 3511

220V side
        | Min Current (Arms) | Max Current (Arms) | Avg Current (Arms) | Average Voltage | Average Load (kW) | Daily Load (kWh)
Phase A | 396                | 418                | 407                | 120.15          | 49                | 1174
Phase B | 352                | 366                | 359                | 120.15          | 43                | 1035
Phase C | 398                | 422                | 410                | 120.15          | 49                | 1182
Total   |                    |                    |                    |                 | 141               | 3391

Transformer Efficiency: 96.6%
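The Table 6 figures combine as shown in the short sketch below, which is a reconstruction of the calculation under the assumption that per-phase power is average RMS current times the measured phase voltage, and that transformer efficiency is the 220V-side power delivered to the panels divided by the 480V-side input.

def three_phase_kw(avg_amps, phase_voltage):
    """Sum of per-phase power: average RMS current times measured phase voltage."""
    return sum(a * phase_voltage for a in avg_amps) / 1000.0

input_kw  = three_phase_kw([184.5, 160.5, 179.0], 279.2)   # 480 V side of the PDU
output_kw = three_phase_kw([407.0, 359.0, 410.0], 120.15)  # 220 V side feeding the panels

print(f"480 V side: {input_kw:.0f} kW, 220 V side: {output_kw:.0f} kW")
print(f"Transformer efficiency: {100 * output_kw / input_kw:.1f}%")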

5.2.2 Heat Removal Equipment Electrical Load


The cooling system for the pilot data center consists of four CRACs, each fed by the building chilled-
water system. One problem with determining the efficiency of the data center's overall cooling
system is that the data center does not occupy the entire building. In fact, two other data centers are
contained in the same building as the pilot data center. In addition, about one-half of the two-floor
building is comprised of office space. To further complicate the calculation of electricity required to
supply chilled water to the pilot data center, the building's chilled-water system is linked to another
building's chilled-water system - both systems contain both a cooling tower and a chiller. Therefore,
for the purpose of this pilot, a general efficiency value of 0.53 kW per ton of cooling was used based on modern, large-scale chilled-water systems.106 This value includes the electricity required to produce the chilled water (cooling tower + chiller). Note that analysis of this value can
be seen in Appendix V - Sensitivity of Chiller's Assumed Electricity Consumption.

The baseline condition of the pilot data center's cooling system is one of severe underutilization.
Each of the four units is nominally rated at 422,000 BTU/hr or 124 kW of cooling, so there is almost

106 (The Green Grid 2009)



500 kW of cooling capacity in the data center. Since the IT load is 141 kW (reference Table 6), there
is 3.5 times the amount of cooling capacity needed in the pilot data center. In addition to calculating
the cooling capacity and comparing it to the IT-generated heat load, the load required to run the data
center's CRAC units was monitored with a submeter. As shown in Figure 53, the load, which results
from fan power and electric strip heaters (to reheat/humidify), is very cyclical. The average electrical
load was calculated to be about 73 kW.


Figure 53: Current of the data center CRAC units on a three day (left) and one day (right)
scale. Highlighted area in three day chart is area of one day chart.

This cyclical nature of the electrical load on the CRAC units indicates that the units are cycling
between humidifying and dehumidifying functions. However, because this data center is closed to
the outside atmosphere, there is no need to control the humidity. Since humidity is not a factor, the
CRAC units were overcooling the room, which resulted in a need to reheat the air that cycled
through the CRAC unit, only to remove the heat prior to sending the air into the raised floor. In
addition to measuring the electricity required by the CRACs, the chilled water valves on the units
were monitored, as shown in Table 7. The measurement showed that the chilled water required was
65 tons, which according to the calculations shown in Table 7, required 34.7 kW of electricity to
produce. Therefore, the total electricity required for heat removal equipment is 108 kW.

Table 7: Chilled water usage and calculated electrical loads


 CRAC    Chilled Water Valve (% opened)    Equivalent Chilled Water Tonnage1
 A       55%                               19
 B       31%                               11
 C       54%                               19
 D       47%                               16
 Total Tonnage of Chilled Water Required   65

 Measured: Electrical Load from CRAC Submeter       73 kW
 Calculated: Electrical Load for chilled water      35 kW
 Total electricity required for heat removal        108 kW

 1 Each CRAC has a capacity of 35 tons
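The arithmetic behind Table 7 can be made explicit with the short sketch below. It assumes that each CRAC's chilled-water draw scales linearly with its valve position (a fully open 35-ton valve corresponds to 35 tons) and applies the assumed plant efficiency of 0.53 kW per ton; both assumptions follow the text above, and the variable names are mine.

# Sketch of the baseline heat-removal electricity estimate (Table 7).
CRAC_CAPACITY_TONS = 35
KW_PER_TON = 0.53                # assumed chilled-water plant efficiency (cooling tower + chiller)

valve_positions = {"A": 0.55, "B": 0.31, "C": 0.54, "D": 0.47}   # baseline valve openings

total_tons = sum(pct * CRAC_CAPACITY_TONS for pct in valve_positions.values())   # ~65 tons
chilled_water_kw = total_tons * KW_PER_TON                                        # ~35 kW of plant electricity

crac_fan_kw = 73                                      # submetered CRAC load (fans, reheat, humidification)
heat_removal_kw = crac_fan_kw + chilled_water_kw      # ~108 kW total for heat removal

print(f"Chilled water: {total_tons:.0f} tons -> {chilled_water_kw:.0f} kW")
print(f"Total heat-removal electricity: {heat_removal_kw:.0f} kW")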
5.2.3 Miscellaneous Equipment Electrical Load
The only additional equipment that required electricity in the data center was the lighting. The data
center contained 28 four-foot T8 lamp fixtures. Since each fixture containing four T8 bulbs requires
96 W of electricity, the total load for the lighting in the pilot data center was 2.7 kW. This electricity
was required 24 hours a day, 7 days a week because there was no light switch present in the data
center.

5.2.4 Summary of Baseline Electrical Load for Data Center


A summary of the baseline electrical load is shown in Table 8.

Table 8: Total baseline electrical load for the pilot data center
 Component                                 Electrical    Daily Electricity    Baseline %
                                           Load (kW)     Usage (kWh/day)
 Power Delivery System (loss from
 transformer)                              5             120                  2
 IT Equipment                              141           3384                 55
 CRAC                                      73            1752                 28
 Chilled Water                             35            840                  14
 Lighting                                  2.7           65                   1
 Total                                     256.7         6161                 100

In order to understand the electricity measurements and the data center efficiency, the power usage
effectiveness (PUE), as illustrated in Figure 54, was calculated for the baseline condition of the pilot
data center. A mathematically ideal PUE would be 1.0, in which 100% of the power from the utility
is delivered to the IT equipment, with none being used by the infrastructure. Any value over 1.0
represents the "infrastructure tax" to run the data center. 107

107 (The Green Grid 2009)


[Figure 54 diagram: data center total input power versus useful output, with Efficiency = Useful Output / Total Input. Figure 55 pie chart: baseline electricity usage breakdown for the pilot data center.]

Figure 54: Detail of power consumption in a data center efficiency model108
Figure 55: Baseline electricity usage for the pilot data center

The average PUE for industry data centers is 2.5, and data centers with the most efficient equipment
and no over provisioning of capacity have been able to reach a PUE of 1.6.109 The pilot data center,
which has an electricity breakdown as shown in Figure 55, has a PUE of 1.82. However, there is a
caveat to this value: there is no UPS in the pilot data center. The UPS in data centers can consume
as much as 25% of the total electricity required for the data center.110 In any case, the PUE provided
a method to compare the effectiveness of the improvement process as it was implemented in the
pilot data center. In addition to electrical load, airflow and air temperatures were measured in order
to determine what effect the improvement process had on these critical aspects of the data center.
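As a minimal sketch of the PUE calculation, the snippet below divides the total facility load by the IT load using the baseline values summarized in Table 8; the dictionary structure is mine, while the values are the measured ones.

# Baseline PUE = total facility power / IT equipment power.
baseline_loads_kw = {
    "transformer_loss": 5,
    "it_equipment": 141,
    "crac": 73,
    "chilled_water": 35,
    "lighting": 2.7,
}

total_facility_kw = sum(baseline_loads_kw.values())              # ~256.7 kW
pue = total_facility_kw / baseline_loads_kw["it_equipment"]      # ~1.82 for the baseline condition

print(f"Total facility load: {total_facility_kw:.1f} kW, PUE = {pue:.2f}")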

5.2.5 Baseline Airflow Measurements


Airflow is an important measurement because it indicates the amount of cold air that is reaching the
IT Equipment from the CRAC units. In the case of a raised floor, airflow measurements can help a
data center manager determine if the CRAC units are pressurizing the raised floor enough, too much,
or optimally. As shown in Figure 56, the airflow is very dependent on whether a perforated floor tile
is in front of the rack (note the location in Figure 51). However, even those racks that have
perforated floor tiles directly in front of them do not exhibit the same airflow. This variation is due
to the location in the room, as well as the size and number of holes on the perforated tile. The
average value of the non-zero airflows for the baseline condition was 683 CFM.

108 (Rasmussen, Electrical Efficiency Modeling of Data Centers 2006)


109 (Stansberry 2008)

110 (The Green Grid 2007)


[Figure 56 chart: "Airflow as a Function of Rack Number" for the baseline condition. Figure 57 chart: distribution of per-tile airflow compared with the ~435 CFM required for an average 2.76 kW rack.]

Figure 56: Variance in airflow in the pilot data center (baseline condition)
Figure 57: Comparison of actual and required airflow for pilot data center (baseline condition)

As shown in Figure 57, the baseline airflow (683 CFM) was much higher than required to remove the
heat generated by the IT equipment (141 kW/51 racks, or an average of 2.76 kW/rack) in the pilot
data center. In fact, the airflow was 167% of the required airflow. This condition is not surprising,
considering that the four CRACs serving the pilot data center had the capacity to provide 3.5 times
the amount of cooling required according to the IT equipment load.
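The ~435 CFM per rack requirement shown in Figure 57 can be approximated with the common sensible-heat relation CFM ~ 3.16 x W / dT(°F). The sketch below assumes a 20°F air temperature rise across the IT equipment; that rise is my assumption for illustration and is not stated in the text, so the exact ratio of measured to required airflow depends on it.

# Approximate airflow needed to carry away the per-rack heat load, assuming a 20 F rise.
def required_cfm(heat_load_w, delta_t_f=20.0):
    """Airflow (CFM) needed to remove heat_load_w watts at the given temperature rise."""
    return 3.16 * heat_load_w / delta_t_f

rack_load_w = 141_000 / 51               # ~2.76 kW per rack
needed_cfm = required_cfm(rack_load_w)   # ~435 CFM per rack, matching Figure 57 to within rounding
measured_avg_cfm = 683                   # baseline average per perforated tile

print(f"Required ~{needed_cfm:.0f} CFM per rack vs measured average {measured_avg_cfm} CFM per tile")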

5.2.6 Baseline Air Temperature Measurements


In addition to measuring the airflow, the air temperature was measured in front of each rack, near the
ceiling, and in both the ceiling plenum and the raised floor. Finally, air temperature was taken in
several locations on a few racks in order to understand how efficiently the IT equipment was
supplied cold air.

As shown in Figure 58, the air temperature in front of the racks was between 62.0°F and 70.9°F; the
average was 67.0°F and the standard deviation was 2.4°F. None of the air temperatures are close to
the 80.6°F maximum that inlet air temperatures can be according to ASHRAE; indeed, most
measurements are closer to the 64.4°F minimum specified by ASHRAE, and four data points are
below the minimum.111 This is an indication that the CRAC setpoints are too low, and electricity
could be saved by raising the setpoints while still meeting the ASHRAE requirements.

111 (ASHRAE 2008)


[Figure 58 chart: "Temperature as a Function of Rack Number" with the ASHRAE recommended range marked; air temperature (°F) versus rack number for the baseline condition.]

Figure 58: Variance in air temperature directly in front of each rack in the pilot data center
(baseline condition).

As shown in Figure 59 and Figure 60, the air temperature in the hot aisles (outlet of the racks) is
generally higher than the air temperature in the cold aisles (inlet of the racks). Also, the air
temperature in the raised floor is 68-69°F near three of the four racks, while the air temperature in the
raised floor in front of Rack 21 was 71°F, which could be an indication of blockage between the
CRAC output and the perforated tile in front of Rack 21. While the ceiling had a plenum, it was not
used to evacuate air when the pilot began, as illustrated by the lower air temperature in the plenum
than in the room in some cases. This lack of evacuation of air from the hot aisle likely resulted in a
"short circuit" and air mixing, which could have led to the higher temperatures observed with
increasing distance from the floor in the cold aisle. This higher temperature near the top of the racks
could also have been due to missing blanking panels, which allowed air from the hot aisle to return
to the cold aisle and re-enter the IT equipment.
[Figure 60 chart panels: Rack 18, Rack 21, Rack 33/34, and Rack 36/37.]

Figure 59: Diagram showing location of racks on which air temperatures were measured.
Figure 60: Air temperature measurements in the cold and hot aisle for selected racks (baseline condition). Note that Rack 21 and Rack 33 have perforated tiles in the cold aisle, while Rack 18 and Rack 36 do not.

The air temperature in the ceiling plenum, 6 inches below the ceiling (and above the racks), and near two
of the CRAC units was measured. As shown in Figure 61, the average air temperature 6 inches from
the ceiling and above the racks varied from 78-84°F, with individual measurement points exhibiting a
standard deviation of up to 1.2°F. This relatively large standard deviation indicates that the hot and
cold air was mixing. The temperature in the ceiling plenum varied from 73-78°F, and individual
measurement points exhibited a standard deviation of about 0.3°F. Most concerning was the air
temperature above CRAC B and C, which was 76°F and 82°F, respectively. This air temperature is
lower than the air temperature in other parts of the room, as shown in Figure 61. This low air
temperature, especially near CRAC B, indicates that the hot air generated by the IT equipment was
not returning to the CRAC units, resulting in inefficient use of energy for both the CRAC units and
the chilled water production.
Figure 61: Diagram showing the average air temperature at various locations within the pilot
data center (note only a portion of the data center is shown here). The average air
temperature shown was derived from data taken over a one-week period.

5.2.7 Additional Visual Observations inside Pilot Data Center


Not only were electricity and air temperature documented, but a walkthrough of the data center was
also completed in order to identify opportunities for improvement. The following observations were
made during the walkthrough:
* Blanking panels within the racks were missing about 50% of the time (see Figure 62)
* Missing racks in the middle of rows existed in five locations
* Networking cables resulted in raised floor obstructions in several locations (see Figure 63)
* Perforated tiles were placed in two locations in which no rack existed (see Figure 64)
* Side and back doors were missing from several racks (see Figure 65)
* Cable cut-outs were much larger than required, and the extra space was not blocked (see Figure 62
and Figure 63)
Figure 62: Picture showing missing blanking panels, missing doors, and unblocked cable cut-outs (baseline condition)
Figure 63: Picture showing unblocked cable cut-out and raised floor obstructions (baseline condition)
Figure 64: Picture showing randomly placed perforated tile blowing pieces of paper into the air (baseline condition)
Figure 65: Picture showing missing doors from racks (baseline condition)
5.2.8 Summary of Baseline Condition
After reviewing the baseline data of the pilot data center, the issues, solutions, and hypothetical results
were developed, as shown in Table 9.

Table 9: Issues identified by data analysis in the pilot data center


 Description                        Root Cause                     Solution                       Hypothetical Result

 24/7 Lighting                      No lighting controls           Install light switch and       Electricity savings
                                                                   occupancy sensors

 Air is reheated, humidified, and   Default settings of CRACs      Disable reheat, humidify,      Electricity savings
 dehumidified unnecessarily         were used, even though the     and dehumidify settings
                                    room is a closed system        on CRAC

 Heat Removal Equipment             Room was overdesigned          Turn off 1-2 CRACs             Increased efficiency of
 Overcapacity (3.5 times the                                                                      heat removal equipment,
 IT equipment load)                                                                               leading to electricity
                                                                                                  savings
 Cold Air/Hot Air Mixing            Poor segregation of hot        Evacuate hot air into
                                    and cold aisles                plenum

 Hot Air not returning to CRAC      Path hot air must take         Create vacuum from
                                    allows opportunities for       plenum to CRAC
                                    mixing

 Missing blanking panels and        Incorrect behavior by data     In short-term, train
 racks                              center manager                 personnel. In long-term,
 Raised floor obstructions                                         align incentives and create
 Poorly located Perforated Tiles                                   standard operating
 Missing doors on racks (back                                      procedures.
 and side)
 Unblocked cable cut-outs

5.3 Implemented Improvements and their Effects on the Pilot Data Center
In order to understand how the efficiency of a data center could be improved, the short-term, low-
cost improvement process developed from industry best practices, as shown in Figure 66, was
implemented in the pilot data center. A detailed diagram showing the changes made can be found in
Appendix IV - Changes made to the pilot data center.

[Figure 66 diagram: the short-term, low-cost improvement process, grouped into phases, with each item marked (E) Easy or (D) Difficult; the figure notes that only minimal time and investment are required.]

Figure 66: List of improvements for existing data centers

For logistical reasons, the improvements were implemented in two phases; however, all the short-
term, low-cost improvements shown in Figure 66 could be implemented at the same time in future
data center improvement projects. As shown in Figure 67 and Figure 68, the changes made during
this pilot were low-cost (approximately $300) and were not difficult to complete.
Figure 67: Example of the changes (installation of egg crates above CRAC and blanking racks) made during Phase I and II
Figure 68: Example of the changes (installation of egg crates in hot aisles and chimneys on CRACs) made during Phase I and II

This phased approach allowed the team to differentiate the effects that the equipment (CRAC) and
non-equipment changes each had on the airflow, which will be discussed in the following sections. After
all of the short-term, low-cost improvements were implemented, one CRAC unit (CRAC B) was
turned off. The effects this shutdown had on air temperatures, airflow, and electricity usage will also be
discussed in the following sections. Of particular interest was whether the changes and elimination
of one CRAC would cause the IT equipment to overheat due to a lack of cold air from the raised
floor.

5.3.1 Airflow Measurements


As shown in Figure 69 and Table 10, the airflow increased after the Phase I changes were
implemented, and then returned to about the same value once the CRAC was turned off in Phase II.
The 13.6% increase (93 CFM on average) in airflow observed after Phase I occurred because the
problems of unblocked cable cut-outs and randomly placed perforated tiles were eliminated, stopping
air from "short circuiting" - escaping from the raised floor prior to reaching the cold aisles. Even
after one CRAC was turned off, the airflow only decreased by 4.4%, and still yielded an 8.6%
increase from the baseline airflow. This is extremely strong evidence for how important it is to
ensure perforated tiles are placed in the correct places and cable cut-outs are blocked.
Table 10: Statistics for airflow per tile during different phases of the pilot process (non-zero, perforated tiles)

                        Baseline    Post Phase I    Post Phase II
 Avg Airflow (CFM)      683         776             742
 Std Dev (CFM)          373         503             377
 Min Airflow (CFM)      0           99              439
 Max Airflow (CFM)      1161        1296            941

Figure 69: Variance in airflow in the pilot data center. Note that only the common perforated tiles are shown in this chart.

As shown in Figure 70, the airflow was much higher than it needed to be to remove the heat generated
by the IT equipment (2.76 kW per rack) in the pilot data center. Even after one CRAC was shut off,
the airflow per perforated tile (average of 742 CFM) was much more than required. This indicates
that "CRAC fighting" was occurring when all four CRACs were turned on, and there were probably
small "cyclones" of cold air remaining in the raised floor rather than reaching the racks.

[Figure 70 chart: distribution of tile airflow (CFM) for the baseline and post-phase conditions, compared with the ~435 CFM required for an average 2.76 kW rack.]

Figure 70: Comparison of actual and required airflow for pilot data center

5.3.2 Air Temperature Measurements


As shown in Figure 71, the air temperature in front of the racks was not adversely affected by the
changes made to the data center. At no time did the air temperatures reach a temperature close to
the 80.6°F maximum that inlet air temperatures can be according to ASHRAE; indeed, the average
temperature in front of the racks was closer to the 64.4°F minimum specified by ASHRAE, as shown
in Table 11.112

As shown in Figure 71, the air temperatures after all changes were implemented (Post Phase II) were
lower than the baseline air temperatures at almost every point. It is quite remarkable that even after
one entire CRAC unit was turned off (Post Phase II), the average temperature in the cold aisle was
1°F lower than the baseline condition. This is an indication that there was probably "CRAC
fighting" occurring before the changes were made. The lower temperature is also an indication that
the hot air from the IT equipment is leaving the room via the ceiling plenum, rather than returning to the
cold aisle and mixing with the cold air.

Table 11: Statistics for air temperature in the cold aisle during different phases of the pilot process

                     Baseline    Post Phase I    Post Phase II
 Avg Temp (°F)       67.0        65.0            66.1
 Std Dev (°F)        2.4         9.5             3.1
 Min Temp (°F)       62.0        61.6            60.3
 Max Temp (°F)       70.9        73.1            70.8

[Figure 71 chart: temperature as a function of rack number for the baseline, Post Phase I, and Post Phase II conditions, with the ASHRAE recommended range marked.]

Figure 71: Comparison of air temperature during different phases of the pilot improvement process

In addition to taking measurements in the cold aisle, air temperatures were taken in the ceiling
plenum, six inches below the ceiling, and near two of the CRACs. As shown in
Figure 72, there was a large increase in the air temperatures in the ceiling plenum. This large increase
was due to the installation of perforated ceiling tiles, or "egg crates", which replaced the solid ceiling
tiles that existed prior to the Phase I changes. These egg crates allowed hot air to flow from the data
center to the ceiling plenum, and then flow from the plenum to the CRAC units. However, as
indicated by the increase of the air temperature in the ceiling plenum above CRAC C, not all of the
hot air was returning to the CRACs after the Phase I changes were implemented.

112 (ASHRAE 2008)



Figure 72: Diagram showing the average and change from baseline air temperature after the
Phase I changes had been implemented within the pilot data center.

However, as shown in Figure 73-Figure 76, once the chimneys were installed onto the CRACs in
Phase II, the temperature of the air returning to the CRACs increased to about the same
temperatures observed in the ceiling plenum above the racks shown in Figure 72. This increase in
hot return air resulted in an increase in the efficiency of the heat removal equipment, which will be
discussed in section 5.3.3.

Also, as shown in Figure 73, the mixing of hot and cold air that was evident above CRAC A was
eliminated once the chimney was installed, while the mixing continued to occur above CRAC B, as
shown in Figure 74, since no chimney was installed there. Another somewhat surprising result was that
after CRAC B was turned off, the air temperatures in the raised floor, which were measured below
the racks near each CRAC, decreased, as shown in Figure 74-Figure 76. This is a clear indication
that CRAC B and CRAC C were involved in "CRAC fighting" prior to the elimination of CRAC B.
In addition to illustrating "CRAC fighting", Figure 74 shows that a certain amount of suction of hot
air from the plenum was occurring prior to the shutoff of CRAC B. This indicates that if installing
chimneys is not possible, perforated ceiling tiles above CRACs help somewhat. However, as
discussed previously, a benefit of the CRAC chimney is the elimination of mixing above the CRAC.

Figure 73: Air temperatures near CRAC A after Phase II changes were implemented
Figure 74: Air temperatures near CRAC B after Phase II changes were implemented


Figure 75: Air temperatures near CRAC C after Phase II changes were implemented
Figure 76: Air temperatures near CRAC D after Phase II changes were implemented

In addition to the perforated ceiling tiles, the blanking panels and blanking racks that were installed
during Phase I probably helped to reduce the suction of hot air into the cold aisle, which in turn
allowed more hot air to return to the CRAC units, leading to a higher efficiency. To understand the
effect of the blanking panels and blanking racks, the air temperatures surrounding two racks were
collected. As shown in Figure 78, the cold aisle exhibited a more consistent temperature range after
the blanking rack (shown in Figure 77) was installed. For example, Rack 33, which originally had a
blank space beside it, showed a range of air temperatures in the cold aisle of 73-77°F prior to the
installation of the blanking rack. After the blanking rack was installed, the range of air temperatures
in the cold aisle was 69-72°F. Both the overall temperatures and the range were reduced. In
addition, the air temperatures in the hot aisle for Rack 33 increased from a range of 79-83°F to
77-87°F after the blanking rack was installed. These changes are evidence of the reduction in mixing
that blanking racks can provide in a data center. Rack 36, which never had an empty space beside it,
showed less change from the baseline condition, as shown in Figure 78.

[Figure 78 chart: before/after air temperatures at the raised floor, bottom, middle, top, and plenum positions for the selected racks.]

Figure 77: Diagram showing location of racks on which air temperatures were measured.
Figure 78: Air temperature measurements in the cold and hot aisle for selected racks. Note that Rack 33 has a perforated tile in the cold aisle, while Rack 36 does not.

5.3.3 Heat Removal Equipment Electrical Load


The electricity required by the heat removal equipment was altered in three ways:
1. Mixing of hot and cold air was drastically reduced, leading to a decrease in the amount of
bulk cold air required, and thus a reduction in the electricity required by the CRAC units and
also in the chilled water usage
2. Reheat, humidification, and dehumidification were disabled since the air contained in the
data center was static
3. Hot air generated from the IT equipment was returned to the CRACs, leading to an increase
in the operating efficiency of both the CRACs and the chilled water.

As shown in Figure 79 and Figure 80, a drastic reduction in electricity consumption took place during the pilot.
In addition, the cyclical nature of the load was reduced so that the electrical load of the CRACs
became almost constant. The total reduction in electricity required by the CRACs was 68% for the
pilot data center.
[Figure 79 and Figure 80 charts: Irms and Ipeak of the CRAC circuit (Channel D), annotated "After eliminating reheat" and "After Turning Off One CRAC" (11/04/09 - 11/05/09).]

Figure 79: Current of the data center CRAC units before and after the reheat function was disabled on all four in-room CRAC units.
Figure 80: Current of the data center CRAC units before and after one of the units was turned off. In addition, dehumidification and humidification were turned off.

As shown in Table 12, the electricity consumed by the CRACs changed significantly during the
implementation of the improvement process. Since only three of the four CRAC units served the
data center upon completion of the pilot, the cooling capacity was reduced 25% from 500 kW to 372
kW. Even after shutting down one CRAC, there is still 2.6 times the amount of cooling capacity
needed in the pilot data center, which is much closer to the N+1 condition desirable in critical data
centers.

In addition to measuring the electricity required by the CRACs, the chilled water valves on the units
were monitored, as shown in Table 13. The measurements showed that an average of 51.2 tons of
chilled water was required. Therefore, the electricity required by the chilled water system was reduced by
22% (27 kW versus the baseline value of 35 kW), and the total electricity required by the heat
removal system is 50 kW (a 54% reduction).

Table 12: Electricity consumed by CRACs during different stages of the project

                                      Average       Average     Daily         Change      Annual
                                      Hourly        Daily       Reduction     from        Reduction
                                      Load (kW)     (kWh)       (kWh)         Baseline    (kWh)
 Original AHU Fan Load                73            1,741       -             0%          -
 Post Phase I, post reheat            29            707         1,034         -59%        377,514
 elimination
 Post Phase II, post turning          23            556         1,186         -68%        432,820
 one off

Table 13: Chilled water usage and calculated electrical loads


 CRAC    Chilled Water Valve (% opened on average        Equivalent Chilled
         after pilot was complete)                       Water Tonnage1
 A       46%                                             16.1
 B       0%                                              0
 C       59%                                             20.7
 D       41%                                             14.4
 Total Tonnage of Chilled Water Required                 51.2

 Measured: Electrical Load from CRAC Submeter            23 kW
 Calculated: Electrical Load for chilled water           27 kW
 Baseline Electricity Required for Heat Removal          108 kW

 1 Each CRAC has a capacity of 35 tons
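To tie Tables 12 and 13 together, the sketch below repeats the heat-removal calculation for the post-pilot condition, using the same assumed 0.53 kW per ton plant efficiency, and compares the result with the 108 kW baseline.

# Post-pilot heat-removal electricity and reduction from baseline.
KW_PER_TON = 0.53                         # assumed chilled-water plant efficiency

post_pilot_tons = 51.2                    # Table 13 total chilled-water tonnage
chilled_water_kw = post_pilot_tons * KW_PER_TON        # ~27 kW
crac_fan_kw = 23                          # submetered CRAC load with one unit off and reheat disabled

post_pilot_heat_removal_kw = crac_fan_kw + chilled_water_kw    # ~50 kW
baseline_heat_removal_kw = 108

reduction = 1 - post_pilot_heat_removal_kw / baseline_heat_removal_kw    # ~54%
print(f"Heat removal: {post_pilot_heat_removal_kw:.0f} kW vs {baseline_heat_removal_kw} kW baseline "
      f"({reduction:.0%} reduction)")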

5.3.4 Miscellaneous Equipment Electrical Load


A light switch was added next to the entrance of the data center so that the lights could be turned off
when no one was present. The next step for the data center would be to add occupancy sensors to
ensure that the lights are turned off whenever the room is unoccupied.

The data center contained 28 four-foot T8 lamp fixtures. Since each fixture requires 96 W of
electricity, the baseline total load for the lighting, which was turned on 24 hours a day, 7 days a
week, in the pilot data center was 2.7 kW. However, from interviews with the data center manager,
the room is normally unoccupied 20 hours a day. It was assumed that the lights would be turned off
during those hours, so that the average electrical load for lighting the data center was reduced to 0.5 kW.
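The lighting arithmetic is simple enough to show directly. The sketch below assumes 4 occupied hours per day (24 hours minus the 20 unoccupied hours reported by the data center manager); the result rounds to the 0.5 kW figure used in the text.

# Baseline versus post-pilot average lighting load.
fixtures = 28
watts_per_fixture = 96

baseline_kw = fixtures * watts_per_fixture / 1000       # ~2.7 kW, on 24 hours a day
occupied_hours = 24 - 20                                 # assumed occupied hours per day
post_pilot_avg_kw = baseline_kw * occupied_hours / 24    # ~0.45 kW averaged over the day (~0.5 kW)

print(f"Baseline lighting: {baseline_kw:.1f} kW continuous; post-pilot average: {post_pilot_avg_kw:.2f} kW")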

5.3.5 Summary of Electrical Load for Data Center


As shown in Table 14, there was no change to the power delivery and IT equipment load, since
changes such as consolidation and virtualization could not be completed due to budget and
contractual constraints. Furthermore, no UPS existed in the pilot data center, and the transformer's
efficiency was over 96%. Therefore, the only opportunities for savings were in the heat removal
equipment and in the miscellaneous equipment. As discussed in the previous sections, the savings in
electricity consumption by the CRAC and lighting were 50 kW and 2.2 kW, respectively.
Table 14: Comparison of the baseline and post pilot electrical load for the data center

                              Baseline     Baseline Daily      Post Pilot    Post Pilot Daily     Change from
 Component                    Electrical   Electricity Usage   Electrical    Electricity Usage    Baseline (%)
                              Load (kW)    (kWh/day)           Load (kW)     (kWh/day)
 Power Delivery System
 (loss from transformer)      5            120                 5             120                  0
 IT Equipment                 141          3384                141           3384                 0
 CRAC                         73           1752                23            552                  -68.5
 Lighting                     2.7          65                  0.5           12                   -81.5
 Chilled Water                35           840                 27            648                  -22.9
 Total                        257          6161                240.5         4716                 -23.5

The improvements made a substantial difference to the data center efficiency, as shown by
comparing Figure 81 and Figure 82. Because of the 23% reduction in electricity consumption of the
data center, the PUE of the pilot data center after the improvement process was implemented was 1.39, a
23% improvement from the baseline condition, which yielded a PUE of 1.82. As discussed earlier, the
average PUE for industry data centers was found to be 2.5, and data centers with the most efficient
equipment and no over provisioning of capacity have been able to reach a PUE of 1.6.113 Even
though the pilot data center had a PUE of 1.39 after the pilot was complete, the same caveat exists -
there is no UPS in the pilot data center. The UPS in data centers can consume as much as 25% of
the total electricity required for the data center.114

[Figure 81 and Figure 82 pie charts: baseline and post-pilot electricity usage breakdowns for the pilot data center.]

Figure 81: Baseline electricity usage for the pilot data center
Figure 82: Post pilot electricity usage for the pilot data center

113 (Stansberry 2008)

114 (The Green Grid 2007)


The improved PUE of the pilot data center further illustrates the increase in energy efficiency achieved
through simple, low-cost improvements. Additionally, when the electricity savings
are translated into monetary savings, the results are staggering. As shown in Figure 83, the annual
savings achieved in the pilot data center are approximately $53.6k (recall that only about $300 was spent to
implement all the changes in the pilot).115 The savings were calculated as follows:

SavingsAnnual = ElectricitySavingsDaily (kWh) x 365 x $0.106/kWh     (1)
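Equation (1) can be applied directly to the daily totals in Table 14, as in the sketch below; the small difference from the ~$53.6k quoted above reflects rounding in the intermediate tables, and the function name is mine.

# Equation (1): annual dollar savings from a daily electricity reduction.
def annual_savings(daily_savings_kwh, rate_per_kwh=0.106):
    """Annual savings in dollars at the assumed average commercial electricity rate."""
    return daily_savings_kwh * 365 * rate_per_kwh

daily_savings_kwh = 6161 - 4716          # baseline minus post-pilot daily usage (kWh/day)
print(f"Annual savings: ${annual_savings(daily_savings_kwh):,.0f}")   # roughly $56k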

Additional potential savings not achieved during the pilot were estimated by assuming that 20% of
the electricity consumed by the racks could be eliminated if power management software were utilized
(as discussed in Section 4.2.2.2). This assumption seems reasonable because the building containing
the data center shows a decrease in its constant load every December during the Raytheon holiday
break, indicating that racks can be powered off. These additional savings, as shown
in Figure 83, could increase the overall annual savings in the pilot data center to approximately
$83.9k.

[Figure 83 chart: daily electricity usage (kWh) and annual savings ($k) for the Baseline, Post Phase I, Post Phase II, and an assumed Post Phase III condition, with the assumed condition reaching approximately $83.9k in annual savings.]

Figure 83: Comparison of electricity used by the pilot data center and annual savings
achieved during different phases of the pilot program

115Average 2008 electricity rate for commercial buildings:


http://www.eia.doe/gov/cneaf/electricity/epm/table5_6.a.html
6 Recommendations to Maximize Return on Improvement Process
Most of the improvements made to the pilot data center are not technical, but behavioral in nature.
While simply correcting the mistakes will increase the energy efficiency of data centers in the short
term, addressing the root cause in either the design phase or in the day-to-day management of the
data center is essential not only to maintaining the changes, but also to avoiding the mistakes in the first place.
Two phases of a data center life cycle are key to energy efficiency - the design phase and the use
phase, as shown in Table 15. Both of these phases will be discussed in detail in the following
sections.

Table 15: Issues identified by data analysis in the pilot data center

 Description                        Root Cause                     Short-term Solution            Long-term Change Required

 24/7 Lighting                      No lighting controls           Install light switch and       Implement lighting controls
                                                                   occupancy sensors              in DESIGN PHASE

 Air is reheated, humidified,       Default settings of CRACs      Disable reheat, humidify,      Right size during the
 and dehumidified unnecessarily     were used, even though the     and dehumidify settings        DESIGN PHASE
                                    room is a closed system        on CRAC

 Heat Removal Equipment             Room was overdesigned          Turn off 1-2 CRACs             Right size during the
 Overcapacity (3.5 times the                                                                      DESIGN PHASE
 IT equipment load)

 Cold Air/Hot Air Mixing            Poor segregation of hot        Evacuate hot air into          Improve DESIGN PHASE;
                                    and cold aisle                 plenum                         Intensify DATA CENTER
                                                                                                  MANAGEMENT
 Hot Air not returning to CRAC      Path hot air must take         Create vacuum from
                                    allows opportunities for       plenum to CRAC
                                    mixing

 Missing blanking panels and        Incorrect behavior by data     Install blanking panels        Intensify DATA CENTER
 racks                              center manager                                                MANAGEMENT
 Raised floor obstructions                                         Remove obstructions
 Poorly located Perforated Tiles                                   Move perforated tiles
 Missing doors on racks (back                                      Install doors on racks
 and side)
 Unblocked cable cut-outs                                          Use foam or special cable
                                                                   cut-out tiles

6.1.1 Correcting Design Phase Mistakes through Management Techniques


As discussed in section 4.2.1.1, rightsizing during the design of a data center can reduce the electricity
usage of a data center by up to 30%. As shown in the pilot data center, the heat removal system was
overdesigned such that one entire CRAC unit could be turned off. In fact, turning one of the four
CRACs off actually decreased the temperature of the cold air getting to the IT equipment and
increased the airflow, both desired outcomes. So, why are data centers overdesigned, and what are the
keys to rightsizing?

In the case of the pilot data center, the room was designed to meet the maximum electrical
requirements of the IT equipment. This is very common because the cost of data center downtime
has soared in recent years.116 To ensure that the data center experiences no downtime,
facility designers simply design the data center to the worst-case scenario - all pieces of IT equipment
consuming the maximum amount of electricity possible. Two solutions exist to correct
overdesigning - rightsizing with provisions and integrated design teams.

Rightsizing should be performed as discussed in section 4.2.1.1; however, provisions, such as
temporary heat removal equipment, should be purchased to avoid under capacity and, more
importantly, to put data center managers at ease. To ensure that rightsizing occurs correctly, the IT
and facilities departments must be better integrated. As shown in Figure 84, the power and cooling of IT
equipment are the top two concerns among data center managers, yet there is an inherent lack of
communication between the IT and facilities departments of most companies. The result is that
most data center design-and-build or upgrade projects are painful, lengthy, and costly.117

[Figure 84 chart: survey results ranking data center managers' top concerns. Figure 85 diagram: example organizational structure aligning IT and Facilities staff for energy efficiency.]

Figure 84: Survey results of what is keeping data center managers "up at night"118
Figure 85: Example of Staff Alignment for Energy Efficiency119

In the case of the pilot data center, the room was designed and built by the facilities organization,
while the day-to-day activities, including purchasing and setting up IT equipment, are managed by the

116 (Stansberry 2008)


117 (The Green Grid 2007)
118 (Sneider 2005)

119 (The Green Grid 2007)


IIS program that uses the servers, data storage, and network equipment contained in the data center.
Facilities involvement after the initial design and build is minimal, and includes periodic
maintenance of the heat removal equipment and responding to tickets that specify an undesired
condition, such as "the room is too hot." In addition, the facilities organization manages the
payment of the utility bills; however, they are not incentivized to reduce the electricity consumed
because any savings simply affects the accounting bottom line and is not provided to the facilities
department for additional projects. Those who manage the data center have no visibility of the
electricity consumed by the data center, and also are not incentivized to reduce the electricity
consumed; rather, their performance is based on the amount of downtime and the speed at which
they can get new IT equipment running. To remedy this situation, organizations and the incentive
structures must be better aligned to ensure that energy efficiency is important to the data center
design team.

An example of an alignment that is conducive to energy efficiency considerations is shown in Figure
85, in which an IT Facilities group reports directly to the Chief Information Officer (CIO) and is
responsible for the initial design of data centers. While organizational structure puts emphasis on
energy efficiency, incentives and accountability must be set up to ensure there is follow-through by
the data center design team. One barrier that must be overcome in order to implement
accountability is lack of visibility. For many organizations, including the case of the pilot data center,
only one bill for the entire site is received from the utility company for electricity. Most of the time,
the bill is received not by the IT department, but by the finance or facilities department. After
checking the bill quickly for obvious errors, and in seldom cases comparing the bill to the previous
month or previous year, the finance or facilities department simply pays the bill to ensure that there
is no disruption to the power into the plant. Indeed, in a 2008 survey on data centers' purchasing
intentions, 36% of the respondents did not even know whether their utility power costs had
increased or decreased since 2007.120

Therefore, to set up accountability and the correct incentive structure, submetering of high-energy
users, such as data centers, should be implemented to whatever extent possible. The submetering
can be temporary, as it was with the pilot data center, or permanent. The electricity data should be
used to charge the high-energy users for their portion of the electricity bill and to set-up incentives,
such as performance management metrics that affect future salaries and bonuses, to make sure that
those with the ability to create change are motivated to do so. If it is too difficult because of existing

120 (Stansberry 2008)


customer contracts and multiple customers to separate the bill and charge end users, then goals
should at least be set for energy efficiency targets for high-energy users such as data centers. For
example, a 25% improvement in PUE could be required annually and listed as a performance management
metric of both the IT and facilities leadership teams. This would not only incentivize the IT and
facilities departments to design data centers using techniques such as rightsizing and virtualization,
but it would also drive a culture of continuous improvement year-to-year, forcing IT and facilities
departments to collaborate and search for consolidation techniques and purchase the most energy
efficient equipment available for new contracts. Unfortunately, the importance of purchasing the
most energy efficient IT equipment is often overlooked. It is unfortunate because the three-year cost
of powering and cooling servers is currently 1.5 times the capital expense of purchasing the server
hardware.121

In summary, in order to increase the energy efficiency of a data center over the life of the equipment,
the IT and facilities departments should be better integrated through organizational alignment. In
addition, incentives should drive accountability and a culture of continuous improvement year-to-
year to ensure that right decisions are made not only during the design of the new data centers, but
also during the purchasing of new equipment for existing data centers.

6.1.2 Correcting Use Phase Mistakes through Employee Engagement


While the design of a data center has a substantial impact on its lifetime electricity consumption, the
day-to-day management is the most important factor when it comes to the efficiency of the heat
removal system, which can account for as much as 50% of the total electricity consumed by a data
center.

As discussed in the previous section, those who manage the data centers at IIS have no visibility of
the electricity consumed by the data center and are not incentivized to reduce the electricity
consumed. Rather, their performance is based on the amount of downtime and the speed at which
they can get new IT equipment running. However, when the project team - consisting of only
facilities personnel - chose the pilot data center, the personnel responsible for the room showed a
remarkable amount of engagement during the improvement process. The purpose of this section is
to discuss how a high level of employee engagement was achieved, and how the process can be
replicated in other data centers.

121 (Stansberry 2008)


Five aspects played a key role in achieving employee engagement of the program personnel who
"owned" the pilot data center, as follows:
1. Awareness
2. Commitment
3. Action
4. Feedback
5. Competition

The first step of engaging the pilot data center's manager was to raise awareness of data center energy
efficiency by discussing the current state of the pilot data center. In addition, the team provided the
data center manager with the total monthly and annual bill for electricity at the Garland IIS site and
explained how the electricity bill affected the overhead cost and therefore the ability of IIS to win new
contracts. The environmental impact of Garland's energy usage was also discussed, along with the benefits of
improving the energy efficiency of the pilot data center, such as additional capacity in the data center
and increased reliability/redundancy of the heat removal equipment.
than simply stating what was wrong with the pilot data center, the team - including facilities and the
data center manager - walked through the data center and discussed possible improvement
opportunities. By making the data center manager part of the team, it was much easier to make
improvements once the pilot began. Prior to making any changes, the plan was discussed with the
data center manager in order to cultivate an extremely inclusive team environment.

The second step of achieving employee engagement was gaining public commitment. Research
suggests that a public commitment (as opposed to a one-on-one or private commitment) leads to a
reduction in energy usage. For example, homes that made a commitment to decreasing their energy
use reduced their usage 10-20%, whereas a control group that made only a private commitment
showed no change in their energy usage. 122 In the case of the pilot data center, the team gained
public commitment by holding a meeting between facilities management, the data center manager,
and the data center manager's boss. In the meeting, proposed changes were reviewed, and the team
asked for commitment. One common flaw when seeking commitment is to pressure people into
committing, which research suggests does not work.123 To avoid applying pressure in the pilot
data center, the team created a very informal environment when asking for public commitment.

122 (Pallak, Cook and Sullivan 1980)


123 (McKenzie-Mohr and Smith 1999)
The third step of achieving employee engagement was encouraging the data center manager to take
action, rather than directing facilities personnel to make agreed upon changes. For example, rather
than simply installing the blanking panels and blanking racks, the team provided the data center
manager with a white paper on the importance of blanking panels, and asked what assistance would
be helpful in ensuring blanking panels were installed in the short and long-term. One day after the
discussion took place, all missing blanking panels had been installed by the data center personnel.

In order to continue cultivating employee engagement, feedback was provided to the data center
manager and his team. Feedback is essential in order to sustain change. In one study, simply
providing feedback increased recycling by more than 25%.124 However, when feedback was
combined with public commitment, the resulting increase in recycling was even higher - 40%.125 To
provide feedback during the pilot, the electricity consumed by the CRACs was shared before and
after the reheat function had been disabled, illustrating the impact that the project was having on the
data center's electricity consumption, the building's electricity consumption, and the overall Garland
site's electricity consumption.

Finally, the last step of achieving employee engagement is to create a competitive environment. Most
companies and sites have multiple data centers. While the team was unable to create a
comprehensive competitive environment in only six months, one measure that could be used to
create a competitive environment among data center managers is the power usage effectiveness
(PUE). Not only could the PUE be used to create competition, but it also allows a way to prioritize
the implementation of the improvement process across multiple data centers.

6.2 Suggested Implementation Plan for Data Centers


Oftentimes it is difficult in large organizations to roll out large initiatives, even if there is evidence
vis-à-vis a pilot that illustrates potential results. To increase the chances of a successful roll-out of the
piloted improvement process, Raytheon (and other organizations) should observe why the Garland
team was successful, as illustrated in Table 16.

124 (Deleon and Fuqua 1995)


125 (Deleon and Fuqua 1995)
Table 16: Key questions and answers that highlight a successful implementation process

 Key Question126                                Answer

 Why was the team successful in Garland?        1) Facilities/IT leadership buy-in
                                                2) Clear leader for implementation
                                                3) Employee engagement
                                                4) Team was empowered to make decisions and
                                                   implement changes

 What is similar between the Garland site       1) Increasing data center energy efficiency can be a
 and other sites/companies where the pilot         compelling and competitively significant goal that
 will be implemented?                              every employee can relate to viscerally
                                                2) Data centers are owned jointly by facilities, IT,
                                                   and program support
                                                3) Electricity is paid for by facilities

 What is different between the Garland site     1) Data center attributes like age, layout,
 and other sites/companies where the pilot         number/site, etc. vary from site to site
 will be implemented?                           2) Emphasis on energy conservation

From the answers to the above questions, a three-step process was developed that could be followed
to implement the pilot process in both Raytheon and other companies to ensure that the results
achieved during the pilot could be replicated:
1) Gain buy-in and commitment from leadership for data center improvements
2) Create Rapid Results Teams (RRTs) to implement the improvement process
3) Allow feedback and increased knowledge to be shared amongst the RRTs

Each of these steps will be discussed in more detail in the following sections.

6.2.1 Gain buy-in and commitment from leadership for data center improvements
When reflecting on why the team was so successful in implementing the pilot process, the first thing
that comes to mind is leadership buy-in. Because the leadership team from Facilities, IT, and the
Program agreed upon the need to improve the energy efficiency of data centers, the team was
empowered to make changes quickly, without repeatedly asking for permission. The method for
achieving leadership buy-in was to communicate the importance of data center energy efficiency
through anecdotes and relatable information. As shown in Figure 86, prior to the start of the project,
the leadership was shown the extent to which data centers were affecting their operations cost. To
make the data relatable, a car, in this case the Nissan Versa, was used to illustrate how much money
was spent daily on electricity for Garland data centers. In addition, once the pilot was complete, the
information shown in Figure 87 was provided to the leadership team. The savings information not
only recognized the team's success, but also provided the leadership team with a strong incentive -

126 (Beers November-December 1996)



almost $500k in savings - to continue the implementation of the data center energy efficiency
process.

[Figure 86 and Figure 87: slides titled "Importance of Data Centers @ IIS" and "Back of Envelope Calculations" presented to Facility Leadership before and after the pilot.]

Figure 86: Example of slide shown to Facility Leadership to gain buy-in of importance of data centers
Figure 87: Example of slide shown to Facility Leadership to keep momentum of data center projects

6.2.2 Create Rapid Results Teams to implement the improvement process


Generally, two organizations - Facilities and IT - are responsible for data centers. Therefore, the
leadership of these two organizations should create and empower Rapid Result Teams (RRTs),
comprised of facilities and IT personnel, to implement the data center improvement process. The
first deliverable assigned to the RRT should be the creation of a prioritized list of data centers to
improve. Next, the leadership of Facilities and IT should assign each data center both a specific RRT
and a specific date for full implementation of the data center improvement process. Setting
aggressive goals for implementation will empower the teams, communicate the importance of the
initiative to the entire organization, and provide savings rapidly. If RRTs are results-oriented,
vertical, and fast, maximum benefits will result from their existence.127

6.2.3 Allow feedback and increased knowledge to be shared amongst the RRTs
One very important capability required to create a high performing, high velocity organization is
knowledge sharing. 128 In the case of RRTs, the best approach to share information and further refine
the improvement process is through a Lean tool called jishuken (pronounced "jee-shoe-ken"').
Jishuken, an example of which is shown in Figure 88, is the process of first asking RRTs from
different sites to perfect data center improvement processes specific to the data centers at their site.

127 (Matta and Ashkenas Septmeber 2003)


128 (Spear 2009)
Next, RRTs visit other sites and work with local data center RRTs to further increase the energy
efficiency of each site's data centers.
Figure 88: Jishuken Dynamic129

129 (Spear 2009)


7 Conclusion

The impact of the low-cost, systematic approach for increasing the energy efficiency of data centers
was a 23% reduction in electricity consumption, leading to annual savings of over $53k. If other
low-cost measures are implemented, the total reduction could be 34% and the annual savings could
be over $80k. Raytheon and other organizations should roll out this low-cost process across all
existing data centers, not only to increase their energy efficiency, but also to change the energy-
consuming behavior of their professional staff.

While these results are impressive, they highlight two problems that many data centers have - poor
initial design that does not include an emphasis on energy efficiency and poor day-to-day monitoring
of the data center's electricity consumption. In order to remedy these problems, management must
utilize incentives and performance management to drive behavior change during the design process.
In addition, similar management techniques must be used to engage employees and set continuous
improvement goals so that the electricity consumption of data centers decreases over time. During
this project, the improvement process was broken down into three phases. This phased approach
allowed the team to test a behavior change process on the IT organization responsible for the data
center. We first highlighted the overall problem (low energy efficiency). Then, we helped the
employees brainstorm improvement ideas, and encouraged them to implement their suggestions in a
manner that allowed us to monitor each action's effects. Finally, we reinforced their positive
behavior by illustrating the impact to both the employees and to the employees' management team.
This simple, well established behavior change model worked to motivate the employees to action.

The next step for any company that would like to increase the energy efficiency of its data centers
is to roll out the piloted process systematically using rapid results teams that are empowered by IT
and Facilities leadership to make changes. Lean techniques such as jishuken could be used to ensure
feedback is integrated into the piloted process and to maximize the organization's return on data
center energy efficiency in both the short and long-term. Because the improvement process outlined
in this thesis is low-cost, it provides a foundation for any organization that wants to start improving
their data center's energy efficiency. I would encourage organizations that utilize the piloted low-cost
process to not simply replicate the process, but rather to work to continuously improve the pilot
process to yield more impressive results.
8 Bibliography
American Power Conversion (APC). Solutions For: Server Consolidation. http://www.apc.com/solutions/csegment.cfm?segmentID=3&cid=10 (accessed 12 14, 2009).
ASHRAE. 2008 ASHRAE Environmental Guidelines for Datacom Equipment. American Society of Heating, Refrigerating, and Air-Conditioning Engineers, Inc. August 1, 2008. http://tc99.ashraetcs.org/documents/ASHRAEExtendedEnvironmentalEnvelopeFinalAug_1_2008.pdf (accessed December 21, 2009).
Atwood, Don, and John Miner. Reducing Data Center Cost with an Air Economizer. Intel Corporation, IT@Intel Brief, 2008.
Bailey, M, M Eastwood, T Grieser, L Borovick, V Turner, and R C Gray. Special Study: Data Centers of the Future. New York, NY: IDC, 2007.
Beers, Michael C. "The Strategy that Wouldn't Travel." Harvard Business Review, November-December 1996: 18-31.
Blackburn, Mark. "Five Ways to Reduce Data Center Server Power Consumption." White Paper #7, The Green Grid, 2008.
Blazek, Michele, Huimin Chong, Woonsien Loh, and Jonathan G. Koomey. "Data Centers Revisited: Assessment of the Energy Impact of Retrofits and Technology Trends in a High-Density Computing Facility." Journal of Infrastructure Systems (ASCE), September 2004: 98-104.
Brealey, Richard, Stewart Myers, and Franklin Allen. Principles of Corporate Finance. 9th Edition. New York, NY: McGraw-Hill/Irwin, 2008.
Carr, Nicholas G. "The End of Corporate Computing." MIT Sloan Management Review 46, no. 3 (2005): 66-73.
Deleon, Ig, and R Fuqua. "The Effects of Public Commitment and Group Feedback on Curbside Recycling." Environment and Behavior 27, no. 2 (March 1995): 230-250.
Dranove, David, and Neil Gandal. "The DVD vs. DIVX Standard War: Empirical Evidence of Vaporware." Edited by UC Berkeley: Competition Policy Center. 2000.
Dunlap, Kevin. Cooling Audit for Identifying Potential Cooling Problems in Data Centers. White Paper #40, American Power Conversion (APC), 2006.
Eaton, Robert J. Getting the Most out of Environmental Metrics. 5 1, 2009. http://www.nae.edu/nae/bridgecom.nsf/weblinks/NAEW-4NHM88 (accessed 5 1, 2009).
Energy Information Administration. US Climate Zones for 2003 for CBECS. 2003. http://www.eia.doe.gov/emeu/cbecs/climatezones.html (accessed 11 05, 2010).
EPA. Climate Leaders Basic Information. 2010. http://www.epa.gov/stateply/basic/index.html (accessed 01 26, 2010).
-. "The Lean and Energy Toolkit." www.epa.gov/lean. http://www.epa.gov/lean/toolkit/LeanEnergyToolkit.pdf (accessed 06 25, 2009).
Evans, Tom. Fundamental Principles of Air Conditioners for Information Technology. White Paper #57, American Power Conversion (APC), 2004.
Evans, Tom. The Different Types of Air Conditioning Equipment for IT Environments. White Paper #59, American Power Conversion (APC), 2004.
Fanara, Andrew. "Enterprise Storage Announcement." Letter, ENERGY STAR Program, U.S. Environmental Protection Agency, Climate Protection Partnerships Division, Washington, 2009.
Google Maps. http://www.maps.google.com (accessed 06 17, 2009).
Google. The Case for Energy-Proportional Computing. Luiz Andre Barroso and Urs Holzle. December 2007. http://dsonline.computer.org/portal/site/computer/index.jsp?pageID=computer-levell&path=computer/homepage/Dec07&file=feature.xml&xsl=article.xsl (accessed December 14, 2009).
Greenburg, Steve, Evan Mills, Bill Tshudi, Peter Rumsey, and Bruce Myatt. "Best Practices for Data Centers: Lessons Learned from Benchmarking 22 Data Centers." Proceedings of the ACEEE Summer Study on Energy Efficiency in Buildings in Asilomar, CA. Asilomar: ACEEE, 2007. 76-87.
Hennessy, John, and David Patterson. Computer Architecture: A Quantitative Approach. 4th Edition. San Francisco, CA: Morgan Kauffman, 2007.
Hirstius, Andreas, Sverre Jarp, and Andrzej Nowak. Strategies for increasing data centre power efficiency. CERN Openlab, 2008.
HP. HP Power Advisor Utility: a tool for estimating power requirements for HP ProLiant server systems. Technology Brief, HP, 2009.
Hughes, Ron. California Data Center Design Group. Data Center Journal Online. May 2005. http://www.cdcdg.com/articles/9/ (accessed December 7, 2009).
Hwang, Douglas. "Data Center Power Consumption Reductions Achievable with Liquid Cooling: A Case Study." Thesis, Mechanical Engineering, Tufts University, Somerville, 2009.
Joint Air Force - Army - Navy. "Physical Security Standards for Special Access Program Facilities." Manual, 2004, 50.
Journal, Silicon Valley/San Jose Business. 01 07, 2002. http://sanjose.bizjournals.com/sanjose/stories/2002/01/07/daily34.html (accessed 05 03, 2010).
Kaplan, James, William Forrest, and Noah Kindler. "Revolutionizing Data Center Energy Efficiency." McKinsey & Company, 2008.
Koomey, Jonathan G. "Estimating Total Power Consumption by Servers in the US and the World." Lawrence Berkeley National Lab, 2007.
Lawrence Berkeley National Laboratory. Data Center Energy Benchmarking Case Study, Facility 4. Case Study, Environmental Energy Technologies Division, University of California, Berkeley, Oakland: Rumsey Engineers, Inc, 2003.
Lawrence Berkeley National Laboratory. "Data Center Energy Benchmarking Case Study, Facility 1." Case Study, Environmental Energy Technologies Division, University of California, Berkeley, 2001.
Lawrence Berkeley National Laboratory. "Data Center Energy Benchmarking Case Study, Facility 2." Case Study, Environmental Energy Technologies Division, University of California, Berkeley, 2001.
Lawrence Berkeley National Laboratory. "Data Center Energy Benchmarking Case Study, Facility 3." Case Study, Environmental Energy Technologies Division, University of California, Berkeley, 2001.
Lawrence Berkeley National Laboratory. Data Center Energy Benchmarking Case Study, Facility 5. Case Study, Environmental Energy Technologies Division, University of California, Berkeley, Oakland: Rumsey Engineers, Inc, 2003.
Loper, Joe, and Sarah Parr. "Energy Efficiency in Data Centers: A New Policy Frontier." With support from Advanced Micro Devices, Alliance to Save Energy, 2007.
Matta, Nadim F., and Ronald N. Ashkenas. "Why Good Projects Fail Anyway." Harvard Business Review, September 2003: 109-114.
McKenzie-Mohr, Doug, and William Smith. Fostering Sustainable Behavior: An Introduction to Community-Based Social Marketing. Gabriola Island, BC: New Society Publishers, 1999.
Moore, Geoffrey. Crossing the Chasm. Harper Business Essentials, 1991.
Neudorfer, Julius. How to Keep Things Cool in Your Data Center - 11 Tips for Summer. eWeek.com. July 23, 2008. http://www.eweek.com/c/a/IT-Infrastructure/How-to-Keep-Things-Cool-in-Your-Data-Center-11-Tips-for-Summer/ (accessed December 14, 2009).
Pallak, M, D Cook, and J Sullivan. "Commitment and Energy Conservation." Applied Social Psychology Annual, 1980: 235-253.
PEW Internet & American Life Project surveys. PEW Internet. 2010. http://www.pewinternet.org/Trend-Data/Home-Broadband-Adoption.aspx (accessed 05 02, 2010).
Phelps, Wally. "Adaptive Airflow Management "Cool Green"." AFCOM. Boston, 2009.
Rasmussen, Neil. Avoidable Mistakes that Compromise Cooling Performance in Data Centers and Network
Rooms. White Paper, American Power Conversion, American Power Conversion, 2008.
Rasmussen, Neil. CalculatingTotal Cooling Requirementsfor Data Centers, Rev 2. White Paper, American
Power Conversion (APC), 2007.
Rasmussen, Neil. ElectricalEffidency Modeling of Data Centers. White Paper #113, American Power
Conversion (APC), 2006.
Rasmussen, Neil. Implementing Energy Effcient Data Centers. White Paper #114, American Power
Conversion (APC), 2006.
Rasmussen, Neil. Improving Rack Cooling Performance Using Airflow Management Blanking Panels, Revision 3.
White Paper #44, American Power Conversion (APC), 2008.
Rasumssen, Neil. Increasing Data CenterEfcieng by Using Improved High Density Power Distribution.White
Paper #128, American Power Converstion (APC), 2006.
Raytheon Company. About Garland.http://home.iis.ray.com/Garland/aboutGarland.htm (accessed
07 01, 2009).
Raytheon Company. Backgoumder. Waltham, MA: Raytheon, 2010.
-. "Raytheon Intelligence and Information Systems." RTNIIS_homemain. 10 23, 2009.
http://www.raytheon.com/businesses/riis/ (accessed 10 25, 2009).
Raytheon. Raytheon Sustainability. 5 13, 2009.
http://www.raytheon.com/responsibility/stewardship/sustainability/energy/index.html (accessed 5
14, 2009).
Salim, Munther. "Energy In Data Centers: Benchmarking and Lessons Learned." EngineeringSystems
Magazine,April 1, 2009.
Sawyer, Richard. CalculatingTotal Power RequirementsforData Centers.White Paper #3, American Power
Conversion (APC), 2004.
Shah, Mehul, and Matthew Littlefield. Energ Management: Driving Value in IndustrialEnvironments.
Boston: Aberdeen Group, 2009.
Sneider, Richard. The DataCenter of the Future: What is Shaping It?Joint Interunity Group-AFCOM
Study, 2005.
Spear, Steven J. Chasing the Rabit.New York: McGraw-Hill, 2009.
Standard & Poor's. Raytheon CorporationStock Report. The McGraw-Hill Companies, Inc., 2010.
Stansberry, Matt. The Green DataCenter: Energy Efficieng in the 21st Century. SearchDataCenter.com,
2008.
System Energy Efficiency Lab. System Energy Efcieng Lab.
http://seelab.ucsd.edu/virtualefficiency/overview.shtml (accessed December 3, 2009).
The Green Grid. Assessment of EPA Mid-Tier Data Center at Potomac Yard. The Green Grid, 2009.
The Green Grid. Fundamentalsof Data CenterPower and Cooling Efcieng Zones. White Paper #21, The
Green Grid, 2009.
The Green Grid. GuidelinesforEnergy-EfcientData Centers. White Paper, The Green Grid, 2007.
The Green Grid. ProperSizing of IT Power and Cooling Loads White Paper.White Paper, The Green Grid,
2009.
The Green Grid. Using VirutaliZationto Improve Data CenterEffcieng. White Paper #19, The Green
Grid, 2009.
The Raytheon Corportation. GarlandNews - Employee Communications.
http://home.iis.ray.com/Garland/aboutGarland.htm (accessed November 2, 2009).
Tschudi, William, Priya Sreedharan, Tengfang Xu, David Coup, and Paul Roggensack. Data Centers
andEnergy Use - Let's Look at the Data. Panel Report, American Council for an Energy-Efficent
Economy, 2003.
US Energy Information Administration. Electric Sales, Revenue, and Pricefor2007. Energy Information
Administration, 2009.
US Environmental Protection Agency. "Report to Congress on Server and Data Center Energy
Efficiency." ENERGY STAR Program, 2007.
VanGilder, James, and Roger Schmidt. "Airflow Uniformity Through Perforated Tiles in a Raised-
Floor Data Center." IPACK2005. San Francisco: ASME, 2005.
Williams, Ben. "Maximizing Data Center Efficiencies through Microprocessor Innovations."
Conference on Enterprise Servers and DataCenters: OpportunitiesforEnergy Savings. 2006.
Appendix I - Process for Analyzing Energy Usage

(Figure: process flow for analyzing energy usage, summarized in the three steps below, together with an example monthly breakdown into baseline, constant plug load, and daily plug load.)

1. Find the constant electricity plug load (usage/day) for each month by analyzing the weekends of each month.
2. Determine the daily electrical plug load by subtracting the absolute minimum from the total usage for the month that contains the absolute minimum.
3. Determine the HVAC electrical load by subtracting the plug load, the cooling minimum, and the absolute minimum from each month's total usage.
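The three steps above can also be expressed as a short calculation. The following is a minimal sketch only: it assumes hourly kWh readings in a pandas DataFrame with "timestamp" and "kwh" columns, and the column names, the hourly granularity, and the scaling of minimum readings to daily figures are illustrative assumptions rather than part of the original analysis.

import pandas as pd

def disaggregate_loads(readings: pd.DataFrame) -> pd.DataFrame:
    """Approximate the Appendix I steps; 'readings' holds hourly kWh values."""
    df = readings.copy()
    df["month"] = df["timestamp"].dt.to_period("M")
    weekend = df["timestamp"].dt.dayofweek >= 5

    # Step 1: constant plug load per month, estimated from the weekend
    # minimum and scaled to a daily (kWh/day) figure.
    constant_daily = df[weekend].groupby("month")["kwh"].min() * 24.0

    # Step 2: the month containing the absolute minimum reading sets the
    # daily plug load (its usage above the absolute minimum).
    monthly_total = df.groupby("month")["kwh"].sum()
    days = df.groupby("month")["timestamp"].apply(lambda s: s.dt.date.nunique())
    min_month = df.loc[df["kwh"].idxmin(), "month"]
    plug_daily = monthly_total[min_month] / days[min_month] - df["kwh"].min() * 24.0

    # Step 3: HVAC load is each month's total usage minus the constant-load
    # and plug-load components attributed above.
    hvac_daily = monthly_total / days - constant_daily - plug_daily

    return pd.DataFrame({
        "constant_plug_kwh_per_day": constant_daily,
        "hvac_kwh_per_day": hvac_daily,
    })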
Appendix II - Summary of Alternative Efficiency Scenario Assumptions130

Scenario: Improved operation
IT Equipment:
- Volume server virtualization leading to a physical server reduction ratio of 1.04 to 1 (for server closets) and 1.08 to 1 (for all other space types) by 2011
- 3% of servers eliminated through virtualization efforts are not replaced (e.g., legacy applications)
- "Energy efficient" servers represent 5% of volume server shipments in 2007 and 15% of shipments in 2011
- Power management enabled on 100% of applicable servers
- Average energy use per enterprise storage drive declining 7% by 2011
Site infrastructure systems:
- PUE ratio declining to 1.7 by 2011 for all space types, assuming: 95% efficient transformers; 80% efficient UPS; air-cooled direct exchange system; constant-speed fans; humidification control; redundant air handling units

Scenario: Best practice
IT Equipment:
- Moderate volume server virtualization leading to a physical server reduction ratio of 1.33 to 1 (for server closets) and 2 to 1 (for all other space types) by 2011
- 5% of servers eliminated through virtualization efforts are not replaced (e.g., legacy applications)
- "Energy efficient" servers represent 100% of volume server shipments from 2007 to 2011
- Power management enabled on 100% of applicable servers
- Average energy use per enterprise storage drive declining 7% by 2011
- Moderate reduction in applicable storage devices (1.5 to 1) by 2011
Site infrastructure systems:
- PUE ratio declining to 1.7 by 2011 for server closets and server rooms (using the previous assumptions)
- PUE ratio declining to 1.5 by 2011 for data centers, assuming: 95% efficient transformers; 90% efficient UPS; variable-speed drive chiller with economizer cooling or water-side free cooling (in moderate or mild climate regions); variable-speed fans and pumps; redundant air-handling units

Scenario: State-of-the-art
IT Equipment:
- Aggressive volume server virtualization leading to a physical server reduction ratio of 1.66 to 1 (for server closets) and 5 to 1 (for all other space types) by 2011
- 5% of servers eliminated through virtualization efforts are not replaced (e.g., legacy applications)
- "Energy efficient" servers represent 100% of volume server shipments from 2007 to 2011
- Power management enabled on 100% of applicable servers
- Average energy use per enterprise storage drive declining 7% by 2011
- Aggressive reduction of applicable storage devices (~2.4 to 1) by 2011
Site infrastructure systems:
- PUE ratio declining to 1.7 by 2011 for server closets and server rooms (using the previous assumptions)
- PUE ratio declining to 1.5 by 2011 for localized and mid-tier data centers (using the previous assumptions)
- PUE ratio declining to 1.4 by 2011 for enterprise-class data centers, assuming: 98% efficient transformers; 95% efficient UPS; liquid cooling to the racks; cooling towers (in moderate or mild climate regions); variable-speed fans and pumps; CHP
130 (US Environmental Protection Agency 2007)
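Because PUE is defined as total facility energy divided by IT energy, the facility-level effect of the PUE targets above can be illustrated with a simple calculation. The sketch below is illustrative only; the 500 kW IT load and the 2.0 starting PUE are assumptions chosen for the example, not figures from the EPA report or from the pilot site.

# Illustrative effect of the PUE targets above on a hypothetical, fixed IT load.
IT_LOAD_KW = 500.0        # assumed IT load, for illustration only
HOURS_PER_YEAR = 8760

for pue in (2.0, 1.7, 1.5, 1.4):
    facility_kw = IT_LOAD_KW * pue               # PUE = total facility power / IT power
    annual_kwh = facility_kw * HOURS_PER_YEAR
    print(f"PUE {pue:.1f}: {facility_kw:,.0f} kW total, {annual_kwh:,.0f} kWh per year")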


Appendix III - List of Electrical Power Calculation Tools131

HP
- Tool: http://h30099.www3.hp.com/configurator/powercalcs.asp
- White paper: http://search.hp.com/redirect.html?type=REG&qt=power+calculator&url=http%3A//h20000.www2.hp.com/bc/docs/support/SupportManual/c00881066/c00881066.pdf%3Fjumpid%3DregR1002_USEN&pos=1
- Comments: HP also has a white paper on the tool (see the second link).

Dell
- Planning for energy requirements: http://www.dell.com/calc
- Comments: This link includes instructions and data center planning tools.

IBM
- The calculator for the blade and modular product lines (x86 architecture server systems and rack storage) can be downloaded at: http://www-03.ibm.com/systems/bladecenter/resources/powerconfig/
- For the Power processor-based server systems, an online tool is available at: www.ibm.com/systems/support/tools/estimator/energy

Sun Power Calculators
- Tool: http://www.sun.com/solutions/ecoinnovation/powercalculators.jsp

Cisco Power Calculator
- Tool: https://tools.cisco.com/cpc/authc/forms/CDClogin.fcc?TYPE-33619969&REALMOID=06-00071a1O-6218-13a3-a831-83846dc9OOO&GUID=&SMAUTHREASON=0&METHOD=GET&SMAGENTNAME=$SM$GDuJAbSsi7kExzQDRfPKUltt%2bPcjKOjTGlbtk%2fRp7BdNYLiP9yOBjXBU5PAxIXD&TARGET-http%3A%2F%2Ftools.cisco.com%2Fcpc%2F
- Comments: Registration required.
131 (The Green Grid 2009)


Appendix IV - Changes made to the pilot data center

(Figure: layout of the pilot data center prior to Phase I improvements, annotated with the changes below.)

- Step 1) Random perforated tiles were replaced with normal floor tiles
- Step 2) Foam rubber was used to stop all cable cut-outs (not shown in drawing)
- Step 3) Blanking panels were installed in all racks
- Step 4) 14 4' x 2' egg crate ceiling tiles were placed in hot aisles
- Step 5) 6 2' x 2' and 2 4' x 2' egg crate ceiling tiles were placed above the CRACs
- Step 6) Masonite was used as a "blanking rack"
- Step 7) (not shown) Chimneys were built on the CRAC units
Appendix V - Sensitivity of Chiller's Assumed Electricity Consumption

To understand the sensitivity of the assumed chiller electricity consumption per ton of chilled water, a
sensitivity analysis was performed. As seen in Figure 89 and Table 17, the building's electricity
consumption decreased by 133 kWh from the week prior to the start of the improvement process to the
time in which all improvements had been made. However, when only the constant load, which includes
data centers, is analyzed, a drop from about 925 kWh to 825 kWh is observed. Therefore, a value of 100
kWh can be assumed to be the amount of electricity avoided in the data center due to the improvement
process.

(Figure 89: Building electricity before and after Phase I changes were implemented; the chart tracks total building usage from the baseline period through the completion of the Phase I and Phase II changes.)

Table 17: Analysis of electricity saved during the piloted improvement process

                                          Baseline    Phase II Changes Completed
Total Electricity Consumed (kWh)          167,977     145,654 (1)
Average Outside Air Temp (F)              62.7        63.6
Total Reduction of Electricity (kWh)      n/a         22,323
Daily Reduction in Electricity (kWh)      n/a         3,189
Hourly Reduction in Electricity (kWh)     n/a         133

(1) Missing data on electricity consumption for Saturday was assumed to be the same as Sunday.

It is notable that, when excluding the chiller's contribution, the pilot data center accounted for 222 kWh in the "Baseline" time period (about 24% of the total constant load) and 170 kWh in the "Phase II Changes Completed" time period (about 21% of the total constant load).

Of the 100 kWh avoided, 52.2 kWh can be attributed to the CRAC and lighting improvements, which leaves 47.8 kWh of avoided electricity unaccounted for. As mentioned previously, the data center owner copied the improvement process in other data centers while the pilot was being completed, which would account for some of the decrease. It is also reasonable to assume that the "typical value of 0.53 kW/ton" is too low for the Garland, TX site. As shown in Table 18, when 0.53 kW/ton is assumed, the calculated chiller savings is only 7.7 kWh. To explore what chiller electricity rate would be required to account for the full 47.8 kWh of avoided electricity, Excel Solver was used; the resulting value of 3.30 kW/ton is unreasonably large, and the correct value is probably between 0.60 and 0.85 kW/ton. Although 0.53 kW/ton is likely too low, it was nonetheless used in the analysis in order to keep the savings calculations conservative.
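The Solver step described above can also be reproduced directly, since the avoided chiller electricity is linear in the assumed rate. The short sketch below is illustrative only and uses the chilled-water and residual figures already given in this appendix; the function and variable names are assumptions.

# Reproducing the chiller-rate back-calculation from the figures in this appendix.
# (The kWh figures here are hourly averages, so kW/ton and kWh per ton-hour are
# numerically equivalent.)
BASELINE_TONS = 65.45
POST_PILOT_TONS = 50.95
UNACCOUNTED_KWH = 47.8

tons_avoided = BASELINE_TONS - POST_PILOT_TONS   # 14.5 tons of cooling avoided

def chiller_savings_kwh(rate_kw_per_ton: float) -> float:
    # Avoided chiller electricity scales linearly with the assumed kW/ton rate.
    return rate_kw_per_ton * tons_avoided

print(f"0.53 kW/ton -> {chiller_savings_kwh(0.53):.1f} kWh avoided")   # about 7.7 kWh

# The rate required to explain the full residual is simply the ratio, which
# matches the ~3.30 kW/ton value found with Excel Solver.
required_rate = UNACCOUNTED_KWH / tons_avoided
print(f"Rate required to account for {UNACCOUNTED_KWH} kWh: {required_rate:.2f} kW/ton")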
Table 18: Calculations to determine sensitivity of Chiller Electricity Consumption

Excluding Chiller Electricity
                                     Baseline excluding Chiller   Post Pilot excluding Chiller   Change
Electrical Load (kW)                 221.9                        169.5
Daily Electricity Usage (kWh/day)    5,325.6                      4,068                          -23.6%
Chilled Water Required (Tons)        65.45                        50.95                          -22.2%

Chiller Electricity
Assumed Chiller kW/ton    Baseline (kWh)    Post Pilot (kWh)    Difference (kWh)
0.32                      20.9              16.3                4.6
0.39                      25.5              19.9                5.7
0.46                      30.1              23.4                6.7
0.53                      34.7              27.0                7.7
0.60                      39.3              30.6                8.7
0.70                      45.8              35.7                10.2
0.85                      55.6              43.3                12.3
1.00                      65.5              51.0                14.5
2.00                      130.9             101.9               29.0
3.00                      196.4             152.9               43.5

Cost of Electricity: $0.102/kWh

Assumed Chiller kW/ton    Change Including Chiller    Annual Savings
0.32                      -23.5%                      $50,966
0.39                      -23.5%                      $51,873
0.46                      -23.4%                      $52,780
0.53                      -23.4%                      $53,687
0.60                      -23.4%                      $54,594
0.70                      -23.4%                      $55,890
0.85                      -23.3%                      $57,833
1.00                      -23.3%                      $59,776
2.00                      -23.1%                      $72,733
Appendix VI - Breakdown of electricity consumption for the building containing the pilot data center

(Figure 90: Building's electricity consumption by monthly end use for 2008, split into constant plug load, variable plug load, and HVAC.)

(Figure 91: Building's electricity consumption by end use for 2008; HVAC accounted for about 19% and the variable plug load for about 5% of the total, with the balance attributed to the constant plug load.)

(Figure 92: Building's electricity consumption as a function of weather, January 2006 - July 2009.)

(Figure 93: Building's electricity usage for two weeks in June 2009.)
Appendix VII - Specific lessons learned and implementation plan for Raytheon

Lessons Learned
- No one agrees on data center "ownership" within closed IIS areas
  - Programs own the computing equipment
  - Facilities owns the layout and HVAC equipment
  - IT owns standardization
- Employee engagement is key to success
  - The program data center owner helped immensely
  - The building's leadership was willing to work with us to set up the pilot
  - Facilities leadership allowed us to spend budget (chimneys)
- Sustained behavior change is difficult to achieve
  - There are still unblocked cable cut-outs
  - Movement of racks still seems "impossible"
- Low hanging fruit is abundant, but searching is required
  - Data created visibility of an unnecessary reheat cycle that could be turned off
  - Upgrading data centers is inexpensive and does not take very much time
- Lack of incentives drives inefficiency
  - Programs do not pay for data center electricity, and so are not incentivized to drive improvements

High Level Implementation Plan

1. Gain buy-in and commitment from IIS leadership for the data center improvement initiative by 01/01/2010
   - Program leadership
   - IT leadership
   - Facilities leadership

2. Leadership team assigns an improvement team, comprised of a contact for every IIS data center, by 01/31/2010
   - Facilities
   - Program teams

3. Improvement teams survey all data centers by 02/28/2010
   - Current state of air movement
   - Current settings on CRACs (e.g., is reheat on?)
   - Power consumption (when possible)

4. Energy team prioritizes data centers for 2010 improvement by 04/01/2010

5. Improvement teams make improvements to IIS data centers by 09/01/2010

6. Set up a surveying schedule to ensure improvements do not deteriorate over time by 01/01/2011
