Principles of Cloud Computing
(314261)
Week 10
Dr. Hassan Al-Sukhni
PRINCIPLES OF
CLOUD COMPUTING
Module Two Part 3
Src: www.favpng.com
Copyright of the Course Materials
The course materials are adapted from several online and offline sources.
Outline
Learning Objectives
Datacenter Overview
Datacenter Architecture
Datacenter Challenges
Cloud Computing Datacenters
Learning Objectives
Understanding the concept of datacenters and their role in modern computing.
Identifying the components and infrastructure of a typical datacenter and the
different types of datacenters.
Understanding the design principles and best practices for building a datacenter.
Understanding the architecture and challenges of cloud datacenters.
DATACENTER OVERVIEW
Data Centers
A data center (DC) is a physical facility that enterprises use to house computing and storage
infrastructure in a variety of networked formats.
Its main function is to deliver the utilities needed
by the equipment and personnel:
Power
Cooling
Shelter
Security
Size of typical data centers:
500–5,000 m² buildings
1 MW up to 10–20 MW of power (average ~5 MW)
Example data centers
Datacenters around the globe
https://docs.microsoft.com/en-us/learn/modules/explore-azure-infrastructure/2-azure-datacenter-locations
Modern DC for the Cloud Architecture
Geography:
- Two or more regions
- Meets data residency requirements
- Fault tolerant against complete region failures
Region:
- Set of datacenters within a metropolitan area
- Network latency perimeter < 2ms
Availability Zones:
- Unique physical locations within a region
- Each zone made up of one or more DCs
- Independent power, cooling, networking
- Inter-AZ network latency < 2ms
- Fault tolerance against DC failures (see the placement sketch after this slide)
Src: Inside Azure Datacenter Architecture with Mark Russinovich.
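To make the region/availability-zone idea concrete, here is a minimal placement sketch. The region and zone names, the replica count, and the round-robin policy are illustrative assumptions only; they do not describe any specific cloud provider's API.

# Minimal sketch: spread replicas across availability zones (and regions) so the
# loss of a single datacenter/zone, or even a whole region, leaves replicas alive.
from itertools import cycle

regions = {
    "region-a": ["az-1", "az-2", "az-3"],   # hypothetical zones, < 2 ms apart
    "region-b": ["az-1", "az-2", "az-3"],   # second region for geo-redundancy
}

def place_replicas(num_replicas: int) -> list[tuple[str, str]]:
    # Round-robin over (region, zone) pairs so no single zone holds everything.
    slots = cycle([(r, z) for r, zones in regions.items() for z in zones])
    return [next(slots) for _ in range(num_replicas)]

placement = place_replicas(4)
print(placement)
assert len({r for r, _ in placement}) > 1   # survives a whole-region failure
assert len(set(placement)) > 1              # survives a single-zone/DC failure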
Data Centers
Traditional data centers
Host a large number of relatively small- or medium-sized applications, each running on a
dedicated hardware infrastructure that is decoupled and protected from other systems
in the same facility.
Usually serve multiple organizational units or companies.
Modern data centers (a.k.a. warehouse-scale computers)
Usually belong to a single company to run a small number of large-scale applications
Google, Facebook, Microsoft, Amazon, Alibaba, etc.
Use relatively homogeneous hardware and system software.
Share a common systems management layer.
Sizes can vary depending on needs
DATACENTER ARCHITECTURE
Scale-up vs. scale-out
Scale-up: more powerful, high-cost servers (faster CPUs, more cores, more memory).
Scale-out: adding more low-cost, commodity servers (a rough cost comparison follows below).
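A back-of-the-envelope comparison of the two approaches; every price and capacity number below is invented purely for illustration.

# Hypothetical numbers: reaching ~1000 units of capacity two different ways.
import math

target_capacity = 1000

# Scale-up: a few powerful, expensive machines.
big_capacity, big_price = 250, 40_000
n_big = math.ceil(target_capacity / big_capacity)

# Scale-out: many low-cost commodity machines.
small_capacity, small_price = 25, 2_500
n_small = math.ceil(target_capacity / small_capacity)

print(f"scale-up : {n_big} servers, ${n_big * big_price:,}")        # 4 servers, $160,000
print(f"scale-out: {n_small} servers, ${n_small * small_price:,}")  # 40 servers, $100,000
# Scale-out also degrades gracefully: losing 1 of 40 nodes removes 2.5% of
# capacity, while losing 1 of 4 big servers removes 25%.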
Supercomputer vs. data center
Scale
Blue Waters = 40K 8-core “servers”
Microsoft Chicago data center = 50 containers = 100K 8-core servers
Network architecture
Supercomputers: InfiniBand, low-latency, high bandwidth protocols
Data Centers: (mostly) Ethernet-based networks
Storage
Supercomputers: separate data farm
Data centers: use local disks on the nodes + memory caches
Main components of a datacenter
Src: The Datacenter as a Computer – Barroso, Clidaras, Hölzle
Traditional Data Center Architecture
Servers are mounted in 19’’ rack cabinets.
Racks are placed in single rows, forming corridors between them.
Src: The Datacenter as a Computer – An Introduction to the Design of Warehouse-Scale Machines
A Row of Servers in a Google Data Center
Src: The Datacenter as a Computer – An Introduction to the Design of Warehouse-Scale Machines
Inside a modern data center
Today’s DCs use shipping containers packed with thousands of servers each.
For repairs, whole containers are replaced.
Costs for operating a data center
DCs consume 3% of the global electricity supply (416.2 TWh, more than the UK’s 300 TWh).
DCs produce 2% of total greenhouse gas emissions.
DCs produce as much CO2 as The Netherlands or Argentina.
[Pie chart: monthly cost of a 45,978-server facility = $3,530,920, with 3-yr server and 10-yr
infrastructure amortization. Breakdown: Servers 57%, Power Distribution & Cooling 18%,
Power 13%, Networking Equipment 8%, Other Infrastructure 4%.]
(A rough amortization sketch follows below.)
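To show what the 3-yr server & 10-yr infrastructure amortization means in practice, here is a rough monthly-cost sketch. The server price, infrastructure cost, per-server power draw, PUE, and electricity rate are placeholder assumptions, not the inputs behind the $3,530,920 figure.

# Rough monthly TCO sketch (all cost and power inputs are assumptions).
NUM_SERVERS = 45_978
SERVER_PRICE = 2_000           # $ per server (assumed)
INFRA_COST = 80_000_000        # $ for facility, power and cooling build-out (assumed)
SERVER_MONTHS = 36             # 3-year server amortization
INFRA_MONTHS = 120             # 10-year infrastructure amortization
WATTS_PER_SERVER = 200         # average draw per server (assumed)
PUE = 1.5                      # facility overhead factor (assumed)
USD_PER_KWH = 0.07             # electricity price (assumed)

server_amortization = NUM_SERVERS * SERVER_PRICE / SERVER_MONTHS
infra_amortization = INFRA_COST / INFRA_MONTHS
monthly_kwh = NUM_SERVERS * WATTS_PER_SERVER / 1000 * PUE * 24 * 30
power_bill = monthly_kwh * USD_PER_KWH

print(f"servers/month:        ${server_amortization:,.0f}")
print(f"infrastructure/month: ${infra_amortization:,.0f}")
print(f"power/month:          ${power_bill:,.0f}")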
Power Usage Effectiveness (PUE)
PUE is the ratio of:
the total amount of energy used by the DC facility
to the energy delivered to the computing (IT) equipment.
PUE is the inverse of data center infrastructure efficiency (DCiE).
Total facility power covers the IT systems (servers, network, storage) plus
other equipment (cooling, UPS, switchgear, generators, lights, fans, etc.).
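A worked calculation of the ratio, using made-up power figures (10 MW drawn by the whole facility, 8 MW of it reaching the IT equipment):

# PUE = total facility power / power delivered to the IT equipment.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Data center infrastructure efficiency (DCiE) is the inverse of PUE.
def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    return it_equipment_kw / total_facility_kw

print(pue(10_000, 8_000))    # 1.25 -> 0.25 W of overhead per watt of IT load
print(dcie(10_000, 8_000))   # 0.8  -> 80% of facility power reaches the IT gear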
Achieving a low PUE
Location of the DC – cooling and power load factor
Raise the temperature of the aisles
Usually 18–20 °C; Google runs at 27 °C
Possibly up to 35 °C (trade-off: failures vs. cooling costs)
Reduce energy conversion losses
E.g., Google motherboards work at 12 V rather than 3.3/5 V
Go to extreme environments
Arctic Circle (Facebook)
Floating boats (Google)
Underwater DCs (Microsoft)
Reuse dissipated heat
Evolution of data center design
Case study: Microsoft
https://www.nextplatform.com/2016/09/26/rare-tour-microsofts-hyperscale-datacenters/
Evolution of datacenter design
Gen 6: scalable form factor (2017)
- Reduced infrastructure, scale to demand
- 1.17-1.19 PUE
Gen 7: Ballard (2018)
- Design execution efficiency
- Flex capacity enabled
- 1.15-1.18 PUE
Gen 8: Rapid deploy datacenter (2020)
- Modular construction and delivery
- Equipment skidding and preassembly
- Faster speed to market
Project Natick (future):
- Rapid deployment, close to population centers
- High energy density
- Resistant to hurricanes, solar storms, and earthquakes
- 1.07 PUE or less
Src: Inside Azure Datacenter Architecture with Mark Russinovich
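Using the PUE definition from earlier, the difference between the Gen 6/7 figures and Project Natick's target can be expressed in energy terms; the 10 MW IT load below is an assumed round number.

# Facility overhead implied by a given PUE, for an assumed 10 MW IT load.
IT_LOAD_MW = 10
for label, pue in [("Gen 6 (1.19)", 1.19), ("Gen 7 (1.15)", 1.15), ("Natick (1.07)", 1.07)]:
    overhead_mw = (pue - 1) * IT_LOAD_MW   # power going to cooling, conversion losses, etc.
    print(f"{label}: {overhead_mw:.1f} MW of non-IT overhead")
# Going from PUE 1.19 to 1.07 cuts non-IT overhead from 1.9 MW to 0.7 MW,
# roughly 63% less facility overhead for the same IT load.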