SG24-7928
Marian Friedman
Michele Girola
Mark Lewis
Alessio M. Tarenzio
ibm.com/redbooks
International Technical Support Organization
May 2011
SG24-7928-00
Note: Before using this information and the product it supports, read the information in
“Notices” on page vii.
Notices  vii
Trademarks  viii
Preface  ix
The team who wrote this book  x
Now you can become a published author, too!  xi
Comments welcome  xi
Stay connected to IBM Redbooks  xii
Chapter 3. Data center network functional components  149
3.1 Network virtualization  150
3.2 Impact of server and storage virtualization trends on the data center network  152
3.3 Impact of data center consolidation on the data center network  157
3.4 Virtualization technologies for the data center network  163
3.4.1 Data center network access switching techniques  163
3.4.2 Traffic patterns at the access layer  164
3.4.3 Network Node virtualization  166
3.4.4 Building a single distributed data center  169
3.4.5 Network services deployment models  176
3.4.6 Virtual network security  178
3.4.7 Virtualized network resources in servers  185
Index  233
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the
United States, other countries, or both:

AIX®, BladeCenter®, CloudBurst™, DB2®, DS6000™, DS8000®, Dynamic Infrastructure®,
ESCON®, FICON®, GDPS®, Geographically Dispersed Parallel Sysplex™,
Global Business Services®, HiperSockets™, IBM®, iDataPlex™, Informix®, Maximo®,
Micro-Partitioning™, MVS™, Netcool®, Parallel Sysplex®, POWER®, POWER Hypervisor™,
POWER5™, POWER6®, POWER7™, PowerPC®, PowerVM™, Power Systems™, PR/SM™,
Processor Resource/Systems Manager™, Proventia®, pSeries®, RACF®, Redbooks®,
Redbooks (logo)®, Service Request Manager®, Solid®, System p®, System Storage®,
System x®, System z®, Systems Director VMControl™, Tivoli®, TotalStorage®, Virtual Patch®,
X-Architecture®, X-Force®, XIV®, z/Architecture®, z/OS®, z/VM®, z/VSE™, z10™
Data ONTAP and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the
U.S. and other countries.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United
States, other countries, or both.
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Preface
The enterprise data center has evolved dramatically in recent years. It has
moved from a model that placed multiple data centers closer to users to a more
centralized dynamic model. The factors influencing this evolution are varied, but
can mostly be attributed to regulatory requirements, service level improvements,
cost savings, and manageability. Multiple legal issues regarding the security of data housed in
the data center have placed security requirements at the forefront of data center
architecture. As the cost to operate data centers has increased, architectures
have moved towards consolidation of servers and applications in order to better
utilize assets and reduce “server sprawl.” The more diverse and distributed the
data center environment becomes, the more manageability becomes an issue.
These factors have led to a trend of data center consolidation and resources on
demand using technologies such as virtualization, higher WAN bandwidth
technologies, and newer management technologies.
The network has been widely viewed as the “plumbing” of the system. It has
been architected without much, if any, consideration for the type of device at the
end point. The usual consideration for the end-point requirements consisted of
speed, duplex, and possibly some traffic engineering technology. There have
been well-documented designs that allowed redundancy and availability of
access ports that were considered interchangeable at best and indistinguishable
at worst.
With the rise of highly virtualized environments and the drive towards dynamic
infrastructures and cloud computing, the network can no longer remain just
plumbing. It must be designed to become dynamic itself. It will have to provide
the ability to interconnect both the physical and virtual infrastructure of the new
enterprise data center and provide for cutting edge features such as workload
mobility that will drive enterprise IT architecture for years to come.
Michele Girola is part of the ITS Italy Network Delivery team. His area of
expertise includes Network Integration Services and Wireless Networking
solutions. As of 2010, Michele is the Global Service Product Manager for GTS
Network Integration Services. Prior to joining IBM, Michele worked for NBC News
and Motorola. He holds an M.S. degree “Laurea” in Telecommunications
Engineering from the Politecnico di Torino, and an M.B.A. from the MIP -
Politecnico di Milano.
Mark Lewis is in the IBM Global Technology Services Division where he serves
as the Network Integration Services Portfolio Team Leader in the Integrated
Communications Services product line. Network Integration Services
project-based services assist clients with the design and implementation of
campus and data center networks. He is responsible for understanding the
network marketplace, developing the investment justification for new services,
defining and managing development projects, and deploying the new services to
the marketplace. Mark has 20 years of data center experience with IBM working
in the areas of mainframe development, data center structured fiber cabling, and
network integration services. He has a Bachelor of Science degree in Electrical
Engineering from Rensselaer Polytechnic Institute, located in Troy, New York.
Thanks to the following people for their contributions to this project:
Jeffrey Sanden
Global Technology Services, Executive IT Architect, Global Integrated
Communications Services CTO team
Alan Fishman, Stephen Sauer, Iain Neville, Joe Welsh, Dave Johnson, and
Wes Toman
IBM
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about
this book or other IBM Redbooks® in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an e-mail to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
IBM uses an evolutionary approach for efficient IT delivery that helps to drive
business innovation. This approach allows organizations to be better positioned
to adopt integrated new technologies, such as virtualization and cloud
computing, to help deliver dynamic and seamless access to IT services and
resources. As a result, IT departments will spend less time fixing IT problems
and more time solving real business challenges.
1.1 Key operational challenges
The delivery of business operations impacts IT in terms of application services
and infrastructure. Business drivers also impose key requirements for data
center networking:
Support cost-saving technologies such as consolidation and virtualization
with network virtualization
Provide for rapid deployment of networking services
Support mobile and pervasive access to corporate resources
Align network security with IT security based on enterprise policies and legal
issues
Develop enterprise-grade network design to meet energy resource
requirements
Deliver a highly available and resilient networking infrastructure
Provide scalability to support applications’ need to access services on
demand
Align network management with business-driven IT Service Management in
terms of processes, organization, Service Level Agreements, and tools
When equipped with a highly efficient, shared, and dynamic infrastructure, along
with the tools needed to free up resources from traditional operational demands,
IT can more efficiently respond to new business needs. As a result, organizations
can focus on innovation and on aligning resources to broader strategic priorities.
Decisions can be based on real-time information. Far from the “break or fix”
mentality gripping many data centers today, this new environment creates an
infrastructure that provides automated, process-driven service delivery and is
economical, integrated, agile, and responsive.
1. Virtualization 2.0: The Next Phase in Customer Adoption, IDC Doc. 204904, December 2006; cited in the white paper IBM's Vision for the New Enterprise Data Center:
http://www-05.ibm.com/innovation/nl/shapeyourfuture/pdf/New_Enterprise_Data_Center.pdf
2. IBM Information Infrastructure Newsletter, March 2009:
http://www-05.ibm.com/il/systems/storage/pdf/information_infastructure_custnews1Q.pdf
1.1.2 Energy efficiency
The larger a company grows, the greater its need for power and cooling. But with
power at a premium—and in some areas, capped—organizations are forced to
become more energy efficient.
Further, the proliferation of data sources, RFID and mobile devices, unified
communications, cloud computing, SOA, Web 2.0 and technologies such as
mashups and XML create opportunities for new types of business solutions.
6. http://wwic2008.cs.tut.fi/1-Internet_of_Smart_Things.pdf
7. ftp://public.dhe.ibm.com/common/ssi/ecm/en/ciw03040usen/CIW03040USEN.PDF
8. http://cssp.us/pdf/Global%20CEO%20Study%20The%20Enterprise%20of%20the%20Future.pdf
The delivery of standardized applications via the Internet, by providers such as
SalesForce.com (http://SalesForce.com), is bringing a new model to the market.
Today, the power of information, and the sharing of that information, rests firmly
in the hands of the user while real-time data tracking and integration are
becoming the norm.
9. This section is sourced from “Capturing the Potential of Cloud - How cloud drives value in enterprise IT strategy”, IBM Global Business Services® White Paper, Document #GBW03097-USEN-00, September 2009. Available at:
ftp://submit.boulder.ibm.com/sales/ssi/sa/wh/n/gbw03097usen/GBW03097USEN.PDF
Cloud has evolved from on demand and grid computing, while building on
significant advances in virtualization, networking, provisioning, and multitenant
architectures. As with any new technology, the exciting impact comes from
enabling new service consumption and delivery models that support business
model innovation.
Figure 1-1 demonstrates these five layers of cloud offerings.
The distinction between the five categories of cloud offering is not necessarily
clear-cut. In particular, the transition from Infrastructure as a Service to Platform
as a Service is a very gradual one, as shown in Figure 1-2 on page 10.
10. Youseff, Lamia: Toward a Unified Ontology of Cloud Computing, November 2008. Available from:
http://www.cs.ucsb.edu/~lyouseff/CCOntology/CloudOntology.pdf
Table 1-1 on page 11 lists the benefits and challenges of SaaS, PaaS, and IaaS.
Table 1-1 Benefits and challenges of SaaS, PaaS, and IaaS
– Limited capabilities for monitoring access to applications hosted in the
cloud
Governance and regulatory compliance
Large enterprises are still trying to sort out the appropriate data governance
model for cloud services and how to ensure data privacy. This is particularly
significant when there is a regulatory compliance requirement such as SOX or
the European Data Protection Laws.
Service level agreements and quality of service
Quality of service (availability, reliability, and performance) is still cited as a
major concern for large organizations:
– Not all cloud service providers have well-defined SLAs, or SLAs that meet
stricter corporate standards. Recovery times may be stated as “as soon as
possible” rather than a guaranteed number of hours. Corrective measures
specified in the cloud provider's SLAs are often fairly minimal and do not
cover the potential consequent losses to the client's business in the event
of an outage.
– Inability to influence the SLA contracts. From the cloud service provider's
point of view it is impractical to tailor individual SLAs for every client they
support.
– The risk of poor performance is perceived higher for a complex
cloud-delivered application than for a relatively simpler on-site service
delivery model. Overall performance of a cloud service is dependent on
the performance of components outside the direct control of both the client
and the cloud service provider, such as the network connection.
Integration and interoperability
Identifying and migrating appropriate applications to the cloud is made
complicated by the interdependencies typically associated with business
applications. Integration and interoperability issues include:
– A lack of standard interfaces or APIs for integrating legacy applications
with cloud services. This is worse if services from multiple vendors are
involved.
– Software dependencies that must also reside in the cloud for performance
reasons, but which may not be ready for licensing on the cloud.
– Interoperability issues between cloud providers. There are worries about
how disparate applications on multiple platforms, deployed in
geographically dispersed locations, can interact flawlessly and can provide
the expected levels of service.
A private cloud:
Is a cloud computing infrastructure owned by a single party.
Provides hardware and virtualization layers that are owned by, or reserved for,
the client.
Presents an elastic but finite resource.
May or may not be connected to the public Internet.
While the two models are very similar from a technical point of view, there are
significant differences in the advantages and disadvantages that result from them.
Table 1-2 compares the major cloud computing features for the two solutions,
categorizing them as:
- a major advantage
- an advantage
- a disadvantage
- a major disadvantage
Table 1-2 Comparison of major cloud computing features

Initial investment
  Public cloud: No up-front capital investment in infrastructure.
  Private cloud: The infrastructure has to be provisioned and paid for up front (however, it may leverage existing resources - sunk costs).

Consumption-based pricing
  Public cloud: The client pays for resources as used, allowing for capacity fluctuations over time.
  Private cloud: The client pays for resources as used, allowing for capacity fluctuations over time.

Cloud operating costs
  Public cloud: Operating costs for the cloud are absorbed in the usage-based pricing.
  Private cloud: The client maintains ongoing operating costs for the cloud.

SLAs
  Public cloud: The client has no say in SLAs or contractual terms and conditions.
  Private cloud: SLAs and contractual terms and conditions are negotiable between the client and the cloud vendor to meet specific requirements.

Service Level Commitment
  Public cloud: Vendors are motivated to deliver to contract.
  Private cloud: Users may not get the service level desired.

Data Security
  Public cloud: Sensitive data is shared beyond the corporate firewall.
  Private cloud: All data and secure information remains behind the corporate firewall.

Geographic locality
  Public cloud: Distance may pose challenges with access performance and user application content.
  Private cloud: The option exists for close proximity to non-cloud data center resources or to offices if required for performance reasons.

Platform choice
  Public cloud: Limited choices; support for operating system and application stacks may not address the needs of the client.
  Private cloud: Private clouds can be designed for specific operating systems, applications, and use cases unique to the client.
There is no clear “right answer”, and the choice of cloud model will depend on the
application requirements. For example, a public cloud could be ideally suited for
development and testing environments, where the ability to provision and
decommission capacity at short notice is the primary consideration, while the
requirements on SLAs are not particularly strict. Conversely, a private cloud
could be more suitable for a production application where the capacity
fluctuations are well-understood, but security concerns are high.
Many of IBM’s largest clients are looking at the introduction of Web 2.0
applications and cloud computing style data centers. Numerous newer
The infrastructures that are exploiting some of the early cloud IT models for
either acquiring or delivering services, or both, can be very nimble and scale very
quickly. As a style of computing in which IT-enabled capabilities are delivered as
a service to external clients using Internet technologies—popularized by such
companies as Google and SalesForce.com—cloud computing can enable large,
traditional-style enterprises to start delivering IT as a service to any user, at any
time, in a highly responsive way. Yet, data centers using these new models are
facing familiar issues of resiliency and security as the need increases for
consistent levels of service.
The solution to these challenges will not come from today’s distributed computing
models, but from more integrated and cost-efficient approaches to managing
technology and aligning it with the priorities of organizations and users. To that
end, we see an evolutionary computing model emerging, one that takes into
account and supports the interconnected natures of the following:
The maturing role of the mobile web
The rise of social networking
Globalization and the availability of global resources
The onset of real-time data streaming and access to information
1.6.1 Reduce cost
In conversations with numerous IT executives and professionals over the last few
years, a recurring theme has surfaced: continued concerns about the magnitude
of the operational issues they face, including server sprawl, virtualization
management, and the increasing cost of space, power, and labor.
Cloud computing brings a new paradigm to the cost equation, as well. This
approach of applying engineering discipline and mainframe-style security and
control to an Internet-inspired architecture can bring in improved levels of
economics through virtualization and standardization. Virtualization provides the
ability to pool the IT resources to reduce the capital expense of hardware,
software, and facilities. Standardization, with common software stacks and
operational policies, helps to reduce operating expenses, such as labor and
downtime—which is far and away the fastest-growing piece of the IT cost.
While these technologies have improved the way we do business, they can put a
tremendous strain on data centers and IT operations. IT professionals must
balance the challenges associated with managing data centers as they increase
in cost and complexity against the need to be highly responsive to ongoing
demands from the business.
An effective service delivery model is optimized and aligned to meet the needs of
both the internal business and external consumers of goods and services. As the
infrastructure becomes more dynamic and flexible, integrated service
management takes on a primary role to help organizations do the following:
Identify new opportunities for substantial cost efficiencies
Measurably improve service quality and reliability
Discover and respond quickly to new business opportunities
Manage complexity and change, improve security and compliance, and
deliver more value from all business assets
companies are demanding that both internal and external users have
instantaneous access to this information, putting extra—and often
conflicting—pressure on the enterprise for improved availability, security, and
resilience in the evolving IT environment.
With regard to infrastructures, the good news is that there is more than one
“right” answer. The correct solution is to pull the best from existing infrastructure
designs and harness new technologies, solutions and deployment options that
1.8 The shift in the data center network architectural thinking
In order to enable a dynamic infrastructure capable of handling the new
requirements that have been presented in the previous section, a radical shift in
how the data center network is designed is required.
The figures here show the comparison between the traditional thinking and the
new thinking that enables this change of paradigm.
(Figure: traditional data center network design, with core switches, access/distribution switch pairs, network-accessible storage switches, and storage services.)
The emerging trend toward dynamic infrastructures, such as cloud, calls for elastic
scaling and automated provisioning, attributes that drive new and challenging
requirements for the data center network.
By comparing the two diagrams we can highlight some important trends that
impact the data center network architects and managers and that are described
in detail throughout this document:
The virtual machine is the new IT building block inside the data center. The
physical server platform is no longer the basic component but instead is made
up of several logical resources that are aggregated in virtual resource pools.
Network architects can no longer stop their design at the NIC level, but need
to take into account the server platforms’ network-specific features and
requirements, such as vSwitches. These will be presented in detail in
Chapter 2, “Servers, storage, and software components” on page 31.
The virtualization technologies that are available today (for servers, storage,
and networks) decouple the logical function layer from the physical
implementation layer so that physical connectivity becomes less meaningful
than in the past. A deeper understanding of the rest of the infrastructure
becomes paramount for network architects and managers to achieve a
proper data center network design. The network itself can be virtualized;
these technologies are described in Chapter 3, “Data center network
functional components” on page 149.
The architecture view is moving from physical to logical, focused on functions
and how they can be deployed rather than on appliances and where they
should be placed.
Infrastructure management integration becomes more important in this
environment because the inter-relations between appliances and functions
are more difficult to control and manage. Without integrated tools that simplify
the DC operations, managing the infrastructure box-by-box becomes
cumbersome and more difficult. IBM software technologies that can be
leveraged in order to achieve this goal are presented in Chapter 2, “Servers,
storage, and software components” on page 31.
The shift to “logical” applies also when interconnecting distributed data
centers. In order to achieve high availability and maximize resource utilization,
it is key to have all the available data centers in an active state. In order to do
this, from a network standpoint, the network core must be able to offer layer 1
virtualization services (lambda on fiber connections), layer 2 virtualization
services (for example VPLS connectivity to enable VM mobility) together with
the usual layer 3 services leveraging proven technologies such as MPLS.
Availability
Availability means that data or information is accessible and usable upon
demand by an authorized person. Network availability is affected by failure of a
component such as a link.
Two factors that allow improved availability are redundancy of components and
convergence in case of a failure.
Redundancy - Redundant data centers involve complex solution sets
depending on a client’s requirements for backup and recovery, resilience, and
disaster recovery.
Convergence - Convergence is the time required for a redundant network to
recover from a failure and resume traffic forwarding. Data center
environments typically include strict uptime requirements and therefore need
fast convergence.
For network devices, the backup and recovery ability typically requires the use
of diverse routes and redundant power supplies and modules. It also requires
defined processes and procedures for ensuring that current backups exist in
case of firmware and configuration failures.
Configuration management
Configuration management covers the identification, recording, and reporting of
IT/network components, including their versions, constituent components, states
and relationships to other network components. Configuration Items (CIs) that
should be under the control of configuration management include network
hardware, network software, services, and associated documentation.
Also, a trail of the changes that have been made to the different components is
needed for auditing purposes.
Disaster recovery
The requirement for disaster recovery is the ability to resume functional
operations following a catastrophic failure. The definition of functional operations
depends upon the depth and scope of an enterprise’s operational requirements
and its business continuity and recovery plans.
Multi-data center environments that provide a hot standby solution are one
example of a disaster recovery plan.
Environment
Environmental factors, such as the availability of power and air conditioning
and the maximum floor loading, influence the average data center today. The
network architecture must take these factors into consideration.
Failure management
All failures must be documented and tracked. The root cause of failures must be
systematically determined and proactive measures taken to prevent a repeat of
the failure. Failure management processes and procedures will be fully
documented in a well-architected data center environment.
Reliability
Reliability is the ability of a system to perform a given function, under given
conditions, for a given time interval.
Table 1-3 Reliability (total downtime per year, HH:MM:SS)
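As a worked illustration of how total downtime per year (HH:MM:SS) follows from an availability percentage, the following Python sketch computes downtime for a few commonly quoted availability classes; these specific percentages are examples, not values taken from Table 1-3.

# Convert an availability percentage into total downtime per (non-leap) year.
SECONDS_PER_YEAR = 365 * 24 * 3600

def downtime_hhmmss(availability_percent):
    downtime = SECONDS_PER_YEAR * (1 - availability_percent / 100.0)
    hours, remainder = divmod(int(round(downtime)), 3600)
    minutes, seconds = divmod(remainder, 60)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}"

for availability in (99.0, 99.9, 99.99, 99.999):
    print(f"{availability}% available -> {downtime_hhmmss(availability)} downtime per year")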
Scalability
In networking terms, scalability is the ability of the network to grow incrementally
in a controlled manner.
For enterprises that are constantly adding new servers and sites, architects may
want to specify something more flexible, such as a modular-based system.
Constraints that may affect scalability, such as defining spanning trees across
multiple switching domains or additional IP addressing segments to
accommodate the delineation between various server functions, must be
considered.
Security
Security in a network is the definition of permission to access devices, services,
or data within the network. The following components of a security system will be
considered:
Security policy
Security policies define how, where, and when a network can be accessed.
An enterprise normally develops security policies related to networking as a
requirement. The policies will also include the management of logging,
monitoring, and audit events and records.
Network segmentation
Network segmentation divides a network into multiple zones. Common zones
include various degrees of trusted and semi-trusted regions of the network.
Serviceability
Serviceability refers to the ability to service the equipment. Several factors can
influence serviceability, such as modular or fixed configurations or requirements
of regular maintenance.
Service level agreements can apply to all parts of the data center, including the
network. In the network, SLAs are supported by various means, such as QoS,
configuration management, and availability. IT can also negotiate Operational
Level Agreements (OLAs) with the Business Units in order to guarantee an
end-to-end service level to the final users.
Standards
Network standards are key to the ongoing viability of any network infrastructure.
They define such standards as:
Infrastructure naming
Port assignment
Server attachment
Protocol, ratified for example by IEEE and IETF
IP addressing
System management
Network management must facilitate key management processes, including:
Network Discovery and Topology Visualization
This includes the discovery of network devices, network topology, and the
presentation of graphical data in an easily understood format.
Availability management
This provides for the monitoring of network device connectivity.
Event management
This provides for the receipt, analysis, and correlation of network events.
Asset management
This facilitates the discovery, reporting, and maintenance of the network
hardware infrastructure.
Performance management
This provides for monitoring and reporting network traffic levels and device
utilization.
Incident management
The goal of incident management is to recover standard service operation as
quickly as possible. The incident management process is used by many
functional groups to manage an individual incident. The process includes
minimizing the impact of incidents affecting the availability and/or
performance, which is accomplished through analysis, tracking, and solving of
incidents that have impact on managed IT resources.
Problem management
This includes identifying problems through analysis of incidents that have the
same symptoms, then finding the root cause and fixing it in order to prevent
malfunction reoccurrence.
Chapter 2. Servers, storage, and software components
2.1 Virtualization
Virtualization refers to the abstraction of logical resources away from their
underlying physical resources to improve agility and flexibility, reduce costs, and
thus enhance business value. Virtualization allows a set of underutilized physical
infrastructure components to be consolidated into a smaller number of better
utilized devices, contributing to significant cost savings.
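As a back-of-the-envelope illustration of the consolidation argument, the following Python sketch estimates how many physical hosts are needed to absorb a set of underutilized servers at a target utilization; all of the numbers are assumptions chosen for the example.

import math

# Assumed figures for illustration only.
physical_servers = 100          # existing underutilized servers
avg_utilization = 0.10          # average utilization of each server (10%)
target_utilization = 0.60       # desired utilization of consolidated hosts
host_capacity = 1.0             # capacity of one host, in fully busy server equivalents

# Total workload expressed in fully busy server equivalents.
total_load = physical_servers * avg_utilization

# Hosts needed so that each runs at no more than the target utilization.
hosts_needed = math.ceil(total_load / (host_capacity * target_utilization))

print(f"Load of {total_load:.1f} server equivalents fits on {hosts_needed} hosts")
# -> Load of 10.0 server equivalents fits on 17 hosts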
Server virtualization
A physical server is abstracted to provide, or host, multiple virtual servers or
multiple operating systems on a single platform.
Storage virtualization
The storage devices used by servers are presented in the form of abstracted
devices, partitioned and dedicated to servers as needed, independent of the
actual structure of the physical devices.
Network virtualization (will be described in more detail in Chapter 3, “Data
center network functional components” on page 149).
Network devices such as switches, routers, links and network interface cards
(NICs) are abstracted to provide many virtualized network resources on few
physical resources, or combine many physical resources into few virtual
resources.
Currently there are three common types of hypervisors: type 1, type 2, and
containers, as explained here:
Type 1 - Virtualization code that runs directly on the system hardware and
creates fully emulated instances of the hardware on which it is executed. Also
known as “full,” “native,” or “bare metal.”
(Figure: the three hypervisor types; Type 1 runs the hypervisor directly on the SMP server, Type 2 runs the hypervisor on a host operating system, and containers use OS virtualization on a host operating system, with applications and operating systems layered above.)
Type 1 or bare metal hypervisors are the most common in the market today, and
they can be further classified into three main subtypes:
Standalone (for example, VMware ESX and vSphere)
Hybrid (for example, Hyper-V or XenServer)
Mixed (Kernel Virtual Machine)
Type 1 standalone
In a standalone hypervisor, all hardware virtualization and virtual machine
monitor (VMM) functions are provided by a single, tightly integrated set of code.
This architecture is synonymous with the construct of VMware vSphere and
previous generations of the ESX hypervisor. Figure 2-2 on page 35 shows a
sample diagram of the architectural overview of VMware vSphere 4.0 (also
referred to as ESX 4).
section of this document, ESXi (the embedded version of the hypervisor) does
not contain this service console instance.
Type 1 hybrid
The hybrid type 1 architecture includes a split software model where a “thin”
hypervisor provides hardware virtualization in conjunction with a parent partition
(privileged virtual machine), which provides virtual machine monitor (VMM)
functionality. This model is associated primarily with Microsoft® Hyper-V and
Xen-based hypervisors. The parent partition, also called “Domain 0” (Dom0), is
typically a virtual machine that runs a full version of the native operating system
with root authority. For example, Dom0 for Xen, enabled and executed within
Novell SUSE Linux Enterprise Server (SLES), would execute as a full instance of
SLES, providing the management layer of VM creation, modification, deletion,
and other similar configuration tasks. At system boot, the Xen-enabled kernel
loads initially, followed by the parent partition, which runs with VMM privileges,
serves as the interface for VM management, and manages the I/O stack.
(Figure: hybrid type 1 hypervisor architecture, for example Hyper-V, XenServer, or Virtual Iron, with a parent partition alongside the thin hypervisor handling storage and network I/O.)
Type 1 mixed
The Linux-based Kernel Virtual Machine (KVM) hypervisor model provides a
unique approach to Type 1 architecture. Rather than executing a proprietary
hypervisor on bare-metal, the KVM approach leverages open-source Linux
(including RHEL, SUSE, Ubuntu, and so on) as the base operating system and
adds a kernel-integrated module (named KVM) that provides hardware
virtualization.
The KVM module is executed in user mode (unlike standalone and hybrid
hypervisors, which run in kernel or root mode), but it enables virtual machines to
execute with kernel-level authority using a new instruction execution context
called Guest Mode; see Figure 2-4 on page 37.
(Figure 2-4: KVM architecture, with traditional operating systems running as guests and with emulated storage and network I/O.)
Virtualization is a critical part of the data center. It offers the capability to balance
loads, provide better availability, reduce power and cooling requirements, and
allow resources to be managed fluidly.
For more detailed information, see IBM Systems Virtualization: Servers, Storage,
and Software, REDP-4396.
Figure 2-5 Storage virtualization overview (LUNs presented to the server through a Fibre Channel switch and SAN)
This SAN is connected to the server through a Fibre Channel switch.
service downtime. The System z platform is commonly referred to as a
mainframe.
The newest zEnterprise System consists of the IBM zEnterprise 196 central
processor complex, IBM zEnterprise Unified Resource Manager, and IBM
zEnterprise BladeCenter Extension. The z196 is designed with improved
scalability, performance, security, resiliency, availability, and virtualization. The
z196 Model M80 provides up to 1.6 times the total system capacity of the z10™
EC Model E64, and all z196 models provide up to twice the available memory of
the z10 EC. The zBX infrastructure works with the z196 to enhance System z
virtualization and management through an integrated hardware platform that
spans mainframe and POWER7™ technologies. Through the Unified Resource
Manager, the zEnterprise System is managed as a single pool of resources,
integrating system and workload management across the environment.
For more information about the System z architecture, refer to the IBM
zEnterprise System Technical Guide at:
http://www.redbooks.ibm.com/abstracts/sg247833.html?Open
PR/SM provides the logical partitioning function of the central processor complex
(CPC). It provides isolation between partitions, which enables installations to
separate users into distinct processing images, or to restrict access to certain
workloads where different security clearances are required.
There are two types of partitions, dedicated and shared:
A dedicated partition runs on the same dedicated physical processors at all
times, which means the processors are not available for use by other
partitions even if the operating system running on that partition has no work to
do. This eliminates the need for PR/SM to get involved with swapping out one
guest and dispatching another.
Shared partitions, alternatively, can run on all remaining processors (those
that are not being used by dedicated partitions). This allows idle systems to
be replaced by systems with real work to do at the expense of PR/SM
overhead incurred by the dispatching of the operating systems. In contrast
with dedicated partitions, shared partitions provide increased processor
utilization, but at the potential expense of performance for a single operating
system.
The building blocks that make up a CSS are shown in Figure 2-7.
One of the major functions is the Multiple Image Facility (MIF). MIF capability
enables logical partitions to share channel paths, such as ESCON®, FICON®,
and Coupling Facility sender channel paths, between logical partitions within a
processor complex. If a processor complex has MIF capability, and is running in
LPAR mode, all logical partitions can access the same shared channel paths,
thereby reducing the number of required physical connections. In contrast, if a
processor complex does not have MIF capability, all logical partitions must use
separate channel paths to share I/O devices.
For more information about CSS, refer to section 2.1 in IBM System z
Connectivity Handbook, SG24-5444, which can be found at:
http://www.redbooks.ibm.com/abstracts/sg245444.html?Open
HiperSockets
HiperSockets provides the fastest TCP/IP communication between consolidated
Linux, z/VM, z/VSE, and z/OS virtual servers on a System z server. HiperSockets
provides internal “virtual” LANs, which act like TCP/IP networks in the System z
server. It eliminates the need for any physical cabling or external networking
connection between these virtual servers. Figure 2-8 shows an example of
HiperSockets connectivity with multiple LPs and virtual servers.
(Figure 2-8: HiperSockets connectivity example with multiple logical partitions and virtual servers; separate HiperSockets LANs keep application traffic, Sysplex A traffic, and Sysplex B traffic isolated from one another on the same server.)
Since HiperSockets does not use an external network, it can free up system and
network resources, eliminating attachment costs while improving availability and
performance. Also, because HiperSockets has no external components, it
provides a very secure connection.
WLM is responsible for enabling business-goal policies to be met for the set of
applications and workloads. IRD implements the adjustments that WLM
recommends to local sysplex images by dynamically moving hardware resources
(processors and channels) to the LPAR where they are most needed.
Dynamic channel path management (DCM)
Dynamic channel path management is designed to dynamically adjust the
channel configuration in response to shifting workload patterns. It is a function
in IRD, together with WLM LPAR processor management and Channel
Subsystem I/O Priority Queuing.
DCM can improve performance by dynamically moving the available channel
bandwidth to where it is most needed. Prior to DCM, the available channels
had to be manually balanced across the I/O devices, trying to provide
sufficient paths to handle the average load on every controller. This means
that at any one time, some controllers probably have more I/O paths available
than they need, while other controllers possibly have too few.
With Parallel Sysplex clustering and its ability to support data sharing across
servers, IT architects can design and develop applications that have one
integrated view of a shared data store. This eliminates the need to partition
databases, which in non-System z environments typically creates workload
skews requiring lengthy and disruptive database repartitioning. Also, ensuring
data integrity with non-System z partitioned databases often requires
application-level locking, which in high-volume transaction environments could
lead to service level agreements not being met.
For more information on System z Parallel Sysplex, refer to IBM z/OS Parallel
Sysplex Operational Scenarios, SG24-2079, which can be found here:
http://www.redbooks.ibm.com/abstracts/sg242079.html?Open
Linux on System z is able to run in three modes: basic, LPAR, and z/VM guest.
The most common way to consolidate distributed applications is onto a single
image of Linux running in one of the other two modes, shown in Figure 2-9,
depending on application footprints and resource usage.
A Linux workload on the IFL does not result in any increased IBM software
charges for the traditional System z operating systems and middleware.
For more information on Linux on System z, refer to z/VM and Linux Operations
for z/OS System Programmers, which can be found here:
http://www.redbooks.ibm.com/abstracts/sg247603.html?Open
2.2.4 z/VM
z/VM is key to the software side of virtualization on the mainframe. It provides
each user with an individual working environment known as a virtual machine
(VM). The virtual machine uses virtualization to simulate the existence of a real
machine by sharing resources of a real machine, which include processors,
storage, memory, and input/output (I/O) resources.
In other words, a first-level z/VM operating system sits directly on the hardware,
but the guests of this first-level z/VM system are virtualized. By virtualizing the
hardware from the first level, as many guests as needed can be created with a
small amount of actual real hardware.
(Figure: z/VM virtualization levels; the first-level z/VM hypervisor (Control Program) runs directly on the hardware CPUs and memory and hosts guest virtual machines (user IDs), such as a second-level z/VM, z/OS, and Linux, each with its own virtual CPUs and memory.)
z/VM consists of many components and facilities that bring the reliability,
availability, scalability, security, and serviceability characteristics of System z
servers, such as:
TCP/IP for z/VM brings the power and resources of the mainframe server to
the Internet. Using the TCP/IP protocol suite of TCP/IP for z/VM, multiple
vendor networking environments can be reached from the z/VM system.
Applications can be shared transparently across z/VM, Linux, and other
environments. Users can send messages, transfer files, share printers, and
access remote resources with a broad range of systems from multiple
vendors.
Open Systems Adapter-Express (OSA-Express), Open Systems Adapter
Express2 (OSA-Express2), and Open Systems Adapter Express3
(OSA-Express3) are integrated hardware features that enable the System z
platform to provide industry-standard connectivity directly to clients on local
area networks (LANs) and wide area networks (WANs).
The Resource Access Control Facility (RACF®) Security Server for z/VM is a
security tool that works together with existing functions in the z/VM base
system to provide improved data security for an installation. RACF protects
information by controlling access to it. RACF also controls what can be done
on the operating system and protects the resources. It provides this security
by identifying and verifying users, authorizing users to access protected
resources, and recording and reporting access attempts.
The virtual switch can operate at Layer 2 (data link layer) or Layer 3 (network
layer) of the OSI model and bridges real hardware and virtualized LANs, using
virtual QDIO adapters.
the OSA port strips the Ethernet frame and forwards the IP packets to the virtual
switch for delivery to the guest system based on the destination IP address in
each IP packet.
When operating in Ethernet mode (Layer 2), the virtual switch uses a unique
MAC address for forwarding frames to each connecting guest system. Data is
transported and delivered in Ethernet frames. This provides the ability to
transport both TCP/IP and non-TCP/IP based application data through the virtual
switch. The address-resolution process allows each guest system’s MAC
address to become known to hosts residing on the physical side of the LAN
segment through an attached OSA port. All inbound or outbound frames passing
through the OSA port have the guest system’s corresponding MAC address as
the destination or source address.
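As a purely conceptual sketch (not the z/VM Control Program implementation), the following Python fragment mimics how a Layer 2 virtual switch associates each connecting guest with its own MAC address and forwards frames by destination MAC, sending frames for unknown destinations out through the OSA port; the guest names and MAC addresses are invented for the example.

# Conceptual Layer 2 forwarding only; not actual z/VM code.
class VirtualSwitch:
    def __init__(self):
        self.mac_table = {}                    # MAC address -> guest system

    def connect_guest(self, guest, mac):
        # Each guest is registered with the unique MAC address it was assigned.
        self.mac_table[mac] = guest

    def forward(self, dst_mac):
        # Deliver locally if the destination MAC belongs to a connected guest,
        # otherwise send the frame out through the OSA port to the physical LAN.
        guest = self.mac_table.get(dst_mac)
        return f"deliver to {guest}" if guest else "forward through OSA port"

vswitch = VirtualSwitch()
vswitch.connect_guest("LINUX01", "02:00:00:00:00:01")
vswitch.connect_guest("ZOS01", "02:00:00:00:00:02")
print(vswitch.forward("02:00:00:00:00:02"))    # deliver to ZOS01
print(vswitch.forward("02:00:00:00:00:99"))    # forward through OSA port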
The switching logic resides in the z/VM Control Program (CP), which owns the
OSA port connection and performs all data transfers between guest systems
connected to the virtual switch and the OSA port; see Figure 2-11.
(Figure 2-11: the z/VM virtual switch connecting guest systems to a real Ethernet switch through an OSA port.)
Here, some further considerations are provided about the network features of the
OSA, HiperSockets, and Dynamic Cross-System Coupling Facility interfaces.
Design considerations
To design connectivity in a z/OS environment, the following considerations
should be taken into account:
As a server environment, network connectivity to the external corporate
network should be carefully designed to provide a high-availability
environment, avoiding single points of failure.
If a z/OS LPAR is seen as a standalone server environment on the corporate
network, it should be designed as an end point.
If a z/OS LPAR will be used as a front-end concentrator (for example, making use
of HiperSockets Accelerator), it should be designed as an intermediate network
or node.
Although there are specialized cases where multiple stacks per LPAR can
provide value, in general we recommend implementing only one TCP/IP stack
per LPAR. The reasons for this recommendation are as follows:
A TCP/IP stack is capable of exploiting all available resources defined to the
LPAR in which it is running. Therefore, starting multiple stacks will not yield
any increase in throughput.
When running multiple TCP/IP stacks, additional system resources, such as
memory, CPU cycles, and storage, are required.
Multiple TCP/IP stacks add a significant level of complexity to TCP/IP system
administration tasks.
One example where multiple stacks can have value is when an LPAR needs to
be connected to multiple isolated security zones in such a way that there is no
network level connectivity between the security zones. In this case, a TCP/IP
stack per security zone can be used to provide that level of isolation, without any
network connectivity between the stacks.
anyone in a non-z/OS TCP/IP environment to access the z/OS Communications
Server, and perform tasks and functions provided by the TCP/IP protocol suite.
Routing support
z/OS Communications Server supports static routing and two different types of
dynamic routing: Open Shortest Path First (OSPF) and Routing Information
Protocol (RIP). z/OS Communications Server also supports policy-based routing,
which determines the destination based on a defined policy. Traffic descriptors
such as TCP/UDP port numbers, application name, and source IP addresses
can be used to define the policy to enable optimized route selection.
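The following Python sketch illustrates the general idea of policy-based route selection driven by traffic descriptors such as protocol, destination port, and source IP address. The policies and route names are hypothetical, and the code does not represent z/OS Communications Server policy syntax.

# Hypothetical policy-based routing decision based on traffic descriptors.
POLICIES = [
    # (description, match function, route to use)
    ("HTTPS traffic", lambda p: p["protocol"] == "TCP" and p["dst_port"] == 443,
     "route-via-interface-A"),
    ("traffic from 10.1.0.0/16", lambda p: p["src_ip"].startswith("10.1."),
     "route-via-interface-B"),
]
DEFAULT_ROUTE = "main route table"

def select_route(packet):
    # Return the route of the first policy whose descriptors match the packet.
    for _description, matches, route in POLICIES:
        if matches(packet):
            return route
    return DEFAULT_ROUTE

print(select_route({"protocol": "TCP", "dst_port": 443, "src_ip": "192.168.5.9"}))
print(select_route({"protocol": "UDP", "dst_port": 514, "src_ip": "10.1.2.3"}))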
Prior to the introduction of the virtual MAC function, an OSA interface only had
one MAC address. This restriction caused problems when using load balancing
technologies in conjunction with TCP/IP stacks that share OSA interfaces.
The single MAC address of the OSA also causes a problem when using TCP/IP
stacks as a forwarding router for packets destined for unregistered IP addresses.
With the use of the VMAC function, packets destined for a TCP/IP stack are
identified by an assigned VMAC address and packets sent to the LAN from the
stack use the VMAC address as the source MAC address. This means that all IP
addresses associated with a TCP/IP stack are accessible through their own
VMAC address, instead of sharing a single physical MAC address of an OSA
interface.
responsiveness of target servers in accepting new TCP connection setup
requests, favoring those servers that are more successfully accepting new
requests.
Internal workload balancing within the sysplex ensures that a group or
cluster of application server instances can maintain optimum performance
by serving client requests simultaneously. High availability considerations
suggest at least two application server instances should exist, both
providing the same services to their clients. If one application instance
fails, the other carries on providing service. Multiple application instances
minimize the number of users affected by the failure of a single application
server instance. Thus, load balancing and availability are closely linked.
– Portsharing
In order for a TCP server application to support a large number of client
connections on a single system, it might be necessary to run more than
one instance of the server application. Portsharing is a method to
distribute workload for IP applications in a z/OS LPAR. TCP/IP allows
multiple listeners to listen on the same combination of port and interface.
Workload destined for this application can be distributed among the group
of servers that listen on the same port.
External application workload balancing
With external application workload distribution, decisions for load balancing
are made by external devices. Such devices typically have very robust
capabilities and are often part of a suite of networking components.
From a z/OS viewpoint, there are two types of external load balancers
available today. One type bases decisions completely on parameters in the
external mechanism, while the other type uses sysplex awareness metrics
for each application and each z/OS system as part of the decision process
through the Load Balancing Advisor (LBA) function. Which technique is best
depends on many factors, but the best method usually involves knowledge of
the health and status of the application instances and the z/OS systems.
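As a simple illustration of a balancing decision that takes server health into account, the following Python sketch spreads new connection requests across application instances in proportion to a health score. The instance names and scores are invented, and the sketch does not reflect the actual Load Balancing Advisor protocol.

import random

# Hypothetical application instances with health/responsiveness scores (0-100).
servers = {"SYSA-instance1": 90, "SYSA-instance2": 40, "SYSB-instance1": 70}

def pick_server(candidates):
    # Healthier instances are chosen proportionally more often for new requests.
    names = list(candidates)
    weights = [candidates[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# Distribute 1000 new connection requests and show the resulting spread.
counts = {name: 0 for name in servers}
for _ in range(1000):
    counts[pick_server(servers)] += 1
print(counts)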
z/OS Parallel Sysplex
z/OS Parallel Sysplex combines parallel processing with data sharing across
multiple systems to harness the power of plural z/OS mainframe systems, yet
make these systems behave like a single, logical computing facility. This
combination gives the z/OS Parallel Sysplex unique availability and scalability
capabilities.
– Additional MIB support is also provided by enterprise-specific MIB, which
supports management data for Communications Server TCP/IP
stack-specific functions.
(Figure: the IBM Power Systems portfolio, from blades and the Power 710, 720, 730, 740, 750, and 755 through the Power 770 and 780 to the Power 795.)
The PowerVM Enterprise Edition is only available on the new POWER6-based
and POWER7-based systems, and adds PowerVM Live Partition Mobility to the
suite of functions.
Table 2-2 lists the versions of PowerVM that are available on each model of
POWER7 processor technology-based servers.
Combined with features designed into the IBM POWER processors, the POWER
Hypervisor delivers functions that enable capabilities including dedicated
processor partitions, Micro-Partitioning, virtual processors, IEEE VLAN
compatible virtual switch, virtual Ethernet adapters, virtual SCSI adapters, and
virtual consoles.
The POWER Hypervisor is a firmware layer sitting between the hosted operating
systems and the server hardware, as shown in Figure 2-13 on page 63. The
POWER Hypervisor has no specific or dedicated processor resources assigned
to it.
Figure 2-13 The POWER Hypervisor abstracts the physical server hardware
The IBM Power 750 system can be configured with up to 32 cores, and the IBM
Power 770 and 780 servers with up to 64 cores. At the time of writing, these systems
can support:
Up to 32 and 64 dedicated partitions, respectively
Up to 160 micro-partitions
Live Partition Mobility makes it possible to move running AIX or Linux partitions
from one physical POWER6 or POWER7 server to another without disruption.
The movement of the partition includes everything that partition is running, that
is, all hosted applications. Some possible uses and their advantages are:
Moving partitions from a server to allow planned maintenance of the server
without disruption to the service and users
Moving heavily used partitions to larger machines without interruption to the
service and users
Moving partitions to appropriate servers depending on workload demands;
adjusting the utilization of the server-estate to maintain an optimal level of
service to users at optimal cost
Consolidation of underutilized partitions out-of-hours to enable unused
servers to be shut down, saving power and cooling expenditure
A partition migration operation can occur either when a partition is powered off
(inactive), or when a partition is providing service (active).
The Virtual I/O Server is a special partition that provides virtual I/O resources to
client logical partitions. It can use both virtualized storage and
network adapters, making use of the virtual SCSI and virtual Ethernet facilities.
For virtual Ethernet we can define Shared Ethernet Adapters on the Virtual I/O
Server, bridging network traffic from the virtual Ethernet networks out to physical
Ethernet networks.
The Virtual I/O Server technology facilitates the consolidation of LAN and disk
I/O resources and minimizes the number of physical adapters that are required,
while meeting the non-functional requirements of the server. To understand the
support for storage devices, refer to the website at:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html
By default, POWER Hypervisor has only one virtual switch but it can be changed
to support up to 16 virtual Ethernet switches per system. Currently, the number of
switches must be changed through the Advanced System Management (ASM)
interface. The same VLANs can be used on different virtual switches and traffic
will still be separated by the POWER Hypervisor.
SEA is the component that provides connectivity to an external Ethernet switch,
bridging network traffic from the virtual Ethernet adapters out to physical
Ethernet networks. In 2.3.8 “External connectivity” on page 67 we discuss SEA
architecture in detail and how external Ethernet switches can be connected.
All virtual Ethernet adapters in a Shared Ethernet Adapter (SEA) must belong to
the same virtual switch, or SEA creation will fail.
To avoid single points of failure, the SEA can be configured in redundant failover
mode; see Figure 2-16. This mode is desirable when the external Ethernet
network can provide high availability. Also, more than one physical adapter is
needed at the POWER System.
Primary and backup SEAs communicate through a control channel on a
dedicated VLAN visible only internally to the POWER Hypervisor. A proprietary
protocol is used between SEAs to determine which is the primary and which is
the backup.
One backup SEA may be configured for each primary SEA. The backup SEA is
idle until a failure occurs on the primary SEA; failover occurs transparently to
client LPARs.
Each SEA in a failover domain has a different priority. A lower priority means that
an SEA is favored to be the primary. All virtual Ethernet adapters on an SEA
must have the same priority. Virtual Ethernet adapters of primary and backup
SEAs must belong to the same virtual switch.
SEA supports four hash modes for EtherChannel and IEEE 802.3ad to distribute
the outgoing traffic:
Default mode - The adapter selection algorithm uses the last byte of the
destination IP address (for TCP/IP traffic) or of the destination MAC address
(for ARP and other non-IP traffic). This mode is typically the best initial choice
for a server with a large number of clients.
Src_dst_port - The outgoing adapter path is selected through an algorithm that
uses the combined (averaged) source and destination TCP or UDP port values;
these are the TCP/IP address suffix values shown in the Local and Foreign
columns of netstat -an command output.
src_port - The adapter selection algorithm uses the source TCP or UDP port
value. In netstat -an command output, the port is the TCP/IP address suffix
value in the Local column.
dst_port - The adapter selection algorithm uses the destination TCP or UDP
port value.
Because each connection has a unique TCP or UDP port, the three port-based
hash modes (src_dst_port, src_port, and dst_port) provide additional adapter
distribution flexibility when there are several separate TCP or UDP connections
between an IP address pair. The src_dst_port hash mode is recommended when
possible.
SEA supports a fifth hash mode, but it is only available in EtherChannel mode.
When used, the outgoing datagrams are scheduled in a round-robin fashion, that
is, outgoing traffic is spread evenly across all adapter ports. This mode is the
typical choice for two hosts connected back-to-back (that is, without an
intervening switch).
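The following Python sketch illustrates, in simplified form, how each hash mode might pick an outgoing adapter. It is not the actual VIOS implementation; it only shows why the port-based modes spread multiple connections between the same IP address pair over different links.

# Simplified sketch of EtherChannel/SEA outgoing-adapter selection.
# Not the actual VIOS code; it only illustrates how each hash mode picks a link.
from itertools import count

def select_adapter(n_adapters, mode, dst_ip=None, src_port=None, dst_port=None,
                   dst_mac=None, rr=count()):
    if mode == "default":
        # last byte of the destination IP (or of the MAC for non-IP traffic)
        key = int(dst_ip.split(".")[-1]) if dst_ip else int(dst_mac.split(":")[-1], 16)
    elif mode == "src_dst_port":
        key = (src_port or 0) + (dst_port or 0)      # combined port values
    elif mode == "src_port":
        key = src_port or 0
    elif mode == "dst_port":
        key = dst_port or 0
    elif mode == "round_robin":                       # fifth mode, EtherChannel only
        key = next(rr)
    else:
        raise ValueError(mode)
    return key % n_adapters

# Two connections between the same IP pair land on the same link with the
# default (IP-based) mode, but can use different links with a port-based mode.
print(select_adapter(2, "default", dst_ip="10.0.0.7", src_port=40001, dst_port=443))
print(select_adapter(2, "src_dst_port", dst_ip="10.0.0.7", src_port=40001, dst_port=443))
print(select_adapter(2, "src_dst_port", dst_ip="10.0.0.7", src_port=40002, dst_port=443))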
AIX also supports IP Multipath Routing, that is, multiple routes can be specified
to the same destination (host or net routes), thus achieving load balancing.
If the upstream IP network does not provide a First Hop Redundancy Protocol
(FHRP), such as Virtual Router Redundancy Protocol (VRRP), Hot Standby
Router Protocol (HSRP), or Gateway Load Balancing Protocol (GLBP), the POWER
system can use the Dead Gateway Detection (DGD) mechanism: if DGD detects
that a gateway is down, it allows an alternative route to be selected. Using
DGD in conjunction with IP multipathing, the platform can provide IP-level load
balancing and failover. DGD uses ICMP and TCP mechanisms to detect gateway
failures. HSRP and GLBP are Cisco proprietary protocols.
AIX supports Virtual IP Address (VIPA) interfaces (vi0, vi1, and so on) that can
be associated with multiple underlying interfaces. The use of a VIPA enables
applications to bind to a single IP address. This feature can be used for failover
and load balancing in conjunction with multipath routing and DGD.
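The interplay of multipath routing, DGD, and a VIPA can be sketched as follows. The gateway addresses and the probe function are illustrative only; the real DGD probes are implemented by AIX using ICMP and TCP, as noted above.

# Illustrative sketch of IP multipathing with dead gateway detection.
# Gateway addresses and the probe function are examples, not AIX internals.
routes = [
    {"gateway": "192.168.1.1", "alive": True},
    {"gateway": "192.168.1.2", "alive": True},
]

def probe(gateway):
    """Stand-in for the ICMP/TCP probes DGD uses to check a gateway."""
    return gateway != "192.168.1.1"        # pretend the first gateway just failed

def dead_gateway_detection():
    for route in routes:
        route["alive"] = probe(route["gateway"])

def next_hop(flow_id):
    """Load balance flows across the gateways that are still alive."""
    alive = [r for r in routes if r["alive"]]
    if not alive:
        raise RuntimeError("no usable gateway")
    return alive[flow_id % len(alive)]["gateway"]

dead_gateway_detection()
print(next_hop(0), next_hop(1))            # all flows now use 192.168.1.2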
Considerations
The following could be reasons for not using routing features at the server level:
Troubleshooting and modification of routing configurations are more complex
because the routing domain is spread across several administrative teams, which
increases operational expenses.
The routed and gated daemons are not as advanced as commercial routing
solutions. For example, IP fast rerouting mechanisms are not implemented in
gated and routed.
The IBM System p partition architecture, AIX, and the Virtual I/O Server have
been security certified to the EAL4+ level. For more information, see:
http://www-03.ibm.com/systems/p/os/aix/certifications/index.html
x86 is probably the most widely used server technology today. IBM System x servers
support both Intel and AMD hardware virtualization assistance. System x servers
are built with the IBM X-Architecture® blueprint, which melds industry standards
and innovative IBM technology. Some models even include VMware embedded
in hardware to minimize deployment time and improve performance.
IBM System x and BladeCenter are now part of the IBM CloudBurst™ offering
(see 2.8.2, “CloudBurst”), so these platforms are also available as integrated
modules for a cloud data center.
The following are some important benefits of virtualization that can be achieved
with System x and BladeCenter:
Optimize and lower the costs (CAPEX and OPEX) due to:
– A higher degree of server utilization
– Reduced power and cooling costs
– Simpler, more comprehensive server management
– Reliability and availability to minimize downtime
IBM x3850 M2 and x3950 M2 servers deliver consolidation and virtualization
capabilities, such as:
– IBM X-Architecture and eX4 chipsets are designed for virtualization.
– Scales easily from 4 to 16 sockets.
IBM BladeCenter provides an end-to-end blade platform for virtualization of
client, server, I/O, networking, and storage.
IBM Systems Director enables new and innovative ways to manage IBM
Systems across a multisystem environment, improving service with integrated
systems management by streamlining the way physical and virtual systems
are managed.
– Unifies the management of physical and virtual IBM systems, delivering a
consistent look and feel for common management tasks.
– Multisystem support for IBM Power Systems, System x, BladeCenter,
System z, and Storage Systems.
– Reduced training cost by means of a consistent and unified platform
management foundation and interface.
Full integration of Virtualization Manager into IBM Systems Director 6.1 base
functionality:
– Consolidated management for different virtualized environments and tools,
including VMware ESX, Microsoft Virtual Server, and Xen virtualization, as
well as Power Systems Hardware Management Console (HMC) and
Integrated Virtualization Manager (IVM).
– Track alerts and system status for virtual resources and their related
resources to easily diagnose problems affecting virtual resources.
– Perform lifecycle management tasks, such as creating additional virtual
servers, editing virtual server resources, or relocating virtual servers to
alternate physical hosts.
– Get quick access to native virtualization management tools through
launch-in-context.
– Create automation plans based on events and actions from virtual and
physical resources, such as relocating a virtual server.
IBM Systems Director integration with VMware VirtualCenter
– VMware VirtualCenter client is installed on the management console and
VMware VirtualCenter server is installed on a physical system with:
• IBM Systems Director Agent
• Virtualization Manager Agent for VMware VirtualCenter
– Drive VMware VMotion using physical hardware status information
through automated policies.
For more information about the IBM BladeCenter, refer to the IBM BladeCenter
Products and Technology Redbooks publication, which can be found here:
http://www.redbooks.ibm.com/abstracts/sg247523.html?Open
The IBM blade technologies cover a very broad spectrum of the market, including
Intel, AMD, and POWER servers, and Cell BE blades. This allows great
flexibility for running disparate workloads.
The IBM BladeCenter chassis portfolio (shown in Figure 2-17 on page 74) is
designed for different customer needs:
BladeCenter S - For SMBs or remote locations
BladeCenter E - High density and energy efficiency for the enterprise
BladeCenter H - For commercial applications, high-performance computing
(HPC), technical clusters, and virtualized enterprise solutions needing high
throughput
BladeCenter HT - For telecommunications and rugged environments that
require high performance and flexible I/O
BladeCenter T - For telecommunications and rugged environments such as
military, manufacturing, or medical
The IBM BladeCenter network hardware supports 1 Gbps and 10 Gbps native
Ethernet or the new converged enhanced Ethernet. Blade servers can have
several 10 Gbps converged network adapters or just 1-Gbps Ethernet adapters.
Switches can be from several vendors such as BNT, Brocade or Cisco and can
also be legacy Ethernet (1 or 10 Gbps) or 10-Gbps FCoE switches with up to 10
10-Gbps uplink ports each. Each H chassis, for example, can have up to four
switches, which represents up to 400 Gbps full-duplex.
IBM BladeCenter also has the following storage options available, external or
internal, including internal Solid® State Drives:
External storage: NAS, FC SAN, iSCSI SAN, SAS SAN
Internal storage: Solid state (SSD), Flash, SAS, SATA
The server can also be booted from the SAN, which further virtualizes the
operating system from the physical device.
Blade Open Fabric Manager (BOFM) is also useful when replacing failed blades.
After replacement the switch tables are unchanged and any other configuration
depending on the MAC address or the WW name is unaffected. In addition,
installations can be preconfigured before plugging in the first blade.
By using the Virtual Fabric solution, the number of ports on the Virtual Fabric
adapter can quadruple, while at the same time reducing switch modules by up to
75%. Characteristics that are leveraged by this technology include:
Multiple virtual ports and protocols (Ethernet, FCoE, and iSCSI) from a single
physical port.
Up to 8 virtual NICs, or a mix of vNICs and vCNAs, per adapter.
Each virtual port operates anywhere between 100 Mbps and 10 Gbps and can run
as Ethernet, FCoE, or iSCSI.
Shared bandwidth across multiple applications.
Support of vendor-branded switches.
For more information about the Virtual Fabric, refer to IBM BladeCenter Virtual
Fabric Solutions, REDP-4673, which can be found here:
http://www.redbooks.ibm.com/abstracts/redp4673.html?Open
2.5 Other x86 virtualization software offerings
Today, IBM server virtualization technologies are at the forefront of helping
businesses with consolidation, cost management, and business resiliency.
Virtualization was first introduced by IBM in the 1960s to allow the partitioning of
large mainframe environments. IBM has continued to innovate around server
virtualization and has extended it from the mainframe to the IBM Power Systems,
IBM System p, and IBM System i product lines. In the industry-standard
environment, VMware, Microsoft Hyper-V, and Xen offerings are available for IBM
System x and IBM BladeCenter systems.
In the x86 server environment, virtualization has become the standard. First,
consolidating underutilized physical servers into virtual machines placed onto a
smaller number of more powerful servers reduces both the number of physical
servers and their environmental usage (electrical power, air conditioning, and
computer room floor space). Second, because a virtualized x86 server is
encapsulated into a single logical file, it becomes easier to move this server from
one site to another for disaster recovery purposes. Beyond the servers
themselves, the next x86 environment being virtualized is the desktop.
There has been significant work to introduce virtualization to Linux in the x86
markets using hypervisor technology. Advantages of a Linux-based hypervisor
include:
The hypervisor benefits from contributions from the entire open source
community, not just one vendor (Open Source Solutions).
Linux currently supports a very large base of hardware platforms, so the hypervisor
is not limited to the platforms certified by a single vendor. Also, as new
technologies are developed, a Linux-based hypervisor can take advantage of
them, such as iSCSI, InfiniBand, 10 Gigabit Ethernet, and so forth.
Although Xen originated at XenSource (now part of Citrix), the nature of open
source software ensures that there are multiple forks and distributions, many
released by vendors other than Citrix. In fact, Virtual Iron, Oracle VM, and Sun xVM
are all based on Xen. Red Hat Inc. includes the Xen hypervisor as part of Red Hat
Enterprise Linux (RHEL) software.1 At the time of writing, XenServer 5.5 is the
most widely supported version.
Xen allows paravirtual guests to have direct access to I/O hardware. The
scheduler is optimized to support virtual machine guests. I/O overhead is
reduced in Xen, as compared to full virtualization techniques.
On the other hand, hardware-assisted virtualization today offers better
performance than paravirtualization. Since the release of XenServer 3.0,
hardware-assisted virtualization has been supported through the use of the Intel
VT-x and AMD-V extensions integrated into modern x86/x64 processors.
1 Red Hat is taking two directions. Although its current portfolio is built around the Xen hypervisor, it
seems Red Hat's strategic route will be KVM. Actually, Red Hat is supporting both the paravirtualized
version and the fully virtualized version. The latter requires hardware support (Intel VT-x or AMD
Pacifica).
2 QEMU is a community-driven project and all Open Source hypervisors exploit it. It can emulate nine
target architectures on 13 host architectures and provide full system emulation supporting more
than 200 devices.
Windows hosted on XenServer products is supported by Microsoft; the
collaboration between Microsoft and Citrix is focused on driving industry
standards within virtualization. This extends to interoperability between Microsoft
and Xen guests, allowing the optional Citrix Essentials package to enable
dynamic virtual machine migration between Microsoft Hyper-V and Citrix
XenServer.
To update the external devices, the destination host sends a gratuitous ARP
packet as the last step of the migration. Spanning tree protocol should not be a
problem because loops are automatically prevented inside the hypervisor. Citrix
XenServer 5.5 supports up to 6 physical Ethernet NICs per physical server.
Physical NICs can be bonded in XenServer so that they act as one.
From a management perspective, Xen has its own kernel, so it uses a separate
VM monitor. Root access is not needed to monitor a guest and see how it is
performing. Xen is the first software that starts on the machine; it then loads its
first Linux operating system (Domain 0).
IBM supports both the management tools and the VM image packaging.
2.5.2 KVM
Kernel-based Virtual Machine (KVM) is a Linux kernel module that turns Linux
into a hypervisor; this is known as Linux-based virtualization. It requires
hardware virtualization assistance (Intel VT-x or AMD Pacifica). It consists of a
loadable kernel module, kvm.ko, which provides the core virtualization
infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko.
KVM development, which has been carried out since 2006 by the start-up
Qumranet, originated in the effort to find an alternative to Xen that
could overcome some of its limitations, such as support for NUMA computing
architectures. Originally, KVM was intended as a base technology for desktop
virtualization and was never intended to be a stand-alone hypervisor. The original
KVM patchset was submitted in October 2006. It has been included in the
Linux kernel since then, and Qumranet was acquired by Red Hat Inc. in
September 2008. RHEV3 from Red Hat incorporates KVM. For more information
about this subject, visit the following site:
http://www.linux-kvm.org/
Without QEMU, KVM is not a hypervisor. With QEMU, KVM can run unmodified
guest operating systems (Windows, FreeBSD, or Linux). The emulation has very
good performance (near-native) because it is supported by the hardware.
KVM can run on any x86 platform with a Linux operating system running. The
first step is to boot Linux. Then the virtual machines (VMs) are seen as Linux
processes, which can be assigned with a different scheduling priority.
Figure 2-19 on page 82 displays an overview of KVM.
KVM is not a hosted hypervisor. Instead, it uses the hardware to get control of
the machine. No modifications to the Linux operating system are required (in
upstream Linux). In fact, Linux provides essential services (hardware support,
bootstrap, memory management, process management and scheduling, and
access control), and KVM becomes a loadable module. It is essentially a
CPU/MMU driver.
Moreover, running under a Linux kernel allows for not only physical but also
virtual memory paging and oversubscription. Recently (accepted for inclusion in
the Linux kernel 2.6.32 release) the Kernel Samepage Merging (KSM) feature
has been added. By looking for identical pages and merging them, this provides
the memory overcommit capabilities to make efficient usage of the available
physical memory and achieve more VMs per host and thus higher consolidation
ratios.
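The idea behind KSM can be sketched in a few lines of Python: scan pages, detect identical contents, and keep a single shared copy. This is a toy model; the real implementation works on 4 KiB pages inside the kernel and marks merged pages copy-on-write.

# Toy model of Kernel Samepage Merging: identical guest pages are collapsed
# into one shared copy, which is what enables memory overcommitment.
import hashlib

def merge_pages(pages):
    shared = {}                      # content hash -> single stored copy
    page_table = []                  # each "guest page" now points into shared
    for content in pages:
        digest = hashlib.sha256(content).hexdigest()
        shared.setdefault(digest, content)
        page_table.append(digest)
    return shared, page_table

# Two idle guests booted from the same image have many identical pages.
guest_pages = ([b"\x00" * 4096] * 50 + [b"kernel text page"] * 30 +
               [b"unique page %d" % i for i in range(20)])
shared, table = merge_pages(guest_pages)
print("pages presented to guests:", len(table))   # 100
print("physical copies kept:     ", len(shared))  # 22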
3 RHEV is short for Red Hat Enterprise Virtualization, which is a distribution from Red Hat that is
comprised of two components: RHEV-H and RHEV-M. RHEV-H (or Red Hat Enterprise
Virtualization Hypervisor) is based on the KVM open source hypervisor. RHEV-M is Red Hat
Enterprise Virtualization Manager, an enterprise grade server management system.
To overcome the limits of the paravirtualized I/O approach, KVM can leverage
hardware-assisted virtualization for PCI pass-through on Intel and AMD
platforms and SR-IOV adapters for hypervisor-bypass, making it possible to
obtain near-native I/O performance.
With regard to virtual networking (or guest networking), KVM internal network
can be configured in different modes4:
User Networking - The VM simply accesses the external network and the
Internet.
Private Virtual Bridge - A private network between two or more VMs that is not
seen by the other VMs and the external network.
Public Bridge - The same as Private Virtual Bridge, but with external
connectivity.
4 Excerpt from http://www.linux-kvm.org/page/Networking
KVM can also be seen as a virtualization driver, and in that sense it can add
virtualization support for multiple computing architectures; it is not confined
to the x86 space and also supports IA64, S390, and embedded PowerPC®.
KVM permits hybrid mode operation; regular Linux applications can run
side-by-side with VMs, achieving a very high rate of hardware efficiency.
Moreover, being based on Linux, KVM supports all the hardware that Linux
supports. VM Storage access, for example, is performed through Linux.
(Figure: VMware vSphere 4 application services, grouped into availability (VMotion,
Storage vMotion, HA, Fault Tolerance, Disaster Recovery), security (vShield Zones,
VMsafe), and scalability (DRS, Hot Add).)
The vSphere 4 kernel has been developed for 64-bit processors. Also, vSphere
utilizes the virtualization acceleration feature on microprocessors, such as Intel
VT and AMD-V. This will enhance virtualization performance and scalability
compared with ESX Version 3, which is built as 32 bit and software-based.
There are two types of vSphere software. The vSphere ESX software has a
service console that is compatible with Red Hat Enterprise Linux (RHEL) 5.2.
5 VMware, Introduction to VMware vSphere, EN-000102-00,
http://www.vmware.com/pdf/vsphere4/r40/vsp_40_intro_vs.pdf
Third-party applications can be installed into the service console to provide
additional management for the ESX server.
The other type is called ESXi; it does not have a service console. VMware has
published a set of APIs that can be used by third parties to manage this
environment. This environment is deemed more secure by not having this Linux
environment to maintain. ESXi is maintained using the VMkernel interface.
ESX overview
All virtual machines (VMs) run on top of the VMkernel and share the physical
resources. VMs are connected to the internal networking layer and gain
transparent access to the ESX server. With virtual networking, virtual machines
can be networked in the same way as physical machines. Figure 2-21 on
page 86 shows an overview of the ESX structure.
There are similarities and differences between virtual switches and physical
switches. There are two types of virtual switches: vNetwork Standard Switch
(vSS) and the vNetwork Distributed Switch (vDS). The vSS is configured at the
ESX host level, whereas the vDS is configured at the vCenter level and functions
as a single virtual switch across the associated ESX hosts.
Virtual NIC
Virtual NIC, also called Virtual Ethernet Adapter, is a virtualized adapter that
emulates Ethernet and is used by each VM. Virtual NIC has its own MAC
address, which is transparent to the external network.
Virtual switch port
A virtual port corresponds to a physical port on a physical Ethernet switch. A
virtual NIC connects to the port that you define on the virtual switch. The
maximum number of virtual switch ports per ESX server is 4096. The
maximum number of active ports per ESX server is 1016.
Port group
A port group specifies port configuration options such as bandwidth
limitations and VLAN tagging policies for each member port. The maximum
number of port groups per ESX server is 512.
Uplink port
Uplink ports are ports associated with physical adapters, providing a
connection between a virtual network and a physical network.
Physical NIC
Physical NIC is a physical Ethernet adapter that is installed on the server. The
maximum number of physical NICs per ESX server varies depending on the
Ethernet adapter hardware.
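These building blocks and the per-host limits quoted above can be summarized in a small data model. The class and attribute names below are illustrative only and are not VMware APIs.

# Sketch of the virtual networking building blocks and the per-ESX-host limits
# quoted above. Class and attribute names are illustrative, not VMware APIs.
MAX_VSWITCH_PORTS = 4096     # virtual switch ports per ESX server
MAX_ACTIVE_PORTS  = 1016     # concurrently active ports per ESX server
MAX_PORT_GROUPS   = 512      # port groups per ESX server

class PortGroup:
    def __init__(self, name, vlan_id=None, avg_bw=None):
        self.name, self.vlan_id, self.avg_bw = name, vlan_id, avg_bw

class VirtualSwitch:
    def __init__(self):
        self.port_groups = {}
        self.connected_vnics = []        # virtual NICs plugged into ports
        self.uplinks = []                # physical NICs (vmnics)

    def add_port_group(self, pg):
        if len(self.port_groups) >= MAX_PORT_GROUPS:
            raise RuntimeError("port group limit reached")
        self.port_groups[pg.name] = pg

    def connect_vnic(self, vm_name, pg_name):
        if len(self.connected_vnics) >= MAX_ACTIVE_PORTS:
            raise RuntimeError("active port limit reached")
        self.connected_vnics.append((vm_name, pg_name))

vss = VirtualSwitch()
vss.add_port_group(PortGroup("Production", vlan_id=10))
vss.connect_vnic("vm01", "Production")
print(len(vss.port_groups), len(vss.connected_vnics))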
VLAN tagging
802.1Q tagging is supported on a virtual switch. There are three types of
configuration modes:
Virtual switch tagging (VST)
Define one port group on a virtual switch for each VLAN, then attach the
virtual machine’s virtual adapter to the port group instead of to the virtual
switch directly. The virtual switch port group tags all outbound frames and
removes tags for all inbound frames. It also ensures that frames on one VLAN
do not leak into a different VLAN. Use of this mode requires that the physical
switch provide a trunk. This is the most typical configuration.
Virtual machine guest tagging (VGT)
Install an 802.1Q VLAN trunking driver inside the virtual machine, and tags
will be preserved between the virtual machine networking stack and external
switch when frames are passed from or to virtual switches.
External switch tagging (EST)
Use external switches for VLAN tagging. This is similar to a physical network,
and a VLAN configuration is normally transparent to each individual physical
server.
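A minimal sketch of what virtual switch tagging does to frames on the way out of and into a guest follows. Frames are modeled as Python dictionaries for readability; a real virtual switch of course operates on Ethernet frames.

# Simplified sketch of virtual switch tagging (VST): the port group tags
# outbound frames with its VLAN ID and strips the tag on inbound frames,
# so the guest never sees 802.1Q headers.
PORT_GROUP_VLAN = {"Production": 10, "Test": 20}

def outbound(frame, port_group):
    tagged = dict(frame)
    tagged["vlan"] = PORT_GROUP_VLAN[port_group]     # tag toward the physical trunk
    return tagged

def inbound(frame, port_group):
    if frame.get("vlan") != PORT_GROUP_VLAN[port_group]:
        return None                                  # frame belongs to another VLAN: dropped
    untagged = dict(frame)
    del untagged["vlan"]                             # guest receives an untagged frame
    return untagged

f = outbound({"src": "vm01", "dst": "server"}, "Production")
print(f)                                             # frame now carries vlan 10
print(inbound(f, "Test"))                            # None: no VLAN leakage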
QoS
The virtual switch shapes traffic by establishing parameters for three outbound
traffic characteristics: Average bandwidth, burst size, and peak bandwidth. You
can set values for these characteristics for each port group. For the vNetwork
Distributed Switch, the same traffic shaping parameters can also be applied to
inbound traffic.
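The three shaping parameters can be read as a token-bucket policy: average bandwidth refills the bucket, burst size caps how much credit can accumulate, and peak bandwidth caps the instantaneous send rate. The following sketch uses illustrative values and helper names and is not the ESX shaper itself.

# Token-bucket sketch of per-port-group traffic shaping. Values and helper
# names are illustrative only.
class Shaper:
    def __init__(self, avg_kbps, burst_kb, peak_kbps):
        self.avg_kbps, self.burst_kb, self.peak_kbps = avg_kbps, burst_kb, peak_kbps
        self.tokens_kb = burst_kb                    # start with a full bucket

    def tick(self, seconds):
        """Refill tokens at the average rate, never beyond the burst size."""
        self.tokens_kb = min(self.burst_kb,
                             self.tokens_kb + self.avg_kbps * seconds / 8)

    def allow(self, frame_kb, seconds):
        """Send only if tokens remain and the peak rate is respected."""
        within_peak = frame_kb * 8 / seconds <= self.peak_kbps
        if within_peak and self.tokens_kb >= frame_kb:
            self.tokens_kb -= frame_kb
            return True
        return False

pg = Shaper(avg_kbps=10000, burst_kb=1250, peak_kbps=20000)
print(pg.allow(frame_kb=1000, seconds=0.5))   # True: inside burst and peak
print(pg.allow(frame_kb=1000, seconds=0.5))   # False: bucket nearly empty
pg.tick(1.0)                                   # one second at the average rate
print(pg.allow(frame_kb=1000, seconds=0.5))   # True again after refill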
High availability
In terms of networking, you can configure a single virtual switch to multiple
physical Ethernet adapters using NIC teaming. A team can share the load of
traffic between physical and virtual networks and provide passive failover in the
event of a hardware failure or a network outage. You can set NIC teaming
policies (virtual port ID, source MAC hash, or IP hash) at the port group level.
High availability of VM is described later in this section.
Figure 2-24 on page 91 shows an overview diagram of vDS. The vDS is really a
vSS with additional enhancements. Virtual port groups are associated with a
vDS and specify port configuration options for each member port. Distributed
virtual port groups define how a connection is made through the vDS to the
network. Configuration parameters are similar to those available with port groups
on standard switches. The VLAN ID, traffic shaping parameters, port security,
teaming, the load balancing configuration, and other settings are configured
here.
Virtual uplinks provide a level of abstraction for the physical NICs (vmnics) on
each host. NIC teaming, load balancing, and failover policies on the vDS and
virtual port groups are applied to the uplinks and not to the vmnics on individual
hosts. Each vmnic on each host is mapped to virtual uplinks, permitting teaming
and failover consistency, irrespective of vmnic assignments.
IPv6 support for guest operating systems was introduced in VMware ESX 3.5.
With vSphere, IPv6 support is extended to include the VMkernel and service
console, allowing IP storage and other ESX services to communicate over IPv6.
Nexus 1000V
VMware provides a Software Development Kit (SDK) to third-party partners so that
they can create their own distributed virtual switch plug-ins; the Cisco Nexus 1000V
is one of these plug-ins.
Nexus 1000V has two major components: the Virtual Ethernet Module (VEM)
running on each ESX hypervisor kernel, and, as in the vDS, a Virtual Supervisor
Module (VSM) managing multiple clustered VEMs as a single logical switch.
VSM is an appliance that is integrated into vCenter management.
Cisco implements VN-Link on both the Nexus 1000V and other Nexus hardware
switches. VN-Link offloads the switching function from the virtual switch. It defines
an association between a virtual NIC and a virtual port, called the VNTag ID. For more
information about VN-Link, see:
http://www.cisco.com/en/US/netsol/ns894/index.html
There are other, similar technologies. Virtual Ethernet Port Aggregator (VEPA)
and Virtual Ethernet Bridging (VEB), which can offload switching function from a
virtual switch, have also been proposed to the IEEE.
VMotion overview
This function moves a running VM from one physical server to another with
minimum impact to users. Figure 2-26 shows an overview of VMotion.
In this overview, the ESX servers (ESX Server A and ESX Server B) share the
storage (for example, a SAN or iSCSI volume). When VM3 is moved by VMotion
from ESX Server A, the memory contents of the virtual machine are copied to
ESX Server B. After the transfer is completed, VM3 is activated on ESX Server B.
This technology provides for a dynamic infrastructure
that includes features for:
Dynamic resource optimization in the resource group
High availability of VM
Easy deployment of VM
VMotion only works in an L2 environment. In other words, source and target ESX
servers should be located within the same broadcast domain.
VM Direct I/O
VMDirectPath is a new capability provided in vSphere for direct assignment of
physical NIC/HBA to a VM as guest. VMDirectPath is designed for VMs that
require dedicated network bandwidth. However, virtual machines that use Direct
I/O cannot perform additional VM functions such as VMotion, fault tolerance, and
suspend/resume. Note that this function requires specific network adapters listed
on the compatibility list provided by VMware.
In general, these APIs enable a security product to inspect and control certain
aspects of VM access to memory, disk, and the network from outside the VM, using
the hypervisor to look inside a VM without actually loading any host agents. One
example of a VMsafe implementation is VSS from IBM. Refer to the VSS
section for details (3.4.6, “Virtual network security”).
Careful capacity planning and testing should be performed. At this time, only
virtual machines with one virtual CPU are supported.
NetIOC is able to individually identify and prioritize the following traffic types
leaving an ESX/ESXi host on a vDS-connected uplink:
Virtual machine traffic
Management traffic
iSCSI
NFS
VMware Fault Tolerance (VFT) logging
VMotion
In Figure 2-27 on page 98, NetIOC is implemented on the vDS using shares and
maximum limits. Shares are used to prioritize and schedule traffic for each
physical NIC, and maximum limits on egress traffic are applied over a team of
physical NICs. Limits are applied first and then shares.
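A simplified sketch of the "limits first, then shares" behavior on one congested uplink follows. The traffic types, share values, and limits are illustrative, and this simplification does not redistribute unused bandwidth from limited traffic types.

# Sketch of NetIOC-style allocation on one congested uplink: host limits are
# applied first, then remaining bandwidth is divided by relative shares.
# Numbers and names are illustrative, not the actual ESX scheduler.
def allocate(uplink_mbps, traffic):
    # Step 1: cap each traffic type at its configured maximum limit (if any).
    capped = {name: min(t.get("limit", uplink_mbps), uplink_mbps)
              for name, t in traffic.items()}
    # Step 2: split uplink bandwidth in proportion to shares, never above the cap.
    total_shares = sum(t["shares"] for t in traffic.values())
    alloc = {}
    for name, t in traffic.items():
        fair = uplink_mbps * t["shares"] / total_shares
        alloc[name] = round(min(fair, capped[name]), 1)
    return alloc

traffic_types = {
    "virtual machine": {"shares": 100},
    "vMotion":         {"shares": 50, "limit": 2000},   # Mbps
    "iSCSI":           {"shares": 50},
    "FT logging":      {"shares": 25},
}
print(allocate(uplink_mbps=10000, traffic=traffic_types))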
LBT dynamically adjusts the mapping of virtual ports to physical NICs to best
balance the network load entering or leaving the ESX/ESXi 4.1 host. When LBT
detects an ingress or egress congestion condition on an uplink, signified by a
mean utilization of 75% or more over a 30-second period, it attempts to move
one or more of the virtual port-to-vmnic mappings (flows) to lesser-used links
within the team.
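The LBT trigger described above can be expressed as a simple control loop. The thresholds are those quoted in the text; the port-move logic itself is illustrative.

# Sketch of the Load-Based Teaming trigger: if the mean utilization of an
# uplink over a 30-second window reaches 75%, move a virtual-port flow to the
# least-used uplink in the team. The move logic is illustrative only.
THRESHOLD = 0.75          # mean utilization that signifies congestion
WINDOW_SECONDS = 30

def mean_utilization(samples):
    return sum(samples) / len(samples)

def rebalance(uplinks):
    """uplinks: name -> {'samples': [...], 'ports': [virtual ports mapped here]}"""
    congested = [n for n, u in uplinks.items()
                 if mean_utilization(u["samples"]) >= THRESHOLD and u["ports"]]
    for name in congested:
        target = min(uplinks, key=lambda n: mean_utilization(uplinks[n]["samples"]))
        if target != name:
            moved = uplinks[name]["ports"].pop()
            uplinks[target]["ports"].append(moved)
            print("moved", moved, "from", name, "to", target)

team = {
    "vmnic0": {"samples": [0.82] * WINDOW_SECONDS, "ports": ["vm01", "vm02"]},
    "vmnic1": {"samples": [0.20] * WINDOW_SECONDS, "ports": ["vm03"]},
}
rebalance(team)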
IPv6 enhancements
vSphere 4.1 is undergoing testing for U.S. NIST Host Profile compliance that
includes requirements for IPsec and IKEv2 functionality (with the exception of
MLDv2 plus PKI and DH-24 support within IKE).
Note: IPv6 is not supported for vSphere vCLI, VMware HA and VMware FT
logging. IKEv2 is disabled by default.
The system administrator creates the organization and assigns resources. After
the organization is created, the system administrator emails the organization's
URL to the administrator assigned to the organization. Using the URL, the
organization administrator logs in to the organization and sets it up, configures
resource use, adds users, and selects organization-specific profiles and settings.
Users create, use, and manage virtual machines and vApps.
6 Taken from “Cloud Director Installation and Configuration Guide”, EN-000338-00, found at
http://www.vmware.com/pdf/vcd_10_install.pdf
Figure 2-28 shows a vCloud Director cluster comprised of four server hosts.
Each host runs a group of services called a vCloud Director cell. All hosts in the
cluster share a single database. The entire cluster is connected to three vCenter
servers and the ESX/ESXi hosts that they manage. Each vCenter server is
connected to a vShield Manager host, which provides network services to the
cloud.
Table 2-6 provides information about the limits in a vCloud Director installation.
Category          Maximum number
vCenter Servers   10
Users             5000
The vCloud Director installation and configuration process creates the cells,
connects them to the shared database, and establishes the first connections to a
vCenter server, ESX/ESXi hosts, and vShield Manager. After installation and
configuration is complete, a system administrator can connect additional vCenter
servers, vShield Manager servers, and ESX/ESXi hosts to the Cloud Director
cluster at any time.
To learn more about the value of IBM System x, BladeCenter, and VMware, see
IBM Systems Virtualization: Servers, Storage, and Software, REDP-4396.
2.5.4 Hyper-V R2
Windows Server 2008 R2 includes Hyper-V R2, which is a hypervisor-based
architecture (“bare metal” hypervisor) that is a very thin software layer (less than
1 MB in space). It was released in the summer of 2008. The free, standalone
version of Hyper-V (Microsoft Hyper-V Server 2008) was released in October.
The two biggest concerns with Hyper-V have been addressed in the R2 release:
It fully supports failover clustering.
It now includes live migration, which is the ability to move a virtual machine
from one physical host to another without service interruption.
A primary virtual machine (parent partition) runs only Windows Server 2008 and
the virtualization stack, and has direct access to the physical machine’s hardware
and I/O. The other VMs (children) do not have direct access to the hardware.
Also, Hyper-V implements a virtual switch. The virtual switch is the only
networking component that is bound to the physical network adapter. The parent
partition and the child partitions use virtual network adapters (known as vNICs),
which communicate with the virtual switch using Microsoft Virtual Network Switch
Protocol. Multiple virtual switches can be implemented, originating different
virtual networks.
VLAN IDs can be specified on each Virtual Network Virtual Switch except for the
Private Network Virtual Switch.
Figure 2-29 on page 103 shows an overview of Hyper-V. The purple VMs are
fully virtualized. The green VMs are paravirtualized (enlightened partitions).
Paravirtualized drivers are available for guests; guests not implementing
paravirtualized drivers must traverse the I/O stack in the parent partition,
degrading guest performance. The VMbus is a logical channel that connects
each VM, using the Virtualization Service Client (VSC) to the parent partition that
runs the Virtualization Service Provider (VSP). The VSP handles the device
access connections from the child partitions (VMs).
At least one VLAN must be created to allow the VMs to communicate with the
network (VLAN creation is supported by Hyper-V). The physical NIC is then
configured to act as a virtual switch on the parent partition. Then the virtual NICs
are configured for each child partition to communicate with the network using the
virtual switch.
Each virtual NIC can be configured with either a static or dynamic MAC address.
At this time NIC teaming is not supported. Spanning tree protocol should not be a
problem because loops are prevented by the virtual switch itself.
One significant advantage of the Microsoft approach is that SMSE is not limited
to managing virtual environments because it was designed to manage all
systems, including physical and virtual.
The following are key storage and virtualization technologies from IBM,
discussed further in this section:
Storage Area Networks (SAN) and SAN Volume Controller (SVC)
Virtualization Engine TS7520: virtualization for open systems
Virtualization Engine TS7700: mainframe virtual tape
XIV Enterprise Storage
IBM Disk Systems
Storwize V7000
Network Attached Storage (NAS)
N Series
Figure 2-30 on page 105 illustrates the IBM Storage Disk Systems portfolio, with
the components that will be briefly described in the next sections.
(Figure 2-30: IBM disk portfolio, 4Q2010. Enterprise tier: DS8000, XIV, SONAS, and
N series; midrange tier: DS5000 and Storwize V7000; entry tier: DS3000 for SMBs
and branch office locations.)
For more information about IBM Storage Solutions, refer to IBM System Storage
Solutions Handbook, SG24-5250, which can be found here:
http://www.redbooks.ibm.com/abstracts/sg245250.html?Open
2.6.1 Storage Area Networks (SAN) and SAN Volume Controller (SVC)
SAN-attached storage connects storage to servers with a Storage Area Network
using ESCON or Fibre Channel technology. Storage area networks make it
possible to share homogeneous storage resources across the enterprise. For
many companies, however, information resources are spread over various
locations and storage environments with products from different vendors. With
this in mind, the best storage solution takes advantage of the existing investment
and provides growth when it is needed.
SVC helps to simplify storage management by presenting a single view of
storage volumes. Similarly, SVC is an integrated solution supporting high
performance and continuous availability in open system environments, as shown
in Figure 2-32.
The solution runs on clustered storage engines, based on System x servers and
open standards-based technology. Industry-standard host bus adapters (HBAs)
interface with the SAN fabric. SVC represents storage to applications as virtual
disks, created from the pool of managed disks residing behind the storage
engines. Storage administrators can scale performance by adding storage
engines and capacity by adding disks to the managed storage pool.
When combined with physical tape resources for longer-term data storage, the
TS7520 Virtualization Engine is designed to provide an increased level of
operational simplicity and energy efficiency, support a low cost of ownership, and
increase reliability to provide significant operational efficiencies.
For further details, see “Virtualization Engine TS7520: Virtualization for open
systems” in IBM Systems Virtualization: Servers, Storage, and Software,
REDP-4396, which can be found at:
http://www.redbooks.ibm.com/abstracts/redp4396.html?Open
The XIV Storage System is well suited for mixed or random access workloads,
such as the processing of transactions, video clips, images, and email; and
industries such as telecommunications, media and entertainment, finance, and
pharmaceutical; as well as new and emerging workload areas, such as Web 2.0.
Storage virtualization is inherent to the basic principles of the XIV Storage
System design: physical drives and their locations are completely hidden from
the user, which dramatically simplifies storage configuration, letting the system
lay out the user’s volume in an optimal way.
For more information about IBM XIV Storage System, refer to IBM XIV Storage
System: Architecture, Implementation, and Usage, SG24-7659, which can be
found at:
http://www.redbooks.ibm.com/abstracts/sg247659.html?Open
For more information about IBM Disk Systems Solutions, refer to IBM System
Storage Solutions Handbook, SG24-5250, which can be found at:
http://www.redbooks.ibm.com/abstracts/sg245250.html?Open
Management integration with IBM Systems Director
Fibre Channel or iSCSI interfaces
IBM also developed a cutting edge solution for distributed NAS called Scale Out
NAS (SoNAS). For more information about SoNAS, refer to IBM Scale Out
Network Attached Storage Architecture and Implementation, SG24-7875, which
can be found at:
http://www.redbooks.ibm.com/redpieces/abstracts/sg247875.html?Open
The IBM System Storage N series Gateways, an evolution of the N5000 series
product line, is a network-based virtualization solution that virtualizes tiered,
heterogeneous storage arrays, allowing clients to utilize the dynamic
virtualization capabilities available in Data ONTAP across multiple tiers of IBM
and vendor-acquired storage.
For more information about N Series Storage Systems, refer to IBM System
Storage N Series Hardware Guide, SG24-7840, which can be found here:
http://www.redbooks.ibm.com/abstracts/sg247840.html?Open
dynamically brings new capabilities to the business that can be exploited to
deliver higher efficiencies.
The ability to move virtual machines from physical host to physical host while the
virtual machines remain operational is another compelling capability. One of the
key values that IBM systems management software can provide is to mask the
complexities that are introduced by virtualization.
(Figure: IBM Service Management Reference Model.)
Tivoli Service Automation Manager (TSAM) allows new services to be rolled out
quickly in the data center to support dynamic infrastructure and cloud
computing services. The TSAM component is also based on the Tivoli Process
Automation Engine (TPAe), implementing a data model, workflows, and
applications for automating the management of IT services by using the notion of
Service Definitions and Service Deployment Instances.
TSAM and SRM rely on the CCMDB component to access information about
resources in the managed data center.
OMNIbus leverages the Netcool® Knowledge Library for SNMP integration with
many different technologies (more than 175 MIBs), more than 200 different
probes, and approximately 25 vendor alliances (including Cisco, Motorola,
Juniper, Ciena, Checkpoint, Alcatel, Nokia, and so on) to provide extensive
integration capabilities with third-party products.
Many external gateways are also available to provide integration with other Tivoli,
IBM (DB2® 7.1, Informix® 9.20, and so on), and non-IBM products (Oracle
10.1.0.2 EE, Siebel, Remedy 7, and MS SQL).
views from the Tivoli Monitoring family, Tivoli Service Request Manager®, Tivoli
Application Dependency Manager, and many more. Tivoli Netcool/OMNIbus can
serve as a “manager of managers” that leverages existing investments in
management systems such as HP OpenView, NetIQ, CA Unicenter TNG, and
many others.
Enhanced virtualization management
– Provides the ability to provision, delete, move, and configure virtual
machines across multiple platforms.
– Provides improved integration and management of storage infrastructure
supporting virtual systems through IBM TotalStorage Productivity Center.
Consolidated orchestration capability
– Provides an integrated Tivoli Intelligent Orchestrator (TIO) product into the
core TPM product for increased value at no additional cost.
IBM Tivoli Provisioning Manager 7.1.1 software uses the Tivoli process
automation engine that is also used by other IBM Service Management products
(such as the Change and Configuration Management Database and Tivoli
Service Request Manager) to provide a common “look and feel” and seamless
integration across different products.
TPM simplifies and automates data center tasks. The main capabilities are
summarized here:
It provides operating system imaging and bare metal provisioning.
TPM offers flexible alternatives for quickly creating and managing operating
system installs, cloned or scripted, with features such as dynamic image
management, single-instance storage, encrypted mass deployments, and
bandwidth optimization.
It provides software distribution and patch management over a scalable,
secure infrastructure.
TPM can automatically distribute and install software products defined in the
software catalog without creating specific workflows or automation packages
to deploy each type of software.
It provides automated deployment of physical and virtual servers through
software templates.
It provides provisioning support for different system platforms.
– It provisions new virtual machines on VMWare ESX servers.
– It provisions Windows OS and applications onto a new ESX virtual
machine.
– It creates new LPARs on pSeries®.
– It provisions new AIX OS onto a pSeries LPAR through NIM.
It integrates with the IBM TotalStorage Productivity Center to provide a
storage capacity provisioning solution designed to simplify and automate
complex cross-discipline tasks.
It supports a broad range of networking devices and nodes, including
firewalls, routers, switches, load balancers and power units from leading
manufacturers (such as Cisco, Brocade Networks, Extreme, Alteon, F5, and
others).
It provides compliance and remediation support.
– Using compliance management, the software and security setup on a
target computer (or group of computers) can be examined, and then that
setup can be compared to the desired setup to determine whether they
match.
– If they do not match, noncompliance occurs and recommendations about
how to fix the noncompliant issues are generated.
It provides report preparation.
Reports are used to retrieve current information about enterprise inventory,
activity, and system compliance. There are several report categories. Each
category has predefined reports that can be run from the main report page or
customized, in the wizard, to suit the business needs. Also, the report wizard
can be used to create new reports.
It supports patch management.
TPM provides “out-of-the-box” support for Microsoft Windows, Red Hat Linux,
AIX, Solaris, HP-UX, and SUSE Linux Enterprise Server using a common
user interface. Accurate patch recommendations for each system are based
on vendor scan technology.
IBM Systems Director is included with the purchase of IBM System p, System x,
System z, and BladeCenter systems. It is offered for sale to help manage select
non-IBM systems. Because it runs on a dedicated server, it provides secure
isolation from the production IT environment. It is a Web server-based
infrastructure so that it can be a single point of control accessible via a Web
browser. In the future IBM Systems Software intends to release virtual appliance
solutions that include a pre-built Systems Director software stack that can run as
a virtual machine and will include integrated High Availability features.
One of the most notable extensions to Systems Director that came out during 2009
is Systems Director VMControl™. IBM Systems Director VMControl is a
cross-platform suite of products that provides assistance in rapidly deploying
virtual appliances to create virtual servers that are configured with the desired
operating systems and software applications. It also allows resources to be
grouped into system pools that can be centrally managed, and the different
workloads in the IT environment to be controlled. Figure 2-40 on page 125 shows the
components of the IBM Systems Director.
IBM Systems Director provides the core capabilities that are needed to manage
the full lifecycle of IBM server, storage, network, and virtualization systems:
Discovery Manager discovers virtual and physical systems and related
resources.
Status Manager provides health status and monitoring of system resources.
Update Manager acquires, distributes, and installs update packages to
systems.
Automation Manager performs actions based on system events.
Configuration Manager configures one or more system resource settings.
Virtualization Manager creates, edits, relocates, and deletes virtual
resources.
Remote Access Manager provides a remote console, a command line, and
file transfer features to target systems.
Enhancements improve ease of use and deliver a more open, integrated toolset.
Its industry-standard foundation enables heterogeneous hardware support and
works with a variety of operating systems and network protocols. Taking
advantage of industry standards allows for easy integration with the management
tools and applications of other systems.
Optional, fee-based extensions to IBM Systems Director are available for more
advanced management capabilities. Extensions are designed to be modular,
thereby enabling IT professionals to tailor their management capabilities to their
specific needs and environments. Version 6.1, launched in November 2008,
provides storage management capabilities. Version 6.1.1, which shipped starting
in May 2009, also includes basic (SNMP-based) network management and
discovery capabilities (Network Manager).
With other System Director advanced managers
These include, for example, the BladeCenter Open Fabric Manager (BOFM)
or VMControl.
From a networking point of view, the pivotal element that provides integration with
Tivoli products and the rest of the Data Center Infrastructure is Systems Director
Network Control. IBM Systems Director Network Control builds on Network
Management base capabilities by integrating the launch of vendor-based device
management tools, topology views of network connectivity, and subnet-based
views of servers and network devices. It is responsible for:
Integration with Tivoli Network Management products
Broadening Systems Director ecosystem
Enhanced platform management
Virtualization Automation
For more information about IBM Systems Director, refer to the Systems Director
6.1.x Information Center page at:
http://publib.boulder.ibm.com/infocenter/director/v6r1x/index.jsp
Accurate monitoring and root cause analysis
ITNM can be configured to actively monitor the network for availability and
performance problems. ITNM uses the network topology model to group
related events and identify the root cause of network problems, thereby
speeding mean time to repair.
Event correlation uses a network topology model to identify the root cause of
network problems. Network events can be triggered by setting personalized
thresholds for any SNMP performance metrics including complex expressions
such as bandwidth utilization.
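As a purely illustrative sketch of such a threshold expression (not ITNM code; the function names and the 80% threshold are assumptions), the following Python fragment derives interface utilization from two SNMP counter samples and emits an event record when a personalized threshold is breached:

```python
# Minimal sketch: evaluate a bandwidth-utilization threshold from two SNMP
# interface counter samples (for example ifHCInOctets) and emit an event.

def utilization_percent(octets_t0, octets_t1, interval_s, if_speed_bps):
    """Derive utilization from two counter samples taken interval_s apart."""
    delta_bits = (octets_t1 - octets_t0) * 8
    return 100.0 * delta_bits / (if_speed_bps * interval_s)

def check_threshold(octets_t0, octets_t1, interval_s, if_speed_bps,
                    threshold=80.0):
    """Return an event dictionary when the personalized threshold is breached."""
    util = utilization_percent(octets_t0, octets_t1, interval_s, if_speed_bps)
    if util >= threshold:
        return {"severity": "warning",
                "summary": f"Utilization {util:.1f}% >= {threshold}%"}
    return None

# Example: 10 GbE link, 60-second poll, roughly 9 Gbps of traffic observed.
print(check_threshold(0, 67_500_000_000, 60, 10_000_000_000))
```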
Figure 2-41 on page 130 illustrates the Network Manager processes. The
discovery agents discover the existence, configuration and connectivity
information for network devices, updating the network connectivity and
information model (NCIM). At the same time, OMNIbus probes receive and
process events from the network.
The polling agents poll the network for conditions such as device availability and
SNMP threshold breaches. If a problem or resolution is detected, the polling
agents generate events and send them to OMNIbus.
The Event Gateway provides the communication functionality for ITNM to enrich
and correlate events from OMNIbus.
Figure 2-41 Network Manager processes: discovery agents, polling agents, the Event Gateway and RCA engine, NCIM, and OMNIbus
Topology-based root cause analysis (RCA) consists of the event stream being
analyzed in the context of the discovered relationships (physical and logical).
Root cause analysis can be performed on any events from any source. Dynamic
root cause analysis recalculates the root cause and symptoms as new events
arrive.
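The following minimal Python sketch illustrates the idea of topology-based root cause analysis under simplifying assumptions (it is not the ITNM RCA engine; the topology, the classification logic, and all names are invented for illustration): a failed device that is still adjacent to the part of the network the management station can reach is treated as a candidate root cause, and failed devices hidden behind it are treated as symptoms.

```python
# Minimal sketch of topology-based RCA: given the management station's view of
# connectivity and a set of "node down" events, mark as symptoms the events
# whose device is only reachable through another failed device.

from collections import deque

def reachable(topology, source, down):
    """Nodes reachable from source without traversing any failed node."""
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in topology.get(node, ()):
            if neighbor not in seen and neighbor not in down:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

def classify(topology, nms, down_events):
    down = set(down_events)
    visible = reachable(topology, nms, down)
    # A failed node adjacent to the still-visible part of the network is a
    # candidate root cause; failed nodes hidden behind it are symptoms.
    return {node: ("root cause" if any(peer in visible
                                       for peer in topology.get(node, ()))
                   else "symptom")
            for node in down}

topology = {"nms": ["core"], "core": ["nms", "dist"],
            "dist": ["core", "access1", "access2"],
            "access1": ["dist"], "access2": ["dist"]}
print(classify(topology, "nms", ["dist", "access1", "access2"]))
```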
The RCA process can cover simple hierarchical networks as well as complex
meshed networks, with remote correlation (OSPF and BGP, for example),
intra-device correlation (such as a card failure), and inter-device dependencies
(such as cable failures). SNMP traps and other events from Netcool probes are
enriched using an extensive set of rules known as the Netcool Knowledge
Library. ITNM further
enriches these events with topology information to provide the user with useful
contextual information including navigating to topology maps to see the device in
various network contexts, and using diagnostic tooling. Events can be configured
to alert the operator when network services, such as VPNs, have been affected,
providing the operator with priority information in order to perform triage.
IBM Tivoli Network Manager’s latest release at the time of this writing is 3.8.0.2. It
is available as a standalone product or integrated with Netcool/OMNIbus (O&NM
- OMNIbus and Network Manager).
For more information about ITNM, refer to the IBM Tivoli Network Management
information center at:
http://publib.boulder.ibm.com/infocenter/tivihelp/v8r1/index.jsp?toc
=/com.ibm.netcool_OMNIbus.doc/toc.xml
In this section, we briefly discuss the capabilities of the major functions of the
IBM Tivoli Netcool Configuration Manager.
The policy-based compliance management function of Tivoli Netcool
Configuration Manager provides a rules-based tool for checking and maintaining
network device configuration compliance with various sets of policies.
You can configure compliance checks to check for the presence or absence of
specific commands or data in a device's configuration or a response from a query
to a device. Based on the results of the compliance check, the tool either reports
the results or, if desired, initiates a configuration change to bring a device back
into compliance. You can organize and group related compliance checks into
higher-level policies. You can schedule compliance checks to execute on a
dynamic or scheduled basis. Likewise, you can set up compliance checks to
trigger automatically as a result of a configuration change on a device.
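A minimal sketch of what such a rules-based check might look like is shown below (this is not the ITNCM policy engine; the policy patterns are illustrative IOS-style lines and the function names are assumptions):

```python
# Minimal sketch of a rules-based compliance check: verify that required lines
# are present and forbidden lines are absent in a device's configuration text.

import re

POLICY = {
    "required": [r"^service password-encryption$", r"^no ip http server$"],
    "forbidden": [r"^snmp-server community public"],
}

def check_compliance(config_text, policy=POLICY):
    lines = [line.strip() for line in config_text.splitlines()]
    violations = []
    for pattern in policy["required"]:
        if not any(re.match(pattern, line) for line in lines):
            violations.append(("missing", pattern))
    for pattern in policy["forbidden"]:
        for line in lines:
            if re.match(pattern, line):
                violations.append(("present", line))
    return violations

sample = "hostname access-sw-01\nsnmp-server community public RO\n"
for kind, detail in check_compliance(sample):
    print(f"non-compliant ({kind}): {detail}")
```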
Figure 2-43 on page 134 shows the closed loop problem resolution provided by
the integration of Tivoli Network Manager, Tivoli Netcool/OMNIbus, and Tivoli
Netcool Configuration Manager.
Figure 2-43 Closed loop problem resolution (detect and fix): events come into OMNIbus and faults are localized; ITNCM checks whether network devices are compliant with best practices and triggers traps to OMNIbus for policy breaches
device are easy to spot, isolate, and rectify by simply rolling back the changes
via Tivoli Netcool Configuration Manager.
The integration provides the ability to implement network policies and enforce
compliance by utilizing the capability of Tivoli Netcool Configuration Manager
to make a change to a large number of devices in one operation, while ensuring
that the changes are accurate without manual intervention. This reduces the
time to value of a network management implementation.
Within a single application, ITNCM forms the foundation for network and
network services resource configuration, and provides a combined multiservice,
multivendor platform for device configuration management and service
activation.
Built upon the patented Intelliden R-Series Platform, ITNCM offers network
control by simplifying network configurations, standardizing best practices and
policies across the entire network, and automating routine tasks such as
configuration and change management, service activation, and the compliance
and security of mission-critical networks. Figure 2-44 demonstrates this.
Figure 2-45 Network management integration framework
The three main areas of integration for network management are listed here:
Integration with Event Management
This allows the incorporation of network-related events into an overall event
dashboard, for example ITNM-Netcool/OMNIbus integration.
Integration with Systems Management
This allows the inclusion of networking infrastructure management and
provisioning within systems and storage management and provisioning tools,
for example the Systems Director Network Manager module.
Integration with Service Management
This relates network management and provisioning to IT service management
provisioning. Recurring tasks can also be automated by leveraging this kind of
integration. Network infrastructure provisioning has to be orchestrated with
server and storage provisioning, for example through the TBSM-ITNM
integration.
about that computer system. Single sign-on integration should also be
enabled to provide seamless navigation between products.
Shared console is an even deeper level of integration. The same console has
panels with information from multiple products. When the user changes
contexts in one panel, the other panels switch to the same context. The Tivoli
Integrated Portal can perform this function.
For more information about these topics, see Integrating Tivoli Products,
SG24-7757, available at:
http://www.redbooks.ibm.com/abstracts/sg247757.html?Open
These design goals go far beyond a single server or a single rack level; they are
goals for the entire data center. With the new philosophy and the new design, the
iDataPlex solution promises to address the data center challenges at various
levels:
An innovative rack design achieves higher node density within the traditional
rack footprint. Various networking, storage, and I/O options are optimized for
the rack design.
An optional Rear Door Heat eXchanger virtually eliminates traditional cooling
based on computer room air conditioning (CRAC) units.
An innovative flex node chassis and server technology are based on industry
standard components.
Shared power and cooling components improve efficiency at the node and
rack level.
Intelligent, centralized management is available through a management
appliance.
Figure 2-47 Comparison of iDataPlex with two standard 42U racks (top view)
Figure 2-49 on page 144 gives an idea of how these components are integrated.
Figure 2-49 iDataPlex building blocks: 2U chassis, 3U chassis, rack with PDUs, and Rear Door Heat eXchanger
On the left side is a 2U (top) and a 3U chassis (bottom). These can be equipped
with several different server, storage, and I/O components. The populated
chassis are mounted in an iDataPlex rack (middle). This rack was specifically
designed to meet high-density data center requirements. It allows mandatory
infrastructure components like switches and power distribution units (PDUs), as
shown on the right, to be installed into the rack without sacrificing valuable server
space. In addition, the iDataPlex solution provides management on the rack level
and a water-based cooling option with the Rear Door Heat eXchanger.
Table 2-8 Supported Ethernet switches
If the plan is to use copper-based InfiniBand and Ethernet cabling, then mount
both switches horizontally, if possible. This is due to the number of cables that go
along the B and D columns; all the copper cables take up so much space in front
of the vertical pockets that proper cabling to vertically-mounted switches is no
longer possible. Figure 2-50 on page 146 illustrates the recommended cabling.
Figure 2-50 Recommended cabling for Ethernet-only (left) and Ethernet plus InfiniBand (right)
2.8.2 CloudBurst
IBM CloudBurst is a prepackaged private cloud offering that brings together the
hardware, software, and services needed to establish a private cloud. This
offering takes the guesswork out of establishing a private cloud by preinstalling
and configuring the necessary software on the hardware and leveraging services
for customization to the environment. Just install the applications and begin
exploiting the benefits of cloud computing, such as virtualization, scalability, and
a self-service portal for provisioning new services.
IBM CloudBurst includes a self-service portal that allows users to request their
own services, automation to provision the services, and virtualization to make
system resources available for the new services. This is all delivered through the
integrated, prepackaged IBM CloudBurst offering and includes a single support
interface to keep things simple.
IBM CloudBurst is positioned for enterprise clients looking to get started with a
private cloud computing model. It enables users to rapidly implement a complete
cloud environment including both the cloud management infrastructure and the
cloud resources to be provisioned.
Built on the IBM BladeCenter platform, IBM CloudBurst provides preinstalled
capabilities essential to a cloud model, including:
A self-service portal interface for reservation of compute, storage, and
networking resources, including virtualized resources
Automated provisioning and deprovisioning of resources
Prepackaged automation templates and workflows for most common
resource types, such as VMware virtual machines
Service management for cloud computing
Real-time monitoring for elasticity
Backup and recovery
Preintegrated service delivery platforms that include the hardware, storage,
networking, virtualization, and management software to create a private cloud
environment faster and more efficiently
Figure 2-51 The front view of a 42U rack for IBM CloudBurst
For a general overview of the main networking protocols and standards, refer to
TCP/IP Tutorial and Technical Overview, GG24-3376, which can be found at:
http://www.redbooks.ibm.com/abstracts/gg243376.html?Open
Data Center Network-wide virtualization - These techniques extend the
virtualization domain to the whole Data Center Network and even to the WAN
when there are multiple data centers.
Figure: one-to-many virtualization (a physical switch partitioned into multiple logical switches) and many-to-one virtualization (a virtualized server with VMs behind a hypervisor virtual switch)
The virtualization features that were introduced are targeted mostly at the
forwarding and control planes. However, management plane and services plane
virtualization techniques are also emerging in the marketplace.
Management plane virtualization enables multitenant data center network
infrastructures because different virtual networks can be managed
independently without conflicting with each other.
Services plane virtualization (see 3.4.5, “Network services deployment
models” on page 176) allows you to create virtual services contexts that can
be mapped to different virtual networks, potentially in a dynamic fashion. This
is a pivotal functionality in a dynamic infrastructure because it provides
Those business drivers are causing new trends to develop in data center
architectures and the network must adapt with new technologies and solutions,
as follows:
The new landscape brings the focus back to the centralized environment. The
distributed computing paradigm may not always match the needs of this
dynamic environment. Virtualization and consolidation together put more
traffic on single links since the “one machine-one application” correlation is
changing. This can cause bottlenecks and puts pressure on the access layer
network.
For example, the End of Row (EoR) model may be unfit to sustain this new
traffic pattern and so the Top of Rack (ToR) model is rapidly becoming
attractive. The access switch sprawl can be effectively managed by
consolidating several access nodes into a bigger, virtual one.
Figure 3-2 shows the consolidation and virtualization of server and storage.
Figure 3-2 Consolidation and virtualization of server and storage
Virtualization brings a new network element into the picture. The virtual switch
in the hypervisor cannot always be managed efficiently by traditional network
management tools, and it is usually seen as part of the server and not part of
the network. Moreover, these virtual switches have limited capabilities
compared with the traditional network hardware (no support for multicast and
port mirroring, plus limited security features). However, virtual switches are
becoming much more capable over time. Hypervisor development is opening
up new opportunities for more feature-rich virtual switches.
Figure 3-3 on page 154 shows networking challenges due to virtual switches
and local physical switches.
Figure 3-3 Networking challenges due to virtual switches and local physical switches
The server and its storage are being increasingly decoupled, through SANs
or NAS, thus leading to the possibility of application mobility in the data
center.
10 Gb Ethernet is reaching an attractive price point. This
increased capacity makes it theoretically possible to carry the storage traffic
on the same Ethernet cable. This is also an organizational issue because the
storage and the Ethernet networks are usually designed by different teams.
Figure 3-4 on page 155 shows storage and data convergence and its impact
on the network infrastructure.
Figure 3-4 Storage and data convergence impact on the network infrastructure
Figure: consolidated servers reached by users across an L3 IP core, with virtual networks (VNs) at both ends
This scenario also impacts the latency that users experience when accessing
applications, given that data travels over a longer distance and more appliances
are involved in the traffic flow. This is particularly harmful for applications that rely
on a large number of small packets travelling back and forth with a small amount
of data (chattiness).
In addition to the WAN slowing down application access for users, another critical
requirement in this scenario is the availability of the centralized IT infrastructure.
A single fault, without a proper single point of failure avoidance and disaster
recovery (or even business continuity) strategy, would affect a very large
population with a dramatic negative impact on productivity (service uptime and
revenues are increasingly tightly coupled).
From a data center perspective this implies that a certain slice of bandwidth is
dedicated to the single branch office to deliver the mission-critical services.
Classification of traffic needs to be done in advance because the number of
applications that travel the wide area network (WAN) can be vast. Traffic
shapers can also limit the bandwidth of some kinds of applications (for example,
FTP transfers or P2P applications) so that all services remain available but
their performance can vary under different conditions.
Figure 3-7 Policy enforcement with traffic shapers
WOC techniques can also reduce latency at the session level, though not at the
link level, which depends on factors the WOC cannot change. This distinction is
important: WOCs do not accelerate traffic on the link. They are appliances that
sit in the middle of the session to understand the packet flows, avoid data
duplication, and combine and reduce the packets of application sessions so that
a single IP datagram sent over the tunnel carries more packets of the same or
different sessions, thus reducing the overall latency of a single session.
This approach has the benefit of lowering the use of disk, memory, and
processor resources in both virtualized and traditional environments. For
example, in a virtualized data center the consequence is more VMs per physical
server due to reduced resource utilization.
Another aspect of ADCs is that they are tied to the application layer and, like
application traffic, adapt to VM migrations, even between data centers or to the
cloud. Downtime and user disruption are therefore avoided through accelerated
data transmission and the flexibility to expand overall service delivery capacity.
Other features to consider when evaluating ADC solutions are compression,
which allows less data to be sent over the WAN, and TCP connection
management.
Note that this process can be transparent to both server and client applications,
but it is crucial to redirect traffic flows in real time if needed. It is also important
that the choice of where to deploy the envisioned solution meets the
performance and, above all, cost objectives. These functions, just like the other
network services, can be implemented using different deployment models (see
3.4.5, “Network services deployment models” on page 176 for more details).
For example, software-based ADCs are entering the market from various
vendors. A hybrid deployment model can also be used, leveraging hardware
platforms for resource-intensive operations such as SSL termination, and
offloading other functions such as session management or load balancing to
software-based ADCs.
Each of the described approaches can address different aspects of the overall
design and does not necessarily mean that one excludes the other. In fact, the
combination of features can efficiently solve the identified issues in bandwidth
management.
3.4 Virtualization technologies for the data center
network
In this section we present the main virtualization technologies that can be
exploited in a data center network context.
The latency introduced by having multiple switching stages (and also, depending
on the design, perhaps multiple security stages) from the server to the data
center network core, together with the availability of high port density 10 GbE
core switches, is driving the trend towards delayering of the data center network.
Delayering can achieve both significant cost benefits (less equipment to manage)
and service delivery improvements (lower latency turns into improved application
performance for users).
On the other hand, servers equipped with 10 GbE interfaces induce higher
bandwidth traffic flowing to the core layer, so oversubscription on the access
layer uplinks may become detrimental for service performance. The development
of IEEE standards for 40 GbE and 100 GbE will alleviate this potential bottleneck
going forward and help simplify the data center network design.
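A quick back-of-the-envelope check of the access-layer oversubscription ratio can make this concrete (the port counts and speeds below are illustrative assumptions, not recommendations):

```python
# Back-of-the-envelope oversubscription check for a ToR access switch:
# downstream server bandwidth divided by uplink bandwidth to the core.

def oversubscription(server_ports, server_speed_gbps,
                     uplink_ports, uplink_speed_gbps):
    downstream = server_ports * server_speed_gbps
    upstream = uplink_ports * uplink_speed_gbps
    return downstream / upstream

# 40 servers at 10 GbE against 4 x 40 GbE uplinks -> 2.5:1
print(f"{oversubscription(40, 10, 4, 40):.1f}:1")
```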
On the other hand, this trend faces limitations in highly virtualized environments
because I/O operations experience significant overhead passing through the
operating system, the hypervisor, and the network adapter. This may limit the
ability of the virtualized system to saturate the physical network link.
Various technologies are emerging to mitigate this. For example, bypassing the
hypervisor using a Single Root I/O Virtualization (SR-IOV) adapter allows a guest
VM to directly access the physical adapter, boosting I/O performance
significantly. This means that the new network access layer becomes the
adapter, which incorporates switching capabilities.
The outcome of these trends is that the access layer is evolving towards Top of
Rack (ToR) topologies rather than pure End of Row (EoR) topologies. A ToR
topology is more scalable, flexible, and cost-effective, and it consumes less
energy than an EoR topology when dealing with the increased bandwidth
demands at the access layer.
Because the traffic pattern varies from data center to data center (it is actually
the architecture of the applications, and how users access them, that generates
the requirements on the access layer design), it is difficult to provide baselines
for developing a design.
The modular ToR approach can provide a sufficient solution for today’s
bandwidth demands. Because bandwidth can be added where it is needed, it is
important that port utilization (both server and uplink ports) is monitored with
proper management tools so that bandwidth can be added to the data center
infrastructure before bandwidth constraints are recognized by users.
The growth of traffic volume generated at the access layer is tightly coupled with
an increase in latency, which originates from having additional switching stages
(virtual switches or blade switches, for example). To lower the latency introduced
by the data center network, different architectural patterns are emerging. One
possible option is to adopt a two-tiered network by collapsing the core and
aggregation layers.
Node aggregation
Node aggregation techniques are similar to server clustering in some ways
because many physical entities are logically grouped as one, single entity. The
network impact of this approach is that there are fewer logical nodes to monitor
and manage, thus allowing you to disable the spanning tree protocol by aggregating
links across different physical switches. At the same time, risk is reduced
because nodes are still physically redundant, with no single point of failure. In
fact the network becomes a logical hub-and-spoke topology with a proprietary
control plane replacing the spanning tree protocol. Figure 3-8 on page 167
illustrates node aggregation techniques.
Figure 3-8 Node aggregation (2:1) of physical switches at the aggregation layer above the servers
Node partitioning
Node partitioning techniques allow you to split the resources of a single node
while maintaining separation and providing simplified (and thus improved)
security management and enforcement. Partitioned nodes go beyond virtualizing
the forwarding and control planes, such as you have with VLANs.
This type of virtual node also virtualizes the management plane by partitioning
the node’s resources and protecting them from one another. In this scenario a
multitenant environment can be implemented without needing dedicated
hardware for each tenant. Another possible use case of this technology is to
collapse network zones with different security requirements that were physically
separated on the same hardware, thereby improving efficiency and reducing the
overall node count.
Typical deployment scenarios are virtual routers, switches, and firewalls that can
serve different network segments without having to reconfigure the whole
environment if new equipment is added inside the data center. This can be
extremely helpful for firewalls because these virtual firewall solutions reduce the
need for dedicated in-line firewalls that can become bottlenecks, especially if
the link speed is very high.
3.4.4 Building a single distributed data center
The trend to physically flatten data center networks highlights the need for a well
thought out logical network architecture that is able to embrace different physical
data center technologies, tools and processes to offer to the entire enterprise the
correct levels of reliability, availability and efficiency.
The idea is to create an overlay logical network that can transport data providing
to the services a single virtualized view of the overall infrastructure. The final goal
that this approach can provide is efficiency in the operational costs that can
derive from the following factors:
Better resource utilization: virtualized resource pools that comprise
processor, memory, and disks, for instance, can be planned and allocated with
a greater degree of flexibility by leveraging underlying physical resources that
are available in different, yet unified, data centers. Also, new workloads can be
provisioned leveraging resources that are not necessarily tied to a physical
location. The role of the “standby” data centers can change considerably to an
“active” role, with the immediate advantage of better utilization of investments
that have been made or are planned to be made. In fact, the scalability of one
data center can be lowered because peaks can be handled by assigning
workloads to the other data centers.
Lower service downtime: when data centers are interconnected to create a
single logical entity, the importance of redundancy at the link level becomes
critical in supporting this model. Redundancy needs to be coupled with overall
utilization of the links, which needs to be continuous and at the expected level
of consumption. Capacity planning becomes key to enable the overall
availability of the services that rely on the IT infrastructure.
Consistent security policy: distributed yet interconnected data centers can
rely on a level of security abstraction that consists of technologies and
processes that tend to guarantee a common set of procedures that maintain
business operations and are able to address crisis management when
needed. Secure segregation of applications and data in restricted zones is
also extended to the virtualized data center layer to ensure compliance with
the enterprise security framework.
Correct level of traffic performance by domain or service or type:
interconnecting data centers results in even more business transactions
contending for network resources. A possible methodology to adopt to
address this aspect could be as follows:
– Gain complete visibility of all the traffic flows.
– Guided by business requirements, set the right level of priorities mainly
from an availability and performance point of view.
Figure 3-9 Interconnecting data centers
There are several different approaches available today for extending Layer 2
networks, and others that are in draft form in the standardization bodies. Each
approach needs to be carefully evaluated in terms of how it meets different
requirements such as availability features, implementation processes, cost, and
management. We outline three approaches:
– Additional VLAN features (for example, VLAN translation and VLAN
re-use)
The IETF L2VPN working group is the same group that is working on VPLS.
While alternatives are available and their business relevance is increasing, some
aspects need to be considered and addressed for all the components just
described, and they are particularly severe for Layer 2 extensions:
Latency - Since there is no market solution today that can address the latency
derived from the speed of light, this aspect might not always be an issue when
data centers are within shorter distances. Latency becomes an important
factor to be considered when data centers are to be “actively” interconnected
over longer distances.
Bandwidth - Workload mobility and data replication require high-speed
connectivity that is able to support a large amount of traffic. Moving virtual
machines across data centers means that the snapshot of the virtual machine
needs to cross the links between the data centers. If the end-to-end link
between the sites does not comply with the vendor specification, the switch
off of a VM and the restarting of a new one can fail.
The real challenge is data synchronization across data centers, with the
objective of having the most recent data as close as possible to the
application. In both situations, techniques such as WAN Optimization
Controllers can help in reducing the amount of bandwidth needed by
deduplicating data. Quality of Service complemented by traffic shapers
could also be an alternative for preserving the overall operational level
agreements (OLAs).
Organization (that is, DC team and WAN team) - The selection among
alternatives is also based on the enterprise organization structure and the skills
associated with each alternative. For example, the fact that VPLS is based on
an MPLS infrastructure implies that the skills have to be in place, either internal
or external to the organization, if VPLS is selected as a Layer 2 extension.
Data flows of multitier applications - When data centers are servicing users,
the changes in traffic flows that are built within the overall network
infrastructure need to be well understood and anticipated when moving a
service. Especially when the service is built upon a multilevel application
approach (that is, front end service - application service - data service),
moving single workloads (front end service layer) from one data center to
another might result in increased latency and security flow impacts in the
overall business transaction. In many situations, security needs to be applied
within the flow, and this layer needs to be aware of transactions that traverse
different paths that must be created, or whose path changes within the
interconnected data centers.
are shared among multiple clients. Both aspects and, in general, internal audit
requirements, might impact the overall solution approach.
Management - Determining the impact of deploying new technologies and
other IT-related services and analyzing how the management infrastructure
will support the related architectures aims at network efficiency and overall
reduction in the cost of ownership. Understanding the gaps in the current
network management processes or tools is advisable. For instance, MPLS
connectivity management might address the need for dynamic and
automated provisioning, as well as resource management for tunnels. Also,
troubleshooting differs from that for a routed network. Therefore, diagnostic
capabilities are an important aspect of all the management processes.
Storms or loops - A major difference between a router and a bridge is that a
router discards a packet if the destination is not in the routing table. In
contrast, a bridge floods a packet with an unknown destination address to
all ports except the port it received the packet on. In routed networks, IP
routers decrease the TimeToLive (TTL) value by one at each router hop. The TTL
value can be set to a maximum of 255, and when the TTL reaches zero, the
packet is dropped.
This, of course, does not eliminate the consequences of loops in a routing
domain, but considering that a bridged network has no such mechanism, the
available bandwidth can be severely degraded until the loop condition is
removed (the short simulation after this list illustrates the difference). When
interconnecting data centers at Layer 2, the flooding of broadcasts or multicasts
and the propagation of unknown frames need to be avoided or strictly controlled
so that a problem in one data center is not propagated into another one.
Split brain - This situation might occur if both nodes are up but there is no
communication (network failure) between sites. Of course this situation needs
to be avoided by carefully planning the redundancy of paths at different levels
(node, interface, and link).
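The following small, purely illustrative Python sketch contrasts the two behaviors noted under "Storms or loops" (the hop and pass counts are arbitrary assumptions): a routed packet is forwarded at most TTL times and then dropped, while copies of an unknown-destination frame in a bridged loop keep multiplying until the loop is broken.

```python
# Illustration only: a routed loop dies out when TTL reaches zero, while a
# bridged loop keeps flooding an unknown-destination frame.

def routed_loop_hops(ttl=255):
    hops = 0
    while ttl > 0:          # each router decrements TTL; the packet dies at zero
        ttl -= 1
        hops += 1
    return hops

def bridged_loop_frames(bridges=2, passes=10):
    frames = 1
    for _ in range(passes):      # no TTL: copies multiply on every pass
        frames *= bridges        # each bridge floods out its other ports
    return frames

print("routed packet forwarded", routed_loop_hops(), "times, then dropped")
print("bridged frame copies after 10 passes:", bridged_loop_frames())
```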
Cloud computing models imply additional thinking for the interconnection of data
centers. The alternatives offered by the market—private, public, or hybrid
cloud—all have in common the need for a careful networking strategy to ensure
availability, performance and security. Especially in multitenant environments
where service providers offer shared services, it is essential to consider which
aspects of service levels and security have strong relationships with the
networking and therefore must be addressed in a complementary manner.
Interaction with cloud models may imply different locations for the applications
and the related connectivity with respect to users. The techniques for the
interconnection with the cloud should not only deliver reliable and secure data
between different points, but must, above all, adapt to the variability of resources
that can be dynamically and automatically relocated geographically and in real time.
In this section, we focus on the possible deployment models for those services,
highlighting how virtualization can be leveraged and pointing out the pros and
cons of each approach.
specific virtual machines (VMs) in the same network segment. Both traditional
vendors and new entrants are focusing on this kind of technology.
On the other hand, application delivery functions can be broken down further and
then split depending on the best deployment model for each function. Using
application delivery as an example, SSL termination may be a better fit for a
hardware platform, while other more application-specific functions may be
virtualized.
Virtual appliances are a better fit for cloud computing networking infrastructures
because of their elastic scalability and their ability to integrate closely with the
virtual infrastructure, since the services can easily be moved to another
physical host.
IBM has developed a high-level security framework. This is shown in Figure 3-10
on page 179. The IBM Security Framework was developed to describe security in
terms of the business resources that need to be protected, and it looks at the
different resource domains from a business point of view.
For more information, refer to Introducing the IBM Security Framework and IBM
Security Blueprint to Realize Business-Driven Security, REDP-4528, and the
Cloud Security Guidance IBM Recommendations for the Implementation of
Cloud Security, REDP-4614.
Figure 3-10 IBM Security Framework
Figure: before and after views of virtual machines connected through virtual switches to the physical NICs, with a separate management connection
Bear in mind that all these functional components can be deployed using the
different models that have been presented in the previous section.
We will now briefly describe some relevant IBM products in the virtualized data
center network environment.
The IBM security product portfolio covers professional and managed security
services and security software and appliances. To learn about comprehensive
security architecture with IBM Security, see Enterprise Security Architecture
using IBM ISS Security Solutions, SG24-7581.
IBM Security network components
The protocol analysis module (PAM) is a modular architecture for network
protection. PAM is the base architecture throughout all IBM Security products; it
includes the following components:
IBM Virtual Patch technology
This shields vulnerable systems from attack regardless of whether a
vendor-supplied software patch is available.
Threat detection and prevention technology
This detects and prevents virus and worm attacks, and hacker obfuscation.
IBM Proventia Content Analyzer
This monitors and identifies the content of network flows to prevent data leaks
or to satisfy regulatory requirements.
Web Application Security
This provides protection against web application vulnerabilities, such as SQL
injection and command injection.
Network Policy Enforcement
This enforces policy and controls to prevent risky behavior, such as the use of
P2P applications or tunneling protocols.
Figure: security VM with policy and response engines on a hardened OS, using VMsafe at the hypervisor layer to protect guest VMs running on the hardware
There are two ways to monitor traffic. One way uses the VMsafe architecture,
called DV filters, and the other uses accelerated mode.
With DV filters, all NICs of VMs are monitored. An overview of a DV filter is shown
in Figure 3-13. A DV filter does not require any configuration changes to the
internal network.
Figure 3-13 Overview of a DV filter (security VM monitoring VM traffic through the vSwitches and physical NICs)
In both methods, the same IPS function is supported, but traffic coverage is
different.
VSS also provides protocol access control like a normal IPS appliance from ISS.
It filters network packets based on protocol, source and destination ports, and IP
addresses. It can be enabled or disabled globally or for specific VMs. Because
the VMware internal network does not support Layer 3, this function does not
support L3 security zoning, which normal firewall appliances can provide.
Discovery
The discovery module collects inventory information about VMs, such as
operating system name and version, open TCP ports, and so on. A vulnerability
check is not performed in this version.
Antirootkit (ARK)
A rootkit is a program that is designed to hide itself and other programs, data,
and/or activity including viruses, backdoors, keyloggers and spyware on a
computer system. For example, Haxdoor is a known rootkit of the Windows
platform, and LinuxRootkit targets the Linux OS. Generally, it is difficult to detect
a rootkit that rewrites the kernel code of the OS. However, in a virtualized
environment, the hypervisor can monitor OS memory tables from outside the OS.
VMsafe supports a vendor tool that inspects each memory table on a VM, so VSS
can detect malicious software inside the OS by inspecting each kernel memory space.
Network Access Control
Network Access Control (NAC) controls which VMs can access the network. It
classifies each VM as trusted or untrusted. The trusted list is maintained
manually. Any VM that comes online, and is not on the trusted list, is quarantined.
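A minimal sketch of this admission logic is shown below (the logic and names are assumptions for illustration, not the product's implementation):

```python
# Minimal sketch of the NAC behavior described above: VMs on the manually
# maintained trusted list get network access; any other VM is quarantined.

TRUSTED_VMS = {"web01", "db01"}   # maintained manually

def on_vm_online(vm_name, trusted=TRUSTED_VMS):
    if vm_name in trusted:
        return "grant network access"
    return "quarantine"

for vm in ("web01", "unknown-vm"):
    print(vm, "->", on_vm_online(vm))
```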
This section introduces best practice options for virtualized network resources in
servers. The following approaches are discussed:
NIC sharing
vNIC - Virtual Switching
NIC teaming
Single Root I/O Virtualization (SR-IOV)
NIC sharing
The most basic of the new connectivity standards simply assigns operating
systems to share the available network resources. In its most basic form, each
operating system has to be assigned manually to each NIC in the platform.
Logical NIC sharing allows each operating system to send packets to a single
physical NIC. Each operating system has its own IP address. The server
manager software generally has an additional IP address for configuration and
management. A requirement of this solution is that all guest operating systems
have to be in the same Layer 2 domain (subnet) with each guest operating
system assigned an IP address and a MAC address. Because the number of
guest operating systems that could reside on one platform was relatively small,
the MAC address could be a modified version of the NIC’s burned-in MAC
address, and the IP addresses consisted of a small block of addresses in the
same IP subnet. One additional IP address was used for the management
console of the platform.
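The addressing scheme described above can be sketched as follows (the MAC-derivation rule and helper names are assumptions; the addresses simply mirror the small block shown in the figure below):

```python
# Illustrative sketch: each guest OS gets an IP from a small block in the same
# subnet and a MAC derived from the NIC's burned-in address (scheme assumed).

import ipaddress

def assign_addresses(burned_in_mac, subnet, first_host, guests):
    base = burned_in_mac.split(":")
    hosts = list(ipaddress.ip_network(subnet).hosts())
    table = {"management": (burned_in_mac, str(hosts[first_host - 1]))}
    for i, guest in enumerate(guests, start=1):
        mac = ":".join(base[:-1] + [f"{(int(base[-1], 16) + i) & 0xFF:02x}"])
        table[guest] = (mac, str(hosts[first_host - 1 + i]))
    return table

table = assign_addresses("00:1a:64:00:00:10", "10.1.1.0/24", 1,
                         ["Guest OS 1", "Guest OS 2", "Guest OS 3", "Guest OS 4"])
for name, (mac, ip) in table.items():
    print(f"{name:13s} {mac}  {ip}")
```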
Features to manage QoS and load balancing to the physical NIC from the guest
operating systems were limited. In addition, any traffic from Guest OS 1 destined for another guest operating system on the same platform had to leave the platform and return through the external network.
Figure: NIC sharing example with a management console at 10.1.1.1 and Guest OS 1 through Guest OS 4 at 10.1.1.2 through 10.1.1.5 sharing one physical NIC
Figure: vNIC and virtual switching, with guest operating systems on separate VLANs or subnets connected through a virtual switch and 802.1q trunks to the physical network, plus a platform management connection
A Layer 3 implementation of this feature allows traffic destined for a server that
resides on the same platform to be routed between VLANs totally within the host
platform, and avoids the traffic traversing the Ethernet connection, both outbound
and inbound.
The challenge that this presents to the network architecture is that now we have
a mix of virtual and physical devices in our infrastructure. We have effectively
moved our traditional access layer to the virtual realm. This implies that a virtual
NIC (vNIC) and a virtual switch must provide the same access controls, QoS
capabilities, monitoring, and other features that are normally resident and
required on access-level physical devices. Also, the virtual and physical elements
may not be manageable from the same management platforms, which adds
complexity to network management.
NIC teaming
To eliminate server and switch single point-of-failure, servers are dual-homed to
two different access switches. NIC teaming features are provided by NIC
vendors, such as NIC teaming drivers and software for failover mechanisms,
used in the server systems. If the primary NIC fails, the secondary NIC takes
over the operation without disruption. NIC teaming can be
implemented with the options Adapter Fault Tolerance (AFT), Switch Fault
Tolerance (SFT), or Adaptive Load Balancing (ALB). Figure 3-17 on page 189
illustrates the most common options, SFT and ALB.
The main goal of NIC teaming is to use two or more Ethernet ports connected to
two different access switches. The standby NIC port in the server configured for
NIC teaming uses the same IP and MAC address (in the case of Switch Fault
Tolerance) as the failed primary server NIC. When using Adaptive Load
Balancing, the standby NIC ports are configured with the same IP address but
using multiple MAC addresses. One port receives data packets only and all ports
transmit data to the connected network switch. Optionally, a heartbeat signaling
protocol can be used between active and standby NIC ports.
There are other teaming modes in which more than one adapter can receive the
data. The default Broadcom teaming mode establishes a connection between one
team member and the client; the next connection goes to the next team member
and its target, and so on, thus balancing the workload. If a team member fails,
the work of the failing member is redistributed to the remaining members.
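A minimal sketch of these two behaviors is shown below (the logic and names are assumptions for illustration, not a vendor teaming driver): with Switch Fault Tolerance only the first healthy port carries traffic, while the connection-based balancing mode spreads new connections across the healthy team members and redistributes them when a member fails.

```python
# Minimal sketch of the teaming behaviors described above.

def sft_active_port(ports, failed):
    """Return the first healthy port; the rest stay in standby."""
    for port in ports:
        if port not in failed:
            return port
    return None

def assign_connections(connections, members, failed):
    """Round-robin new connections across the healthy team members."""
    healthy = [m for m in members if m not in failed]
    return {conn: healthy[i % len(healthy)]
            for i, conn in enumerate(connections)}

print(sft_active_port(["eth0", "eth1"], failed={"eth0"}))          # eth1
print(assign_connections(["c1", "c2", "c3", "c4"],
                         ["eth0", "eth1", "eth2"], failed={"eth1"}))
```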
Figure 3-17 NIC teaming with Switch Fault Tolerance (one Ethernet port active and one standby, using one IP and one MAC address) and with Adaptive Load Balancing (one port receives and all ports transmit, using one IP address and multiple MAC addresses)
The PCI Special Interest Group (SIG) standardized the North side of I/O
virtualization in a server. The network functions, such as switching or bridging,
were outside the PCI SIG scope. For a detailed description of SR-IOV and sharing
specifications, see the following website:
http://www.pcisig.com/specifications/iov/single_root/
The I/O virtualization intermediary (VI) supports IOV by intervening on one or
more of the following: configuration, I/O, and memory operations from a system
image, and DMA, completion, and interrupt operations to a system image.
Figure: system images (SI 1 through SI N) accessing processor, memory, and PCI devices through a virtualization intermediary and switch
All of these considerations will affect the network architecture required to support
NIC virtualization.
4.1 The changing IT landscape and data center
networking
As organizations undertake information technology (IT) optimization projects,
such as data center consolidation and server virtualization, they need to ensure
that the proper level of focus is given to the critical role of the network in terms of
planning, execution, and overall project success. While many consider the
network early in the planning stages of these projects and spend time
considering this aspect of these initiatives, many more feel that additional
network planning could have helped their projects be more successful.
Looking ahead, many expect that the network will become more important to
their companies' overall success. To address this, networking investments
related to support of server and storage virtualization are currently at the top of
the list for consideration, followed by overall enhancement and optimization of the
networking environment.
Developing a plan for the network and associated functional design is critical.
Without a strong plan and a solid functional design, networking transitions can be
risky, leading to reduced control of IT services delivered over the network, the
potential for high costs with insufficient results, and unexpected performance or
availability issues for critical business processes.
With so many vendor, product, and technology options available and so much to
explore, it is easy to fall into the trap of working backwards from product literature
and technology tutorials rather than beginning a network design with an
understanding of business and IT requirements. When focus is unduly placed on
products and emerging technologies before business and IT requirements are
determined, the data center network that results may not be the data center
network a business truly needs.
New products and new technologies should be deployed with an eye towards
avoiding risk and complexity during transition. When introducing anything new,
extra effort and rigor should be factored into the necessary design and testing
activities. Also, it would be atypical to start from scratch in terms of network
infrastructure, networking staff, or supporting network management tools and
processes. That means careful migration planning will be in order, as will
considerations concerning continuing to support aspects of the legacy data
center networking environment, along with anything new.
Enterprises that adopt cloud computing have a range of deployment models from
which to choose, based on their business objectives. The most commonly
discussed model in the press is public cloud computing where any user with a
credit card can gain access to IT resources. On the other end of the spectrum
are private cloud computing deployments where all IT resources are owned,
managed and controlled by the enterprise. In between, there is a range of
options including third-party-managed, third-party-hosted and shared or member
cloud services. It is also possible to merge cloud computing deployment models
to create hybrid clouds that use both public and private resources.
In particular, the networking attributes for a private cloud data center network
design are different from traditional data center network design. Traditionally, the
data center network has been relatively static and inflexible and thought of as a
separate area of IT. It has been built out over time in response to point-in-time
requirements, resulting in device sprawl much like the rest of the data center IT
infrastructure. The data center network has been optimized for availability, which
typically has been achieved via redundant equipment and pathing, adding to cost
and sprawl as well.
The attractiveness of a private cloud deployment model is all about lower cost
and greater flexibility. Low cost and greater flexibility are the two key tenets the
network must support for success in cloud computing. This means the network
will need to be optimized for flexibility so it can support services provisioning
(both scale up and down) and take a new approach to availability that does not
require costly redundancy. An example of this could be moving an application
workload to another server if a NIC or uplink fails versus providing redundant
links to each server.
Private cloud adoption models require a new set of network design attributes;
these are demonstrated in Figure 4-1.
A newer protocol, IPv6, also developed by the IETF, is available. IPv6 offers a
bigger address space to accommodate the budding need for new IP addresses.
IPv6 also promises improvements in security, mobility and systems
management. While IPv4 and IPv6 can coexist, ultimately the network of every
organization that uses the public Internet will need to support IPv6. Once the
available IPv4 address space for the world is exhausted, the ability to route
network traffic from and to hosts that are IPv6-only will become a requirement.
No matter the reason why a company introduces IPv6, deployment of IPv6 will
take focus, planning, execution and testing and require operational changes, all
of which will impact servers, other devices in the data center and applications, in
addition to the network itself.
The Number Resource Organization (NRO)4, the coordinating body for the five
Regional Internet Registries (RIRs) that distribute Internet addresses, says that
as of September 2010 just 5.47% of the worldwide IPv4 address space remains
available for new allocations (see Figure 4-2). When the remaining addresses
are assigned, new allocations will be for IPv6 addresses only.
Figure 4-2 Worldwide IPv4 address space over time (81.64% allocated)
other IT resources, and the applications that are used to run the business need
to be checked for their readiness to support IPv6 and plans must be made and
executed for any upgrades. Also, each enterprise must determine its strategy for
the coexistence of IPv4 and IPv6—tunnelling, translation, dual stack or a
combination because both protocols will need to be supported and interoperate
until IPv4 can be retired.
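As a small illustration of dual-stack behavior during the coexistence period, the following sketch uses only the Python standard library to resolve a host and prefer an IPv6 address when one is available, falling back to IPv4 otherwise (the helper name and preference order are assumptions, not a prescribed migration approach):

```python
# Minimal sketch of dual-stack address selection during IPv4/IPv6 coexistence.

import socket

def pick_address(hostname, port=443):
    candidates = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    # Prefer AF_INET6 results; otherwise take the first AF_INET result.
    for family in (socket.AF_INET6, socket.AF_INET):
        for fam, _type, _proto, _canon, sockaddr in candidates:
            if fam == family:
                return fam, sockaddr[0]
    return None

# Example (the result depends on the local resolver and connectivity):
# print(pick_address("www.example.com"))
```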
Figure 4-3 Assuring service delivery, burgeoning solutions, and setting the evolutionary imperatives
These business and service objectives cause nontrivial challenges for data
center network managers and architects since the traditional data center network
infrastructure is not up to the task of satisfying all these new requirements. In
fact, many technology-related challenges arise in this context.
In order to overcome the limitation of the data center network’s static, traditional
physical model, emerging standards and new architectural alternatives
(described in 4.2.2, “Burgeoning solutions for the data center network” on
page 208) can be exploited. The drawback of this path, however, is that while
alleviating today’s challenges, additional points of concern emerge. These will be
presented in 4.2.3, “Additional points of concern” on page 209. The only way to
overcome these is to set high-level yet actionable guidelines to follow, as
shown in Figure 4-3. These will be presented and discussed in section 4.2.3.
In order to frame the discussion around functional areas, the data center network
functional architecture that has been presented in Chapter 1, “Drivers for a
dynamic infrastructure” on page 1 is shown again in Figure 4-4 on page 203.
Figure 4-4 Data center network functional architecture: a single distributed data center with edge connectivity, WAN optimization services, virtualized L1, L2 and L3 network services, infrastructure services (SAN access, local switching/routing, application delivery, access security), management, and virtualized resource pools (CPU, memory, storage) spanning Data Center A and Data Center B
The interconnection shown between data centers (there are two in Figure 4-4 but
there can be more) has to encompass traditional Layer 1 services (for example,
dense wavelength division multiplexing or DWDM connectivity for storage
extension) and Layer 3 services (for example, via multiprotocol label switching or
MPLS), but also, a Layer 2 extension may be needed, driven by server
virtualization requirements.
Figure 4-5 on page 205 shows examples of how the diagrams can be used in
different situations:
Example A shows a typical client-server flow where access security,
application delivery, and local switching functions are all performed in
hardware by specialized network equipment.
Example B shows a client-server flow where switching is also performed
virtually at the hypervisor level and access security and application delivery
functions are not needed.
Example C shows an example of a VM mobility flow between Data Center A
and Data Center B.
Figure 4-5 Examples A, B, and C mapped onto the functional view of the single distributed data center
Given the complex situation just described, we now list the most relevant
networking challenges in the data center, from a technology point of view, that
need to be addressed in order to obtain the logical, functional view of the
infrastructure shown in Figure 4-5. Note that the order does not imply the
relative importance of these challenges for a specific environment, its
processes, and its requirements.
The first set of challenges described here is related to the service drivers as they
impact the data center because the network must support other strategic IT
initiatives.
Layer 2 extension
The widespread adoption of server virtualization technologies drives a
significant expansion of the Layer 2 domain, and also brings the need to
extend Layer 2 domains across physically separated data centers in order to
and WAN routers) have the risky consequence that it is becoming
increasingly difficult to spot and remove network bottlenecks in a timely
and seamless fashion. Clearly this has a significant impact on the
performance, and even the availability, of the enterprise network if the quality
of service (QoS) model is not designed and enforced properly.
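As a minimal illustration of how such a bottleneck can be quantified (the port counts and speeds below are hypothetical examples, not figures from this book), the oversubscription ratio of an access switch can be computed as the aggregate capacity of its server-facing ports divided by its uplink capacity:

# Hypothetical illustration: quantify a potential access-layer bottleneck.
# All figures (port counts, speeds) are invented examples, not recommendations.
def oversubscription_ratio(server_ports: int, port_gbps: float,
                           uplinks: int, uplink_gbps: float) -> float:
    """Aggregate server-facing capacity divided by aggregate uplink capacity."""
    return (server_ports * port_gbps) / (uplinks * uplink_gbps)

# Example: 48 x 10 GbE server ports sharing 4 x 40 GbE uplinks.
ratio = oversubscription_ratio(server_ports=48, port_gbps=10,
                               uplinks=4, uplink_gbps=40)
print(f"Access-layer oversubscription: {ratio:.1f}:1")  # prints 3.0:1

A ratio higher than the application mix can tolerate is exactly the kind of bottleneck that the QoS model must be designed and enforced around.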
Support new, network-demanding services
New services and applications that have very demanding network
requirements, such as video, cannot be accommodated easily in traditional
data center network environments. The challenge is to exploit these new
technologies, which the business demands, while minimizing the required
capital expenditure (CAPEX) and the risk of heavily changing a data center
network infrastructure whose stability has traditionally been enforced by
keeping it static. These challenging new requirements impact both the
performance and the capacity planning nonfunctional requirements.
The next set of data center network challenges is related to the cost drivers,
because the network must bring operating expense (OPEX) and capital
expense (CAPEX) savings by exploiting the automation, consolidation, and
virtualization technologies already leveraged in other IT domains such as
storage and servers.
Appliance sprawl
Today’s data center network environments are typically over-sophisticated
and characterized by dedicated appliances that perform specific tasks for
specific network segments. Some functions may be replicated so
consolidation and virtualization of the network can be leveraged to reap cost
savings and achieve greater flexibility for the setup of new services without
the need to procure new hardware. This appliance sprawl puts pressure on
the scalability and manageability of the data center network.
Heterogeneous management tools
The plethora of dedicated hardware appliances has another consequence
that impacts the operating expenses more than the capital expenses. In fact,
different appliances use different vertical, vendor-specific and model-specific
tools for their management. This heterogeneity has a significant impact
on the manageability and serviceability of the data center network.
Network resources consolidation
Another challenge that is driven by cost reduction and resource efficiency is
the ability to share the physical network resources across different business
units, application environments, or network segments with different security
requirements by carving logical partitions out of a single network resource.
Concerns about the isolation, security, and independence of the logical
resources assigned to each partition limit the widespread adoption of these
technologies. So logically sharing physical network resources has a
provide a framework to ensure lossless transmission of packets in a
best-effort Ethernet network.
New deployment models for network services such as multipurpose
virtualized appliances and virtual appliances can be exploited in order to
consolidate and virtualize network services such as firewalling, application
acceleration, and load balancing—thereby speeding up the time needed to
deploy new services and improving the scalability and manageability of the
data center network components.
Regulatory mandates and industry-specific requirements, together with IPv4
address exhaustion, drive the adoption of IPv6 networks.
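To give a feel for the addressing headroom that IPv6 brings to dual-stack planning, the short Python sketch below is illustrative only (the 2001:db8::/48 prefix is the documentation range, and the segment names are hypothetical); it uses the standard ipaddress module to carve a /48 allocation into per-segment /64 subnets:

# Illustrative only: the prefix used is the IPv6 documentation range,
# not an allocation recommended by this book.
import ipaddress

site = ipaddress.ip_network("2001:db8:abcd::/48")

# Number of /64 subnets available from a single /48 site allocation.
subnet_count = 2 ** (64 - site.prefixlen)
print(f"{site} yields {subnet_count} /64 subnets")   # 65536

# Assign the first few /64s to hypothetical data center segments.
segments = ["management", "production", "storage-replication"]
for name, subnet in zip(segments, site.subnets(new_prefix=64)):
    print(f"{name:22} {subnet}")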
In order to bridge the gap between physical and virtual network resources,
standards are being developed by IEEE (802.1Qbg and 802.1Qbh) to
orchestrate the network state of the virtual machines with the port settings on
physical appliances, thus enabling network-aware resource mobility inside the
data center. Again, these are standards-based solutions that can be
compared with vendor-specific implementations such as Cisco Nexus 1000v
and BNT VMReady.
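The sketch below illustrates the general idea behind this orchestration: the network state of a virtual machine is captured as a profile that follows it, so the physical edge port it attaches to can be configured consistently after a move. The classes, field names, and the migrate step are hypothetical simplifications; they do not reproduce the actual 802.1Qbg/802.1Qbh protocol exchanges.

# Hypothetical sketch of network-aware VM mobility: a port profile that
# follows the virtual machine, loosely inspired by the idea of associating
# a VM's virtual interface with settings on the adjacent physical bridge.
from dataclasses import dataclass

@dataclass
class PortProfile:
    vlan_id: int
    qos_class: str
    acl_name: str

# Profiles keyed by the VM's virtual NIC (all values are illustrative).
profiles = {"vm42-vnic0": PortProfile(vlan_id=110, qos_class="gold", acl_name="web-tier")}

def migrate(vnic: str, dest_switch: str, dest_port: int) -> None:
    """On VM migration, re-apply the VM's profile on the destination edge port."""
    profile = profiles[vnic]
    # In a real environment this would be signalled to the switch through the
    # standards-based exchange rather than printed.
    print(f"{dest_switch} port {dest_port}: VLAN {profile.vlan_id}, "
          f"QoS {profile.qos_class}, ACL {profile.acl_name}")

migrate("vm42-vnic0", dest_switch="access-sw-07", dest_port=12)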
Network resources can also be logically aggregated and partitioned in order
to logically group or share physical appliances to improve the efficiency and
the manageability of the data center network components.
In order to overcome the limits of the vendor-specific network management
tools, abstraction layers can enable network change and configuration
management features in a multivendor network environment. These tools,
more common in the server and storage space, can also provide linkage to
the service management layer in order to orchestrate network provisioning
with the setup of new IT services, improving time-to-deploy metrics and
reducing the capital and operating expenses associated with the growth of the
data center infrastructure.
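A minimal sketch of such an abstraction layer, under stated assumptions, is shown below: the driver classes, the create_vlan operation, and the device names are all invented for illustration, and a real tool would translate the same intent into each vendor's CLI or API (for example, over NETCONF or SNMP).

# Hypothetical sketch of a vendor-neutral change/configuration layer.
# Drivers, commands, and device names are invented for illustration only.
from abc import ABC, abstractmethod

class SwitchDriver(ABC):
    @abstractmethod
    def create_vlan(self, vlan_id: int, name: str) -> str: ...

class VendorADriver(SwitchDriver):
    def create_vlan(self, vlan_id, name):
        return f"vlan {vlan_id}\n name {name}"        # one vendor's CLI dialect

class VendorBDriver(SwitchDriver):
    def create_vlan(self, vlan_id, name):
        return f"set vlans {name} vlan-id {vlan_id}"   # another vendor's dialect

def provision_vlan(drivers, vlan_id, name):
    """One declarative request fans out to every managed device."""
    for device, driver in drivers.items():
        print(f"{device}: {driver.create_vlan(vlan_id, name)!r}")

provision_vlan({"core-1": VendorADriver(), "edge-2": VendorBDriver()},
               vlan_id=210, name="app-tier")

The value of the design is that the provisioning intent is expressed once, while vendor-specific knowledge is confined to the drivers, which is what allows such a layer to be linked upward to the service management tools.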
These issues are not just technology related; they can be broadly categorized
into three main areas: technology, business, and organization.
7 For more information on IEEE DCB, refer to http://www.ieee802.org/1/pages/dcbridges.html
Figure 4-6 Collapsing different security zones on shared server and network hardware. The figure (not reproduced here) contrasts a physical zoning pattern, with dedicated hardware per security zone, and a virtual zoning pattern, in which VMs from different security zones share server and network hardware under controlled security zoning.
Business:
– The cost reduction imperative driven by the global economy has put a lot
of stress on enterprise IT budgets, so that projects and initiatives that
cannot clearly demonstrate the value for the business have very little
chance of being approved and launched by the decision makers. In this
context, network and IT managers are struggling to obtain tools and
methodologies that can show clear return on investment to their business
executives.
– Networking industry consolidation and the forces driving new alliances
among IT and networking vendors may impact existing business
relationships, together with the integration and interoperability governance
that was previously in place.
Organization:
– IT organizations can no longer be grouped into independent silos. The
interdependencies between different teams managing and developing new
solutions for the data center are just too many to rely on the traditional
organizational model. Just to name a few examples: application
characteristics that cannot be ignored by network architects and
Duplicated or suboptimal investments in tools and resources.
In the following sections we present three key evolutionary imperatives that help
to achieve the required shift in mindset.
Because the network no longer acts simply as a "network interface card to
network interface card" domain, the other IT infrastructure components, such as
servers, storage, and applications, now depend on what networking has to offer.
Keeping a purely tactical approach can result in the networking domain delivering
suboptimal performance to the enterprise. Also, networking does not consist
exclusively of data packet forwarding fabrics; it also includes the networking
services that guarantee the service delivery objectives within and outside the
enterprise. This is why the IT organization should orchestrate the change with
current service delivery assurance in mind, while also looking at the horizon in
order to organize the infrastructure, processes, and people around the necessary
level of flexibility, efficiency, and future service delivery models.
The network strategy and planning must therefore be driven by the business
requirements and guided by a set of common design principles. In addition, the
network must provide efficient, timely collection and access for the information
services and must enable the protection of enterprise assets, while facilitating
compliance with local, country, and international laws.
An enterprise should identify the scope, requirements, and strategy for each
identified networking initiative inside the enterprise's unique business
environment; obtain visibility of the current IT infrastructure (for example, by
performing an assessment of the current environment); analyze gaps; and set an
actionable roadmap that defines the potential effects such a strategy could
have on the organization, the network, and the IT infrastructure.
We have seen how envisioning a single distributed data center network can
provide reliable access to all authorized services, deliver efficiencies in
management and cost, and enable more consistent service delivery.
Enterprises should assess the current status of the networking environment and
uncover the networking future state requirements by analyzing both business
and IT objectives. This methodology often needs to leverage reference models,
best practices, and intellectual capital. The analysis will then identify technology
solutions that address a migration process that leverages the current IT
investments in a sustainable operating environment.
The next steps aim at unleashing the potential of a sound transformation that
matters to the enterprise:
Determine the existing versus the required state.
Identify the gaps between the enterprise's current environment and the new
networking strategy.
Build a portfolio of initiatives that overcome the gaps.
Prioritize the different identified initiatives and the networking and overall IT
dependencies.
Determine the initiative’s overall impact (including on the organization) and
transition strategy.
Plan an actionable roadmap in conjunction with the business ROI expectation
in terms of costs versus the value derived.
Develop possible solution approaches for each planned initiative and select
which approach to pursue.
Schedule initiatives.
Complete the transition plan.
By reiterating this process, the enterprise sets the strategy and the path to a
comprehensive networking design that encompasses and unifies network data
forwarding, network security, and network services.
From a high-level point of view, standardization in the data center spans these
three dimensions:
Technology: for example, rationalizing the supported software stacks
(operating systems, hypervisors, databases, application development
environments, and applications, but also network appliance operating
systems) and their versioning simplifies the development process, lowers
license and supplier management costs, and shortens the creation of service
catalogues and of image management tools and processes.
Processes: for example, centralizing processes that span the organization
(such as procurement, external support, and crisis management) improves
consistency and shortens lead time. Also, cost savings can be achieved by
eliminating duplicate processes and optimizing resource efficiency, both
physical and intellectual.
Tools: for example, reducing the number of vertical tools required to
manage and provision the infrastructure can bring cost benefits in terms of
required platforms and skills, and can also improve service delivery by
providing consistent user interfaces.
On the other hand, there are also barriers to the standardization process
described previously:
Standards might offer limited functionality, lower adoption rates, and fewer
available skills than other, more customized alternatives that may already be in
use.
It is not always easy to coordinate across different organizational
departments.
It is very difficult to have an all-encompassing view of the overall
infrastructure and its interdependencies, because applications, technology
domains, and even data centers may be handled independently rather than in a
synchronized, consistent fashion.
The upfront capital investments that may be required and the risks of shifting
away from what is well known and familiar are also barriers in this context.
Network professionals typically operate through command line interfaces
(CLIs), so a shift in tooling, mindset, and skills is required in order to
integrate network management and configuration into more consumable
interfaces, such as those used in the storage and server arenas.
The over-sophisticated, highly customized, and heterogeneous IT world
struggles to reach the end goal of a homogeneous, resource-pool-based IT
environment without actionable, well-understood, and agreed-upon milestones
and initiatives, which in turn might depend on tools and technologies that offer
limited functionality compared to what is required.
Standardization is a journey that requires coordination and understanding of
common goals across people, tools, technologies and processes. On the other
hand, it is a key enabler for automation, cost and service delivery optimization
and is something that should always be kept in mind when integrating new
functionalities and technologies inside a dynamic data center infrastructure.
The data center network should be designed around service delivery, starting
from a high-level design that outlines the basic solution structure to support the
enterprise strategy and goals. This is done by establishing a set of guiding
principles and using them to identify and describe the major components that will
provide the solution functionality. The items that typically should be developed are:
Guiding principles: Principles relate business and IT requirements in a
language meaningful to IT and network managers. Each principle is
supported by the reason or the rationale for its inclusion and the effect or the
implications it will have on future technology decisions. Each principle refers
to at least one of the enterprise's key business requirements. The principles
are also developed to help explain why certain products, standards, and
techniques are preferred and how these principles respond to the needs of
enterprise security, servers, applications, storage and so on.
A network conceptual design: This is the first step in crafting a network
solution to support business requirements. It describes the relationship
between the building blocks or functions that comprise the network and
services required to meet business needs. This design also shows the
interactions between networking and other IT domains. At this level,
technology is not specified but support of the business purpose is shown.
Business requirements are then mapped to the network through guiding
principles, conceptual network design diagrams and application flows.
A specified design: This process refines the conceptual design by developing
the technical specifications of the major components of the solution in terms
of interfaces, capacity, performance, availability, security, and management.
Once the physical design is complete, the enterprise will have a level of detail
appropriate to execute the initiative.
Figure 4-7 on page 219 depicts the possible interactions among the IT
infrastructure teams that are needed for a service-oriented DCN design, along
with examples of the interactions that the network domain might have with other
IT and non-IT domains.
Figure 4-7 (not reproduced here) places the network domain at the center of enterprise services needs, architectural governance, and service delivery assurance, and shows its interactions with the application domain (WAN, Internet, QoS, latency, bandwidth), the server domain (virtual switching, hypervisors, ports), the security framework, industry regulations, the storage domain (NAS, FC, iSCSI, FCoIP, FCoE), network management and network provisioning tools and automation (SNMP, NETCONF, web interfaces), and service management.
While we see industry regulations as external yet mandatory to follow, for each of
the planned IT initiatives, the network team, as shown in Figure 4-7, can leverage
and work with:
Application teams, to determine how the network should treat the flows and
assure the required availability. It is also important to demonstrate to the
business the cost that the network must sustain in order to satisfy all the
requirements. For example, the expected level of availability of a certain
application has to be matched with similar (if not higher) network availability.
This holds true both for client-to-server and server-to-server traffic flows.
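As a simple worked example of matching network availability to an application target (the figures below are hypothetical), the end-to-end availability of components in series is the product of the individual availabilities, so each network segment typically needs to be more available than the application it supports:

# Hypothetical worked example: availability of components in series.
from math import prod

def series_availability(*components: float) -> float:
    """End-to-end availability of elements that must all be up (in series)."""
    return prod(components)

app_target = 0.999                       # application SLA of 99.9% (illustrative)
network_path = [0.9995, 0.9999, 0.9995]  # access, core, WAN segments (illustrative)

end_to_end = series_availability(app_target, *network_path)
print(f"Network path availability: {series_availability(*network_path):.5f}")
print(f"End-to-end (app + network): {end_to_end:.5f}")
# Even with every segment above 99.9%, the three-segment path in this example
# already falls just below 99.9%, which is why the network path must be
# engineered to at least the application availability target.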
Server teams to define the right level of synergy to avoid layer and
functionality duplications and delays in the application flow processing. For
example, the virtual switching functions implemented in the hypervisors are
Deep skills and extensive experience are necessary to detail and specify the
complete enterprise architecture. It is necessary to link network services and
security to other aspects of IT and business organization at key design decision
points, including systems management, applications development,
organizational change, testing and business continuity. The approach we
recommend enables the enterprise to deliver a robust and resilient solution that
aligns with and supports the ever-changing business and industry regulatory
environment.
IT resource efficiencies while cloud computing is changing the paradigm of how
IT resources are sourced and delivered. This major shift in how IT resources are
perceived and utilized is changing how the IT infrastructure needs to be designed
to support the changing business requirements and take advantage of
tremendous industry innovation.
When equipped with a highly efficient, shared, and dynamic infrastructure, along
with the tools needed to free up resources from traditional operational demands,
IT can more efficiently respond to new business needs. As a result, organizations
can focus on innovation and aligning resources to broader strategic priorities.
Decisions can be based on real-time information. Far from the “break/fix”
mentality gripping many data centers today, this new environment creates an
infrastructure that provides automated, process-driven service delivery and is
economical, integrated, agile, and responsive.
What does this evolution mean for the network? Throughout the evolution of the
IT infrastructure, you can see the increasing importance of stronger relationships
between infrastructure components that were once separately planned and
managed. In a dynamic infrastructure, the applications, servers, storage and
network must be considered as a whole and managed and provisioned jointly for
optimal function. Security integration is at every level and juncture to help provide
effective protection across your infrastructure, and across your business.
IBM Global Technology Services (GTS) has a suite of services to help you
assess, design, implement, and manage your data center network. Network
strategy, assessment, optimization and integration services combine the IBM IT
and business solutions expertise, proven methodologies, highly skilled global
resources, industry-leading management platforms and processes, and strategic
partnerships with other industry leaders to help you create an integrated
communications environment that drives business flexibility and growth.
IBM network strategy, assessment and optimization services help identify where
you can make improvements, recommend actions for improvements and
implement those recommendations. In addition, you can:
Resolve existing network availability, performance, or management issues.
Establish a more cost-effective networking and communications environment.
Enhance employee and organizational productivity.
IBM Network Integration Services for data center networks help you position your
network to better meet the high-availability, high-performance and security
requirements you need to stay competitive. We help you understand, plan for and
satisfy dynamic networking demands with a flexible, robust and resilient data
center network design and implementation. Whether you are upgrading, moving,
building or consolidating your data centers, our goal is to help improve the
success of your projects.
Figure 4-8 on page 223 depicts the IBM services approach.
Figure 4-8 (not reproduced here) spans the engagement phases from "Help me decide what to do" to "Help me do it," with project management across all phases.
In all phases shown in Figure 4-8, we will work with you to:
Plan
– Understand your current IT and networking environment.
– Collect and document your requirements.
– Identify performance and capacity issues.
– Determine your options.
– Compare your current environment to your plans.
– Make recommendations for transition.
Design
– Develop a conceptual-level design that meets the identified solution
requirements.
– Create a functional design with target components and operational
features of the solution.
– Create a physical design to document the intricate details of the solution,
including vendor and physical specifications, so that the design may be
implemented.
– Deliver a bill of materials and a plan for implementation.
large, complex projects such as this project. Establishing the appropriate
planning and governance frameworks at the outset will define the business
partnership relationships at varying levels and help both organizations maximize
the value and objective attainment that each can realize from this relationship.
The solution logical design contains information about the topology, sizing, and
functionality of the network, routing versus switching, segmentation, connections,
and nodes. The solution logical design is where the decisions are made
concerning OSI Layer implementation on specific nodes. Sizing information is
taken into account for connections and nodes to reflect the requirements (for
example, capacity and performance) and the nodes' ability to handle the
expected traffic. Functionality is documented for both nodes and connections to
reflect basic connectivity, protocols, capabilities, management, operations, and
security characteristics.
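As a small, hypothetical illustration of the kind of sizing check that feeds this level of the design (the traffic figures and headroom factors are invented, not prescriptive), the expected flows through a node can be aggregated and compared with candidate uplink capacity:

# Hypothetical sizing check for a node in the logical design.
# Traffic figures and headroom policy are illustrative only.
def required_uplink_gbps(flows_gbps, peak_factor=1.5, growth_factor=1.3):
    """Aggregate expected flows, then add peak and growth headroom."""
    return sum(flows_gbps) * peak_factor * growth_factor

expected_flows = [4.0, 2.5, 1.5]   # e.g. server-to-server, client, replication (Gbps)
needed = required_uplink_gbps(expected_flows)
candidate_uplinks = 2 * 10.0       # two 10 GbE uplinks

print(f"Required capacity: {needed:.1f} Gbps, candidate: {candidate_uplinks:.1f} Gbps")
print("OK" if candidate_uplinks >= needed else "Undersized: revisit node sizing")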
At this level of the design, IBM identifies product selection criteria; selects
products that meet specific node requirements; selects connection technologies;
rationalizes and validates the design with you; develops infrastructure and
facilities plans for the network; develops a detailed bill of materials; and prepares
a network infrastructure implementation plan.
Once the solution physical design is complete, you will have an architecture and
design specified to a level of detail appropriate to execute a network procurement
activity and, following delivery, to begin implementation of the new network.
procurement, site preparation, cabling configuration, installation, system testing,
and project management.
IBM has the deep networking skills and extensive experience necessary to assist
you in detailing and specifying a data center network architecture. Our Network
Integration Services team links network services and security to other aspects of
your IT and business organization at key design decision points, including
systems management, applications development, organizational change, testing
and business continuity. Our approach enables us to deliver a robust and resilient
solution that aligns with and supports your changing business. A strong partner
ecosystem with suppliers like Cisco, Juniper, F5, and Riverbed also enables our
network and security architects to support the right combination of networking
technology for you.
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this book.
IBM Redbooks
For information about ordering these publications, see “How to get Redbooks” on
page 231. Note that some of the documents referenced here may be available in
softcopy only.
Best Practices for IBM TEC to Netcool Omnibus Upgrade, SG24-7557
Communications Server for z/OS V1R9 TCP/IP Implementation Volume 1
Base Functions, Connectivity and Routing, SG24-7532
Communications Server for z/OS V1R9 TCP/IP Implementation Volume 2
Standard Applications, SG24-7533
Communications Server for z/OS V1R9 TCP/IP Implementation Volume 3
High Availability, Scalability and Performance, SG24-7534
Communications Server for z/OS V1R9 TCP/IP Implementation Volume 4
Security and Policy-Based Networking, SG24-7535
IBM Systems Virtualization Servers, Storage, and Software, REDP-4396
IBM System z Capacity on Demand, SG24-7504
IBM System z Connectivity Handbook, SG24-5444
IBM System z Strengths and Values, SG24-7333
IBM System z10 BC Technical Overview, SG24-7632
IBM System z10 EC Technical Guide, SG24-7516
IBM System z10 EC Technical Introduction, SG24-7515
Introduction to the New Mainframe - z/VM Basics, SG24-7316
Migrating to Netcool Precision for IP Networks - Best Practices for Migrating
from IBM Tivoli NetView, SG24-7375
PowerVM Virtualization on IBM System p Introduction and Configuration 4th
Edition, SG24-7940
z/VM and Linux Operations for z/OS System Programmers, SG24-7603
GDPS Family - An Introduction to Concepts and Capabilities, SG24-6374
Online resources
These Web sites are also relevant as further information sources:
ICS SalesOne Portal
http://w3-03.ibm.com/services/salesone/S1_US/html/htmlpages/networkingsvcs/ns_splash.html
Virtual Ethernet Port Aggregator Standards Body Discussion, Paul Congdon,
10th November 2009
www.ieee802.org/1/files/public/docs2008/new-congdon-vepa-1108-vo1.pdf
First Workshop on Data Center - Converged and Virtual Ethernet Switching
(DC CAVES)
http://www.i-teletraffic.org/itc21/dc-caves-workshop/
VSI Discovery and Configuration - Definitions, Semantics, and State
Machines
http://www.ieee802.org/1/files/public/docs2010/bg-sharma-evb-VSI-discovery-0110-v01.pdf
Trivial TLP Transport (T3P) - Proposed T3P DU and State Machines
http://www.ieee802.org/1/files/public/docs2010/bg-recio-evb-t3pr-0110-v01.pdf
Ethernet connection 186 H
Ethernet frame 166 hardware platform 162, 177
Ethernet mode (Layer 2) 51 Health Insurance Portability and Accountability Act
Ethernet port 188 (HIPAA) 19, 28
Ethernet world 166 higher-value activity 11
evaluating vendor technologies 197 HiperSockets 43
evolution of the data center homogeneous resource 216
factors influencing ix, 244 hypervisor API 181
evolutionary imperatives 193 hypervisor development 153
example hypervisors 212 hypervisor level 204
expected service delivery 212 hypervisor resource 164
extended backplane 167
extensive customization 11
external regulations 28 I
I/O (IO) 189
external switch tagging (EST) 89
I/O operation 164
I/O performance 165
F IBM Integrated Communications Services portfolio
facilitate 212 193
facilities team 220 IBM Security
fact VPLS 172 Framework 178
failure management 25 network component 181
FCoE product 181
Fiber Channel over Ethernet 208 IBM Security, see (ISS) 184
Fiber Channel IBM Systems Director 123
Forwarder 163 IBM Tivoli Network Manager (ITNM) 128
over Ethernet 208 IBM Virtualization Engine TS7700 109
Switch 208 IDS (Intrusion Detection Services) 58
Fiber Channel over Ethernet IEEE
FCoE 208 802.1Qbg 209
firewalling 209 802.1Qbh 209
First Hop Router Protocol (FHRP) 174 IEEE 802.1aq 208
FRR TE 174 IEEE standard 164
IETF 208
IETF draft 172
G
geo-load balancing 203 IETF L2VPN 172
Gigabit Ethernet 52 IETF TRILL 208
global economy 211 IFL (Integrated Facility for Linux) 46
global integration IGP 174
indicators 1 impact on the security requirements 208
Global Technology Services (GTS) 221 implementing new network equipmen 195
goals improve the efficiency 209
networking guideline principle 212 improving network security 195
governance 212 inability to perform accurate and timely root-cause
governance of the data center network 212 analysis 195
guest VM 164 increasing capacity by upgrading switches 195
guiding principle 217 independant silo 211
independence of logical resources 207
individual SLAs 13
Infiniband network 166 L
infrastructure management 204 L2/L3 boundary 172
Integrated Facility for Linux (IFL) 46 LAN is shifting 210
Integrated Virtualization Manager, IVM 64 LAN-like performance 158
interconnection between data centers 203 Layer 2 (data link layer) 50
interdependencies 211 Layer 2 domain 208
Internet address 200 Layer 3 (network layer) 50
Internet Number 200 Layer 3 IP network 208
American Registry 200 legacy data center 197
Internet standard 172 License and Update Management (LUM) 182
interoperability governance 211 Live Partition Mobility 64
interoperability issue 13 load balancing 209
Inter-VM IPS 182 logical partitions 207
Intrusion Detection Services (IDS) 58 Logical Unit Number (LUN) 37
IO VI 190 logically aggregate 209
IP address 184, 200 logically group 209
range 201 looking ahead 195
standard 192 low cost 198
IP address range allocated to an organization 201 low-level implementation 216
IP assignment 176
IP core 156
IP resolution 176 M
main enablers 157
IP router 175
manage the network as one single logical switch
IPS/IDS. VPN 176
208
IPv4 address
manageability of the data center network compo-
exhaustion 209
nents 209
IPv4 address space
management cost 215
number remaining 200
management of servers 207
IPv4 and IPv6 can coexist 200
MapReduce method 6
IPv6 200
Market Landscape for Data Center Networking 193
address spaces 200
master plan 218
network support 200
merge the plug-and-play nature of an Ethernet Lay-
planning and readiness 201
er 2 network 208
promises 200
Microsoft System Center Virtual Machine Manager
IPv6 and the data center 199
2007 (SCVMM) 103
IPv6 networks
migration planning 197
adoption of 209
mitigate 212
isolation security 207
mobile device 6–7
IT infrastructure 203
moving an application workload to another server
IT simplification 210
198
MPLS 203
J MPLS service 174
jitter, definition of 26 multipathing 208
Juniper Stratus 208 multiple SI 190
multi-protocol label switching (MPLS) 203
multipurpose virtualized appliances 209
K
key enabler 217 multitenant architecture 8
multitenant environment 175
multitenant platform 12 new technologies 196
multivendor networking environment 197 NFR
multivendor networks 197 non-functional requirements 202
non-blocking 208
non-functional requirements
N NFR 202
navigating vendor-specific alternatives 210
non-functional requirements (NFR) 202
Network Access Control (NAC) 182
non-functional requirements (NFRs) 23
Network Address Translation 201
north-to-south traffic 165
network architecture 25, 28, 187–188, 206
different requirements 165
network challenges
NRO (Number Resource Organization) 200
relevant 205
Number Resource Organization (NRO) 200
risks 195
network changes
most common types of 195 O
network critical role 198 Open Shortest Path First (OSPF) 55
network device 25, 29 Open Systems Adapter-Express (OSA-Express)
network infrastructure 23, 29, 152, 154, 196, 202 OSA-Express (Open Systems Adapter-Express)
data convergence impact 155 48
educated and consistent decisions 214 operating expenses 207
network interface card (NIC) 23, 159, 198 operating system 15, 164
network requirements 195 OPEX 201
network resources 209 optimized for flexibility 198
network service 162, 176, 209 options 212
deployment model 177 organization 209, 211
different categories 177 OSA-Express 53
network virtualization 3, 33, 149 OSI model 179
different levels 185 layer 7 179
first manifestations 150 outside OS 184
technology 149 OS memory tables 184
network-attached device 199 overall initiative 195
networking attributes for a private cloud data center Overlay Transport Virtualization (OTV) 172
network design 198
networking industry consolidation 211
networking investments 195
P
packet services delivery 210
networking vender 211
Parallel Sysplex 57
networks - essential to the success of cloud comput-
Parallel Sysplex clustering 45
ing initiatives 197
partitioned 209
network-specific function 204
Payment Card Industry (PCI) 19, 28
network-specific functions 204
perceived barrier 12
access security 204
performance and availability 208
application delivery 204
physical appliances 209
local switching and routing 204
physical server
new inputs 193
platform 22
new issues 209
physical switch 153
new products 196
plain old telephone service (POTS) 152
new products and new technologies 196
PMBOK phase 224
new services 207
points of concern 193
new skills 212
policy enforcement 161, 174
polling agents 129 Root Complex (RC) 190
portsharing 57 Root Port (RP) 190
POWER Hypervisor 61 rootkit 184
primary router (PRIRouter) 53 Routing Information Protocol (RIP) 55
private cloud
adoption models 199
Project Management
S
same IP 185
Body 224
address 188
Institute 224
same subnet need 171
Project Management Institute (PMI) 224
SAN Virtual Controller (SVC) 106
proprietary control plane architecture 208
SAN-like model 210
protocol analysis module (PAM) 181
scalability 27, 209
public Internet 14, 199
scalability and manageability 207
SEA (Shared Ethernet Adapter) 67
Q secondary router (SECRouter) 53
QDIO mode 50 security 71
QoS capability 187 certifications 71
QoS tag 192 security framework 169, 174, 220
Quality of Service (QOS) 26, 28, 207 important implications 174
Quality of Service (QoS) requirements 26 security layer 156
security policy 27
security zones 28
R security, definition of 27
RACF 58
selecting standards, techniques, and technologies
RACF (Resource Access Control Facility) 49
196
real-time access 5, 18, 152
semi-trusted region 27
real-time data 6
server and storage specific functionalities 204
real-time information 3, 221
server consolidation and virtualization initiatives
real-time integration 4
206
recentralization 157
server network interface card (NIC) 24
Redbooks website 231
server virtualization 33
Contact us xii
requirement 203
redundant data centers 24
technique 168
Regional Internet Registries
techologies 205
RIR 200
service delivery 2–4, 162, 197, 201
regulatory and industry-specific regulations 209
service drivers 201
regulatory compliance 13
Service Level Agreement (SLA) 28
related skills 212
Service Management 2–3
requirement
service provider 28
IPV6 only 200
service-oriented approach 4
Resource Access Control Facility (RACF) 49
share physical appliances 209
resource efficiency 207, 210
share the physical network resources 207
resource pools 94
Shared Ethernet Adapter (SEA) 67
right cost 212
shared partition 41
RIP (Routing Information Protocol) 55
shared resources pools 210
RIRs (Regional Internet Registries) 200
Shortest Path Bridging
risk/opportunity growth 1
SPB 208
role of the network
significant saving 197
critical focus 195
single distributed data center 203 ToR topology 165
Single Root I/O Virtualization (SR-IOV) 164, 189 trade-off 210
single ToR 168 Traffic Engineering (TE) 174
Site Protector (SP) 182 Transparent Interconnect of Lots of Links
Software Development Kit (SDK) 91 TRILL 208
software-based ADCs 162 Transport Layer Security (TLS) 58
software-based appliance 176 TRILL (Transparent Interconnect of Lots of Links)
spanning tree protocol (STP) 166, 206 208
SPB TSAM - High-Level Architecture 117
Shortest Path Bridging 208 T-shaped skill 212
Special Interest Group (SIG) 189 TTL 175
special-purpose software 8 two-fold implication 204
spiraling number 199
SSL termination 162
stability of the control plane 206
U
understand application-level responsiveness 195
standards-based fabrics 208
use case 15
static VIPA 55
storage and data converged networks 212
storage virtualization 33, 65 V
technology 199 VCS
trend 152 Brocade Virtual Cluster Switching 208
STP vendor lock-in 12
Spanning Tree Protocol 206 vendor-specific alternative 210
STP approach 174 vendor-specific data center architectures 208
STP instance 206 vendor-specific feature 215
STP instances 206 vendor-specific implementation 209
suboptimal performance 213 vertical, vendor and model specific tools 207
support new, network-demanding services 207 VIPA (Virtual IP Addressing) 55
supporting virtualization 195 virtual appliances 209
SVC (SAN Virtual Controller) 106 virtual End of Row (VEOR) 163
Switch Fault Tolerance (SFT) 188 virtual Ethernet 65
Sysplex Distributor (SD) 56 Virtual Ethernet Port Aggregator (VEPA) 231
system management 29 Virtual I/O Server 64–65
Shared Ethernet Adapters 65
Virtual IP Addressing (VIPA) 55
T virtual MAC function 55
technology 209–210
virtual machine
technology alternatives 212
network state 209
The New Voice of the CIO 197
virtual machine (VM) 23, 47, 163, 196
thinking 193
virtual machine (VM) mobility 204
throughput, definition of 26
virtual machine guest tagging (VGT) 89
Tivoli Service Automation Manager and ISM Archi-
Virtual Machine Observer
tecture 116
main task 184
TLS (Transport Layer Security) 58
Virtual Machine Observer (VMO) 182
Today’s DCN Challenges 202
virtual networking 206
tools and methodologies 211
Virtual Private LAN Service (VPLS) 172
Top of Rack (TOR) 153
virtual resource pool level 204
Top of Rack (ToR) 153
Virtual Server Security (VSS) 181
ToR model 163
virtual switch 153, 186
management boundary 212
network 181
port 189
support QoS 192
virtual switch tagging (VST) 88
virtualization layer 14
virtualization technology 23, 149, 207, 220
virtualized data center
layer 169
network 22
network environment 180
virtualized resource pools
implications 204
virtualized system 164
virtualized, to meet (VM) 150, 184
VLAN tagging 191
VLAN translation 172
VM
behavior 181
building block 206
entity 189
migration 162
VM (virtual machine) 47
VMotion 94
VMs 181
VMWare VMotion 206
W
WAN Acceleration 203
wide area network (WAN) 160–161, 203
WOC 161
workforce 7, 152
www.internetworldstats.com/stats.htm 5
Z
z/VM guest 46
z/VM virtual switch 50
Back cover

Drivers for change in the data center
IBM systems management networking capabilities
The new data center design landscape

The enterprise data center has evolved dramatically in recent years. It has moved from a model that placed multiple data centers closer to users to a more centralized dynamic model. The factors influencing this evolution are varied but can mostly be attributed to regulatory, service level improvement, cost savings, and manageability. Multiple legal issues regarding the security of data housed in the data center have placed security requirements at the forefront of data center architecture. As the cost to operate data centers has increased, architectures have moved towards consolidation of servers and applications in order to better utilize assets and reduce "server sprawl." The more diverse and distributed the data center environment becomes, the more manageability becomes an issue. These factors have led to a trend of data center consolidation and resources on demand using technologies such as virtualization, higher WAN bandwidth technologies, and newer management technologies.

The intended audience of this book is network architects and network administrators.

In this IBM Redbooks publication we discuss the following topics:
The current state of the data center network
The business drivers making the case for change
The unique capabilities and network requirements of system platforms
The impact of server and storage consolidation on the data center network
The functional overview of the main data center network virtualization and consolidation technologies
The new data center network design landscape

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information:
ibm.com/redbooks
SG24-7928-00    ISBN 0738435392