
EXECUTIVE SUMMARY

Ethernet in Data Center Networks


Kent Lusted, Principal Engineer, Intel; Co-chair, Ethernet Alliance High Speed Networking Subcommittee
Daniel Gonzalez, Business Development Manager, Anritsu Co.

JANUARY 19, 2022


KEY TAKEAWAYS

•  Growing demand for information has created


an explosion in data center traffic.

•  Data center architectures are evolving to


support increasing Ethernet transfer rates.

•  Data center operators must optimize Ethernet


media types for speed, power, reach, and
latency.

•  DPUs and IPUs are influencing data center


design.

•  Data centers are transforming into edge


computing networks.

•  To address requirements related to power and


Ethernet speeds, data centers are turning to
optical transceivers and high-speed breakout
cables.

•  Networking equipment manufacturers use


Anritsu solutions to measure signal integrity.

•  As distributed computing becomes more


common, service levels and 400G network
in partnership with interoperability are priorities for data center
operators.

•  With multi-access edge computing and


network virtualization, data center providers
can maintain different SLAs for different
applications.

OVERVIEW

Looking ahead, data center networking is expected to see continued evolution and innovation. This year, a growing number of data center operators will move beyond 100 Gigabit Ethernet toward 400G. Meanwhile, the IEEE 802.3 working group is beginning work on standards for what comes next via the new P802.3df Task Force. As data center network operators move to 400 Gigabit Ethernet and beyond, they will face new challenges such as signal integrity, network interoperability, and maintaining service level agreements (SLAs) for different applications. To address these issues, both data center operators and networking equipment manufacturers are turning to Anritsu test and measurement equipment.

CONTEXT

Kent Lusted and Daniel Gonzalez discussed Ethernet usage trends in data center networks, as well as the technologies helping operators meet growing bandwidth demands and verify network performance at high speeds.

KEY TAKEAWAYS

Growing demand for information has created an explosion in data center traffic.

Between 2010 and 2020, data center traffic experienced exponential growth. That growth further accelerated during the global pandemic. People working, learning, and playing at home increased the number of applications running in data centers, and connectivity became even more critical. The growing numbers of users, access methods, and services have created a bandwidth explosion.

Figure 1: Voracious Demand for Data (Courtesy: Cedric Lam, Google)

Data center architectures are evolving to support increasing Ethernet transfer rates.

A cloud-scale data center consists of a data center interconnect (DCI) that connects the data center building to others in the area, as well as routers, leaf and spine switches, top-of-rack (TOR) switches, middle-of-row (MOR) switches, and servers.

Historically, DCIs have supported 100 Gigabit Ethernet. Today, they are more likely to be 400 Gigabit Ethernet. For the server and compute element, data transfer rates in the past were 10 Gigabit Ethernet. Today, 25 Gigabit Ethernet is the norm, and speeds in excess of 100 Gigabit Ethernet are expected. In the data centers of today and tomorrow, reach distances will decrease as a consequence of rate choices and the associated technology implications.

Figure 2: Next-Generation Data Center Networking (Courtesy: John D'Ambrosia, Futurewei)
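To make the hierarchy above concrete, the following is a minimal sketch (not from the presentation) that lists the tiers of a cloud-scale data center and the Ethernet rate evolution the summary gives for the DCI and server tiers; tiers whose rates the text does not state are left unannotated.

```python
# Minimal sketch of the cloud-scale data center tiers described above.
# Rates are in Gb/s and reflect only the figures given in the summary;
# tiers without stated rates are listed with no annotation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tier:
    name: str
    past_gbps: Optional[int] = None   # typical rate "in the past", per the text
    today_gbps: Optional[int] = None  # typical rate today, per the text

CLOUD_SCALE_DC = [
    Tier("data center interconnect (DCI)", past_gbps=100, today_gbps=400),
    Tier("router"),
    Tier("leaf and spine switches"),
    Tier("top-of-rack (TOR) switch"),
    Tier("middle-of-row (MOR) switch"),
    Tier("server / compute NIC", past_gbps=10, today_gbps=25),  # 100G+ expected next
]

for tier in CLOUD_SCALE_DC:
    rates = (f"{tier.past_gbps}G -> {tier.today_gbps}G"
             if tier.past_gbps else "rate not stated in the summary")
    print(f"{tier.name}: {rates}")
```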


Data center operators must optimize Ethernet media types for speed, power, reach, and latency.

Data centers support a variety of media types, which must be optimized for speed, power, reach, and latency. Higher Ethernet rates are forcing the industry to reassess long-held assumptions about:

1. Power. Data centers already have access to the maximum amount of power. As a result, data center designers must consider how to use available power more efficiently.

2. Switch, router, transceiver, NIC architecture, and physical design. These must be built using a holistic approach.

3. Application latency. Users engage in many different online activities ranging from search to ecommerce, social media, storage, and more. All of these applications have different user latency expectations.

As data centers optimize Ethernet media, they need to strike a careful balance between reach and cost. Each solution has a unique optimization point that addresses a specific data center challenge. For example, passive copper cables are relatively cost effective, but provide relatively short reach. Multi-mode solutions are more costly, but provide greater reach. With single-mode fiber and coherent technologies, cost increases dramatically.

Figure 3: Reach vs. Cost Considerations
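As a rough illustration of that reach-versus-cost ordering, the sketch below picks the cheapest media type whose reach covers a given link. The reach figures are approximate assumptions for 400G-class links, not numbers from the presentation.

```python
# Illustrative only: media types ordered from cheapest to most expensive,
# each with an assumed maximum reach in meters for 400G-class links.
MEDIA_BY_COST = [
    ("passive copper (direct attach cable)", 3),             # within a rack
    ("multi-mode fiber (SR-style optics)", 100),              # within a row or small hall
    ("single-mode fiber (DR/FR/LR-style optics)", 10_000),    # building or campus scale
    ("coherent single-mode (ZR-class optics)", 80_000),       # data center interconnect
]

def cheapest_media(required_reach_m: float) -> str:
    """Return the lowest-cost media type whose assumed reach covers the link."""
    for name, max_reach_m in MEDIA_BY_COST:
        if required_reach_m <= max_reach_m:
            return name
    raise ValueError("no listed media type covers this reach")

print(cheapest_media(2))       # in-rack server-to-TOR link
print(cheapest_media(500))     # leaf-to-spine across a building
print(cheapest_media(40_000))  # metro DCI span
```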
DPUs and IPUs are influencing data center design.

DPUs are hardware accelerators that offload networking and communication workloads from the CPU. With the exponential increase in network traffic to the network interface card in the server, increases in software-defined networking (SDN) have put a greater load on servers.

Infrastructure processing units (IPUs) accelerate and run the SDN and management software in hardware constructs away from server cores. Intel's IPU enables server cores to continue to run end customer applications, as well as provides system-level security, control, and isolation. The software framework offers a common look and feel for users, abstracting the IPU's features and capabilities. The Intel IPU is a hardware and software programmable solution.

Figure 4: IPUs Support the Data Center of the Future

"Ethernet is shaping today's and tomorrow's applications in the data center. At the same time, the applications running in data centers are influencing the development of Ethernet for today and tomorrow. It's a symbiotic relationship."
Kent Lusted, Intel and The Ethernet Alliance


Data centers are transforming into edge computing networks.

Over the past three to five years, high-speed computing resources have shifted from centralized locations to a distributed model that uses high-speed, low-latency interconnections between resources. More recently, high-speed computing resources have pushed out as far as the network edge, or as close to the user as possible. This trend has been driven primarily by increased demand for application support, Internet of Things (IoT) devices, and latency-sensitive services.

As computing resources move closer to the edge, the latency key performance indicator (KPI) tightens. This KPI is application-service dependent. Latency affects the user experience for applications and must be considered when deploying Ethernet connections.

Figure 5: Evolution of the Data Center into an Edge Computing Network

To address requirements related to power and Ethernet speeds, data centers are turning to optical transceivers and high-speed breakout cables.

Daniel Gonzalez made two observations about Ethernet trends within data centers:

• Since data centers can't get more power, many are using optical transceivers to reduce power while increasing the bitrate. In hyperscale data centers, high-speed Ethernet optical interface advances have enabled providers to increase their leaf-spine connections up to 400G. As data center operators upgrade leaf-spine connections and deploy equipment from different vendors, they often face interoperability challenges. Not all 400G Ethernet optics are created equal, and their performance on forward error correction (FEC) KPI thresholds varies.

• High-speed breakout cables reduce costs, but they also present performance and distance tradeoffs. High-speed breakout cables use the same pluggable interface as optics like quad small form-factor pluggables (QSFPs) or small form-factor pluggables (SFPs). However, they are fanout cables where one end supports the aggregate rate and the other end is a series of disaggregated interfaces. New breakout cables can support upward of 400G and reduce deployment costs. Data center providers must weigh the costs and benefits to identify the best fit based on application and deployment details.

Figure 6: Direct Connection vs. Breakout Cables
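As a small illustration of the fanout idea, the sketch below models a breakout cable as one aggregate end plus several disaggregated ends and checks that the branch rates sum to the aggregate rate. The 400G to 4 x 100G example is an assumption for illustration; actual breakout options vary by product.

```python
# Illustrative only: a breakout (fanout) cable has one aggregate end and
# several disaggregated branch ends; the branch rates should add up to the
# aggregate rate.
from dataclasses import dataclass
from typing import List

@dataclass
class BreakoutCable:
    aggregate_gbps: int      # rate of the single QSFP-style end
    branch_gbps: List[int]   # rates of the fanned-out ends

    def is_consistent(self) -> bool:
        return sum(self.branch_gbps) == self.aggregate_gbps

cable = BreakoutCable(aggregate_gbps=400, branch_gbps=[100, 100, 100, 100])
print(cable.is_consistent())  # True: a 400G port fanned out to 4 x 100G
```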


Networking equipment manufacturers use Anritsu solutions to measure signal integrity.

To test the latest versions of 100, 200, 400, and even 800 Gigabit optical interfaces, manufacturers use Anritsu instruments like the MP1900A Signal Quality Analyzer. The MP1900A Signal Quality Analyzer is a benchtop, high-speed bit error rate tester that supports the latest Ethernet rates up to 800 Gigabits per second.

Pluggable optical host interfaces have multiple lanes of pulse amplitude modulation (PAM4) signals. Each lane carries forward error correction (FEC) patterns that are generated in the pluggable optic and are converted to an optical waveform.

Optical waveforms are evaluated to determine whether the signal integrity remains intact over the physical medium, which could be optical, coax, or copper cable, depending on whether it is an active optical cable or a direct attach cable.

Figure 7: Anritsu Test and Measurement Solutions for 400G/800G Optics

As distributed computing becomes more common, service levels and 400G network interoperability are priorities for data center operators.

Advances in Ethernet data transport speeds are improving network latency and throughput. As a result, data centers are shifting from centralized models to distributed and interconnected models.

Sharing high-speed computing resources over a converged network offers a variety of benefits. Rather than creating dedicated, redundant disaster recovery sites, it is now possible to use high-speed resources to distribute computing across multiple locations. Those locations can be interconnected to create pooled resources for computing-intensive applications.

Coherent, pluggable 400GBASE-ZR optical modules can transport 400G Ethernet over individual wavelengths across various optical network devices. The challenge, however, is ensuring 400G network interoperability with multi-vendor pluggable modules.

In addition, maintaining high service levels can be difficult in a distributed network with multiple demarcation points. Many data center operators use Anritsu test and measurement equipment to verify whether the network meets their key performance indicators. To ensure that the quality of the link remains consistent, KPIs can be measured frequently and across various demarcation points, which may be managed by different providers.

Figure 8: Ethernet DCI KPIs Across Multiple Demarcation Points
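As a rough illustration of that kind of monitoring, the sketch below checks KPI readings taken at several demarcation points against target thresholds. This is not an Anritsu API; the KPI names, target values, and measurements are assumptions for illustration, and real targets come from the operator's SLA and the FEC limits of the optics in use.

```python
# Illustrative only: assumed KPI thresholds for a high-speed Ethernet DCI span.
KPI_TARGETS = {
    "pre_fec_ber": 2.4e-4,    # assumed pre-FEC bit error ratio ceiling
    "frame_loss_ratio": 0.0,  # expect no errored frames after FEC
    "latency_us": 50.0,       # assumed one-way latency budget for the span
}

# Hypothetical demarcation points and readings (values are made up).
measurements = {
    "dc-a-handoff":        {"pre_fec_ber": 1.1e-5, "frame_loss_ratio": 0.0, "latency_us": 32.0},
    "metro-provider-edge": {"pre_fec_ber": 3.0e-4, "frame_loss_ratio": 0.0, "latency_us": 41.0},
}

for point, kpis in measurements.items():
    for name, value in kpis.items():
        status = "PASS" if value <= KPI_TARGETS[name] else "FAIL"
        print(f"{point}: {name}={value:g} {status}")
```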


"Interoperability has become a very hot topic. While vendors follow standards, there are many ways to manipulate the registers when it comes to optics. And this goes for cables, as well. The importance of interoperability is knowing that you can cover a variety of different customer deployments and multi-vendor configurations."
Daniel Gonzalez, Anritsu

With multi-access edge computing and network virtualization, data center providers can maintain different SLAs for different applications.

Multi-access edge computing (MEC) refers to computing resources that are interconnected over a network, but distributed as close to the end users as the edge network allows. One key to making MEC work is splitting network resources into multiple virtual networks. While the physical computing resources remain the same, the Ethernet traffic is split into multiple logical networks.

As a result, it is possible to run virtually independent networks over the same hardware. Each of these networks can have its own SLA, which is managed independently. For example, one network slice might support streaming real-time video, while another supports gaming or real-time analytics for autonomous vehicles.

The KPIs that are important in a MEC environment include latency, as well as metrics related to network synchronization over Ethernet transport, such as time error measurement for Precision Time Protocol (PTP). These KPIs can determine whether each virtual network instance is meeting its service level agreement. Without metrics, it is impossible to guarantee the customer experience.

Figure 9: Ethernet MEC KPI for 5G Applications
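To make the per-slice idea concrete, the sketch below defines SLA targets for a few hypothetical network slices and checks whether measured latency and PTP time error stay within them. The slice names, latency budgets, and time-error limits are assumptions for illustration, not values from the presentation.

```python
# Illustrative only: assumed per-slice SLA targets for a MEC deployment.
SLA_TARGETS = {
    "real-time-video":     {"latency_ms": 20.0, "ptp_time_error_ns": 1500},
    "cloud-gaming":        {"latency_ms": 10.0, "ptp_time_error_ns": 1500},
    "autonomous-vehicles": {"latency_ms": 5.0,  "ptp_time_error_ns": 100},
}

def slice_meets_sla(slice_name: str, latency_ms: float, time_error_ns: float) -> bool:
    """Return True if the measured KPIs are within the slice's SLA targets."""
    target = SLA_TARGETS[slice_name]
    return (latency_ms <= target["latency_ms"]
            and time_error_ns <= target["ptp_time_error_ns"])

print(slice_meets_sla("real-time-video", latency_ms=12.0, time_error_ns=900))    # True
print(slice_meets_sla("autonomous-vehicles", latency_ms=7.5, time_error_ns=80))  # False: latency over budget
```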
ADDITIONAL INFORMATION

• The Ethernet Alliance is a global community of end users, system vendors, and component suppliers with a mission to promote existing and emerging IEEE 802 Ethernet standards and accelerate industry standards adoption by demonstrating multi-vendor interoperability. The Ethernet Alliance expands the ecosystem that supports Ethernet development by facilitating interoperability testing, such as Plug Fests. It also provides interoperability assurance, collaborative interaction with industry organizations, global outreach across its membership, and thought leadership.

• IEEE 802.3 2020 Ethernet Bandwidth Assessment. Download this report from the IEEE 802.3 website.

• Ethernet Alliance Technology Exploration Forum (TEF) 2021. Recordings and content from this event are accessible from the Ethernet Alliance website.


BIOGRAPHIES

Kent Lusted
Principal Engineer, Intel; Co-chair, Ethernet Alliance High Speed Networking Subcommittee

Kent Lusted is a Principal Engineer focused on Ethernet PHY Standards within Intel's Ethernet Products Group. He won an Intel Achievement Award in 2002 for his contributions toward delivering the world's first client and dual-port server Gigabit Ethernet controllers. Since 2012, he has been an active contributor and member of the IEEE 802.3 standards development leadership team and is currently the Vice-Chair of the IEEE 802.3ck 100G SERDES electrical interfaces Task Force. Kent is also co-chair of the Ethernet Alliance Higher-Speed Networking Subcommittee, which hosts interoperability events to verify new designs and validate the specifications used to create new products and solutions.

Daniel Gonzalez
Business Development Manager, Anritsu Co.

Daniel Gonzalez possesses over 20 years' experience in digital and optical transport testing, development, training, and execution, spanning technologies including TDM, SONET, OTN, ATM, Carrier Ethernet, and Physical Layer Signal Integrity.

As a Business Development Manager for Anritsu Company, Daniel is responsible for providing technical support to sales, marketing, and customers in North and South America. Daniel holds a B.S. in Telecommunications Management from DeVry University, is a member of the OIF (Optical Internetworking Forum) Networking and Operations Working Group (WG), the IEEE Communications Society (ComSoc), and the Ethernet Alliance, and holds a Personal Certification for MEF (Metro Ethernet Forum) Carrier Ethernet 2.0. Several of his articles have been published in Lightwave, ISE Magazine, Mission Critical, and Pipeline.

© 2022 Endeavor Business Media. All rights reserved.
