Lesson 268 Workbook
By Orhan Ergun
Evolving Technologies by Orhan Ergun www.orhanergun.net
SD-WAN – 5 Common Elements:
• Edge
• Controller
• Gateway
• Orchestrator
• Web Portal
SD-WAN Components – Common Architectural Elements – SD-WAN Edge
• It acts as the security-policy enforcer and conducts WAN optimization tasks that
include data deduplication, compression, and packet buffering
• It also creates and removes encrypted tunnels over underlay networks, whether it’s
a wired or wireless connection
SD-WAN Components – Common Architectural Elements – SD-WAN Edge
• Since SD-WAN Edges often connect to public Internet WANs, they would include,
at a minimum, some NAT and firewall capabilities
• Most of the time, the actual cost of the deployment comes from the SD-WAN Edges, as
there are many more Edge devices in the network than centralized core devices
(Gateways, Controllers, Orchestrators)
SD-WAN Controller
• The SD-WAN Controller centralizes management of the SD-WAN Edges and the SD-WAN
Gateways
• The SD-WAN Controller provides physical or virtual device management for all
SD-WAN Edges and SD-WAN Gateways associated with the controller
• This includes, but is not limited to, configuration and activation, IP address
management, and pushing down policies onto SD-WAN Edges and SD-WAN
Gateways
SD-WAN Controller
• The SD-WAN Controller maintains connections to all SD-WAN Edges and SD-WAN
Gateways to identify the operational state of SD-WAN tunnels across different
WANs and retrieve QoS performance metrics for each SD-WAN tunnel
• For example, the Service Orchestrator is responsible for configuring the end-to-end
SD-WAN managed service between SD-WAN Edges and SD-WAN Gateways
over one or more underlay WANs, e.g., Internet and MPLS, and for setting up application-
based forwarding over WANs based on security, QoS, or business/intent-based
policies
SD-WAN Gateway
• The SD-WAN Gateway is a special case of an SD-WAN Edge that also enables sites
interconnected via the SD-WAN to connect to other sites interconnected via
alternative VPN technologies, e.g., CE or MPLS VPNs
SD-WAN Gateway
• There are two ways to deliver an SD-WAN service to sites connected via another
VPN service.
• One way requires an SD-WAN Edge to be placed at each subscriber site
connected to the VPN service so SD-WAN tunnels can be created over the VPN
SD-WAN Gateway
• This approach enables sites interconnected via SD-WAN and other VPN
technology domains to intercommunicate
SD-WAN Gateway
• The other approach, using an SD-WAN Gateway, does not require SD-WAN Edges to be
placed at each VPN site to achieve interconnectivity
• The MSP or CSP typically integrates the Subscriber Web Portal for the SD-WAN
managed service into their existing customer portal used for other managed
services
SD-WAN Key Characteristics
1. The ability to support multiple connection types, such as MPLS, last-mile fiber
optical networks, or high-speed cellular networks,
e.g., 4G LTE and 5G wireless technologies
2. The ability to perform dynamic application-aware path selection, for load-sharing and
resiliency purposes
3. A simple interface that is easy to configure and manage
4. The ability to support VPNs and third-party services such as WAN optimization
controllers, firewalls, and web gateways
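Characteristic 2 above, dynamic application-aware path selection, can be sketched in a few lines. This is a hedged, illustrative example: the function name, loss threshold, and per-path metrics are assumptions made for the sketch, not any vendor's actual logic.

```python
def select_path(app_class, paths):
    """Pick the best underlay path for an application class.

    paths: dict of path name -> {"latency_ms": ..., "loss_pct": ...}
    """
    if app_class == "voice":
        # Delay-sensitive traffic: lowest latency among paths meeting a loss bound
        candidates = {n: m for n, m in paths.items() if m["loss_pct"] < 1.0}
        return min(candidates, key=lambda n: candidates[n]["latency_ms"])
    # Bulk traffic: loss matters first, then latency
    return min(paths, key=lambda n: (paths[n]["loss_pct"], paths[n]["latency_ms"]))

paths = {
    "mpls":     {"latency_ms": 20, "loss_pct": 0.10},
    "internet": {"latency_ms": 35, "loss_pct": 0.05},
    "lte":      {"latency_ms": 60, "loss_pct": 2.00},
}
print(select_path("voice", paths))  # mpls: lowest latency within the loss bound
print(select_path("bulk", paths))   # internet: lowest loss
```

A real SD-WAN Edge would feed this kind of decision with live probe measurements per tunnel rather than static numbers.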
SD-WAN – Different Transport Mechanisms
SD-WAN – Dynamic Path Selection
• This feature ensures traffic uses the best path depending on the business need,
such as mission-critical and delay-sensitive applications
SD-WAN – Dynamic Path Selection
• Because most SD-WAN vendors offer IPsec, it’s common to think that SD-WANs
are inherently secure
SD-WAN – WAN Optimization – Security and Other Services
• For example, you still need stateful firewall capabilities between the public
Internet and your WAN edge device to grant or deny access
SD-WAN – WAN Optimization – Security and Other Services
Source : VERSA
SD-WAN – WAN Optimization – Security and Other Services
• Since every branch constitutes a WAN edge with exposure to the Internet,
you may need all these capabilities at each branch site!
SD-WAN – WAN Optimization – Deduplication and Compression
• This means the data set must consist of multiple identical files or files that
contain a portion of data identical to content found in other files
SD-WAN – WAN Optimization – Deduplication and Compression
• Compressing large files into smaller ones allows users to store more data, and it
also makes data transmission much quicker and easier
• Examples of common file level compression that we use in our day-to-day lives
include MP3 audio and JPG image files
SD-WAN – WAN Optimization – How Does Data Compression Work?
• You might notice that some letters appear more than others - A appears about 2x as
much as B and C, and the other letters don't appear at all
• Using that information, you can choose an encoding that represents the characters in the
string with less information, e.g., A may be encoded using binary 0, while B and C are
assigned 10 and 11, respectively. If you were originally using 8 bits per character, that is a
big saving
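The encoding described above can be demonstrated directly. The code below applies the example codes (A → 0, B → 10, C → 11) to a short string and compares the result against a fixed 8 bits per character; the sample string is chosen for illustration.

```python
# Variable-length codes from the example: shorter codes for frequent characters
codes = {"A": "0", "B": "10", "C": "11"}

def encode(text, codes):
    """Concatenate the code word for each character into one bit string."""
    return "".join(codes[ch] for ch in text)

text = "AABABCAA"          # A appears about twice as often as B and C
bits = encode(text, codes)
print(bits)                 # 00100101100
print(len(bits), "bits instead of", 8 * len(text), "bits fixed-width")
```

Because no code word is a prefix of another, the bit string can be decoded unambiguously; this is the idea behind Huffman coding.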
SD-WAN – WAN Optimization – How Does Data Compression Work?
• Another encoding scheme is run-length encoding
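Run-length encoding, mentioned above, can be sketched in a few lines: runs of the same character are replaced by (character, count) pairs.

```python
def rle_encode(text):
    """Replace each run of identical characters with a (char, count) pair."""
    out = []
    i = 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1
        out.append((text[i], j - i))
        i = j
    return out

def rle_decode(pairs):
    """Reverse the encoding: expand each pair back into a run."""
    return "".join(ch * n for ch, n in pairs)

data = "AAAABBBCCD"
encoded = rle_encode(data)
print(encoded)                     # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
assert rle_decode(encoded) == data  # lossless round trip
```

RLE only pays off on data with long runs (e.g., simple images); on random text it can even grow the data.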
SD-WAN – Forward Error Correction
• Sometimes, it will be impossible for the receiver to communicate back with the
sender to check for errors in the received packets
SD-WAN - Forward Error Correction
• Some algorithms are made for this kind of situation, for example multiple-receiver
communication
• They use forward error correction, which is based on adding redundant bits to the
data bit stream
SD-WAN - Forward Error Correction
• A simple example of forward error correction is the (3,1) repetition code. In this example, each bit of data is sent three times,
and the value or meaning of the message is decided by majority vote: the most frequently received bit value is assumed to be the
value of the message
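The (3,1) repetition code described above is easy to sketch: each bit is transmitted three times and decoded by majority vote, so any single flipped bit per triple is corrected.

```python
def fec_encode(bits):
    """(3,1) repetition code: send every bit three times."""
    return [b for b in bits for _ in range(3)]

def fec_decode(coded):
    """Majority vote over each triple of received bits."""
    decoded = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        decoded.append(1 if sum(triple) >= 2 else 0)
    return decoded

message = [1, 0, 1]
coded = fec_encode(message)          # [1, 1, 1, 0, 0, 0, 1, 1, 1]
coded[1] = 0                         # simulate one bit flipped in transit
assert fec_decode(coded) == message  # the single error is corrected
```

The cost is a 3x bandwidth overhead, which is why practical FEC schemes use more efficient codes than plain repetition.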
SD-WAN - Good to have capabilities with SD-WAN
• Some of these features might be good to have for some companies and must-haves for
others, depending on the application requirements and constraints
• Internet connectivity is one of the cheapest and most widely available bandwidth
options
Source : WanDynamics
SD-WAN - Quality of Service
• Some SD-WAN vendors market their Forward Error Correction (FEC) and Dynamic
Path Selection/Control features as QoS, but they are not QoS mechanisms
• Some SD-WAN vendors also support Traffic Shaping, Rate Limiting, and Policing as QoS
features
SD-WAN - Quality of Service
• QoS simply means prioritizing some traffic and penalizing other traffic!
SD-WAN - Zero-touch Deployment/Provisioning
• With this capability, IT teams can bring up services without the need to interact
with physical equipment, resulting in fast and efficient deployment of services
• ZTP can be found in switches, wireless access points, SD-WAN nodes, NFV
platforms, firewalls, and many other networking devices
• Not all ZTP implementations are truly ‘Zero Touch’ though, so sometimes you will
also come across terms like ‘minimal touch provisioning’ or ‘one touch
provisioning’
SD-WAN - Zero-touch Deployment/Provisioning
• More and more vendors are offering a cloud service to support the configuration
and ZTP process
• Cisco Meraki, Riverbed, Citrix and Juniper Networks are among those
• All that is required is registering the serial numbers of the devices purchased and
the vendor will ensure the devices are correctly registered and visible under your
management portal account
• The device can then be fully configured and managed via the cloud
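The registration flow described above can be sketched as follows. This is a minimal, hypothetical model of ZTP: the registry dict stands in for the vendor's cloud activation service, and the serial numbers and config fields are invented for illustration.

```python
# Hypothetical registry: serial numbers registered in the vendor's cloud portal,
# each mapped to the configuration to be pushed at first boot.
REGISTERED = {"SN12345": {"hostname": "branch-01", "wan1": "dhcp"}}

def zero_touch_provision(serial, registry):
    """On first boot the device reports its serial; if the serial was
    registered, the matching configuration is returned (pushed down)."""
    return registry.get(serial)  # None for an unknown device

config = zero_touch_provision("SN12345", REGISTERED)
print(config)                                   # {'hostname': 'branch-01', 'wan1': 'dhcp'}
print(zero_touch_provision("SN999", REGISTERED))  # None: not registered
```

Real implementations add mutual authentication and secure transport around this lookup, which is where "minimal touch" variants differ.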
SD-WAN - Global Coverage
• If your business requires international connectivity, you may need to analyze the
provider's point-of-presence (POP) coverage to understand the effect on
application performance
• Certain providers and vendors operate a significant global network presence that
includes specific POPs for both private and internet traffic
• SD-WAN features are focused on application performance, but latency and jitter
challenges can arise when deploying international services
SD-WAN Vendor POC Support
• The proof of concept for SD-WAN is an excellent way to understand and verify the
capability of an SD-WAN offering
• Some vendors offer demo hardware for a period of time, often with presales
resources to assist with the configuration
SD-WAN Cloud Connection
• Some SD-WAN products have the ability to program “cloud breakout” based on
applications, allowing direct access to trusted sites (like SalesForce.com), while
tunneling traffic to unknown sites to either cloud-based or centrally-based
inspection services
• Enterprises today face major user experience problems for SaaS applications
because of networking problems
• The centralized Internet exit architecture can be inefficient and results in poor
SaaS performance
• And branch sites are running out of capacity to handle Internet traffic which is a
concern because more than 50% of branch traffic is destined to the cloud
SD-WAN Cloud Connection- SAAS
• One benefit of using cloud computing services is that firms can avoid the upfront
cost and complexity of owning and maintaining their own IT infrastructure, and
instead simply pay for what they use, when they use it
Cloud Computing – General Characteristics
AGILITY
• The cloud gives you easy access to a broad range of technologies so that you can innovate faster
and build nearly anything that you can imagine
• You can quickly spin up resources as you need them–from infrastructure services, such as
compute, storage, and databases, to Internet of Things, machine learning, data lakes and
analytics, and much more
ELASTICITY:
• Instead, you provision the amount of resources that you actually need
• You can scale these resources up or down instantly to grow and shrink capacity
as your business needs change
Cloud Computing – General Characteristics
GLOBAL AVAILABILITY:
• With the cloud, you can expand to new geographic regions and deploy globally in
minutes
• This is generally true for the Public Cloud Deployments which we will see next
Cloud Computing
• Public clouds are the most common way of deploying cloud computing. The cloud
resources (like servers and storage) are owned and operated by a third-
party cloud service provider and delivered over the Internet
• The computing functionality may range from common services such as email,
apps and storage to the enterprise-grade OS platform or infrastructure
environments used for software development and testing
Cloud Computing – Public Cloud
• Lower costs - No need to purchase hardware or software, and you pay only for
the service you use
• No maintenance - Service provider provides the maintenance
• Near-unlimited scalability - On-demand resources are available to meet your
business needs
• High reliability - A vast network of servers ensures against failure
Cloud Computing – Public Cloud
• The total cost of ownership (TCO) can rise exponentially for large-scale usage,
especially for midsize to large enterprises
• Low visibility and control into the infrastructure, which may not suffice to meet
regulatory compliance.
Cloud Computing – Private Cloud
• Private Cloud refers to the cloud solution dedicated for use by a single
organization
• The computing resources are isolated and delivered via a secure private network,
and not shared with other customers
Cloud Computing – Private Cloud
• The infrastructure may not offer high scalability to meet unpredictable demands
if the cloud data center is limited to on-premise computing resources
Cloud Computing – Hybrid Cloud
• Often called “the best of both worlds,” hybrid clouds combine on-premises
infrastructure, or private clouds, with public clouds so organizations can get the
advantages of both
Cloud Computing – Hybrid Cloud
• It is cost effective with the ability to scale to the public cloud, you pay for extra
computing power only when needed
• Example: Oracle Cloud Platform (OCP), Red Hat OpenShift, Google App Engine
Cloud Computing – Different Cloud Services - SaaS
• This service lets users connect to applications through the Internet on
a subscription basis
• Because hardware, software and staffing costs for AI can be expensive, many
vendors are including AI components in their standard offerings or providing
access to artificial intelligence as a service (AIaaS) platforms
• Amazon AI
• IBM Watson Assistant
• Microsoft Cognitive Services
• Google AI
Edge Computing
• In simpler terms, edge computing means running fewer processes in the cloud
and moving those processes to local places, such as a user's computer, an IoT
device, or an edge server
Edge Computing
• Akamai, CloudFront, Cloudflare, and many other Edge Computing providers offer edge
services such as WAF, edge applications, serverless computing, DDoS protection,
edge firewalls, etc.
Fog Computing
• Both Fog and Edge Computing are concerned with performing computing locally
rather than pushing it to the Cloud
• Most Fog Computing use cases come from IoT deployments
• Delivering digital TV services over IP networks by using copper, fiber, wireless and
HFC infrastructure
• Content Delivery Network companies replicate content caches close to large user
populations
• They don’t provide Internet access or Transit Service to customers or ISPs, but
distribute the content of the Content Providers
• Let’s first understand some fundamental businesses such as Content Provider and
OTT (Over the Top) Providers
CDN – What does Content Provider do?
• These two terms are used in the networking communities in the standard bodies
(IETF, IEEE etc.) and at the events such as NOG, RIPE and IETF meetings
CDN – What does Content Provider do?
• Search companies (Bing, Google, Yandex, Baidu), TV stations (ABC News, BBC,
CNN), video providers (YouTube, Netflix), online libraries, and E-Commerce
websites are all Content Providers. Content Providers are commonly referred to as
OTT (Over the Top) Providers
CDN – What does Content Provider do?
• This is a big debate between Service Providers and Content Providers. All of these
regulations lead the Content Providers to become the largest companies in the
world
CDN – What does Content Provider do?
• Google, Netflix, Facebook, Microsoft, and almost all other big Content Providers
have their own CDN networks and deploy their cache engines widely inside ISP
and/or IXP networks to be closer to their customers
• If Google were an ISP, it would be the second largest carrier on the planet
CDN – What is OTT?
• So, when you hear Over the Top Providers, they are Content Providers. Content
can be any application, any service such as Instant messaging services (Skype,
WhatsApp), streaming video services (YouTube, Netflix, Amazon Prime), voice
over IP and many other voice or video content type
CDN – Advantages of CDN
• More popular content is cached locally, and the least popular content can be
served from the origin
CDN – Advantages of CDN
• Before CDNs, content was served from the source location, which increased
latency and thus reduced throughput
• Content was delivered from the central site; user requests had to reach the
central site where the source was located
CDN – Before CDN – After CDN
BEFORE CDN AFTER CDN
CDN – Who is doing CDN Business?
• Amazon, Akamai, Limelight, Fastly and Cloudflare are the largest CDN providers
which provide services to different Content Providers all over the world
• Also, some major Content Providers such as Google, Facebook, Netflix, etc. prefer
to build their own CDN infrastructures and become large CDN providers
CDN – Where Do CDN Providers Deploy Their Servers?
• These servers are located inside Service Provider networks and at Internet
Exchange Points
• They have thousands of servers and they serve huge amount of Internet content.
CDNs are highly distributed platforms
CDN – How it works?
• To improve user experience and lower transmission costs, large companies set up
servers with copies of data in strategic geographic locations around the world
• This is called a CDN, and these servers are called edge servers, as they are closest
on the company’s network to the end-user
CDN – How it works?
• Edge servers are proxy caches that work in a manner similar to the browser
caches
• When a request comes into an edge server, it first checks the cache to see if the
content is present
• If the content is in cache and the cache entry hasn’t expired, then the content is
served directly from the edge server
CDN – How it works?
• Redirecting end users to the optimal edge server based on attributes
such as network utilization, end-user perceived latency, and server load is a critical
issue, and it is called Request Routing
• A server selection algorithm may use a set of metrics, such as network utilization,
user perceived latency, network distance, and server load
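The metric-based server selection described above can be sketched as a weighted score. The weights and sample servers below are illustrative assumptions for the sketch, not any CDN's actual algorithm.

```python
# Illustrative weights: latency dominates, load matters, raw distance barely does
WEIGHTS = {"distance_km": 0.001, "latency_ms": 1.0, "load_pct": 0.5}

def select_server(servers, weights=WEIGHTS):
    """Return the server name with the lowest weighted metric score."""
    def score(metrics):
        return sum(weights[k] * metrics[k] for k in weights)
    return min(servers, key=lambda name: score(servers[name]))

servers = {
    "fra": {"distance_km": 300, "latency_ms": 12, "load_pct": 85},
    "ams": {"distance_km": 500, "latency_ms": 15, "load_pct": 20},
}
print(select_server(servers))  # ams: the nearer fra loses because of its load
```

This shows why "nearest" alone is not enough: the closer, lower-latency server can still be the worse choice once load is weighed in.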
CDN – Request Routing – Server Selection Mechanism
• Because the nearest server is commonly considered to best serve end users, end
user location is typically used as the primary server selection mechanism in
request routing
• In practice, most CDN servers simply obtain the end user location from the source
IP address of the incoming CDN request
CDN – Request Routing – Server Redirection Mechanisms
• The mechanism informs the end user about the optimal surrogate server selected
by the server selection mechanism
• HTTP Redirection, URL Rewriting and Anycast Methods are the alternative
methods to DNS Based approach in Server Redirection
CDN – Request Routing – Server Redirection Mechanisms –
HTTP Redirection
• HTTP redirection allows a web server to propagate the server selection result to
the end user via HTTP headers
• Hence, the end user can be redirected to the optimal server by following the
response generated from the Web server
CDN – Request Routing – Server Redirection Mechanisms –
HTTP Redirection
• The weakness of HTTP redirection lies in its reliance on server-side support for
redirecting the client
• In URL rewriting, the origin server rewrites the URL links in the pages it generates
in order to indicate the best server
• Sites using CDNs configured with DNS Unicast use recursive DNS queries which
redirect visitors to their closest node
• This process involves setting up an alternate DNS record for their domain utilizing
the DNS CNAME record type
CDN – Request Routing – Server Redirection Mechanisms – DNS
Based Redirection
• When a website visitor types in the URL or clicks on a link which references the
primary web server, the user’s configured DNS provider then redirects them to
the CDN service which hosts the site’s content
• Once the user reaches the CDN, they are then routed to the closest node which is
determined by learning their location based on their IP address
CDN – Request Routing – Server Redirection Mechanisms – DNS
Based Redirection
• When the browser makes a DNS request for a domain name that is handled by a
CDN, the server handling DNS requests for the domain name looks at the
incoming request to determine the best set of servers to handle it
• At its simplest, the DNS server does a geographic lookup based on the DNS
resolver's IP address and then returns the IP address of an edge server that is
physically closest to that area
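The lookup described above can be sketched as follows. The prefix-to-region table and edge addresses use documentation IP ranges and are purely illustrative; real CDNs use full geo-IP databases.

```python
# Illustrative mapping: resolver IP prefix -> region -> edge server address
RESOLVER_REGION = {"203.0.113.": "apac", "198.51.100.": "europe"}
EDGE_BY_REGION = {"apac": "192.0.2.10", "europe": "192.0.2.20"}

def resolve_edge(resolver_ip):
    """Answer a DNS query with the edge server for the resolver's region."""
    for prefix, region in RESOLVER_REGION.items():
        if resolver_ip.startswith(prefix):
            return EDGE_BY_REGION[region]
    return EDGE_BY_REGION["europe"]  # arbitrary default region

print(resolve_edge("203.0.113.53"))   # 192.0.2.10: the APAC edge
print(resolve_edge("198.51.100.7"))   # 192.0.2.20: the European edge
```

Note that the lookup keys on the resolver's IP, not the client's, which is exactly the limitation of DNS-based redirection when a client uses a resolver in another region.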
CDN – Request Routing – Server Redirection Mechanisms – DNS
Based Redirection
• Companies may optimize their CDNs in other ways as well, for instance,
redirecting to a server that is cheaper to run or one that is sitting idle while
another is almost at capacity
CDN – DNS Based Redirection Mechanism Problem
• With this approach, the CDN Authoritative DNS Server responds based on the IP
address of the resolver, not that of the actual client
• For example, a client based in Australia could be configured to use a DNS resolver
based in Europe
• In this instance, once the visitor's request has reached the CDN platform, the
service will incorrectly assume the originating IP to be in Europe rather than
Australia, sending the client to the incorrect node (EDNS Client Subnet, which
comes with its own privacy issues, is offered as a solution)
CDN – Request Routing – Server Redirection Mechanisms –
Anycast
• In this approach, the same IP address is assigned to multiple servers located in a
distributed manner
CDN – Request Routing – Server Redirection Mechanisms –
Anycast
• When the client sends requests to the IP address, the requests will be routed to
the nearest server defined by the routing policy
• With this approach, content providers may lose some server selection flexibility.
Consider a scenario in which Anycast forwards requests to the nearest (yet
overloaded) server by simply respecting a distance-based routing policy
CDN – Request Routing – Server Redirection Mechanisms –
Anycast
• CDN service providers who configure their platform with Anycast set a single IP
address for all their nodes
• Unlike a DNS Based CDN Redirection, where every node has a unique IP address
and recursive DNS routes the client to the closest node, Anycast uses the Border
Gateway Protocol (BGP) to route clients using the natural network flow of the
Internet
CDN – Request Routing – Server Redirection Mechanisms –
Anycast
• Anycast uses this information to efficiently route traffic based on hop count,
ensuring the shortest traveling distance between the client and its final
destination
CDN – Request Routing – Server Redirection Mechanisms –
Anycast
• The primary difference is that with Anycast only a single IP address is
advertised by the CDN provider, whereas with the DNS-based approach each node
has a unique IP address
• This CDN routing approach uses the originating client IP instead of the IP of the
DNS resolver which ensures the CDN directs the client to the closest possible
node
CDN – Request Routing – Server Redirection Mechanisms –
Anycast
• Due to its architecture, Anycast offers some advantages over DNS-based request
routing
• Most importantly, due to its efficient use of network hops, it allows for faster
connectivity
• DNS-based request routing, though, offers more granular server selection criteria,
for example server load and POP bandwidth capacity, not only user latency to the
server
Wireless Local Area Network
Design
WLAN Architecture
• For many years, the conventional access point was a standalone WLAN device
where control, data and management planes of operation existed and operated
on the edge of the network architecture
• However, the most common industry term for the traditional access point is
autonomous AP
WLAN Architecture - Autonomous WLAN Architecture
• All configuration settings exist in the autonomous access point itself, and
therefore, the management plane resides individually in each autonomous AP
• All encryption and decryption mechanisms and MAC layer mechanisms also
operate within the autonomous AP
WLAN Architecture - Autonomous WLAN Architecture
• Access points operate as layer 2 devices; however, they still need a layer 3
address for connectivity to an IP network
• This model uses a central WLAN controller that resides in the core of the network
• In the centralized WLAN architecture, autonomous APs have been replaced with
controller-based access points, also known as lightweight APs or thin APs
WLAN Architecture - Centralized WLAN Architecture
• Management Plane: Access points are configured and managed from the WLAN
controller
• Control Plane: Adaptive RF, load balancing, roaming handoffs, and other
mechanisms exist in the WLAN controller
• Data Plane: The WLAN controller exists as a data distribution point for user traffic
WLAN Architecture - Centralized WLAN Architecture
• The encryption and decryption capabilities might reside in the centralized WLAN
controller or may still be handled by the controller-based APs, depending on the
vendor
Centralized WLAN Architecture - WLAN Controller
• AP Management
• WLAN Management
• User Management
• Device Monitoring
• VLANs
• Security Support
• Captive Portal
• Adaptive RF Spectrum Management
• Layer 3 Roaming Support
Centralized WLAN Architecture - WLAN Controller
There are two types of data-forwarding methods when using WLAN controllers:
• Centralized Data Forwarding: Where all data is forwarded from the AP to the
WLAN controller for processing
• It may be used in many cases, especially when the WLAN controller manages
encryption and decryption or applies security and QoS policies
• Distributed Data Forwarding: Where the AP forwards user traffic directly onto the
local network instead of tunneling it to the controller
Centralized WLAN Architecture - WLAN Controller
• A recent trend has been to move away from the centralized WLAN controller
architecture toward a distributed architecture
• Some WLAN vendors, such as Aerohive Networks, have designed their entire
WLAN system around a distributed architecture
Distributed WLAN Architecture
• Some of the WLAN controller vendors now also offer a distributed WLAN
architecture solution, in addition to their controller-based solution
• In these systems, cooperative access points are used, and control plane
mechanisms are enabled in the system with inter-AP communication via
cooperative protocols
Distributed WLAN Architecture
Source: arubanetworks
Distributed WLAN Architecture
• Coverage Design
• Roaming Design
• Channel Design
• Capacity Design
Coverage Design
• When designing a WLAN, probably the first thing that comes to mind will always
be the coverage area or zone from which Wi-Fi clients can communicate
• The primary coverage goals for any WLAN are to provide high data rate
connectivity for connected clients and to provide for seamless roaming
Coverage Design
• The exact opposite should be considered during the design phase. A proper
WLAN coverage design should be based on the perspective of the Wi-Fi clients
• Therefore, a quality received signal for the client is needed to provide high data
rate connectivity
Coverage Design – Received Signal
• So what exactly is considered a quality received signal? As shown in the table,
depending on the proximity between an AP and a Wi-Fi client, an 802.11 radio
might receive an incoming signal anywhere between –30 dBm and the noise floor
Coverage Design – Received Signal
• A received signal of –70 dBm or higher usually guarantees that a client radio will
use one of the highest data rates that the client is capable of!
Coverage Design - Signal-to-Noise Ratio (SNR)
• Another reason for planning for –70 dBm coverage is because the received signal
of –70 dBm is usually well above the noise floor
• The SNR is not actually a ratio; it is simply the difference, in decibels, between
the received signal and the background noise (noise floor)
• If an 802.11 radio receives a signal of –70 dBm and the noise floor is measured at
–95 dBm, the difference between the received signal and the background noise is
25 dB. So, the SNR is 25 dB
Coverage Design - Signal-to-Noise Ratio (SNR)
• In most instances, a received signal of –70 dBm will be 20 dB or higher above the
noise floor. In most environments, a –70 dBm signal ensures high rate
connectivity, and the 20 dB SNR ensures data integrity
Coverage Design - Signal-to-Noise Ratio (SNR)
• If the amplitude of the noise floor is too close to the amplitude of the received
signal, data corruption will occur and result in layer 2 retransmissions
• To ensure that frames are not corrupted due to a low SNR, most WLAN vendors
recommend a minimum SNR of 20 dB for data WLANs and a minimum SNR of
25 dB for WLANs that require voice-grade communications
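The SNR arithmetic above is just a subtraction in dB, which can be captured in a small helper together with the vendor minimums quoted above:

```python
def snr_db(received_dbm, noise_floor_dbm):
    """SNR is the difference in dB between signal and noise floor."""
    return received_dbm - noise_floor_dbm

# The example from the text: -70 dBm signal over a -95 dBm noise floor
print(snr_db(-70, -95))  # 25 dB

def meets_vendor_minimum(received_dbm, noise_floor_dbm, voice=False):
    """Apply the common vendor guidance: 20 dB SNR for data WLANs,
    25 dB for voice-grade WLANs."""
    minimum = 25 if voice else 20
    return snr_db(received_dbm, noise_floor_dbm) >= minimum

print(meets_vendor_minimum(-70, -95))              # True: data-grade link
print(meets_vendor_minimum(-70, -92, voice=True))  # False: only 22 dB SNR
```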
Coverage Design - VoWifi
• Therefore, when you are designing for voice-grade WLANs, a –65 dBm or
stronger signal is recommended so that the received signal is further above the
noise floor
Coverage Design - VoWifi
• One VoWiFi vendor may state that a –67 dBm signal is sufficient, whereas
another vendor may suggest an SNR as high as 28 dB
• When you are designing for voice, SNR is the most important RF metric
Coverage Design – Dynamic Rate Switching
• Will a client device be able to communicate with an AP if the signal drops below –
70 dBm?
• The answer is yes, because most client devices can still decode an 802.11
preamble from received signals that are as low as only 4 dB above the noise floor
• As mobile client radios move away from an access point, they will shift down to
lower-bandwidth capabilities by using a process known as dynamic rate switching
(DRS)
Coverage Design – Dynamic Rate Switching
• Data rate transmissions between the access point and the client stations will shift
down or up, depending on the quality of the signal between the two radios
Coverage Design – Dynamic Rate Switching
• There is a direct correlation between signal quality and distance from the AP
• As mobile client stations move farther away from an access point, both the AP
and the client will shift down to lower rates that require a less complex
modulation and coding scheme (MCS)
Coverage Design – Dynamic Rate Switching
• Dynamic rate switching (DRS) is also referred to as dynamic rate shifting, dynamic
rate selection, adaptive rate selection, and automatic rate selection
• All these terms refer to a method of speed fallback on a Wi-Fi radio receiver (Rx)
as the incoming signal strength and quality from the transmitting Wi-Fi radio
decreases
Coverage Design – Dynamic Rate Switching
• The objective of DRS is upshifting and downshifting for rate optimization and
improved performance
• From the client’s perspective, the lower data rates will provide larger concentric
zones of coverage than the higher data rates
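DRS can be sketched as a lookup from received signal strength to data rate, shifting down as the client moves away from the AP. The RSSI thresholds and rates below are illustrative assumptions, not any vendor's actual DRS table.

```python
# Illustrative DRS table: (minimum RSSI in dBm, data rate in Mbps),
# ordered from strongest signal / highest rate downward
RATE_TABLE = [
    (-65, 300),
    (-70, 150),
    (-75, 72),
    (-80, 24),
]

def select_rate(rssi_dbm):
    """Return the highest rate whose RSSI threshold the signal meets."""
    for min_rssi, rate in RATE_TABLE:
        if rssi_dbm >= min_rssi:
            return rate
    return 6  # fall back to a low basic rate near the cell edge

print(select_rate(-62))  # 300 Mbps close to the AP
print(select_rate(-78))  # 24 Mbps farther out
print(select_rate(-90))  # 6 Mbps at the fringe
```

This also illustrates why the lower rates form larger concentric coverage zones: weaker signals still map to some rate, just a slower one.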
Coverage Design – Transmit Power
• A big factor that will affect both WLAN coverage and roaming is the transmit
power of the access points
• Although most indoor APs may have full transmit power settings as high as 100
mW, they should rarely be deployed at full power
• Running at full power extends the effective range of the access point; however,
designing WLANs strictly for range is an outdated concept
Coverage Design – Transmit Power
• APs at maximum transmit power will result in oversized coverage and not meet
your capacity needs
Coverage Design – Transmit Power
• Access points deployed at full transmit power in an indoor environment will also
increase the chance of co-channel interference, which can result in unnecessary
medium contention overhead
• APs at full power also increase the chance of sticky clients which negatively
impact roaming
Coverage Design – Transmit Power
• Typical indoor WLAN deployments are designed with the APs set at about one-
fourth to one-third maximum transmit power
• Higher user density environments may require that the AP transmit power be set
at the lowest setting of 1 mW
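Transmit powers above are quoted in milliwatts, while RF planning usually works in dBm; the conversion is dBm = 10 × log10(mW):

```python
import math

def mw_to_dbm(mw):
    """Convert transmit power from milliwatts to dBm."""
    return 10 * math.log10(mw)

print(round(mw_to_dbm(100)))  # 20 dBm: a typical full-power indoor AP
print(round(mw_to_dbm(25)))   # 14 dBm: roughly one-fourth of full power
print(round(mw_to_dbm(1)))    # 0 dBm: the lowest high-density setting
```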
Coverage Design – Transmit Power
• One heavily debated topic is the concept of a balanced power link between an AP
and a client
• In simpler terms, this means the transmit power settings of the AP and the client
are the same
• Very often WLAN clients transmit at higher power levels than indoor access
points
Coverage Design – Transmit Power
• The transmit power of many indoor APs may be 10 mW or less due to high-
density design needs
• However, most clients, such as smartphones and tablets, may transmit at a fixed
power of 15 mW or 20 mW
• Because clients often transmit at a higher power than the APs and because clients
are mobile, co-channel interference (CCI) is often caused by a power mismatch
Roaming Design
• Roaming is the method by which client stations move between RF coverage cells
in a seamless manner
• Roaming is one of the most common issues you will need to troubleshoot in a
WLAN
• Client stations, not the access point, decide whether or not the client roams
between access points
• Some vendors may involve the access point or WLAN controller in the roaming
decision, but ultimately the client station initiates the roaming process with a
reassociation request frame
Roaming Design
• The method by which a client station decides to roam is a set of proprietary rules
determined by the manufacturer of the 802.11 radio, usually defined by a
roaming trigger threshold
Roaming Design
• Roaming thresholds usually involve signal strength, SNR, and bit-error rate
• As the client station communicates on the network, it continues to look for other
access points via probing and listening on other channels and will hear received
signals from other APs
Roaming Design
• As the received signal from the original AP gets weaker and a station hears a
stronger signal from another known access point, the station will initiate the
roaming process
Roaming Design
• However, other metrics, such as SNR, error rates, and retransmissions, may also
have a part in what triggers a client to roam
• SNR is a metric used by some WLAN clients to trigger roaming events as well as
dynamic rate switching
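A roaming trigger of this kind can be sketched roughly as follows. The thresholds are hypothetical; real 802.11 clients use proprietary, vendor-specific rules:

```python
def should_roam(current_rssi, candidate_rssi, trigger=-70, hysteresis=5):
    """Hypothetical roaming rule: roam only when the current AP's
    signal (in dBm) drops below the trigger threshold AND a candidate
    AP is at least `hysteresis` dB stronger."""
    return current_rssi < trigger and candidate_rssi >= current_rssi + hysteresis

should_roam(-75, -65)   # weak current AP, much stronger candidate: True
should_roam(-60, -55)   # current AP still above the trigger: False
```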
Roaming Design
• One major consideration when designing a WLAN is what happens when client
stations roam across layer 3 boundaries
• The roaming is seamless at layer 2, but user VLANs are tied to different subnets
on either side of the router
• As a result, the client station will lose layer 3 connectivity and must acquire a new
IP address
Roaming Design – Layer 3 Roaming
• Any connection-oriented applications that are running when the client re-
establishes layer 3 connectivity will have to be restarted
• For example, a VoIP phone conversation would disconnect in this scenario, and
the call would have to be reestablished
Roaming Design – Layer 3 Roaming
• Because 802.11 wireless networks are usually integrated into pre-existing wired
topologies, crossing layer 3 boundaries is often a necessity, especially in large
deployments
• Layer 3 roaming solutions based on Mobile IP use some type of tunneling method
and IP header encapsulation to allow packets to traverse between separate layer
3 domains, with the goal of maintaining upper-layer communications
Roaming Design – Layer 3 Roaming – Mobile IP
• The mobile client must register its home address with a device called a home
agent (HA)
Roaming Design – Layer 3 Roaming – Mobile IP
• The home agent is a single point of contact for a client when it roams across layer
3 boundaries
• The HA shares client MAC/IP database information in a table, called a home agent
table (HAT) with another device, called the foreign agent (FA)
Roaming Design – Layer 3 Roaming – Mobile IP
• In this example, the foreign agent is another access point that handles all Mobile
IP communications with the home agent on behalf of the client
• When the client roams across layer 3 boundaries, the client is roaming to a
foreign network where the FA resides
• The FA uses the HAT tables to locate the HA of the mobile client station
• Any traffic that is sent to the client’s home address is intercepted by the HA and
sent through the Mobile IP tunnel to the FA
• The FA then delivers the tunneled traffic to the client, and the client is able to
maintain connectivity using the original home address
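The HA/FA forwarding logic described above can be sketched very roughly. The names and addresses below are invented, purely for illustration:

```python
# Home agent table (HAT): maps a roamed client's home address to the
# foreign agent currently serving it (invented names and addresses)
home_agent_table = {
    "10.1.1.50": "fa-ap2",
}

def ha_forward(dst_ip, payload):
    """Home agent logic: intercept traffic for a roamed client and
    tunnel it to the foreign agent; otherwise deliver locally."""
    fa = home_agent_table.get(dst_ip)
    if fa is None:
        return ("local", payload)
    # HA encapsulates the packet and sends it through the Mobile IP
    # tunnel; the FA decapsulates and delivers it to the client
    return ("tunnel", {"outer_dst": fa, "inner_dst": dst_ip, "data": payload})
```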
Roaming Design – Layer 3 Roaming – Mobile IP
• In our example, the Mobile IP tunnel is between two APs on opposite sides of a
router
• If the user VLANs exist at the edge of the network, tunneling of user traffic
occurs between access points that assume the roles of HA and FA
Roaming Design – Layer 3 Roaming – Mobile IP
• One controller functions as the home agent, while another controller functions
as the foreign agent
Roaming Design – Layer 3 Roaming
• Larger enterprise networks often have multiple user and management VLANs
linked to multiple subnets; therefore, a layer 3 roaming solution will be required
Channel Design
• Another key component of WLAN design is the selection of the proper channels
to be used among multiple APs in the same location
• When designing a wireless LAN, you need overlapping coverage cells in order to
provide for roaming
• However, the overlapping cells should not have overlapping frequencies, and in
the United States only channels 1, 6, and 11 should be used in the 2.4 GHz ISM
band to get the most available, non-overlapping channels
Channel Design - Adjacent Channel Interference
• If overlapping coverage cells also have frequency overlap from adjacent channels,
the transmitted frames will become corrupted, the receivers will not send ACKs,
and layer 2 retransmissions will significantly increase
Channel Design - 2.4 GHz Channel Reuse
• Once again, overlapping RF coverage cells are needed for roaming, but
overlapping frequencies must be avoided
• The only three channels that meet these criteria in the 2.4 GHz ISM band are
channels 1, 6, and 11 in the United States
Channel Design - 2.4 GHz Channel Reuse
• One of the most common mistakes many businesses make when first
deploying a WLAN is to configure multiple access points all on the same channel
• If all the APs are on the same channel, unnecessary medium contention overhead
occurs
• CSMA/CA dictates half-duplex communications, and only one radio can transmit
on the same channel at any given time
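Picking channels from the 1/6/11 set so that overlapping cells never share a frequency is essentially a graph-coloring exercise. A toy greedy sketch with an assumed topology:

```python
# Greedily assign 2.4 GHz channels so that no two neighboring
# (overlapping-coverage) APs share a channel
NON_OVERLAPPING = [1, 6, 11]

def assign_channels(neighbors):
    """neighbors: dict mapping each AP to the set of APs whose
    coverage cells overlap with it (a simple greedy graph coloring)."""
    plan = {}
    for ap in neighbors:
        used = {plan[n] for n in neighbors[ap] if n in plan}
        free = [ch for ch in NON_OVERLAPPING if ch not in used]
        # with more than three mutually overlapping cells, some CCI
        # is unavoidable and a channel must be reused
        plan[ap] = free[0] if free else NON_OVERLAPPING[0]
    return plan

plan = assign_channels({"ap1": {"ap2"}, "ap2": {"ap1", "ap3"}, "ap3": {"ap2"}})
```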
Channel Design - Co-Channel Interference
• Does an RF signal just stop at the edge of a coverage cell designed for –70 dBm
coverage?
• The answer is no; the RF signal continues to propagate, and it can be heard by
other 802.11 radios at a great distance
Channel Design - Co-Channel Interference
• An 802.11 radio will defer transmissions if it hears the PHY layer preamble
transmissions of any other 802.11 radio at a signal detect (SD) threshold just four
decibels (dB) or greater above the noise floor
• Any radio that hears another radio on the same channel will defer, which results
in medium contention overhead and delay
Channel Design - 5 GHz Channel Reuse
• Channel reuse patterns should also be used in the 5 GHz frequency bands
• If all the 5 GHz channels are legally available for transmissions, a total of 25
channels may be available for a channel reuse pattern at 5 GHz
Channel Design - 5 GHz Channel Reuse
• The more channels that are used, the greater the chance that CCI can be
prevented, including CCI that originates from client devices
Channel Design - 40 MHz Channel Design
• Many WLAN access point vendors require that channel bonding be manually
enabled because there is the potential for channel bonding to negatively impact
the performance of the WLAN
Channel Design - 40 MHz Channel Design
• One of the advantages of using 5 GHz instead of 2.4 GHz is that there are many
more 5 GHz 20 MHz channels that can be used in a reuse pattern
• Only three 20 MHz channels can be used in 2.4 GHz. The problem with using only
three 20 MHz channels is that there will always be some amount of co-channel
interference even though these channels are non-overlapping
Channel Design - 40 MHz Channel Design
• Medium contention overhead due to APs on the same 20 MHz channel can be
almost completely avoided in 5 GHz because there are more channels
• A 5 GHz channel reuse plan of eight or more 20 MHz channels will greatly
decrease co-channel interference and medium contention overhead
Channel Design - 40 MHz Channel Design
• If only eight 20 MHz channels are being used, then a four-channel 40 MHz
channel reuse pattern exists in 5 GHz
Channel Design - 40 MHz Channel Design
• Although the bandwidth is doubled for the 802.11n/ac radios, there will be an
increase of medium contention overhead because there are only four 40 MHz
channels, and access points and clients on the same 40 MHz channel will likely
hear each other
• The medium contention overhead may have a negative impact and decrease
any gains in performance that the extra bandwidth might provide
Channel Design - 40 MHz Channel Design
• Another problem with channel bonding is that it usually will result in a higher
noise floor of about 3 dB
• If the noise floor is 3 dB higher, then the SNR is 3 dB lower, which means that the
radios may shift down to lower MCS rates and therefore lower modulation data
rates
• In many cases, this offsets some of the bandwidth gains that the 40 MHz
frequency space provides
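The SNR arithmetic is simple subtraction in the dB domain. A minimal sketch; the –95 dBm noise floor is an assumed typical value:

```python
def snr_db(signal_dbm, noise_floor_dbm):
    """SNR in dB is simply the signal minus the noise floor."""
    return signal_dbm - noise_floor_dbm

# 20 MHz channel: a -70 dBm signal over an assumed -95 dBm noise floor
snr_20 = snr_db(-70, -95)        # 25 dB
# Bonding to 40 MHz raises the noise floor by about 3 dB
snr_40 = snr_db(-70, -95 + 3)    # 22 dB, possibly forcing a lower MCS
```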
Channel Design - 40 MHz Channel Design
• So, should you use channel bonding or not? If four or fewer 40 MHz channels are
available, you might not want to turn on channel bonding, especially if the 5 GHz
radios are transmitting at a higher power level
• If the majority of the WLAN clients do not support channel bonding, don’t use
channel bonding
Channel Design - Static Channels and Transmit Power vs.
Adaptive RF (RRM)
• Probably the most debated topic when it comes to WLAN design is whether to
use static channel and power settings for APs versus using adaptive channel and
power settings
• Based on the accumulated RF information, the access points adjust their power
and channel settings, adaptively changing the RF coverage cells
Channel Design - Static Channels and Transmit Power vs.
Adaptive RF (RRM)
• The algorithms for RRM constantly improve year after year. The majority of
commercial WLAN customers use RRM because it is easy to deploy
Channel Design - Static Channels and Transmit Power vs.
Adaptive RF (RRM)
• From the perspective of the client station, only one access point exists
Channel Design - Single-Channel Architecture
• In this type of WLAN architecture, all access points in the network can be
deployed on one channel in 2.4 GHz or 5 GHz frequency bands
Channel Design - Single-Channel Architecture
• Client stations believe they are connected to only a single access point, although
they may be roaming across multiple physical Access Points
Channel Design - Single-Channel Architecture
• We know that client stations make the roaming decision in an MCA environment
• However, client stations do not know that they roam in an SCA environment
Channel Design - Single-Channel Architecture
• The clients must still be mobile and transfer layer 2 communications between
physical access points
• All the client-roaming mechanisms are now handled back on the WLAN
controller, and client-side roaming decisions have been eliminated in Single
Channel Architecture
Capacity Design
• When a wireless network is designed, two concepts typically compete with
each other: capacity and range
Capacity Design
• In the early days of wireless networks, it was common to install an access point
with the power set to the maximum level to provide the largest coverage area
possible
Capacity Design
• This was typically acceptable because there were very few wireless devices
• Also, access points were very expensive, so companies tried to provide the most
coverage while using the fewest access points
Capacity Design
• Most WLANs are designed with a primary focus on client capacity needs
• We still plan for a –70 dBm or better received signal, high SNR, seamless roaming,
and a proper channel reuse pattern
Capacity Design
• It is important to know that how you design for coverage will also impact
capacity needs
• As said before, APs configured for full transmit power are no longer ideal
Capacity Design
• Adjusting the AP transmit power to limit the effective coverage area is known as
cell sizing and is one of the most common methods of meeting client capacity
needs
• Typical indoor WLAN deployments are designed with the APs set at about one-
fourth to one-third of maximum transmit power
Capacity Design
• Higher user and client density environments may require that the AP transmit
power be set at the lowest setting of 1 mW
• In other words, more APs are needed to meet capacity needs, and therefore AP
transmit power will need to be lowered
Capacity Design
• Limiting the transmit power of APs also helps reduce CCI caused by APs, which
has a direct impact on performance
Capacity Design - High Density
• The terms high density (HD) and very high density (VHD) are often used when
discussing capacity design and planning for a WLAN
Capacity Design - High Density
• The average person may want to connect to an enterprise WLAN with as many as
three or four Wi-Fi devices
• Obviously, the density of client devices also depends on the number of users
Capacity Design - High Density
• APs are deployed in many different rooms with walls that will often contribute to
different levels of attenuation
Capacity Design - Very High Density
• All the APs likely hear each other within the open space
• Design for a very high-density WLAN is quite complex and different from standard
high-density environments with walls
Capacity Design - Ultra High Density
• The best examples of an ultra high-density WLAN are stadiums and sports arenas
Capacity Design - How many clients can connect to an AP?
• There are simply too many variables to always give the same answer for any
WLAN vendor’s AP
• The default settings of an enterprise WLAN radio might allow as many as 100–250
client connections
Capacity Design - How many clients can connect to an AP?
• Since most enterprise APs are dual-frequency with both a 2.4 GHz and 5 GHz
radio, theoretically 200–500 clients could associate to the radios of a single AP
Capacity Design - How many clients can connect to an AP?
• The performance needs of this many client devices will not be met and the user
experience will be miserable
• The perception will be that the Wi-Fi is “slow”
• If the access point is using 802.11n/ac radios with 20 MHz channels, a good rule
of thumb is that each radio could support 35–50 active devices for average use
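A rough sizing sketch using the 35–50 devices-per-radio rule of thumb; the midpoint of 40 and the dual-radio assumption are illustrative choices:

```python
import math

def aps_needed(active_devices, per_radio=40, radios_per_ap=2):
    """Rough AP count using an assumed midpoint of the 35-50 active
    devices per radio rule of thumb, for dual-radio APs."""
    return math.ceil(active_devices / (per_radio * radios_per_ap))

aps_needed(300)   # 300 active devices across dual-radio APs: 4 APs
```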
Capacity Design – Common Design Questions
• However, bandwidth-intensive applications, such as high-definition video
streaming, will have an impact
1. First, how many users currently need wireless access and how many Wi-Fi devices
will they be using?
2. Second, how many users and devices may need wireless access in the future?
3. Where are the users?
• These first two questions will help you to begin adequately planning for a good
ratio of devices per access point while allowing for future growth
Capacity Design - How many users and devices are expected?
• The third question of great significance is, where are the users? Sit down with
network management and indicate on the floor plan of the building any areas of
high user density
• For example, one company might have offices with only 1 or 2 people per room,
whereas another company might have 30 or more people in a common area
separated by cubicle walls
Capacity Design - How many users and devices are expected?
• Other examples of areas with high user density are call centers, classrooms, and
lecture halls
Capacity Design - How many users and devices are expected?
• You should always plan to conduct a validation survey when the users are
present, not during off-hours
Capacity Design - How many users and devices are expected?
• Always remember that not all client devices are equal. Many client devices
consume more airtime due to lesser MIMO capabilities
• For example, an older 802.11n tablet with a 1×1:1 MIMO radio transmitting on a
20 MHz channel can achieve a data rate of 65 Mbps with TCP throughput of 30
Mbps to 40 Mbps
Capacity Design - What types of client devices are connecting to
the WLAN?
• Many laptops also have 3×3:3 MIMO capabilities and thus are capable of higher
data rates
• The bulk of newer smartphones and tablets are now 2×2:2 MIMO capable
Capacity Design - What types of client devices are connecting to
the WLAN?
• The point is that devices with less MIMO capabilities consume more airtime and
therefore affect the aggregate performance of any WLAN
• An AP can service more 2×2:2 MIMO clients efficiently as opposed to legacy 1×1:1
MIMO clients, which operate at lower data rates
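The airtime cost of slow clients can be illustrated with simple arithmetic. This is idealized: MAC overhead, ACKs, and retries are ignored:

```python
def airtime_seconds(payload_mbytes, data_rate_mbps):
    """Idealized airtime to move a payload at a given data rate,
    ignoring MAC overhead, ACKs, and retries."""
    return payload_mbytes * 8 / data_rate_mbps

slow = airtime_seconds(10, 65)    # legacy 1x1:1 client at 65 Mbps
fast = airtime_seconds(10, 300)   # 2x2:2 client at a 300 Mbps rate
# the slow client holds the half-duplex channel about 4.6x longer
# for the same payload, starving the other clients of airtime
```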
Capacity Design – How many APs per room?
• The location where APs are physically mounted also needs to be taken into
consideration based on capacity needs
• Some large areas of a building may only have two or three users that require Wi-
Fi access
• Conversely, other areas, such as an auditorium may have hundreds of users and
devices that need Wi-Fi connectivity
Capacity Design – How many APs per room?
• One AP for every two or three classrooms may be sufficient to meet capacity needs
Capacity Design – How many APs per room?
• How many APs are needed depends on the capacity requirements as well as
customer environment
• The unlicensed 5 GHz frequency spectrum offers many advantages over the
unlicensed 2.4 GHz frequency spectrum for Wi-Fi communications
• The 5 GHz U-NII bands offer a wider range of frequency space and many more
channels
Capacity Design - Band Steering
• A proper 5 GHz channel reuse pattern using multiple channels will greatly
decrease medium contention overhead caused by co-channel interference
• In the 2.4 GHz band, there is always medium contention overhead due to CCI
simply because there are only three channels
Capacity Design - Band Steering
• Another consideration of the 2.4 GHz band is that, in addition to this band being
used for WLAN networking, it is heavily used by many other types of devices,
including microwave ovens, baby monitors, cordless telephones, and video
cameras
Capacity Design - Band Steering
• With all of these different devices operating in the same frequency range, there is
much more RF interference and a much higher noise floor than in the 5 GHz
bands
Capacity Design - Band Steering
• So, if the use of the 5 GHz bands will provide better throughput and performance,
how can we encourage the clients to use this band?
• For starters, it is the client that decides which AP and which band to connect to,
typically based on the strongest signal that it hears for the SSID that it wants to
connect to
Capacity Design - Band Steering
• Most access points have both 2.4 GHz and 5 GHz radios in them, with both of
them advertising the same SSIDs
• Since the 5 GHz signals naturally attenuate more than the 2.4 GHz signals, it is
likely that the client radio will identify the 2.4 GHz radio as having a stronger
signal and connect to it by default
Capacity Design - Band Steering
• In many environments, the client would be capable of making a strong and fast
connection with either of the AP’s radios but will choose the 2.4 GHz signal
because it is the strongest
Capacity Design - Band Steering
• When a dual-frequency client first starts up, it will transmit probe requests on
both the 2.4 and 5 GHz bands, looking for an AP
• Load balancing the clients between access points ensures that a single AP is not
overloaded with too many clients and that the total client population can be
served by numerous APs, with the final result being better performance
Capacity Design – Load Balancing
• When a client wants to connect to an AP, the client will send an association
request frame to the AP
• If an AP is already overloaded with too many clients, the AP will ignore the
association request from the client
• The assumption is that the client will then send another association request to
another nearby AP with a lesser client load
• Over time, the client associations will be fairly balanced across multiple APs
Capacity Design – Load Balancing
• The client load information will obviously have to be shared among the access
points
• Load balancing is a control plane mechanism that can exist either in a distributed
architecture where all the APs communicate with each other, or within a
centralized architecture that utilizes a WLAN controller
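The overloaded-AP behavior described above can be sketched as follows; the 50-client threshold is an assumed value:

```python
MAX_CLIENTS = 50   # assumed per-AP association threshold

def handle_association_request(current_client_count):
    """An overloaded AP simply ignores the association request,
    nudging the client toward a less-loaded neighboring AP."""
    if current_client_count >= MAX_CLIENTS:
        return None                    # ignored; client retries elsewhere
    return "association_response"      # client is accepted

handle_association_request(12)   # accepted
handle_association_request(50)   # ignored
```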
Capacity Design - Airtime Consumption
• Designing for client device capacity has now become the norm
• WLAN design practices now dictate that you design to minimize airtime
consumption, which is directly related to capacity design
Capacity Design - Airtime Consumption
• Wi-Fi is a half-duplex RF medium and only one radio can transmit on a channel at
any given time
Capacity Design - Airtime Consumption
• Whenever a radio has won a transmission opportunity, the radio owns the
available airtime until it finishes transmitting
• Yes, every radio needs to be able to transmit and deliver data; however, there are
some simple WLAN design best practices that can minimize unnecessary airtime
consumption
Capacity Design - Airtime Consumption
• Designing for –70 dBm coverage and high SNR also ensures that client devices will
transmit 802.11 data frames at high data rates, based on the client radio’s
capabilities
Capacity Design - Airtime Consumption
• So what are some other WLAN design best practices that can reduce airtime
consumption?
• One of the best ways to cut down on airtime consumption is to disable some of
the lower data rates on an AP
Capacity Design - Airtime Consumption
• The figure depicts an AP communicating with multiple client stations at 6 Mbps
while communicating with one client using a 150 Mbps data rate
• Artificial intelligence (AI) makes it possible for machines to learn from experience,
adjust to new inputs and perform human-like tasks
Content Recommendation
Voice Recognition
Autonomous Driving
Drivers of Advances in AI
Machine Learning vs. Traditional Programming
Machine Learning
• There are many types of Machine Learning, but all Machine Learning and
Deep Learning algorithms are mainly classified as Supervised, Unsupervised,
or Reinforcement Learning
Machine Learning Types
Machine Learning – Supervised Learning
Supervised learning uses labeled training data to train machines to learn
relationships between given inputs and a given output
Machine Learning – Supervised Learning
Supervised Learning provides predictability for each piece of labeled information
Machine Learning – Supervised Learning
Supervised vs. Unsupervised Learning
• Supervised Learning separates each labeled data point
• Unsupervised Learning clusters them
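A minimal supervised-learning sketch: a 1-nearest-neighbor classifier that learns the input-to-label relationship from labeled training data. The data points are toy values, purely illustrative:

```python
# Labeled training data: (features, label) pairs
labeled = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
           ((5.0, 5.0), "dog"), ((5.1, 4.8), "dog")]

def predict(x):
    """Classify x by the label of its nearest labeled example."""
    def sq_dist(a):
        return sum((ai - xi) ** 2 for ai, xi in zip(a, x))
    _, label = min(labeled, key=lambda pair: sq_dist(pair[0]))
    return label

predict((1.1, 1.0))   # -> "cat"
predict((4.9, 5.2))   # -> "dog"
```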
Machine Learning – Unsupervised Learning
Machine Learning – Unsupervised Learning
Machine Learning – Reinforcement Learning
• An RL agent learns from the consequences of its actions; it selects its actions
based on its past experiences and also by making new choices (exploration)
• The reinforcement signal that the RL-agent receives is a numerical reward, which
encodes the success of an action's outcome, and the agent seeks to learn to
select actions that maximize the accumulated reward over time
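The reward-driven loop described above can be sketched with tabular Q-learning on a toy five-state corridor; the hyperparameters are arbitrary illustrative choices:

```python
import random

random.seed(0)
N = 5                      # states 0..4; reward only on reaching state 4
ACTIONS = [1, -1]          # move right / move left
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):       # episodes
    s = 0
    while s != N - 1:
        # explore occasionally, otherwise act greedily on Q
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0          # numerical reward
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# the learned greedy policy: always move right, toward the reward
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)}
```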
Machine Learning – Reinforcement Learning
• After training is complete, the dog should be able to observe the owner and take
the appropriate action, for example, sitting when the command is “sit”
Real Life example of Reinforcement Learning
Autonomous Parking
Real Life example of Reinforcement Learning
Real Life Examples of Supervised and Unsupervised Learning
Supervised and Unsupervised Learning are not just used to understand whether
a picture is of a cat or a dog!
• It teaches a computer to filter inputs through layers to learn how to predict and
classify information
• Its purpose is to mimic how the human brain works, thus these models are
commonly known as Deep Neural Networks
Deep Learning (Neural Networks)
• The idea behind a deep learning algorithm is that you take input from an
observation and feed it into one layer
• That layer creates an output, which in turn becomes the input for the next layer,
and so on
• Training is the practice of fine-tuning the weights of a neural net, based on the
error rate obtained in the previous iteration
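The layer-by-layer flow can be sketched in a few lines of plain Python. The weights here are hand-picked, not learned; training would adjust them based on the error:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: weighted sum of the inputs plus a bias, passed
    through an activation, one output value per neuron."""
    return [sigmoid(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [1.0, 0.5]                                        # observation
h = layer(x, [[0.4, -0.2], [0.3, 0.8]], [0.0, 0.1])   # first layer
y = layer(h, [[1.0, -1.0]], [0.0])                    # next layer
```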
• Segment Routing (SR) leverages the source routing paradigm. A node steers a
packet through an ordered list of instructions, called ‘segments’
• State is kept in the packet header, not on the router, with Segment Routing
• If you have 100 Edge Routers in your network and you enable MPLS Traffic
Engineering Edge to Edge, you would have 100×99/2 = 4950 LSP states on your
Midpoint LSRs. This is prevalent in many MPLS TE enabled networks
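The state arithmetic can be checked directly with the n×(n−1)/2 full-mesh formula from the text:

```python
def full_mesh_lsp_states(edge_routers):
    """n * (n - 1) / 2 LSP states seen by a midpoint LSR when every
    edge-to-edge pair has a TE LSP."""
    return edge_routers * (edge_routers - 1) // 2

full_mesh_lsp_states(100)   # -> 4950
# with SR, the midpoint holds only one Prefix/Node SID entry per
# edge router: 100 entries instead of 4950 LSP states
```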
Segment Routing – Why Segment Routing?
• If you enable Segment Routing and evaluate the same midpoint case (since
you assign a Prefix/Node SID for every Edge router), the Midpoint LSR would have
100 entries instead of 4950 entries
• The segment list can easily get big if you use explicit routing for OAM purposes.
If you do that, you may end up with 7–8 segments; in that case, you should check
the hardware support
Segment Routing – Why Segment Routing?
• Cisco claims that they have performed tests on a number of service provider
networks and that their findings show that two or three segments would be
enough for most explicit path scenarios
• You can use Segment Routing to provide MPLS VPN service without using LDP for
the transport label distribution
Segment Routing MPLS – How does it work?
• There are mainly two types of Segments in Segment Routing with an MPLS
dataplane: Prefix and Adjacency Segments
• Prefix Segment is used for the shortest-path to the IGP prefix and it is Equal Cost
Multi Path (ECMP) aware
• Segment Routing provides Traffic Engineering without having soft state RSVP-TE
protocol on your network. Soft state protocols require a lot of processing power
• Although Segment Routing does not have admission control, you can use a
centralized controller to specify, for instance, a 50 Mbps LSP path for traffic A and
30 Mbps for traffic B, a process that allows you to do traffic engineering
Segment Routing – Use Cases/Applications
• Segment Routing provides Fast Reroute without RSVP-TE, and you do not need to
have thousands of forwarding states in the network, as it uses IP FRR technology,
specifically Topology Independent LFA
• With Traffic Engineering, you can have ECMP capability, a task that is very difficult
to achieve with RSVP-based Traffic Engineering, because you would need to
create two tunnels
Segment Routing – Use Cases/Applications - EPE
• However, with Segment Routing, BGP Egress peer engineering is much easier
Segment Routing Current Status
• If you have devices that do not support Segment Routing but do support LDP, you
can interwork Segment Routing with the LDP-enabled devices
• Segment Routing is applied to an IPv6 dataplane by encoding IPv6 segments into a
new routing extension header, the Segment Routing Header (SRH)
Segment Routing (SRv6) – SR IPv6 Dataplane
• SRv6 can be used as a unified control plane for DC, WAN, and Metro networks
• SR MPLS can be used in the DC, but hosts generally don’t deploy MPLS; thus SRv6
is seen as the better candidate to be deployed all the way to the host, as IPv6
support in the host is available from every vendor
Segment Routing (SRv6) – SR IPv6 Dataplane
• SR MPLS is used for transport purposes; SRv6 can be used not only for transport but also for
service signaling, so many more protocols can be eliminated in the network
Segment Routing Header in IPv6
Segment Routing (SRv6) – SR IPv6 Dataplane
• When SRv6 is deployed, only the nodes that have to process the packet header must have
SRv6 dataplane support; all other nodes in the network are just plain IPv6 nodes
Segment Routing (SRv6) – SR IPv6 Dataplane
• The 128-bit segment ID is broken into three fields: Locator, Function, and Arguments
• The Argument field of the SID could carry the QoS Flow Identifier (QFI), for
example
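A SID split of this kind can be sketched as bit slicing. The 64/32/32 split and the example SID below are assumptions, since the field widths are deployment-specific:

```python
import ipaddress

def parse_sid(sid, loc_bits=64, func_bits=32):
    """Split a 128-bit SRv6 SID into Locator, Function and Arguments
    (the 64/32/32 split here is an assumed layout)."""
    v = int(ipaddress.IPv6Address(sid))
    arg_bits = 128 - loc_bits - func_bits
    locator = v >> (128 - loc_bits)
    function = (v >> arg_bits) & ((1 << func_bits) - 1)
    args = v & ((1 << arg_bits) - 1)
    return locator, function, args

loc, fn, arg = parse_sid("fc00:1:0:0:0:2:0:5")
# locator fc00:1::/64, function 0x2, arguments 0x5 (e.g., a QFI)
```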
Segment Routing (SRv6) – SR IPv6 Dataplane
• Locator information is distributed by the IGP, and all other nodes install this
information in their IPv6 routing tables. Even though it is not a real interface
address, the other nodes will be able to route packets to this address (aka the
SRv6 SID)
• The segment list entry indexed by Segments Left (which is decremented at every
SRv6 segment endpoint) is copied to the IPv6 Destination Address field, allowing
standard routing practices to be applied to SRv6 packets when traversing
non-SRv6-capable network elements
Segment Routing (SRv6) – Non-SRv6 Capable Nodes
• Non-SRv6-capable nodes just perform IPv6 routing; they don’t have to understand
or take any action on the SRH
SRv6 Basic Functions - END Function
• When a node receives a packet with the End function (Function 0), it decrements
the Segments Left field, updates the Destination Address field in the IPv6 header,
and forwards the packet to the next node along the shortest-path route (the Node
SID in SR-MPLS uses the shortest path tree as well)
• If Segments Left is 0, the final node has been reached, so the IPv6 and SRH headers
are removed and the payload is processed
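The End behavior above can be sketched on a simplified packet structure; the dictionaries below are illustrative, not a real dataplane:

```python
def srv6_end(pkt):
    """End function on a simplified packet: decrement Segments Left,
    copy the now-active segment into the destination, and forward;
    at Segments Left 0 the payload is handed up for processing."""
    srh = pkt["srh"]
    if srh["segments_left"] == 0:
        return pkt["payload"]                  # final node reached
    srh["segments_left"] -= 1
    pkt["ipv6_dst"] = srh["segment_list"][srh["segments_left"]]
    return pkt                                 # forward on shortest path

pkt = {"ipv6_dst": "fc00:2::", "payload": "data",
       "srh": {"segments_left": 2,
               # per SRH convention, the last segment sits at index 0
               "segment_list": ["fc00:4::", "fc00:3::", "fc00:2::"]}}
srv6_end(pkt)   # segments_left becomes 1, ipv6_dst becomes fc00:3::
```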
SRv6 Basic Functions - END Function
SRv6 Basic Functions - END.X Function