Virtual Private Networks
Advanced Technologies
Petr Grygárek
Agenda:
Supporting Technologies (GRE, NHRP)
Dynamic Multipoint VPNs (DMVPN)
Group Encrypted Transport VPNs (GET VPN)
Multicast VPNs (mVPN)
Generic Routing Encapsulation
(GRE)
GRE Principle (1)
• RFC 1701 - encapsulation of an arbitrary L3 protocol over another arbitrary L3 protocol
• defines an additional header (4-20 B)
• "key" field may potentially identify a VRF
• allows multiple overlapping IP ranges
• RFC 1702 - specifies GRE over IPv4 networks
• accompanies RFC 1701
• IP protocol number 47
• Allows the creation of tunnels over a shared infrastructure
• Originally P2P, support for P2MP interfaces added later
GRE Principle (2)
• Completely stateless
• Low overhead
• Tunnel interface is by default always up
• even if the remote endpoint is unavailable
• Support for GRE keepalives
• GRE encapsulation process may be hardware-
accelerated on some platforms
• But be aware that decapsulation may still be CPU-based ;-)
Usage of GRE
• Data tunnels over IP infrastructure
• Unencrypted in the original implementation
• Passing routing information between VPN sites
GRE Point-to-point Interface
• Tunnel source address
• local endpoint physical interface
• implies the local tunnel endpoint's physical IP address
• Tunnel destination: remote endpoint's physical (IP) address
• Optional tunnel protection parameters
• Routing over the tunnel points to the remote endpoint's tunnel address
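As a minimal sketch (Cisco IOS syntax; interface names and addresses are illustrative assumptions), a point-to-point GRE tunnel with the parameters above could look like this:

    interface Tunnel0
     ip address 10.0.0.1 255.255.255.252      ! tunnel (inner) address
     tunnel source GigabitEthernet0/0         ! implies the local physical endpoint IP address
     tunnel destination 198.51.100.2          ! remote endpoint physical IP address
     keepalive 10 3                           ! optional GRE keepalives

A route via 10.0.0.2 (the remote endpoint's tunnel address) then steers traffic into the tunnel.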
GRE Multipoint Interface
• Tunnel interface address
• Local endpoint physical interface
• Optional tunnel protection parameters
• No destination endpoint addresses
• Neither tunnel nor physical
• Destination physical addresses are determined by an (ARP-like) NHRP "database"
• Maps the destination tunnel address to the corresponding physical IP address
• List of peers to which multicasts have to be forwarded
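A multipoint GRE interface with one static NHRP mapping might be sketched as follows (IOS syntax; the tunnel and NBMA addresses are illustrative assumptions):

    interface Tunnel0
     ip address 10.0.0.2 255.255.255.0        ! tunnel interface address
     tunnel source GigabitEthernet0/0         ! local physical endpoint
     tunnel mode gre multipoint               ! no tunnel destination is configured
     ip nhrp network-id 1
     ip nhrp map 10.0.0.1 192.0.2.1           ! tunnel address -> physical (NBMA) address
     ip nhrp map multicast 192.0.2.1          ! peer to which multicasts are replicated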
Next-Hop Resolution Protocol
(NHRP)
RFC 2332
NHRP Principle
• Allows systems connected to an NBMA network to dynamically learn the "physical" ("NBMA") addresses of other systems so that they can communicate directly
• NBMA may be either a connection-oriented network (FR, ATM) or an IP infrastructure
• Direct communication may require establishing an SVC
• NBMA addresses may be either IP addresses or L2 addresses (DLCI, VPI/VCI)
• May be understood as an ARP equivalent for NBMA networks
• ARP is unusable because the underlay does not support broadcast
NHRP Usage
• Reduction of multihop routing over an NBMA network that is not fully meshed (SVC mesh)
• Starts with a partial mesh topology
• Most often hub-and-spoke
• Helps to establish a “dynamic full mesh”
Dynamic Full Mesh Advantages (1)
• Avoids multi-hop routing and overutilization of the hub router
• Avoids double encryption/decryption
• Decreases delay
• Utilizes the underlying network infrastructure more
efficiently
• (The same is valid for static full mesh)
• Support for dynamic NBMA addresses
• Systems behind NAT or with dynamic addresses (DHCP)
• For IP underlay clouds, a mapping between the tunnel "inner" addresses and the tunnel endpoints is needed
Dynamic Full Mesh Advantages
(2)
• Only spoke-to-spoke links that are needed for the
traffic are (dynamically) established
• No need to configure full mesh (manually)
• Not limited by the number of tunnel interfaces and routes supported on low-end routers
• Allows mixing high-end and low-end routers
• a static full-mesh configuration would require all routers to have resources for the full mesh
• If a spoke-to-spoke tunnel cannot be established, the traffic may
still be routed through the hub
NHRP Components
• Next-Hop Clients (NHC)
• Dynamically register with an NHS
• May be added without changing the NHS configuration
• Next-Hop Servers (NHS)
• Allow NHCs to register and to discover the logical-to-physical address mappings of other NHCs
• NHRP cache (on NHC)
• Dynamic and static entries
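On a spoke (NHC), registration with the NHS is typically configured on the tunnel interface; a possible IOS sketch (addresses are illustrative assumptions):

    interface Tunnel0
     ip nhrp network-id 1
     ip nhrp nhs 10.0.0.1                     ! next-hop server = hub tunnel address
     ip nhrp map 10.0.0.1 192.0.2.1           ! static cache entry for the NHS itself
     ip nhrp registration no-unique           ! re-register even if the spoke's NBMA address changes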
NHRP Messages (1)
• Registration Request/Response
• Registration of dynamic physical addresses with NHS
• Inner-outer (L3/L2 or L3/L3) address pair
• Resolution Request
• May be routed through multiple systems along the
(suboptimal) already known path to the destination system
• Resolution Response
• Sent by the destination system directly to the requesting system
NHRP Messages (2)
• Purge Request/Response
• Makes the system (NHC) invalidate cached information obtained via NHRP
NHRP & multicast
• The hub has to be explicitly configured to replicate multicasts to registered spokes (see the mapping sketch below)
• Multicasts are necessary for many routing protocols
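On Cisco IOS the multicast replication list is built from NHRP multicast mappings; a sketch (the hub NBMA address 192.0.2.1 is an illustrative assumption):

    ! Hub: replicate multicasts to every dynamically registered spoke
    interface Tunnel0
     ip nhrp map multicast dynamic
    !
    ! Spoke: send multicasts (e.g. routing-protocol hellos) only towards the hub
    interface Tunnel0
     ip nhrp map multicast 192.0.2.1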
Dynamic Multipoint VPNs
(DMVPN)
DMVPN Principle (1)
• Makes configuration of multipoint VPNs easier by avoiding the need to configure VPN tunnels manually
• Only the hub-and-spoke topology has to be preconfigured
• Creates (encrypted) spoke-to-spoke tunnels on a data-driven basis
• Utilizes NHRP, GRE and IPSec
• Communication between spokes is routed through the hub until the direct tunnel is created (or if it cannot be created)
• On-demand IPSec tunnel negotiation
DMVPN Principle (2)
• Spokes (NHCs) dynamically register with the hub (NHS) using NHRP
• Inner tunnel (logical) to (currently assigned) physical address mapping
• Allows a spoke to look up the address of another spoke
• spokes may have dynamic (physical) addresses
• Each spoke may create spoke-to-spoke tunnels up to its available resources
• Does not prevent any other spoke from using all of its available resources
• Dynamic tunnels are deleted after idle timeout expires
DMVPN Advantages
• Spokes can be added without any hub
configuration change
• Uniform spoke configuration
• Utilizes standard protocols and solutions
• Combination of GRE, NHRP and IPSec
Developmental phases of
DMVPN
• Phase 1 – hub-and-spoke capability only
• Phase 2 – dynamic spoke-to-spoke tunnels
• Phase 3 – limits routing information advertised to
spokes
• Better scalability
• Does not require all spoke routers to maintain all the
routes of the VPN, just those needed for currently
used spoke-to-spoke communications
DMVPN Phase 2 (1)
• The dynamic routing protocol on the hub-to-spoke tunnel advertises all routes behind the hub and the other spokes
• Uses the hub's or the respective spoke's tunnel (inner) address as the next hop to networks behind particular spokes
• the routing protocol has to preserve the next hop (spoke-to-spoke)
• The split-horizon rule has to be turned off on the hub (see the EIGRP sketch below)
• Each spoke has routes to all networks in its routing
table
• with tunnel interface as the outgoing interface
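With EIGRP as the VPN routing protocol, the hub behaviour described above is typically achieved like this (the AS number and interface name are illustrative assumptions):

    interface Tunnel0
     no ip split-horizon eigrp 100            ! re-advertise spoke routes to the other spokes
     no ip next-hop-self eigrp 100            ! preserve the originating spoke's tunnel address as next hop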
DMVPN Phase 2 (2)
• NHRP runs on the spoke's tunnel (multipoint)
interface
• NHRP cache is used to find the logical-to-NBMA
mapping for the next hop address
• If an entry is not found in the cache, an NHRP request has to be sent to the NHS
• A disadvantage is the significant load on the routing protocol in the VPN
DMVPN Phase 3 (1)
• Reduces the amount of routes advertised to
spokes
• Hub summarizes routing information advertised to
spokes
• Hub sets itself as the next hop
• The spoke sends the first data packet to the hub over the tunnel interface
• The logical-to-NBMA mapping is preconfigured for the hub
DMVPN Phase 3 (2)
• If the hub receives a packet from a spoke on the tunnel interface that has to be routed to another spoke out of the same interface, it initiates spoke-to-spoke tunnel creation
• Sends a redirect to the source spoke
• NHRP Redirect message
• Contains the correct (logical) next-hop address and the original destination address
DMVPN Phase 3 (3)
• Based on NHRP Redirect from hub, spoke sends
NHRP Request to determine a NBMA address for
the logical next hop address from the redirect
message
• NHRP Request is routed to the destination spoke
• The destination spoke responds directly to the original requesting spoke with its NBMA address and the whole subnet from its routing table that matches the destination address from the NHRP Request
DMVPN Phase 3 (4)
• Source spoke inserts the record for the particular
destination network into its routing table
• pointing to the newly created spoke-to-spoke tunnel
interface
• most often protected by an IPSec profile
• The following packets follow the direct spoke-to-
spoke path
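On Cisco IOS, phase 3 behaviour is enabled with NHRP redirects and shortcuts; a sketch (the EIGRP summary and AS number are illustrative assumptions):

    ! Hub: send NHRP Redirects and advertise summarized routes with itself as next hop
    interface Tunnel0
     ip nhrp redirect
     ip summary-address eigrp 100 10.0.0.0 255.0.0.0
    !
    ! Spoke: install shortcut routes learned from NHRP Resolution Replies
    interface Tunnel0
     ip nhrp shortcut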
Problems of Hub Failure
• Spoke will delete all routes pointing to the
(multipoint) tunnel interface
• Even existing spoke-to-spoke tunnels become
unusable as there is no entry in the routing table to
route traffic into them
• Tunnels will remain available, but unused
• At least until NHRP cache entries time out
• Routes advertised from redundant hubs may solve
the problem
• Normally they are ignored because of worse AD
DMVPN Configuration
• Multipoint GRE interface on hub
• Because it connects to multiple spokes
• Multipoint GRE interface on spokes
• Because multiple spoke-to-spoke tunnels may be
initiated in parallel
• An IPSec profile is typically applied on the GRE tunnel to protect the traffic (see the sketch below)
• Standard IPSec mechanisms are used
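Putting the pieces together, a hub-side DMVPN configuration might be sketched as follows (names, keys and addresses are illustrative assumptions; spokes would additionally point to the hub with ip nhrp nhs / ip nhrp map):

    crypto isakmp policy 10
     encryption aes 256
     authentication pre-share
     group 14
    crypto isakmp key DmvpnSecret address 0.0.0.0 0.0.0.0
    !
    crypto ipsec transform-set DMVPN-TS esp-aes 256 esp-sha-hmac
     mode transport
    !
    crypto ipsec profile DMVPN-PROF
     set transform-set DMVPN-TS
    !
    interface Tunnel0
     ip address 10.0.0.1 255.255.255.0
     ip nhrp network-id 1
     ip nhrp map multicast dynamic
     tunnel source GigabitEthernet0/0
     tunnel mode gre multipoint
     tunnel protection ipsec profile DMVPN-PROF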
Group-Encrypted Transport VPNs
(GET VPN)
GET VPN (1)
• Tunnel-less any-to-any service
• Better scalability
• No multiple tunnel interfaces needed for partial/full mesh
• No overlay routing
• Optimal traffic paths
• IPSec based - transport mode
• Supports multicasts and QoS
• IP header visible (incl. QoS marking)
GET VPN (2)
• Secure central key distribution to routers (group
members) in a domain
• Key server
• Unicast & multicast key distribution to authorized routers
(download/push)
• Policy management
• A secondary key server may be deployed for redundancy
• automatic failover (COOP protocol)
GDOI: Group Domain of
Interpretation (1)
• Key management protocol between group
member(s) and key server
• RFC 3547
• based on ISAKMP/IKE
• establishes a security association among two or more
group members
• Uses IKE Phase 1 to authenticate group
members to a key server
• according to defined group policy
GDOI: Group Domain of
Interpretation (2)
• Group key (“key encryption key”, KEK) is pulled
from key server during IKE phase 2 by group
members
• Key server pushes traffic encryption keys (TEK) to all group members using unsolicited multicast / broadcast / unicast messages - encrypted with the KEK
• periodic re-keys
• The TEK may be used for both unicast and multicast communication between GMs
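A GET VPN configuration might be sketched as follows (group name, identity, ACL and addresses are illustrative assumptions; the ISAKMP policy, RSA rekey keys and the IPsec profile are assumed to be defined elsewhere):

    ! Key server
    crypto gdoi group GETVPN-GROUP
     identity number 1234
     server local
      rekey transport unicast
      rekey authentication mypubkey rsa REKEY-RSA
      sa ipsec 1
       profile GETVPN-PROF
       match address ipv4 GETVPN-TRAFFIC      ! traffic to be group-encrypted
      address ipv4 10.0.0.1
    !
    ! Group member
    crypto gdoi group GETVPN-GROUP
     identity number 1234
     server address ipv4 10.0.0.1
    crypto map GETVPN-MAP 10 gdoi
     set group GETVPN-GROUP
    interface GigabitEthernet0/0
     crypto map GETVPN-MAP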
Multicast VPNs (mVPN)
Implementation Requirements
• Potentially different PIM modes in the core and
each mVPN
• Support for all PIM modes
• Overlap of customers' multicast addressing
Overlay Infrastructure
• Full mesh of tunnels between VPN sites
• Hides VPN multicast from the core
• No multicast state in the core
• Customers' multicast groups may overlap
• Non-scalable
• Suboptimal multicast routing (replication)
2-level Multicast Solution
• Default Multicast Distribution Tree (Default MDT)
• Aggregates all multicast traffic between sites of the
same VPN
• GRE-encapsulated
• Including system-oriented traffic between PE routers (PIM
sessions between PEs)
• May be seen as a multiaccess segment
• Every PE router is connected via a virtual tunnel interface
• Suboptimal – delivers ALL multicast traffic to all PEs
of the VPN
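On Cisco IOS MPLS PEs the Default MDT is configured per VRF; a sketch (VRF name, RD and group address are illustrative assumptions, and PIM must be enabled in the core):

    ip multicast-routing
    ip multicast-routing vrf CUST-A
    !
    ip vrf CUST-A
     rd 65000:1
     mdt default 239.255.1.1                  ! Default MDT group of this VPN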
An optimization: Data MDT (1)
• Configured optionally
• Carries the traffic of a single (or a few) customer multicast group(s)
• The source PE switches from the Default MDT to a Data MDT after a preconfigured traffic threshold is exceeded for the given group(s)
• The tree spans only PEs with networks interested in
particular multicast groups behind them
• The Default MDT is used to inform other PEs about active sources sending to the Data MDT
• A PE may optionally join the Data MDT
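A Data MDT is enabled with a traffic threshold under the same VRF; a sketch (group range and threshold are illustrative assumptions):

    ip vrf CUST-A
     mdt data 239.255.2.0 0.0.0.255 threshold 100   ! move groups exceeding 100 kbps to a Data MDT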
Data MDT: Pros and Cons
• Limits traffic over core network
• More state in the core network (multiple trees)