Domain 4 (Communication and Network Security)
At this layer, a 48-bit (12-digit hexadecimal) address is defined that represents
the physical address “burned-in” or chemically etched into each Network
Interface Card (NIC). The first three
octets (MM:MM:MM or MM-MM-MM) are the ID number of the hardware
manufacturer. Manufacturer ID numbers are assigned by the Institute of
Electrical and Electronics Engineers (IEEE). The last three octets (SS:SS:SS or
SS-SS-SS) make up the serial number for the device that is assigned by the
manufacturer. The Ethernet and ATM technologies supported on devices use
the MAC-48 address space. IPv6 uses the EUI-64 address space.
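As a quick illustration, the short Python sketch below (the address is made up) splits a MAC into the IEEE-assigned manufacturer prefix and the vendor-assigned serial described above.

```python
# Minimal sketch (invented address): splitting a 48-bit MAC address into its
# manufacturer (OUI) and device-serial halves, as described above.
def split_mac(mac: str):
    # Accept either ":" or "-" separated notation, e.g. "00:1A:2B:3C:4D:5E".
    octets = mac.replace("-", ":").split(":")
    if len(octets) != 6:
        raise ValueError("expected a 48-bit MAC written as six octets")
    oui = ":".join(octets[:3]).upper()      # first three octets: IEEE-assigned manufacturer ID
    serial = ":".join(octets[3:]).upper()   # last three octets: vendor-assigned serial
    return oui, serial

print(split_mac("00:1A:2B:3C:4D:5E"))       # ('00:1A:2B', '3C:4D:5E')
```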
This layer is concerned with sending frames to the next link on a local area
network.
Address Resolution Protocol (ARP) is used at the MAC layer to provide for
direct communication between two devices within the same LAN segment.
Sending devices will resolve IP addresses to MAC addresses of target devices
to communicate.
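The Python sketch below is a simplified, purely simulated view of that resolution step; the addresses and the in-memory "LAN" table are invented for illustration.

```python
# Minimal sketch (invented addresses, purely simulated): resolving an IP address to
# a MAC address the way ARP does -- broadcast a question, the owner answers.
LAN_HOSTS = {                       # ip -> mac of every NIC on this segment
    "192.168.1.10": "00:1A:2B:3C:4D:5E",
    "192.168.1.20": "F4:5C:89:12:34:56",
}
arp_cache = {}                      # what the sending host has already learned

def resolve(target_ip: str) -> str:
    if target_ip in arp_cache:                       # answered from cache, no broadcast needed
        return arp_cache[target_ip]
    # "Who has 192.168.1.20?" -- every host sees the broadcast,
    # but only the owner of the IP replies with its MAC address.
    mac = LAN_HOSTS.get(target_ip)
    if mac is None:
        raise LookupError(f"no host on this segment owns {target_ip}")
    arp_cache[target_ip] = mac
    return mac

print(resolve("192.168.1.20"))      # F4:5C:89:12:34:56 (learned via broadcast)
print(resolve("192.168.1.20"))      # same answer, now served from the ARP cache
```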
Fibre Channel is a high-speed serial interface using either optical or electrical
connections (i.e., the physical layer) at data rates currently up to 2Gbits/s with
a growth path to 10Gbits/s. FCoE is a lightweight encapsulation protocol and
lacks the reliable data transport of the TCP layer. Therefore, FCoE must
operate on DCB-enabled Ethernet and use lossless traffic classes to prevent
Ethernet frame loss under congested network conditions. FCoE on a DCB
network mimics the lightweight nature of native FC protocols and media. It
does not incorporate TCP or even IP protocols. This means that FCoE is a layer
2 (non-routable) protocol just like FC. FCoE is only for short-haul
communication within a data center.
Multiprotocol Label Switching (MPLS) is a wide area networking protocol that
operates at both layer 2 and 3 and does “label switching.” The first device
does a routing lookup, just like before, but instead of finding a next-hop, it
finds the final destination router. And it finds a predetermined path from
“here” to that final router. The router applies a “label” based on this
information. Future routers use the label to route the traffic without needing
to perform any additional IP lookups. At the final destination router, the label
is removed, and the packet is delivered via normal IP routing. RFC 3031
defines the MPLS label switching architecture.
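The sketch below (Python, with invented labels and interface names) illustrates the label-switching idea: one routing lookup at the ingress router, then pure label lookups at each subsequent hop.

```python
# Minimal sketch (assumed values, not from the source): label switching as a
# table lookup, contrasted with a per-hop IP routing lookup.
# Ingress router: one routing lookup chooses a path and pushes a label.
FORWARDING_EQUIVALENCE = {"203.0.113.0/24": 42}   # destination prefix -> label for the whole path

# Core routers: forward purely on the incoming label, no IP lookup needed.
LABEL_TABLE = {
    # incoming label -> (outgoing interface, outgoing label); None = pop label at egress
    42: ("if-east", 57),
    57: ("if-core2", None),
}

def forward(label):
    out_if, out_label = LABEL_TABLE[label]
    if out_label is None:
        return f"pop label, deliver via normal IP routing on {out_if}"
    return f"swap label {label} -> {out_label}, send out {out_if}"

print(forward(42))   # swap label 42 -> 57, send out if-east
print(forward(57))   # pop label, deliver via normal IP routing on if-core2
```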
MPLS also provides benefits to organizations that are choosing a Software-Defined Wide Area Network (SD-WAN), which we will cover later in this domain.
The Point-to-Point Protocol (PPP) provides a standard method for transporting multiprotocol datagrams over point-to-point links. PPP comprises three main components:
• A method for encapsulating multiprotocol datagrams
• A Link Control Protocol (LCP) for establishing, configuring, and testing the data-link connection
• A family of Network Control Protocols (NCPs) for establishing and configuring different network-layer protocols
Bridges are layer 2 devices that filter traffic between segments based on MAC
addresses. In addition, they amplify signals to facilitate physically
larger networks. A basic bridge filters out frames that are destined for another
segment. Bridges can connect LANs with unlike media types,
such as connecting an Unshielded Twisted Pair (UTP) segment with a segment
that uses coaxial cable. Bridges do not reformat frames, such as converting a Token Ring frame to Ethernet. This means that a simple bridge can only connect identical layer 2 architectures (e.g., Ethernet to Ethernet).
Bridges can pose a security risk to organizations by effectively making all traffic crossing the bridge visible to anyone connected to the LAN.
Switches
The most common type of switches used today in the LAN operate at layer 2. A switch establishes a collision domain per port, enabling more efficient transmissions with CSMA/CD logic within Ethernet. Switches are the core device used today to build LANs. There are many security features offered within switches today, such as port blocking, port authentication, MAC filtering, and virtual local area networks (VLANs), to name a few. Layer 3 switches are switch/router combinations and are capable of making "switching decisions" based on either the MAC or IP address.
Virtual local area networks (VLANs) allow network administrators to use
switches to create software-based LAN segments that can be defined based
on factors other than physical location. Devices that share a VLAN
communicate through switches, without being routed to other sub-networks,
which reduces overhead due to router latency (as routers become faster, this
is less of an advantage).
VLANs are sometimes assumed to provide a degree of security because communication within a VLAN is restricted to member devices.
However, there are attacks that allow a malicious user to see traffic from other
VLANs (so-called VLAN hopping). Therefore, a VLAN can be created so that
engineers can efficiently share confidential documents,
but the VLAN does not significantly protect the documents from unauthorized
access.
The network layer moves data between networks as packets by means of logical
addressing schemes.
In many cases, computer transmission methodology reflects some of the
norms that happen in a verbal conversation. Typically, if you want to have a
private conversation with an individual, you will take that person aside and
speak one-to-one.
A broadcast, by contrast, is sent to every device on the segment. Depending on the network topology, the broadcast could have anywhere from one to tens of
thousands of recipients. Like a person standing on a soapbox, this is a noisy
method of communication. Typically, only one or two destination hosts are
interested in the broadcast; the other recipients waste resources to process the
transmission. However, there are productive uses for broadcasts. Consider a
router that knows a device’s IP address but must determine the device’s media
access control (MAC) address. The router will broadcast an Address Resolution
Protocol (ARP) request asking for the device’s MAC address.
The Internet Protocol (IP) is the dominant protocol that operates at the OSI
Network Layer 3. IP is responsible for addressing packets so that they can be
transmitted from the source to the destination hosts. Because it is an
unreliable protocol, it does not guarantee delivery. IP will subdivide the
message into fragments when they are too large for a packet. Hosts are
distinguished by the IP addresses. The address is expressed as four octets
separated by a dot (.), for example, 216.12.146.140. Each octet may have a
value between 0 and 255. However, 0 and 255 are not used for hosts. 255 is
used for broadcast addresses, and the 0’s meaning depends on the context in
which it is used. Each address is subdivided into two parts: the network
number and the host. The network number assigned by an external
organization, such as the Internet Corporation for Assigned Names and
Numbers (ICANN), represents the organization’s network. The host represents
the network interface within the network. The part of the address that
represents the network number defines the network’s class. Class A network
used the leftmost octet as the network number, Class B used the leftmost two
octets, etc. The part of the address that is not used as the network number is
used to specify the host. For example, the address 216.12.146.140 represents a
Class C network. Therefore, the network portion of the address is represented
by the 216.12.146, and the unique host address within the network block is
represented by 140. The 127 Class A network block is reserved for a computer's loopback address; usually, the address 127.0.0.1 is used. The loopback address is used to provide a mechanism for self-diagnosis
and troubleshooting at the machine level. This mechanism allows a network
administrator to treat a local machine as if it were a remote machine, and ping
the network interface to establish whether it is operational.
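The following Python sketch uses the standard ipaddress module to illustrate the network/host split and the reserved loopback block discussed above (the /24 boundary corresponds to the Class C example).

```python
# Minimal sketch (example values from the text): using Python's ipaddress module to
# look at the network/host split and the loopback block.
import ipaddress

addr = ipaddress.ip_address("216.12.146.140")
net = ipaddress.ip_network("216.12.146.0/24")           # classful Class C boundary = /24

print(addr in net)                                      # True: .140 is a host within 216.12.146.0/24
print(net.network_address)                              # 216.12.146.0 (host portion all zeros)
print(net.broadcast_address)                            # 216.12.146.255 (host portion all ones)
print(ipaddress.ip_address("127.0.0.1").is_loopback)    # True: reserved loopback block
```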
IPv6 is a modernization of IPv4 that includes the following:
• A much larger address field: IPv6 addresses are 128 bits, which supports 2^128 hosts. Suffice it to say that we will not run out of addresses.
• Improved security: IPSec can be implemented in IPv6. This will help ensure
the integrity and confidentiality of IP packets and allow communicating
partners to authenticate with each other.
The ICMP is used for the exchange of control messages between hosts and
gateways and is used for diagnostic tools such as ping and traceroute. ICMP
can be leveraged for malicious behavior, including man-in-the-middle and
denial-of-service attacks.
IGMP is used to manage multicasting groups that are a set of hosts anywhere
on a network that are listening for a transmission. Multicast agents administer
multicast groups, and hosts send IGMP messages to local agents to join and
leave groups.
Open Shortest Path First (OSPF) is an interior gateway routing protocol
developed for IP networks based on the shortest path first or link-state
algorithm. A link-state algorithm can keep track of a total “cost” to calculate
the most efficient way of moving information from a source to destination.
While a distance vector protocol, such as Routing Information Protocol (RIP),
will basically use the number of hops or count of links between networks to
determine the best path, a link-state algorithm can identify the most efficient path by weighing the connecting speed, congestion of the link, availability of the link, and the total hops. A path with a longer hop count could still be the best path if all of its other measurements are superior to those of a path with a shorter hop count.
Each link-state router builds a complete map of the network's routing structure (topology). The advantage of shortest path first algorithms
is that their use results in smaller, more frequent updates everywhere. They
converge quickly, thus preventing such problems as routing loops and Count-to-
Infinity (when routers continuously increment the hop count to a network). The
disadvantage of shortest path first algorithms is that they require substantial
amounts of CPU power and memory.
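The Python sketch below, using an invented topology and link costs, shows how a link-state (shortest path first) calculation can prefer a three-hop path over a two-hop path when the total cost is lower.

```python
# Minimal sketch (assumed topology and costs): a link-state style shortest-path
# calculation where a 3-hop path wins over a 2-hop path because its total cost is lower.
import heapq

# graph[node] = {neighbor: link cost}; lower cost = faster/less congested link
graph = {
    "A": {"B": 10, "C": 1},
    "B": {"D": 10},
    "C": {"E": 1},
    "E": {"D": 1},
    "D": {},
}

def dijkstra(src, dst):
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, link_cost in graph[node].items():
            heapq.heappush(heap, (cost + link_cost, nbr, path + [nbr]))

print(dijkstra("A", "D"))   # (3, ['A', 'C', 'E', 'D']) beats the 2-hop A->B->D path of cost 20
```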
Routers route packets to other networks and are commonly referred to as the gateway. They read the IP destination in received packets and, based on the router's view of the network, determine the next device on the network (the next hop) to send the packet to. If the destination address is not on a
network that is directly connected to the router, it will send the packet to the
gateway of last resort, another connected router, and rely on that router to
establish a path. Routers can be used to interconnect different technologies
and change the architecture. For example, connecting Token Ring and Ethernet networks to the same router would allow IP packets from the Ethernet segment to be
forwarded to a Token Ring network. Routers are most commonly used today
to connect LANs to
WANs. To build a network, you need switches for the LAN and a router to
connect the LAN to the WAN. The most basic security that can be performed
at layer 3 on a router is an access control list (ACL) that can define permitted
and denied source and destination addresses and ports or services.
Routers and firewalls are devices that enforce administrative security policies by
filtering incoming traffic based on a set of rules. While a firewall should always be
placed at internet gateways, there are also internal network considerations and
conditions where a firewall would be employed, such as network zoning.
Additionally, firewalls are also threat management appliances with a variety of
other security services embedded, such as proxy services and
intrusion prevention services (IPS) that seek to monitor and alert proactively at the
network perimeter.
The transport layer delivers end-to-end services through segments transmitted in a
stream of data and controls streams of data to relieve congestion through elements
that include quality of service (QoS).
The Transmission Control Protocol (TCP) provides connection-oriented data
management and reliable data transfer.
UDP provides connectionless data transfer without error recovery. UDP uses port numbers in a similar fashion to TCP. As a connectionless protocol, UDP is attractive to attackers because there is no connection state for routers or firewalls to observe and monitor.
Well-Known Ports: Ports 0–1023
• These ports are related to the common protocols utilized in the underlying management of the Transmission Control Protocol/Internet Protocol (TCP/IP) suite (Domain Name System (DNS), Simple Mail Transfer Protocol (SMTP), etc.)
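As a small illustration, Python's standard socket module can query the local services database for these well-known assignments (results depend on the local system's services file).

```python
# Minimal sketch: looking up well-known port assignments from the local
# services database with Python's standard socket module.
import socket

print(socket.getservbyname("smtp", "tcp"))    # 25
print(socket.getservbyname("domain", "udp"))  # 53 (DNS)
print(socket.getservbyport(80, "tcp"))        # 'http'
```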
The session layer provides a logical persistent connection between peer hosts. The
session layer is responsible for creating, maintaining, and tearing down the session.
Remote procedure calls (RPCs) allow a client to execute code on a remote host by sending a set of instructions to an application residing on a different host on the network. It is important to note that RPC does not in fact provide any services on its own; instead, it provides a brokering service by providing (basic) authentication and a way to address the actual service.
The presentation layer ensures that communications delivered between sending and receiving computer systems are in a common and discernible format.
To provide a reliable syntax, systems processing at the presentation layer will use
American Standard Code for Information Interchange (ASCII) or Extended Binary
Coded Decimal Interchange Code (EBCDIC) to translate from Unicode. In 2016, the W3C Internationalization Working Group estimated that 86 percent of all web pages sampled were using the UTF-8 Unicode character encoding. It further
states, “Not only are people using UTF-8 for their pages, but Unicode encodings are
the basis of the Web itself. All browsers use Unicode internally, and convert all other
encodings to Unicode for processing. As do all search engines. All modern operating
systems also use Unicode internally. It has become part of the fabric of the Web.”
Translation services are also necessary when considering that different computer
platforms (Macintosh and Windows personal computers) may exist within the same
network and could be sharing data. The presentation layer is needed to translate
the output from unlike systems to similar formats.
Data conversion or bit order reversal and compression are other functions of the
presentation layer. As an example, an MPEG-1 Audio Layer-3 (MP3) is a standard
audio encoding and compression algorithm that creates a file with a bitrate of
128kbit/s. The Waveform Audio File Format (WAVE) with Linear PCM bitstream is
another standard audio encoding that creates a file with a sampling rate of 44.1 kHz. Encoding and compression for these formats are accomplished at the presentation
layer. If a tool is used to convert one format into another, this is also accomplished
at the presentation layer.
Encryption services such as TLS/SSL are managed below, above, and within the
presentation layer. At times, the encoding capabilities that are resident at the
presentation layer are inappropriately conflated with a specific set of cryptographic
services. Abstract Syntax Notation (ASN.1) is an ISO standard that addresses the
issue of representing, encoding, transmitting, and decoding data structures. The
transfer of data entities between two points of communication could appear nonsensical or encoded if a nonparticipating (eavesdropping) third party were not aware of the standard being used in transmission.
The application layer supports or hosts the functions of applications that run on a system. All manner of human-supported interfaces, messaging, systems control, and processing occur at the application layer. While the application layer itself is not the application, it is where applications run.
DHCP is a client/server application that is designed to assign IP addresses from a
pool of pre-allotted addresses on a DHCP server. Based upon the specifications in RFC 2131, the client transmits to server port 67 and the server responds to client port 68. The client sends out a broadcast with a DHCPDISCOVER packet. The server responds with a DHCPOFFER giving the client an available address to use. The client responds back with a DHCPREQUEST to use the offered address, and the server sends back a DHCPACK allowing the client to bind the requested address to the network interface card (NIC). If a DHCP server doesn't respond in a predetermined time, then the DHCP client self-assigns an IPv4 link-local address in the 169.254.x.x range, as defined in RFC 3927.
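The following Python sketch simulates the DISCOVER/OFFER/REQUEST/ACK exchange and the link-local fallback described above; no real DHCP traffic is generated and the offered address is invented.

```python
# Minimal sketch (simulated, no real network traffic): the DORA exchange and the
# link-local fallback described above. Message names follow RFC 2131.
import random

def dhcp_handshake(server_responds: bool) -> str:
    if not server_responds:
        # RFC 3927 fallback: self-assign an address from 169.254.0.0/16
        return f"169.254.{random.randint(1, 254)}.{random.randint(1, 254)} (link-local)"
    transcript = [
        "client  -> broadcast : DHCPDISCOVER",
        "server  -> client    : DHCPOFFER 192.0.2.50",   # invented offered address
        "client  -> server    : DHCPREQUEST 192.0.2.50",
        "server  -> client    : DHCPACK 192.0.2.50",
    ]
    print("\n".join(transcript))
    return "192.0.2.50 (leased)"

print(dhcp_handshake(server_responds=True))
print(dhcp_handshake(server_responds=False))
```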
DNS resolves Fully Qualified Domain Names (FQDN) to IP addresses and transmits
data on port 53. According to RFC 1035, the local user, or client, queries an agent
known as a Resolver that is part of the client operating system. DNS is used to
resolve a FQDN to an IP address. Network nodes automatically register this
resolution in the DNS server’s database. To resolve any external domain name,
each DNS in the world must hold a list of these root servers. Various extensions to
DNS have been proposed to enhance its functionality and security, for instance, by
introducing authentication using DNS Security Extensions (DNSSEC), multicasting, or
service discovery.
DNS maintains a directory of zones that have a hierarchical superior known as the root, represented by a trailing dot (".") appended to the end of a FQDN. The root servers (at the initial printing of this publication there are 13) carry references to what are known as Top-Level Domains (TLDs). A few examples of TLDs are .com, .edu, .gov, etc. The TLDs contain references to subzones known as second-level domains. A few examples of second-level domains include amazon.com, microsoft.com, ibm.com, etc. The subzones can continue with third- or fourth-level domains that are typically tied to a specific service.
When a resolver connects to a DNS server, the default specifications state that it will do so with an iterative lookup. This means that the DNS server returns a referral and hands the remainder of the lookup back to the resolver after answering the first query. In a recursive lookup, the DNS server itself manages the lookup from the root servers down to the final answer and returns the completed response for the FQDN to the original resolver.
The following records are necessary for the DNS server to be operational.
• Host (A)
• Start of Authority (SOA)
• Name Server (NS)
• Pointer (PTR)
• Mail Exchange (MX)
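As an illustration, the Python sketch below resolves a hostname through the operating system's resolver; the hostname is an example and the output depends on the DNS data published for it.

```python
# Minimal sketch: resolving an FQDN to IP addresses with the operating system's
# resolver via Python's standard library (the hostname is an example).
import socket

fqdn = "www.example.com"
addresses = {info[4][0] for info in socket.getaddrinfo(fqdn, None)}
print(sorted(addresses))          # the IPv4/IPv6 addresses the resolver returned

# Reverse (PTR-style) lookup of an address back to a name, where one is published.
print(socket.gethostbyaddr("8.8.8.8")[0])   # e.g. 'dns.google'
```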
SNMP is designed to manage network infrastructure. SNMP architecture consists of
a management server (called the manager in SNMP terminology) and a client
usually installed on network devices, such as routers and switches, called an agent.
SNMP allows the manager to retrieve “get” values of variables from the agent, as
well as “set” variables. Such variables could be routing tables or performance-
monitoring information. Probably the most easily exploited SNMP vulnerability is a
brute-force attack on default or easily guessable SNMP passwords known as
“community strings” often used to manage a remote device. Given the scale of
SNMP v1 and v2 deployment, combined with a lack of clear direction from the
security professional with regards to the risks associated with
using SNMP without additional security enhancements to protect the community
string, it is certainly a realistic scenario and a potentially severe but easily mitigated
risk. Through version 2, SNMP did not provide any meaningful authentication or transmission security. Authentication consists only of an identifier, called a community
string, by which a manager will identify itself against an agent (this string is
configured into the agent) and a password sent with
a command. As a result, passwords can be easily intercepted, which could then result in commands being sniffed and potentially faked. Compounding the previous problem, SNMP version 2 did not support any form of encryption, so passwords (community strings) were passed as cleartext. SNMP version 3 addresses this weakness with authentication and encryption.
LDAP uses a hierarchical tree structure for directory entries. Like X.500, LDAP entries support the distinguished name (DN) and relative distinguished name (RDN) concepts. DN attributes are typically based on an entity's DNS name. Each entry in the database has a series of name/value pairs to denote the various attributes associated with each entry.
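The short Python sketch below breaks an invented DN into its RDNs and attribute/value pairs (real DNs may contain escaped separators, which this simple split ignores).

```python
# Minimal sketch (the DN below is an invented example): breaking a distinguished
# name (DN) into its relative distinguished names (RDNs) and attribute/value pairs.
dn = "cn=Jane Doe,ou=Engineering,dc=example,dc=com"

rdns = dn.split(",")                               # each element is one RDN
attributes = [tuple(rdn.split("=", 1)) for rdn in rdns]

print(rdns)        # ['cn=Jane Doe', 'ou=Engineering', 'dc=example', 'dc=com']
print(attributes)  # [('cn', 'Jane Doe'), ('ou', 'Engineering'), ('dc', 'example'), ('dc', 'com')]
```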
Remote Meeting Technology
Several technologies and services exist that allow organizations and individuals to
meet “virtually.” These applications are typically web-based and either install
extensions in the browser or client software on the host system. These technologies
also typically allow “desktop sharing” as a feature. This feature may allow the
viewing of a user’s desktop. Some organizations use dedicated equipment such as
cameras, monitors and meeting rooms to host and participate in remote meetings.
These devices are often integrated with Voice over Internet Protocol (VoIP).
Virtual circuit technologies support two types of circuits: Permanent Virtual Circuits (PVCs) and Switched Virtual Circuits (SVCs).
Circuit-Switched Networks
Circuit-switched networks establish a dedicated circuit between endpoints. These
circuits consist of dedicated switch connections. Neither endpoint starts
communicating until the circuit is completely established. The endpoints have
exclusive use of the circuit and its bandwidth. Carriers base the cost of using a
circuit-switched network on the duration of the connection, which makes this type of network cost-effective only for a steady communication stream between the
endpoints. Examples of circuit-switched networks are the plain old telephone
service (POTS), Integrated Services Digital Network (ISDN), and Point-to-Point
Protocol (PPP).
Packet-Switched Networks
Packet-switched networks do not use a dedicated connection between endpoints.
Instead, data is divided into packets and transmitted on a shared network. Each
packet contains meta-information so that it can be independently routed on the
network. Networking devices will attempt to find the best path for each packet to
its destination. Because network conditions could change while the partners are
communicating, packets could take different paths as they traverse the network and arrive in any order. It is the responsibility of the
destination endpoint to ensure that the received packets are in the correct order
before sending them up the stack.
Network Functions Virtualization (NFV) moves network functions off dedicated hardware and onto virtualized infrastructure, which simplifies capacity management since there is a more thorough utilization of resources. As service providers struggled to keep up with quick deployment needs and faster growth models, the slowness of hardware-based solutions was exposed. A number of these service providers came together under the European Telecommunications Standards Institute (ETSI) and worked to formalize NFV standards.
Hardware reigned supreme in the networking world until the emergence of
software-defined networking (SDN), a category of technologies that separate the
network control plane from the forwarding plane to enable more automated
provisioning and policy-based management of network resources.
SDN's origins can be traced to a research collaboration between Stanford University
and the University of California at Berkeley that ultimately yielded
the OpenFlow protocol in the 2008 timeframe.
OpenFlow is only one of the first SDN canons, but it's a key component because it
started the networking software revolution. OpenFlow defined a programmable
network protocol that could help manage and direct traffic among routers and
switches no matter which vendor made the underlying router or switch. In the years
since its inception, SDN has evolved into a reputable networking technology offered
by key vendors including Cisco, VMware, Juniper, Pluribus and Big Switch. The Open
Networking Foundation develops myriad open-source SDN technologies as well.
"Datacenter SDN no longer attracts breathless hype and fevered expectations, but
the market is growing healthily, and its prospects remain robust," wrote Brad
Casemore, IDC research vice president, data center networks, in a recent
report, Worldwide Datacenter Software-Defined Networking Forecast, 2018–2022.
"Datacenter modernization, driven by the relentless pursuit of digital transformation
and characterized by the adoption of cloudlike infrastructure, will help to maintain
growth, as will opportunities to extend datacenter SDN overlays and fabrics to
multicloud application environments." SDN will be increasingly perceived as a form
of established, conventional networking, Casemore said.
IDC estimates that the worldwide data center SDN market will be worth more than
$12 billion in 2022, recording a CAGR of 18.5% during the 2017–2022 period. The
market generated revenue of nearly $5.15 billion in 2017, up more than 32.2% from
2016. In 2017, the physical network represented the largest segment of the
worldwide datacenter SDN market, accounting for revenue of nearly $2.2 billion, or
about 42% of the overall total revenue. In 2022, however, the physical network is
expected to claim about $3.65 billion in revenue, slightly less than the $3.68 billion
attributable to network virtualization overlays/SDN controller software but more
than the $3.18 billion for SDN applications.
“We're now at a point where SDN is better understood, where its use cases and value
propositions are familiar to most datacenter network buyers and where a growing
number of enterprises are finding that SDN offerings offer practical benefits,”
Casemore said. “With SDN growth and the shift toward software-based network
automation, the network is regaining lost ground and moving into better alignment
with a wave of new application workloads that are driving meaningful business
outcomes.”
What is SDN?
The idea of programmability is the basis for the most precise definition of what SDN
is: technology that separates the control plane management of network devices from
the underlying data plane that forwards network traffic. IDC broadens that definition
of SDN by stating: “Datacenter SDN architectures feature software-defined overlays
or controllers that are abstracted from the underlying network hardware, offering
intent-or policy-based management of the network as a whole. This results in a
datacenter network that is better aligned with the needs of application workloads
through automated (thereby faster) provisioning, programmatic network
management, pervasive application-oriented visibility, and where needed, direct
integration with cloud orchestration platforms.”
The driving ideas behind the development of SDN are myriad. For example, it
promises to reduce the complexity of statically defined networks; make automating
network functions much easier; and allow for simpler provisioning and management
of networked resources, everywhere from the data center to the campus or wide
area network. Separating the control and data planes is the most common way to
think of what SDN is, but it is much more than that, said Mike Capuano, chief
marketing officer for Pluribus. “At its heart SDN has a centralized or distributed
intelligent entity that has an entire view of the network, that can make routing and
switching decisions based on that view,” Capuano said. “Typically, network routers
and switches only know about their neighboring network gear. But with a properly
configured SDN environment, that central entity can control everything, from easily
changing policies to simplifying configuration and automation across the enterprise.”
How does SDN support edge computing, IoT and remote access?
A variety of networking trends have played into the central idea of SDN. Distributing
computing power to remote sites, moving data center functions to the edge,
adopting cloud computing, and supporting Internet of Things environments – each of
these efforts can be made easier and more cost efficient via a properly configured
SDN environment. Typically in an SDN environment, customers can see all of their
devices and TCP flows, which means they can slice up the network from the data or
management plane to support a variety of applications and configurations, Capuano
said. So users can more easily segment an IoT application from the production world
if they want, for example.
Some SDN controllers have the smarts to see that the network is getting congested
and, in response, pump up bandwidth or processing to make sure remote and edge
components don’t suffer latency.
SDN technologies also help in distributed locations that have few IT personnel on
site, such as an enterprise branch office or service provider central office, said
Michael Bushong, vice president of enterprise and cloud marketing at Juniper
Networks. “Naturally these places require remote and centralized delivery of
connectivity, visibility and security. SDN solutions that centralize and abstract control
and automate workflows across many places in the network, and their devices,
improve operational reliability, speed and experience,” Bushong said.
“If a key tenet of SDN is abstracted control over a fleet of infrastructure, then the
provisioning paradigm and dynamic control to regulate infrastructure state is
necessarily higher level,” Bushong said. “Policy is closer to declarative intent, moving
away from the minutia of individual device details and imperative and reactive
commands.” IDC says that intent-based networking “represents an evolution of SDN
to achieve even greater degrees of operational simplicity, automated intelligence,
and closed-loop functionality.”
For that reason, IBN represents a notable milestone on the journey toward
autonomous infrastructure that includes a self-driving network, which will function
much like the self-driving car, producing desired outcomes based on what network
operators and their organizations wish to accomplish, Casemore stated. “While the
self-driving car has been designed to deliver passengers safely to their destination
with minimal human intervention, the self-driving network, as part of autonomous
datacenter infrastructure, eventually will achieve similar outcomes in areas such as
network provisioning, management, and troubleshooting — delivering applications
and data, dynamically creating and altering network paths, and providing security
enforcement with minimal need for operator intervention,” Casemore stated.
While IBN technologies are relatively young, Gartner says by 2020, more than 1,000
large enterprises will use intent-based networking systems in production, up from
less than 15 in the second quarter of 2018.
How does SDN help customers with security?
SDN enables a variety of security benefits. A customer can split up a network
connection between an end user and the data center and have different security
settings for the various types of network traffic. A network could have one public-
facing, low security network that does not touch any sensitive information. Another
segment could have much more fine-grained remote access control with software-
based firewall and encryption policies on it, which allow sensitive data to traverse
over it.
“For example, if a customer has an IoT group it doesn’t feel is all that mature with
regards to security, via the SDN controller you can segment that group off away
from the critical high-value corporate traffic,” Capuano stated. “SDN users can roll
out security policies across the network from the data center to the edge and if you
do all of this on top of white boxes, deployments can be 30 – 60 percent cheaper
than traditional gear.”
The ability to look at a set of workloads and see if they match a given security policy
is a key benefit of SDN, especially as data is distributed, said Thomas Scheibe, vice
president of product management for Cisco’s Nexus and ACI product lines. "The
ability to deploy a whitelist security model like we do with ACI [Application Centric
Infrastructure] that lets only specific entities access explicit resources across your
network fabric is another key security element SDN enables," Scheibe said. A
growing number of SDN platforms now support microsegmentation, according to
Casemore. “In fact, micro-segmentation has developed as a notable use case for SDN.
As SDN platforms are extended to support multicloud environments, they will be
used to mitigate the inherent complexity of establishing and maintaining consistent
network and security policies across hybrid IT landscapes,” Casemore said.
What is SDN’s role in cloud computing?
SDN’s role in the move toward private cloud and hybrid cloud adoption seems a
natural. In fact, big SDN players such as Cisco, Juniper and VMware have all made
moves to tie together enterprise data center and cloud worlds. Cisco's ACI
Anywhere package would, for example, let policies configured through Cisco's SDN
APIC (Application Policy Infrastructure Controller) use native APIs offered by a
public-cloud provider to orchestrate changes within both the private and public
cloud environments, Cisco said. “As organizations look to scale their hybrid cloud
environments, it will be critical to leverage solutions that help improve productivity
and processes,” said Bob Laliberte, a senior analyst with Enterprise Strategy Group,
in a recent Network World article. “The ability to leverage the same solution, like
Cisco’s ACI, in your own private-cloud environment as well as across multiple public
clouds will enable organizations to successfully scale their cloud environments.”
Growth of public and private clouds and enterprises' embrace of distributed
multicloud application environments will have an ongoing and significant impact on
data center SDN, representing both a challenge and an opportunity for vendors,
said IDC’s Casemore. “Agility is a key attribute of digital transformation, and
enterprises will adopt architectures, infrastructures, and technologies that provide
for agile deployment, provisioning, and ongoing operational management. In a
datacenter networking context, the imperative of digital transformation drives
adoption of extensive network automation, including SDN,” Casemore said.
Where does SD-WAN fit in?
The software-defined wide area network (SD-WAN) is a natural application of SDN
that extends the technology over a WAN. While the SDN architecture is typically the
underpinning in a data center or campus, SD-WAN takes it a step further.
"SD-WAN has been a promised technology for years, but in 2019 it will be a major
driver in how networks are built and re-built," Anand Oswal, senior vice president of
engineering in Cisco’s Enterprise Networking Business, said a Network
World article earlier this year. It's a profoundly hot market with tons of players
including Cisco, VMware, Silver Peak, Riverbed, Aryaka, Fortinet, Nokia and Versa. IDC
says the SD-WAN infrastructure market will hit $4.5 billion by 2022, growing at a
more than 40% yearly clip between now and then.
From its VNI study, Cisco says that globally, SD-WAN traffic was 9% of business IP
WAN traffic in 2017 and will be 29% of business IP WAN traffic by 2022. In addition,
SD-WAN traffic will grow five-fold from 2017 to 2022, a compound annual growth
rate of 37%.
What is in the future for SDN?
Going forward there are a couple of developments to watch for, Cisco's Scheibe
said. One involves the increased ability to automate the provisioning of data center
services, to make it easier to horizontally extend access to data. The second
expected development is the ability to more easily allow customers to work across
domains to monitor and track what is going on across the infrastructure. According
to Cisco’s most recent Global Cloud Index research, SDN might streamline traffic
flows within the data center such that traffic is routed more efficiently than it is
today. “In theory, SDN allows for traffic handling policies to follow virtual machines
and containers, so that those elements can be moved within a data center in order
to minimize traffic in response to bandwidth bottlenecks,” Cisco stated.
Most major hyperscale data centers already employ flat architectures and SDN and
storage management, and adoption of SDN/NFV or network function
virtualization (which virtualizes network elements) within large-scale enterprise
data centers has been rapid, Cisco stated. Over two-thirds of data centers will adopt
SDN either fully or in a partial deployment by 2021. As a portion of traffic within the
data center, SDN/NFV is already transporting 23%, growing to 44% by 2021.
Cisco found that there are also ways in which SDN/NFV can lead to an increase in
both data center traffic and in general Internet traffic:
Traffic engineering enabled by SDN/ NFV supports very large data flows without
compromising short lived data flows, making it safe to transport large amounts of
data to and from big data clusters. SDN will allow video bitrates to increase, because
SDN can seek out the highest bandwidth available even midstream, instead of lowering the bitrate according to the available bandwidth for the duration of the video, as is done
today. The future of SDN is shaped by operational needs and software innovation,
Bushong said. “While trends like cloud-native design certainly impact SDN
engineering, the operational side of SDN is likely to really benefit from the innovation
happening around machine learning and AI, and these innovations also benefit from
the accelerated pace of software and hardware innovation happening in prominent
public clouds.”
A content delivery network or content distribution network (CDN) is a large
distributed system of servers deployed in multiple data centers
across the internet. The goal of a CDN is to serve content to end users with high
availability and high performance. A key capability of a CDN is capacity management: origin content will not be easily exhausted by requests from a wide geographic field.
Firewalls
Firewalls will not be effective right out of the box. Firewall rules must be defined correctly so as not to inadvertently grant unauthorized access. Like all hosts on a network, firewalls must be patched by administrators and have all unnecessary services disabled. Also, firewalls offer limited protection against vulnerabilities caused by application flaws in server software on other hosts. For example, a
firewall will not prevent an attacker from manipulating a database to disclose
confidential information.
Firewalls filter traffic based on a rule set. Each rule instructs the firewall to block or
forward a packet based on one or more conditions. For each incoming packet, the
firewall will look through its rule set for a rule whose conditions apply to that packet
and block or forward the packet as specified in that rule. Below are two important
conditions used to determine if a packet should be filtered.
By address: Firewalls will often use the packet’s source or destination address, or
both, to determine if the packet should
be filtered.
By service: Packets can also be filtered by service. The firewall inspects the service
the packet is using (if the packet is part of the Transmission Control Protocol (TCP) or
User Datagram Protocol (UDP), the service is the destination port number) to
determine if the packet should be filtered. For example, firewalls will often have a
rule to filter the Finger service to prevent an attacker from using it to gather
information about a host. Filtering by address and by service are often combined in
rules. If the engineering department wanted to grant anyone on the LAN access to its
web server, a rule could be defined to forward packets whose destination address is
the web server’s and the
service is HTTP (TCP port 80). Firewalls can change the source address of each
outgoing (from trusted to untrusted network) packet to a different address. This has
several applications, most notably to allow hosts with RFC 1918 addresses access to
the internet by changing their private address to one that is routable on the internet.
A private address is one that will not be forwarded by an internet router and,
therefore, remote attacks using private internal addresses cannot be launched over
the open internet. Anonymity is another reason to use network address translation
(NAT). Many organizations do not want to advertise their IP addresses to an
untrusted host and, thus, unnecessarily give information about the network. They
would rather hide the entire network behind translated addresses. NAT also greatly
extends the capabilities of organizations to continue using IPv4 address spaces.
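The Python sketch below is a simplified model of the rule matching described above; the addresses and rules are invented, and the last rule models a default deny.

```python
# Minimal sketch (addresses and rules are invented examples): matching a packet
# against an ordered rule set that filters by address and by service.
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    protocol: str   # "tcp" or "udp"
    dst_port: int

# First matching rule wins; the final rule is an explicit "deny everything else".
RULES = [
    # (action, dst address or "any", protocol or "any", dst port or None)
    ("forward", "192.0.2.10", "tcp", 80),   # allow HTTP to the engineering web server
    ("block",   "any",        "tcp", 79),   # block the Finger service everywhere
    ("block",   "any",        "any", None), # default deny
]

def decide(pkt: Packet) -> str:
    for action, dst, proto, port in RULES:
        if dst not in ("any", pkt.dst):
            continue
        if proto not in ("any", pkt.protocol):
            continue
        if port is not None and port != pkt.dst_port:
            continue
        return action
    return "block"

print(decide(Packet("10.0.0.5", "192.0.2.10", "tcp", 80)))   # forward
print(decide(Packet("10.0.0.5", "192.0.2.20", "tcp", 79)))   # block
```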
Static Packet Filtering
When a firewall uses static packet filtering, it examines each packet without regard
to the packet’s context in a session. Packets are examined against static criteria, for
example, blocking all packets with a port number of 79 (finger). Because of its
simplicity, static packet filtering requires very little overhead, but it has a significant
disadvantage. Static rules cannot be temporarily changed by the firewall to
accommodate legitimate traffic. If a protocol requires a port to be temporarily
opened, administrators must choose between permanently opening the port and
disallowing the protocol.
Stateful Inspection or Dynamic Packet Filtering
Stateful inspection examines each packet in the context of a session that allows it to
make dynamic adjustments to the rules to accommodate legitimate traffic and
block malicious traffic that would appear benign to a static filter. For example, if a user sends a SYN request to a server and receives a SYN-ACK back from the server, the next appropriate frame to send is an ACK. If the user sends another SYN request instead, the stateful inspection device will see and reject this "inappropriate" packet.
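A minimal Python sketch of that behavior, tracking a single simplified flow with invented addresses (the reply direction is collapsed into one flow for brevity), is shown below.

```python
# Minimal sketch (single flow, invented addresses, reply direction simplified):
# a stateful filter tracking the TCP handshake so an out-of-state second SYN is rejected.
EXPECTED = {
    ("NONE", "SYN"): "SYN_SENT",
    ("SYN_SENT", "SYN-ACK"): "SYN_RECEIVED",
    ("SYN_RECEIVED", "ACK"): "ESTABLISHED",
}
state_table = {}   # (src, dst, dst_port) -> current connection state

def inspect(src, dst, dst_port, flags):
    key = (src, dst, dst_port)
    state = state_table.get(key, "NONE")
    next_state = EXPECTED.get((state, flags))
    if next_state is None:
        return "reject"                 # packet does not fit the expected sequence
    state_table[key] = next_state
    return "forward"

print(inspect("10.0.0.5", "192.0.2.10", 80, "SYN"))       # forward
print(inspect("10.0.0.5", "192.0.2.10", 80, "SYN-ACK"))   # forward
print(inspect("10.0.0.5", "192.0.2.10", 80, "SYN"))       # reject: duplicate, out-of-state SYN
```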
Next-generation firewalls (NGFWs) are deep-packet inspection firewalls that move
beyond port/protocol inspection and blocking to add application-level inspection,
intrusion prevention, along with malware awareness and prevention. NGFWs are
not the same as intrusion prevention system (IPS) stand-alone devices or even
firewalls that are simply integrating IPS capabilities. Included in what is called the
third generation of firewall technology is in-line deep inspection of traffic,
application programming interface (API) gateways, and Database Activity
Monitoring.
Intrusion Detection and Prevention Systems (IDS/IPS)
Intrusion detection systems (IDSs) monitor activity and send alerts when they
detect suspicious traffic. There are two broad classifications of IDS/IPS: network-based, which monitor traffic on a network segment, and host-based, which monitor activity on a particular system.
Currently, there are two approaches to the deployment and use of IDSs.
An appliance on the network can monitor traffic for attacks based on a set of
signatures (analogous to antivirus software), or the appliance can watch the
network’s traffic for a while, learn what traffic patterns are normal and send an alert
when it detects an anomaly. Of course, the IDS can be deployed using a hybrid of
the two approaches as well.
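The Python sketch below caricatures the two approaches: a signature match against known-bad patterns and an anomaly check against a learned baseline (both the signatures and the threshold are invented).

```python
# Minimal sketch (thresholds and signatures are invented): the two IDS approaches
# described above -- signature matching and anomaly detection against a learned baseline.
SIGNATURES = [b"' OR 1=1 --", b"/etc/passwd"]          # known-bad payload patterns
BASELINE_PPS = 120                                     # packets/second learned as "normal"

def signature_alert(payload: bytes) -> bool:
    return any(sig in payload for sig in SIGNATURES)

def anomaly_alert(observed_pps: float, tolerance: float = 3.0) -> bool:
    return observed_pps > BASELINE_PPS * tolerance     # crude deviation-from-baseline check

print(signature_alert(b"GET /index.php?id=' OR 1=1 --"))   # True: matches a known signature
print(anomaly_alert(observed_pps=900))                     # True: traffic far above the baseline
```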
Independent of the approach, how an organization uses an IDS determines whether the tool is effective. Despite its name, the IDS should not be relied upon to stop intrusions, because IDS solutions are not designed to take preventative actions as part of their response. Instead, it should send an alert when it detects interesting, abnormal traffic that could be a prelude to an attack.
Some also use heuristics to evaluate the behavior of network traffic to determine whether it is intended to be malicious. Most modern systems combine two
or more of these techniques together to provide a more accurate analysis before it
decides whether it sees an attack or not.
In most cases, there will continue to be problems associated with false positives as
well as false-negatives. False-positives occur when the IDS
or IPS identifies something as an attack, but it is in fact normal traffic. False-
negatives occur when the IPS or IDS fails to interpret something as an attack when it
should have. In these cases, intrusion systems must be carefully “tuned” to ensure
that these are kept to a minimum.
An IDS requires frequent attention. An IDS requires the response of a human who is
knowledgeable enough with the system and types of
normal activity to make an educated judgment about the relevance and significance
of the event. Alerts need to be investigated to determine if they represent an actual event or if they are simply background noise.
Whitelisting/blacklisting: A whitelist is a list of email addresses and/or internet addresses that someone knows as "good" senders. A blacklist is a corresponding list of known "bad" senders. An email from an unrecognized sender is on neither the whitelist nor the blacklist and, therefore, is treated differently. Greylisting works by telling the sending email server to resend the message sometime soon. Many spammers set their software to blindly transmit their spam email, and the software does not understand the "resend soon" message. Thus, the spam would never actually be delivered.
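The following Python sketch combines the three checks in a simplified form; the addresses and the retry delay are invented.

```python
# Minimal sketch (invented data, simplified): combining whitelist/blacklist checks
# with greylisting, which temporarily rejects unknown senders and accepts retries.
import time

WHITELIST = {"partner@example.com"}
BLACKLIST = {"spammer@badhost.invalid"}
GREYLIST_DELAY = 300          # seconds an unknown sender must wait before retrying
first_seen = {}               # sender -> time of first delivery attempt

def handle_message(sender: str) -> str:
    if sender in WHITELIST:
        return "accept"
    if sender in BLACKLIST:
        return "reject"
    now = time.time()
    if sender not in first_seen:
        first_seen[sender] = now
        return "tempfail (451): try again later"   # naive spamware never retries
    if now - first_seen[sender] >= GREYLIST_DELAY:
        return "accept"                            # a real mail server retried after the delay
    return "tempfail (451): try again later"

print(handle_message("newcontact@example.org"))   # tempfail on first attempt
print(handle_message("partner@example.com"))      # accept (whitelisted)
```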
Port Address Translation (PAT)
An extension to network address translation (NAT), which translates all addresses to
one externally routable IP address, is to use port address translation (PAT) to
translate the source port number for an external service. The port translation keeps
track of multiple sessions that are accessing the internet.
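The Python sketch below models the translation table that makes this possible; the addresses and port pool are invented.

```python
# Minimal sketch (invented addresses): a PAT table mapping many internal
# (address, port) pairs onto one routable address with unique source ports.
import itertools

EXTERNAL_IP = "203.0.113.7"                 # the single externally routable address
next_port = itertools.count(50000)          # pool of translated source ports
pat_table = {}                              # (internal ip, internal port) -> external port

def translate_outbound(int_ip: str, int_port: int):
    key = (int_ip, int_port)
    if key not in pat_table:
        pat_table[key] = next(next_port)    # remember the session for return traffic
    return EXTERNAL_IP, pat_table[key]

print(translate_outbound("192.168.1.10", 51515))   # ('203.0.113.7', 50000)
print(translate_outbound("192.168.1.11", 51515))   # ('203.0.113.7', 50001) - same port, other host
print(translate_outbound("192.168.1.10", 51515))   # ('203.0.113.7', 50000) - existing session reused
```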
Proxy Firewall
A proxy firewall mediates communications between untrusted endpoints
(servers/hosts/clients) and trusted endpoints (servers/hosts/clients). From an
internal perspective, a proxy may forward traffic from known, internal client
machines to untrusted hosts on the internet, creating the illusion for the untrusted
host that the traffic originated from the proxy firewall, thus, hiding the trusted
internal client from potential attackers. To the user, it appears that they are
communicating directly with the untrusted server. Proxy servers are often placed at
internet gateways to hide the internal network behind one IP address and to
prevent direct communication between internal and external hosts.
Endpoint Security
Workstations should be hardened, and users should be using limited access
accounts whenever possible in accordance with the concept of “least privilege.”
While workstations are clearly what most people will associate with endpoint
attacks, the landscape is changing. Mobile devices, such as smart phones, tablets
etc., are beginning to make up more and more of the average organization’s
endpoints. With this additional diversity of devices, there becomes a requirement
for the security architect to also increase the diversity and agility of an
organization’s endpoint defenses.
For mobile devices such as smart phones and tablets, consider the following:
• Encryption for the whole device, or if not possible, then at least encryption for
sensitive information held on the device
• Device virtualization/sandboxing
• Remote management capabilities including the following:
o Remote wipe
o Remote geo locate
o Remote update
o Remote operation
• User policies and agreements that ensure an organization can manage the device
or seize it for legal hold
Voice over Internet Protocol (VoIP) is a technology that allows you to make voice
calls using a broadband internet connection instead of a regular (or analog) phone
line. VoIP is simply the transmission of voice traffic over IP-based networks. VoIP is
also the foundation for more advanced unified communications applications such
as web and video conferencing. VoIP systems are based on the use of the Session
Initiation Protocol (SIP), which is the recognized standard. Any SIP compatible
device can talk to any other. In all VoIP systems, your voice is converted into packets
of data and then transmitted to the recipient over the internet and decoded back
into your voice at the other end. To make it quicker, these packets are compressed
before transmission with certain codecs, almost like zipping a file on the fly. There
are many codecs with diverse ways of achieving compression and managing
bitrates, thus, each codec has its own bandwidth requirements and provides
different voice quality for VoIP calls.
VoIP systems employ session control and signaling protocols to control the
signaling, set-up, and tear-down of calls. A codec is software that
encodes audio signals into digital frames and vice versa. Codecs are characterized
by different sampling rates and resolutions. Different
codecs employ different compression methods and algorithms, using different
bandwidth and computational requirements.
Session Initiation Protocol (SIP)
As its name implies, SIP is designed to manage multimedia connections. SIP is
designed to support digest authentication structured by realms, like HTTP (basic
username/password authentication has been removed from the protocol as of RFC
3261). In addition, SIP provides integrity protection through MD5 hash functions.
SIP supports a variety of encryption mechanisms, such as TLS. Privacy extensions to
SIP, including encryption and caller ID suppression, have been defined in extensions
to the original Session Initiation Protocol (RFC 3325).
VoIP Problems
Packet loss: A technique called packet loss concealment (PLC) is used in VoIP
communications to mask the effect of dropped packets. There are several
techniques that may be used by different implementations:
• Zero substitution is the simplest PLC technique and requires the least computational resources. These simple algorithms generally provide the lowest quality sound when a considerable number of packets are discarded.
• Filling empty spaces with artificially generated, substitute sound. The more advanced algorithms interpolate the gaps, producing the best sound quality at the cost of using extra computational resources. The best implementations can tolerate up to 20 percent of packets lost without significant degradation of voice quality.
While some PLC techniques work better than others, no masking technique can
compensate for a significant loss of packets. When bursts of packets are lost due to
network congestion, noticeable degradation of call quality occurs.
In VoIP, packets can be discarded for many reasons, including network congestion,
line errors, and late arrival. The network architect and security practitioner need to
work together to select the right PLC technique that best matches the characteristics
of an environment, as well as to ensure that they implement measures to reduce
packet loss on the network.
Jitter: Unlike network delay, jitter does not occur because of the packet delay but
because of a variation of packet timing. As VoIP endpoints try to compensate for jitter
by increasing the size of the packet buffer, jitter causes delays in the conversation. If
the variation becomes too high and exceeds 150ms, callers notice the delay and
often revert to a walkie-talkie style of conversation. Reducing the delays on the
network helps keep the buffer under 150ms even if a significant variation is present.
While the reduced delay does not necessarily remove the variation, it still effectively
reduces the degree to which the effect is pronounced and brings it to the point
where it’s unnoticeable by the callers. Prioritizing VoIP traffic and implementing
bandwidth shaping also helps reduce the variation of packet delay. At the endpoint, it
is essential to optimize jitter buffering. While greater buffers reduce and remove the
jitter, anything over 150ms noticeably affects the perceived quality of the
conversation. Adaptive algorithms to control buffer size depending on the current
network conditions are often quite effective. Fiddling with packet size (payload) or
using a different codec often helps control jitter as well.
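As a rough illustration, the Python sketch below (with invented timestamps) measures the variation in packet inter-arrival times that a jitter buffer must absorb.

```python
# Minimal sketch (timestamps are invented): measuring inter-arrival jitter for a
# stream of VoIP packets and comparing it to the ~150 ms comfort threshold.
arrival_times_ms = [0, 21, 39, 62, 78, 102, 260, 281]   # when each packet actually arrived
PACKET_INTERVAL_MS = 20                                  # the sender emits a packet every 20 ms

deltas = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
jitter = [abs(d - PACKET_INTERVAL_MS) for d in deltas]   # variation from the expected spacing

print(deltas)              # [21, 18, 23, 16, 24, 158, 21]
print(max(jitter))         # 138 -> the 158 ms gap is the spike a jitter buffer must absorb
print(max(jitter) > 150)   # False here, but a larger spike would be noticeable to callers
```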
Sequence errors: Routed networks will send packets along the best possible path at
this moment. That means packets will, on occasion, arrive in a different order than
transmitted. This will cause a degradation in the call quality.
Peer-to-Peer (P2P) Applications and Protocols
Peer-to-peer (P2P) applications are often designed to open an uncontrolled channel
through network boundaries (normally through tunneling). Therefore, they provide
a way for dangerous content, such as botnets, spyware applications, and viruses, to
enter an otherwise protected network. Because P2P networks can be established
and managed using a series of multiple, overlapping master and slave nodes, they
can be very difficult to fully detect and shut down. If one master node is detected
and shut down, the "bot herder" who controls the P2P botnet can make one of the
slave nodes a master and use that as a redundant staging point, allowing for botnet
operations to continue unimpeded.
Instant Messaging
Instant messaging systems can generally be categorized in three classes:
• P2P networks
• Brokered communication
• Server-oriented networks
All these classes will support basic “chat” services on a one-to-one basis and
frequently on a many-to-many basis. Most instant messaging
applications do offer additional services beyond their text messaging capability, for
instance, screen sharing, remote control, exchange of files, and voice and video
conversation. Some applications even allow command scripting. Instant messaging
and chat is increasingly considered a significant business application used for office
communications, customer support, and “presence” applications. Instant message
capabilities will frequently be deployed with a bundle of other IP-based services such
as VoIP and video conferencing support.
Internet Relay Chat (IRC)
Internet Relay Chat (IRC) is a client/server-based network. This is a common method
of communicating today. IRC is unencrypted and, therefore, an easy target for
sniffing attacks. The basic architecture of IRC, founded on trust among servers,
enables special forms of denial-of-service attacks. For instance, a malicious user can
hijack a channel while a server or group of servers has been disconnected from the
rest (net split). IRC is also a common platform for social engineering attacks aimed
at inexperienced or technically unskilled users. While there are many business and
personal benefits and efficiencies to be gained from adopting instant
messaging/chat/IRC technologies, there are also many risks.
Authenticity: User identification can be easily faked in instant messaging and chat
applications by the following:
• Choosing a nickname or account name that impersonates someone else, thereby assuming a wrong identity.
• The continued growth of social-networking services and sites like Facebook, Vine, KiK, Twitter, LinkedIn, and others presents ample opportunity to create false identities and to try to dupe others for criminal purposes.
Remote-Access Services
The services described under this section are present in many UNIX operations and,
when combined with Network File System (NFS) and Network Information Service
(NIS), provide the user with seamless remote working capabilities. However, they
also form a risky combination if not configured and managed properly.
Conceptually, because they are built on mutual trust, they can be misused to obtain
access and to horizontally and vertically escalate privileges in an attack. Their
authentication and transmission capabilities are insecure by design; therefore, they
have to be retrofitted (as with X11) or replaced altogether (as TELNET and rlogin have been by SSH).
1
TELNET is a command line protocol designed to give command line access to another
host. Although implementations for Windows exist, TELNET’s original domain was the
UNIX server world, and in fact, a TELNET server is standard equipment for any UNIX
server. (Whether it should be enabled is another question entirely, but in small LAN
environments, TELNET is still widely used.)
TELNET:
• Offers little security, and indeed, its use poses serious security risks in untrusted
environments.
• Is limited to username/password authentication.
• Does not offer encryption.
Once an attacker has obtained even a low-level user’s credentials, they have a trivial
path toward privilege escalation because they can transfer data to and from a
machine, as well as execute commands. As the TELNET server is running under
system privileges, it is an attractive target of attack in itself; exploits in TELNET
servers pave the way to system privileges for an attacker. Therefore, it is
recommended that security practitioners discontinue the use of TELNET over the
internet and on internet-facing machines. In fact, the standard hardening procedure
for any internet-facing server should include disabling its TELNET service, which under
UNIX systems normally runs as telnetd, and using SSHv2 for
remote administration and management where required.
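As an illustration of the recommended replacement, the following sketch uses the third-party Python library paramiko to run a single command over SSHv2. The hostname, account, and key path are placeholders, and a production script would verify host keys against a known_hosts file rather than auto-accepting them.

import os
import paramiko

# Minimal SSHv2 session in place of TELNET: the channel is encrypted and the
# client authenticates with a key pair rather than a cleartext password.
client = paramiko.SSHClient()
client.load_system_host_keys()
# For this sketch only; production code should reject unknown host keys.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

client.connect(
    "server.example.org",                                  # placeholder host
    username="admin",                                       # placeholder account
    key_filename=os.path.expanduser("~/.ssh/id_ed25519"),   # placeholder key
)

stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()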
Remote Log-in (rlogin), Remote Shell (rsh), Remote Copy (rcp)
In its most generic form, rlogin is a protocol used for granting remote access to a
machine, normally a UNIX server. Similarly, rsh grants direct remote command execution
while rcp copies data from or to a remote machine. If an rlogin daemon (rlogind) is
running on a machine, rlogin access can be granted in two ways: through the system-wide
/etc/hosts.equiv file maintained by the administrator, or through a per-user .rhosts file
in the user’s home directory.
By the latter, a user may grant access that was not permitted by the system
administrator. The same mechanism applies to rsh and rcp, although they rely
on a different daemon (rshd). Authentication can be considered host/IP address
based. Although rlogin grants access based on user ID, it is not verified; i.e., the ID a
remote client claims to possess is taken for granted if the request comes from a
trusted host. The rlogin protocol transmits data without encryption and is hence
subject to eavesdropping and interception.
1
The rlogin protocol is of limited value—its main benefit can be considered its main
drawback: remote access without supplying a password. It should only be used in
trusted networks, if at all. A more secure replacement is available in the form of
SSHv2 for rlogin, rsh, and rcp.
1
Screen Scraper
A screen scraper is a program that can extract data from output on a display
intended for a human. Screen scrapers are used in a legitimate fashion when older
technologies are unable to interface with modern ones. In a nefarious sense, this
technology can also be used to capture images from a user’s computer such as PIN
pad sequences at a banking website when implemented by a virus or malware.
1
Virtual Applications and Desktops
Virtual Network Terminal Services
Virtual terminal service is a tool frequently used for remote access to server
resources. Virtual terminal services allow the desktop environment for a server to
be exported to a remote workstation. This allows users at the remote workstation
to execute desktop commands as though they were sitting at the server terminal
interface in person. The advantage of terminal services such as those provided by
Citrix, Microsoft, or public domain virtual network computing (VNC) services is that
they allow for complex administrative commands to be executed using the native
interface of the server, rather than a command-line interface, which might be
available through SSHv2 or telnet. Terminal services also allow for the
authentication and authorization services integrated into the server to be leveraged
for remote users, along with all the logging and auditing features of the server.
1
Virtual Private Network (VPN)
A virtual private network (VPN) is a point-to-point connection that extends a private
network across a public network. The most common security definition is an
encrypted tunnel between two hosts, although a VPN does not have to be encrypted.
A tunnel is the encapsulation of one protocol inside another. Remote users employ VPNs to access
their organization’s network securely.
Depending on the VPN’s implementation, they may have most of the same
resources available to them as if they were physically at the office. As an alternative
to expensive dedicated point-to-point connections, organizations use gateway-to-
gateway VPNs to securely transmit information over the internet between sites or
even with business partners.
1
Telecommuting
Common issues such as visitor control, physical security, and network control are
almost impossible to address with teleworkers. Strong VPN connections between
the teleworker and the organization need to be established, and full device
encryption should be the norm for protecting sensitive information.
If the user works in public places or a home office, the following should also be
considered:
• Is the user trained to use secure connectivity software and methods such as a
VPN?
• Does the user know which information is sensitive or valuable and why someone
might wish to steal or modify it?
• Is the user’s physical location appropriately secure for the type of work and type
of information they are using?
• Who else has access to the area? While a child may seem trusted, the child’s
friends may not be.
1
Tunneling
Point-to-Point Tunneling Protocol (PPTP)
Point-to-Point Tunneling Protocol (PPTP) is a tunneling protocol that runs over other
protocols. PPTP relies on Generic Routing Encapsulation (GRE) to build the tunnel
between the endpoints. The security architect and practitioner both need to consider
known weaknesses, such as the issues identified with PPTP, when planning for the
deployment and use of remote access technologies. PPTP is based on the Point-to-Point
Protocol (PPP), so it does offer authentication by way of Password Authentication
Protocol (PAP), Challenge-Handshake Authentication Protocol (CHAP), or Extensible
Authentication Protocol (EAP).
2
IP security (IPSec) is a suite of protocols for communicating securely with IP by
providing mechanisms for authentication and encryption. Standard IPSec only
authenticates hosts with each other. If an organization requires users to
authenticate, they must employ a nonstandard proprietary IPSec implementation,
or use IPSec over Layer 2 Tunneling Protocol (L2TP).
The latter approach uses L2TP to authenticate the users and encapsulate IPSec
packets within an L2TP tunnel. Because IPSec interprets the change of IP address
within packet headers as an attack, NAT does not work well with IPSec. To resolve
the incompatibility of the two protocols, NAT Traversal (NAT-T) encapsulates IPSec
within UDP on port 4500 (see RFC 3948 for details; the RFC itself is worth reading).
3
Authentication Header (AH) The Authentication Header (AH) is used to prove the
identity of the origin node and ensure that the transmitted data has not been
tampered with. Before each packet (headers + data) is transmitted, a hash value of
the packet’s contents (except for the fields that are expected to change when the
packet is routed) based on a shared secret is inserted in the last field of the AH. The
endpoints negotiate which hashing algorithm to use and the shared secret when
they establish their security association. To help thwart replay attacks (when a
legitimate session is retransmitted to gain unauthorized access), each packet that is
transmitted during a security association has a sequence number that is stored in
the AH. In transport mode, the AH is inserted between the packet’s IP and TCP
header. The AH helps ensure authenticity and integrity, not confidentiality.
Encryption is implemented through the use of encapsulating security payload (ESP).
• ESP header: Contains information showing which security association to use and
the packet sequence number. Like the AH, the ESP sequences every packet to
thwart replay attacks.
• ESP payload: The payload contains the encrypted part of the packet. If the
encryption algorithm requires an initialization vector (IV), it is included with the
payload. The endpoints negotiate which encryption to use when the security
association is established. Because packets must be encrypted with as little
overhead as possible, ESP typically uses a symmetric encryption algorithm.
• ESP trailer: May include padding (filler bytes) if required by the encryption
algorithm or to align fields.
• Authentication: If authentication is used, this field contains the integrity check
value (hash) of the ESP packet. As with the AH, the authentication algorithm is
negotiated when the endpoints establish their security association.
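To make the AH integrity check concrete, the sketch below computes a keyed hash (HMAC) over the immutable parts of a packet together with a per-packet sequence number, in the spirit of the integrity check value described above. The field contents and the choice of SHA-256 are illustrative assumptions rather than the actual AH wire format; ESP would additionally encrypt the payload with the symmetric cipher negotiated in the same security association.

import hmac
import hashlib

SHARED_SECRET = b"negotiated-during-SA-setup"   # placeholder key agreed in the SA

def ah_icv(immutable_headers: bytes, payload: bytes, sequence: int) -> bytes:
    """Integrity check value: keyed hash over headers + payload + sequence number."""
    data = immutable_headers + payload + sequence.to_bytes(4, "big")
    return hmac.new(SHARED_SECRET, data, hashlib.sha256).digest()

# Sender computes the ICV and places it in the AH; each packet carries a new sequence number.
seq = 1
icv = ah_icv(b"ip+tcp-headers", b"application data", seq)

# Receiver recomputes and compares; an altered (or replayed, wrong-sequence) packet fails the check.
assert hmac.compare_digest(icv, ah_icv(b"ip+tcp-headers", b"application data", seq))
assert not hmac.compare_digest(icv, ah_icv(b"ip+tcp-headers", b"tampered data", seq))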
4
Security Associations (SAs)
A security association (SA) defines the mechanisms that an endpoint will use to
communicate with its partner. All SAs cover transmissions in one direction only. A
second SA must be defined for two-way communication. Mechanisms that are
defined in the SA include the encryption and authentication algorithms and
whether to use the AH or ESP protocol. Deferring the mechanisms to the SA, as
opposed to specifying them in the protocol, allows the communicating partners to
use the appropriate mechanisms based on situational risk.
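A security association can be pictured as a small record of negotiated mechanisms. The structure below is purely illustrative (the field names are assumptions, not an actual IKE or kernel data layout) and shows why two-way traffic needs a pair of SAs.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SecurityAssociation:
    """Conceptual SA: the mechanisms one endpoint uses toward its partner."""
    spi: int                    # security parameter index identifying this SA
    protocol: str               # "AH" or "ESP"
    encryption: Optional[str]   # e.g. "AES-256-CBC"; None for AH, which does not encrypt
    integrity: str              # e.g. "HMAC-SHA256"
    direction: str              # SAs are unidirectional: "outbound" or "inbound"

# Two-way communication requires a pair of SAs, one per direction.
outbound = SecurityAssociation(spi=0x1001, protocol="ESP", encryption="AES-256-CBC",
                               integrity="HMAC-SHA256", direction="outbound")
inbound = SecurityAssociation(spi=0x2002, protocol="ESP", encryption="AES-256-CBC",
                              integrity="HMAC-SHA256", direction="inbound")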
5
Transport Mode and Tunnel Mode
Endpoints communicate with IPSec using either transport or tunnel mode. In
transport mode, the IP payload is protected. This mode is mostly used for end-to-
end protection, for example, between client and server.
In tunnel mode, the IP payload and its IP header are protected. The entire protected
IP packet becomes a payload of a new IP packet and header. Tunnel mode is often
used between networks, such as with firewall-to-firewall VPNs.
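The difference between the two modes can be pictured with a toy encapsulation sketch; this is purely conceptual, since real IPSec builds binary headers rather than Python dictionaries, and the addresses are placeholders.

# Conceptual view of what each mode protects (not real packet formats).
original = {"ip_header": {"src": "10.0.0.5", "dst": "10.0.1.9"},
            "payload": "application data"}

# Transport mode: the original IP header stays visible; only the payload is protected.
transport_mode = {"ip_header": original["ip_header"],
                  "esp": {"protected": original["payload"]}}

# Tunnel mode: the whole original packet (header + payload) is protected and
# wrapped in a new IP header, e.g. between two VPN gateways.
tunnel_mode = {"new_ip_header": {"src": "gateway-a", "dst": "gateway-b"},
               "esp": {"protected": original}}

print(transport_mode["ip_header"])    # end hosts remain visible on the wire
print(tunnel_mode["new_ip_header"])   # only the gateways are visible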
6
Internet Key Exchange (IKE)
Internet key exchange (IKE) allows two devices to “exchange” the symmetric keys used
for encryption and integrity protection in AH or ESP. There are two ways to “exchange” keys:
7
SSL VPNs are another approach to remote access. Instead of building a VPN around
the IPSec and the network layer, SSL VPNs leverage SSL/TLS to create a tunnel back
to the home office. SSL 3.0 (Secure Sockets Layer) and TLS 1.2 (Transport Layer
Security) are closely related, with SSL being a session encryption tool
originally developed by Netscape and TLS being the open-standard IETF
successor to SSL 3.0. SSL and TLS can use public key certificates to authenticate each
endpoint through mutual authentication.
Remote users employ a web browser to access applications that are in the
organization’s network. Even though users employ a web browser, SSL VPNs are not
restricted to applications that use HTTP. With the aid of plug-ins, such as Java, users
can access back-end databases and other non-web-based applications. SSL
VPNs have several advantages over IPSec. They are easier to deploy on client
workstations than IPSec because they require a web browser only, and almost all
networks permit outgoing HTTP. SSL VPNs can be operated through a proxy server.
In addition, applications can restrict users’ access based on criteria, such as the
network the user is on, which is useful for building extranets with several
organizations.
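As a small illustration of the TLS session an SSL VPN is built on, the sketch below uses Python's standard ssl module to open a certificate-verified connection to a placeholder gateway name; a real SSL VPN product layers its own tunneling and access control on top of a session like this.

import socket
import ssl

GATEWAY = "vpn.example.org"   # placeholder SSL VPN gateway name

# The default context verifies the server certificate against the system trust store,
# which is how the client authenticates the gateway before sending traffic through it.
context = ssl.create_default_context()
# For mutual authentication, the client would also present its own certificate, e.g.:
# context.load_cert_chain(certfile="client.pem", keyfile="client.key")  # placeholder paths

with socket.create_connection((GATEWAY, 443), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=GATEWAY) as tls:
        print("Negotiated protocol:", tls.version())          # e.g. "TLSv1.2" or "TLSv1.3"
        print("Server certificate subject:", tls.getpeercert().get("subject"))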
8
9