Monitoring Network Traffic and
Supervisors:
Dr. ir. Andrea Continella
Msc. Chakshu Gupta
Ir. Frank Fransen
Msc. Luca Morgese
Contents

1 Introduction
2 Background
   2.1 Security of Internet of Things devices
   2.2 5G & Network Slicing
      2.2.1 Network Function Virtualization
      2.2.2 Software-Defined Networking
      2.2.3 Security of 5G Network Slicing
   2.3 IoT & 5G Network Slicing
3 Related Work
   3.1 Securing Mobile Networks
      3.1.1 SDN/NFV-based Approaches
      3.1.2 Network Slicing-based Approaches
   3.2 Response Actions in Network Security
      3.2.1 Response Actions in 5G Core Networks
   3.3 Research Gap
4 Approach
   4.1 Goals
   4.2 Network Traffic Monitoring
   4.3 Response Actions
   4.4 Evaluation Approach
5 Implementation
   5.1 System Architecture
   5.2 System Details
6 Evaluation
   6.1 Latency Testing
   6.2 Attack Detection Testing
   6.3 Response Action Testing
7 Results
   7.1 Latency
   7.2 Attacks
   7.3 Response
   7.4 Discussion
8 Conclusion
   8.1 Summary
   8.2 Limitations
   8.3 Future Work
9 List of acronyms
References
Chapter 1
Introduction
The Internet of Things (IoT) has become an ever-growing presence in our daily lives.
New devices and products are created almost daily, and the world of IoT shows no
signs of slowing its immense growth anytime soon. IoT devices
are also being introduced into many new domains, from automotive to industrial
to the healthcare domain. These different domains present new challenges and
goals for IoT to satisfy. For example, privacy is especially important in the health-
care domain and availability is especially important in the automotive and industrial
domains. Problematically, many devices in the IoT ecosystem suffer from subpar
security due to several limitations present in such devices. IoT devices are often not
designed with security in mind and they often have low computational power, which
inhibits the use of host-based features that add security, as well as the use of costly
encryption techniques [1]–[5]. Additionally, they often do not have robust updating
systems in place [6]. As such, IoT devices have become an interesting target for
malicious actors. The Mirai botnet attacks in 2016 are a good example of the impact
these devices can have when many of them are compromised. The Mirai botnet
attacks were a series of distributed denial of service (DDoS) attacks performed by
an enormous botnet of IoT devices which took down several popular websites [7].
These incidents also indicate that it is imperative to increase the security of the IoT
ecosystem to prevent more large-scale attacks and malicious events.
There has been a great deal of research into improving the security of IoT devices using vari-
ous approaches to detect and react to threats. One of the most interesting options
is the use of network-based traffic monitoring. Such an approach does not require
changes to the software running on the IoT devices, and can be administered by
the network operator. Many previous works present methods for performing threat
detection in this way for home and enterprise networks [4], [6], [8]–[19]. There have
also been several works investigating the possibilities for securing IoT devices con-
nected to mobile networks, i.e., fourth generation (4G), fifth generation (5G), and sim-
ilar networks [2], [20]–[26]. However, there are no widely adopted solutions that
provide IoT threat detection in mobile networks. As such, IoT devices connected in
this manner are not well protected.
In the context of mobile networking standards we have now arrived at the start-
ing point of widespread 5G implementation. This generation of mobile networks
has brought many new technologies and possibilities, such as network slicing and
higher bandwidth capabilities. 5G is envisioned to use these technologies to sup-
port many different use cases. One of the main new visions for 5G is that it will allow
for many new applications for IoT devices, as network speeds and new technologies
are geared towards enabling these additional uses for IoT. Examples include the au-
tomotive industry and healthcare domain. 5G supports these use cases better than
previous mobile networks because it is much faster, enables lower latency, and gen-
erally supports more devices connected at the same time. In addition, 5G has three
modes of operation, one of which is massive machine-type communication (MMTC).
MMTC uses various Low-power wide-area network (LPWAN) technologies, includ-
ing Narrowband IoT (NB-IoT). These technologies are ideal for low-powered de-
vices. MMTC is also focused on intermittent transmissions of small sizes, making
it ideal for smart sensors. Another of the modes of operation of 5G is ultra-reliable
low latency communication (URLLC), which aims to provide extremely low latency
communication. This would be ideal for various IoT devices to support emergency
services, smart vehicles, and (industrial) robotics [27]. The adoption of 5G has been
accelerating in recent years, and it is expected to keep increasing in the com-
ing years. Currently, there are over 500 million active 5G subscriptions. By 2027,
it is expected that there will be roughly 4.4 billion 5G subscriptions. More than 180
service providers have launched commercial 5G services, mainly in Asia and North
America [28].
The current work aims to investigate one of the options for improving the security
of a 5G network slice containing IoT devices from the point of view of the mobile
network operator i.e., by monitoring traffic and automatically responding to detected
malicious behaviour. We also aim to investigate the speed of a traffic monitoring so-
lution where the traffic monitoring agent runs separately from the routing device and
the speed of applying a specific response action (a packet detection rule/forwarding action rule (PDR/FAR) change) automatically.
We use 5G components that comply with the 3rd Generation Partnership Project
(3GPP) specifications to set up a testing infrastructure. On top of this testing in-
frastructure we implement a traffic monitoring system using Suricata (https://suricata.io/). We do this
by adding a separate monitoring agent where Suricata runs as an intrusion de-
tection system (IDS) as opposed to running Suricata as an intrusion prevention
system (IPS) on the device that routes traffic. The efficiency of this approach is
evaluated by simulating a set of attacks and measuring the detection latency. In
addition, we suggest several response actions available to the 5G network provider
using components of the 5G core network (see section 3.2.1). We implement one
of these response actions (PDR/FAR change) in the detection infrastructure. The
latency components present in such an automated response setup are also inves-
tigated by testing with a simulated attack. We find that the proposed system works
and can detect attacks in a reasonable timeframe (∼11,32 milliseconds on average,
with slight variation depending on the attack). The automated response also works
in a reasonable timeframe (∼37,10 milliseconds between alert and isolation). Gen-
eralising these results, we find that the combined solution is only effective against
attacks that take more than ∼50 milliseconds on average. Attacks that complete in
a shorter timespan are not detected in time to apply an immediate response, but the
device is isolated shortly after the attack ends.
Chapter 2 further explains some of the relevant concepts for the project, and goes
further in depth on network slicing and IoT. Chapter 3 investigates existing solutions
for network monitoring of IoT, both with regards to mobile networks and network slic-
ing. Chapter 4 describes the theoretical approach taken for this work, while chapter
5 describes how this approach was implemented. Chapter 6 describes how the im-
plementation is evaluated. Chapter 7 describes the results of testing the monitoring
solution and the response action using the testbed implementation. Chapter 8 con-
cludes the paper with a summary of results, limitations and suggestions for future
work.
Chapter 2
Background
The sections below describe some general background relevant to the project. Sec-
tion 2.1 describes some of the properties and trends of security in IoT devices.
Section 2.2 describes the structure of 5G networks and further introduces network
slicing, as well as some of the technologies that facilitate it. Section 2.3 describes
what 5G network slicing can mean for the IoT domain.
2.1 Security of Internet of Things devices

There are several reasons for the subpar security of many IoT devices. Firstly, IoT
devices often have low computational power as this makes them cheaper and easier
to produce. This means that there is often not much room for adding security fea-
tures. Such constrained IoT devices cannot use heavy cryptographic protocols or
authentication protocols, as there is simply not enough computational power avail-
able to run these protocols [1], [4], [6]. Host-based malware detection is usually not
an option for IoT devices because of this lack of computational power [3], [5]. Be-
sides these inherent issues, adding security features tends to be an afterthought in
IoT development [2]. This is partially due to the fact that in the IoT domain develop-
ments are so rapid that companies must be fast with releasing new products in order
to stay competitive. This also leads to a lack of (security) software updates, as it is
difficult for companies to cheaply supply updates for a large catalogue of devices [3].
Manufacturer constraints also often impede the use of host-based malware detec-
tion. These factors cause IoT devices to generally be vulnerable to malware and
attacks.
2.2 5G & Network Slicing
5G networks have two main components: the access technology and the core net-
work. The access technology is the system that allows devices to connect to the
network. There are several different options for this (see figure 2.1). The 5G Base
Station (BS) is a radio base station and is often called a gNodeB (gNb). This is in-
tended to become the most common access technology for mobile devices. Access
technologies are local, as they rely on physical phenomena (e.g., radio waves) to
connect to the devices. Devices or services that run close to the access technolo-
gies (in terms of latency) are often said to run at the "edge".
The core network receives communication from the access technology. This com-
munication is split into two planes: the control plane and the user plane. The control
plane handles all control functions, such as authenticating devices with the network
and handing over connections to other broadband networks. One of the most impor-
tant control plane functions is the access and mobility management function (AMF).
It handles many of the authentication and registration functions, as well as various
mobility tasks. It is also the function responsible for communication between the gNb
and the control plane of the core network. The user plane is simply a channel that
transfers user traffic it receives from the gNb to its end destination. It is embodied
by one or more user plane functions (UPFs), which send this traffic to any of the
connected data networks. These may be the internet, an adjacent core network of
a different provider, or various smaller target networks. The core network is much
more centralized than the access technologies, although components may also be
placed at the edge (especially the user plane). See figure 2.1 for a diagram depict-
ing the various components of the 5G architecture and their interactions.
One of the most important technologies of the new 5G network is network slic-
ing. Network slicing allows defining isolated end-to-end logical networks that run on
generic hardware (which may be shared between multiple slices). Slices are cus-
tomised based on the needs of the devices present in the slice. Slices can increase
revenue for the providers, as they may more efficiently use the available network re-
sources [29]. The clients procuring a slice are called tenants. They may run various
applications (such as traffic monitoring) within their slice [34]. These applications
may also be offered by the network provider as a service [35].
Network slicing is well developed for 5G core networks, and has been proven to
work for radio access networks (RANs). However, it has not yet been widely im-
plemented and requires further development for usage in RANs [36], [37].

Figure 2.1: 5G architecture diagram picturing UEs, access technologies, and core network components [33].

Network slicing makes use of two essential paradigms: network function virtualization (NFV) and software-defined networking (SDN).
2.2.1 Network Function Virtualization

NFV aims at providing flexibility, agility, and scalability to the network. NFV essentially entails running network functions on general-purpose hardware in a virtualized
manner, instead of using specialized hardware. Examples of security functions that
can be run in such a way are firewalls and packet inspection systems. The resulting
functions are called virtualized network functions (VNFs). VNFs are then chained
together to create complete network services [34], [38].
2.2.2 Software-Defined Networking

SDN aims at making connectivity programmable, e.g., by directing traffic flows to improve performance. The paradigm allows for managing network resources through
an abstraction layer. SDN is implemented on traditional networking capabilities
through the addition of hardware that allows for inserting software into the network-
ing components i.e., white-box routers and switches. This kind of hardware is im-
proving rapidly, with many devices available today [34], [38].
2.2.3 Security of 5G Network Slicing
It is important to consider the security issues affecting network slicing, as this new
technology exposes new vulnerabilities and brings various security benefits.
Network slicing provides several benefits to security. Isolation between slices means
that threats that may affect a specific slice can possibly be contained, preventing the
compromise of devices in other slices. Network slicing may also allow for adding
custom security functions to slices. These security functions can add security in dif-
ferent ways, from monitoring traffic to performing system hardening checks. It may
also be useful to create slices with different security levels, with some devices being
more strictly monitored than others. Network slicing also offers some sandboxing
possibilities, in the way of creating a quarantine slice. Devices that are presumed to
be malicious may be placed in this slice to prevent them from harming other devices.
This slice may restrict internet access or perform deeper inspection [39]. Further-
more, network slicing also allows for adding security functions with finer granular-
ity. Some examples of such security functions include antivirus solutions, applica-
tion control systems, anti-spyware, domain filtering, logging, and intrusion detection.
Unique security policies may also be added on a per-slice basis [40].
Besides providing additional security, network slicing also introduces several vul-
nerabilities. Impersonation attacks may allow adversaries to act as the network slice
manager. This would give complete control over the routing and configuration of the
network traffic. Man-in-the-middle (MitM) attacks may occur when traffic between components facilitating
network slicing is exchanged in an unprotected manner. This could allow eavesdropping
or impersonation attacks. Diversity in slice security means that attackers may tar-
get slices with low security to later reach slices with higher security. Proper isolation
mechanisms would prevent such an attack path. At the hardware level, side-channel
attacks may compromise slice activities. Hosting several network functions on the
same virtual hardware may allow attackers to observe the CPU usage patterns and
gain information on the system. DoS attacks may prevent slices and slice security
functions from working. This may also be done by placing many mobility requests
to the slice manager, causing an overload of the respective control plane functions.
Jamming and flooding attacks may be used to deplete resources and reduce the
availability of the system, both by targeting the network slicing infrastructure or by
targeting resources in the network directly [39], [41].
There has been work on the improvement of security for network slicing technology.
Some examples include efforts towards better load balancing in RANs, anomaly
detection in RANs, different DoS protection mechanisms, co-residence prevention
systems, and privacy preserving protocols. Side-channel attacks and UE attacks
are still somewhat under-researched for network slicing [41].
2.3 IoT & 5G Network Slicing

As previously mentioned, there are many benefits that network slicing brings to the
IoT ecosystem. Scalability is improved, as a lot more devices can be connected to
the same network by smartly dividing resources based on slice requirements. Since
many types of IoT devices often have very minimal communication patterns (only
sending a status update regularly) this is especially relevant. The capacity of the
hardware can also be more efficiently used by more precisely allocating resources
based on the needs of the devices in the slice, which may increase revenue for the
providers. Dynamicity is improved, since network slicing allows for rapidly deploy-
ing custom (temporary) networks by allocating resources dynamically. This dynamic
resource allocating may also allow for improving the quality of service (QoS) of vary-
ing IoT applications. Privacy can also be improved, for example when a dedicated
network slice is used for private data transferring [1].
However, besides all of these benefits there are also several challenges that net-
work slicing brings to the IoT ecosystem. Firstly, the network slice orchestrator must
be able to handle rapidly scaling network slice requests. The dynamic goals of
network slicing must be supported efficiently to prevent impact on network connec-
tivity and latency. There have been some efforts towards creating machine learning
techniques to handle such scaling. Recursion is another possibility that could pro-
vide benefits by allowing subslices to inherit the properties of parent slices. This
would allow for a more granular approach to defining slices. Slice function chains
may need to be dynamically implemented to allow for adding and removing network
functions assigned to a slice to support changing requirements of IoT devices. Fi-
nally, the complexity of managing and providing slices is not to be underestimated.
These systems consist of many functions for e.g., resource management and rout-
ing, which must be orchestrated carefully to handle possible failure scenarios [1].
Chapter 3
Related Work
The following sections explore the previous work that has been done in a similar
direction to the current work. Section 3.1 describes several approaches that have
been investigated for securing mobile networks. Subsection 3.1.1 mentions sev-
eral works that explicitly use SDN and NFV, and subsection 3.1.2 mentions several
works that explicitly use network slicing. Section 3.2 explores the response ac-
tions available for use in network security, as well as how these may translate to
operations in the 5G core network based on the 3GPP specifications. Section 3.3
describes the research gap that is left open by the previous works.
3.1 Securing Mobile Networks

Ageyev et al [20] propose a system with two components: detectors in the RAN,
and a detection system in the core network (a 4G core network in their case). The
detector sends the anomalies it finds through a set of functions that extract some in-
formation and determine a set of response actions, which are then forwarded to the
VNF manager, which executes the response. The entire anomaly detection system
is separate from the devices that route the user traffic.
3.1.1 SDN/NFV-based Approaches

This section contains works that explicitly make use of the SDN and NFV paradigms
for their solutions. This means that some of these can also be applied in network
slices, as the combination of these paradigms supports this technology.
Alejandro et al [25] and Farris et al [26] propose a framework for orchestrating se-
curity functions in networks with NFV and SDN. The system has three levels of
policy definitions, from high level policy language defined by the users to low level
configurations of security components. They also provide a reasoning approach that
can handle conflicts in policies. The project was performed in connection with the
ANASTACIA project and is based on the architecture defined in ANASTACIA. No
implementation was provided and no testing was performed.
Chafika et al [32] propose a high-level orchestration framework for secure network
slicing. The project was performed in the context of the EU H2020 MonB5G project (https://www.monb5g.eu/).
This project has its own network slicing orchestration architecture, in which the au-
thors have fitted their framework. The framework describes the placing of various
components in relation to each network slice. The system has two types of security
orchestrators: local security orchestrators and end-to-end security orchestrators.
The local security orchestrators implement security functions at the network slice
level. The end-to-end security orchestrators use a global view of the network to
manage security at a higher level. These may decide to move slices and make deci-
sions on where slices may be deployed. It also describes the general steps taken by
the local orchestrator to analyse vulnerabilities, identify security goals for slices, and
implement protective measures. The framework has not been implemented. The
authors plan to start implementing parts of the framework in their future work.
In a later work, Jain et al [23] expand their slicing simulation suite to incorporate a
machine learning component for detecting anomalies based on bandwidth usage. In
this paper they narrow the scope of their research to the healthcare use case, which
adds an interesting additional challenge in the form of life-critical devices. They solve
this by creating a backup slice for these devices and changing the response policy
to only alert an administrator in case a life-critical device shows anomalies (instead
of removing it from the network entirely). They use a neural network and achieve a
good accuracy (99,4%) at detecting anomalies. Attacks were once again manually
triggered, and also included the previously used attacks on base stations and slices.
Lam et al [42] propose a software-defined security (SDS) system that inspects traffic coming into and going out of the core network. It makes use of a convolutional
neural network. The system reads traffic from the backhaul link (the link between
cloud RAN and core network) and the interconnect link (the link between core net-
work and internet). It also has an IDS module in the core network. The system
can be adapted per-slice by customising the SDS system connected to each slice.
They use the CICIDS2018 dataset, which contains many different devices and sev-
eral protocols. Anomaly detection is done based on the inter-arrival time of each
traffic flow. This is the average number of flows that reach a host in a given time
frame. Some general flow statistics are also used. These features are mapped to
224x224x3 images and fed into the neural network. The authors find a very high
precision (98,9%) when predicting anomalous and benign traffic in their dataset and
conclude that machine learning techniques seem very suitable for the purpose.
Sattar et al [43] investigate the possibilities for using slice isolation to protect against
DDoS attacks in 5G networks. DDoS attacks in network slicing may target either
the host's resources or the communication links. The authors consider two types
of attacks: DDoS flooding attacks against communication links, and slice-initiated
attacks, where the adversary runs a set of VNFs at maximum capacity to exhaust
the slice resources. To handle these attacks, they present an optimization system
that allows for defining isolation requirements for each slice. The optimization sys-
tem then calculates a scheme which describes how to best allocate resources to the
slices. They implement a testbed with 12 slices and allocate resources according
to the calculated scheme. Then, they attack one of the slices using a DDoS flood
and measure the impact on one of the other slices. They find that higher isolation
reduces the impact the DDoS attack has on the second slice in terms of response
time, bandwidth availability, and round trip time. As such, important services may be
hosted on more isolated slices, to prevent impact of DDoS attacks. They also per-
form a slice-initiated attack, and find that all isolation levels in their approach prevent
this type of attack from having any impact.
• Incident containment
– Isolation
– Shrinking the attack surface
• Managing information
– Gathering
– Preserving
– Sharing
In addition, the paper provides a matrix for determining which containment actions
are available in a given network, based on the security actuators in place. In the
scenario of a 5G core operator using the previously described monitoring approach
we have two security actuators at our disposal:
1. OSI level 3 capable security actuator (i.e. the UPF with routing capabilities)
According to the matrix in the paper these actuators give us the following options for
containment actions:
• Traffic filtering/rerouting
• Disconnect/isolate host
• Disconnect/isolate network
• Generate an alarm
• Generate a report
Several of the other papers discussed above also mention response actions. For the current
scenario the most notable response actions are the following: Candal Ventureira et
al [2] suggest moving a malicious UE to a different network slice where its traffic
is more closely monitored. Nobakht et al [6] make use of flow filters to filter traffic.
Specifically, they use thresholds for throughput rates on suspicious devices. Jain et
al [22] have two response actions in their simulated environment: disabling a device,
and disabling a network slice. Khettab et al [13] have two responses similar to the
ones by Nobakht et al [6], namely stopping the traffic flow (i.e., resetting the connec-
tion) and limiting the bandwidth of a suspicious device.
Several other papers we examined during the literature review also suggested in-
teresting response actions. Midi et al [45] present a paper focused on wireless sen-
sor networks, which are quite different from the envisioned telecom network. How-
ever, several response actions are suggested: collecting more information, alerting
neighbors and the device itself, discarding network traffic, suspending a host, and
permanently blocking the host. Hafeez et al [46] make use of adhoc overlay net-
works (AONs). These are virtual network which are overlaid on existing networks
and can restrict hosts from communicating to other hosts. They suggest three types:
no access, restricted access, and full access. Mishima et al [47] present the results
of a system put in place where devices displaying suspicious behaviour are isolated
from the network. This is mainly aimed at user devices such as laptops, and works
by publishing the isolation information on a website with instructions on how to ac-
cess the network again. The user is expected to perform tasks such as scanning for
malware before they are allowed on the network again.
• Doing nothing
• Disconnecting/isolating
For any of these options there is the additional axis of time to consider, i.e., how
long the device should be isolated. Generally this should be a permanent action, as
the device is compromised and it will not simply be fixed over time. However, if the
owner of the device removes the threat (e.g., by removing malware) the restriction
could be lifted. Another option is to isolate a device for a predetermined timespan,
which may be a solution to stop a current attack but keep the device available at a
later time.
Then there is the option of taking an entire part of the network (or slice) offline
or isolating it. This solution also may have a large effect on other devices, meaning
it should only be used if such impact is acceptable and no other response would
suffice. There are also the one-time disconnect actions, which only reset the ses-
sion of the connected device, but do not permanently disconnect it. The device
can immediately try to connect again and will go through registration procedures as
normal. This only impacts the session that is terminated. Finally, there are several
information-related actions that may be taken. For example, additional scanning of
traffic, collecting more logs, and alerting involved parties may be done in this space.
3.2.1 Response Actions in 5G Core Networks

The 5G core network is standardised by the 3GPP. Within the scope
of these specifications various methods of influencing traffic exist. Some of these
can be used as response actions by making use of the service-based architecture
of the control plane i.e., interacting with the control plane functions to initiate such
response actions.
The UPF contains several mechanisms that can be used to restrict traffic. When
packets reach the UPF from either side of the network (gNb/core or data network)
they are matched against a packet detection rule (PDR). This PDR may contain
various information elements that uniquely identify the traffic. This includes the IP
address of the UE, the interface on which the packet arrives, and various flow iden-
tifiers and filters. Each PDR also contains a link to a forwarding action rule (FAR).
A FAR is a rule that indicates what the UPF should do with the arrived packets. The
options Forward, Drop, Duplicate, and Buffer are available. These PDRs and FARs
are provided by the session management function (SMF). This is a control plane
function that handles session management. Applying a new PDR and FAR can lead
to restricting the traffic of a given UE or network, or the complete isolation of a given
UE or network [48], [49].
The UPF can also restrict the bandwidth of the UE. This is done by enforcing the
maximum bitrate for the device. This may be done by setting a QoS enforcement
rule (QER) linked to the PDR. This QER contains a value for the maximum bitrate,
which can be one of several types, with subtly different specifications [48], [49].
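To make the relationship between these rules concrete, the sketch below gives a simplified, illustrative view of a PDR linked to a FAR and a QER for a single UE. The field names loosely follow the PFCP information elements described in the specifications cited above but are simplified for readability, and the UE address and bitrate values are placeholders rather than values from the testbed.

```python
# Illustrative only: simplified PDR/FAR/QER structures, loosely modelled on the
# PFCP information elements. Real implementations use additional mandatory fields
# and binary encodings; the values below are placeholders.

pdr = {
    "pdr_id": 1,
    "precedence": 255,
    "pdi": {                           # packet detection information: what to match
        "source_interface": "CORE",    # packets arriving from the data network side
        "ue_ip_address": "10.60.0.5",  # placeholder UE address
    },
    "far_id": 1,                       # link to the forwarding action rule below
    "qer_id": 1,                       # optional link to a QoS enforcement rule
}

far = {
    "far_id": 1,
    "apply_action": "DROP",            # one of Forward / Drop / Buffer / Duplicate
}

qer = {
    "qer_id": 1,
    "maximum_bitrate": {"uplink_bps": 1_000_000, "downlink_bps": 1_000_000},
}
```

In this representation, isolating a UE amounts to installing a matching PDR whose linked FAR carries the Drop action, while throttling instead attaches a QER with a reduced maximum bitrate.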
The 3GPP specifications mention a single main method for influencing traffic from
outside the 5G core network control plane. This method makes use of the application
function (AF). The AF is a function that offers ”application services”, which may in-
clude many things, such as video streaming services. An application service for
monitoring and managing IoT devices may be set up using the AF. The capabilities
of the AF for influencing the traffic in the 5G core network make use of the policy
system provided by the policy control function (PCF). This policy system allows the
AF to set up filters for the data streams the function is concerned with. It can also
apply various policies on these filters, from dropping traffic to restricting bandwidth.
A trusted AF may directly interact with the PCF, while an outside AF has to use
the functions provided by the network exposure function (NEF). The NEF performs
some checks and forwards the request. The PCF communicates changes in policy
to the SMF, which translates it to rules used by the UPF, such as PDRs and FARs.
This action can lead to filtering and/or routing traffic, isolating a host, or isolating a
network [48], [50]–[53].
In addition to this main method there are several options for slightly altering network
functions to accept requests from the monitoring agent. The SMF may be altered
to accept requests for changes in PDR and FAR for a target UE. This would pro-
vide an easy way to essentially handle traffic filtering and isolation at the lowest level, i.e., closest to the UPF. The SMF would translate the requests and directly forward
them to the UPF. A similar approach could be taken for restricting device bandwidth.
This would be done by modifying the unified data management function (UDM) to
accept a request. The UDM would then inform the SMF, which would in turn commu-
nicate with the PCF and AMF. The AMF would inform the UE and await a response,
which may be an issue if the UE is compromised. One could possibly add an option
to the AMF to not wait for a response. If a response is obtained, the AMF informs
the SMF, which updates the UPF to restrict the bandwidth. This action only leads to
restricting traffic of a given UE [48], [50], [53].
3.3 Research Gap

As such, this research project contributes to the body of research by further in-
vestigating another (automated) response action available in 5G network slices with
IoT devices, namely the PDR/FAR change. The project combines this action with an
implementation using 5G network components that comply with the 3GPP specifi-
cations, as well as investigating the efficiency of using a separate monitoring agent
in this context.
Chapter 4
Approach
The following sections describe the selected approach for the current work. Section
4.1 briefly describes the practical goals of the approach. Section 4.2 describes the
approach to monitoring the IoT network traffic. Section 4.3 describes the approach
for adding the response action to the monitoring setup. Section 4.4 describes how
we approach the evaluation of the proposed solutions to reach the goals of the
project.
4.1 Goals
We have two main goals for the testing approach: Firstly, we want to find out whether
a network traffic monitoring system that uses a separate monitoring agent can detect
attacks in a timely manner when applied in a realistic 5G network slice containing IoT
devices. Note that this work does not focus on the accuracy of the detection, but the
speed of detection in the complete architecture, as the aforementioned accuracy has
been extensively researched in the past. Secondly, we want to find out whether the
response action (PDR/FAR change) has the desired effect and can be automatically
deployed in a timely manner. These two results will indicate whether the proposed
solution would be suitable for protecting IoT devices in a 5G network slice.
Figure 4.1: Architecture for basic testbed using 5G components that conform to the
3GPP specifications.
and a separate host for the control plane functions. In addition, we added a host
outside of the 5G network (i.e., somewhere in the data network) to use for testing
attack scenarios that require two-way communication. This host will from now on be
referred to as the ”DN Host”.
We place a monitoring agent on a separate host to prevent load on the UPF. The
UPF mirrors traffic to the monitoring agent, which then analyses it. The main benefit
envisioned for this approach is that the UPF does not have to wait for the Suricata
analysis to forward the packets, which means there should be no impact on the
latency between the UE and the target of the communication. A drawback of this
approach is that the monitoring agent does not immediately act as an IPS, which
means attacks will pass through until the monitoring agent triggers a response.
Whether this is a problem depends on the attack. We use a simple default network
slice for the IoT devices in our test. The UPF can discern between traffic coming
from the IoT slice and traffic coming from other slices, and only mirrors the traffic
from the IoT slice to the monitoring agent.
We test the automated response with one of the replayed attacks. Once again,
we collect and analyse timestamps to investigate the speed of the solution. We also
investigate whether the expected outcome of the response action is reached. Find-
ing this information will allow us to determine whether the response action would
be useful in practice.
Chapter 5
Implementation
The following sections describe the implementation of the approach from the previ-
ous chapter. Section 5.1 describes the overarching architecture of the testbed and
the technologies used to implement it. Section 5.2 describes the various compo-
nents in further details, as well as various flows of events that occur on the testbed.
Virtual machine #1 contains the UEs and the gNb. We simulate the gNb and UEs
using the open-source UERANSIM (https://github.com/aligungr/UERANSIM). The virtual machine sends user traffic from the
IoT UEs to the UPF, and exchanges control plane traffic with the 5G control plane.
Virtual machine #2 contains the UPF. The UPF is part of the 5G core network,
which we implement using the open-source Free5GC project (https://www.free5gc.org/). The Free5GC project
is based on the 3GPP specifications release 15. The virtual machine receives user
traffic from the UERANSIM virtual machine and routes it to the DN Host. It also mir-
rors the user traffic of the IoT network slice to the monitoring agent and exchanges
control traffic with the 5G control plane.
Virtual machine #3 contains the 5G control plane. This includes all control plane
functions (e.g., AMF), which can all be separately addressed within the machine.
The control plane is also implemented using Free5GC. The virtual machine ex-
changes control traffic with the UERANSIM virtual machine, the UPF, and the mon-
itoring agent. The monitoring agent may send a request for applying a response
action, to which the control plane responds with a confirmation or an error code.

Figure 5.1: Architecture for testbed with monitoring solution.
Virtual machine #4 runs the DN Host. This virtual machine simply sends and re-
ceives user traffic by communicating with the UPF. It is used for two-way commu-
nication in the experiments. In a realistic scenario this would be any host on the
internet or outside network.
Virtual machine #5 runs the traffic monitoring functions. Traffic received by this
virtual machine is analysed using Suricata. The virtual machine receives mirrored
user traffic from the UPF. It also exchanges control plane traffic with the 5G control
plane, which includes sending a request to apply a PDR/FAR change.
The UERANSIM virtual machine contains the simulated UE and gNb. When start-
ing up, the gNb connects to the control plane by contacting the AMF through the
ens4 interface. The AMF handles the registration request and connects the gNb.
Then, when a UE is started, it will use the local loopback interface to connect to the
gNb. When a UE connects to the core network through the gNb, the AMF instructs
it to connect to a UPF that corresponds to the slice requested by the UE. As such,
the gNb passes any traffic coming from the UE to this UPF. UE traffic is sent into
the uesimtun0 interface on the virtual machine running UERANSIM. This traffic is
then passed through the local loopback interface to the gNb on the same machine,
which encapsulates it in a GPRS tunneling protocol user data (GTP-U) tunnel and
passes it through the ens4 interface to the UPF. For incoming user traffic the gNb
removes the GTP-U encapsulation and passes the traffic to the destination UE using
Figure 5.2: Interfaces each request/reply packet passes through during the testbed
operation.
Figure 5.3: Steps each request/reply packet goes through during the testbed oper-
ation.
The UPF routes user traffic to or from the 5G network. User traffic from the gNb
is first received on the ens4 interface, which passes it to the upfgtp interface. Then,
the GTP-U encapsulation is removed and the packet is routed to the target destina-
tion. This could be the DN Host, which means the packet will leave the UPF from
the ens4 interface. The UPF does not add any encapsulation when it sends packets
to the DN Host, or any other host outside of the 5G network. In addition to routing,
the machine also performs traffic duplication. The traffic on the ens4 interface of the
virtual machine is duplicated into a tunnel interface (tun0) leading to the virtual ma-
chine running Suricata. This is done by using the traffic control configuration of the
Linux kernel, which allows defining of ingress and egress queue disciplines, as well
as defining filters that can specify where to mirror the traffic. The system only mir-
rors network traffic originating from and travelling to a specific range of IP addresses.
This range is linked to the IoT network slice. This requires some configuration in the
5G core, namely by giving each network slice a different data network configuration.
Each slice may still route to the same destination (e.g., the internet) but will provide
different IP addresses to the devices inside the slice. The tunnel interface the traffic
is duplicated to uses the generic routing encapsulation (GRE) protocol. This simply
provides an encapsulation layer around the original packet, which is stripped at the
destination. As such, the monitoring virtual machine receives the packets exactly as
they are on the ens4 interface of the UPF.
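The exact traffic control configuration is not reproduced in this document; the sketch below shows one plausible way to realise such mirroring with a GRE tunnel and tc mirred actions. The interface names ens4 and tun0 follow the description above, while the host addresses and the slice subnet are placeholders.

```python
# Illustrative sketch only: one way to set up the mirroring described above with
# iproute2/tc, not the exact testbed configuration. Must be run as root.
import subprocess

UPF_IP = "192.168.56.2"        # placeholder: address of the UPF virtual machine
MONITOR_IP = "192.168.56.5"    # placeholder: address of the monitoring virtual machine
SLICE_SUBNET = "10.60.0.0/16"  # placeholder: IP range assigned to the IoT slice

def run(cmd: str) -> None:
    """Run a configuration command and raise if it fails."""
    subprocess.run(cmd.split(), check=True)

# GRE tunnel towards the monitoring VM; the encapsulation is stripped at the far
# end, so Suricata sees the packets as they appeared on ens4.
run(f"ip tunnel add tun0 mode gre local {UPF_IP} remote {MONITOR_IP}")
run("ip link set tun0 up")

# Hook ingress and egress queueing disciplines onto ens4 so mirroring filters can attach.
run("tc qdisc add dev ens4 handle ffff: ingress")
run("tc qdisc add dev ens4 handle 1: root prio")

# Mirror traffic belonging to the IoT slice subnet into the tunnel. Note: packets from
# the gNb are still GTP-U encapsulated on ens4, so matching the inner UE addresses
# would need offset matching, or mirroring on the decapsulated upfgtp interface instead.
run(f"tc filter add dev ens4 parent ffff: protocol ip u32 match ip src {SLICE_SUBNET} "
    f"action mirred egress mirror dev tun0")
run(f"tc filter add dev ens4 parent 1: protocol ip u32 match ip dst {SLICE_SUBNET} "
    f"action mirred egress mirror dev tun0")
```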
The virtual machine used for traffic monitoring runs the Suricata IDS on the incom-
ing tunnel interface (tun0). Packets arriving at the other interfaces on the device are
ignored. Note that packets destined for interface tun0 still pass through interface
ens4, but are not analysed by Suricata twice. Suricata is a signature-based IDS,
which means it detects threats by comparing incoming traffic to a set of signatures.
Each signature is part of a rule which contains the signature, an action Suricata per-
forms when a packet matches the signature, and some optional information on the
threat. We configured the IDS with a small set of rules comprising of rules from the
default emerging threats ruleset and several custom rules defined for the specific
attacks tested in the project. See section 6.2 for the custom rules.
Only one of the response actions mentioned in section 3.2.1 is implemented for this project.
This response action is the PDR/FAR change. The other response actions have a
larger time cost as more additional features need to be implemented in Free5GC be-
fore they can be used. As such, they are left as future work. The PDR/FAR change
response is implemented by adding a feature to the SMF in the control plane. This
feature allows an outside monitoring agent such as the one used in the project to
request changes to the PDRs and FARs of a given UE, which essentially allows the
system to set up traffic filters. This is done based on the IP address of the UE,
but may be extended to more 5G-appropriate identifiers (e.g., PDU session ID) if
the monitoring agent is integrated into the 5G core further. A smaller program on
the monitoring agent continuously checks the Suricata log and sends a response
request when an alert is read from the log. The request is a simple HTTP POST
message containing the requested update in JSON format. The SMF then reads the
request and starts the update procedure. See figure 5.4 for a sequence diagram of
the steps in the response process.
Figure 5.4: Steps during the response process for the PDR/FAR change action.
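The watcher program itself is not listed here; the sketch below illustrates the idea, assuming Suricata's EVE JSON log output. The SMF endpoint URL and the fields of the JSON body are hypothetical placeholders rather than the actual interface added to Free5GC in the implementation.

```python
# Minimal sketch of the alert watcher, assuming Suricata writes alerts to its EVE
# JSON log. The SMF endpoint and request body fields are hypothetical placeholders.
import json
import time
import urllib.request

EVE_LOG = "/var/log/suricata/eve.json"
SMF_ENDPOINT = "http://10.0.1.3:8000/pdr-update"   # placeholder modified-SMF endpoint

def request_isolation(ue_ip: str) -> None:
    """POST a PDR/FAR update request that asks the SMF to drop traffic for the UE."""
    body = json.dumps({"ue_ip": ue_ip, "action": "DROP"}).encode()
    req = urllib.request.Request(SMF_ENDPOINT, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def follow_alerts() -> None:
    """Tail the EVE log and trigger a response for every alert event that appears."""
    with open(EVE_LOG) as log:
        log.seek(0, 2)                 # start at the current end of the log
        while True:
            line = log.readline()
            if not line:
                time.sleep(0.01)       # polling interval adds to the response latency
                continue
            event = json.loads(line)
            if event.get("event_type") == "alert":
                request_isolation(event["src_ip"])

if __name__ == "__main__":
    follow_alerts()
```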
Chapter 6
Evaluation
The following sections describe how the implemented system is evaluated. Section
6.1 describes how the latency components in the system are tested and analysed.
Section 6.2 describes how various attacks are replayed and detected by the system.
Section 6.3 describes how the response action is tested.
Table 6.1: Network traffic capture interfaces for latency testing.
VM Interface
UERANSIM uesimtun0
UERANSIM ens4
UPF ens4
UPF upfgtp
UPF tun0
DN Host ens4
Suricata tun0
Table 6.3: Custom rules for detecting flooding and scanning attacks using Suricata.

ACK Flood: triggers an alert when 100 TCP ACK packets are sent between two hosts in 1 second.
alert tcp any any -> any any (msg:"Device performing ACK flood."; flags:A; threshold: type threshold, count 100, seconds 1, track by_src; flow:to_server;)

UDP Flood: triggers an alert when 1000 UDP packets are sent between two hosts in 1 second.
alert udp any any -> any any (msg:"Device performing UDP flood."; threshold: type threshold, count 1000, seconds 1, track by_src; flow:to_server;)

Port scan: triggers an alert when 50 TCP SYN packets are sent from a single host in 1 second.
alert tcp any any -> any any (msg:"SYN STEALTH SCAN DETECTED"; flow:stateless; flags:S,12; threshold: type threshold, track by_src, count 50, seconds 1;)
Table 6.4: Interfaces for network traffic capture for response testing.
VM Interface
UERANSIM uesimtun0
Suricata tun0
Suricata ens4
Control Plane ens4
UPF ens4
To replay the attacks as they are in the dataset it is necessary for the hosts to have
a dialogue, i.e., each host waits for the response from the other side as in the original
traffic capture. This is mainly relevant for the Telnet bruteforce (as it is expected that the
host responds to attempted logins). We use Scapy (https://scapy.net/) to create a tool that can do this.
Both hosts read the traffic capture file and replay the traffic exactly as it was origi-
nally recorded. We test several captures for each attack to be able to find a mean
time to detect.
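The replay tool is not listed in this document; the sketch below illustrates the dialogue idea for the initiating side of a recorded TCP conversation (such as the Telnet bruteforce). The addresses, port, and capture filename are placeholders, and the responding side (the DN Host) would be analogous but accept the connection instead of opening it.

```python
# Sketch of dialogue-style replay for one side of a recorded TCP conversation.
# Addresses, port, and pcap path are placeholders; the listening peer is analogous.
import socket
from scapy.all import rdpcap, IP, TCP, Raw

ATTACKER_IP = "10.60.0.5"          # placeholder: attacker address in the capture
PEER = ("10.100.200.2", 23)        # placeholder: DN Host address and Telnet port

def replay_client(pcap_path: str) -> None:
    """Send our recorded payloads in order, waiting for the peer's turns in between."""
    packets = [p for p in rdpcap(pcap_path) if p.haslayer(TCP) and p.haslayer(Raw)]
    with socket.create_connection(PEER) as sock:
        for pkt in packets:
            if pkt[IP].src == ATTACKER_IP:
                sock.sendall(bytes(pkt[Raw].load))   # our turn: replay the payload
            else:
                sock.recv(65535)                     # peer's turn: wait for its reply

if __name__ == "__main__":
    replay_client("telnet_bruteforce.pcap")          # placeholder capture file
```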
Some of the attacks in our testing repertoire are recognized by Suricata using the
default emerging threats ruleset. However, for the remaining attacks we create some
additional rules. Suricata does not provide rules against denial of service (DoS) at-
tacks in its default ruleset, as the definition of a DoS attack may differ between
scenarios. See table 6.3 for the custom rules.
Chapter 7
Results
This chapter contains the results of the research. Section 7.1 presents the results
of the latency testing of the monitoring system. Section 7.2 presents the results of
testing the monitoring system against the attacks described in the previous chapter.
Section 7.3 presents the results of testing with the implemented response action.
Section 7.4 briefly discusses the results.
7.1 Latency
Table 7.1 shows the latency results for transmission between the UE and the DN
Host. The latency results were recorded over 200 ICMP packets (100 request, 100
response) using the timestamps from the tcpdump captures. There are small differ-
ences in various components with regards to outgoing or incoming packets. How-
ever, the sum of the averages for each direction is very close (∼1,41ms for outgoing
and ∼1,34ms for incoming). This may indicate that various operations (e.g., en-
capsulation and decapsulation) are happening in different components based on
whether they apply to incoming or outgoing packets. This could also explain the 0
value for the outgoing packets between the ens4 interface and the upfgtp interface
on the UPF, as both the decapsulation and processing of the packet may only occur
when it has been passed to the upfgtp interface. Nonetheless, this gives an idea of
the normal latency components present in the simple 5G core network example.
Table 7.1: Latency components of transmission between DN Host and UE.
VM Interfaces Latency request (ms) Latency reply (ms)
UERANSIM uesimtun0 → ens4 0,695306 0,573517
- ens4 (UERANSIM) → ens4 (UPF) 0,413568 0,471501
UPF ens4 (incoming) → upfgtp 0 0,011100
UPF upfgtp → ens4 (outgoing) 0,053120 0,036416
- ens4 (UPF) → ens4 (DN Host) 0,248233 0,248445
Total 1,410277 1,340979

Table 7.2 shows the latency results for the additional steps required for the monitoring of traffic. Here we also see small differences depending on whether packets are incoming or outgoing. Note that from the perspective of the Suricata machine all packets are technically "incoming", as all packets are mirrored from the UPF. This means the differences in latency are caused by the type of packet (request or reply) and not by the direction the packet is travelling. The sum of averages for each direction is different here, which can mainly be attributed to the difference in analysis times
required by Suricata. The difference in analysis time may be due to various fac-
tors outside the scope of this research, such as the way Suricata processes these
packets. The main time cost comes from the analysis done by Suricata (90,87%
of the total). This takes ∼24,2ms for outgoing packets and ∼28,2ms for incoming
packets. The transfer between the virtual machine running the UPF and the virtual
machine running Suricata also takes a large amount of time in comparison to the
other components (4,71% of the total).
7.2 Attacks
Table 7.3 shows the mean time to detection (MTTD) for the tested attacks. This
metric is split up into two different variants. The main MTTD is the time between
the moment the first packet is seen on the outgoing interface of the attacker (i.e.,
the start of the attack) and the moment an alert shows up in the Suricata log. The
second option measures from the moment the packet that triggers the firing of the
alert (i.e., the 100th packet in 1 second) is seen on the outgoing interface of the at-
tacker to the moment an alert shows in the Suricata log, and is denoted ”MTTD from
threshold”. This way of looking at the detection time gives some insight into what
the numbers would look like for single-packet attacks, as the time taken to reach the
threshold for the flooding rules is not taken into account using this method.
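Written out, with t_first the time the first attack packet leaves the attacker's outgoing interface, t_thr the time the threshold-triggering packet leaves it, and t_alert the time the alert appears in the Suricata log (these symbols are introduced here only for clarity), the two metrics are:

$$\mathrm{MTTD} = t_{\mathrm{alert}} - t_{\mathrm{first}}, \qquad \mathrm{MTTD}_{\mathrm{threshold}} = t_{\mathrm{alert}} - t_{\mathrm{thr}}.$$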
Table 7.3: Mean time to detection (MTTD) for the tested attacks.
MTTD (ms) MTTD from threshold (ms)
AVG STDEV AVG STDEV
Mirai ACK flood 68,25 14,05 12,69 3,72
Mirai Telnet bruteforce 120536,52 44536,62 119500,70 44481,62
Mirai UDP flood 530,48 78,06 14,42 4,72
Port scanning 224,69 17,90 12,06 5,34
Port and OS scanning 225,37 15,91 12,73 4,91
Note that there is a serious increase in time for the Telnet bruteforce. This is an
anomaly in which it took Suricata several minutes to show the alert in the log after receiving the alerting packet. We suspect this is due to the specifics
of the rule used for detecting this attack. The rule uses the ”stream:established”
keyword. This causes Suricata to make use of its TCP re-assembly system. This
system keeps track of the state of the TCP stream by looking at occurrences of the
three-way handshake and the four-way termination handshake. The streams used
in the attack simulation all use the three-way handshake, send some traffic, and
then do the four-way termination handshake. The system then knows the state is
closed. For each state of the stream the system has a timeout, which indicates
when the stream will be taken out of memory. For a closed state this timeout is 120
seconds. However, normally this timeout would not be relevant, as the data sent
in the stream would be analysed immediately. However, since Suricata is using the
TCP re-assembly system, it also waits for a chunk of data to arrive before analysing
it. The chunk limit is not reached by the short communication in the attack, and
therefore the data is only analysed when the timeout triggers. One of the bruteforce
pcaps triggered the chunk limit and led to lower detection times, hence the total
average time is slightly below 120 seconds. Note that the above explanation has not
been confirmed, but is assumed to be the likely cause of the outlier.
Variance between the different pcaps for each attack was observed. For the flood-
ing attacks this was mainly due to the different traffic traces being more or less
aggressive at the start of the attack. An ACK flooding attack that sends 100 packets
within 10ms starts the reporting process after those 10ms, whereas a less aggres-
sive attack might send the 100 packets in 20ms and start the reporting process later,
resulting in a longer time to detection. The detection time for the bruteforce attack
depended on when a specific credential was attempted by the attacker. The rule
in the standard set would trigger on a packet containing that credential. Some of
the pcaps would have this credential early in the bruteforce, and others later. This
means there is a large variance in the times to detection for the bruteforce attack. In
addition to this variance between different pcap files, we also found some variance
when using the same pcap file. Most of this variance occurred between the first
packet being sent and the alerting packet being sent. There are many factors that
may play into this phenomenon, including the replay scripts, the state of the network
interfaces, and the various encapsulation steps.
We can also obtain some information on the scenario where Suricata would have
run as an IPS on the UPF. We do this by looking at the time Suricata takes between
receiving a triggering packet (i.e., a packet that triggers an alerting rule) and show-
ing an alert in the log. This gives a clue as to the time Suricata requires for analysing
the (stream of) packet(s) and generating an alert. See table 7.5 for this data for each
attack.

Table 7.5: Time taken by Suricata between receiving a triggering packet and showing an alert in the log.
Suricata Analysis Time (ms)
AVG STDEV
Mirai ACK flood 11,99 1,88
Mirai Telnet bruteforce 3,83 1,42
Mirai UDP flood 14,00 1,89
Port scanning 11,76 2,53
Port and OS scanning 12,46 2,23

If we assume that the analysis times would be similar when running Suricata in IPS mode, we can see that each packet would take an additional ∼10,81ms to reach its destination. However, in the current scenario the monitoring agent has to strip the GRE encapsulation from the packets before they are passed to the Suricata analysis engine. This time cost is also included in those ∼10,81ms, and would not be present when using the system in IPS mode on the UPF machine. We can also look at table 7.2 to see what the analysis time looks like for ICMP packets. Here we find a much higher time cost of ∼21,58ms for ICMP request packets and ∼25,87ms
for ICMP reply packets. Here the same issue with removing the GRE encapsulation
is present. Note that we have not tested Suricata in IPS mode, and we imagine there
are many factors that influence this latency (such as packet type and packet size).
7.3 Response
Table 7.6 shows the latency observed for the response testing with a single ACK
flood pcap. Once again the results are split up into two variants. The variants only
differ in the Detection portion of the process. The variants are the same as de-
scribed before, namely excluding or including the time it takes for the attack to reach
the threshold. The Response portion is the same for both variants and records the
time from the moment the log shows up until the traffic block is in place in the UPF.
We note that the detection results during this test differed slightly from the ones
found when only the detection system was tested. The average time until the attack
was detected was slightly higher and the standard deviation was much larger. This
is caused by a single outlier where detection took ∼45,85ms. Without this outlier we
find a detection time of ∼9,74ms on average with a standard deviation of ∼3,72ms.
The response portion of the process has a relatively large standard deviation. This
is caused by a single outlier where the SMF took over 160 milliseconds between
receiving the request and sending a request to the UPF. The other tests (i.e., disre-
garding the outlier) resulted in an average of ∼21,32ms with a standard deviation of
∼4,05ms.
We also test the response action in a few different configurations. When we set the PDR change to affect only a certain range of IP addresses, we find that the addresses targeted by the PDR have their traffic blocked, whereas the other devices can still communicate. Traffic is blocked at the incoming interface of the UPF, i.e., the upfgtp interface or the ens4 interface, depending on the direction of the traffic.
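Conceptually, this restricted block corresponds to a PDR whose packet detection information matches only the targeted address range, paired with a FAR whose apply action is set to drop. The sketch below illustrates that pairing as plain Python data structures; the field names follow the spirit of TS 129.244, but this is neither the actual PFCP encoding nor the exact structures used by our SMF/UPF implementation, and the addresses are illustrative.

    # Conceptual sketch of the PDR/FAR pair installed for the blocking response.
    # Not the real PFCP message encoding; addresses and identifiers are examples.
    blocking_pdr = {
        "pdr_id": 100,
        "precedence": 10,                     # evaluated before the normal forwarding PDRs
        "pdi": {
            "source_interface": "ACCESS",     # match uplink traffic arriving from the UE side
            "ue_ip_address": "10.60.0.0/24",  # only this (illustrative) UE range is affected
            "sdf_filter": "permit out ip from 203.0.113.0/24 to assigned",
        },
        "far_id": 200,
    }

    blocking_far = {
        "far_id": 200,
        "apply_action": "DROP",               # instead of FORW; DUPL would duplicate the packet
    }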
7.4 Discussion
Suricata was able to detect all of the attacks in a reasonable time with the additional
flooding rules. This means the system could be used to respond to such attacks
in a timely manner, reducing the impact on availability. The rules would need to be
adapted to the given network and the expected traffic.
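As an example of what such adaptation could look like, the following sketch generates a threshold-based flooding rule whose rate parameters can be tuned per network. The rule text follows standard Suricata syntax, but the thresholds and the sid are placeholders rather than the exact rules used in our tests.

    # Illustrative sketch: build a threshold-based ACK flood rule with tunable
    # rate parameters. Thresholds and sid are placeholders, not our test rules.
    def ack_flood_rule(count: int, seconds: int, sid: int = 1000001) -> str:
        return (
            'alert tcp any any -> $HOME_NET any ('
            'msg:"Possible TCP ACK flood"; flags:A; '
            f'threshold: type both, track by_src, count {count}, seconds {seconds}; '
            f'sid:{sid}; rev:1;)'
        )

    # A busy slice may tolerate a higher packet rate than a low-traffic sensor slice.
    print(ack_flood_rule(count=500, seconds=1))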
The response action brings the average time to neutralize an attack to ∼50,45ms. For various types of attack this is an acceptable time. Flooding attacks that are stopped within ∼50,45ms are severely cut short, as many of these attacks are intended to be active for hours or days. However, attacks that are meant to complete faster than ∼50,45ms are not stopped in time by the system. The response action is still applied in this case, which could soften the impact of the completed attack by preventing the newly infected device from attacking other devices. The response action also produces the expected result, i.e., the isolation of the device.
The response action also allows both internal and external IP address ranges to be defined for blocking traffic. This means we could allow devices to still communicate with a subset of external addresses, and we could specify which devices should be targeted by this rule.
Chapter 8
Conclusion
This chapter contains some concluding words. Section 8.1 briefly summarizes the results of the project. Section 8.2 mentions some limitations of the project. Section 8.3 suggests some avenues for future work.
8.1 Summary
The project has found that the suggested monitoring approach works well and is
able to detect attacks in a reasonable timeframe (∼11,32ms on average). Any at-
tacks that take longer than this timeframe can be detected before completing and
may be stopped. This means that many flooding and scanning attacks can be cut
short and mitigated. However, any attacks that are shorter will complete before the monitoring system detects them, and will not be stopped in time. The system still generates an alert indicating that the attack has occurred, which might be enough to prevent further propagation of the attack.
One of the responses using the 5G core network components (PDR/FAR change)
was implemented and tested. The response successfully blocks the specified traffic
and does so within an acceptable timeframe. The response is applied on average
within ∼37,10ms from the moment the alert is generated, or ∼50,45ms of the trig-
gering packet being sent. This once again means that attacks that take longer than that timeframe can be stopped, while shorter attacks are not. However, the system still applies the response action, which may simply mean that the compromised device is now isolated and cannot propagate an infection or attack.
In conclusion, the approach is feasible and may be suitable for the use case of
protecting large-scale IoT deployments in a 5G network slice.
8.2 Limitations
There are some limitations to the system implementation. First, the mirroring of
traffic to the monitoring virtual machine is done using Linux traffic control (tc) instead of the functions built into the UPF. A 5G UPF has a mechanism that allows
for duplicating traffic to another location. This is done by using PDRs and FARs.
The FAR contains instructions for what action to take on an arriving packet matching
the corresponding PDR. One of the options specified in the current standards is to
duplicate the packet. This can happen in combination with some of the other options, e.g., Forward, meaning the packet is forwarded normally to its final destination and also duplicated to the destination specified by the FAR. The reason this project did
not make use of this capability is that it is not implemented in any of the open source
UPFs or 5G cores. It may be interesting for future research to look into the differ-
ences between using the built-in capabilities and the Linux traffic control method.
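For reference, the sketch below shows the kind of Linux traffic control setup that can mirror packets arriving on a UPF interface towards the monitoring machine; the interface names are assumptions, and the commands are a simplified version of the approach rather than the exact configuration used. The standards-based alternative would instead set the FAR apply action to DUPL (combined with FORW) on the UPF itself.

    # Simplified sketch: mirror all packets arriving on a UPF interface to the
    # link towards the monitoring VM using tc's mirred action. Interface names
    # are illustrative assumptions.
    import subprocess

    UPF_IFACE = "ens4"      # interface whose traffic should be mirrored (assumed)
    MIRROR_IFACE = "gre1"   # interface towards the monitoring VM (assumed)

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Attach an ingress qdisc and mirror every matching packet to the monitor link.
    run(["tc", "qdisc", "add", "dev", UPF_IFACE, "handle", "ffff:", "ingress"])
    run(["tc", "filter", "add", "dev", UPF_IFACE, "parent", "ffff:",
         "protocol", "all", "u32", "match", "u32", "0", "0",
         "action", "mirred", "egress", "mirror", "dev", MIRROR_IFACE])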
The results also contained various outliers that are not completely explained. The
Telnet bruteforce attack only being detected after several minutes would be a serious
issue in a real implementation and should have been more closely investigated. The
same goes for the outlier observed during the response action testing, where the
SMF took much longer than usual to forward a request to the UPF. These outliers
somewhat muddy the collected data, as they have a serious impact on the observed average times and standard deviations. However, they cannot simply be discarded, as they may pose risks if they occur more often.
Another limitation is the use of simulated UEs and a simulated gNb. Using real 5G-capable IoT devices would be an interesting way to add realism to similar projects.
A final limitation is the fact that only a single response action was implemented
during the project. The other response actions mentioned in Section 3.2.1 would have been an interesting addition to the testing repertoire. The inter-slice change in particular would have been interesting given the original focus on network slicing.
It may also be interesting to experiment with more custom slices. However, this
may require some significant work, depending on the software used for the 5G com-
ponents, as several of the open source candidates do not really support customising
slices. Custom slices may be combined with various security functions and evaluated, e.g., by checking whether custom latency constraints can still be satisfied when traffic is sent through an IPS, and by finding where the boundaries of such a solution lie.
Chapter 9
List of acronyms
4G fourth generation
5G fifth generation
AF application function
BS Base Station
gNb gNodeB
MitM man-in-the-middle
SaaS security-as-a-service
UE user equipment