Suricata 8.0 User Guide & Setup
Release 8.0.0-dev
OISF
Contents

1 What is Suricata
  1.1 About the Open Information Security Foundation
2 Quickstart guide
  2.1 Installation
  2.2 Basic setup
  2.3 Signatures
  2.4 Running Suricata
  2.5 Alerting
  2.6 EVE Json
3 Installation
  3.1 Source
  3.2 Binary packages
  3.3 Advanced Installation
4 Upgrading
  4.1 General instructions
  4.2 Upgrading 7.0 to 8.0
  4.3 Upgrading 6.0 to 7.0
  4.4 Upgrading 5.0 to 6.0
  4.5 Upgrading 4.1 to 5.0
5 Security Considerations
  5.1 Running as a User Other Than Root
  5.2 Containers
6 Support Status
  6.1 Levels of Support
  6.2 Distributions
  6.3 Architecture Support
8 Suricata Rules
  8.1 Rules Format
  8.2 Meta Keywords
  8.3 IP Keywords
  8.4 TCP keywords
  8.5 UDP keywords
  8.6 ICMP keywords
  8.7 Payload Keywords
  8.8 Integer Keywords
  8.9 Transformations
  8.10 Prefiltering Keywords
  8.11 Flow Keywords
  8.12 Bypass Keyword
  8.13 HTTP Keywords
  8.14 File Keywords
  8.15 DNS Keywords
  8.16 SSL/TLS Keywords
  8.17 SSH Keywords
  8.18 JA3/JA4 Keywords
  8.19 Modbus Keyword
  8.20 DCERPC Keywords
  8.21 DHCP keywords
  8.22 DNP3 Keywords
  8.23 ENIP/CIP Keywords
  8.24 FTP/FTP-DATA Keywords
  8.25 Kerberos Keywords
  8.26 SMB Keywords
  8.27 SNMP keywords
  8.28 Base64 keywords
  8.29 SIP Keywords
  8.30 RFB Keywords
  8.31 MQTT Keywords
  8.32 IKE Keywords
  8.33 HTTP2 Keywords
  8.34 Quic Keywords
  8.35 NFS Keywords
  8.36 SMTP Keywords
  8.37 WebSocket Keywords
  8.38 Generic App Layer Keywords
  8.39 Xbits Keyword
  8.40 Alert Keywords
  8.41 Thresholding Keywords
  8.42 IP Reputation Keyword
  8.43 IP Addresses Match
  8.44 Config Rules
  8.45 Datasets
  8.46 Lua Scripting for Detection
  8.47 Differences From Snort
  8.48 Multiple Buffer Matching
  8.49 Tag
11 Performance
  11.1 Runmodes
  11.2 Packet Capture
  11.3 Tuning Considerations
  11.4 Hyperscan
  11.5 High Performance Configuration
  11.6 Statistics
  11.7 Ignoring Traffic
  11.8 Packet Profiling
  11.9 Rule Profiling
  11.10 Tcmalloc
  11.11 Performance Analysis
12 Configuration
  12.1 Suricata.yaml
  12.2 Global-Thresholds
  12.3 Exception Policies
  12.4 Snort.conf to Suricata.yaml
  12.5 Multi Tenancy
  12.6 Dropping Privileges After Startup
  12.7 Using Landlock LSM
  12.8 systemd notification
  12.9 Includes
13 Reputation
  13.1 IP Reputation
17 Output
  17.1 EVE
  17.2 Lua Output
  17.3 Syslog Alerting Compatibility
  17.4 Custom http logging
  17.5 Custom tls logging
  17.6 Log Rotation
21 Using Capture Hardware
  21.1 Endace DAG
  21.2 Napatech
  21.3 Myricom
  21.4 eBPF and XDP
  21.5 Netmap
  21.6 AF_XDP
  21.7 DPDK
25 Acknowledgements
26 Licenses
  26.1 GNU General Public License
  26.2 Creative Commons Attribution-NonCommercial 4.0 International Public License
  26.3 Suricata Source Code
  26.4 Suricata Documentation
29 Appendix
  29.1 EVE Index
Bibliography
Index
CHAPTER ONE: WHAT IS SURICATA
Suricata is a high-performance Network IDS, IPS and Network Security Monitoring engine. It is open source and owned
by a community-run non-profit foundation, the Open Information Security Foundation (OISF). Suricata is developed
by the OISF.
1.1.1 License
The Suricata source code is licensed under version 2 of the GNU General Public License.
This documentation is licensed under the Creative Commons Attribution-NonCommercial 4.0 International Public
License.
CHAPTER TWO: QUICKSTART GUIDE
This guide will give you a quick start to run Suricata and will focus only on the basics. For more details, read through
the more specific chapters.
2.1 Installation
It's assumed that you run a recent Ubuntu release, as the official PPA can then be used for the installation. To install
the latest stable Suricata version, follow these steps:
The dedicated PPA repository is added, and after updating the index, Suricata can be installed. We recommend installing
the jq tool at this time as it will help with displaying information from Suricata's EVE JSON output (described later
in this guide).
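As a sketch of those steps (following the usual Ubuntu procedure; the PPA name is `ppa:oisf/suricata-stable`, but verify it against the current installation docs). The guard keeps the snippet from running outside a root APT environment:

```shell
# Add the OISF PPA and install Suricata together with jq.
# Guarded: only runs as root on an APT-based system.
if [ "$(id -u)" -eq 0 ] && command -v apt-get >/dev/null 2>&1; then
    apt-get install -y software-properties-common
    add-apt-repository -y ppa:oisf/suricata-stable
    apt-get update
    apt-get install -y suricata jq
fi
```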
For the installation on other systems or to use specific compile options see Installation.
After installing Suricata, you can check which version of Suricata you have running and with what options, as well as
the service state:
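For example (a sketch; exact commands depend on how Suricata was installed, but the packaged version ships a systemd unit):

```shell
# Print version/build options and check the systemd service state.
# Guarded: no-ops where the binaries or config are absent.
if command -v suricata >/dev/null 2>&1; then
    suricata --build-info
fi
if command -v systemctl >/dev/null 2>&1 && [ -f /etc/suricata/suricata.yaml ]; then
    systemctl status suricata
fi
```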
2.2 Basic setup

First, determine the interface(s) and IP address(es) on which Suricata should listen:

$ ip addr
There are many possible configuration options; here we focus on the setup of the HOME_NET variable and the network
interface configuration. The HOME_NET variable should include, in most scenarios, the IP address of the monitored interface
and all the local networks in use. The default already includes the RFC 1918 networks. In this example 10.0.0.23 is
already included within 10.0.0.0/8. If no other networks are used the other predefined values can be removed.
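Trimmed down to a single local network, the relevant vars section might look like this (a sketch; substitute your own ranges):

```yaml
vars:
  address-groups:
    # Only the networks actually in use; 10.0.0.23 falls inside 10.0.0.0/8
    HOME_NET: "[10.0.0.0/8]"
    EXTERNAL_NET: "!$HOME_NET"
```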
In this example the interface name is enp1s0 so the interface name in the af-packet section needs to match. An
example interface config might look like this:
Capture settings:
af-packet:
- interface: enp1s0
cluster-id: 99
cluster-type: cluster_flow
defrag: yes
use-mmap: yes
tpacket-v3: yes
This configuration uses the most recent recommended settings for the IDS runmode for basic setups. There are many
possible configuration options which are described in dedicated chapters and are especially relevant for high
performance setups.
2.3 Signatures
Suricata uses Signatures to trigger alerts, so it's necessary to install those and keep them updated. Signatures are also
called rules, hence the name rule-files. With the suricata-update tool, rules can be fetched, updated and managed for
Suricata.
In this guide we just run the default mode which fetches the ET Open ruleset:
sudo suricata-update
Afterwards, the rules are installed in /var/lib/suricata/rules, which is also the default path in the config, with all rules
merged into the single suricata.rules file.
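The corresponding defaults in suricata.yaml look like this (a sketch of the stock settings):

```yaml
default-rule-path: /var/lib/suricata/rules
rule-files:
  - suricata.rules
```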
2.4 Running Suricata

With the rules in place, Suricata can be (re)started so that they are loaded. The actual thread count will depend on the system and the configuration.
To see statistics, check the stats.log file:
By default, it is updated every 8 seconds to show updated values with the current state, like how many packets have
been processed and what type of traffic was decoded.
2.5 Alerting
To test the IDS functionality of Suricata it's best to test with a signature. The signature with ID 2100498 from the ET
Open ruleset is written specifically for such test cases.
2100498:

alert ip any any -> any any (msg:"GPL ATTACK_RESPONSE id check returned root"; content:"uid=0|28|root|29|"; classtype:bad-unknown; sid:2100498; rev:7; metadata:created_at ...)
The syntax and logic behind those signatures is covered in other chapters. This will alert on any IP traffic that has the
content within its payload. This rule can be triggered quite easily. Before we trigger it, start tail to see updates to
fast.log.
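As an aside, the |28| and |29| in that content match are hex escapes, so the literal bytes the rule looks for can be reproduced like this:

```shell
# |28| is the hex escape for '(' and |29| for ')', so the rule's
# content "uid=0|28|root|29|" matches the byte sequence below:
printf 'uid=0(root)\n'
```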
Rule trigger:
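One way to trigger the rule is to fetch a page whose response body contains the matched string; testmynids.org hosts such a page for exactly this purpose (network access required; any HTTP response containing uid=0(root) works just as well):

```shell
# Fetch a test page whose body contains "uid=0(root)"; Suricata watching
# the interface should then log sid 2100498 to fast.log.
# Guarded: no-ops where curl is unavailable.
if command -v curl >/dev/null 2>&1; then
    curl -s http://testmynids.org/uid/index.html
fi
```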
This will display more detail about each alert, including metadata.
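The alert detail lives in the EVE JSON log; a sketch of pulling it out with jq (default log path; guarded so it no-ops when the log or jq is absent):

```shell
# Show alert events (with full metadata) from Suricata's EVE JSON log.
EVE=/var/log/suricata/eve.json
if [ -r "$EVE" ] && command -v jq >/dev/null 2>&1; then
    tail "$EVE" | jq 'select(.event_type == "alert")'
fi
```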
Stats:
The first example displays the number of packets captured by the kernel; the second example shows all of the statistics.
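Sketches of those two queries (default log path; guarded to no-op when the log or jq is absent):

```shell
EVE=/var/log/suricata/eve.json
if [ -r "$EVE" ] && command -v jq >/dev/null 2>&1; then
    # packets captured by the kernel, from the most recent stats record
    tail -n 1 "$EVE" | jq 'select(.event_type == "stats") | .stats.capture.kernel_packets'
    # the full statistics record
    tail -n 1 "$EVE" | jq 'select(.event_type == "stats")'
fi
```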
CHAPTER THREE: INSTALLATION
Before Suricata can be used it has to be installed. Suricata can be installed on various distributions using binary
packages: Binary packages.
For people familiar with compiling their own software, the Source method is recommended.
Advanced users can check the advanced guides, see Arch Based.
3.1 Source
Installing from the source distribution files gives the most control over the Suricata installation.
The Suricata source distribution files should be verified before building the source, see Verifying Suricata Source
Distribution Files.
Basic steps:
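The canonical sequence is configure, make, install (a sketch, run from the unpacked source directory; the guard makes it a no-op anywhere else):

```shell
# Build and install from source with the default prefix (/usr/local).
if [ -x ./configure ]; then
    ./configure
    make
    sudo make install
    sudo ldconfig
fi
```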
This will install Suricata into /usr/local/bin/, use the default configuration in /usr/local/etc/suricata/ and
will write its log output to /usr/local/var/log/suricata/.
--enable-geoip
Enables GeoIP support for detection.
--enable-dpdk
Enables the DPDK packet capture method.
Note:
To install the minimal dependencies, most distributions require enabling an extra package repository. Depending on
your distribution, this can be done in one of the following ways:
Compilation
Follow these steps from your Suricata directory:
Rust support
Rust packages can be found in package managers, but some distributions don't provide Rust or provide
outdated Rust packages. If the packaged version is insufficient, you can install Rust directly from the Rust
project itself:
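The standard rustup installer can be used for that (the URL is the official rustup one; the snippet is gated behind an explicit variable so it never runs unintentionally):

```shell
# Install Rust via rustup (network access required).
if [ "${INSTALL_RUST:-no}" = "yes" ]; then
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
fi
```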
3.1.3 Auto-Setup
You can also use the available auto-setup features of Suricata:
make install-conf performs the regular "make install" and then automatically creates/sets up all the necessary
directories and a suricata.yaml for you.
make install-rules performs the regular "make install" and then automatically downloads and sets up the latest
ruleset from Emerging Threats available for Suricata.
make install-full combines everything mentioned above (install-conf and install-rules) and presents you with
a ready-to-run (configured and set up) Suricata.
Note:
Upgrading
To upgrade:
Remove
To remove Suricata from your system:
Note:
If you want Suricata with built-in (enabled) debugging, you can install the debug package:
If you would like to help test the Release Candidate (RC) packages, the same procedures apply, just using another PPA:
suricata-beta:
You can use both the suricata-stable and suricata-beta repositories together. Suricata will then always be the latest
release, stable or beta.
OISF launchpad: suricata-beta.
Daily Releases
Note:
If you would like to help test the daily build packages from our latest git(dev) repository, the same procedures as above
apply, just using another PPA, suricata-daily:
Note:
Please keep in mind that this is packaged from our latest development git master and is therefore potentially unstable.
We do our best to make others aware of continuing development and items within the engine that are not yet complete
or optimal. With this in mind, please refer to Suricata's issue tracker on Redmine for an up-to-date list of what we
are working on, planned roadmap, and to report issues.
3.2.2 Debian
Note:
In the "stable" version of Debian, Suricata is usually not available in the latest version. A more recent version is often
available from Debian backports, if it can be built there.
To use backports, the backports repository for the current stable distribution needs to be added to the system-wide
sources list. For Debian 10 (buster), for instance, run the following as root:
Note:
CentOS 7
Fedora
Note:
To start Suricata:
To stop Suricata:
To reload rules:
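With the packaged systemd unit, the usual commands are sketched below (suricatasc is Suricata's unix-socket client; its reload-rules command asks a running Suricata to reload the ruleset). Guards keep the snippet inert where the tools or config are absent:

```shell
if command -v systemctl >/dev/null 2>&1 && [ -f /etc/suricata/suricata.yaml ]; then
    sudo systemctl start suricata     # start
    sudo systemctl stop suricata      # stop
fi
if command -v suricatasc >/dev/null 2>&1; then
    sudo suricatasc -c reload-rules   # live rule reload via the unix socket
fi
```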
yay -S suricata
CHAPTER FOUR: UPGRADING
Note:
• Stats counters that are 0 can now be hidden from EVE logs. Default behavior still logs those (see EVE Output -
Stats for configuration setting).
• SDP parser and logger have been introduced. Because SDP is encapsulated within other protocols, such as SIP,
it cannot be directly enabled or disabled. Instead, both the SDP parser and logger depend on being invoked
by another parser (or logger).
• ARP decoder and logger have been introduced. Since ARP can be quite verbose and produce many events, the
logger is disabled by default.
• It is possible to see an increase of alerts, for the same rule-sets, if you use many stream/payload rules, due to
Suricata triggering TCP stream reassembly earlier.
• New transform from_base64 that base64 decodes a buffer and passes on the decoded buffer. It's recommended
that from_base64 be used instead of base64_decode.
• Datasets of type String now include the length of the strings to determine if the memcap value is reached. This
may lead to memcaps being hit for older setups that didn't take that into account. For more details, check
https://redmine.openinfosecfoundation.org/issues/3910
• DNS logging has been modified to be more consistent across requests, responses and alerts. See DNS Logging
Changes for 8.0.
• PF_RING support has been moved to a plugin. See PF_RING plugin.
• LDAP parser and logger have been introduced.
• The following sticky buffers for matching SIP headers have been implemented:
– sip.via
– sip.from
– sip.to
– sip.content_type
– sip.content_length
• Napatech support has been moved to a capture plugin. See Napatech plugin.
4.2.2 Removals
• The ssh keywords ssh.protoversion and ssh.softwareversion have been removed.
4.2.3 Deprecations
• The http-log output is now deprecated and will be removed in Suricata 9.0.
• The tls-log output is now deprecated and will be removed in Suricata 9.0.
• The syslog output is now deprecated and will be removed in Suricata 9.0. Note that this is the standalone
syslog output and does not affect the EVE output's ability to send to syslog.
4.3.3 Removals
• The libprelude output plugin has been removed.
• EVE DNS v1 logging support has been removed. If still using EVE DNS v1 logging, see the manual section on
DNS logging configuration for the current configuration options: DNS EVE Configuration
This merging of custom headers in the http object could result in custom headers overwriting standard fields in
the http object, or a response header overwriting a request header.
To prevent the possibility of fields being overwritten, all custom headers are now logged into the
request_headers and response_headers arrays to avoid any chance of collision. This also facilitates the
logging of headers that may appear multiple times, with each occurrence being logged in future releases (see
note below).
While these arrays are not new in Suricata 7.0, they had previously been used exclusively for the
dump-all-headers option.
As of Suricata 7.0, the above configuration example will now be logged like:
"http": {
"hostname": "suricata.io",
"http_method": "GET",
"protocol": "HTTP/1/1",
"response_headers": [
{ "name": "Server", "value": "nginx" }
]
}
Note:
Currently, if the same HTTP header is seen multiple times, the values are concatenated into a comma-
separated value.
For more information, refer to: https://redmine.openinfosecfoundation.org/issues/1275.
• Engine logging/output now uses separate defaults for console and file, to provide a cleaner output on the
console.
Defaults are:
– console: %D: %S: %M
– file: [%i - %m] %z %d: %S: %M
The console output also changes based on verbosity level.
4.3.5 Deprecations
• Multiple "include" fields in the configuration file will now issue a warning and in Suricata 8.0 will not be
supported. See Includes for documentation on including multiple files.
• For AF-Packet, the cluster_rollover setting is no longer supported. Configuration settings using
cluster_rollover will cause a warning message and act as though cluster_flow was specified. Please update
your configuration settings.
4.4.2 Removals
• File-store v1 has been removed. If using file extraction, the file-store configuration will need to be updated to
version 2. See Update File-store v1 Configuration to V2.
• Individual Eve (JSON) loggers have been removed. For example, stats-json, dns-json, etc. Use multiple
Eve logger instances if this behavior is still required. See Multiple Logger Instances.
• Unified2 has been removed. See unified2-removed.
4.4.3 Performance
• In YAML files without a flow-timeouts.tcp.closed setting, the default went from 0 to 10 seconds. This may lead to
higher than expected TCP memory use: https://redmine.openinfosecfoundation.org/issues/6552
4.5.2 Removals
• dns-log, the text dns log. Use EVE.dns instead.
• file-log, the non-EVE JSON file log. Use EVE.files instead.
• drop-log, the non-EVE JSON drop log.
See https://suricata.io/about/deprecation-policy/
CHAPTER FIVE: SECURITY CONSIDERATIONS
Suricata is a security tool that processes untrusted network data and also requires elevated system privileges to
acquire that data. This combination deserves extra security precautions, which we discuss below.
Additionally, supply chain attacks, particularly around rule distribution, could potentially target Suricata installations.
Note:
If using the Suricata RPMs, either from the OISF COPR repo, or the EPEL repo, the following is already configured
for you. The only thing you might want to do is add your management user to the suricata group.
Many Suricata examples and guides show Suricata running as the root user, particularly when running on live
traffic. As Suricata generally needs low-level read (and, in IPS mode, write) access to network traffic, it must
start as root. However, Suricata can drop down to a non-root user after startup, which limits the impact of a
security vulnerability in Suricata itself.
Note:
Currently the ability to drop root privileges after startup is only available on Linux systems.
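A typical way to create that account is sketched below (the nologin shell path varies by distribution; run as root):

```shell
# Create a "suricata" system user and group with no home directory and
# no login shell. Guarded: only runs as root, and only if the user does
# not already exist.
if [ "$(id -u)" -eq 0 ] && ! id suricata >/dev/null 2>&1; then
    useradd --no-create-home --system --shell /usr/sbin/nologin suricata
fi
```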
This will create a user and group with the name suricata.
Directory            Permissions
/etc/suricata        Read
/var/log/suricata    Read, Write
/var/lib/suricata    Read, Write
/var/run/suricata    Read, Write
• /var/log/suricata:
• /var/lib/suricata:
• /var/run/suricata:
run-as:
user: suricata
group: suricata
5.2 Containers
Containers such as Docker and Podman are other methods to provide isolation between Suricata and the host machine
running Suricata. However, we still recommend running as a non-root user, even in containers.
5.2.1 Capabilities
For both Docker and Podman the following capabilities should be provided to the container running Suricata for proper
operation:
5.2.2 Podman
Unfortunately, Suricata will not work with rootless Podman; this is due to Suricata's requirement to start with root
privileges to gain access to the network interfaces. However, if started with the above capabilities and configured to
run as a non-root user, it will drop root privileges before processing network data.
CHAPTER SIX: SUPPORT STATUS
6.1.1 Tier 1
Tier 1 supported items are developed and supported by the Suricata team. These items receive full CI (continuous
integration) coverage, and functional failures block git merges and releases. Tier 1 features are enabled by default on
platforms that support the feature.
6.1.2 Tier 2
Tier 2 supported items are developed and supported by the Suricata team, sometimes with help from community
members. Major functional failures block git merges and releases; however, less severe issues may be documented as
"known issues" and may go into a release. Tier 2 features and functionality may be disabled by default.
6.1.3 Community
When a feature of Suricata is community supported, it means the OISF/Suricata development team won’t directly
support it. This is to avoid overloading the team.
When such a feature is accepted into the code base anyway, it comes with a number of limits and conditions:
• the submitter must commit to maintaining it:
– make sure the code compiles and functions correctly after Suricata and/or external (e.g. library) changes.
– support users when they encounter problems on the forum and in Redmine tickets.
• the code will be disabled by default and will not become part of the QA setup. This means it will be enabled
only by an --enable configure flag.
• the code may not have CI coverage by the OISF infrastructure.
If the feature gets lots of traction, and/or if the team just considers it very useful, it may get ‘promoted’ to being officially
supported.
On the other hand, the feature will be removed if the submitter stops maintaining it and no-one steps up to take over.
6.1.4 Vendor
Vendor supported features are features specific to a certain vendor and usually require software and/or hardware from
that vendor. While these features may exist in the main Suricata code, they rely on support from the vendor to keep the
feature in a functional state.
Vendor supported functionality will generally not have CI or QA coverage by the OISF.
6.1.5 Unmaintained
When a feature is unmaintained, it is very likely broken and may be (partially) removed during cleanups and code
refactoring. No end-user support is provided by the core team. If someone wants to help maintain and support such a
feature, we recommend talking to the core team before spending a lot of time on it.
Please see Contributing to Suricata for more information if you wish to contribute.
6.2 Distributions
6.2.1 Tier 1
These tier 1 supported Linux distributions and operating systems receive full CI and QA, as well as documentation.
6.2.2 Tier 2
These tier 2 supported Linux distributions and operating systems receive CI but not full QA (functional testing).
6.3.2 Tier 2

6.3.3 Community

[The architecture support and operation mode tables (Tier 1 / Tier 2 / Community / Vendor / Unmaintained) did not survive text extraction.]
CHAPTER SEVEN
--include /etc/suricata/other.yaml
-T
Test configuration.
-v
Increase the verbosity of the Suricata application logging by increasing the log level from the default. This option
can be passed multiple times to further increase the verbosity.
• -v: INFO
• -vv: PERF
• -vvv: CONFIG
• -vvvv: DEBUG
This option will not decrease the log level set in the configuration file if it is already more verbose than the level
requested with this option.
-r <path>
Run in pcap offline mode (replay mode), reading packets from a pcap file. If <path> specifies a directory, all files in
that directory will be processed in order of modified time, maintaining flow state between files.
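For example (a sketch; the capture path is a placeholder and the guard makes this a no-op where Suricata is not installed):

```shell
# Replay a capture file offline; logs (eve.json, fast.log, ...) are
# written to the directory given with -l.
if command -v suricata >/dev/null 2>&1; then
    mkdir -p /tmp/suricata-replay
    suricata -r /path/to/capture.pcap -l /tmp/suricata-replay
fi
```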
--pcap-file-continuous
Used with the -r option to indicate that the mode should stay alive until interrupted. This is useful with directories,
to add new files without resetting flow state between files.
--pcap-file-recursive
Used with the -r option when the path provided is a directory. This option enables recursive traversal into
subdirectories to a maximum depth of 255. This option cannot be combined with --pcap-file-continuous. Symlinks
are ignored.
--pcap-file-delete
Used with the -r option to indicate that the mode should delete pcap files after they have been processed. This is
useful with pcap-file-continuous to continuously feed files to a directory and have them cleaned up when done.
If this option is not set, pcap files will not be deleted after processing.
--pcap-file-buffer-size <value>
Set read buffer size using setvbuf to speed up pcap reading. Valid values are 4 KiB to 64 MiB. Default value
is 128 KiB. Supported on Linux only.
-i <interface>
After the -i option, specify the network interface you would like to use to sniff packets from. This option will
try to use the best capture method available. It can be used several times to sniff packets from several interfaces.
--pcap[=<device>]
Run in PCAP mode. If no device is provided the interfaces provided in the pcap section of the configuration file
will be used.
--af-packet[=<device>]
Enable capture of packets using AF_PACKET on Linux. If no device is supplied, the list of devices from the
af-packet section in the yaml is used.
--af-xdp[=<device>]
Enable capture of packets using AF_XDP on Linux. If no device is supplied, the list of devices from the af-xdp
section in the yaml is used.
-q <queue id>
Run inline on the NFQUEUE queue ID provided. May be provided multiple times.
-s <filename.rules>
With the -s option you can set a file with signatures, which will be loaded together with the rules set in the yaml.
It is possible to use globbing when specifying rules files. For example, -s '/path/to/rules/*.rules'
-S <filename.rules>
With the -S option you can set a file with signatures, which will be loaded exclusively, regardless of the rules set
in the yaml.
It is possible to use globbing when specifying rules files. For example, -S '/path/to/rules/*.rules'
-l <directory>
With the -l option you can set the default log directory. If default-log-dir is also set in the yaml, Suricata will
ignore it and use the directory given with the -l option instead. If you do not set a directory with the -l option,
Suricata will use the directory that is set in the yaml.
-D
Normally, running Suricata on your console keeps the console occupied: you cannot use it for other purposes,
and when you close the window, Suricata stops running. If you run Suricata as a daemon (using the -D option),
it runs in the background and you will be able to use the console for other tasks without disturbing the running
engine.
--runmode <runmode>
With the --runmode option you can set the runmode that you would like to use. This command line option can
override the yaml runmode option.
Runmodes are: workers, autofp and single.
For more information about runmodes see Runmodes in the user guide.
-F <bpf filter file>
Use BPF filter from file.
-k [all|none]
Force (all) the checksum check or disable (none) all checksum checks.
--user=<user>
Set the process user after initialization. Overrides the user provided in the run-as section of the configuration
file.
--group=<group>
Set the process group to group after initialization. Overrides the group provided in the run-as section of the
configuration file.
--pidfile <file>
Write the process ID to file. Overrides the pid-file option in the configuration file and forces the file to be written
when not running as a daemon.
--init-errors-fatal
Exit with a failure when errors are encountered loading signatures.
--strict-rule-keywords[=all|<keyword>|<keywords(csv)>]
Applies to: classtype, reference and app-layer-event.
By default, missing reference or classtype values are warnings and not errors. Additionally, loading outdated
app-layer-event events is also treated as a warning instead of an error.
If this option is enabled these warnings are considered errors.
If no value, or the value 'all', is specified, the option applies to all of the keywords above. Alternatively, a comma
separated list can be supplied with the keyword names it should apply to.
--disable-detection
Disable the detection engine.
--disable-hashing
Disable support for hash algorithms such as md5, sha1 and sha256.
By default hashing is enabled. Disabling hashing will also disable some Suricata features such as the filestore,
ja3, and rule keywords that use hash algorithms.
--dump-config
Dump the configuration loaded from the configuration file to the terminal and exit.
--dump-features
Dump the features provided by Suricata modules and exit. Features list (a subset of) the configuration values and
are intended to assist with comparing provided features with those required by one or more rules.
--build-info
Display the build information Suricata was built with.
--list-app-layer-protos
List all supported application layer protocols.
--list-keywords=[all|csv|<kword>]
List all supported rule keywords.
--list-runmodes
List all supported run modes.
--set <key>=<value>
Set a configuration value. Useful for overriding basic configuration parameters. For example, to change the
default log directory:
--set default-log-dir=/var/tmp
This option cannot be used to add new entries to a list in the configuration file, such as a new output. It can only
be used to modify a value in a list that already exists.
For example, to disable the eve-log in the default configuration file:
--set outputs.1.eve-log.enabled=no
Also note that the index values may change as the suricata.yaml is updated.
See the output of --dump-config for existing values that could be modified with their index.
--engine-analysis
Print reports on analysis of different sections in the engine and exit. See the configuration parameter
engine-analysis for the reports that can be printed.
--unix-socket=<file>
Use file as the Suricata unix control socket. Overrides the filename provided in the unix-command section of the
configuration file.
--reject-dev=<device>
Use device to send out RST / ICMP error packets with the reject keyword.
--pcap-buffer-size=<size>
Set the size of the PCAP buffer (0 - 2147483647).
--netmap[=<device>]
Enable capture of packets using NETMAP on FreeBSD or Linux. If no device is supplied, the list of devices from
the netmap section in the yaml is used.
--pfring[=<device>]
Enable PF_RING packet capture. If no device is provided, the devices in the Suricata configuration will be used.
--pfring-cluster-id <id>
Set the PF_RING cluster ID.
--pfring-cluster-type <type>
Set the PF_RING cluster type (cluster_round_robin, cluster_flow).
-d <divert-port>
Run inline using IPFW divert mode.
--dag <device>
Enable packet capture off a DAG card. If capturing off a specific stream, the stream can be selected using a device
name like "dag0:4". This option may be provided multiple times to read off multiple devices and/or streams.
--napatech
Enable packet capture using the Napatech Streams API.
--erf-in=<file>
Run in offline mode reading the specific ERF file (Endace extensible record format).
--simulate-ips
Simulate IPS mode when running in a non-IPS mode.
-u
Run the unit tests and exit. Requires that Suricata be configured with --enable-unittests.
-U, --unittest-filter=REGEX
With the -U option you can select which of the unit tests you want to run. This option uses REGEX. Example of
use: suricata -u -U http
--list-unittests
Lists available unit tests.
--fatal-unittests
Enables fatal failure on a unit test error. Suricata will exit instead of continuing more tests.
--unittests-coverage
Display unit test coverage report.
EIGHT
SURICATA RULES
8.1.1 Action
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP GET Request Containing Rule in URI";
flow:established,to_server; http.method; content:"GET"; http.uri; content:"rule"; fast_pattern; classtype:bad-unknown;
sid:123; rev:1;)
Valid actions are:
• alert - generate an alert.
• pass - stop further inspection of the packet.
• drop - drop packet and generate alert.
• reject - send RST/ICMP unreach error to the sender of the matching packet.
• rejectsrc - same as just reject.
• rejectdst - send RST/ICMP error packet to receiver of the matching packet.
Note
In IPS mode, using any of the reject actions also enables drop.
8.1.2 Protocol
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP GET Request Containing Rule in URI";
flow:established,to_server; http.method; content:"GET"; http.uri; content:"rule"; fast_pattern; classtype:bad-unknown;
sid:123; rev:1;)
This keyword in a signature tells Suricata which protocol it concerns. You can choose between four basic protocols:
• tcp (for tcp-traffic)
• udp
• icmp
• ip (ip stands for 'all' or 'any')
There are a couple of additional TCP related protocol options:
• tcp-pkt (for matching content in individual tcp packets)
• tcp-stream (for matching content only in a reassembled tcp stream)
There are also a few so-called application layer protocols, or layer 7 protocols you can pick from. These are:
• http (either HTTP1 or HTTP2)
• http1
• http2
• ftp
• tls (this includes ssl)
• smb
• dns
• dcerpc
• dhcp
• ssh
• smtp
• imap
• pop3
• modbus (disabled by default)
• dnp3 (disabled by default)
• enip (disabled by default)
• nfs
• ike
• krb5
• bittorrent-dht
• ntp
• rfb
• rdp
• snmp
• tftp
• sip
• websocket
The availability of these protocols depends on whether the protocol is enabled in the configuration file, suricata.yaml.
If you have a signature with the protocol declared as 'http', Suricata makes sure the signature will only match if the TCP
stream contains http traffic.
8.1.3 Source and destination
Operator Description
../.. IP ranges (CIDR notation)
! exception/negation
[.., ..] grouping
Normally, you would also make use of variables, such as $HOME_NET and $EXTERNAL_NET. The suricata.yaml
configuration file specifies the IP addresses these concern. The respective $HOME_NET and $EXTERNAL_NET settings
will be used in place of the variables in your rules.
See Rule-vars for more information.
Rule usage examples:
Example Meaning
!1.1.1.1 Every IP address but 1.1.1.1
![1.1.1.1, 1.1.1.2] Every IP address but 1.1.1.1 and 1.1.1.2
$HOME_NET Your setting of HOME_NET in yaml
[$EXTERNAL_NET, !$HOME_NET] EXTERNAL_NET and not HOME_NET
[10.0.0.0/24, !10.0.0.5] 10.0.0.0/24 except for 10.0.0.5
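As a sketch of how grouping and negation combine, here is a minimal Python illustration of the [10.0.0.0/24, !10.0.0.5] example (the matches helper is hypothetical, not Suricata code):

```python
import ipaddress

# [10.0.0.0/24, !10.0.0.5]: match addresses inside the /24,
# except the explicitly negated host.
net = ipaddress.ip_network('10.0.0.0/24')
excluded = ipaddress.ip_address('10.0.0.5')

def matches(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return ip in net and ip != excluded
```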
Warning
If HOME_NET is set to any and EXTERNAL_NET to !$HOME_NET, you cannot write a signature using $EXTERNAL_NET,
because it evaluates to 'not any', which is an invalid value.
Note
Please note that the source and destination address can also be matched via the ip.src and ip.dst keywords (See
IP Addresses Match). These keywords are mostly used in conjunction with the dataset feature (Datasets).
8.1.4 Ports (source and destination)
Operator Description
: port ranges
! exception/negation
[.., ..] grouping
Example Meaning
[80, 81, 82] port 80, 81 and 82
[80:82] Range from 80 to 82
[1024:] From 1024 to the highest port-number
!80 Every port but 80
[80:100,!99] Range from 80 to 100 with 99 excluded
[1:80,![2,4]] Range from 1-80, except ports 2 and 4
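The port expressions above can be read as simple set operations; a minimal Python sketch of the [80:100,!99] example (port_matches is illustrative only):

```python
def port_matches(port: int) -> bool:
    # [80:100,!99]: range 80 through 100, with 99 excluded
    return 80 <= port <= 100 and port != 99
```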
8.1.5 Direction
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP GET Request Containing Rule in URI";
flow:established,to_server; http.method; content:"GET"; http.uri; content:"rule"; fast_pattern; classtype:bad-unknown;
sid:123; rev:1;)
The directional arrow indicates which way the signature will be evaluated. In most signatures an arrow to the right (->)
is used. This means that only packets with the same direction can match. However, it is also possible to have a rule
match both directions (<>).
The following example illustrates direction. In this example there is a client with IP address 1.2.3.4 using port 1024.
A server with IP address 5.6.7.8, listening on port 80 (typically HTTP). The client sends a message to the server and
the server replies with its answer.
Only the traffic from the client to the server will be matched by this rule, as the direction specifies that we do not want
to evaluate the response packet.
8.1.6 Rule options
The rest of the rule consists of options. Keywords have a set format:
<keyword>: <settings>;
Some keywords appear without settings:
<keyword>;
Warning
Rule options have a specific ordering and changing their order would change the meaning of the rule.
Note
The characters ; and " have special meaning in the Suricata rule language and must be escaped when used in a rule
option value. For example:
msg:"Message with semicolon\;";
As a consequence, you must also escape the backslash, as it functions as an escape character.
The rest of this chapter documents the use of the various keywords. Some generic details about keywords follow.
Disabling Alerts
There is a way to disable alert generation for a rule using the keyword noalert. When this keyword is part of a rule,
no alert is generated if the other portions of the rule match. That is, the other rule actions will still be applied. Using
noalert can be helpful when a rule is collecting or setting state using flowbits, datasets or other state maintenance
constructs of the rule language. See Thresholding Keywords for other ways to control alert frequency.
The following rules demonstrate noalert with a familiar pattern:
• The first rule marks state without generating an alert.
• The second rule generates an alert if the state is set and additional qualifications are met.
alert http any any -> $HOME_NET any (msg:"noalert example: set state"; flow:established,to_server;
xbits:set,SC.EXAMPLE,track ip_dst,expire 10; noalert; http.method; content:"GET"; sid:1;)
alert http any any -> $HOME_NET any (msg:"noalert example: state use"; flow:established,to_server;
xbits:isset,SC.EXAMPLE,track ip_dst; http.method; content:"POST"; sid:2;)
In IPS mode, noalert is commonly used when Suricata should drop network packets without generating alerts (example
below). The following rule is a simplified example showing how noalert could be used with IPS deployments to drop
inbound SSH requests.
drop tcp any any -> any 22 (msg:"Drop inbound SSH traffic"; noalert; sid:3;)
Modifier Keywords
Some keywords act as modifiers. There are two types of modifiers.
• The older style 'content modifiers' look back in the rule, e.g.:
alert http any any -> any any (content:"index.php"; http_uri; sid:1;)
In the above example the pattern 'index.php' is modified to inspect the HTTP uri buffer.
• The more recent type is called the 'sticky buffer'. It places the buffer name first and all keywords following it
apply to that buffer, for instance:
alert http any any -> any any (http_response_line; content:"403 Forbidden"; sid:1;)
In the above example the pattern '403 Forbidden' is inspected against the HTTP response line because it follows
the http_response_line keyword.
Normalized Buffers
A packet consists of raw data. HTTP and stream reassembly make a copy of that data: they erase anomalous content,
combine packets, etcetera. What remains is called the 'normalized buffer'.
Because the data is being normalized, it is not what it used to be; it is an interpretation. Normalized buffers are: all
HTTP keywords, reassembled streams, and the TLS, SSL, SSH, FTP and DCERPC buffers.
Note that there are some exceptions, e.g. the http_raw_uri keyword. See http.uri for more information.
Examples:
To continue the example from the previous section, the msg component of the signature is shown below:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP GET Request Containing Rule in URI";
flow:established,to_server; http.method; content:"GET"; http.uri; content:"rule"; fast_pattern; classtype:bad-unknown;
sid:123; rev:1;)
Tip
It is a standard practice in rule writing to make the first part of the signature msg uppercase and to indicate the class
of the signature.
It is also standard practice that msg is the first keyword in the signature.
8.2.2 sid (signature ID)
The sid keyword gives every signature its own id. The format is:
sid:123;
Tip
It is a standard practice in rule writing that the signature sid is provided as the last keyword (or second-to-last if
there is a rev) of the signature.
There are reserved ranges of sids, the reservations are recorded at https://sidallocation.org/ .
Note
This value must be unique for all rules within the same rule group (gid).
As Suricata-update currently considers the rule's sid only (cf. Bug#5447), it is advisable to opt for a completely
unique sid altogether.
8.2.3 rev (revision)
The rev keyword represents the version of the signature. The format is:
rev:123;
Tip
It is a standard practice in rule writing that the rev keyword is expressed after the sid keyword. The sid and rev
keywords are commonly put as the last two keywords in a signature.
8.2.5 classtype
The classtype keyword gives information about the classification of rules and alerts. It consists of a short name, a long
name and a priority. It can tell for example whether a rule is just informational or is about a CVE. For each classtype,
the classification.config has a priority that will be used in the rule.
Example classtype definition from classification.config:
config classification: web-application-attack,Web Application Attack,1
Once we have defined the classification in the configuration file, we can use the classtypes in our rules. A rule with
classtype web-application-attack will be assigned a priority of 1 and the alert will contain 'Web Application Attack' in
the Suricata logs:
Tip
It is a standard practice in rule writing that the classtype keyword comes before the sid and rev keywords (as shown
in the example rule).
8.2.6 reference
The reference keyword is used to document where information about the signature and about the problem the signature
tries to address can be found. The reference keyword can appear multiple times in a signature. This keyword is meant
for signature-writers and analysts who investigate why a signature has matched. It has the following format:
reference:type,reference
reference:url,www.info.com
There are several systems that can be used as a reference. A commonly known example is the CVE database, which
assigns numbers to vulnerabilities; using a reference type prevents having to type the same URL over and over again.
An example reference of a CVE:
reference:cve,CVE-2014-1234
8.2.7 priority
The priority keyword comes with a mandatory numeric value which can range from 1 to 255. The values 1 through
4 are commonly used. The highest priority is 1. Signatures with a higher priority will be examined first. Normally
signatures have a priority determined through a classtype definition. The classtype definition can be overridden by
defining the priority keyword in the signature. The format of priority is:
priority:1;
8.2.8 metadata
The metadata keyword allows additional, non-functional, information to be added to the signature. While the format is
free-form, it is recommended to stick to [key, value] pairs as Suricata can include these in eve alerts. The format is:
8.2.9 target
The target keyword allows the rules writer to specify which side of the alert is the target of the attack. If specified, the
alert event is enhanced to contain information about source and target.
The format is:
target:[src_ip|dest_ip]
If the value is src_ip then the source IP in the generated event (src_ip field in JSON) is the target of the attack. If target
is set to dest_ip then the target is the destination IP in the generated event.
8.2.10 requires
The requires keyword allows a rule to require specific Suricata features to be enabled, or the Suricata version to match
an expression. Rules that do not meet the requirements will be ignored, and Suricata will not treat them as errors.
When parsing rules, the parser attempts to process the requires keywords before others. This allows it to occur after
keywords that may only be present in specific versions of Suricata, as specified by the requires statement. However,
the keywords preceding it must still adhere to the basic known formats of Suricata rules.
The format is:
requires: feature geoip;
To require multiple features, the feature sub-keyword must be specified multiple times:
requires: feature <feature1>, feature <feature2>;
Version expressions are also supported, for example:
requires: version >= 7.0.4 < 8 | >= 8.0.3
to express that a rule requires version 7.0.4 or greater, but less than 8, OR greater than or equal to 8.0.3. This could
be useful if a keyword wasn't added until 7.0.4 and the 8.0.3 patch release, as it would not exist in 8.0.1.
This can be extended to multiple release branches:
requires: version >= 7.0.10 < 8 | >= 8.0.5 < 9 | >= 9.0.3
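The '|'-separated expression above is an OR of half-open version ranges. As a rough illustration of those semantics (this is a hypothetical Python sketch, not Suricata's actual parser), versions can be modelled as tuples:

```python
def version_ok(ver, branches):
    # branches: (low, high) pairs of version tuples; high=None means unbounded.
    # Mirrors the semantics of ">= 7.0.10 < 8 | >= 8.0.5 < 9 | >= 9.0.3".
    return any(low <= ver and (high is None or ver < high)
               for low, high in branches)

branches = [((7, 0, 10), (8,)), ((8, 0, 5), (9,)), ((9, 0, 3), None)]
```

Each branch is satisfiable on its own; the rule loads if the running version falls into any of them.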
8.3 IP Keywords
8.3.1 ttl
The ttl keyword is used to check for a specific IP time-to-live value in the header of a packet. The format is:
ttl:<number>;
For example:
ttl:10;
alert ip $EXTERNAL_NET any -> $HOME_NET any (msg:"IP Packet With TTL 0"; ttl:0; classtype:misc-activity;
sid:1; rev:1;)
8.3.2 ipopts
With the ipopts keyword you can check if a specific IP option is set. ipopts has to be used at the beginning of a rule.
You can only match on one option per rule. The options on which you can match are:
IP Option Description
rr Record Route
eol End of List
nop No Op
ts Time Stamp
sec IP Security
esec IP Extended Security
lsrr Loose Source Routing
ssrr Strict Source Routing
satid Stream Identifier
any any IP options are set
ipopts: <name>;
For example:
ipopts: ts;
8.3.3 sameip
Every packet has a source IP-address and a destination IP-address. It can be that the source IP is the same as the
destination IP. With the sameip keyword you can check if the IP address of the source is the same as the IP address of
the destination. The format of the sameip keyword is:
sameip;
8.3.4 ip_proto
With the ip_proto keyword you can match on the IP protocol in the packet header. You can use the name or the number
of the protocol, for example ICMP (1), TCP (6) or UDP (17).
For the complete list of protocols and their numbers see http://en.wikipedia.org/wiki/List_of_IP_protocol_numbers
Example of ip_proto in a rule:
alert ip any any -> any any (msg:"IP Packet with protocol 1"; ip_proto:1; classtype:bad-unknown; sid:5; rev:1;)
The named variant of that example would be:
ip_proto:ICMP;
8.3.5 ipv4.hdr
Sticky buffer to match on content contained within an IPv4 header.
Example rule:
alert ip any any -> any any (msg:"IPv4 header keyword example"; ipv4.hdr; content:"|06|"; offset:9; depth:1; sid:1;
rev:1;)
This example checks whether byte 10 of the IPv4 header has value 06, which indicates that the IPv4 protocol is TCP.
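To see why offset:9; depth:1; lands on the protocol field, here is a hand-built 20-byte IPv4 header in Python (the addresses, lengths and zeroed checksum are made up for illustration):

```python
# Minimal IPv4 header: version/IHL, TOS, total length, identification,
# flags/fragment offset, TTL, protocol, checksum, source IP, destination IP.
hdr = bytes([0x45, 0x00, 0x00, 0x28,
             0x00, 0x01, 0x00, 0x00,
             0x40, 0x06, 0x00, 0x00,   # byte at offset 9 is protocol = 0x06 (TCP)
             10, 0, 0, 1,
             10, 0, 0, 2])

protocol = hdr[9]   # the single byte content:"|06|"; offset:9; depth:1; inspects
```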
8.3.6 ipv6.hdr
Sticky buffer to match on content contained within an IPv6 header.
Example rule:
alert ip any any -> any any (msg:"IPv6 header keyword example"; ipv6.hdr; content:"|06|"; offset:6; depth:1; sid:1;
rev:1;)
This example checks whether byte 7 of the IPv6 header has value 06, which indicates that the next header protocol is TCP.
8.3.7 id
With the id keyword, you can match on a specific IP ID value. The ID identifies each packet sent by a host and
usually increments by one with each packet that is sent. The IP ID is used as a fragment identification number: each
packet has an IP ID, and when the packet becomes fragmented, all fragments of this packet have the same ID. In this
way, the receiver of the packet knows which fragments belong to the same packet. (The IP ID does not determine the
order of the fragments; the fragment offset field does.)
Format of id:
id:<number>;
Example of id in a rule:
alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg:"id keyword example"; id:1; content:"content|3a 20|";
fast_pattern; classtype:misc-activity; sid:12; rev:1;)
8.3.8 geoip
The geoip keyword enables matching on the source, destination, or both IPv4 addresses of network traffic, to check
which country they belong to. To be able to do this, Suricata uses the GeoIP2 API of MaxMind.
The syntax of geoip:
geoip: src,RU;
geoip: both,CN,RU;
geoip: dst,CN,RU,IR;
geoip: both,US,CA,UK;
geoip: any,CN,IR;
Option Description
both Both source and destination have to match with the given geoip(s)
any Either the source or the destination has to match with the given geoip(s).
dest The destination matches with the given geoip.
src The source matches with the given geoip.
geoip currently only supports IPv4. As it uses the GeoIP2 API of MaxMind, libmaxminddb must be compiled in.
You must download and install the GeoIP2 or GeoLite2 database editions desired. Visit the MaxMind site at
https://dev.maxmind.com/geoip/geolite2-free-geolocation-data for details.
You must also supply the location of the GeoIP2 or GeoLite2 database file on the local system in the YAML-file
configuration (for example):
geoip-database: /usr/local/share/GeoIP/GeoLite2-Country.mmdb
8.3.9 fragbits (IP fragmentation)
With the fragbits keyword, you can check if the fragmentation and reserved bits are set in the IP header. The bits are:
M - More Fragments
D - Do not Fragment
R - Reserved Bit
Matching on these bits can be further specified with the following modifiers:
+ match on the specified bits, plus any others
* match if any of the specified bits are set
! match if the specified bits are not set
Format:
fragbits:[*+!]<[MDR]>;
8.3.10 fragoffset
With the fragoffset keyword you can match on specific decimal values of the IP fragment offset field. If you would
like to check the first fragments of a session, you have to combine fragoffset 0 with the More Fragments option. The
fragmentation offset field is convenient for reassembly: the id is used to determine which fragments belong to which
packet and the fragmentation offset field clarifies the order of the fragments.
You can use the following modifiers:
< match if the value is smaller than the specified value
> match if the value is greater than the specified value
! match if the specified value doesn't match
Format of fragoffset:
fragoffset:[!|<|>]<number>;
8.3.11 tos
The tos keyword can match on specific decimal values of the IP header TOS field. The tos keyword can have a value
from 0-255. This field of the IP header has been updated by rfc2474 to include functionality for Differentiated Services.
Note that the value of the field has been defined with the right-most 2 bits having the value 0. When specifying a value
for tos, ensure that the value follows this.
E.g., instead of specifying the decimal value 34 (hex 22), shift it left twice and use decimal 136 (hex 88).
You can specify hexadecimal values with a leading x, e.g., x88.
Format of tos:
tos:[!]<number>;
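A quick Python check of the example values; the shift moves the intended value past the two right-most (ECN) bits:

```python
dscp = 34               # the value you conceptually want (hex 22)
tos_value = dscp << 2   # shift past the two right-most bits, which must be 0
# tos:136; (or tos:x88;) is what the rule should then specify
```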
8.4 TCP Keywords
8.4.1 tcp.flags
The tcp.flags keyword checks for specific TCP flag bits. The following flags are available:
Flag Description
F FIN - Finish
S SYN - Synchronize sequence numbers
R RST - Reset
P PSH - Push
A ACK - Acknowledgment
U URG - Urgent
C CWR - Congestion Window Reduced
E ECE - ECN-Echo
0 No TCP Flags Set
Modifier Description
+ match on the bits, plus any others
* match if any of the bits are set
! match if the bits are not set
To handle writing rules for session initiation packets such as ECN where a SYN packet is sent with CWR and ECE
flags set, an option mask may be used by appending a comma and masked values. For example, a rule that checks for
a SYN flag, regardless of the values of the reserved bits is tcp.flags:S,CE;
Format of tcp.flags:
tcp.flags:[modifier]<test flags>[,<ignore flags>];
Example:
alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg:"Example tcp.flags sig"; tcp.flags:FPU,CE;
classtype:misc-activity; sid:1; rev:1;)
It is also possible to use the tcp.flags content as a fast_pattern by using the prefilter keyword. For more information on
prefilter usage see Prefiltering Keywords.
Example:
alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg:"Example tcp.flags sig"; tcp.flags:FPU,CE; prefilter;
classtype:misc-activity; sid:1; rev:1;)
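The mask behaviour of tcp.flags:S,CE; can be sketched with plain bit operations; the flags_match helper below is illustrative only, not Suricata internals:

```python
# TCP flag bits in header order (FIN is the lowest bit)
FIN, SYN, RST, PSH, ACK, URG, ECE, CWR = (1 << n for n in range(8))

def flags_match(pkt_flags: int, wanted: int, mask: int = 0) -> bool:
    # tcp.flags:S,CE -> wanted = SYN, with the CWR and ECE bits ignored
    return (pkt_flags & ~mask) == wanted
```

An ECN session-initiation packet sets SYN together with CWR and ECE, which is exactly why the mask is needed.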
8.4.2 seq
The seq keyword can be used in a signature to check for a specific TCP sequence number. A sequence number is
a number that is generated practically at random by both endpoints of a TCP connection. Client and server each
create a sequence number, which increases by one for every byte they send; consequently, this sequence number is
different for both sides and has to be acknowledged by the other side of the connection. Through sequence numbers,
TCP handles acknowledgement, ordering and retransmission. The sequence number helps keep track of where in the
data stream a byte belongs. If the SYN flag is set, the sequence number of the first data byte is this number plus 1.
Example:
seq:0;
8.4.3 ack
The ack is the acknowledgement of the receipt of all previous (data) bytes sent by the other side of the TCP connection.
In most occasions, every packet of a TCP connection has an ACK flag after the first SYN, and an ACK number which
increases with the receipt of every new data byte. The ack keyword can be used in a signature to check for a specific
TCP acknowledgement number.
Format of ack:
ack:1;
8.4.4 window
The window keyword is used to check for a specific TCP window size. The TCP window size is a mechanism that
controls the flow of data. The window is set by the receiver (receiver advertised window size) and indicates the number
of bytes that can be received. This amount of data has to be acknowledged by the receiver before the sender can
send the same amount of new data. This mechanism is used to prevent the receiver from being flooded with data. The
value of the window size is limited and can be 2 to 65,535 bytes. To make more use of your bandwidth you can use a
bigger TCP window.
The format of the window keyword:
window:[!]<number>;
8.4.5 tcp.mss
Match on the TCP MSS option value. Will not match if the option is not present.
tcp.mss uses an unsigned 16-bit integer.
The format of the keyword:
tcp.mss:<min>-<max>;
tcp.mss:[<|>]<number>;
tcp.mss:<value>;
Example rule:
alert tcp $EXTERNAL_NET any -> $HOME_NET any (flow:stateless; flags:S,12; tcp.mss:<536; sid:1234; rev:5;)
8.4.6 tcp.hdr
Sticky buffer to match on the whole TCP header.
Example rule:
alert tcp $EXTERNAL_NET any -> $HOME_NET any (flags:S,12; tcp.hdr; content:"|02 04|"; offset:20;
byte_test:2,<,536,0,big,relative; sid:1234; rev:5;)
This example starts looking after the fixed portion of the header, into the variable-sized options. There it looks for
the MSS option (type 2, option length 4) and uses byte_test to determine whether the value of the option is lower than
536. The tcp.mss keyword is more efficient, so tcp.hdr is meant for cases where no specific keyword is available.
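The option walk the rule performs can be mimicked in Python; the bytes below are a hand-made MSS option (kind 2, length 4) for illustration:

```python
import struct

# TCP options area of a SYN carrying MSS = 536
options = bytes([0x02, 0x04]) + struct.pack('!H', 536)

# content:"|02 04|" anchors on the option kind and length bytes;
# byte_test:2,<,536,0,big,relative then reads the following 2-byte value
mss = struct.unpack('!H', options[2:4])[0]
matches = mss < 536   # the byte_test comparison from the example rule
```

Here the value equals 536, so the strict less-than comparison from the rule does not match.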
8.6.1 itype
The itype keyword is for matching on a specific ICMP type (number). ICMP has several kinds of messages and uses
codes to clarify those messages. The different messages are distinguished by names, but more importantly by numeric
values. For more information, see the table of message types and codes at the IANA website.
itype uses an unsigned 8-bit integer.
The format of the itype keyword:
itype:min<>max;
itype:[<|>]<number>;
Example: This example looks for an ICMP type greater than 10:
itype:>10;
8.6.2 icode
With the icode keyword you can match on a specific ICMP code. The code of an ICMP message clarifies the message.
Together with the ICMP type it indicates what kind of problem you are dealing with. A code has a different purpose
with every ICMP type.
icode uses an unsigned 8-bit integer.
The format of the icode keyword:
icode:min<>max;
icode:[<|>]<number>;
Example: This example looks for an ICMP code greater than 5:
icode:>5;
alert icmp $HOME_NET any -> $EXTERNAL_NET any (msg:"GPL MISC Time-To-Live Exceeded in Transit";
icode:0; itype:11; classtype:misc-activity; sid:2100449; rev:7;)
For ICMP types where no codes are listed, only code 0 is defined, with the meaning of the type itself. A recent table
of all ICMP types and codes can be found at the website of IANA.
8.6.3 icmp_id
With the icmp_id keyword you can match on specific ICMP id values. Every ICMP packet gets an id when it is sent.
When the receiver has received the packet, it will send a reply using the same id, so the sender will recognize it and
connect it with the correct ICMP request.
Format of the icmp_id keyword:
icmp_id:<number>;
Example:
icmp_id:0;
8.6.4 icmp_seq
You can use the icmp_seq keyword to check for an ICMP sequence number. ICMP messages all have sequence numbers.
This can be useful (together with the id) for checking which reply message belongs to which request message.
Format of the icmp_seq keyword:
icmp_seq:<number>;
Example:
icmp_seq:0;
Note
Some pcap analysis tools, like Wireshark, may show both a little-endian and a big-endian value for icmp_seq. The
icmp_seq keyword matches on the big-endian value, because Suricata uses network byte order (big endian) to perform
the match comparison.
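The endianness note can be demonstrated with Python's struct module on a two-byte sequence field:

```python
import struct

raw = b'\x01\x00'                     # sequence field bytes as seen on the wire
big = struct.unpack('!H', raw)[0]     # network byte order: what icmp_seq matches
little = struct.unpack('<H', raw)[0]  # what some tools additionally display
```

For these bytes, a rule would need icmp_seq:256; to match, even if a tool displays the value 1.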
8.6.5 icmpv4.hdr
Sticky buffer to match on the whole ICMPv4 header.
8.6.6 icmpv6.hdr
Sticky buffer to match on the whole ICMPv6 header.
8.6.7 icmpv6.mtu
Match on the ICMPv6 MTU optional value. Will not match if the MTU is not present.
icmpv6.mtu uses an unsigned 32-bit integer.
The format of the keyword:
icmpv6.mtu:<min>-<max>;
icmpv6.mtu:[<|>]<number>;
icmpv6.mtu:<value>;
Example rule:
alert ip $EXTERNAL_NET any -> $HOME_NET any (icmpv6.mtu:<1280; sid:1234; rev:5;)
8.7.1 content
The content keyword is very important in signatures. Between the quotation marks you write what you would
like the signature to match. The simplest format of content is:
content: "............";
|61| is a
|61 61| is aa
|41| is A
|21| is !
|0D| is carriage return
|0A| is line feed
There are characters you cannot use in the content because they already have a special meaning in the signature. To match
on these characters you should use the hexadecimal notation. These are:
" |22|
; |3B|
: |3A|
| |7C|
content:"a|0D|bc";
content:"|61 0D 62 63|";
content:"a|0D|b|63|";
It is possible to let a signature check the whole payload for a match with the content or to let it check specific parts of
the payload. We come to that later. If you add nothing special to the signature, it will try to find a match in all the bytes
of the payload.
drop tcp $HOME_NET any -> $EXTERNAL_NET any (msg:"ET TROJAN Likely Bot Nick in IRC (USA +..)"; \
    flow:established,to_server; flowbits:isset,is_proto_irc; content:"NICK "; \
    pcre:"/NICK .*USA.*[0-9]{3,}/i"; reference:url,doc.emergingthreats.net/2008124; \
    classtype:trojan-activity; sid:2008124; rev:2;)
By default the pattern-matching is case sensitive. The content has to be accurate, otherwise there will not be a match.
You may see content:!"Firefox/3.6.13";. This means an alert will be generated if the Firefox version in use is not
3.6.13.
8.7.2 nocase
If you do not want to make a distinction between uppercase and lowercase characters, you can use nocase. The keyword
nocase is a content modifier.
The format of this keyword is:
nocase;
You have to place it after the content you want to modify, like:
Example nocase:
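The example itself is not shown in this extract; a minimal sketch (the pattern is illustrative):

```
content:"abc"; nocase;
```

This matches abc, AbC, ABC, and any other mix of upper- and lowercase.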
8.7.3 depth
The depth keyword is an absolute content modifier. It comes after the content. The depth content modifier comes with
a mandatory numeric value, like:
depth:12;
The number after depth designates how many bytes from the beginning of the payload will be checked.
Example:
8.7.4 startswith
The startswith keyword is similar to depth. It takes no arguments and must follow a content keyword. It modifies
the content to match exactly at the start of a buffer.
Example:
content:"GET|20|"; startswith;
startswith cannot be mixed with depth, offset, within or distance for the same pattern.
8.7.5 endswith
The endswith keyword is similar to isdataat:!1,relative;. It takes no arguments and must follow a content
keyword. It modifies the content to match exactly at the end of a buffer.
Example:
content:".php"; endswith;
content:".php"; isdataat:!1,relative;
endswith cannot be mixed with offset, within or distance for the same pattern.
8.7.6 offset
The offset keyword designates from which byte in the payload Suricata will start checking for a match. For instance,
offset:3; checks from the fourth byte onward.
The keywords offset and depth can be combined and are often used together.
For example:
If this was used in a signature, it would check the payload from the third byte till the sixth byte.
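The example rule referred to above is not shown in this extract; a sketch consistent with the description (pattern and sid are illustrative):

```
alert tcp any any -> any any (msg:"offset and depth combined"; \
    content:"def"; offset:3; depth:3; sid:1;)
```

Only the region of the payload selected by the offset and depth combination is inspected for def.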
8.7.7 distance
The keyword distance is a relative content modifier. This means it indicates a relation between this content keyword
and the content preceding it. Distance has its influence after the preceding match. The keyword distance comes with a
mandatory numeric value. The value you give distance, determines the byte in the payload from which will be checked
for a match relative to the previous match. Distance only determines where Suricata will start looking for a pattern.
So, distance:5; means the pattern can be anywhere after the previous match + 5 bytes. For limiting how far after the
last match Suricata needs to look, use 'within'.
The absolute value for distance must be less than or equal to 1MB (1048576).
Examples of distance:
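The examples are not shown in this extract; a minimal sketch (patterns are illustrative):

```
content:"abc"; content:"def"; distance:0;
```

With distance:0;, def may start anywhere after the end of the abc match.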
Distance can also be a negative number. It can be used to check for matches that partly overlap the previous content
(see example) or for a content entirely before it. This is not used very often, though; it is usually possible to attain
the same results with other keywords.
8.7.8 within
The keyword within is relative to the preceding match. The keyword within comes with a mandatory numeric value.
Using within makes sure there will only be a match if the content matches the payload within the set number of
bytes. Within cannot be 0 (zero).
The absolute value for within must be less than or equal to 1MB (1048576).
Example:
The second content has to fall within 3 bytes of the first content.
As mentioned before, distance and within can be very well combined in a signature. If you want Suricata to check a
specific part of the payload for a match, use within.
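A sketch combining the two keywords (patterns are illustrative):

```
content:"abc"; content:"def"; distance:0; within:3;
```

Here def has to fall entirely within the 3 bytes following the abc match.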
8.7.9 rawbytes
The rawbytes keyword has no effect but is included to be compatible with signatures that use it, for example signatures
used with Snort.
8.7.10 isdataat
The purpose of the isdataat keyword is to check whether there is still data at a specific position in the payload. The keyword
starts with a number (the position), optionally followed by 'relative' (separated by a comma) and the option rawbytes.
You use the word 'relative' to check whether there is still data at a specific position in the payload relative to the last match.
So you can use both examples:
isdataat:512;
isdataat:50, relative;
The first example illustrates a signature that checks for data at byte 512 of the payload. The second example illustrates a
signature that checks for data at byte 50 after the last match.
You can also use the negation (!) before isdataat.
8.7.11 bsize
With the bsize keyword, you can match on the length of the buffer. This adds precision to the content match; previously
this could only be done with isdataat.
bsize uses an unsigned 64-bit integer.
An optional operator can be specified; if no operator is present, the operator will default to '='. When a relational
operator is used, e.g., '<', '>' or '<>' (range), the bsize value will be compared using the relational operator. Ranges are
exclusive.
If one or more content keywords precedes bsize, each occurrence of content will be inspected and an error will
be raised if the content length and the bsize value prevent a match.
Format:
bsize:<number>;
bsize:=<number>;
bsize:<<number>;
bsize:><number>;
bsize:<lo-number><><hi-number>;
alert dns any any -> any any (msg:"bsize buffer less than or equal value"; dns.query; content:"google.com"; bsize:<=20;
sid:3; rev:1;)
alert dns any any -> any any (msg:"bsize buffer greater than value"; dns.query; content:"google.com"; bsize:>8; sid:4;
rev:1;)
alert dns any any -> any any (msg:"bsize buffer greater than or equal value"; dns.query; content:"google.com";
bsize:>=8; sid:5; rev:1;)
alert dns any any -> any any (msg:"bsize buffer range value"; dns.query; content:"google.com"; bsize:8<>20; sid:6;
rev:1;)
alert dns any any -> any any (msg:"test bsize rule"; dns.query; content:"short"; bsize:<10; sid:124; rev:1;)
alert dns any any -> any any (msg:"test bsize rule"; dns.query; content:"longer string"; bsize:>10; sid:125; rev:1;)
alert dns any any -> any any (msg:"test bsize rule"; dns.query; content:"middle"; bsize:6<>15; sid:126; rev:1;)
To emphasize how range works: in the example above, a match will occur if bsize is greater than 6 and less than 15.
8.7.12 dsize
With the dsize keyword, you can match on the size of the packet payload/data. You can use the keyword, for example,
to look for abnormal payload sizes: equal to some n ('dsize:n'), not equal ('dsize:!n'), less than ('dsize:<n'),
or greater than ('dsize:>n'). This may be convenient for detecting buffer overflows.
dsize cannot be used in combination with app/stream layer protocol keywords (e.g. http.uri).
dsize uses an unsigned 16-bit integer.
Format:
dsize:[<>!]number; || dsize:min<>max;
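A hypothetical example rule:

```
alert tcp any any -> any any (msg:"large TCP payload"; dsize:>512; sid:1;)
```

This alerts on any TCP packet whose payload is larger than 512 bytes.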
8.7.13 byte_test
The byte_test keyword extracts <num of bytes> and performs an operation selected with <operator> against the
value in <test value> at a particular <offset>. The <bitmask value> is applied to the extracted bytes (before
the operator is applied), and the final result will be right shifted one bit for each trailing 0 in the <bitmask value>.
Format:
<num of bytes>
  The number of bytes selected from the packet to be converted, or the name of a byte_extract/byte_math variable.
<operator>
  • [!] Negation can prefix other operators
  • < less than
  • > greater than
  • = equal
  • <= less than or equal
  • >= greater than or equal
  • & bitwise AND
  • ^ bitwise OR
Example:
alert tcp any any -> any any (msg:"Byte_Test Example - Compare to String"; \
content:"foobar"; byte_test:4,=,1337,1,relative,string,dec;)
8.7.14 byte_math
The byte_math keyword adds the capability to perform mathematical operations on extracted values with an existing
variable or a specified value.
When relative is included, there must be a previous content or pcre match.
Note: if oper is / and the divisor is 0, there will never be a match on the byte_math keyword.
The result can be stored in a result variable and referenced by other rule options later in the rule.
Keyword      Modifier
content      offset, depth, distance, within
byte_test    offset, value
byte_jump    offset
isdataat     offset
Format:
byte_math:bytes <num of bytes> | <variable-name>, offset <offset>, oper <operator>, \
    rvalue <rvalue>, result <result-var> [, relative] [, endian <type>] [, string <num_type>];
<num of bytes> The number of bytes selected from the packet, or the name of a byte_extract variable.
<offset> Number of bytes into the payload
oper <operator> Mathematical operation to perform: +, -, *, /, <<, >>
rvalue <rvalue> Value to perform the math operation with
result <result-var> Where to store the computed value
[relative] Offset relative to last content match
[endian <type>]
• big (Most significant byte at lowest address)
• little (Most significant byte at the highest address)
• dce (Allow the DCE module to determine the byte
order)
[string <num_type>]
• hex Converted data is represented in hex
• dec Converted data is represented in decimal
• oct Converted data is represented as octal
Example:
alert tcp any any -> any any \
(msg:"Testing bytemath_body"; \
content:"|00 04 93 F3|"; \
content:"|00 00 00 07|"; distance:4; within:4; \
byte_math:bytes 4, offset 0, oper +, rvalue \
8.7.15 byte_jump
The byte_jump keyword allows you to select <num of bytes> from an <offset> and move the detection
pointer to that position. Subsequent content matches will then be based off the new position.
Format:
<num of bytes> The number of bytes selected from the packet to be converted, or the name of a byte_extract/byte_math variable.
<offset> Number of bytes into the payload
[relative] Offset relative to last content match
[multiplier] <value> Multiply the converted bytes by <value>
[endian]
• big (Most significant byte at lowest address)
• little (Most significant byte at the highest address)
[string] <num_type>
• hex Converted data is represented in hex
• dec Converted data is represented in decimal
• oct Converted data is represented as octal
Example:
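The example is not shown in this extract; a hypothetical sketch:

```
alert tcp any any -> any any (msg:"byte_jump sketch"; \
    content:"|00 01|"; byte_jump:2,0,relative; \
    content:"|00 02|"; within:2; sid:1;)
```

Here a 2-byte length field directly after the first match is read, the detection pointer jumps forward by that value, and the second content is evaluated from the new position.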
8.7.16 byte_extract
The byte_extract keyword extracts <num of bytes> at a particular <offset> and stores it in <var_name>. The
value in <var_name> can be used in any modifier that takes a number as an option and in the case of byte_test it
can be used as a value.
Format:
<num of bytes> The number of bytes selected from the packet to be extracted
<offset> Number of bytes into the payload
<var_name> The name of the variable in which to store the value
[relative] Offset relative to last content match
multiplier <value> Multiply the extracted bytes by <value> before storing
[endian] Type of number being read:
  • big (Most significant byte at lowest address)
  • little (Most significant byte at the highest address)
[string] <num>
• hex - Converted string represented in hex
• dec - Converted string represented in decimal
• oct - Converted string represented in octal
Keyword Modifier
content offset,depth,distance,within
byte_test offset,value
byte_math rvalue
byte_jump offset
isdataat offset
Example:
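The example is not shown in this extract; a hypothetical sketch:

```
alert tcp any any -> any any (msg:"byte_extract sketch"; \
    content:"|00 04|"; byte_extract:2,0,len,relative; \
    content:"data"; within:len; sid:1;)
```

A 2-byte value is stored in the variable len and then reused as the within value of the next content, as allowed by the modifier table above.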
8.7.17 rpc
The rpc keyword can be used to match in the SUNRPC CALL on the RPC procedure numbers and the RPC version.
You can modify the keyword by using a wildcard, defined with *. With this wildcard you can match on all versions
and/or procedure numbers.
RPC (Remote Procedure Call) is an application that allows a computer program to execute a procedure on another
computer (or address space). It is used for inter-process communication. See http://en.wikipedia.org/wiki/Inter-process_communication
Format:
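The format specification is not shown in this extract; a sketch of the documented rpc syntax (application number, then version and procedure, each of which may be the * wildcard), followed by a hypothetical example:

```
rpc:<application number>, [<version number>|*], [<procedure number>|*];

alert udp any any -> any 111 (msg:"RPC portmap sketch"; rpc:100000,*,3; sid:1;)
```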
8.7.18 replace
The replace content modifier can only be used in IPS mode. It adjusts network traffic: it changes the content it follows ('abc')
into another ('def'), see example:
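A sketch of the substitution described (protocol and sid are assumptions; replace requires inline/IPS mode):

```
alert udp any any -> any any (msg:"replace sketch"; content:"abc"; replace:"def"; sid:1;)
```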
The replace modifier has to contain as many characters as the content it replaces. It can only be used with individual
packets. It will not work for normalized buffers like the HTTP URI or for a content match in the reassembled stream.
Checksums will be recalculated by Suricata after the replace keyword has been applied.
8.7.19 pcre (Perl Compatible Regular Expressions)
With the pcre keyword you can match on regular expressions. The format of the keyword:
pcre:"/<regex>/opts";
Example of pcre: in this example there will be a match if the payload contains six consecutive digits:
pcre:"/[0-9]{6}/";
These options are Perl-compatible modifiers. To use these modifiers, you should add them to pcre, after the closing
slash of the regex. Like this:
pcre: "/<regex>/i";
Suricata's modifiers
Suricata has its own specific pcre modifiers. These are:
• R: Match relative to the last pattern match. It is similar to distance:0;
• U: Makes pcre match on the normalized URI. It matches on the URI buffer just like uricontent and content combined
with http_uri. U can be combined with /R. Note that R is relative to the previous match so both matches have to
be in the HTTP URI buffer. Read more about HTTP URI Normalization.
• I: Makes pcre match on the HTTP-raw-uri. It matches on the same buffer as http_raw_uri. I can be combined
with /R. Note that R is relative to the previous match so both matches have to be in the HTTP-raw-uri buffer.
Read more about HTTP URI Normalization.
• P: Makes pcre match on the HTTP request body. So, it matches on the same buffer as http_client_body. P can be
combined with /R. Note that R is relative to the previous match so both matches have to be in the HTTP request
body.
• Q: Makes pcre match on the HTTP response body. So, it matches on the same buffer as http_server_body. Q
can be combined with /R. Note that R is relative to the previous match so both matches have to be in the HTTP
response body.
• H: Makes pcre match on the HTTP header. H can be combined with /R. Note that R is relative to the previous
match so both matches have to be in the HTTP header buffer.
• D: Makes pcre match on the unnormalized header. So, it matches on the same buffer as http_raw_header. D can
be combined with /R. Note that R is relative to the previous match so both matches have to be in the HTTP
raw-header buffer.
• M: Makes pcre match on the request method. So, it matches on the same buffer as http_method. M can be
combined with /R. Note that R is relative to the previous match so both matches have to be in the HTTP method
buffer.
• C: Makes pcre match on the HTTP cookie. So, it matches on the same buffer as http_cookie. C can be combined
with /R. Note that R is relative to the previous match so both matches have to be in the HTTP cookie buffer.
• S: Makes pcre match on the HTTP stat-code. So, it matches on the same buffer as http_stat_code. S can be
combined with /R. Note that R is relative to the previous match so both matches have to be in the HTTP stat-code
buffer.
• Y: Makes pcre match on the HTTP stat-msg. So, it matches on the same buffer as http_stat_msg. Y can be
combined with /R. Note that R is relative to the previous match so both matches have to be in the HTTP stat-msg
buffer.
• B: You can encounter B in signatures but this is just for compatibility. Suricata does not use B but supports
it so it does not cause errors.
• O: Overrides the configured pcre match limit.
• V: Makes pcre match on the HTTP User-Agent. So, it matches on the same buffer as http_user_agent. V can be
combined with /R. Note that R is relative to the previous match so both matches have to be in the HTTP
User-Agent buffer.
• W: Makes pcre match on the HTTP Host. So, it matches on the same buffer as http_host. W can be combined
with /R. Note that R is relative to the previous match so both matches have to be in the HTTP Host buffer.
bsize:integer value;
The integer value can be written in base 10, like 100, or as a hexadecimal value, like 0x64.
The most direct example is to match for equality, but there are different modes.
• Bitmask
• Negated Bitmask
Note: comparisons are strict by default. Ranges are thus exclusive. That means a range between 1 and 4 will match 2
and 3, but neither 1 nor 4. The negated range !1-4 will match 1 or below and 4 or above.
Examples:
bsize:19; # equality
bsize:=0x13; # equality
bsize:!0x14; # inequality
bsize:!=20; # inequality
bsize:>21; # greater than
bsize:>=21; # greater than or equal
bsize:<22; # less than
bsize:<=22; # less than or equal
bsize:19-22; # range between value1 and value2
bsize:!19-22; # negated range between value1 and value2
bsize:&0xc0=0x80; # bitmask mask is compared to value for equality
bsize:&0xc0!=0; # bitmask mask is compared to value for inequality
8.8.2 Enumerations
Some integers on the wire represent an enumeration, that is, some values have a string/meaning associated with them.
Rules can be written using one of these strings to check for equality. This is meant to make rules more human-readable
while matching equivalently.
Examples:
websocket.opcode:text;
websocket.opcode:1; # behaves the same
8.8.3 Bitmasks
Some integers on the wire represent multiple bits. Some of these bits have a string/meaning associated with them. Rules
can be written using a comma-separated list of these strings, where each item can be negated.
There is no right shift for trailing zeros applied here (even if there is one for byte_test and byte_math). That means
a rule with websocket.flags:&0xc0=2 will be rejected as invalid as it can never match.
Examples:
websocket.flags:fin,!comp;
websocket.flags:&0xc0=0x80; # behaves the same
8.9 Transformations
Transformation keywords turn the data in a sticky buffer into something else. Some transformations support options
for greater control over the transformation process.
Example:
This example will match on traffic even if there are one or more spaces between the navigate and (.
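The example rule itself is not shown in this extract; a sketch consistent with the description (buffer choice and sid are assumptions):

```
alert http any any -> any any (msg:"compress_whitespace sketch"; http.request_line; \
    compress_whitespace; content:"navigate ("; sid:1;)
```

After compress_whitespace, any run of whitespace between navigate and ( is reduced to a single space, so the single-space content matches.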
The transforms can be chained. They are processed in the order in which they appear in a rule. Each transform's output
acts as input for the next one.
Example:
alert http any any -> any any (http_request_line; compress_whitespace; to_sha256; \
content:"|54A9 7A8A B09C 1B81 3725 2214 51D3 F997 F015 9DD7 049E E5AD CED3 945A FC79␣
˓→7401|"; sid:1;)
8.9.1 dotprefix
Takes the buffer and prepends a . character to help facilitate concise domain checks. For example, an input string
of hello.google.com would be modified to become .hello.google.com. Additionally, adding the dot allows
google.com to be matched with content:".google.com".
Example:
This example will match on windows.update.microsoft.com and maps.microsoft.com.au but not
windows.update.fakemicrosoft.com.
This rule can be used to match on the domain only; example:
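The two example rules are not shown in this extract; sketches consistent with the descriptions (sids are illustrative):

```
alert dns any any -> any any (msg:"dotprefix sketch"; dns.query; dotprefix; \
    content:".microsoft.com"; sid:1;)

alert dns any any -> any any (msg:"dotprefix domain-only sketch"; dns.query; dotprefix; \
    content:".microsoft.com"; endswith; sid:2;)
```

The first matches any name ending in .microsoft.com or a subdomain of it; the second, with endswith, restricts the match to the domain itself.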
8.9.2 strip_whitespace
Strips all whitespace as considered by the isspace() call in C.
Example:
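The example is not shown in this extract; a hypothetical sketch (buffer and pattern are assumptions):

```
alert http any any -> any any (msg:"strip_whitespace sketch"; http.request_body; \
    strip_whitespace; content:"nospaces"; sid:1;)
```

The content matches even if the original data contains whitespace, e.g. "no  spaces".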
8.9.3 compress_whitespace
Compresses all consecutive whitespace into a single space.
8.9.4 to_lowercase
Converts the buffer to lowercase and passes the value on.
This example alerts if http.uri contains this text has been converted to lowercase
Example:
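A sketch consistent with the description (sid is illustrative):

```
alert http any any -> any any (msg:"to_lowercase sketch"; http.uri; to_lowercase; \
    content:"this text has been converted to lowercase"; sid:1;)
```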
8.9.5 to_md5
Takes the buffer, calculates the MD5 hash and passes the raw hash value on.
Example:
8.9.6 to_uppercase
Converts the buffer to uppercase and passes the value on.
This example alerts if http.uri contains THIS TEXT HAS BEEN CONVERTED TO UPPERCASE
Example:
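A sketch consistent with the description (sid is illustrative):

```
alert http any any -> any any (msg:"to_uppercase sketch"; http.uri; to_uppercase; \
    content:"THIS TEXT HAS BEEN CONVERTED TO UPPERCASE"; sid:1;)
```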
8.9.7 to_sha1
Takes the buffer, calculates the SHA-1 hash and passes the raw hash value on.
Example:
8.9.8 to_sha256
Takes the buffer, calculates the SHA-256 hash and passes the raw hash value on.
Example:
8.9.9 pcrexform
Takes the buffer, applies the required regular expression, and outputs the first captured expression.
alert http any any -> any any (msg:"HTTP with pcrexform"; http.request_line; \
pcrexform:"[a-zA-Z]+\s+(.*)\s+HTTP"; content:"/dropper.php"; sid:1;)
8.9.10 url_decode
Decodes URL-encoded data, i.e. replacing '+' with space and '%HH' with its value. This does not decode Unicode '%uZZZZ'
encoding.
8.9.11 xor
Takes the buffer, applies xor decoding.
Note: this transform requires a mandatory option: the hexadecimal-encoded XOR key.
This example alerts if http.uri contains password= XORed with the 4-byte key 0d0ac8ff. Example:
alert http any any -> any any (msg:"HTTP with xor"; http.uri; \
xor:"0d0ac8ff"; content:"password="; sid:1;)
8.9.12 header_lowercase
This transform is meant for HTTP/1 and HTTP/2 header name normalization. It lowercases the header names, while
keeping the header values untouched.
The implementation uses a state machine:
• it lowercases until it finds :
• it leaves the buffer unchanged until it finds a newline, then switches back to the first state
This example alerts for both HTTP/1 and HTTP/2 with an authorization header. Example:
alert http any any -> any any (msg:"HTTP authorization"; http.header_names; \
header_lowercase; content:"authorization:"; sid:1;)
8.9.13 strip_pseudo_headers
This transform is meant for HTTP/1 and HTTP/2 header name normalization. It strips HTTP/2 pseudo-headers (names
and values).
The implementation strips every line beginning with :.
This example alerts for both HTTP/1 and HTTP/2 with only a user agent. Example:
alert http any any -> any any (msg:"HTTP ua only"; http.header_names; \
bsize:16; content:"|0d 0a|User-Agent|0d 0a 0d 0a|"; nocase; sid:1;)
8.9.14 from_base64
This transform is similar to the keyword base64_decode: the buffer is decoded using the optional values for mode,
offset and bytes and is available for matching on the decoded data.
After this transform completes, the buffer will contain only bytes that could be base64-decoded. If the decoding
process encountered invalid bytes, those will not be included in the buffer.
The option values must be , separated and can appear in any order.
Note: from_base64 follows RFC 4648 by default, i.e. encountering any character that is not found in the base64 alphabet
leads to rejection of that character and the rest of the string.
Format:
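The format line is not shown in this extract; a sketch reconstructed from the options described below:

```
from_base64: [bytes <value>] [, offset <value>] [, mode strict|rfc4648|rfc2045];
```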
There are defaults for each of the options:
• bytes defaults to the length of the input buffer
• offset defaults to 0 and must be less than 65536
• mode defaults to rfc4648
Note that both bytes and offset may be variables from byte_extract and/or byte_math in later versions of Suricata.
They are not supported yet.
Mode rfc4648 applies RFC 4648 decoding logic which is suitable for encoding binary data that can be safely sent by
email, used in a URL, or included with HTTP POST requests.
Mode rfc2045 applies RFC 2045 decoding logic which supports strings, including those with embedded spaces, line
breaks, and any non base64 alphabet.
Mode strict will fail if an invalid character is found in the encoded bytes.
The following examples will alert when the buffer contents match (see the last content value for the expected strings).
This example uses the defaults and transforms "VGhpcyBpcyBTdXJpY2F0YQ==" to "This is Suricata":
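A sketch of such a rule (buffer choice and sid are assumptions):

```
alert http any any -> any any (msg:"from_base64 defaults sketch"; http.request_body; \
    from_base64; content:"This is Suricata"; sid:1;)
```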
Appendices
From detect-engine-mpm.c. Basically the Pattern Strength "score" starts at zero and looks at each character/byte in the
passed in byte array from left to right. If the character/byte has not been seen before in the array, it adds 3 to the score
if it is an alpha character; else it adds 4 to the score if it is a printable character, 0x00, 0x01, or 0xFF; else it adds 6 to
the score. If the character/byte has been seen before it adds 1 to the score. The final score is returned.
/* reconstructed; see detect-engine-mpm.c for the authoritative version */
static uint32_t PatternStrength(uint8_t *pat, uint16_t patlen)
{
    uint8_t a[256];
    memset(&a, 0, sizeof(a));
    uint32_t s = 0;
    uint16_t u;
    for (u = 0; u < patlen; u++) {
        if (a[pat[u]] == 0) {
            if (isalpha(pat[u]))
                s += 3;
            else if (isprint(pat[u]) || pat[u] == 0x00 || pat[u] == 0x01 || pat[u] == 0xFF)
                s += 4;
            else
                s += 6;
            a[pat[u]] = 1;
        } else {
            s++;
        }
    }
    return s;
}
Only one content of a signature will be used in the Multi Pattern Matcher (MPM). If there are multiple contents, then
Suricata uses the 'strongest' content. This means a combination of length, how varied a content is, and what buffer it
is looking in. Generally, the longer and more varied the better. For full details on how Suricata determines the fast
pattern match, see Suricata Fast Pattern Determination Explained.
Sometimes a signature writer concludes that they want Suricata to use a different content than it would by default.
For instance:
content:"User-Agent|3A|";
content:"Badness"; distance:0;
In this example you see the first content is longer and more varied than the second one, so you know Suricata will use
this content for the MPM. Because 'User-Agent:' will be a match very often, and 'Badness' appears less often in network
traffic, you can make Suricata use the second content by using 'fast_pattern'.
content:"User-Agent|3A|";
content:"Badness"; distance:0; fast_pattern;
Fast-pattern can also be combined with all previously mentioned keywords, and all mentioned HTTP-modifiers.
fast_pattern:only
Sometimes a signature contains only one content. In that case it is not necessary for Suricata to check it any further
after a match has been found by the MPM. If there is only one content, the whole signature matches. Suricata notices
this automatically. In some signatures this is still indicated with 'fast_pattern:only;'. Although Suricata does not need
fast_pattern:only, it does support it.
fast_pattern:'chop'
If you do not want the MPM to use the whole content, you can use fast_pattern 'chop'.
For example:
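The example is not shown in this extract; a sketch (pattern and offsets are illustrative):

```
content:"aaaaaaaaabc"; fast_pattern:7,4;
```

Here the MPM uses only the 4 bytes starting at offset 7 of the pattern (aabc), its most distinctive part.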
8.10.2 prefilter
The prefilter engines for other non-MPM keywords can be enabled in specific rules by using the 'prefilter' keyword.
In the following rule the TTL test will be used in prefiltering instead of the single byte pattern:
alert ip any any -> any any (ttl:123; prefilter; content:"a"; sid:1;)
For more information on how to configure the prefilter engines, see Prefilter Engines
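8.11.1 flowbits

The two rules discussed below are not shown in this extract; a sketch consistent with the description (buffers and sids are assumptions):

```
alert http any any -> any any (msg:"User logged in"; content:"userlogin"; \
    flowbits:set, userlogin; flowbits:noalert; sid:1;)

alert http any any -> any any (msg:"Activity after login"; flowbits:isset, userlogin; \
    content:"blah"; sid:2;)
```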
When you take a look at the first rule you will notice it would generate an alert if it matched, were it not for the
'flowbits:noalert' at the end of that rule.
The purpose of this rule is to check for a match on 'userlogin' and mark that in the flow, so there is no need to generate
an alert. The second rule has no effect without the first rule. If the first rule matches, the flowbit sets that specific
condition to be present in the flow. The second rule can then check whether a previous packet fulfilled the
first condition. If the second rule matches, an alert is generated.
It is possible to use flowbits several times in a rule and combine the different functions.
alert http any any -> any any (msg:"User1 or User2 logged in"; content:"login"; flowbits:isset,user1|user2; sid:1;)
8.11.2 flow
The flow keyword can be used to match on direction of the flow, so to/from client or to/from server. It can also match
if the flow is established or not. The flow keyword can also be used to say the signature has to match on stream only
(only_stream) or on packet only (no_stream).
So with the flow keyword you can match on:
to_client
Match on packets from server to client.
to_server
Match on packets from client to server.
from_client
Match on packets from client to server (same as to_server).
from_server
Match on packets from server to client (same as to_client).
established
Match on established connections.
not_established
Match on packets that are not part of an established connection.
stateless
Match on packets that are part of a flow, regardless of connection state. (This means that packets that are not
seen as part of a flow won't match).
only_stream
Match on packets that have been reassembled by the stream engine.
no_stream
Match on packets that have not been reassembled by the stream engine. Will not match packets that have been
reassembled.
only_frag
Match packets that have been reassembled from fragments.
no_frag
Match packets that have not been reassembled from fragments.
Multiple flow options can be combined, for example:
flow:to_client, established
flow:to_server, established, only_stream
flow:to_server, not_established, no_frag
• For other protocols (for example UDP), the connection will be considered established after seeing traffic from
both sides of the connection.
8.11.3 flowint
Flowint allows storage and mathematical operations using variables. It operates much like flowbits but with the addition
of mathematical capabilities and the fact that an integer can be stored and manipulated, not just a flag set. We can use
this for a number of very useful things, such as counting occurrences, adding or subtracting occurrences, or doing
thresholding within a stream in relation to multiple factors. This will be expanded to a global context very soon, so
users can perform these operations between streams.
The syntax is as follows:
Define a var (not required), or check that one is set or not set.
Compare or alter a var. Addition, subtraction, and comparison (greater than, less than, greater than or equal to, and
less than or equal to) are available. The item to compare with can be an integer or another variable.
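The format lines are not shown in this extract; a sketch reconstructed from the examples that follow:

```
flowint:<name>, (isset|notset);
flowint:<name>, <operator>, <value or variable>;
```

Operators seen in the examples below include =, +, -, >, and ==.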
For example, suppose you want to count how many times a username is seen in a particular stream and alert if it is seen more than 5 times.
alert tcp any any -> any any (msg:"Counting Usernames"; content:"jonkman"; \
flowint: usernamecount, +, 1; noalert;)
This will count each occurrence and increment the var usernamecount and not generate an alert for each.
Now say we want to generate an alert if there are more than five hits in the stream.
alert tcp any any -> any any (msg:"More than Five Usernames!"; content:"jonkman"; \
flowint: usernamecount, +, 1; flowint:usernamecount, >, 5;)
alert tcp any any -> any any (msg:"Username Logged out"; content:"logout jonkman"; \
flowint: usernamecount, -, 1; flowint:usernamecount, >, 5;)
So now we'll get an alert ONLY if there are more than five active logins for this particular username.
This is a rather simplistic example, but I believe it shows the power of what such a simple function can do for rule
writing. I see a lot of applications in things like login tracking, IRC state machines, malware tracking, and brute force
login detection.
Let's say we're tracking a protocol that normally allows five login fails per connection, but we have a vulnerability where
an attacker can continue to log in after those five attempts and we need to know about it.
alert tcp any any -> any any (msg:"Start a login count"; content:"login failed"; \
flowint:loginfail, notset; flowint:loginfail, =, 1; noalert;)
So we detect the initial fail if the variable is not yet set and set it to 1 if so. Our first hit.
alert tcp any any -> any any (msg:"Counting Logins"; content:"login failed"; \
flowint:loginfail, isset; flowint:loginfail, +, 1; noalert;)
alert tcp any any -> any any (msg:"More than Five login fails in a Stream"; \
content:"login failed"; flowint:loginfail, isset; flowint:loginfail, >, 5;)
Now we'll generate an alert if we cross five login fails in the same stream.
But let's also say we need an alert if there are two successful logins and a failed login after that.
alert tcp any any -> any any (msg:"Counting Good Logins"; \
content:"login successful"; flowint:loginsuccess, +, 1; noalert;)
Here we're counting good logins; now we'll relate good logins to fails:
alert tcp any any -> any any (msg:"Login fail after two successes"; \
content:"login failed"; flowint:loginsuccess, isset; \
flowint:loginsuccess, =, 2;)
Finally, a chained example: initialize two counters on the first GET, add 2 to one of them on each "Unauthorized", start a packet counter once it equals 3, and alert when the packet counter reaches the stored maximum.
alert tcp any any -> any any (msg:"Setting a flowint counter"; content:"GET"; \
flowint:myvar, notset; flowint:maxvar, notset; \
flowint:myvar, =, 1; flowint:maxvar, =, 6;)
alert tcp any any -> any any (msg:"Adding to flowint counter"; \
content:"Unauthorized"; flowint:myvar, isset; flowint:myvar, +, 2;)
alert tcp any any -> any any (msg:"when flowint counter is 3 create new counter"; \
content:"Unauthorized"; flowint:myvar, isset; flowint:myvar, ==, 3; \
flowint:cntpackets, notset; flowint:cntpackets, =, 0;)
alert tcp any any -> any any (msg:"count the rest without generating alerts"; \
flowint:cntpackets, isset; flowint:cntpackets, +, 1; noalert;)
alert tcp any any -> any any (msg:"fire this when it reaches 6"; \
flowint:cntpackets, isset; \
flowint:maxvar, isset; flowint:cntpackets, ==, maxvar;)
8.11.4 stream_size
The stream_size keyword matches on the amount of bytes transferred in a stream, as calculated from the registered TCP sequence numbers. There are several modifiers to this keyword that select the direction to check: server, client, both and either.
Format
alert tcp any any -> any any (stream_size:both, >, 5000; sid:1;)
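For example, to check only the server-to-client direction (the threshold value here is illustrative):
alert tcp any any -> any any (stream_size:server, >, 1000; sid:2;)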
8.11.5 flow.age
Flow age in seconds (integer) This keyword does not wait for the end of the flow, but will be checked at each packet.
flow.age uses an unsigned 32-bit integer.
Syntax:
flow.age: [op]<number>
The time can be matched exactly, or compared using the _op_ setting:
flow.age:3 # exactly 3 seconds
flow.age:<3 # less than 3 seconds
flow.age:>=2 # greater than or equal to 2 seconds
Signature example:
alert tcp any any -> any any (msg:"Flow longer than one hour"; flow.age:>3600; flowbits:isnotset, onehourflow; flowbits:set, onehourflow; sid:1; rev:1;)
In this example, we combine flow.age and flowbits to get an alert on the first packet after the flow's age is older than one
hour.
8.11.6 flow.pkts_toclient
Flow number of packets to client (integer) This keyword does not wait for the end of the flow, but will be checked at
each packet.
flow.pkts_toclient uses an unsigned 32-bit integer.
Syntax:
flow.pkts_toclient: [op]<number>
The number of packets can be matched exactly, or compared using the _op_ setting:
flow.pkts_toclient:3 # exactly 3
flow.pkts_toclient:<3 # smaller than 3
flow.pkts_toclient:>=2 # greater than or equal to 2
Signature example:
alert ip any any -> any any (msg:"Flow has 20 packets"; flow.pkts_toclient:20; sid:1;)
8.11.7 flow.pkts_toserver
Flow number of packets to server (integer) This keyword does not wait for the end of the flow, but will be checked at
each packet.
flow.pkts_toserver uses an unsigned 32-bit integer.
Syntax:
flow.pkts_toserver: [op]<number>
The number of packets can be matched exactly, or compared using the _op_ setting:
flow.pkts_toserver:3 # exactly 3
flow.pkts_toserver:<3 # smaller than 3
flow.pkts_toserver:>=2 # greater than or equal to 2
Signature example:
alert ip any any -> any any (msg:"Flow has 20 packets"; flow.pkts_toserver:20; sid:1;)
8.11.8 flow.bytes_toclient
Flow number of bytes to client (integer) This keyword does not wait for the end of the flow, but will be checked at each
packet.
flow.bytes_toclient uses an unsigned 64-bit integer.
Syntax:
flow.bytes_toclient: [op]<number>
The number of bytes can be matched exactly, or compared using the _op_ setting:
flow.bytes_toclient:3 # exactly 3
flow.bytes_toclient:<3 # smaller than 3
flow.bytes_toclient:>=2 # greater than or equal to 2
Signature example:
alert ip any any -> any any (msg:"Flow has less than 2000 bytes"; flow.bytes_toclient:<2000; sid:1;)
8.11.9 flow.bytes_toserver
Flow number of bytes to server (integer) This keyword does not wait for the end of the flow, but will be checked at each
packet.
flow.bytes_toserver uses an unsigned 64-bit integer.
Syntax:
flow.bytes_toserver: [op]<number>
The number of bytes can be matched exactly, or compared using the _op_ setting:
flow.bytes_toserver:3 # exactly 3
flow.bytes_toserver:<3 # smaller than 3
flow.bytes_toserver:>=2 # greater than or equal to 2
Signature example:
alert ip any any -> any any (msg:"Flow has less than 2000 bytes"; flow.bytes_toserver:<2000; sid:1;)
8.12.1 bypass
Bypass a flow on matching HTTP traffic: once the rule matches, the rest of the flow is excluded from further inspection.
Example:
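A minimal bypass rule might look like the following (the content value and sid are illustrative); after the match, subsequent packets of the flow are no longer inspected:
alert http any any -> any any (content:"suricata.io"; bypass; sid:10001; rev:1;)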
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 258
Date: Thu, 14 Dec 2023 20:22:41 GMT
Server: nginx/0.8.54
Connection: Close
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"HTTP Stat Code Example"; flow:established,to_client; http.stat_code; content:"200"; bsize:3; http.content_type; content:"text/html"; bsize:9; classtype:bad-unknown; sid:30; rev:1;)
Request Keywords:
• file.name
• http.accept
• http.accept_enc
• http.accept_lang
• http.host
• http.host.raw
• http.method
• http.referer
• http.request_body
• http.request_header
• http.request_line
• http.uri
• http.uri.raw
• http.user_agent
• urilen
Response Keywords:
• http.location
• http.response_body
• http.response_header
• http.response_line
• http.server
• http.stat_code
• http.stat_msg
Request or Response Keywords:
• file.data
• http.connection
• http.content_len
• http.content_type
• http.cookie
• http.header
• http.header.raw
• http.header_names
• http.protocol
• http.start
8.13.2 Normalization
There are times when Suricata performs formatting/normalization changes to the traffic that is seen before buffer contents are presented for matching. Consider the following request, which repeats a header:
GET / HTTP/1.1
Host: suricata.io
User-Agent: Mozilla/5.0
User-Agent: Chrome/121.0.0
When a header occurs more than once, the values are folded into a single header entry, joined by a comma and a space; the http.user_agent buffer here would contain a value like Mozilla/5.0, Chrome/121.0.0.
8.13.3 file.name
The file.name keyword can be used with HTTP requests.
It is possible to use any of the Payload Keywords with the file.name keyword.
Example HTTP Request:
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"HTTP file.name Example"; flow:established,to_client;
file.name; content:"picture.jpg"; classtype:bad-unknown; sid:129; rev:1;)
8.13.4 http.accept
The http.accept keyword is used to match on the Accept field that can be present in HTTP request headers.
It is possible to use any of the Payload Keywords with the http.accept keyword.
Example HTTP Request:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Accept Example"; flow:established,to_server;
http.accept; content:"*/*"; bsize:3; classtype:bad-unknown; sid:91; rev:1;)
Note: http.accept can have additional formatting/normalization applied to buffer contents, see Normalization for additional details.
8.13.5 http.accept_enc
The http.accept_enc keyword is used to match on the Accept-Encoding field that can be present in HTTP request
headers.
It is possible to use any of the Payload Keywords with the http.accept_enc keyword.
Example HTTP Request:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Accept-Encoding Example";
flow:established,to_server; http.accept_enc; content:"gzip, deflate"; bsize:13; classtype:bad-unknown; sid:92;
rev:1;)
Note: http.accept_enc can have additional formatting/normalization applied to buffer contents, see Normalization for additional details.
8.13.6 http.accept_lang
The http.accept_lang keyword is used to match on the Accept-Language field that can be present in HTTP request
headers.
It is possible to use any of the Payload Keywords with the http.accept_lang keyword.
Example HTTP Request:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Accept-Language Example"; flow:established,to_server; http.accept_lang; content:"en-US"; bsize:5; classtype:bad-unknown; sid:93; rev:1;)
Note: http.accept_lang can have additional formatting/normalization applied to buffer contents, see Normalization for additional details.
8.13.7 http.host
Matching on the HTTP host name has two options in Suricata, the http.host and the http.host.raw sticky buffers.
It is possible to use any of the Payload Keywords with both http.host keywords.
Note: The http.host keyword normalizes the host header contents: if a host name has uppercase characters, those would be changed to lowercase.
Normalization Example: a request carrying Host: SuRiCaTa.Io is presented in the http.host buffer as suricata.io.
Note: The nocase keyword is no longer allowed since the host names are normalized to contain only lowercase letters.
Note: http.host does not contain the port associated with the host (i.e. suricata.io:1234). To match on the host and port, or to negate a host and port, use http.host.raw.
Note: The http.host and http.host.raw buffers are populated from either the URI (if the full URI is present in the request, like in a proxy request) or the HTTP Host header. If both are present, the URI is used.
Note: http.host can have additional formatting/normalization applied to buffer contents, see Normalization for additional details.
8.13.8 http.host.raw
The http.host.raw buffer matches on HTTP host content but does not have any normalization performed on the buffer contents (see http.host).
Example HTTP Request:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Host Raw Example"; flow:established,to_server;
http.host.raw; content:"SuRiCaTa.Io|3a|8445"; bsize:16; classtype:bad-unknown; sid:124; rev:1;)
Note: The http.host and http.host.raw buffers are populated from either the URI (if the full URI is present in the request, like in a proxy request) or the HTTP Host header. If both are present, the URI is used.
Note: http.host.raw can have additional formatting/normalization applied to buffer contents, see Normalization for additional details.
8.13.9 http.method
The http.method keyword matches on the method/verb used in an HTTP request. HTTP request methods can be any
of the following:
• GET
• POST
• HEAD
• OPTIONS
• PUT
• DELETE
• TRACE
• CONNECT
• PATCH
It is possible to use any of the Payload Keywords with the http.method keyword.
Example HTTP Request:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Request Example"; flow:established,to_server;
http.method; content:"GET"; classtype:bad-unknown; sid:2; rev:1;)
8.13.10 http.referer
The http.referer keyword is used to match on the Referer field that can be present in HTTP request headers.
It is possible to use any of the Payload Keywords with the http.referer keyword.
Example HTTP Request:
GET / HTTP/1.1
Host: suricata.io
Referer: https://suricata.io
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Referer Example"; flow:established,to_server; http.referer; content:"https|3a 2f 2f|suricata.io"; bsize:19; classtype:bad-unknown; sid:200; rev:1;)
Note: http.referer can have additional formatting/normalization applied to buffer contents, see Normalization for additional details.
8.13.11 http.request_body
The http.request_body keyword is used to match on the HTTP request body that can be present in an HTTP request.
It is possible to use any of the Payload Keywords with the http.request_body keyword.
Example HTTP Request:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Request Body Example";
flow:established,to_server; http.request_body; content:"Suricata request body"; classtype:bad-unknown; sid:115;
rev:1;)
Note: How much of the request/client body is inspected is controlled in the libhtp configuration section via the request-body-limit setting.
Note: http.request_body replaces the previous keyword name, http_client_body. http_client_body can still be used but it is recommended that rules be converted to use http.request_body.
8.13.12 http.request_header
The http.request_header keyword is used to match on the name and value of an HTTP/1 or HTTP/2 request header.
It is possible to use any of the Payload Keywords with the http.request_header keyword.
For HTTP/2, the header name and value get concatenated by ": " (colon and space). The colon and space are commonly
noted with the hexadecimal format |3a 20| within signatures.
To detect if an HTTP/2 header name contains a ":" (colon), the keyword http2.header_name can be used.
Example HTTP/1 Request:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Request Example"; flow:established,to_server;
http.request_header; content:"Host|3a 20|suricata.io"; classtype:bad-unknown; sid:126; rev:1;)
8.13.13 http.request_line
The http.request_line keyword is used to match on the entire contents of the HTTP request line.
Example HTTP Request:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Request Example"; flow:established,to_server;
http.request_line; content:"GET /index.html HTTP/1.1"; bsize:24; classtype:bad-unknown; sid:60; rev:1;)
8.13.14 http.uri
Matching on the HTTP URI buffer has two options in Suricata, the http.uri and the http.uri.raw sticky buffers.
It is possible to use any of the Payload Keywords with both http.uri keywords.
The http.uri keyword normalizes the URI buffer. For example, if a URI has two leading //, Suricata will normalize
the URI to a single leading /.
Normalization Example:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP URI Example"; flow:established,to_server;
http.uri; content:"/index.html"; bsize:11; classtype:bad-unknown; sid:3; rev:1;)
8.13.15 http.uri.raw
The http.uri.raw buffer matches on HTTP URI content but does not have any normalization performed on the buffer contents (see http.uri).
Abnormal HTTP Request Example:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP URI Raw Example"; flow:established,to_server;
http.uri.raw; content:"//index.html"; bsize:12; classtype:bad-unknown; sid:4; rev:1;)
8.13.16 http.user_agent
The http.user_agent keyword is used to match on the User-Agent field that can be present in HTTP request headers.
It is possible to use any of the Payload Keywords with the http.user_agent keyword.
Example HTTP Request:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP User-Agent Example";
flow:established,to_server; http.user_agent; content:"Mozilla/5.0"; bsize:11; classtype:bad-unknown; sid:90;
rev:1;)
Note: Using the http.user_agent generally provides better performance than using http.header.
Note: http.user_agent can have additional formatting/normalization applied to buffer contents, see Normalization for additional details.
8.13.17 urilen
The urilen keyword is used to match on the length of the normalized request URI. It is possible to use the < and > operators, which indicate less than and greater than, respectively.
urilen uses an unsigned 64-bit integer.
The urilen keyword does not require a content match on the http.uri buffer or the http.uri.raw buffer.
Example HTTP Request:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Request"; flow:established,to_server; urilen:11;
http.method; content:"GET"; classtype:bad-unknown; sid:40; rev:1;)
The above signature would match on any HTTP GET request that has a URI length of 11, regardless of the content or
structure of the URI.
The following signatures would all alert on the example request above as well and show the different urilen options.
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"urilen greater than 10"; flow:established,to_server;
urilen:>10; classtype:bad-unknown; sid:41; rev:1;)
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"urilen less than 12"; flow:established,to_server;
urilen:<12; classtype:bad-unknown; sid:42; rev:1;)
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"urilen greater/less than example";
flow:established,to_server; urilen:10<>12; classtype:bad-unknown; sid:43; rev:1;)
8.13.18 http.location
The http.location keyword is used to match on the HTTP response location header contents.
It is possible to use any of the Payload Keywords with the http.location keyword.
Example HTTP Response:
HTTP/1.1 200 OK
Content-Type: text/html
Server: nginx/0.8.54
Location: suricata.io
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"HTTP Location Example"; flow:established,to_client;
http.location; content:"suricata.io"; bsize:11; classtype:bad-unknown; sid:122; rev:1;)
Note: http.location can have additional formatting/normalization applied to buffer contents, see Normalization for additional details.
8.13.19 http.response_body
The http.response_body keyword is used to match on the HTTP response body.
It is possible to use any of the Payload Keywords with the http.response_body keyword.
Example HTTP Response:
HTTP/1.1 200 OK
Content-Type: text/html
Server: nginx/0.8.54
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"HTTP Response Body Example";
flow:established,to_client; http.response_body; content:"Server response body"; classtype:bad-unknown; sid:120;
rev:1;)
Note: http.response_body will match on gzip decoded data just like file.data does.
Note: How much of the response/server body is inspected is controlled in the libhtp configuration section via the response-body-limit setting.
Note: http.response_body replaces the previous keyword name, http_server_body. http_server_body can still be used but it is recommended that rules be converted to use http.response_body.
8.13.20 http.response_header
The http.response_header keyword is used to match on the name and value of an HTTP/1 or HTTP/2 response header.
It is possible to use any of the Payload Keywords with the http.response_header keyword.
For HTTP/2, the header name and value get concatenated by ": " (colon and space). The colon and space are commonly
noted with the hexadecimal format |3a 20| within signatures.
To detect if an HTTP/2 header name contains a ":" (colon), the keyword http2.header_name can be used.
Example HTTP Response:
HTTP/1.1 200 OK
Content-Type: text/html
Server: nginx/0.8.54
Location: suricata.io
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"HTTP Response Example"; flow:established,to_client;
http.response_header; content:"Location|3a 20|suricata.io"; classtype:bad-unknown; sid:127; rev:1;)
8.13.21 http.response_line
The http.response_line keyword is used to match on the entire HTTP response line.
It is possible to use any of the Payload Keywords with the http.response_line keyword.
Example HTTP Response:
HTTP/1.1 200 OK
Content-Type: text/html
Server: nginx/0.8.54
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"HTTP Response Line Example";
flow:established,to_client; http.response_line; content:"HTTP/1.1 200 OK"; classtype:bad-unknown; sid:119;
rev:1;)
8.13.22 http.server
The http.server keyword is used to match on the HTTP response server header contents.
It is possible to use any of the Payload Keywords with the http.server keyword.
Example HTTP Response:
HTTP/1.1 200 OK
Content-Type: text/html
Server: nginx/0.8.54
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"HTTP Server Example"; flow:established,to_client;
http.server; content:"nginx/0.8.54"; bsize:12; classtype:bad-unknown; sid:121; rev:1;)
Note: http.server can have additional formatting/normalization applied to buffer contents, see Normalization for additional details.
8.13.23 http.stat_code
The http.stat_code keyword is used to match on the HTTP status code that can be present in an HTTP response.
It is possible to use any of the Payload Keywords with the http.stat_code keyword.
Example HTTP Response:
HTTP/1.1 200 OK
Content-Type: text/html
Server: nginx/0.8.54
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"HTTP Stat Code Response Example";
flow:established,to_client; http.stat_code; content:"200"; classtype:bad-unknown; sid:117; rev:1;)
8.13.24 http.stat_msg
The http.stat_msg keyword is used to match on the HTTP status message that can be present in an HTTP response.
It is possible to use any of the Payload Keywords with the http.stat_msg keyword.
Example HTTP Response:
HTTP/1.1 200 OK
Content-Type: text/html
Server: nginx/0.8.54
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"HTTP Stat Message Response Example";
flow:established,to_client; http.stat_msg; content:"OK"; classtype:bad-unknown; sid:118; rev:1;)
8.13.25 file.data
With file.data, the HTTP response body is inspected, just like with http.response_body. file.data also works
for HTTP request body and can be used in protocols other than HTTP.
It is possible to use any of the Payload Keywords with the file.data keyword.
Example HTTP Response:
HTTP/1.1 200 OK
Content-Type: text/html
Server: nginx/0.8.54
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"HTTP file.data Example"; flow:established,to_client;
file.data; content:"Server response body"; classtype:bad-unknown; sid:128; rev:1;)
The body of an HTTP response can be very large, therefore the response body is inspected in definable chunks.
How much of the response/server body is inspected is controlled in the libhtp configuration section via the
response-body-limit setting.
Note: If the HTTP body is a flash file compressed with 'deflate' or 'lzma', it can be decompressed and file.data can match on the decompressed data. Flash decompression must be enabled under 'libhtp' configuration:
# Decompress SWF files.
# 2 types: 'deflate', 'lzma', 'both' will decompress deflate and lzma
# compress-depth:
# Specifies the maximum amount of data to decompress,
# set 0 for unlimited.
# decompress-depth:
# Specifies the maximum amount of decompressed data to obtain,
# set 0 for unlimited.
swf-decompression:
enabled: yes
type: both
compress-depth: 0
decompress-depth: 0
Note: file.data replaces the previous keyword name, file_data. file_data can still be used but it is recommended that rules be converted to use file.data.
Note: If an HTTP body is using gzip or deflate, file.data will match on the decompressed data.
Note: Negated matching is affected by the chunked inspection. E.g. 'content:!"<html";' could not match on the first chunk, but would then possibly match on the 2nd. To avoid this, use a depth setting. The depth setting takes the body size into account. Assuming that the response-body-minimal-inspect-size is bigger than 1k, 'content:!"<html"; depth:1024;' can only match if the pattern '<html' is absent from the first inspected chunk.
8.13.26 http.connection
The http.connection keyword is used to match on the Connection field that can be present in HTTP request or
response headers.
It is possible to use any of the Payload Keywords with the http.connection keyword.
Example HTTP Request:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Connection Example";
flow:established,to_server; http.connection; content:"Keep-Alive"; bsize:10; classtype:bad-unknown; sid:94;
rev:1;)
Note: http.connection can have additional formatting/normalization applied to buffer contents, see Normalization for additional details.
8.13.27 http.content_len
The http.content_len keyword is used to match on the Content-Length field that can be present in HTTP request or response headers. Use flow:to_server or flow:to_client to force inspection of the request or response, respectively.
It is possible to use any of the Payload Keywords with the http.content_len keyword.
Example HTTP Response:
HTTP/1.1 200 OK
Content-Type: text/html
Server: nginx/0.8.54
Connection: Close
Content-Length: 20
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Content-Length Request Example";
flow:established,to_server; http.content_len; content:"100"; bsize:3; classtype:bad-unknown; sid:97; rev:1;)
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"HTTP Content-Length Response Example";
flow:established,to_client; http.content_len; content:"20"; bsize:2; classtype:bad-unknown; sid:98; rev:1;)
To do numeric evaluation of the content length, byte_test can be used.
If we want to match on an HTTP request content length equal to or greater than 100, we could use the following signature.
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Content-Length Request Byte Test Example";
flow:established,to_server; http.content_len; byte_test:0,>=,100,0,string,dec; classtype:bad-unknown; sid:99; rev:1;)
8.13.28 http.content_type
The http.content_type keyword is used to match on the Content-Type field that can be present in HTTP request or response headers. Use flow:to_server or flow:to_client to force inspection of the request or response, respectively.
It is possible to use any of the Payload Keywords with the http.content_type keyword.
Example HTTP Response:
HTTP/1.1 200 OK
Content-Type: text/html
Server: nginx/0.8.54
Connection: Close
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Content-Type Request Example"; flow:established,to_server; http.content_type; content:"multipart/form-data|3b 20|"; startswith; classtype:bad-unknown; sid:95; rev:1;)
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"HTTP Content-Type Response Example";
flow:established,to_client; http.content_type; content:"text/html"; bsize:9; classtype:bad-unknown; sid:96; rev:1;)
Note: http.content_type can have additional formatting/normalization applied to buffer contents, see Normalization for additional details.
8.13.29 http.cookie
The http.cookie keyword is used to match on the cookie field that can be present in HTTP request (Cookie) or HTTP
response (Set-Cookie) headers.
It is possible to use any of the Payload Keywords with the http.cookie keyword.
Example HTTP Request:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Cookie Example"; flow:established,to_server; http.cookie; content:"PHPSESSIONID=123"; bsize:16; classtype:bad-unknown; sid:80; rev:1;)
Note: Cookies are passed in HTTP headers, but Suricata extracts the cookie data to http.cookie and will not match cookie content put in the http.header sticky buffer.
Note: http.cookie can have additional formatting/normalization applied to buffer contents, see Normalization for additional details.
8.13.30 http.header
Matching on HTTP headers has two options in Suricata, the http.header and the http.header.raw.
It is possible to use any of the Payload Keywords with both http.header keywords.
The http.header keyword normalizes the header contents. For example, if header contents contain trailing whitespace or tab characters, those would be removed.
To match on non-normalized header data, use the http.header.raw keyword.
Normalization Example:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Header Example 1"; flow:established,to_server;
http.header; content:"User-Agent|3a 20|Mozilla/5.0|0d 0a|"; classtype:bad-unknown; sid:70; rev:1;)
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Header Example 2"; flow:established,to_server;
http.header; content:"Host|3a 20|suricata.io|0d 0a|"; classtype:bad-unknown; sid:71; rev:1;)
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Header Example 3"; flow:established,to_server;
http.header; content:"User-Agent|3a 20|Mozilla/5.0|0d 0a|"; startswith; content:"Host|3a 20|suricata.io|0d 0a|";
classtype:bad-unknown; sid:72; rev:1;)
Note: There are headers that will not be included in the http.header buffer, specifically the http.cookie buffer.
Note: http.header can have additional formatting/normalization applied to buffer contents, see Normalization for additional details.
8.13.31 http.header.raw
The http.header.raw buffer matches on HTTP header content but does not have any normalization performed on the buffer contents (see http.header).
Abnormal HTTP Header Example:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Header Raw Example"; flow:established,to_server; http.header.raw; content:"User-Agent|3a 20|Mozilla/5.0|0d 0a|"; content:"User-Agent|3a 20|Chrome|0d 0a|"; classtype:bad-unknown; sid:73; rev:1;)
Note: http.header.raw can have additional formatting applied to buffer contents, see Normalization for additional details.
8.13.32 http.header_names
The http.header_names keyword is used to match on the names of the headers in an HTTP request or response. This
is useful for checking for a header's presence, absence and/or header order. Use flow:to_server or flow:to_client
to force inspection of the request or response respectively.
It is possible to use any of the Payload Keywords with the http.header_names keyword.
Example HTTP Request:
GET / HTTP/1.1
Host: suricata.io
Connection: Keep-Alive
HTTP/1.1 200 OK
Content-Type: text/html
Server: nginx/0.8.54
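A signature for the request above could check the presence and order of the Host and Connection headers; in the http.header_names buffer each name is separated by CRLF (this rule is an illustrative sketch, sid arbitrary):
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Header Names Example"; flow:established,to_server; http.header_names; content:"Host|0d 0a|Connection"; classtype:bad-unknown; sid:110; rev:1;)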
8.13.33 http.protocol
The http.protocol keyword is used to match on the protocol field that is contained in HTTP requests and responses.
It is possible to use any of the Payload Keywords with the http.protocol keyword.
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Protocol Example"; flow:established,to_server; http.protocol; content:"HTTP/1.1"; bsize:8; classtype:bad-unknown; sid:50; rev:1;)
8.13.34 http.start
The http.start keyword is used to match on the start of an HTTP request or response. This will contain the request/response line plus the request/response headers. Use flow:to_server or flow:to_client to force inspection of the request or response, respectively.
It is possible to use any of the Payload Keywords with the http.start keyword.
Example HTTP Request:
GET / HTTP/1.1
Host: suricata.io
Connection: Keep-Alive
HTTP/1.1 200 OK
Content-Type: text/html
Server: nginx/0.8.54
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP Start Request Example";
flow:established,to_server; http.start; content:"POST / HTTP/1.1|0d 0a|Host|0d 0a|Connection|0d 0a 0d 0a|";
classtype:bad-unknown; sid:101; rev:1;)
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"HTTP Start Response Example"; flow:established,to_client; http.start; content:"HTTP/1.1 200 OK|0d 0a|Content-Type|0d 0a|Server|0d 0a 0d 0a|"; classtype:bad-unknown; sid:102; rev:1;)
Note: http.start contains the normalized headers and is terminated by an extra \r\n to indicate the end of the headers.
8.14.1 file.data
The file.data sticky buffer matches on contents of files that are seen in flows that Suricata evaluates. The various
payload keywords can be used (e.g. startswith, nocase and bsize) with file.data.
Example:
alert smtp any any -> any any (msg:"smtp app layer file.data example"; \
file.data; content:"example file content"; sid:1; rev:1;)
alert http any any -> any any (msg:"http app layer file.data example"; \
file.data; content:"example file content"; sid:2; rev:1;)
alert http2 any any -> any any (msg:"http2 app layer file.data example"; \
file.data; content:"example file content"; sid:3; rev:1;)
alert nfs any any -> any any (msg:"nfs app layer file.data example"; \
file.data; content:" "; sid:5; rev:1;)
alert ftp-data any any -> any any (msg:"ftp app layer file.data example"; \
file.data; content:"example file content"; sid:6; rev:1;)
alert tcp any any -> any any (msg:"tcp file.data example"; \
file.data; content:"example file content"; sid:4; rev:1;)
8.14.2 file.name
file.name is a sticky buffer that is used to look at filenames that are seen in flows that Suricata evaluates. The various
payload keywords can be used (e.g. startswith, nocase and bsize) with file.name.
Example (filename is the legacy form of the file.name sticky buffer):
file.name; content:"examplefilename";
filename:"examplefilename";
8.14.3 fileext
fileext is used to look at individual file extensions that are seen in flows that Suricata evaluates.
Example:
fileext:"pdf";
Note: fileext does not allow partial matches. For example, if a PDF file (.pdf) is seen by a Suricata signature with
fileext:"pd"; the signature will not produce an alert.
Note: fileext assumes nocase by default. This means that a file with the extension .PDF will be seen the same as if
the file had an extension of .pdf.
Note: fileext and file.name can both be used to match on file extensions. In the example below the two signatures
are considered the same.
Example:
fileext:"pdf";
file.name; content:".pdf"; nocase; endswith;
Note: While fileext and file.name can both be used to match on file extensions, file.name allows for partial matching on file extensions. The following would match on a file with the extension of .pd as well as .pdf.
Example:
file.name; content:".pd";
8.14.4 file.magic
Matches on the information libmagic returns about a file.
Example:
Note: filemagic can still be used. The only difference between file.magic and filemagic is that filemagic assumes nocase by default. In the example below the two signatures are considered the same.
Example:
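An illustrative pair, assuming libmagic reports "executable for MS Windows" for the file in question (sids arbitrary):
alert http any any -> any any (msg:"File magic example"; file.magic; content:"executable for MS Windows"; nocase; sid:1; rev:1;)
alert http any any -> any any (msg:"File magic example"; filemagic:"executable for MS Windows"; sid:2; rev:1;)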
Note: Suricata currently uses its underlying operating system's version/implementation of libmagic. Different versions and implementations of libmagic do not return the same information. Additionally, there are varying Suricata performance impacts based on the version and implementation of libmagic. Additional information about Suricata and libmagic can be found here: https://redmine.openinfosecfoundation.org/issues/437
file.magic supports multiple buffer matching, see Multiple Buffer Matching.
8.14.5 filestore
Stores files to disk if the signature matched.
Syntax:
filestore:<direction>,<scope>;
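filestore can also be used bare, in which case the file from the matching transaction is stored. A sketch (the content match and sid are illustrative):
alert http any any -> any any (msg:"Store PDF downloads"; file.name; content:".pdf"; nocase; endswith; filestore; sid:10; rev:1;)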
8.14.6 filemd5
Match file MD5 against list of MD5 checksums.
Syntax:
filemd5:[!]filename;
The filename is expanded to include the rule dir. In the default case it will become /etc/suricata/rules/filename. Use the exclamation mark to get a negated match. This allows for whitelisting.
Examples:
filemd5:md5-blacklist;
filemd5:!md5-whitelist;
File format
The file format is simple. It's a text file with a single md5 per line, at the start of the line, in hex notation. If there is
extra info on the line it is ignored.
Output from md5sum is fine:
2f8d0355f0032c3e6311c6408d7c2dc2 util-path.c
b9cf5cf347a70e02fde975fc4e117760 util-pidfile.c
02aaa6c3f4dbae65f5889eeb8f2bbb8d util-pool.c
dd5fc1ee7f2f96b5f12d1a854007a818 util-print.c
Just MD5's are also fine:
2f8d0355f0032c3e6311c6408d7c2dc2
b9cf5cf347a70e02fde975fc4e117760
02aaa6c3f4dbae65f5889eeb8f2bbb8d
dd5fc1ee7f2f96b5f12d1a854007a818
Memory requirements
Each MD5 uses 16 bytes of memory. 20 Million MD5's use about 310 MiB of memory.
See also: https://blog.inliniac.net/2012/06/09/suricata-md5-blacklisting/
8.14.7 filesha1
Match file SHA1 against list of SHA1 checksums.
Syntax:
filesha1:[!]filename;
The filename is expanded to include the rule dir. In the default case it will become /etc/suricata/rules/filename. Use
the exclamation mark to get a negated match. This allows for white listing.
Examples:
filesha1:sha1-blacklist;
filesha1:!sha1-whitelist;
File format
Same as md5 file format.
8.14.8 filesha256
Match file SHA256 against list of SHA256 checksums.
Syntax:
filesha256:[!]filename;
The filename is expanded to include the rule dir. In the default case it will become /etc/suricata/rules/filename. Use
the exclamation mark to get a negated match. This allows for white listing.
Examples:
filesha256:sha256-blacklist;
filesha256:!sha256-whitelist;
File format
Same as md5 file format.
8.14.9 filesize
Match on the size of the file as it is being transferred.
filesize uses an unsigned 64-bit integer.
Syntax:
filesize:<value>;
Possible units are KB, MB and GB, without any unit the default is bytes.
Examples:
filesize:100; # exactly 100 bytes
filesize:100<>200; # greater than 100 and smaller than 200
filesize:>100MB; # greater than 100 MB
filesize:<100MB; # smaller than 100 MB
Note: For files that are not completely tracked, because of packet loss or stream.reassembly.depth being reached, only
the "greater than" comparison is checked. This is because Suricata can know a file is bigger than a value (it has seen some of it already),
but it can't know if the final size would have been within a range, an exact value or smaller than a value.
8.15.1 dns.answer.name
dns.answer.name is a sticky buffer that is used to look at the name field in DNS answer resource records.
dns.answer.name will look at both requests and responses, so flow is recommended to confine to a specific direction.
The buffer being matched on contains the complete re-assembled resource name, for example "www.suricata.io".
dns.answer.name supports Multiple Buffer Matching.
dns.answer.name was introduced in Suricata 8.0.0.
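A minimal illustrative rule (the domain, msg and sid are assumptions, not taken from this guide):
alert dns any any -> any any (msg:"DNS answer for suricata.io"; flow:to_client; dns.answer.name; content:"suricata.io"; nocase; sid:1;)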
8.15.2 dns.opcode
This keyword matches on the opcode found in the DNS header flags.
dns.opcode uses an unsigned 8-bit integer.
Syntax
dns.opcode:[!]<number>
dns.opcode:[!]<number1>-<number2>
Examples
Match on DNS requests and responses with opcode 4:
dns.opcode:4;
Match on DNS requests and responses where the opcode is not 0:
dns.opcode:!0;
Match on DNS requests where the opcode is between 7 and 15, exclusively:
dns.opcode:7-15;
Match on DNS requests where the opcode is not between 7 and 15:
dns.opcode:!7-15;
8.15.3 dns.rcode
This keyword matches on the rcode field found in the DNS header flags.
dns.rcode uses an unsigned 8-bit integer.
Currently, Suricata only supports rcode values in the range [0-15], while the current DNS version supports rcode values
from [0-23] as specified in RFC 6895.
We plan to extend the rcode values supported by Suricata according to RFC 6895 as tracked by the ticket: https:
//redmine.openinfosecfoundation.org/issues/6650
Syntax
dns.rcode:[!]<number>
dns.rcode:[!]<number1>-<number2>
Examples
Match on DNS requests and responses with rcode 4:
dns.rcode:4;
Match on DNS requests and responses where the rcode is not 0:
dns.rcode:!0;
8.15.4 dns.rrtype
This keyword matches on the rrtype (integer) found in the DNS message.
dns.rrtype uses an unsigned 16-bit integer.
Syntax
dns.rrtype:[!]<number>
Examples
Match on DNS requests and responses with rrtype 4:
dns.rrtype:4;
Match on DNS requests and responses where the rrtype is not 0:
dns.rrtype:!0;
8.15.5 dns.query
dns.query is a sticky buffer that is used to inspect DNS query names in DNS request messages. Example:
alert dns any any -> any any (msg:"Test dns.query option"; dns.query; content:"google";␣
˓→nocase; sid:1;)
Being a sticky buffer, payload keywords such as content are to be used after dns.query. The dns.query keyword
affects all following contents, until pkt_data is used or it reaches the end of the rule.
Note: dns.query will only match on DNS request messages; to also match on DNS response messages, see dns.query.name.
Normalized Buffer
Buffer contains literal domain name
• <length> values (as seen in a raw DNS request) are literal '.' characters
• no leading <length> value
• No terminating NULL (0x00) byte (use a negated relative isdataat to match the end)
Example DNS request for "mail.google.com" (for readability, hex values are encoded between pipes):
DNS query on the wire (snippet):
|04|mail|06|google|03|com|00|
dns.query buffer:
mail.google.com
8.15.6 dns.query.name
dns.query.name is a sticky buffer that is used to look at the name field in DNS query (question) resource records. It
is nearly identical to dns.query but supports both DNS requests and responses.
dns.query.name will look at both requests and responses, so flow is recommended to confine to a specific direction.
The buffer being matched on contains the complete re-assembled resource name, for example "www.suricata.io".
dns.query.name supports Multiple Buffer Matching.
dns.query.name was introduced in Suricata 8.0.0.
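A minimal illustrative rule (the domain, msg and sid are assumptions, not taken from this guide):
alert dns any any -> any any (msg:"DNS query name match"; flow:to_server; dns.query.name; content:"suricata.io"; nocase; sid:2;)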
8.16.1 tls.cert_subject
Match TLS/SSL certificate Subject field.
Examples (the subject string is illustrative):
tls.cert_subject; content:"CN=*.googleusercontent.com";
tls.subject
Legacy keyword to match TLS/SSL certificate Subject field.
Example:
tls.subject:"CN=*.googleusercontent.com"
8.16.2 tls.cert_issuer
Match TLS/SSL certificate Issuer field.
Examples (the issuer string is illustrative):
tls.cert_issuer; content:"CN=Google-Internet-Authority";
tls.issuerdn
Legacy keyword to match TLS/SSL certificate IssuerDN field.
Example:
tls.issuerdn:!"CN=Google-Internet-Authority"
8.16.3 tls.cert_serial
Match on the serial number in a certificate.
Example:
alert tls any any -> any any (msg:"match cert serial"; \
tls.cert_serial; content:"5C:19:B7:B1:32:3B:1C:A1"; sid:200012;)
8.16.4 tls.cert_fingerprint
Match on the SHA-1 fingerprint of the certificate.
Example:
alert tls any any -> any any (msg:"match cert fingerprint"; \
tls.cert_fingerprint; \
content:"4a:a3:66:76:82:cb:6b:23:bb:c3:58:47:23:a4:63:a7:78:a4:a1:18"; \
sid:200023;)
8.16.5 tls.sni
Match TLS/SSL Server Name Indication field.
Examples (the server name is illustrative):
tls.sni; content:"oisf.net";
8.16.6 tls.subjectaltname
Match TLS/SSL Subject Alternative Name field.
Examples (the name is illustrative):
tls.subjectaltname; content:"google.com";
8.16.7 tls_cert_notbefore
Match on the NotBefore field in a certificate.
Example:
alert tls any any -> any any (msg:"match cert NotBefore"; \
tls_cert_notbefore:1998-05-01<>2008-05-01; sid:200005;)
8.16.8 tls_cert_notafter
Match on the NotAfter field in a certificate.
Example:
alert tls any any -> any any (msg:"match cert NotAfter"; \
tls_cert_notafter:>2015; sid:200006;)
8.16.9 tls_cert_expired
Match returns true if certificate is expired. It evaluates the validity date from the certificate.
Usage:
tls_cert_expired;
8.16.10 tls_cert_valid
Match returns true if certificate is not expired. It only evaluates the validity date. It does not do cert chain validation.
It is the opposite of tls_cert_expired.
Usage:
tls_cert_valid;
8.16.11 tls.certs
Do a "raw" match on each of the certificates in the TLS certificate chain.
Example:
alert tls any any -> any any (msg:"match bytes in TLS cert"; tls.certs; \
content:"|06 09 2a 86|"; sid:200070;)
8.16.12 tls.version
Match on negotiated TLS/SSL version.
Supported values: "1.0", "1.1", "1.2", "1.3"
It is also possible to match versions using a hex string.
Examples:
tls.version:1.2;
tls.version:0x7f12;
The first example matches TLSv1.2, while the second example matches TLSv1.3 draft 18 (0x7f00 plus the draft number, here 0x12 = 18).
8.16.13 ssl_version
Match version of SSL/TLS record.
Supported values "sslv2", "sslv3", "tls1.0", "tls1.1", "tls1.2", "tls1.3"
Example:
alert tls any any -> any any (msg:"match SSLv2 and SSLv3"; \
ssl_version:sslv2,sslv3; sid:200031;)
8.16.14 tls.fingerprint
Match TLS/SSL certificate SHA1 fingerprint.
Example:
tls.fingerprint:!"f3:40:21:48:70:2c:31:bc:b5:aa:22:ad:63:d6:bc:2e:b3:46:e2:5a"
8.16.15 tls.store
Store the TLS/SSL certificate on disk. The location can be specified in the output.tls-store.certs-log-dir parameter of the
yaml configuration file, cf. TLS parameters and certificates logging (tls.log).
8.16.16 ssl_state
The ssl_state keyword matches the state of the SSL connection. The possible states are client_hello,
server_hello, client_keyx, server_keyx and unknown. You can specify several states with | (OR) to check
for any of the specified states.
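For example, to match while the handshake is in either hello phase (a minimal sketch using the states listed above):
ssl_state:client_hello|server_hello;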
8.16.17 tls.random
Matches on the 32 bytes of the TLS random field.
Example:
alert tls any any -> any any (msg:"TLS random test"; \
tls.random; content:"|9b ce 7a 5e 57 5d 77 02 07 c2 9d be 24 01 cc f0 5d cd e1 d2 a5␣
˓→86 9c 4a 3e ee 38 db 55 1a d9 bc|"; sid: 200074;)
8.16.18 tls.random_time
Matches on the first 4 bytes of the TLS random field.
Example:
alert tls any any -> any any (msg:"TLS random_time test"; \
tls.random_time; content:"|9b ce 7a 5e|"; sid: 200075;)
8.16.19 tls.random_bytes
Matches on the last 28 bytes of the TLS random field.
Example:
alert tls any any -> any any (msg:"TLS random_bytes test"; \
tls.random_bytes; content:"|57 5d 77 02 07 c2 9d be 24 01 cc f0 5d cd e1 d2 a5 86 9c␣
˓→4a 3e ee 38 db 55 1a d9 bc|"; sid: 200076;)
8.16.20 tls.cert_chain_len
Matches on the TLS certificate chain length.
tls.cert_chain_len uses an unsigned 32-bit integer.
tls.cert_chain_len supports <, >, <>, ! and using an exact value.
Example:
alert tls any any -> any any (msg:"cert chain exact value"; \
tls.cert_chain_len:1; classtype:misc-activity; sid:1; rev:1;)
alert tls any any -> any any (msg:"cert chain less than value"; \
tls.cert_chain_len:<2; classtype:misc-activity; sid:2; rev:1;)
alert tls any any -> any any (msg:"cert chain greater than value"; \
tls.cert_chain_len:>0; classtype:misc-activity; sid:3; rev:1;)
alert tls any any -> any any (msg:"cert chain greater than less than value";\
tls.cert_chain_len:0<>2; classtype:misc-activity; sid:4; rev:1;)
alert tls any any -> any any (msg:"cert chain not value"; \
tls.cert_chain_len:!2; classtype:misc-activity; sid:5; rev:1;)
8.16.21 tls.alpn
Matches on the ALPN buffers.
Example:
alert tls any any -> any any (msg:"TLS ALPN test"; \
tls.alpn; content:"http/1.1"; sid:1;)
8.17.1 Frames
The SSH parser supports the following frames:
• ssh.record_hdr
• ssh.record_data
• ssh.record_pdu
These are header + data = pdu for SSH records, after the banner and before encryption. The SSH record header is 6
bytes long: 4 bytes length, 1 byte padding length, 1 byte message code.
Example:
alert ssh any any -> any any (msg:"hdr frame new keys"; frame:ssh.record_hdr; content:"|15|"; endswith; bsize:6;
sid:2;)
This rule matches like Wireshark ssh.message_code == 0x15.
8.17.2 ssh.proto
Match on the version of the SSH protocol used. ssh.proto is a sticky buffer, and can be used as a fast pattern.
ssh.proto replaces the previous buffer name: ssh_proto. You may continue to use the previous name, but it's
recommended that existing rules be converted to use the new name.
Format:
ssh.proto;
Example:
alert ssh any any -> any any (msg:"match SSH protocol version"; ssh.proto; content:"2.0"; sid:1000010;)
The example above matches on SSH connections with SSH version 2.0.
8.17.3 ssh.software
Match on the software string from the SSH banner. ssh.software is a sticky buffer, and can be used as a fast pattern.
Format:
ssh.software;
Example:
alert ssh any any -> any any (msg:"match SSH software string"; ssh.software; content:"openssh"; nocase; sid:1000020;)
The example above matches on SSH connections where the software string contains "openssh".
8.17.4 ssh.hassh
Match on hassh (md5 of hassh algorithms of client).
Example (the hash value is illustrative):
ssh.hassh; content:"ec7378c1a92f5a8dde7e8b7a1ddf33d1";
8.17.5 ssh.hassh.string
Match on Hassh string (hassh algorithms of client).
Example (the algorithm list is illustrative):
ssh.hassh.string; content:"ecdh-sha2-nistp256,ecdh-sha2-nistp384";
8.17.6 ssh.hassh.server
Match on hassh (md5 of hassh algorithms of server).
Example:
alert ssh any any -> any any (msg:"match SSH hash-server"; \
ssh.hassh.server; content:"b12d2871a1189eff20364cf5333619ee"; \
sid:1000020;)
8.17.7 ssh.hassh.server.string
Match on hassh string (hassh algorithms of server).
Example:
alert ssh any any -> any any (msg:"match SSH hash-server-string";
ssh.hassh.server.string; content:"[email protected],[email protected]";
sid:1000040;)
ssh.hassh.server.string is a 'sticky buffer'.
ssh.hassh.server.string can be used as fast_pattern.
8.18.1 ja3.hash
Match on JA3 hash (md5).
Example:
alert tls any any -> any any (msg:"match JA3 hash"; \
ja3.hash; content:"e7eca2baf4458d095b7f45da28c16c34"; \
sid:100001;)
8.18.2 ja3.string
Match on JA3 string.
Example:
alert tls any any -> any any (msg:"match JA3 string"; \
ja3.string; content:"19-20-21-22"; \
sid:100002;)
8.18.3 ja3s.hash
Match on JA3S hash (md5).
Example:
alert tls any any -> any any (msg:"match JA3S hash"; \
ja3s.hash; content:"b26c652e0a402a24b5ca2a660e84f9d5"; \
sid:100003;)
8.18.4 ja3s.string
Match on JA3S string.
Example:
alert tls any any -> any any (msg:"match on JA3S string"; \
ja3s.string; content:"771,23-35"; sid:100004;)
8.18.5 ja4.hash
Match on JA4 hash (e.g. q13d0310h3_55b375c5d22e_cd85d2d88918).
Example:
alert quic any any -> any any (msg:"match JA4 hash"; \
ja4.hash; content:"q13d0310h3_55b375c5d22e_cd85d2d88918"; \
sid:100001;)
Syntax:
With the <value> setting, matches on the address or value as it is being accessed or written:
Examples:
modbus: access write holding, address 500, value >200    # Write value greater than 200 at address 500 of Holding Registers table
modbus: unit >10, access read discretes, address <100    # Greater than unit identifier 10 and Read access at address smaller than 100 of Discretes Input table
modbus: unit 10<>20, access write holding, address 500, value >200    # Greater than unit identifier 10 and smaller than unit identifier 20 and Write value greater than 200 at address 500 of Holding Registers table
(cf. http://www.modbus.org/docs/Modbus_Application_Protocol_V1_1b3.pdf)
Note: Addresses for read and write start at 1. So if your system starts addressing at 0, you need to add 1 to the address
values.
Note: According to MODBUS Messaging on TCP/IP Implementation Guide V1.0b, it is recommended to keep the
TCP connection opened with a remote device and not to open and close it for each MODBUS/TCP transaction. In that
case, it is important to set the depth of the stream reassembling as unlimited (stream.reassembly.depth: 0)
Note: According to MODBUS Messaging on TCP/IP Implementation Guide V1.0b, the MODBUS slave device ad-
dresses on serial line are assigned from 1 to 247 (decimal). Address 0 is used as broadcast address.
(cf. http://www.modbus.org/docs/Modbus_Messaging_Implementation_Guide_V1_0b.pdf)
Paper and presentation (in French) on Modbus support are available: http://www.ssi.gouv.fr/agence/publication/
detection-dintrusion-dans-les-systemes-industriels-suricata-et-le-cas-modbus/
8.20.1 dcerpc.iface
Match on the value of the interface UUID in a DCERPC header. If any_frag option is given, the match shall be done
on all fragments. If it's not, the match shall only happen on the first fragment.
The format of the keyword:
dcerpc.iface:<uuid>;
dcerpc.iface:<uuid>,[>,<,!,=]<iface_version>;
dcerpc.iface:<uuid>,any_frag;
dcerpc.iface:<uuid>,[>,<,!,=]<iface_version>,any_frag;
Examples:
dcerpc.iface:367abb81-9844-35f1-ad32-98f038001003;
dcerpc.iface:367abb81-9844-35f1-ad32-98f038001003,!10;
dcerpc.iface:367abb81-9844-35f1-ad32-98f038001003,any_frag;
dcerpc.iface:367abb81-9844-35f1-ad32-98f038001003,>1,any_frag;
8.20.2 dcerpc.opnum
Match on one or many operation numbers and/or operation number range within the interface in a DCERPC header.
The format of the keyword:
dcerpc.opnum:<u16>;
dcerpc.opnum:[>,<,!,=]<u16>;
dcerpc.opnum:<u16>,<u16>,<u16>....;
dcerpc.opnum:<u16>-<u16>;
Examples:
dcerpc.opnum:15;
dcerpc.opnum:>10;
dcerpc.opnum:12,24,62,61;
dcerpc.opnum:12,18-24,5;
dcerpc.opnum:12-14,12,121,62-78;
8.20.3 dcerpc.stub_data
Match on the stub data in a given DCERPC packet. It is a 'sticky buffer'.
Example:
dcerpc.stub_data; content:"123456";
8.21.1 dhcp.leasetime
DHCP lease time (integer).
dhcp.leasetime uses an unsigned 64-bit integer.
Syntax:
dhcp.leasetime:[op]<number>
The time can be matched exactly, or compared using the op setting:
dhcp.leasetime:3 # exactly 3
dhcp.leasetime:<3 # smaller than 3
dhcp.leasetime:>=2 # greater than or equal to 2
Signature example:
alert dhcp any any -> any any (msg:"small DHCP lease time (<3)"; dhcp.leasetime:<3;␣
˓→sid:1; rev:1;)
8.21.2 dhcp.rebinding_time
DHCP rebinding time (integer).
dhcp.rebinding_time uses an unsigned 64-bit integer.
Syntax:
dhcp.rebinding_time:[op]<number>
The time can be matched exactly, or compared using the op setting:
dhcp.rebinding_time:3 # exactly 3
dhcp.rebinding_time:<3 # smaller than 3
dhcp.rebinding_time:>=2 # greater than or equal to 2
Signature example:
alert dhcp any any -> any any (msg:"small DHCP rebinding time (<3)"; dhcp.rebinding_time:
˓→<3; sid:1; rev:1;)
8.21.3 dhcp.renewal_time
DHCP renewal time (integer).
dhcp.renewal_time uses an unsigned 64-bit integer.
Syntax:
dhcp.renewal_time:[op]<number>
The time can be matched exactly, or compared using the op setting:
dhcp.renewal_time:3 # exactly 3
dhcp.renewal_time:<3 # smaller than 3
dhcp.renewal_time:>=2 # greater than or equal to 2
Signature example:
alert dhcp any any -> any any (msg:"small DHCP renewal time (<3)"; dhcp.renewal_time:<3;␣
˓→sid:1; rev:1;)
8.22.1 dnp3_func
This keyword will match on the application function code found in DNP3 requests and responses. It can be specified as
the integer value or the symbolic name of the function code.
Syntax
dnp3_func:<value>;
– record_current_time
– open_file
– close_file
– delete_file
– get_file_info
– authenticate_file
– abort_file
– activate_config
– authenticate_req
– authenticate_err
– response
– unsolicited_response
– authenticate_resp
8.22.2 dnp3_ind
This keyword matches on the DNP3 internal indicator flags in the response application header.
Syntax
dnp3_ind:<flag>{,<flag>...}
This keyword will match if any of the flags listed are set. To match on multiple flags (AND type match), use dnp3_ind
for each flag that must be set.
Examples
dnp3_ind:all_stations;
dnp3_ind:class_1_events,class_2_events;
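To require that two flags are both set (AND type match), repeat the keyword, for example:
dnp3_ind:class_1_events; dnp3_ind:class_2_events;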
8.22.3 dnp3_obj
This keyword matches on the DNP3 application data objects.
Syntax
dnp3_obj:<group>,<variation>
Where <group> and <variation> are integer values between 0 and 255 inclusive.
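A minimal illustrative example (the group and variation values are arbitrary here):
dnp3_obj:99,99;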
8.22.4 dnp3_data
This keyword will cause the following content options to match on the re-assembled application buffer. The reassembled
application buffer is a DNP3 fragment with CRCs removed (which occur every 16 bytes), and will be the complete
fragment, possibly reassembled from multiple DNP3 link layer frames.
Syntax
dnp3_data;
8.23.1 enip_command
Match on the ENIP command.
enip_command uses an unsigned 16-bit integer. It can also be specified by text from the enumeration.
Examples:
enip_command:99;
enip_command:list_identity;
8.23.2 cip_service
For the CIP Service, we use a maximum of 3 comma-separated values representing the Service, Class and Attribute.
These values are described in the CIP specification. CIP Classes are associated with their Service, and CIP Attributes
are associated with their Class. If you only need to match up until the Service, then only provide the Service value.
If you want to match down to the CIP Attribute, then you must provide all 3 values.
Examples:
cip_service:75
cip_service:16,246,6
(cf. http://read.pudn.com/downloads166/ebook/763211/EIP-CIP-V1-1.0.pdf)
Information on the protocol can be found here: http://literature.rockwellautomation.com/idc/groups/literature/
documents/wp/enet-wp001_-en-p.pdf
8.23.3 enip.status
For the ENIP status, we are matching against the status field found in the ENIP encapsulation. It uses a 32-bit unsigned
integer as value.
enip.status uses an unsigned 32-bit integer. It can also be specified by text from the enumeration.
Examples:
enip.status:100;
enip.status:>106;
enip.status:invalid_cmd;
8.23.4 enip.protocol_version
Match on the protocol version in different messages. It uses a 16-bit unsigned integer as value.
enip.protocol_version uses an unsigned 16-bit integer.
Examples:
enip.protocol_version:1;
enip.protocol_version:>1;
8.23.5 enip.cip_attribute
Match on the cip attribute in different messages. It uses a 32-bit unsigned integer as value.
This allows matching without needing to also match on cip_service.
enip.cip_attribute uses an unsigned 32-bit integer.
Examples:
enip.cip_attribute:1;
enip.cip_attribute:>1;
8.23.6 enip.cip_instance
Match on the cip instance in CIP request path. It uses a 32-bit unsigned integer as value.
enip.cip_instance uses an unsigned 32-bit integer.
Examples:
enip.cip_instance:1;
enip.cip_instance:>1;
8.23.7 enip.cip_class
Match on the cip class in CIP request path. It uses a 32-bit unsigned integer as value.
enip.cip_class uses an unsigned 32-bit integer.
This allows matching without needing to also match on cip_service.
Examples:
enip.cip_class:1;
enip.cip_class:>1;
8.23.8 enip.cip_extendedstatus
Match on the CIP extended status, if any is present. For a multiple service packet, this will match on any of the statuses seen.
It uses a 16-bit unsigned integer as value.
enip.cip_extendedstatus uses an unsigned 16-bit integer.
Examples:
enip.cip_extendedstatus:1;
enip.cip_extendedstatus:>1;
8.23.9 enip.revision
Match on the revision in identity message. It uses a 16-bit unsigned integer as value.
enip.revision uses an unsigned 16-bit integer.
Examples:
enip.revision:1;
enip.revision:>1;
8.23.10 enip.identity_status
Match on the status in identity message (not in ENIP header). It uses a 16-bit unsigned integer as value.
enip.identity_status uses an unsigned 16-bit integer.
Examples:
enip.identity_status:1;
enip.identity_status:>1;
8.23.11 enip.state
Match on the state in identity message. It uses an 8-bit unsigned integer as value.
enip.state uses an unsigned 8-bit integer.
Examples:
enip.state:1;
enip.state:>1;
8.23.12 enip.serial
Match on the serial in identity message. It uses a 32-bit unsigned integer as value.
enip.serial uses an unsigned 32-bit integer.
Examples:
enip.serial:1;
enip.serial:>1;
8.23.13 enip.product_code
Match on the product code in identity message. It uses a 16-bit unsigned integer as value.
enip.product_code uses an unsigned 16-bit integer.
Examples:
enip.product_code:1;
enip.product_code:>1;
8.23.14 enip.device_type
Match on the device type in identity message. It uses a 16-bit unsigned integer as value.
enip.device_type uses an unsigned 16-bit integer.
Examples:
enip.device_type:1;
enip.device_type:>1;
8.23.15 enip.vendor_id
Match on the vendor id in identity message. It uses a 16-bit unsigned integer as value.
enip.vendor_id uses an unsigned 16-bit integer.
Examples:
enip.vendor_id:1;
enip.vendor_id:>1;
8.23.16 enip.product_name
Match on the product name in identity message.
Examples:
enip.product_name; pcre:"/^123[0-9]*/";
enip.product_name; content:"swordfish";
8.23.17 enip.service_name
Match on the service name in list services message.
Examples:
enip.service_name; pcre:"/^123[0-9]*/";
enip.service_name; content:"swordfish";
8.23.18 enip.capabilities
Match on the capabilities in list services message. It uses a 16-bit unsigned integer as value.
enip.capabilities uses an unsigned 16-bit integer.
Examples:
enip.capabilities:1;
enip.capabilities:>1;
8.23.19 enip.cip_status
Match on the CIP status (one of them, in case of a multiple service packet). It uses an 8-bit unsigned integer as value.
enip.cip_status uses an unsigned 8-bit integer.
Examples:
enip.cip_status:1;
enip.cip_status:>1;
8.24.1 ftpdata_command
Filter the ftp-data channel based on the command used on the FTP command channel. Currently supported commands
are RETR (get on a file) and STOR (put on a file).
Syntax:
ftpdata_command:(retr|stor)
Signature Example:
alert ftp-data any any -> any any (msg:"FTP store password"; filestore; filename:"password"; ftpdata_command:stor;
sid:3; rev:1;)
8.24.2 ftpbounce
Detect FTP bounce attacks.
Syntax:
ftpbounce
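A minimal illustrative rule (the port, msg and sid are assumptions, not taken from this guide):
alert tcp any any -> any 21 (msg:"FTP bounce attempt"; ftpbounce; sid:100200; rev:1;)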
8.24.3 file.name
The file.name keyword can be used at the FTP application level.
Signature Example:
alert ftp-data any any -> any any (msg:"FTP file.name usage"; file.name; content:"file.txt"; classtype:bad-unknown;
sid:1; rev:1;)
For additional information on the file.name keyword, see File Keywords.
8.25.1 krb5_msg_type
Kerberos message type (integer).
Syntax:
krb5_msg_type:<number>
Signature examples:
alert krb5 any any -> any any (msg:"Kerberos 5 AS-REQ message"; krb5_msg_type:10; sid:3;␣
˓→rev:1;)
alert krb5 any any -> any any (msg:"Kerberos 5 AS-REP message"; krb5_msg_type:11; sid:4;␣
˓→rev:1;)
alert krb5 any any -> any any (msg:"Kerberos 5 TGS-REQ message"; krb5_msg_type:12; sid:5;
˓→ rev:1;)
alert krb5 any any -> any any (msg:"Kerberos 5 TGS-REP message"; krb5_msg_type:13; sid:6;
˓→ rev:1;)
alert krb5 any any -> any any (msg:"Kerberos 5 ERROR message"; krb5_msg_type:30; sid:7;␣
˓→rev:1;)
Note: AP-REQ and AP-REP are not currently supported since those messages are embedded in other application
protocols.
8.25.2 krb5_cname
Kerberos client name, provided in the ticket (for AS-REQ and TGS-REQ messages).
If the client name from the Kerberos message is composed of several parts, the name is compared to each part and the
match will succeed if any is identical.
Comparison is case-sensitive.
Syntax:
krb5_cname; content:"name";
Signature example:
alert krb5 any any -> any any (msg:"Kerberos 5 des client name"; krb5_cname; content:"des"; sid:4; rev:1;)
8.25.3 krb5_sname
Kerberos server name, provided in the ticket (for AS-REQ and TGS-REQ messages) or in the error message.
If the server name from the Kerberos message is composed of several parts, the name is compared to each part and the
match will succeed if any is identical.
Comparison is case-sensitive.
Syntax:
krb5_sname; content:"name";
Signature example:
alert krb5 any any -> any any (msg:"Kerberos 5 krbtgt server name"; krb5_sname; content:
˓→"krbtgt"; sid:5; rev:1;)
8.25.4 krb5_err_code
Kerberos error code (integer). This field is matched in Kerberos error messages only.
For a list of error codes, refer to RFC4120 section 7.5.9.
Syntax:
krb5_err_code:<number>
Signature example:
alert krb5 any any -> any any (msg:"Kerberos 5 error C_PRINCIPAL_UNKNOWN"; krb5_err_
˓→code:6; sid:6; rev:1;)
8.25.5 krb5.weak_encryption (event)
Syntax:
app-layer-event:krb5.weak_encryption
Signature example:
alert krb5 any any -> any any (msg:"SURICATA Kerberos 5 weak encryption parameters"; flow:to_client; app-layer-event:krb5.weak_encryption; classtype:protocol-command-decode; sid:2226001; rev:1;)
8.25.6 krb5.malformed_data (event)
Syntax:
app-layer-event:krb5.malformed_data
Signature example:
alert krb5 any any -> any any (msg:"SURICATA Kerberos 5 malformed request data"; flow:to_
˓→server; app-layer-event:krb5.malformed_data; classtype:protocol-command-decode;␣
˓→sid:2226000; rev:1;)
8.25.7 krb5.ticket_encryption
Kerberos ticket encryption (enumeration).
For a list of encryption types, refer to RFC3961 section 8.
Syntax:
krb5.ticket_encryption:[!]<encryption spec>[,<encryption spec>]*
Signature example:
alert krb5 any any -> any any (krb5.ticket_encryption: weak; sid:1;)
alert krb5 any any -> any any (krb5.ticket_encryption: 23; sid:2;)
alert krb5 any any -> any any (krb5.ticket_encryption: rc4-hmac,rc4-hmac-exp; sid:3;)
8.26.1 smb.named_pipe
Match on SMB named pipe in tree connect.
Examples (the pipe name is illustrative):
smb.named_pipe; content:"atsvc";
8.26.2 smb.share
Match on SMB share name in tree connect.
Examples (the share name is illustrative):
smb.share; content:"IPC$";
8.26.3 smb.ntlmssp_user
Match on SMB ntlmssp user in session setup.
Examples (the user name is illustrative):
smb.ntlmssp_user; content:"administrator"; nocase;
8.26.4 smb.ntlmssp_domain
Match on SMB ntlmssp domain in session setup.
Examples (the domain name is illustrative):
smb.ntlmssp_domain; content:"home"; nocase;
8.26.5 smb.version
Keyword to match on the SMB version seen in an SMB transaction.
Signature Example:
alert smb $HOME_NET any -> any any (msg:"SMBv1 version rule"; smb.version:1; sid:1;)
alert smb $HOME_NET any -> any any (msg:"SMBv2 version rule"; smb.version:2; sid:2;)
8.26.6 file.name
The file.name keyword can be used at the SMB application level.
Signature Example:
alert smb any any -> any any (msg:"SMB file.name usage"; file.name; content:"file.txt"; classtype:bad-unknown; sid:1;
rev:1;)
For additional information on the file.name keyword, see File Keywords.
8.27.1 snmp.version
SNMP protocol version (integer). Expected values are 1, 2 (for version 2c) or 3.
Syntax:
snmp.version:[op]<number>
The version can be matched exactly, or compared using the op setting:
snmp.version:3 # exactly 3
snmp.version:<3 # smaller than 3
snmp.version:>=2 # greater than or equal to 2
Signature example:
alert snmp any any -> any any (msg:"old SNMP version (<3)"; snmp.version:<3; sid:1;␣
˓→rev:1;)
8.27.2 snmp.community
SNMP community strings are like passwords for SNMP messages in version 1 and 2c. In version 3, the community
string is likely to be encrypted. This keyword will not match if the value is not accessible.
The default value for the read-only community string is often "public", and "private" for the read-write community
string.
Comparison is case-sensitive.
Syntax:
snmp.community; content:"private";
Signature example:
alert snmp any any -> any any (msg:"SNMP community private"; snmp.community; content:
˓→"private"; sid:2; rev:1;)
8.27.3 snmp.usm
SNMP User-based Security Model (USM) is used in version 3. It corresponds to the user name.
Comparison is case-sensitive.
Syntax:
snmp.usm; content:"admin";
Signature example:
alert snmp any any -> any any (msg:"SNMP usm admin"; snmp.usm; content:"admin"; sid:2;␣
˓→rev:1;)
8.27.4 snmp.pdu_type
SNMP PDU type (integer).
snmp.pdu_type uses an unsigned 32-bit integer.
Common values are:
• 0: GetRequest
• 1: GetNextRequest
• 2: Response
• 3: SetRequest
• 4: TrapV1 (obsolete, was the old Trap-PDU in SNMPv1)
• 5: GetBulkRequest
• 6: InformRequest
• 7: TrapV2
• 8: Report
This keyword will not match if the value is not accessible (for example, within an encrypted SNMPv3 message).
Syntax:
snmp.pdu_type:<number>
Signature example:
alert snmp any any -> any any (msg:"SNMP response"; snmp.pdu_type:2; sid:3; rev:1;)
8.28.1 base64_decode
Decodes base64 data from a buffer and makes it available for the base64_data function.
We recommend using the base64 transform instead; see from_base64.
Syntax:
base64_decode:bytes <value>, offset <value>, relative;
The bytes option specifies how many bytes Suricata should decode and make available for base64_data. The decoding
will stop at the end of the buffer.
The offset option specifies how many bytes Suricata should skip before decoding. Bytes are skipped relative to the
start of the payload buffer if the relative is not set.
The relative option makes the decoding start relative to the previous content match. Default behavior is to start at
the beginning of the buffer. This option makes offset skip bytes relative to the previous match.
Note: base64_decode follows RFC 4648 by default, i.e. encountering any character that is not found in the base64
alphabet leads to rejection of that character and the rest of the string.
See Redmine Bug 5223: https://redmine.openinfosecfoundation.org/issues/5223 and RFC 4648: https://www.
rfc-editor.org/rfc/rfc4648#section-3.3
8.28.2 base64_data
base64_data is a sticky buffer.
Enables content matching on the data previously decoded by base64_decode.
8.28.3 Example
Here is an example of a rule matching on the base64 encoded string "test" that is found inside the http_uri buffer.
It starts decoding relative to the known string "somestring" with the known offset of 1. This must be the first occurrence
of "somestring" in the buffer.
Example:
Buffer content:
http_uri = "GET /en/somestring&dGVzdAo=¬_base64"
Rule:
alert http any any -> any any (msg:"Example"; http.uri; content:"somestring"; \
base64_decode:bytes 8, offset 1, relative; \
base64_data; content:"test"; sid:10001; rev:1;)
Note: base64_data cannot be used with fast_pattern and will result in a rule load error.
Keyword Direction
sip.method Request
sip.uri Request
sip.request_line Request
sip.stat_code Response
sip.stat_msg Response
sip.response_line Response
sip.protocol Both
sip.from Both
sip.to Both
sip.via Both
sip.user_agent Both
sip.content_type Both
sip.content_length Both
8.29.1 sip.method
This keyword matches on the method found in a SIP request.
Syntax
sip.method; content:<method>;
Examples
sip.method; content:"INVITE";
8.29.2 sip.uri
This keyword matches on the uri found in a SIP request.
Syntax
sip.uri; content:<uri>;
Examples
sip.uri; content:"sip:sip.url.org";
8.29.3 sip.request_line
This keyword forces the whole SIP request line to be inspected.
Syntax
sip.request_line; content:<request_line>;
Examples
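An illustrative match on a complete request line (the method, URI and protocol version as they appear on the wire; the values shown are hypothetical):
sip.request_line; content:"REGISTER sip:sip.url.org SIP/2.0";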
8.29.4 sip.stat_code
This keyword matches on the status code found in a SIP response.
Syntax
sip.stat_code; content:<stat_code>
Examples
sip.stat_code; content:"100";
8.29.5 sip.stat_msg
This keyword matches on the status message found in a SIP response.
Syntax
sip.stat_msg; content:<stat_msg>
Examples
sip.stat_msg; content:"Trying";
8.29.6 sip.response_line
This keyword forces the whole SIP response line to be inspected.
Syntax
sip.response_line; content:<response_line>;
Examples
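An illustrative match on a complete response line (the values shown are hypothetical):
sip.response_line; content:"SIP/2.0 100 Trying";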
8.29.7 sip.protocol
This keyword matches the protocol field from a SIP request or response line.
If the response line is 'SIP/2.0 100 OK', then this buffer will contain 'SIP/2.0'
Syntax
sip.protocol; content:<protocol>
Example
sip.protocol; content:"SIP/2.0"
8.29.8 sip.from
This keyword matches on the From field that can be present in SIP headers. It matches both the regular and short forms,
though it cannot distinguish between them.
Syntax
sip.from; content:<from>
Example
sip.from; content:"user"
8.29.9 sip.to
This keyword matches on the To field that can be present in SIP headers. It matches both the regular and short forms,
though it cannot distinguish between them.
Syntax
sip.to; content:<to>
Example
sip.to; content:"user"
8.29.10 sip.via
This keyword matches on the Via field that can be present in SIP headers. It matches both the regular and short forms,
though it cannot distinguish between them.
Syntax
sip.via; content:<via>
Example
sip.via; content:"SIP/2.0/UDP"
8.29.11 sip.user_agent
This keyword matches on the User-Agent field that can be present in SIP headers.
Syntax
sip.user_agent; content:<user_agent>
Example
sip.user_agent; content:"Asterisk"
8.29.12 sip.content_type
This keyword matches on the Content-Type field that can be present in SIP headers. It matches both the regular and
short forms, though it cannot distinguish between them.
Syntax
sip.content_type; content:<content_type>
Example
sip.content_type; content:"application/sdp"
8.29.13 sip.content_length
This keyword matches on the Content-Length field that can be present in SIP headers. It matches both the regular and
short forms, though it cannot distinguish between them.
Syntax
sip.content_length; content:<content_length>
Example
sip.content_length; content:"200"
8.30.1 rfb.name
Match on the value of the RFB desktop name field.
Examples:
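An illustrative match on a desktop name (the name value is hypothetical):
rfb.name; content:"Alice's desktop";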
8.30.2 rfb.secresult
Match on the value of the RFB security result, e.g. ok, fail, toomany or unknown.
rfb.secresult uses an unsigned 32-bit integer.
Examples:
rfb.secresult: ok;
rfb.secresult: !0;
rfb.secresult: unknown;
8.30.3 rfb.sectype
Match on the value of the RFB security type field, e.g. 2 for VNC challenge-response authentication, 0 for no
authentication, and 30 for Apple's custom Remote Desktop authentication.
rfb.sectype uses an unsigned 32-bit integer.
This keyword takes a numeric argument after a colon and supports additional qualifiers, such as:
• > (greater than)
• < (less than)
• >= (greater than or equal)
• <= (less than or equal)
Examples:
rfb.sectype:2;
rfb.sectype:>=3;
8.31.1 mqtt.protocol_version
Match on the value of the MQTT protocol version field in the fixed header.
mqtt.protocol_version uses an unsigned 8-bit integer.
The format of the keyword:
mqtt.protocol_version:<min>-<max>;
mqtt.protocol_version:[<|>]<number>;
mqtt.protocol_version:<value>;
Examples:
mqtt.protocol_version:5;
8.31.2 mqtt.type
Match on the MQTT message type (also: control packet type). Valid values are:
• CONNECT
• CONNACK
• PUBLISH
• PUBACK
• PUBREC
• PUBREL
• PUBCOMP
• SUBSCRIBE
• SUBACK
• UNSUBSCRIBE
• UNSUBACK
• PINGREQ
• PINGRESP
• DISCONNECT
• AUTH
• UNASSIGNED
where UNASSIGNED refers to message type code 0.
mqtt.type uses an unsigned 8-bit integer.
Examples:
mqtt.type:CONNECT;
mqtt.type:PUBLISH;
mqtt.type:2;
8.31.3 mqtt.flags
Match on a combination of MQTT header flags, separated by commas (,). Flags may be prefixed by ! to indicate
negation, i.e. a flag prefixed by ! must not be set to match.
mqtt.flags uses an unsigned 8-bit integer.
Valid flags are:
• dup (duplicate message)
• retain (message should be retained on the broker)
Examples:
mqtt.flags:dup,!retain;
mqtt.flags:retain;
8.31.4 mqtt.qos
Match on the Quality of Service request code in the MQTT fixed header. Valid values are:
• 0 (fire and forget)
• 1 (at least one delivery)
• 2 (exactly one delivery)
Examples:
mqtt.qos:0;
mqtt.qos:2;
8.31.5 mqtt.reason_code
Match on the numeric value of the reason code that is used in MQTT 5.0 for some message types. Please refer to the
specification for the meaning of these values, which are often specific to the message type in question.
mqtt.reason_code uses an unsigned 8-bit integer.
This keyword is also available under the alias mqtt.connack.return_code for completeness.
Examples:
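Illustrative matches (reason code meanings are message-type specific; the values shown are arbitrary):
mqtt.reason_code:8;
mqtt.type:CONNACK; mqtt.reason_code:<8;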
8.31.6 mqtt.connack.session_present
Match on the MQTT CONNACK session_present flag. Values can be yes, true, no or false.
Examples:
mqtt.type:CONNACK; mqtt.connack.session_present:true;
8.31.7 mqtt.connect.clientid
Match on the self-assigned client ID in the MQTT CONNECT message.
Examples:
mqtt.connect.clientid; pcre:"/^mosq.*/";
mqtt.connect.clientid; content:"myclient";
8.31.8 mqtt.connect.flags
Match on a combination of MQTT CONNECT flags, separated by commas (,). Flags may be prefixed by ! to indicate
negation, i.e. a flag prefixed by ! must not be set to match.
mqtt.connect.flags uses an unsigned 8-bit integer.
Valid flags are:
• username (message contains a username)
• password (message contains a password)
• will (message contains a will definition)
• will_retain (the will message is to be retained on the broker)
• clean_session (start with a clean session)
Examples:
mqtt.connect.flags:username,password,!will;
mqtt.connect.flags:username,!password;
mqtt.connect.flags:clean_session;
8.31.9 mqtt.connect.password
Match on the password credential in the MQTT CONNECT message.
Examples:
mqtt.connect.password; pcre:"/^123[0-9]*/";
mqtt.connect.password; content:"swordfish";
8.31.10 mqtt.connect.protocol_string
Match on the protocol string in the MQTT CONNECT message. In contrast to mqtt.protocol_version this is a
property that is only really relevant in the initial CONNECT communication and never used again; hence it is organized
under mqtt.connect.
Examples:
mqtt.connect.protocol_string; content:"MQTT";
mqtt.connect.protocol_string; content:"MQIsdp";
8.31.11 mqtt.connect.username
Match on the username credential in the MQTT CONNECT message.
Examples:
mqtt.connect.username; content:"benson";
8.31.12 mqtt.connect.willmessage
Match on the will message in the MQTT CONNECT message, if a will is defined.
Examples:
mqtt.connect.willmessage; pcre:"/^fooba[rz]/";
mqtt.connect.willmessage; content:"hunter2";
8.31.13 mqtt.connect.willtopic
Match on the will topic in the MQTT CONNECT message, if a will is defined.
Examples:
mqtt.connect.willtopic; pcre:"/^hunter[0-9]/";
8.31.14 mqtt.publish.message
Match on the payload to be published in the MQTT PUBLISH message.
Examples:
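Illustrative matches on the published payload (the values shown are hypothetical):
mqtt.publish.message; pcre:"/uid=[0-9]+/";
mqtt.publish.message; content:"secret";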
8.31.15 mqtt.publish.topic
Match on the topic to be published to in the MQTT PUBLISH message.
Examples:
mqtt.publish.topic; content:"mytopic";
8.31.16 mqtt.subscribe.topic
Match on any of the topics subscribed to in a MQTT SUBSCRIBE message.
Examples:
mqtt.subscribe.topic; content:"mytopic";
8.31.17 mqtt.unsubscribe.topic
Match on any of the topics unsubscribed from in a MQTT UNSUBSCRIBE message.
Examples:
mqtt.unsubscribe.topic; content:"mytopic";
ike.init_spi; content:"18fe9b731f9f8034";
ike.resp_spi; content:"a00b8ef0902bb8ec";
8.32.2 ike.chosen_sa_attribute
Match on an attribute value of the chosen Security Association (SA) by the Responder. Supported for IKEv1
are: alg_enc, alg_hash, alg_auth, alg_dh, alg_prf, sa_group_type, sa_life_type, sa_life_duration,
sa_key_length and sa_field_size. IKEv2 supports alg_enc, alg_auth, alg_prf and alg_dh.
If there is more than one chosen SA the event MultipleServerProposal is set. The attributes of the first SA are used
for this keyword.
Examples:
ike.chosen_sa_attribute:alg_hash=2;
ike.chosen_sa_attribute:sa_key_length=128;
8.32.3 ike.exchtype
Match on the value of the Exchange Type.
ike.exchtype uses an unsigned 8-bit integer.
This keyword takes a numeric argument after a colon and supports additional qualifiers, such as:
• > (greater than)
• < (less than)
• >= (greater than or equal)
• <= (less than or equal)
• arg1-arg2 (range)
Examples:
ike.exchtype:5;
ike.exchtype:>=2;
8.32.4 ike.vendor
Match a vendor ID against the list of collected vendor IDs.
Examples:
ike.vendor:4a131c81070358455c5728f20e95452f;
8.32.5 ike.key_exchange_payload
Match against the public key exchange payload (e.g. Diffie-Hellman) of the server or client.
Examples:
ike.key_exchange_payload; content:"|6d026d5616c45be05e5b898411e9|"
8.32.6 ike.key_exchange_payload_length
Match against the length of the public key exchange payload (e.g. Diffie-Hellman) of the server or client.
ike.key_exchange_payload_length uses an unsigned 32-bit integer.
This keyword takes a numeric argument after a colon and supports additional qualifiers, such as:
• > (greater than)
• < (less than)
• >= (greater than or equal)
• <= (less than or equal)
• arg1-arg2 (range)
Examples:
ike.key_exchange_payload_length:>132
8.32.7 ike.nonce_payload
Match against the nonce of the server or client.
Examples:
ike.nonce_payload; content:"|6d026d5616c45be05e5b898411e9|"
8.32.8 ike.nonce_payload_length
Match against the length of the nonce of the server or client.
ike.nonce_payload_length uses an unsigned 32-bit integer.
This keyword takes a numeric argument after a colon and supports additional qualifiers, such as:
• > (greater than)
• < (less than)
• >= (greater than or equal)
• <= (less than or equal)
• arg1-arg2 (range)
Examples:
ike.nonce_payload_length:132
ike.nonce_payload_length:>132
8.33.1 Frames
The HTTP2 parser supports the following frames (as defined by Suricata), which are created for each HTTP2 frame (as
defined by the HTTP2 RFC):
• http2.hdr
• http2.data
• http2.pdu
8.33.2 http2.frametype
Match on the frame type present in a transaction.
Examples:
http2.frametype:GOAWAY;
8.33.3 http2.errorcode
Match on the error code in a GOAWAY or RST_STREAM frame.
Examples:
http2.errorcode: NO_ERROR;
http2.errorcode: INADEQUATE_SECURITY;
8.33.4 http2.priority
Match on the value of the HTTP2 priority field present in a PRIORITY or HEADERS frame.
http2.priority uses an unsigned 8-bit integer.
This keyword takes a numeric argument after a colon and supports additional qualifiers, such as:
• > (greater than)
• < (less than)
• x-y (range between values x and y)
Examples:
http2.priority:2;
http2.priority:>100;
http2.priority:32-64;
8.33.5 http2.window
Match on the value of the HTTP2 value field present in a WINDOWUPDATE frame.
http2.window uses an unsigned 32-bit integer.
This keyword takes a numeric argument after a colon and supports additional qualifiers, such as:
• > (greater than)
• < (less than)
• x-y (range between values x and y)
Examples:
http2.window:1;
http2.window:<100000;
8.33.6 http2.size_update
Match on the size of the HTTP2 Dynamic Headers Table. More information on the protocol can be found here:
https://tools.ietf.org/html/rfc7541#section-6.3
http2.size_update uses an unsigned 64-bit integer.
This keyword takes a numeric argument after a colon and supports additional qualifiers, such as:
• > (greater than)
• < (less than)
• x-y (range between values x and y)
Examples:
http2.size_update:1234;
http2.size_update:>4096;
8.33.7 http2.settings
Match on the name and value of a HTTP2 setting from a SETTINGS frame.
This keyword takes a numeric argument after a colon and supports additional qualifiers, such as:
• > (greater than)
• < (less than)
• x-y (range between values x and y)
Examples:
http2.settings:SETTINGS_ENABLE_PUSH=0;
http2.settings:SETTINGS_HEADER_TABLE_SIZE>4096;
8.33.8 http2.header_name
Match on the name of a HTTP2 header from a HEADER frame (or PUSH_PROMISE or CONTINUATION).
Examples:
http2.header_name; content:"agent";
8.34.1 quic.cyu.hash
Match on the CYU hash.
Examples:
alert quic any any -> any any (msg:"QUIC CYU HASH"; \
quic.cyu.hash; content:"7b3ceb1adc974ad360cfa634e8d0a730"; \
sid:1;)
8.34.2 quic.cyu.string
Match on the CYU string.
Examples:
alert quic any any -> any any (msg:"QUIC CYU STRING"; \
quic.cyu.string; content:"46,PAD-SNI-VER-CCS-UAID-TCID-PDMD-SMHL-ICSL-NONP-MIDS-SCLS-CSCT-COPT-IRTT-CFCW-SFCW"; \
sid:2;)
8.34.3 quic.version
Sticky buffer for matching on the Quic header version in long headers.
Examples:
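An illustrative match on a Google QUIC version string (the version value shown is one plausible example):
quic.version; content:"Q046";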
8.36.2 Frames
The SMTP parser supports the following frames:
• smtp.command_line
• smtp.response_line
• smtp.data
• smtp.stream
smtp.command_line
A single line from the client to the server. Multi-line commands will have a frame per line. Lines part of the SMTP
DATA transfer are excluded.
alert smtp any any -> any any ( frame:smtp.command_line; content:"MAIL|20|FROM:"; startswith; sid:1;)
smtp.response_line
A single line from the server to the client. Multi-line commands will have a frame per line.
alert smtp any any -> any any ( frame:smtp.response_line; content:"354 go ahead"; startswith; sid:1;)
smtp.data
A streaming buffer containing the DATA bytes sent from client to server.
alert smtp any any -> any any ( frame:smtp.data; content:"Reply-To:"; startswith; content:"Subject"; distance:0; sid:1;)
smtp.stream
Streaming buffer of the entire TCP data for the SMTP session.
alert smtp any any -> any any (flow:to_client; frame:smtp.stream; content:"250 ok|0d 0a|354 go ahead"; sid:1;)
websocket.payload; pcre:"/^123[0-9]*/";
websocket.payload; content:"swordfish";
8.37.2 websocket.flags
Matches on the websocket flags. It uses an 8-bit unsigned integer as value. Only the four upper bits are used.
The value can also be a list of strings (comma-separated), where each string is the name of a specific bit like fin and
comp, and can be prefixed by ! for negation.
websocket.flags uses an unsigned 8-bit integer.
Examples:
websocket.flags:128;
websocket.flags:&0x40=0x40;
websocket.flags:fin,!comp;
8.37.3 websocket.mask
Matches on the websocket mask, if any. It uses a 32-bit unsigned integer as value (big-endian).
websocket.mask uses an unsigned 32-bit integer.
Examples:
websocket.mask:123456;
websocket.mask:>0;
8.37.4 websocket.opcode
Matches on the websocket opcode. It uses an 8-bit unsigned integer as value. Only 16 values are relevant. It can also
be specified by text from the enumeration.
websocket.opcode uses an unsigned 8-bit integer.
Examples:
websocket.opcode:1;
websocket.opcode:>8;
websocket.opcode:ping;
app-layer-protocol:[!]<protocol>(,<mode>);
Examples:
app-layer-protocol:ssh;
app-layer-protocol:!tls;
app-layer-protocol:failed;
app-layer-protocol:!http,final;
A special value 'failed' can be used for matching on flows in which protocol detection failed. This can happen if Suricata
doesn't know the protocol or when certain 'bail out' conditions happen.
The different modes are:
• direction: protocol recognized in the direction of the current packet
• to_server: protocol recognized in the direction to server
• to_client: protocol recognized in the direction to client
• either: tries to match protocols found in both directions
• final: final protocol chosen by Suricata for parsing
• original: original protocol (in case of protocol change)
By default (if no mode is specified), the mode is direction.
Here is an example of a rule matching non-http traffic on port 80:
alert tcp any any -> any 80 (msg:"non-HTTP traffic over HTTP standard port"; flow:to_server; app-layer-protocol:!http,final; sid:1;)
8.38.2 app-layer-event
Match on events generated by the App Layer Parsers and the protocol detection engine.
Syntax:
app-layer-event:<event name>;
Examples:
app-layer-event:applayer_mismatch_protocol_both_directions;
app-layer-event:http.gzip_decompression_failed;
Protocol Detection
applayer_mismatch_protocol_both_directions
The toserver and toclient directions have different protocols. For example a client talking HTTP to a SSH server.
applayer_wrong_direction_first_data
Some protocol implementations in Suricata have a requirement with regards to the first data direction. The HTTP parser
is an example of this.
https://redmine.openinfosecfoundation.org/issues/993
applayer_detect_protocol_only_one_direction
Protocol detection only succeeded in one direction. For FTP and SMTP this is expected.
applayer_proto_detection_skipped
xbits:<set|unset|isset|isnotset|toggle>,<name>,track <ip_src|ip_dst|ip_pair>;
xbits:<set|unset|isset|toggle>,<name>,track <ip_src|ip_dst|ip_pair> \
[,expire <seconds>];
8.39.1 Notes
• There is no difference between using hostbits and xbits with track ip_<src|dst>.
• If you set a bit on a client request with track ip_dst and you want to match on the server response, you check it
(isset) with track ip_src.
• To not alert, use noalert;.
• The toggle option will flip the value of the xbits.
• See also:
– https://blog.inliniac.net/2014/12/21/crossing-the-streams-in-suricata/
– http://www.cipherdyne.org/blog/2013/07/crossing-the-streams-in-ids-signature-languages.html
YAML settings
Bits that are stored per host are stored in the Host table. This means that host table settings affect hostbits and xbits
per host.
Bits that are stored per IP pair are stored in the IPPair table. This means that ippair table settings, especially memcap,
affect xbits per ip_pair.
Threading
Due to subtle timing issues between threads the order of sets and checks can be slightly unpredictable.
Unix Socket
Hostbits can be added, removed and listed through the unix socket.
Add:
List:
{
"message":
{
"count": 1,
"hostbits":
[{
"expire": 89,
"name": "blacklist"
}]
},
"return": "OK"
}
Examples
Creating an SSH blacklist
Then the following rule simply drops any incoming traffic to that server that is on that 'badssh' list:
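A sketch of such a rule pair (the server variable, match condition, bit name 'badssh', expiry and sids are all illustrative): the first rule sets the xbit for a suspicious client, and the second drops traffic from any host that has the bit set:
alert ssh any any -> $MYSERVER 22 (msg:"xbits: set badssh"; flow:to_server; xbits:set, badssh, track ip_src, expire 3600; sid:1000001;)
drop ssh any any -> $MYSERVER 22 (msg:"xbits: drop blacklisted host"; flow:to_server; xbits:isset, badssh, track ip_src; sid:1000002;)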
8.40.1 noalert
A rule that specifies noalert will not generate an alert when it matches, but rule actions will still be performed.
noalert is often used in rules that set a flowbit for common patterns.
noalert is meant for use with the rule actions alert, drop and reject, which all explicitly or implicitly include alert.
alert http any any -> any any (http.user_agent; content:"Mozilla/5.0"; startswith; endswith; flowbits:set,mozilla-ua; noalert; sid:1;)
This example sets a flowbit "mozilla-ua" on matching, but does not generate an alert due to the presence of noalert.
8.40.2 alert
A rule that specifies alert will generate an alert, even if the rule action doesn't imply alerting.
This keyword can be used to implement an "alert then pass"-logic.
pass http any any -> any any (http.user_agent; content:"Mozilla/5.0"; startswith; endswith; alert; sid:1;)
This example would pass the rest of the HTTP flow with the Mozilla/5.0 user-agent, generating an alert for the "pass"
event.
8.41.1 threshold
The threshold keyword can be used to control the rule's alert frequency. It has four types: threshold, limit, both and backoff.
Syntax:
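A sketch of the general form, reconstructed from the track and type options described below (the backoff type uses multiplier instead of seconds; see type "backoff" below):
threshold: type <threshold|limit|both>, track <by_src|by_dst|by_rule|by_both|by_flow>, count <N>, seconds <T>;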
type "threshold"
This type can be used to set a minimum threshold for a rule before it generates alerts. A threshold setting of N means
on the Nth time the rule matches an alert is generated.
Example:
alert tcp !$HOME_NET any -> $HOME_NET 25 (msg:"ET POLICY Inbound Frequent Emails - Possible Spambot Inbound"; \
This signature only generates an alert if we get 10 inbound emails or more from the same server in a time period of one
minute.
If a signature sets a flowbit, flowint, etc. those actions are still performed for each of the matches.
Rule actions drop (IPS mode) and reject are applied to each packet (not only the one that meets the thresh-
old condition).
type "limit"
This type can be used to make sure you're not getting flooded with alerts. If set to limit N, it alerts at most N times.
Example:
alert http $HOME_NET any -> any $HTTP_PORTS (msg:"ET USER_AGENTS Internet Explorer 6 in use - Significant Security Risk"; \
In this example at most 1 alert is generated per host within a period of 3 minutes if MSIE 6.0 is detected.
If a signature sets a flowbit, flowint, etc. those actions are still performed for each of the matches.
Rule actions drop (IPS mode) and reject are applied to each packet (not only the one that meets the limit
condition).
type "both"
This type is a combination of the "threshold" and "limit" types. It applies both thresholding and limiting.
Example:
alert tcp $HOME_NET 5060 -> $EXTERNAL_NET any (msg:"ET VOIP Multiple Unauthorized SIP Responses TCP"; \
This alert will only generate an alert if within 6 minutes there have been 5 or more "SIP/2.0 401 Unauthorized" re-
sponses, and it will alert only once in that 6 minutes.
If a signature sets a flowbit, flowint, etc. those actions are still performed for each of the matches.
Rule actions drop (IPS mode) and reject are applied to each packet.
type "backoff"
Allow limiting of alert output by using a backoff algorithm.
Syntax:
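One plausible form, consistent with the backoff example shown further down in this section:
threshold: type backoff, track by_flow, count <count>, multiplier <multiplier>;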
track
backoff is only supported for by_flow
count
number of alerts before the first match is logged
multiplier
value to multiply count with each time the next value is reached
A count of 1 with a multiplier of 10 would generate alerts for the 1st, 10th, 100th, 1000th, ... matching packet.
In the following example, pkt_invalid_ack would only lead to alerts on the 1st, 10th, 100th match, and so on.
alert tcp any any -> any any (stream-event:pkt_invalid_ack;
threshold:type backoff, track by_flow, count 1, multiplier 10; sid:2210045; rev:2;)
If a signature sets a flowbit, flowint, etc. those actions are still performed for each of the matches.
Rule actions drop (IPS mode) and reject are applied to each matching packet.
track
Option Tracks By
by_src source IP
by_dst destination IP
by_both pair of src IP and dst IP
by_rule signature id
by_flow flow
8.41.2 detection_filter
The detection_filter keyword can be used to alert on every match after a threshold has been reached. It differs from
threshold with type threshold in that it generates an alert for each rule match after the initial threshold has been
reached, whereas the latter resets its internal counter and alerts again when the threshold has been reached again.
Syntax:
Example:
Alerts each time after 15 or more matches have occurred within 2 seconds.
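A sketch of the syntax, plus an example matching the description above (the rule content, message and sid are illustrative):
detection_filter: track <by_src|by_dst|by_rule|by_both|by_flow>, count <N>, seconds <T>;
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"Many HTTP 404s from one source"; http.stat_code; content:"404"; detection_filter:track by_src, count 15, seconds 2; sid:1000003;)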
If a signature sets a flowbit, flowint, etc. those actions are still performed for each of the matches.
Rule actions drop (IPS mode) and reject are applied to each packet that generates an alert.
8.42.1 iprep
The iprep directive matches on the IP reputation information for a host.
alert ip $HOME_NET any -> any any (msg:"IPREP internal host talking to CnC server"; flow:to_server; iprep:dst,CnC,>,30; sid:1; rev:1;)
This rule will alert when a system in $HOME_NET acts as a client while communicating with any IP in the CnC category
that has a reputation score set to greater than 30.
iprep:<side to check>,<category>,<isset|isnotset>;
alert ip any any -> any any (msg:"IPREP High Value CnC"; iprep:src,CnC,>,100; sid:1; rev:1;)
8.43.1 ip.src
The ip.src keyword is a sticky buffer to match on source IP address. It matches on the binary representation and is
compatible with datasets of types ip and ipv4.
Example:
alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg:"Inbound bad list"; flow:to_server; ip.src; dataset:isset,badips,type ip,load badips.list; sid:1; rev:1;)
8.43.2 ip.dst
The ip.dst keyword is a sticky buffer to match on destination IP address. It matches on the binary representation and is
compatible with the dataset of type ip and ipv4.
Example:
alert tcp $HOME_NET any -> any any (msg:"Outbound bad list"; flow:to_server; ip.dst; dataset:isset,badips,type ip,load badips.list; sid:1; rev:1;)
config dns any any -> any any (dns.query; content:"suricata"; config: logging disable, type tx, scope tx; sid:1;)
This example will detect if a DNS query contains the string suricata and if so disable the DNS transaction logging.
This means that eve.json records, but also Lua output, will not be generated/triggered for this DNS transaction.
8.44.1 Keyword
The config rule keyword provides the setting and the scope of the change.
Syntax:
8.44.2 Action
Config rules can, but don't have to, use the config rule action. The config rule action won't generate an alert when the
rule matches, but the rule actions will still be applied. It is equivalent to alert ... (noalert; ...).
8.45 Datasets
Using the dataset and datarep keyword it is possible to match on large amounts of data against any sticky buffer.
For example, to match against a DNS black list called dns-bl:
dns.query; dataset:isset,dns-bl;
These keywords are aware of transforms. So to look up the MD5 of a DNS query against an MD5 black list:
dns.query; to_md5; dataset:isset,dns-bl;
datasets:
ua-seen:
type: string
state: ua-seen.lst
dns-sha256-seen:
type: sha256
state: dns-sha256-seen.lst
Example:
datasets:
defaults:
memcap: 100mb
hashsize: 2048
ua-seen:
type: string
load: ua-seen.lst
datasets:
ua-seen:
type: string
load: ua-seen.lst
memcap: 10mb
hashsize: 1024
Note
The hashsize should be close to the number of entries in the dataset to avoid collisions. If it's set too low, it could
result in a rather long startup time.
dataset:<cmd>,<name>,<options>;
dataset:<set|unset|isset|isnotset>,<name> \
[, type <string|md5|sha256|ipv4|ip>, save <file name>, load <file name>, state <file name>, memcap <size>, hashsize <size>];
type <type>
the data type: string, md5, sha256, ipv4, ip
load <file name>
file name for loading the data when Suricata starts up
state
sets file name for loading and saving a dataset
save <file name>
advanced option to set the file name for saving the in-memory data when Suricata exits.
memcap <size>
maximum memory limit for the respective dataset
hashsize <size>
allowed size of the hash for the respective dataset
Note
Notice how it is not possible to do certain operations alone with datasets (example 2 above), but, it is possible to use a
combination of other rule keywords. Keep in mind the cost of additional keywords though e.g. in the second example
rule above, negative performance impact can be expected due to pcrexform.
datarep
Data Reputation allows matching data against a reputation list.
Syntax:
datarep:<name>,<operator>,<value> \
[, load <file name>, type <string|md5|sha256|ipv4|ip>, memcap <size>, hashsize <size>];
alert dns any any -> any any (dns.query; to_md5; datarep:dns_md5, >, 200, load dns_md5.rep, type md5, memcap 100mb, hashsize 2048; sid:1;)
alert dns any any -> any any (dns.query; to_sha256; datarep:dns_sha256, >, 200, load dns_sha256.rep, type sha256; sid:2;)
alert dns any any -> any any (dns.query; datarep:dns_string, >, 200, load dns_string.rep, type string; sid:3;)
In these examples the DNS query string is checked against three different reputation lists: an MD5 list, a SHA256 list,
and a raw string (buffer) list. The rules will only match if the data is in the list and the reputation value is higher than
200.
set name
Name of an already defined dataset
type
Data type: string, md5, sha256, ipv4, ip
data
Data to add in serialized form (base64 for string, hex notation for md5/sha256, string representation for ipv4/ip)
Example adding 'google.com' to set 'myset':
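Assuming the string is supplied in its serialized (base64) form, where "google.com" encodes to Z29vZ2xlLmNvbQ==:
dataset-add myset string Z29vZ2xlLmNvbQ==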
dataset-remove
Unix Socket command to remove data from a set. On success, the removal becomes active instantly.
Syntax:
set name
Name of an already defined dataset
type
Data type: string, md5, sha256, ipv4, ip
data
Data to remove in serialized form (base64 for string, hex notation for md5/sha256, string representation for
ipv4/ip)
dataset-clear
Unix Socket command to remove all data from a set. On success, the removal becomes active instantly.
Syntax:
set name
Name of an already defined dataset
type
Data type: string, md5, sha256, ipv4, ip
dataset-lookup
Unix Socket command to test if data is in a set.
Syntax:
set name
Name of an already defined dataset
type
Data type: string, md5, sha256, ipv4, ip
data
Data to test in serialized form (base64 for string, hex notation for md5/sha256, string notation for ipv4/ip)
Example testing if 'google.com' is in the set 'myset':
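Again using the base64 encoded form of "google.com":
dataset-lookup myset string Z29vZ2xlLmNvbQ==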
dataset-dump
Unix socket command to trigger a dump of datasets to disk.
Syntax:
dataset-dump
data types
string
in the file as base64 encoded string
md5
in the file as hex encoded string
sha256
in the file as hex encoded string
ipv4
in the file as string
ip
in the file as string, it can be IPv6 or IPv4 address (standard notation or IPv4 in IPv6 one)
dataset
Datasets have a simple structure, where there is one piece of data per line in the file.
Syntax:
<data>
For example, a string entry is stored base64 encoded:
TW96aWxsYS80LjAgKGNvbXBhdGlibGU7ICk=
which decodes to:
Mozilla/4.0 (compatible; )
datarep
The datarep format follows the dataset format, except that there is one more CSV field:
Syntax:
<data>,<value>
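For example, a string-type reputation entry for "google.com" (base64 encoded, as in the dataset format) with an illustrative reputation value of 100:
Z29vZ2xlLmNvbQ==,100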
8.45.7 Security
As datasets potentially allow a rule distributor write access to your system with save and state dataset rules, the
locations allowed are strict by default, however there are two dataset options to tune the security of rules utilizing
dataset filenames:
datasets:
rules:
# Set to true to allow absolute filenames and filenames that use
# ".." components to reference parent directories in rules that specify
# their filenames.
allow-absolute-filenames: false
By setting datasets.rules.allow-write to false, all save and state rules will fail to load. This option is enabled
by default to preserve compatibility with previous 6.0 Suricata releases, however it may change in a future major release.
Pre-Suricata 6.0.13 behavior can be restored by setting datasets.rules.allow-absolute-filenames to true,
however doing so will allow any rule to overwrite any file on your system that Suricata has write access to.
Note
Lua is disabled by default for use in rules, it must be enabled in the configuration file. See the security.lua
section of suricata.yaml and enable allow-rules.
Syntax:
lua:[!]<scriptfilename>;
The init function registers the buffer(s) that need inspection. Currently the following are available:
• packet -- entire packet, including headers
• payload -- packet payload (not stream)
• buffer -- the current sticky buffer
• stream
• dnp3
• dns.request
• dns.response
• dns.rrname
• ssh
• smtp
• tls
• http.uri
• http.uri.raw
• http.request_line
• http.request_headers
• http.request_headers.raw
• http.request_cookie
• http.request_user_agent
• http.request_body
• http.response_headers
• http.response_headers.raw
• http.response_body
• http.response_cookie
All the HTTP buffers have a limitation: only one can be inspected by a script at a time.
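The registration itself can be sketched as follows, here requesting the http.request_line buffer used by the match function shown next (a minimal illustration, not a complete script):
function init(args)
    local needs = {}
    needs["http.request_line"] = tostring(true)
    return needs
end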
The script can return 1 or 0. It should return 1 if the condition(s) it checks for match, 0 if not.
Entire script:
function match(args)
    a = tostring(args["http.request_line"])
    if #a > 0 then
        if a:find("^POST%s+/.*%.php%s+HTTP/1.0$") then
            return 1
        end
    end
    return 0
end
Package Functions
base assert, ipairs, next, pairs, print, rawequal, rawlen, select, tonumber, tostring, type, warn, rawget, rawset, error
table concat, insert, move, pack, remove, sort, unpack
string byte, char, dump, find, format, gmatch, gsub, len, lower, match, pack, packsize, rep, reverse, sub, unpack, upper
math abs, acos, asin, atan, atan2, ceil, cos, cosh, deg, exp, floor, fmod, frexp, ldexp, log, log10, max, min, modf, pow, rad, random, randomseed, sin, sinh, sqrt, tan, tanh, tointeger, type, ult
utf8 offset, len, codes, char, codepoint
Note
Suricata 8.0 has moved to Lua 5.4 and has builtin support for bitwise and utf8 operations now.
A comprehensive list of existing lua functions - with examples - can be found at Lua functions (some of them, however,
work only for the lua-output functionality).
Where not specified, the statements below apply to Suricata. In general, references to Snort refer to the version 2.9
branch.
Note that, unlike Suricata, if there is no space (or if there is a tab) immediately after the colon before the header value,
Snort leaves the content of the header line unchanged in the http_header buffer.
• When there are duplicate HTTP headers (referring to the header name only, not the value), the normalized buffer
(http_header) will concatenate the values in the order seen (from top to bottom), with a comma and space (",
") between each of them. If this hinders detection, use the http_raw_header buffer instead.
Example request:
Content-Length: 44, 55
• The HTTP 'Cookie' and 'Set-Cookie' headers are NOT included in the http_header buffer; instead they are
extracted and put into their own buffer – http_cookie. See the http_cookie Buffer section.
• The HTTP 'Cookie' and 'Set-Cookie' headers ARE included in the http_raw_header buffer so if you are trying
to match on something like particular header ordering involving (or not involving) the HTTP Cookie headers,
use the http_raw_header buffer.
• If 'enable_cookie' is set for Snort, the HTTP Cookie header names and trailing CRLF (i.e. "Cookie: \r\n" and "Set-Cookie: \r\n") are kept in the http_header buffer. This is not the case for Suricata, which removes the entire "Cookie" or "Set-Cookie" line from the http_header buffer.
• Other HTTP headers that have their own buffer (http_user_agent, http_host) are not removed from the
http_header buffer like the Cookie headers are.
• When inspecting server responses and file_data is used, content matches in http_* buffers should come
before file_data unless you use pkt_data to reset the cursor before matching in http_* buffers. Snort will
not complain if you use http_* buffers after file_data is set.
monster, elmo
monsterelmo
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:".EXE File Download Request"; flow:established,to_server; content:"GET"; http_method; content:".exe"; http_uri;
• If you are unclear about behavior in a particular instance, you are encouraged to positively and negatively test
your rules that use an isdataat keyword.
• With Snort you can't combine the "relative" PCRE option ('R') with other buffer options like normalized URI
('U') – you get a syntax error.
8.47.16 Flowbits
• Suricata fully supports the setting and checking of flowbits (including the same flowbit) on the same
packet/stream. Snort does not always allow for this.
• In Suricata, flowbits:isset is checked after the fast pattern match but before other content matches. In
Snort, flowbits:isset is checked in the order it appears in the rule, from left to right.
• If there is a chain of flowbits where multiple rules set flowbits and they are dependent on each other, then the
order of the rules or the sid values can make a difference in the rules being evaluated in the proper order and
generating alerts as expected. See bug 1399 - https://redmine.openinfosecfoundation.org/issues/1399.
• Flow Keywords
8.47.17 flowbits:noalert;
A common pattern in existing rules is to use flowbits:noalert; to make sure a rule doesn't generate an alert if it
matches.
Suricata allows using just noalert; as well. Both have an identical meaning in Suricata.
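A sketch of the pattern (sids, flowbit name and content strings are illustrative, not from a real ruleset): the first rule sets a flowbit without alerting, the second alerts only when that bit is already set on the flow.

```
alert tcp any any -> any any (msg:"stage one seen"; content:"STAGE1"; flowbits:set,stage1.seen; flowbits:noalert; sid:1000001; rev:1;)
alert tcp any any -> any any (msg:"stage two after stage one"; content:"STAGE2"; flowbits:isset,stage1.seen; sid:1000002; rev:1;)
```

In Suricata, the flowbits:noalert; in the first rule could equally be written as just noalert;.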
user=suri
This rule snippet will never return true in Snort but will in Suricata:
• There are a number of configuration options and considerations (such as stream reassembly depth and libhtp body-limit) that should be understood if you want to fully utilize file extraction in Suricata.
• File Keywords
• File Extraction
• https://blog.inliniac.net/2011/11/29/file-extraction-in-suricata/
• https://blog.inliniac.net/2014/11/11/smtp-file-extraction-in-suricata/
8.47.23 Alerts
• In Snort, the number of alerts generated for a packet/stream can be limited by the event_queue configuration.
• Suricata has an internal hard-coded limit of 15 alerts per packet/stream (and this cannot be configured); all rules
that match on the traffic being analyzed will fire up to that limit.
• Sometimes Suricata will generate what appears to be two alerts for the same TCP packet. This happens when
Suricata evaluates the packet by itself and as part of a (reassembled) stream.
Buffer             Snort 2.9.x Support?   Suricata Support?      PCRE    Can be used as   Suricata Fast Pattern Priority
                                                                 flag    Fast Pattern?    (lower number is higher priority)
------------------ ---------------------- ---------------------- ------- ---------------- ---------------------------------
content            YES                    YES                    <none>  YES              3
(no modifier)
http_method        YES                    YES                    M       Suricata only    3
http_stat_code     YES                    YES                    S       Suricata only    3
http_stat_msg      YES                    YES                    Y       Suricata only    3
uricontent         YES but deprecated,    YES but deprecated,    U       YES              2
                   use http_uri instead   use http_uri instead
http_uri           YES                    YES                    U       YES              2
http_raw_uri       YES                    YES                    I       Suricata only    2
http_header        YES                    YES                    H       YES              2
http_raw_header    YES                    YES                    D       Suricata only    2
http_cookie        YES                    YES                    C       Suricata only    2
http_raw_cookie    YES                    NO (use                K       NO               n/a
                                          http_raw_header
                                          instead)
http_host          NO                     YES                    W       Suricata only    2
http_raw_host      NO                     YES                    Z       Suricata only    2
http_client_body   YES                    YES                    P       YES              2
http_server_body   NO                     YES                    Q       Suricata only    2
http_user_agent    NO                     YES                    V       Suricata only    2
dns_query          NO                     YES                    n/a*    Suricata only    2
tls_sni            NO                     YES                    n/a*    Suricata only    2
tls_cert_issuer    NO                     YES                    n/a*    Suricata only    2
tls_cert_subject   NO                     YES                    n/a*    Suricata only    2
file_data          YES                    YES                    n/a*    YES              2

* Sticky buffer
For matching multiple headers in HTTP2 traffic a rule using the new functionality would look like:
alert http2 any any -> any any (msg:"HTTP2 Multiple Header Buffer Example"; flow:established,to_server;
http.request_header; content:"method|3a 20|GET"; http.request_header; content:"authority|3a 20|example.com";
classtype:misc-activity; sid:1; rev:1;)
With HTTP2 there are multiple headers seen in the same flow record. We now have a way to write a rule in a more
efficient way using the multiple buffer capability.
Note: Existing behavior when using sticky buffers still applies:
Example rule:
alert dns $HOME_NET any -> $EXTERNAL_NET any (msg:"DNS Query Sticky Buffer Classic Example Rule";
dns.query; content:"example"; content:".net"; classtype:misc-activity; sid:1; rev:1;)
The above rule will alert on a single DNS query containing "example.net" or "example.domain.net", since the rule's content matches are within a single dns.query buffer and all content match requirements of the rule are met.
Note: This is new behavior. In versions of Suricata prior to version 7 multiple statements of the same sticky buffer did
not make a second instance of the buffer. For example:
dns.query; content:"example"; dns.query; content:".com";
would be equivalent to:
dns.query; content:"example"; content:".com";
Using our example from above, the first query is for example.net which matches content:"example"; but does not match
content:".com";
The second query is for something.com which would match on the content:".com"; but not the content:"example";
So with the Suricata behavior prior to Suricata 7, the signature would not fire in this case, since no single query satisfies both content conditions.
Multiple buffer matching is currently enabled for use with the following keywords:
• dns.query
• file.data
• file.magic
• file.name
• http.request_header
• http.response_header
• http2.header_name
• ike.vendor
• krb5_cname
• krb5_sname
• mqtt.subscribe.topic
• mqtt.unsubscribe.topic
• quic.cyu.hash
• quic.cyu.string
• tls.certs
• tls.cert_subject
• tls.subjectaltname
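Under the multiple buffer behavior, the earlier DNS example can be made to match across two different queries by repeating the sticky buffer (the sid is illustrative):

```
alert dns $HOME_NET any -> $EXTERNAL_NET any (msg:"DNS Query Multi Buffer Example Rule"; dns.query; content:"example"; dns.query; content:".com"; classtype:misc-activity; sid:2; rev:1;)
```

Here each content match gets its own dns.query buffer instance, so traffic where one query matches "example" and another matches ".com" can now alert.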
8.49 Tag
The tag keyword allows tagging of the current and future packets.
Tagged packets can be logged via EVE and through conditional PCAP logging.
Tagging is limited to a scope: host or session (flow). When using host a direction can be specified: src or dst. Tagging
will then occur based on the src or dst IP address of the packet generating the alert.
Tagging is further controlled by count: packets, bytes or seconds. If the count is omitted, built-in defaults will be used:
• for session: 256 packets
• for host: 256 packets for the destination IP of the packet triggering the alert
The tag keyword can appear multiple times in a rule.
8.49.1 Syntax
tag:<scope>[,<count>, <metric>[,<direction>]];
Values for scope: session and host
Values for metric: packets, bytes, seconds
Values for direction: src and dst
Note: "direction" can only be specified if scope is "host" and both "count" and "metric" are also specified.
8.49.2 Examples
Keyword:
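A rule using the keyword could look like the following sketch (sid and content are illustrative, not from a real ruleset):

```
alert http any any -> any any (msg:"Tag session example"; http.host; content:"evil.example"; tag:session,5,packets; sid:1000003; rev:1;)
```

This would tag the next 5 packets of the matching flow; tag:host,100,packets,src would instead tag packets based on the source IP of the alerting packet.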
outputs:
- eve-log:
enabled: yes
filename: eve.json
types:
- alert:
tagged-packets: true
{
"timestamp": "2020-06-03T10:29:17.850417+0000",
"flow_id": 1576832511820424,
"event_type": "packet",
"src_ip": "192.168.0.27",
"src_port": 54634,
"dest_ip": "192.168.0.103",
"dest_port": 22,
"proto": "TCP",
"pkt_src": "wire/pcap",
"packet":
˓→"CAAn6mWJAPSNvfrHCABFAAAogkVAAIAG9rfAqAAbwKgAZ9VqABZvnJXH5Zf6aFAQEAljEwAAAAAAAAAA",
"packet_info": {
"linktype": 1
}
}
outputs:
- pcap-log:
enabled: yes
filename: log.pcap
limit: 1000mb
max-files: 2000
compression: none
mode: normal
use-stream-depth: no # If set to "yes", packets seen after reaching stream inspection depth are ignored; "no" logs all packets
# Use "all" to log all packets, "alerts" to log only alerted packets and flows, or "tag"
9 Rule Management
Note: suricata-update is bundled with Suricata version 4.1 and later. It can also be used with older versions, but it will have to be installed separately in that case.
sudo suricata-update
default-rule-path: /var/lib/suricata/rules
rule-files:
- suricata.rules
sudo suricata-update
Suricata User Guide, Release 8.0.0-dev
Each of the rulesets has a name with a 'vendor' prefix, followed by a set name. For example, OISF's traffic ID ruleset is called 'oisf/trafficid'.
To enable 'oisf/trafficid', enter:
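The enabling is done with suricata-update's source management commands, followed by a fresh rule update, along these lines:

```
sudo suricata-update enable-source oisf/trafficid
sudo suricata-update
```

Use list-sources to see all available ruleset names before enabling one.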
Now restart Suricata again and the rules from the OISF TrafficID ruleset are loaded.
To see which rulesets are currently active, use "list-enabled-sources".
sudo suricata-update
and make sure your local.rules file is added to the list of rules:
default-rule-path: /usr/local/etc/suricata/rules
rule-files:
- suricata.rules
- /path/to/local.rules
If the rule failed to load, Suricata will display as much information as it has about why it deemed the rule unloadable. Pay special attention to the details: look for mistakes in special characters, spaces, capital letters, etc.
Next, check if your log-files are enabled in the Suricata configuration file suricata.yaml.
If you had to correct your rule and/or modify Suricata's YAML configuration file, you'll have to restart Suricata.
If you see your rule is successfully loaded, you can double check your rule by doing something that should trigger it.
tail -f /var/log/suricata/fast.log
alert http any any -> any any (msg:"Do not read gossip during work";
content:"Scarlett"; nocase; classtype:policy-violation; sid:1; rev:1;)
There are two methods available when using the Unix socket.
Blocking reload
suricatasc -c reload-rules
suricatasc -c ruleset-reload-nonblocking
It is also possible to get information about the last reload via dedicated commands. See Commands in standard running
mode for more information.
suricatasc -c ruleset-profile-start
To stop profiling
suricatasc -c ruleset-profile-stop
To dump profiling
suricatasc -c ruleset-profile
suricatasc -c ruleset-profile-start
sleep 30
suricatasc -c ruleset-profile-stop
suricatasc -c ruleset-profile
On busy systems, performance data for a subset of packets can be captured by using the sampling capability, via the sample-rate variable in the profiling section of the suricata.yaml file.
10 Making Sense out of Alerts
When an alert happens it's important to figure out what it means. Is it serious? Relevant? A false positive?
To find out more about the rule that fired, it's always a good idea to look at the actual rule.
The first thing to look at in a rule is the description that follows the msg keyword. Let's consider an example:
The "ET" indicates the rule came from the Emerging Threats (Proofpoint) project. "SCAN" indicates the purpose of
the rule is to match on some form of scanning. Following that, a more or less detailed description is given.
Most rules contain some pointers to more information in the form of the "reference" keyword.
Consider the following example rule:
sid:2010496; rev:2;)
In this rule, the reference keyword indicates 3 urls to visit for more information:
isc.sans.org/diary.html?storyid=7747
doc.emergingthreats.net/2010496
www.emergingthreats.net/cgi-bin/cvsweb.cgi/sigs/CURRENT_EVENTS/CURRENT_Adobe
Some rules contain a reference like "reference:cve,2009-3958;" which should allow you to find info about the specific CVE using your favorite search engine.
It's not always straightforward and sometimes not all of that information is available publicly. Usually, asking about it on the signature support channel can be helpful.
In Rule Management with Suricata-Update more information on the rule sources and their documentation and support
methods can be found.
In many cases, looking at just the alert and the packet that triggered it won't be enough to be conclusive. When using
the default Eve settings a lot of metadata will be added to the alert.
For example, if a rule fired that indicates your web application is attacked, looking at the metadata might reveal that
the web application replied with 404 not found. This will usually mean the attack failed but not always.
Not every protocol leads to metadata generation, so when running an IDS engine like Suricata, it's often recommended
to combine it with full packet capture. Using tools like Evebox, Sguil or Snorby, the full TCP session or UDP flow can
be inspected.
Obviously there is a lot more to Incident Response, but this should get you started.
11 Performance
11.1 Runmodes
Suricata consists of several 'building blocks' called threads, thread-modules and queues. A thread is like a process that
runs on a computer. Suricata is multi-threaded, so multiple threads are active at once. A thread-module is a part of
a functionality: one module is, for example, for decoding a packet, another is the detect-module, and another one the output-module. A packet can be processed by more than one thread. The packet will then be passed on to the next
thread through a queue. Packets will be processed by one thread at a time, but there can be multiple packets being
processed at a time by the engine (see Max-pending-packets). A thread can have one or more thread-modules. If they
have more modules, they can only be active one at a time. The way threads, modules and queues are arranged together
is called the "Runmode".
For processing PCAP files, or in case of certain IPS setups (like NFQ), autofp is used. Here there are one or more
capture threads, that capture the packet and do the packet decoding, after which it is passed on to the flow worker
threads.
Finally, the single runmode is the same as the workers mode, however there is only a single packet processing thread.
This is mostly useful during development.
For more information about the command line options concerning the runmode, see Command Line Options.
concurrency issue in recognizing ftp-data flows due to processing them before the ftp flow got processed. In case of
such a flow, a variant of the hash is used.
11.2.2 RSS
Receive Side Scaling is a technique used by network cards to distribute incoming traffic over various queues on the
NIC. This is meant to improve performance but it is important to realize that it was designed for normal traffic, not for
the IDS packet capture scenario. RSS uses a hash algorithm to distribute the incoming traffic over the various queues.
This hash is normally not symmetrical. This means that when receiving both sides of a flow, each side may end up in
a different queue. Sadly, when deploying Suricata, this is the common scenario when using span ports or taps.
The problem here is that by having both sides of the traffic in different queues, the order of processing of packets
becomes unpredictable. Timing differences on the NIC, the driver, the kernel and in Suricata will lead to a high chance
of packets coming in at a different order than on the wire. This is specifically about a mismatch between the two traffic
directions. For example, Suricata tracks the TCP 3-way handshake. Due to this timing issue, the SYN/ACK may only
be received by Suricata long after the client to server side has already started sending data. Suricata would see this
traffic as invalid.
None of the supported capture methods like AF_PACKET, PF_RING or NETMAP can fix this problem for us. It would
require buffering and packet reordering which is expensive.
To see how many queues are configured:
$ ethtool -l ens2f1
Channel parameters for ens2f1:
Pre-set maximums:
RX: 0
TX: 0
Other: 1
Combined: 64
Current hardware settings:
RX: 0
TX: 0
Other: 1
Combined: 8
Some NICs allow you to set them into a symmetric mode. The Intel X(L)710 card can do this in theory, but the drivers aren't capable of enabling this yet (work is underway to try to address this). Another way to address this is by setting a special "Random Secret Key" that will make the RSS symmetrical. See http://www.ndsl.kaist.edu/~kyoungsoo/papers/TR-symRSS.pdf (PDF).
In most scenarios, however, the optimal solution is to reduce the number of RSS queues to 1:
Example:
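For the ens2f1 interface from the earlier output, the command would be along these lines (a sketch; check your driver documentation first, as some drivers handle channel changes badly):

```
ethtool -L ens2f1 combined 1
```

Re-run ethtool -l afterwards to confirm the "Combined" count in the current hardware settings is now 1.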
Some drivers do not support setting the number of queues through ethtool. In some cases there is a module load time
option. Read the driver docs for the specifics.
11.2.3 Offloading
Network cards, drivers and the kernel itself have various techniques to speed up packet handling. Generally these will
all have to be disabled.
LRO/GRO merge various smaller packets into big 'super packets'. These need to be disabled as they break the dsize keyword as well as TCP state tracking.
Checksum offloading can be left enabled on AF_PACKET and PF_RING, but needs to be disabled on PCAP, NETMAP
and others.
11.2.4 Recommendations
Read your driver's documentation! E.g. for i40e the ethtool change of RSS queues may lead to kernel panics if done wrong.
Generic: set RSS queues to 1 or make sure RSS hashing is symmetric. Disable NIC offloading.
AF_PACKET: 1 RSS queue and stay on kernel <=4.2 or make sure you have >=4.4.16, >=4.6.5 or >=4.7. Exception:
if RSS is symmetric cluster-type 'cluster_qm' can be used to bind Suricata to the RSS queues. Disable NIC offloading
except the rx/tx csum.
PF_RING: 1 RSS queue and use cluster-type 'cluster_flow'. Disable NIC offloading except the rx/tx csum.
NETMAP: 1 RSS queue. There is no flow based load balancing built-in, but the 'lb' tool can be helpful. Another option
is to use the 'autofp' runmode. Exception: if RSS is symmetric, load balancing is based on the RSS hash and multiple
RSS queues can be used. Disable all NIC offloading.
custom-values:
toclient-groups: 100
toserver-groups: 100
In general, increasing these values will improve performance, with only a minimal increase in memory usage. The default value for toclient-groups and toserver-groups with detect.profile: high is 75.
11.3.5 af-packet
If using af-packet (the default on Linux), it is recommended that af-packet v3 is used for IDS/NSM deployments. For IPS, af-packet v2 is recommended. To make sure af-packet v3 is used, it can specifically be enforced in the af-packet config section of suricata.yaml like so:
af-packet:
- interface: eth0
....
....
....
use-mmap: yes
tpacket-v3: yes
11.3.6 ring-size
Ring-size is another af-packet variable that can be considered for tuning and performance benefits. It sets the buffer size, in packets, for each thread. So if the setting is ring-size: 100000 like below:
af-packet:
- interface: eth0
threads: 5
ring-size: 100000
it means there will be 100,000 packets allowed in each buffer of the 5 threads. If any of the buffers gets filled (for example because packet processing cannot keep up), the packet drop counters will increase in the stats logs.
The memory used for those is set up and dedicated at start and is calculated as follows:
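The exact formula is not reproduced here, but to a first approximation the dedicated memory scales as ring-size × threads × per-packet allocation. A rough back-of-the-envelope sketch (the ~1580-byte per-packet figure is an assumption, roughly default packet size plus overhead, not an exact Suricata internal value):

```python
# Rough estimate of af-packet ring buffer memory (assumed model, not exact).
ring_size = 100_000        # packets per thread (from the example above)
threads = 5                # af-packet threads (from the example above)
per_packet_bytes = 1580    # assumption: ~default-packet-size plus overhead

total_packets = ring_size * threads
total_bytes = total_packets * per_packet_bytes

print(total_packets)              # 500000 packets buffered in total
print(total_bytes / (1024 ** 2))  # roughly 750 MiB under this assumption
```

The point of the exercise: large ring sizes combined with many threads reserve substantial memory up front, so size them against the RAM actually available.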
11.3.7 stream.bypass
Another option that can be used to improve performance is stream.bypass. In the example below:
stream:
memcap: 64mb
checksum-validation: yes # reject wrong csums
inline: auto # auto will use inline mode in IPS mode, yes or no set it␣
˓→statically
bypass: yes
reassembly:
memcap: 256mb
depth: 1mb # reassemble 1mb into a stream
toserver-chunk-size: 2560
toclient-chunk-size: 2560
randomize-chunk-size: yes
Inspection will be skipped when stream.reassembly.depth of 1mb is reached for a particular flow.
11.4 Hyperscan
11.4.1 Introduction
"Hyperscan is a high performance regular expression matching library (...)" (https://www.intel.com/content/www/us/
en/developer/articles/technical/introduction-to-hyperscan.html)
In Suricata it can be used to perform multi pattern matching (mpm) or single pattern matching (spm).
Support for hyperscan in Suricata was initially implemented by Justin Viiret and Jim Xu from Intel via https://github.
com/OISF/suricata/pull/1965.
Hyperscan is only available for Intel x86-based processor architectures at this time. For ARM processors, vectorscan is a drop-in replacement for hyperscan, https://github.com/VectorCamp/vectorscan.
• ragel
• sqlite development libraries
Note: git is an additional dependency if cloning the hyperscan GitHub repository. Otherwise downloading the hyperscan zip from the GitHub repository will work too.
The steps to build and install hyperscan are:
To use hyperscan support, edit the suricata.yaml. Change the mpm-algo and spm-algo values to 'hs'.
Alternatively, use this command-line option: --set mpm-algo=hs --set spm-algo=hs
Note: The default suricata.yaml configuration settings for mpm-algo and spm-algo are "auto". Suricata will use hyper-
scan if it is present on the system in case of the "auto" setting.
If the current Suricata installation does not have Hyperscan support, refer to Installation.
wget https://mirrors.edge.kernel.org/pub/software/network/ethtool/ethtool-5.2.tar.xz
tar -xf ethtool-5.2.tar.xz
cd ethtool-5.2
./configure && make clean && make && make install
/usr/local/sbin/ethtool --version
When doing high performance optimisation make sure irqbalance is off and not running:
Depending on the NIC's available queues (for example Intel's x710/i40 has 64 available per port/interface) the worker
threads can be set up accordingly. Usually the available queues can be seen by running:
/usr/local/sbin/ethtool -l eth1
Some NICs - generally lower-end 1Gbps ones - do not support symmetric hashing; see Packet Capture. On those systems, due to considerations for out-of-order packets, the following setup with af-packet is suggested (the example below uses eth1):
then set up af-packet with the desired number of worker threads: threads: auto (auto will by default use the number of CPUs available) and cluster-type: cluster_flow (also the default setting)
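Put together in suricata.yaml, that suggestion could look like the following sketch (the cluster-id value is arbitrary; other settings shown are common companions, not mandatory):

```yaml
af-packet:
  - interface: eth1
    threads: auto              # one worker per available CPU by default
    cluster-id: 99
    cluster-type: cluster_flow # flow-based load balancing in software
    defrag: no
    use-mmap: yes
    tpacket-v3: yes
```

With cluster_flow, Suricata itself keeps both directions of a flow on the same thread, which is why this works even when the NIC's RSS hash is not symmetric.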
For higher-end systems/NICs, a better and more performant solution could be utilizing the NIC itself a bit more. The x710/i40 and similar Intel NICs, or the Mellanox MT27800 Family [ConnectX-5] for example, can easily be set up to do a bigger chunk of the work using more RSS queues and symmetric hashing, allowing for increased performance on the Suricata side by using af-packet with cluster-type: cluster_qm mode. In that mode with af-packet, all packets linked by the network card to an RSS queue are sent to the same socket. Below is an example of a suggested config set up, based on a 16 core, one CPU/NUMA node socket system using an x710:
The commands above can be reviewed in detail in the ethtool help or man pages. In brief, the sequence makes sure the NIC is reset, the number of RSS queues is set to 16, load balancing is enabled for the NIC, a low-entropy Toeplitz key is inserted to allow for symmetric hashing, receive offloading is disabled, adaptive control is disabled for the lowest possible latency and, last but not least, the ring rx descriptor size is set to 1024. Make sure the RSS hash function is Toeplitz:
In some cases:
might be enough, or even better, depending on the type of traffic. However, not all NICs allow it. The sd option specifies that the NIC's multi-queue hashing algorithm (for the particular proto) should use src IP and dst IP only. The sdfn option allows the tuple src IP, dst IP, src port, dst port to be used for the hashing algorithm. In the af-packet section of suricata.yaml:
af-packet:
- interface: eth1
threads: 16
cluster-id: 99
cluster-type: cluster_qm
...
...
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 72
On-line CPU(s) list: 0-71
Thread(s) per core: 2
Core(s) per socket: 18
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 1199.724
CPU max MHz: 3600.0000
CPU min MHz: 1200.0000
BogoMIPS: 4589.92
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 46080K
NUMA node0 CPU(s): 0-17,36-53
NUMA node1 CPU(s): 18-35,54-71
It is recommended that 36 worker threads are used and the NIC set up could be as follows:
In the example above the set_irq_affinity script is used from the NIC driver's sources. In the cpu affinity section
of suricata.yaml config:
- interface: eth1
# Number of receive threads. "auto" uses the number of cores
threads: 18
cluster-id: 99
cluster-type: cluster_qm
defrag: no
use-mmap: yes
mmap-locked: yes
tpacket-v3: yes
ring-size: 100000
block-size: 1048576
- interface: eth1
# Number of receive threads. "auto" uses the number of cores
That way 36 worker threads can be mapped (18 per af-packet interface slot) in total to the CPU's NUMA node 1 range - 18-35,54-71. That part is done via the worker-cpu-set affinity settings. ring-size and block-size in the config
section above are decent default values to start with. Those can be better adjusted if needed as explained in Tuning
Considerations.
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 23
Model: 1
Model name: AMD EPYC 7601 32-Core Processor
Stepping: 2
CPU MHz: 1200.000
CPU max MHz: 2200.0000
CPU min MHz: 1200.0000
BogoMIPS: 4391.55
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 64K
L2 cache: 512K
L3 cache: 8192K
NUMA node0 CPU(s): 0-7,64-71
NUMA node1 CPU(s): 8-15,72-79
NUMA node2 CPU(s): 16-23,80-87
NUMA node3 CPU(s): 24-31,88-95
NUMA node4 CPU(s): 32-39,96-103
NUMA node5 CPU(s): 40-47,104-111
The ethtool, show_irq_affinity.sh and set_irq_affinity_cpulist.sh tools are provided from the official
driver sources. Set up the NIC, including offloading and load balancing:
In the example above (1-7,64-71 for the irq affinity) CPU 0 is skipped as it is usually used by default on Linux systems
by many applications/tools. Let the NIC balance as much as possible:
- interface: eth1
# Number of receive threads. "auto" uses the number of cores
threads: 48 # 48 worker threads on cpus "8-55" above
cluster-id: 99
cluster-type: cluster_flow
defrag: no
use-mmap: yes
mmap-locked: yes
tpacket-v3: yes
ring-size: 100000
block-size: 1048576
In the example above there are 15 RSS queues pinned to cores 1-7,64-71 on NUMA node 0 and 40 worker threads
using other CPUs on different NUMA nodes. The reason why CPU 0 is skipped in this set up is as in Linux systems
it is very common for CPU 0 to be used by default by many tools/services. The NIC itself in this config is positioned
on NUMA 0 so starting with 15 RSS queues on that NUMA node and keeping those off for other tools in the system
could offer the best advantage.
Note: Performance and optimization of the whole system can be affected by regular NIC driver and package/kernel upgrades, so it should be monitored regularly and tested in QA/test environments first. As a general suggestion, it is always recommended to run the latest stable firmware and drivers as instructed and provided by the particular NIC vendor.
Other considerations
Another advanced option to consider is the isolcpus kernel boot parameter, a way of isolating CPU cores from general system processes. That ensures total dedication of those CPUs/ranges to the Suricata process only.
stream.wrong_thread / tcp.pkt_on_wrong_thread are counters available in stats.log or eve.json as event_type: stats that indicate issues with the load balancing. The cause could also be related to the traffic or to NIC settings. If these counter values are very high or increasing heavily, it is recommended to experiment with a different load balancing method, either via the NIC or, for example, using XDP/eBPF. There is an open issue, https://redmine.openinfosecfoundation.org/issues/2725, that serves as a placeholder for feedback and findings.
11.6 Statistics
The stats.log produces statistics records on a fixed interval, by default every 8 seconds.
Usually, this is not the complete story though. These are kernel drop stats, but the NIC may also have dropped packets.
Use ethtool to get to those:
# ethtool -S em2
NIC statistics:
rx_packets: 35430208463
tx_packets: 216072
rx_bytes: 32454370137414
tx_bytes: 53624450
rx_broadcast: 17424355
tx_broadcast: 133508
rx_multicast: 5332175
tx_multicast: 82564
rx_errors: 47
tx_errors: 0
tx_dropped: 0
Ideally, this number is 0. It is not only affected by packet loss, though; bad checksums and the stream engine running out of memory also contribute.
Capture filters are specified on the command-line after all other options:
Capture filters can be set per interface in the pcap, af-packet, netmap and pf_ring sections. A capture filter can also be put in a file:
Using a capture filter limits what traffic Suricata processes. So the traffic not seen by Suricata will not be inspected,
logged or otherwise recorded.
pass ip 1.2.3.4 any <> any any (msg:"pass all traffic from/to 1.2.3.4"; sid:1;)
A big difference with capture filters is that logs such as Eve or http.log are still generated for this traffic.
11.7.3 suppress
Suppress rules can be used to make sure no alerts are generated for a host. This is not efficient however, as the suppression is only considered post-matching. In other words, Suricata first inspects a rule, and only then will it consider per-host suppressions.
Example:
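A suppress entry is configured in threshold.config and could look like the following (the gen_id, sig_id and IP address are illustrative):

```
suppress gen_id 1, sig_id 2002087, track by_src, ip 10.1.1.1
```

This stops alerts for signature 2002087 when the source IP is 10.1.1.1, while the rule itself is still evaluated against the traffic.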
cd suricata/suricata
git pull
And follow the described next steps. To enable packet profiling, make sure you enter the following during the config-
uring stage:
./configure --enable-profiling
Find a folder in which you have pcaps. If you do not have pcaps yet, you can get these with Wireshark. See Sniffing
Packets with Wireshark.
Go to the directory of your pcaps. For example:
cd ~/Desktop
With the ls command you can see the content of the folder. Choose a folder and a pcap file
for example:
cd ~/Desktop/2011-05-05
for example:
--------------------------------------------------------------------------
Date: 9/5/2013 -- 14:59:58
--------------------------------------------------------------------------
Num      Rule         Gid      Rev      Ticks        %      Checks   Matches  Max Ticks    Avg Ticks   Avg Match   Avg No Match
-------- ------------ -------- -------- ------------ ------ -------- -------- ------------ ----------- ----------- ------------
11.10 Tcmalloc
'tcmalloc' is a library Google created as part of the google-perftools suite for improving memory handling in threaded programs. It's very simple to use and works fine with Suricata. It leads to minor speed-ups and also reduces memory usage quite a bit.
11.10.1 Installation
On Ubuntu, install the libtcmalloc-minimal4 package:
11.10.2 Usage
Use tcmalloc by preloading it:
Ubuntu:
Fedora:
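The preload invocations are typically of this shape (the library paths and interface name are assumptions and vary per distribution, architecture and package version):

```
# Ubuntu (path is an assumption; check with: dpkg -L libtcmalloc-minimal4)
LD_PRELOAD="/usr/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4" suricata -c suricata.yaml -i eth0

# Fedora (install gperftools-libs; path is an assumption)
LD_PRELOAD="/usr/lib64/libtcmalloc_minimal.so.4" suricata -c suricata.yaml -i eth0
```

LD_PRELOAD only affects the one process being launched, so no system-wide change is needed.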
If all cores are at peak load the system might be too slow for the traffic load or it might be misconfigured. Also keep an
eye on memory usage, if the actual memory usage is too high and the system needs to swap it will result in very poor
performance.
The load will give you a first indication where to start with the debugging at specific parts we describe in more detail
in the second part.
11.11.2 Logfiles
The next step would be to check all the log files, with a focus on stats.log and suricata.log, for any obvious issues. The most obvious indicator is the capture.kernel_drops value, which ideally would not even show up but should at least be below 1% of the capture.kernel_packets value, as high drop rates could lead to a reduced amount of events and alerts.
If memcap is seen in the stats the memcap values in the configuration could be increased. This can result to higher
memory usage and should be taken into account when the settings are changed.
Don't forget to check any system logs as well; even a dmesg run can show potential issues.
If you see specific function calls at the top in red it's a hint that those are the bottlenecks. For example, if you see IPOnlyMatchPacket it can be either a result of high drop rates or incomplete flows, which result in decreased performance.
To look into the performance issues on a specific thread you can pass -t TID to perf top. In other cases you can see
functions that give you a hint that a specific protocol parser is used a lot and can either try to debug a performance bug
or try to filter related traffic.
In general, try to play around with the different configuration options that Suricata provides, with a focus on the options described in High Performance Configuration.
11.11.4 Traffic
In most cases where the hardware is fast enough to handle the traffic but the drop rate is still high it's related to specific
traffic issues.
Basics
Some of the basic checks are:
• Check if the traffic is bidirectional; if it's mostly unidirectional you're missing relevant parts of the flow (see the tshark example at the bottom). Another indicator could be a big discrepancy between the SYN and SYN-ACK as well as RST counters in the Suricata stats.
• Check for encapsulated traffic; while GRE, MPLS etc. are supported, they could also lead to performance issues, especially if there are several layers of encapsulation.
• Use tools like iftop to spot elephant flows. Flows that have a rate of over 1 Gbit/s for a long time can result in one CPU core peaking at 100% all the time and an increasing drop rate, while it might not make sense to dig deep into this traffic.
• Another approach to narrow down issues is the usage of a bpf filter. For example, filter all HTTPS traffic with not port 443 to exclude traffic that might be problematic, or look only at one specific port with port 25 if you expect issues with a specific protocol. See Ignoring Traffic for more details.
• If VLAN is used it might help to disable vlan.use-for-tracking in scenarios where only one direction of the flow has the VLAN tag.
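As a sketch, such a bpf filter can be passed on the Suricata command line (interface name and paths are illustrative):

```shell
# Exclude HTTPS traffic from inspection with the filter from the text above
suricata -c /etc/suricata/suricata.yaml -i eth0 'not port 443'

# Or inspect only SMTP traffic while debugging a suspected protocol issue
suricata -c /etc/suricata/suricata.yaml -i eth0 'port 25'
```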
Advanced
There are several advanced steps and corner cases when it comes to a deep dive into the traffic.
If VLAN QinQ (IEEE 802.1ad) is used be very cautious if you use cluster_qm in combination with Intel drivers and
AF_PACKET runmode. While the RFC expects ethertype 0x8100 and 0x88A8 in this case (see https://en.wikipedia.
org/wiki/IEEE_802.1ad) most implementations only add 0x8100 on each layer. If the first seen layer has the same
VLAN tag but the inner one has different VLAN tags it will still end up in the same queue in cluster_qm mode. This
was observed with the i40e driver up to 2.8.20 and the firmware version up to 7.00, feel free to report if newer versions
have fixed this (see https://suricata.io/support/).
If you want to use tshark to get an overview of the traffic direction use this command:
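A command along these lines should work (the interface name is a placeholder; the flags use tshark's conversation statistics and autostop features):

```shell
# Capture for 10 seconds and print per-conversation TCP packet/byte counts
# in both directions; a 0 in one direction indicates unidirectional traffic.
sudo tshark -i eth0 -q -z conv,tcp -a duration:10
```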
The output will show you all flows seen within 10 seconds; if you see 0 for one direction you have unidirectional traffic, and thus don't see the ACK packets, for example. Since Suricata tries to work on flows, this has a rather big impact on visibility. Focus on fixing the unidirectional traffic; if that's not possible at all, you can enable async-oneside in the stream configuration.
Check for other unusual or complex protocols that aren't supported very well. You can try to filter those to see if it has any impact on the performance. In this example we filter Cisco Fabric Path (ethertype 0x8903) with the bpf filter not ether proto 0x8903, as it's assumed to be a performance issue (see https://redmine.openinfosecfoundation.org/issues/3637).
Elephant Flows
The so-called elephant flows or traffic spikes are quite difficult to deal with. In most cases these are big file transfers or backup traffic, and it's not feasible to decode the whole traffic. From a network security monitoring perspective it's often enough to log the metadata of such a flow and do packet inspection at its beginning, but not for the whole flow.
If you can spot specific flows as described above, try to filter those. The easiest solution would be a bpf filter, but that would still have a performance impact. Ideally you can filter such traffic even sooner, at the driver or NIC level (see eBPF/XDP), or even before it reaches the system where Suricata is running. Some commercial packet brokers support such filtering, where it's called Flow Shunting or Flow Slicing.
11.11.5 Rules
The Ruleset plays an important role in the detection but also in the performance capability of Suricata. Thus it's
recommended to look into the impact of enabled rules as well.
If you run into performance issues and struggle to narrow them down, start by running Suricata without any rules enabled and use the tools explained in the first part again. Keep in mind that even without signatures enabled Suricata still does most of the decoding and traffic analysis, so a fair amount of load should still be seen. If the load is still very high, drops are seen, and the hardware should be capable of dealing with such traffic loads, you should dig deeper to see if there is any specific traffic issue (see above), or report the performance issue so it can be investigated (see https://suricata.io/join-our-community/).
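One way to sketch such a baseline run: the -S flag loads the given file as the exclusive rule file, so pointing it at an empty file disables all signatures (paths are illustrative):

```shell
# Baseline: run Suricata with an empty exclusive ruleset to measure
# decode/analysis load without any signature matching.
touch /tmp/empty.rules
suricata -c /etc/suricata/suricata.yaml -i eth0 -S /tmp/empty.rules
```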
Suricata also provides several traffic-related signatures in the rules folder that can be enabled for testing to spot specific traffic issues. Start with decoder-events.rules, stream-events.rules and app-layer-events.rules.
It can also be helpful to use Rule Profiling and/or Packet Profiling to find problematic rules or traffic patterns. This requires compiling Suricata with --enable-profiling, but keep in mind that this has an impact on performance and should only be used for troubleshooting.
TWELVE
CONFIGURATION
12.1 Suricata.yaml
Suricata uses the YAML format for configuration. The suricata.yaml file included in the source code is the example configuration of Suricata. This document will explain each option.
At the top of the YAML file you will find %YAML 1.1. Suricata reads the file and identifies the file as YAML.
12.1.1 Max-pending-packets
With the max-pending-packets setting you can set the number of packets you allow Suricata to process simultaneously. This can range from one packet to tens or hundreds of thousands of packets. It is a trade-off between higher performance with more memory (RAM) use, or lower performance with less memory use. A high number of packets being processed results in higher performance and more memory use; a low number of packets results in lower performance and less memory use. Choosing a low number of packets while having many CPUs/CPU cores can result in not making use of the whole computer capacity. (For instance: using one core while three others are waiting for packets to process.)
max-pending-packets: 1024
12.1.2 Runmodes
By default the runmode option is disabled. With the runmode setting you can set the runmode you would like to use. For all available runmodes, enter --list-runmodes on your command line. For more information, see Runmodes.
runmode: autofp
12.1.3 Default-packet-size
For the max-pending-packets option, Suricata has to keep packets in memory. With the default-packet-size option, you can set the size of the packets on your network. It is possible that bigger packets have to be processed sometimes. The engine can still process these bigger packets, but processing them will lower the performance.
default-packet-size: 1514
run-as:
  user: suri
  group: suri
pid-file: /var/run/suricata.pid
ò Note
This configuration file option only sets the PID file when running in daemon mode. To force creation of a PID file
when not running in daemon mode, use the --pidfile command line option.
Also, if running more than one Suricata process, each process will need to specify a different pid-file location.
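A sketch of forcing a PID file from the command line (paths are illustrative):

```shell
# Force PID file creation even when not running in daemon mode
suricata -c /etc/suricata/suricata.yaml -i eth0 --pidfile /var/run/suricata.pid
```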
12.1.6 Action-order
All signatures have different properties. One of those is the Action property, which determines what will happen when a signature matches. There are four types of Action. A summary of what will happen when a signature matches and contains one of these actions:
1) Pass
If a signature matches and contains pass, Suricata stops scanning the packet and skips to the end of all rules (only for
the current packet). If the signature matches on a TCP connection, the entire flow will be passed but details of the flow
will still be logged.
2) Drop
This only concerns the IPS/inline mode. If the program finds a signature that matches, containing drop, it stops immediately. The packet will not be sent any further. Drawback: the receiver does not receive a message of what is going on, resulting in a time-out (certainly with TCP). Suricata generates an alert for this packet.
3) Reject
This is an active rejection of the packet. Both receiver and sender receive a reject packet. There are two types of reject
packets that will be automatically selected. If the offending packet concerns TCP, it will be a Reset-packet. For all other
protocols it will be an ICMP-error packet. Suricata also generates an alert. When in Inline/IPS mode, the offending
packet will also be dropped like with the 'drop' action.
4) Alert
If a signature matches and contains alert, the packet will be treated like any other non-threatening packet, except for
this one an alert will be generated by Suricata. Only the system administrator can notice this alert.
Inline/IPS can block network traffic in two ways. One way is by drop and the other by reject.
Rules will be loaded in the order in which they appear in files, but they will be processed in a different order. Signatures have different priorities; the most important signatures will be scanned first. There is a possibility to change the order of priority. The default order is: pass, drop, reject, alert.
action-order:
- pass
- drop
- reject
- alert
This means a pass rule is considered before a drop rule, a drop rule before a reject rule and so on.
#Define maximum number of possible alerts that can be triggered for the same
# packet. Default is 15
packet-alert-max: 15
We recommend that you use the default value for this setting unless you are seeing a high number of discarded alerts (alert_queue_overflow); see the Discarded and Suppressed Alerts Stats section for more details.
Once the alert queue reaches its maximum size, the packet alert queue can overflow, so new alerts will only be appended if their rules have a higher priority id (this is the internal id assigned by the engine, not the signature id).
This may happen in two different situations:
• a higher priority rule is triggered after a lower priority one: the lower priority rule is replaced in the queue;
• a lower priority rule is triggered: the rule is just discarded.
ò Note
This behavior does not mean that triggered drop rules would have their action ignored, in IPS mode.
In this example from a stats.log, we read that 8 alerts were generated: 3 were kept in the packet queue while 4 were
discarded due to packets having reached max size for the alert queue, and 1 was suppressed due to coming from a
noalert rule.
# outputs.yaml
- fast:
    enabled: yes
    filename: fast.log
    append: yes
...

# suricata.yaml
...
outputs: !include outputs.yaml
...
The second scenario is where multiple sections are migrated to a different YAML file.
# host_1.yaml
max-pending-packets: 2048
outputs:
  - fast:
      enabled: yes
      filename: fast.log
      append: yes

# suricata.yaml
include: host_1.yaml
...
If the same section, say outputs, is later redefined after the include statement, it will overwrite the included settings. Therefore any include statement at the end of the document will overwrite the already configured sections.
default-log-dir: /var/log/suricata
This directory can be overridden with the -l command line parameter or by changing the directory directly in the YAML. To change it with the -l command line parameter, enter the following:
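A sketch of that invocation (paths are illustrative):

```shell
# Write all logs to a custom directory instead of default-log-dir
suricata -c /etc/suricata/suricata.yaml -i eth0 -l /home/user/suricata-logs/
```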
Stats
Engine statistics such as packet counters, memory use counters and others can be logged in several ways. A separate
text log 'stats.log' and an EVE record type 'stats' are enabled by default.
The stats have a global configuration and a per logger configuration. Here the global config is documented.
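The global stats section in suricata.yaml typically looks like this (the interval value is illustrative):

```yaml
stats:
  enabled: yes
  # The interval field (in seconds) controls how often the stats are updated
  interval: 8
```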
Outputs
There are several types of output. The general structure is:
outputs:
  - fast:
      enabled: yes
Enabling all of the logs will result in much lower performance and the use of more disk space, so enable only the outputs you need.
outputs:
  # Extensible Event Format (nicknamed EVE) event log in JSON format
  - eve-log:
      enabled: yes
      filetype: regular #regular|syslog|unix_dgram|unix_stream|redis
      filename: eve.json
      # Enable for multi-threaded eve.json output; output files are amended with
      # an identifier, e.g., eve.9.json
      #threaded: false
      #prefix: "@cee: " # prefix to prepend to each log entry
      # the following are valid when type: syslog above
      #identity: "suricata"
      #facility: local5
      #level: Info ## possible levels: Emergency, Alert, Critical,
      ##             Error, Warning, Notice, Info, Debug
      #ethernet: no # log ethernet header in events when available
      #redis:
      #  server: 127.0.0.1
      #  port: 6379
      #  async: true ## if redis replies are read asynchronously
      #  mode: list ## possible values: list|lpush (default), rpush, channel|publish, xadd|stream
      #             ## lpush and rpush are using a Redis list. "list" is an alias for lpush
      # include the name of the input pcap file in pcap file processing mode
      pcap-file: false
      # Community Flow ID
      # Adds a 'community-id' field to EVE records. These are meant to give
      # records a predictable flow ID that can be used to match records to
      # output of other tools such as Zeek (Bro).
      #
      # Takes a 'seed' that needs to be the same across sensors and tools
      # to make the id less predictable.
      types:
        - alert:
            # payload: yes # enable dumping payload in Base64
            # payload-buffer-size: 4kb # max size of payload buffer to output in eve-log
            # payload-printable: yes # enable dumping payload in printable (lossy) format
        - anomaly:
            # Anomaly log records describe unexpected conditions such
        - files:
            force-magic: no # force logging magic on all logged files
            # force logging of checksums, available hash functions are md5,
            # sha1 and sha256
            #force-hash: [md5]
        #- drop:
        #    alerts: yes # log alerts that caused drops
        #    flows: all  # start or all: 'start' logs only a single drop
        #                # per flow direction. All logs each dropped pkt.
        #    # Enable logging the final action taken on a packet by the engine
        #    # (will show more information in case of a drop caused by 'reject')
        #    # verdict: yes
        - smtp:
            #extended: yes # enable this for extended logging information
            # this includes: bcc, message-id, subject, x_mailer, user-agent
            # custom fields logging from the list:
            #  reply-to, bcc, message-id, subject, x-mailer, user-agent, received,
            #  x-originating-ip, in-reply-to, references, importance, priority,
            #  sensitivity, organization, content-md5, date
            #custom: [received, x-mailer, x-originating-ip, relays, reply-to, bcc]
            # output md5 of fields: body, subject
            # for the body you need to set app-layer.protocols.smtp.mime.body-md5
            # to yes
            #md5: [body, subject]
. Attention
The TLS handshake parameters can be logged in a line based log as well. By default, the logfile is tls.log in the suricata
log directory. See Custom TLS logging for details about the configuration and customization of the log format.
Furthermore there is an output module to store TLS certificate files to disk. This is similar to File-store (File Extraction),
but for TLS certificates.
Example:
. Attention
This log keeps track of all HTTP-traffic events. It contains the HTTP request, hostname, URI and the User-Agent. This information will be stored in http.log (the default name, in the suricata log directory). This logging can also be performed through the Eve-log capability.
Example of an HTTP-log line with non-extended logging:
GET [**] HTTP/1.1 [**] 301 => http://www.vg.no/ [**] 239 bytes [**] 192.168.1.6:64726 -> 195.88.54.16:80
  - pcap-log:
      enabled: yes
      filename: log.pcap
# Limit in MB.
limit: 32
In normal mode a pcap file "filename" is created in the default-log-dir or as specified by "dir". Normal mode is generally not as performant as multi mode.
In multi mode, multiple pcap files are created (one per thread), which performs better than normal mode.
In multi mode the filename takes a few special variables:
• %n representing the thread number
• %i representing the thread id
• %t representing the timestamp (secs or secs.usecs based on 'ts-format')
Example: filename: pcap.%n.%t
ò Note
It is possible to use directories but the directories are not created by Suricata. For example filename: pcaps/
%n/log.%s will log into the pre-existing pcaps directory and per thread sub directories.
ò Note
that the limit and max-files settings are enforced per thread. So the size limit when using 8 threads with 1000mb files and 2000 files per thread is about 16TiB.
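The arithmetic behind that figure can be sketched as follows (values from the note above; the result lands around 15 TiB, loosely quoted as 16):

```shell
# Worst-case pcap-log disk usage when limit and max-files apply per thread
threads=8
limit_mib=1000   # 'limit' setting, per file
max_files=2000   # 'max-files' setting, per thread

total_mib=$((threads * limit_mib * max_files))
echo "${total_mib} MiB total"                     # 16000000 MiB
echo "$((total_mib / 1024 / 1024)) TiB (approx)"  # about 15 TiB
```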
Stats
In stats you can set the options for stats.log. When enabling stats.log you can set the amount of time in seconds after
which you want the output-data to be written to the log file.
  - stats:
      enabled: yes        # By default, the stats-option is enabled
      filename: stats.log # The log-name. Combined with the default logging
The interval and several other options depend on the global stats section as described above.
Syslog
. Attention
The syslog output is deprecated in Suricata 8.0 and will be removed in Suricata 9.0. Please migrate to the eve
output which has the ability to send to syslog.
With this option it is possible to send all alert and event output to syslog.
  - file-store:
      # This configures version 2 of the file-store.
      version: 2
      enabled: no
detect:
  profile: medium
  custom-values:
    toclient-groups: 2
    toserver-groups: 25
  sgh-mpm-context: auto
  inspection-recursion-limit: 3000
  stream-tx-log-limit: 4
For all of these options, you can add or change a value. Most signatures can be adjusted to focus on one direction, meaning focusing exclusively on the server, or exclusively on the client.
If you take a look at example 4, the Detection-engine grouping tree, you see it has many branches. At the end of each
branch, there is actually a 'sig group head'. Within that sig group head there is a container which contains a list with
signatures that are significant for that specific group/that specific end of the branch. Also within the sig group head the
settings for Multi-Pattern-Matcher (MPM) can be found: the MPM-context.
As will be described again in the 'Pattern matching settings' part, there are several MPM algorithms to choose from. Because every sig group head has its own MPM context, some algorithms use a lot of memory. For that reason there is the option sgh-mpm-context to set whether the groups share one MPM context, or whether every group has its own.
For the option sgh-mpm-context you can choose from auto, full or single. The default setting is 'auto', meaning Suricata selects full or single based on the algorithm you use. 'Full' means that every group has its own MPM context, and 'single' that all groups share one MPM context. The algorithm "ac" uses a single MPM context if the sgh-mpm-context setting is 'auto'. The rest of the algorithms use full in that case.
The inspection-recursion-limit option is meant to prevent possible bugs in Suricata from causing big problems. Suricata often has to deal with complicated issues and could, due to a bug, end up in an 'endless loop', repeating its actions over and over again. With the inspection-recursion-limit option you can limit this.
The stream-tx-log-limit defines the maximum number of times a transaction will get logged for a stream-only rule
match. This is meant to avoid logging the same data an arbitrary number of times.
Example 4 Detection-engine grouping tree
Prefilter Engines
The concept of prefiltering is that there are far too many rules to inspect individually. The approach prefilter takes is
that from each rule one condition is added to prefilter, which is then checked in one step. The most common example
is MPM (also known as fast_pattern). This takes a single pattern per rule and adds it to the MPM. Only for those rules
that have at least one pattern match in the MPM stage, individual inspection is performed.
Next to MPM, other types of keywords support prefiltering. ICMP itype, icode, icmp_seq and icmp_id for example.
TCP window, IP TTL are other examples.
For a full list of keywords that support prefilter, see:
suricata --list-keywords=all
detect:
  prefilter:
    default: mpm
The prefilter engines for other non-MPM keywords can then be enabled in specific rules by using the 'prefilter' keyword. To enable all prefilter engines, set the default to auto:
detect:
  prefilter:
    default: auto
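An illustrative rule using the prefilter keyword on a non-MPM keyword such as ttl (a hypothetical signature, for sketch purposes only):

```
alert ip any any -> any any (ttl:123; prefilter; sid:1;)
```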
Thresholding Settings
Thresholding uses a central hash table for tracking thresholds of the types: by_src, by_dst, by_both.
detect:
  thresholds:
    hash-size: 16384
    memcap: 16mb
detect.thresholds.hash-size controls the number of hash rows in the hash table. detect.thresholds.memcap
controls how much memory can be used for the hash table and the data stored in it.
Suricata offers various implementations of different multi-pattern-matcher algorithms. These can be found below.
To set the multi-pattern-matcher algorithm:
mpm-algo: ac
After 'mpm-algo', you can enter one of the following algorithms: ac, hs and ac-ks.
On x86_64 hs (Hyperscan) should be used for best performance.
12.1.11 Threading
Suricata is multi-threaded. Suricata uses multiple CPUs/CPU cores so it can process a lot of network packets simulta-
neously. (In a single-core engine, the packets will be processed one at a time.)
There are four thread-modules: packet acquisition, decode and stream application layer, detection, and outputs.
# The packet acquisition module reads packets from the network.
# The decode module decodes the packets, and the stream application layer has three tasks:
First: it performs stream-tracking, meaning it makes sure all steps will be taken to make a correct network-connection.
Second: TCP traffic comes in as packets; the reassembly engine reconstructs the original stream.
Finally: the application layer will be inspected. HTTP and DCERPC will be analyzed.
# The detection threads will compare signatures. There can be several detection threads so they can operate simultaneously.
# In outputs all alerts and events will be processed.
Example 6 Threading
Most computers have multiple CPUs/CPU cores. By default the operating system determines which core works on which thread. When a core is already occupied, another one will be designated to work on the thread. So which core works on which thread can differ from time to time.
There is an option within threading:
set-cpu-affinity: no
With this option you can make Suricata set fixed cores for every thread. In that case 1, 2 and 4 are at core 0 (zero). Each core has its own detect thread. The detect thread running on core 0 has a lower priority than the other threads running on core 0. If these other threads are too occupied, the detect thread on core 0 does not get many packets to process; the detect threads running on other cores will process more packets. This is only the case after setting the option to 'yes'.
Example 7 Balancing workload
detect-thread-ratio: 1.5
The detect-thread-ratio determines the number of detect threads. By default it is 1.5 times the number of CPUs/CPU cores present in your computer. This results in more detection threads than CPUs/CPU cores, meaning you are oversubscribing the number of cores. This can be convenient when a detection thread has to wait: a remaining detection thread can become active instead.
You can alter the per-thread stack-size if the default provided by your build system is too small; we suggest a value of 8MB in that case.
stack-size: 8MB
In the option 'cpu affinity' you can set which CPUs/cores work on which thread. In this option there are several sets of threads: the management-, receive-, worker- and verdict-set. These are fixed names and can not be changed. For each set there are several options: cpu, mode, and prio. In the option 'cpu' you can set the numbers of the CPUs/cores which will run the threads from that set. You can set this option to 'all', use a range (0-3) or a comma separated list (0,1). The option 'mode' can be set to 'balanced' or 'exclusive'. When set to 'balanced', the individual threads can be processed by all cores set in the option 'cpu'. If the option 'mode' is set to 'exclusive', there will be fixed cores for each thread. As mentioned before, threads can have different priorities. In the option 'prio' you can set a priority for each thread. This priority can be low, medium, high or you can set the priority to 'default'. If you do not set a priority for a CPU, then the settings in 'default' apply. By default Suricata creates one 'detect' (worker) thread per available CPU/CPU core.
ò Note
The 'prio' settings could overwrite each other, make sure to not include the same CPU core in different 'prio' settings.
cpu-affinity:
  - management-cpu-set:
      cpu: [ 0 ]  # include only these cpus in affinity settings
  - receive-cpu-set:
      cpu: [ 0 ]  # include only these cpus in affinity settings
  - worker-cpu-set:
      cpu: [ "all" ]
      mode: "exclusive"
      # Use explicitly 3 threads and don't compute number by using
      # detect-thread-ratio variable:
      # threads: 3
      prio:
        low: [ 0 ]
        medium: [ "1-2" ]
        high: [ 3 ]
        default: "medium"
  - verdict-cpu-set:
      cpu: [ 0 ]
      prio:
        default: "high"
Runmode Workers:
IPS mode
Runmode AutoFp:
Runmode Workers:
12.1.12 IP Defrag
Occasionally network packets appear fragmented; on some networks this occurs more often than on others. Fragmented packets consist of many parts. Before Suricata can inspect these kinds of packets accurately, they have to be reconstructed. This is done by a component of Suricata, the defragment engine. After a fragmented packet is reconstructed by the defragment engine, the engine sends the reassembled packet on to the rest of Suricata.
At the moment Suricata receives a fragment of a packet, it keeps in memory that other fragments of that packet should appear soon to complete the packet. However, there is a possibility that one of the fragments does not appear. To prevent Suricata from waiting for that packet indefinitely (thereby using memory), there is a timespan after which Suricata discards the fragments (timeout). This occurs by default after 60 seconds.
In IPS mode, it is possible to tell the engine what to do in case the memcap for the defrag engine is reached: "drop-
packet", "pass-packet", or "ignore" (default behavior).
defrag:
  memcap: 32mb
  memcap-policy: ignore # in IPS mode, what to do if memcap is reached
  hash-size: 65536
  trackers: 65535       # number of defragmented flows to follow
  max-frags: 65535      # number of fragments to keep (higher than trackers)
  prealloc: yes
  timeout: 60
Example 10 Tuple
Keeping track of all these flows uses memory; the more flows, the more memory it costs.
To keep control over memory usage, there are several options:
The option memcap for setting the maximum amount of bytes the flow-engine will use, hash-size for setting the size
of the hash-table and prealloc for the following:
For packets not yet belonging to a flow, Suricata creates a new flow. This is a relatively expensive action. The risk coming with it is that attackers can attack the engine at this point: when they make sure a computer gets a lot of packets with different tuples, the engine has to make a lot of new flows. This way, an attacker could flood the system. To mitigate the engine from being overloaded, the prealloc option instructs Suricata to keep a number of flows ready in memory, making Suricata less vulnerable to this kind of attack.
The flow-engine has a management thread that operates independently from the packet processing, called the flow-manager. This thread ensures that, wherever possible and within the memcap, there will be 10000 flows prepared.
In IPS mode, a memcap-policy exception policy can be set, telling Suricata what to do in case memcap is hit: 'drop-
packet', 'pass-packet', 'reject', or 'ignore'.
flow:
  memcap: 33554432      # The maximum amount of bytes the flow-engine will make use of.
  memcap-policy: bypass # How to handle the flow if memcap is reached (IPS mode)
  hash-size: 65536      # Flows will be organized in a hash-table. With this option you can set the
If the memcap is still reached despite prealloc, the flow-engine goes into emergency mode. In this mode the engine uses shorter time-outs, letting flows expire in a more aggressive manner so there will be more space for new flows.
emergency-recovery defines the percentage of prealloc'd flows that the engine needs to prune before clearing the emergency mode. The default emergency-recovery value is 30, meaning the flow-engine returns to normal when 30 percent of the 10000 prealloc'd flows have been completed.
If during the emergency-mode the aggressive time-outs do not have the desired result, this option is the
final resort. It ends some flows even if they have not reached their time-outs yet.
emergency-recovery: 30 #Percentage of 10000 prealloc'd flows.
Flow Time-Outs
The amount of time Suricata keeps a flow in memory is determined by the Flow time-out.
A flow can be in different states. Suricata distinguishes three flow states for TCP and two for UDP. For TCP these are: new, established and closed; for UDP only new and established. For each of these states Suricata can employ different timeouts.
The state new in a TCP flow means the period during the three-way handshake. The state established is the state when the three-way handshake is completed. The state closed in a TCP flow: there are several ways to end a flow, by means of a Reset or the four-way FIN handshake.
New in a UDP flow: the state in which packets are sent from only one direction.
Established in a UDP flow: packets are sent from both directions.
The example configuration has settings for each protocol: TCP, UDP, ICMP and default (all other protocols).
flow-timeouts:
  default:
    new: 30           # Time-out in seconds after the last activity in this flow in a New state.
    emergency-new: 10 # Time-out in seconds after the last activity in this flow in a New state
Stream-engine
The stream-engine keeps track of the TCP connections. The engine consists of two parts: the stream-tracking engine and the reassembly engine.
The stream-tracking engine monitors the state of a connection. The reassembly engine reconstructs the flow as it used to be, so it will be recognized by Suricata.
The stream-engine has two memcaps that can be set. One for the stream-tracking-engine and one for the reassembly-
engine. For both cases, in IPS mode, an exception policy (memcap-policy) can be set, telling Suricata what to do in
case memcap is hit: 'drop-flow', 'drop-packet', 'pass-flow', 'pass-packet', 'bypass', 'reject', or 'ignore'.
The stream-tracking engine keeps information about the flow in memory: information about the state, TCP sequence numbers and the TCP window. For keeping this information, it can use the capacity the memcap allows.
TCP packets have a so-called checksum, an internal code which makes it possible to see whether a packet has arrived intact. The stream-engine will not process packets with an invalid checksum. This option can be turned off by entering 'no' instead of 'yes'.
stream:
  memcap: 64mb             # Max memory usage (in bytes) for TCP session tracking
  memcap-policy: ignore    # In IPS mode, call memcap policy if memcap is reached
  checksum-validation: yes # Validate packet checksum, reject packets with invalid checksums.
To prevent Suricata from being overloaded by rapid session creation, the option prealloc-sessions instructs Suricata to
keep a number of sessions ready in memory.
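A minimal sketch of such a setting (the value shown is illustrative, not a recommendation):

```yaml
stream:
  prealloc-sessions: 2048  # number of sessions to keep preallocated per stream thread
```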
A TCP session starts with the three-way handshake, after which data can be sent and received. A session can last a long
time. It can happen that Suricata is started after a number of TCP sessions were already established, so that Suricata
misses the original setup of those sessions, which carries a lot of information. If you want Suricata to inspect such
streams from that point on anyway, you can do so by setting the option 'midstream' to 'true'. The default setting is 'false'.
In IPS mode, it is possible to define a 'midstream-policy', indicating whether Suricata should drop-flow, drop-packet,
pass-flow, pass-packet, reject, or bypass a midstream flow. The default is ignore. Normally Suricata is able to see
all packets of a connection. Some networks make this more complicated though: part of the traffic follows a different
route than the rest, in other words the traffic is asynchronous. To make sure Suricata inspects the part it does see,
instead of getting confused, the option 'async-oneside' exists. By default the option is set to 'false'.
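The options above could be combined in the stream section roughly as follows (a sketch; adjust the policy to your deployment):

```yaml
stream:
  midstream: true           # also track sessions picked up mid-connection
  midstream-policy: ignore  # IPS-mode exception policy for midstream flows
  async-oneside: false      # set to true for asymmetrically routed traffic
```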
Suricata inspects content in the normal/IDS mode in chunks. In the inline/IPS mode it does so in a sliding-window
fashion. When Suricata is set to inline mode, it has to inspect packets immediately before sending them on to the
receiver. This way Suricata is able to drop a packet directly if needed. It is important for Suricata to note which
operating system it is dealing with, because operating systems differ in the way they process anomalies in streams.
See Host-os-policy.
The drop-invalid option can be set to no to avoid blocking packets that are considered invalid by the streaming engine.
This can be useful to cover some unusual cases seen in some layer 2 IPS setups.
The bypass option activates 'bypass' for a flow/session when either side of the session reaches its depth.
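Sketched as stream settings (the values are illustrative):

```yaml
stream:
  drop-invalid: yes  # in IPS mode, drop packets the stream engine considers invalid
  bypass: no         # bypass the rest of a session once reassembly depth is reached
```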
The reassembly-engine has to keep data segments in memory in order to be able to reconstruct a stream. To avoid
resource starvation a memcap is used to limit the memory used. In IPS mode, an exception policy (memcap-policy) can
be set, telling Suricata what to do in case memcap is hit: 'drop-flow', 'drop-packet', 'pass-flow', 'pass-packet', 'bypass',
'reject', or 'ignore'.
Reassembling a stream is an expensive operation. With the option depth you can control how far into a stream
reassembly is done. By default this is 1MB. This setting can be overridden per stream by the protocol parsers that do
file extraction.
Inspection of reassembled data is done in chunks. The size of these chunks is set with toserver-chunk-size and
toclient-chunk-size. To avoid making the borders predictable, the sizes can be varied by adding in a random
factor.
reassembly:
  memcap: 256mb              # Memory reserved for stream data reconstruction (in bytes)
  memcap-policy: ignore      # What to do when memcap for reassembly is hit
  depth: 1mb                 # The depth of the reassembling.
  toserver-chunk-size: 2560  # inspect raw stream in chunks of at least this size
  toclient-chunk-size: 2560  # inspect raw stream in chunks of at least this size
  randomize-chunk-size: yes
  #randomize-chunk-range: 10
'Raw' reassembly is done for inspection by the simple content and pcre keywords and other payload inspection not
done on specific protocol buffers like http_uri. This type of reassembly can be turned off:
reassembly:
  raw: no
Incoming segments are stored in a list in the stream. To avoid constant memory allocations a per-thread pool is used.
reassembly:
  segment-prealloc: 2048  # pre-alloc 2k segments per thread
Resending different data on the same sequence number is a way to confuse network inspection.
reassembly:
  check-overlap-different-data: true
Settings
The configuration allows specifying the following settings: hash-size, prealloc and memcap.
host:
  hash-size: 4096
  prealloc: 1000
  memcap: 32mb
Asn1_max_frames
ASN.1 (Abstract Syntax Notation One) is a standard notation to structure and describe data.
An ASN.1 payload can consist of several frames. To protect itself, Suricata will inspect a maximum of 256 of them.
You can set this amount differently if wanted.
Application layer protocols such as X.400 electronic mail, X.500 and LDAP directory services, H.323 (VoIP), BACnet
and SNMP, use ASN.1 to describe the protocol data units (PDUs) they exchange. It is also extensively used in the
Access and Non-Access Strata of UMTS.
Limit for the maximum number of asn1 frames to decode (default 256):
asn1-max-frames: 256
FTP
The FTP application layer parser is enabled by default and uses dynamic protocol detection.
By default, FTP control channel commands and responses are limited to 4096 bytes, but this value can be changed.
When a command request or response exceeds the line length limit, the stored data will be truncated, however the parser
will continue to watch for the end of line and acquire the next command. Commands that are truncated will be noted
in the eve log file with the fields command_truncated or reply_truncated. Please note that this affects the control
messages only, not FTP data (file transfers).
ftp:
  enabled: yes
  #memcap: 64mb

  # Maximum line length for control messages before they will be truncated.
  #max-line-length: 4kb
• IIS_7_0
• IIS_7_5
• Apache
• Apache_2_2
You can assign names to each block of settings, which in this case are apache and iis7. Under these names you can
set IP addresses, network addresses, the personality and a set of features.
The version-specific personalities know exactly how web servers behave, and emulate that. The IDS personality would
try to implement a best-effort approach that would work reasonably well in the cases where you do not know the
specifics.
The default configuration also applies to every IP-address for which no specific setting is available.
HTTP request bodies are often big, so they take a lot of time to process, which has a significant impact on performance.
With the option 'request-body-limit' you can set the limit (in bytes) of the client body that will be inspected. Setting
it to 0 will inspect all of the body.
The same goes for HTTP response bodies.
libhtp:
  default-config:
    personality: IDS
    request-body-limit: 3072
    response-body-limit: 3072

  server-config:
    - apache:
        address: [192.168.1.0/24, 127.0.0.0/8, "::1"]
        personality: Apache_2_2
        request-body-limit: 0
        response-body-limit: 0

    - iis7:
        address:
          - 192.168.0.0/24
          - 192.168.10.0/24
        personality: IIS_7_0
        request-body-limit: 4096
        response-body-limit: 8192
Suricata makes the whole set of libhtp customisations available to its users.
You can use these parameters in the configuration to customise Suricata's use of libhtp.
# response-body-decompress-layer-limit:
#   Limit to how many layers of compression will be
#   decompressed. Defaults to 2.

# inspection limits
request-body-minimal-inspect-size: 32kb
request-body-inspect-window: 4kb
response-body-minimal-inspect-size: 40kb
response-body-inspect-window: 16kb

# auto will use http-body-inline mode in IPS mode, yes or no set it statically
http-body-inline: auto

# Take a random value for inspection sizes around the specified value.
# This lowers the risk of some evasion techniques but could lead to
# detection changes between runs. It is set to 'yes' by default.
#randomize-inspection-sizes: yes

# If randomize-inspection-sizes is active, the value of the various
# inspection sizes will be chosen in the [1 - range%, 1 + range%] range.
# Default value of randomize-inspection-range is 10.
#randomize-inspection-range: 10
decompression-time-limit
decompression-time-limit was implemented to avoid DoS by resource exhaustion on inputs such as decompression
bombs (found by fuzzing). The lower the limit, the better the protection against DoS, but this may also lead to false
positives. In case the time limit is reached, the app-layer event http.compression_bomb is set (this event can also
be set from other conditions). This can happen on slow configurations (hardware, ASAN, etc.).
Configure SMB
The SMB parser will parse version 1, 2 and 3 of the SMB protocol over TCP.
To enable the parser add the following to the app-layer section of the YAML.
smb:
  enabled: yes
  detection-ports:
    dp: 139, 445
The parser uses pattern based protocol detection and will fallback to probing parsers if the pattern based detec-
tion fails. As usual, the pattern based detection is port independent. The probing parsers will only run on the
detection-ports.
SMB is commonly used to transfer the DCERPC protocol. This traffic is also handled by this parser.
Resource limits
Several options are available for limiting record sizes and data chunk tracking.
smb:
  enabled: yes
  max-read-size: 8mb
  max-write-size: 1mb
  max-read-queue-size: 16mb
  max-read-queue-cnt: 16
  max-write-queue-size: 16mb
  max-write-queue-cnt: 16
The max-read-size option can be set to control the max size of accepted READ records. Events will be raised if a
READ request asks for too much data and/or if READ responses are too big. A value of 0 disables the checks.
The max-write-size option can be set to control the max size of accepted WRITE request records. Events will be raised
if a WRITE request sends too much data. A value of 0 disables the checks.
Additionally, if the max-read-size or max-write-size values in the "negotiate protocol response" exceed these limits,
an event will also be raised.
For file tracking, extraction and file data inspection the parser queues up out of order data chunks for both READs and
WRITEs. To avoid using too much memory the parser allows for limiting both the size in bytes and the number of
queued chunks.
smb:
  enabled: yes
  max-read-queue-size: 16mb
  max-read-queue-cnt: 16
  max-write-queue-size: 16mb
  max-write-queue-cnt: 16
max-read-queue-size controls how many bytes can be used per SMB flow for out of order READs. max-read-queue-cnt
controls how many READ chunks can be queued per SMB flow. Processing of these chunks will be blocked when any
of the limits is exceeded, and an event will be raised.
max-write-queue-size and max-write-queue-cnt behave like the READ variants, but apply to WRITEs.
Cache limits
The SMB parser uses several per flow caches to track data between different records and transactions. These caches
have a size ceiling. When the size limit is reached, new additions will automatically evict the oldest entries.
smb:
  max-guid-cache-size: 1024
  max-rec-offset-cache-size: 128
  max-tree-cache-size: 512
  max-dcerpc-frag-cache-size: 128
  max-session-cache-size: 512
The max-guid-cache-size setting controls the size of the hash that maps the GUID to filenames. These are added through
CREATE commands and removed by CLOSE commands.
max-rec-offset-cache-size controls the size of the hash that maps the READ offset from READ commands to the READ
responses.
The max-tree-cache-size option controls the size of the SMB session to SMB tree hash.
max-dcerpc-frag-cache-size controls the size of the hash that tracks partial DCERPC over SMB records. These are
buffered in this hash to only parse the DCERPC record when it is fully reassembled.
The max-session-cache-size setting controls the size of a generic hash table that maps SMB session to filenames, GUIDs
and share names.
Configure HTTP2
HTTP2 has 2 parameters that can be customized. The point of these 2 parameters is to find a balance between the
completeness of analysis and the resource consumption.
http2.max-table-size refers to SETTINGS_HEADER_TABLE_SIZE from RFC 7540 section 6.5.2. Its default value is
4096 bytes, but it can be set to any uint32 by a flow.
http2.max-streams refers to SETTINGS_MAX_CONCURRENT_STREAMS from RFC 7540 section 6.5.2. Its default
value is unlimited.
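A sketch of how these two parameters could be set in the app-layer section (the values are illustrative):

```yaml
app-layer:
  protocols:
    http2:
      enabled: yes
      max-table-size: 4096  # ceiling for SETTINGS_HEADER_TABLE_SIZE
      max-streams: 4096     # cap on concurrent streams tracked per flow
```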
SSL/TLS
SSL/TLS parsers track encrypted SSLv2, SSLv3, TLSv1, TLSv1.1 and TLSv1.2 sessions.
Protocol detection is done using patterns and a probing parser running on only TCP/443 by default. The pattern based
protocol detection is port independent.
tls:
  enabled: yes
  detection-ports:
    dp: 443
Encrypted traffic
There is no decryption of encrypted traffic, so once the handshake is complete continued tracking of the session is of
limited use. The encryption-handling option controls the behavior after the handshake.
If encryption-handling is set to default (or if the option is not set), Suricata will continue to track the SSL/TLS
session. Inspection will be limited, as raw content inspection will still be disabled. There is no point in doing pattern
matching on traffic known to be encrypted. Inspection for (encrypted) Heartbleed and other protocol anomalies still
happens.
When encryption-handling is set to bypass, all processing of this session is stopped. No further parsing and
inspection happens. If stream.bypass is enabled this will lead to the flow being bypassed, either inside Suricata or
by the capture method if it supports it and is configured for it.
Finally, if encryption-handling is set to full, Suricata will process the flow as normal, without inspection limita-
tions or bypass.
The option has replaced the no-reassemble option. If no-reassemble is present, and encryption-handling
is not, false is interpreted as encryption-handling: default and true is interpreted as
encryption-handling: bypass.
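The three behaviours can be selected in the tls section, for example (a sketch):

```yaml
tls:
  enabled: yes
  # default: stop inspection after the handshake; bypass: stop all processing;
  # full: keep processing the flow as normal.
  encryption-handling: default
```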
Modbus
According to the MODBUS Messaging on TCP/IP Implementation Guide V1.0b, it is recommended to keep the TCP
connection open with a remote device and not to open and close it for each MODBUS/TCP transaction. In that case,
it is important to set the stream-depth of the modbus parser to unlimited.
modbus:
  # Stream reassembly size for modbus, default is 0
  stream-depth: 0
MQTT
The maximum size of a MQTT message is 256MB, potentially containing a lot of payload data (such as properties,
topics, or published payloads) that would end up parsed and logged. To acknowledge the fact that most MQTT messages,
however, will be quite small and to reduce the potential for denial of service issues, it is possible to limit the maximum
length of a message that Suricata should parse. Any message larger than the limit will just be logged with reduced
metadata, and rules will only be evaluated against a subset of fields. The default is 1 MB.
mqtt:
  max-msg-length: 1mb
SMTP
SMTP parsers can extract files from attachments. It is also possible to extract raw conversations as files with the key
raw-extraction. Note that in this case the whole conversation will be stored as a file, including SMTP headers and
body content. The filename will be set to "rawmsg". Usual file-related signatures will match on the raw content of
the email. This configuration parameter has a false default value. It is incompatible with decode-mime. If both are
enabled, raw-extraction will be automatically disabled.
smtp:
  # extract messages in raw format from SMTP
  raw-extraction: true
Maximum transactions
SMTP, MQTT, FTP, PostgreSQL, SMB, DCERPC, HTTP1, ENIP and NFS each have a max-tx parameter that can
be customized. max-tx refers to the maximum number of live transactions for each flow. An app-layer event
protocol.too_many_transactions is triggered when this value is reached. The point of this parameter is to find a balance
between the completeness of analysis and the resource consumption.
For HTTP2, this parameter is named max-streams, as an HTTP2 stream will get translated into one Suricata transaction.
This configuration parameter is used whatever the value of SETTINGS_MAX_CONCURRENT_STREAMS negotiated
between a client and a server in a specific flow is.
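For example, limiting SMTP transactions could look roughly like this (the value is illustrative, not a recommended default):

```yaml
app-layer:
  protocols:
    smtp:
      enabled: yes
      max-tx: 256  # raise smtp.too_many_transactions beyond this many live transactions
```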
be shown. The default setting is Notice. This means that error, warning and notice will be shown and messages
for the other levels won't be.
# Define your logging outputs. If none are defined, or they are all
# disabled you will get the default - console output.
outputs:
  - console:
      enabled: yes
      # type: json
  - file:
      enabled: yes
      level: info
      filename: suricata.log
      # format: "[%i - %m] %z %d: %S: %M"
      # type: json
  - syslog:
      enabled: no
      facility: local5
      format: "[%i] <%d> -- "
      # type: json
logging:
  default-log-level: info
This option sets the default log level. The default log level is notice. This value will be used in the individual logging
configuration (console, file, syslog) if not otherwise set.
Note
The -v command line option can be used to quickly increase the log level at runtime. See the -v command line
option.
The default-log-level set in the configuration value can be overridden by the SC_LOG_LEVEL environment vari-
able.
(Here the part until the second : is the meta info, "This is Suricata version 7.0.2 RELEASE running in USER mode"
is the actual message.)
It is possible to determine which information will be displayed in this line and in which format it will be displayed.
This option is the so-called format string:
A % followed by a character has a special meaning. There are thirteen specified conversion characters:
The last three options, f, l and n, are mainly convenient for developers.
The log-format can be overridden in the command line by the environment variable: SC_LOG_FORMAT.
Output Filter
Within logging you can set an output-filter. With this output-filter you can set which part of the event-logs should be
displayed. You can supply a regular expression (Regex). A line will be shown if the regex matches.
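As a sketch, a default output filter could be set like this (the regex shown is just an example):

```yaml
logging:
  # Only show log lines whose message matches this regular expression.
  default-output-filter: "SMB"
```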
Logging Outputs
There are different ways of displaying output. The output can appear directly on your screen, it can be placed in a
file, or it can be sent via syslog. The latter is an advanced tool for log management that can direct log output to
different locations (files, other computers, etc.).
outputs:
  - console:                           # Output to screen (stdout/stderr).
      enabled: yes                     # This option is enabled.
      #level: notice                   # Use a different level than the default.
  - file:                              # Output stored in a file.
      enabled: no                      # This option is not enabled.
      filename: /var/log/suricata.log  # Filename and location on disc.
      level: info                      # Use a different level than the default.
  - syslog:                            # Output using syslog.
      enabled: no                      # The use of this program is not enabled.
      facility: local5                 # Syslog facility to use.
      format: "[%i] <%d> -- "          # Output format specific to syslog.
      #level: notice                   # Use a different level than the default.
./configure --enable-dpdk
Suricata makes use of DPDK for packet acquisition in workers runmode. The whole DPDK configuration resides in
the dpdk: node. This node encapsulates 2 main subnodes, and those are eal-params and interfaces.
dpdk:
  eal-params:
    proc-type: primary
    allow: ["0000:3b:00.0", "0000:3b:00.1"]
  interfaces:
The DPDK arguments, which are typically provided through the command line, are contained in the node dpdk.eal-
params. EAL is configured and initialized using these parameters. There are two ways to specify arguments: long
and short. Dashes are omitted when describing the arguments. This node can be used to set up the memory
configuration, accessible NICs, and other EAL-related parameters, among other things. The node dpdk.eal-params
also supports multiple arguments of the same type. This can be useful for EAL arguments such as --vdev, --allow,
or --block. Values for these EAL arguments are specified as a comma-separated list. An example of such usage can
be found above, where the allow argument makes only 0000:3b:00.0 and 0000:3b:00.1 accessible to Suricata. The
definition of lcore affinity as an EAL parameter is standard practice. However, lcore parameters such as -l, -c, and
--lcores are specified within the threading section of suricata.yaml to prevent configuration overlap.
The node dpdk.interfaces wraps a list of interface configurations. Items on the list follow the structure that can be
found in other capture interfaces. The individual items contain the usual configuration options such as threads/copy-
mode/checksum-checks settings. Other capture interfaces, such as AF_PACKET, rely on the user to ensure that NICs
are appropriately configured. Configuration through the kernel does not apply to applications running under DPDK.
The application is solely responsible for the initialization of the NICs it is using. So, before the start of Suricata, the
NICs that Suricata uses, must undergo the process of initialization. As a result, there are extra configuration options
(how NICs can be configured) in the items (interfaces) of the dpdk.interfaces list. At the start of the configuration
process, all NIC offloads are disabled to prevent any packet modification. According to the configuration, checksum
validation offload can be enabled to drop invalid packets. Other offloads can not currently be enabled. Additionally,
the list items in dpdk.interfaces contain DPDK-specific settings such as mempool-size or rx-descriptors. These settings
adjust individual parameters of the interface. One of the entries in dpdk.interfaces is the default interface. When
loading an interface configuration and an entry is missing, the corresponding value of the default interface is used.
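A per-interface entry might look roughly like this (a sketch; the PCIe address, thread count and sizes are placeholders to adjust per deployment):

```yaml
dpdk:
  interfaces:
    - interface: 0000:3b:00.0  # PCIe address of the NIC port
      threads: 4               # worker threads reading from this interface
      checksum-checks: true    # let the NIC validate checksums where supported
      mempool-size: 65535      # packets held in this interface's mempool
      mempool-cache-size: 257  # per-core cache of recently used buffers
      rx-descriptors: 1024
      tx-descriptors: 1024
```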
The worker threads must be assigned to specific cores. The configuration module threading must be used to set thread
affinity. Worker threads can be pinned to cores in the array configured in threading.cpu-affinity["worker-cpu-set"].
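Pinning workers could be sketched as follows (the core ids are examples and must match your NUMA layout):

```yaml
threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - worker-cpu-set:
        cpu: [ 2, 4, 6, 8 ]  # cores on the same NUMA node as the NIC
        mode: "exclusive"
```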
Performance-oriented setups have everything (the NIC, memory, and CPU cores interacting with the NIC) on one
NUMA node. It is therefore required to know the layout of the server architecture to get the best results. The CPU
core ids and NUMA locations can be determined, for example, from the output of /proc/cpuinfo, where physical id
describes the NUMA number. The NUMA node to which the NIC is connected can be determined from the file
/sys/class/net/<KERNEL NAME OF THE NIC>/device/numa_node.
Suricata operates in workers runmode. Packet distribution relies on Receive Side Scaling (RSS), which distributes
packets across the NIC queues. Individual Suricata workers then poll packets from the NIC queues. Internally, DPDK
runmode uses a symmetric hash (0x6d5a) that redirects bi-flows to specific workers.
Before Suricata can be run, it is required to allocate a sufficient number of hugepages. For efficiency, hugepages are
contiguous chunks of memory (pages) that are larger (2 MB+) than what is typically used in operating systems (4
KB). A lower count of pages allows faster lookup of page entries. The hugepages need to be allocated on the NUMA
node where the NIC and the affined CPU cores reside. For example, if the hugepages are allocated only on NUMA
node 0 and the NIC is connected to NUMA node 1, then the application will fail to start. As a result, it is advised to
identify the NUMA node to which the NIC is attached before allocating hugepages, and to set CPU core affinity to
that node. In case a Suricata deployment uses multiple NICs, hugepages must be allocated on each of the NUMA
nodes used by the deployment.
DPDK memory pools hold packets received from NICs. These memory pools are allocated in hugepages. One memory
pool is allocated per interface. The size of each memory pool can be individual and is set with the mempool-size.
Memory (in bytes) for one memory pool is calculated as: mempool-size * mtu. The sum of memory pool requirements
divided by the size of one hugepage results in the number of required hugepages. It causes no problem to allocate more
memory than required, but it is vital for Suricata to not run out of hugepages.
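As a rough worked example of this calculation (all numbers are illustrative): one interface with a mempool of 65535 packets and an MTU of 1500 needs about 65535 * 1500 B ≈ 94 MB, i.e. roughly 47 hugepages of 2 MB for this interface alone. As a config sketch:

```yaml
dpdk:
  interfaces:
    - interface: 0000:3b:00.0
      mtu: 1500
      # mempool memory ≈ mempool-size * mtu = 65535 * 1500 B ≈ 94 MB,
      # so allocate at least ~47 2MB hugepages on this NIC's NUMA node.
      mempool-size: 65535
```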
The mempool cache is local to the individual CPU cores and holds packets that were recently processed. As the
mempool is shared among all cores, the cache tries to minimize the required inter-process synchronization. The rec-
ommended size of the cache is covered in the YAML file.
To be able to run DPDK on Intel cards, it is required to change the default Intel driver to either the vfio-pci or the
igb_uio driver. The process is described in the DPDK manual page regarding Linux drivers. The Intel NICs have the
number of RX/TX descriptors capped at 4096. This can be changed by manually compiling DPDK with altered values
of the respective macros for the desired drivers (e.g. IXGBE_MAX_RING_DESC/I40E_MAX_RING_DESC).
DPDK is natively supported by Mellanox and thus their NICs should work "out of the box".
Current DPDK support involves Suricata running on:
• a physical machine with a physical NICs such as:
– mlx5 (ConnectX-4/ConnectX-5/ConnectX-6)
– ixgbe
– i40e
– ice
• a virtual machine with virtual interfaces such as:
– e1000
– VMXNET3
– virtio-net
Other NICs using the same drivers as mentioned above should work as well. The DPDK capture interface has not
been tested with virtual interfaces or in virtual environments like VMs, Docker or similar.
The minimal supported DPDK is version 19.11, which should be available in most repositories of major distributions.
Alternatively, it is also possible to use meson and ninja to build and install DPDK from source. It is required to have
a correctly configured pkg-config tool, as it is used to load libraries and CFLAGS during Suricata configuration and
compilation. This can be tested by querying the DPDK version as:
Pf-ring
Pf_ring is a library that aims to improve packet capture performance over libpcap. It performs packet acquisition.
There are three options within Pf_ring: interface, cluster-id and cluster-type.
pfring:
  interface: eth0  # In this option you can set the network interface
                   # on which you want the packets of the network to be read.
Pf_ring will load balance packets based on flow. All packet acquisition threads that will participate in the load balancing
need to have the same cluster-id. It is important to make sure this ID is unique for this cluster of threads, so that no
other engine / program is making use of clusters with the same id.
cluster-id: 99
Pf_ring can load balance traffic using pf_ring clusters. All traffic for pf_ring will be load balanced across acquisition
threads of the same cluster id, according to the configured cluster-type value: in a round robin manner or in a per-flow
manner.
The "inner" flow means that the traffic will be load balanced based on the address tuple after the outer VLAN has
been removed.
The cluster_round_robin manner is a way of distributing packets one at a time to each thread (like distributing playing
cards to fellow players). The cluster_flow manner is a way of distributing all packets of the same flow to the same
thread. The flows themselves will be distributed to the threads in a round-robin manner.
If your deployment has VLANs, the cluster types with "inner" will use the innermost address tuple for distribution.
The default cluster type is cluster_flow; cluster_round_robin is not recommended with Suricata.
cluster-type: cluster_inner_flow_5_tuple
NFQ
Using NFQUEUE in iptables rules will send packets to Suricata. If the mode is set to 'accept', a packet that has been
sent to Suricata by a rule using NFQ will by default not be inspected by the rest of the iptables rules after being
processed by Suricata. There are a few more options to NFQ to change this if desired.
If the mode is set to 'repeat', the packets will be marked by Suricata and be re-injected at the first rule of iptables. To
prevent a packet from going round in circles, the rule using NFQ will be skipped because of the mark.
If the mode is set to 'route', you can make sure the packet will be sent to another tool after being processed by Suricata.
It is possible to assign this tool with the mandatory option 'route_queue'. Every engine/tool is linked to a queue
number. This number you can add to the NFQ rule and to the route_queue option.
Add the numbers of the options repeat_mark and route_queue to the NFQ rule:
Add the numbers of the options repeat_mark and route_queue to the NFQ-rule:
nfq:
  mode: accept    # By default the packet will be accepted or dropped by Suricata
  repeat-mark: 1  # If the mode is set to 'repeat', the packets will be marked after
                  # being processed by Suricata.
  repeat-mask: 1
  route-queue: 2  # Here you can assign the queue-number of the tool that Suricata
                  # has to send the packets to.
Example 1 NFQ1
mode: accept
Example 2 NFQ
mode: repeat
Example 3 NFQ
mode: route
Ipfw
Suricata does not only support Linux; it supports the FreeBSD operating system (an open source Unix operating
system) and Mac OS X as well. The inline mode on FreeBSD uses ipfw (IP firewall).
Certain rules in ipfw send network traffic to Suricata. Rules have numbers. With this option you can set the rule
number at which the network traffic will be re-injected. Make sure this rule comes after the one that sends the traffic
to Suricata, otherwise it will go around in circles.
The following tells the engine to re-inject packets back into the ipfw firewall at rule number 5500:
ipfw:
  ipfw-reinjection-rule-number: 5500
Example 16 Ipfw-reinjection.
12.1.18 Rules
Rule Files
Suricata by default is set up for rules to be managed by Suricata-Update with the following rule file configuration:
default-rule-path: /var/lib/suricata/rules
rule-files:
  - suricata.rules
Additional rule files can be listed alongside, by absolute path or relative to the default-rule-path:
default-rule-path: /var/lib/suricata/rules
rule-files:
  - suricata.rules
  - /etc/suricata/rules/custom.rules
File names can be specified with an absolute path, or just the base name. If just the base name is provided, it will be
looked for in the default-rule-path.
If a rule file cannot be found, Suricata will log a warning message and continue to load, unless --init-errors-fatal
has been specified on the command line, in which case Suricata will exit with an error code.
For more information on rule management see Rule Management.
Threshold-file
Within this option, you can state the location where the threshold file will be stored. The default is:
/etc/suricata/threshold.config
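As a config sketch, the location is set with a single top-level key:

```yaml
threshold-file: /etc/suricata/threshold.config
```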
Classifications
The classification file makes the purpose of rules clear.
Some rules are just there to provide information. Others warn you about serious risks, such as when you are being
hacked.
In this classification file, classifications are attached to rules to make it possible for the system administrator to
distinguish events.
An entry in this file consists of three parts: the short name, a description and the priority of the rule (in which 1 is
the highest priority and 4 the lowest).
You can see these descriptions returning in the rules and in events/alerts.
Example:
Rule:
alert tcp $HOME_NET 21 -> $EXTERNAL_NET any (msg:"ET POLICY FTP Login Successful (non-anonymous)"; \
  flow:from_server,established; flowbits:isset,ET.ftp.user.login; flowbits:isnotset,ftp.user.logged_in; \
  classtype:misc-activity; reference:url,doc.emergingthreats.net/2003410; \
  reference:url,www.emergingthreats.net/cgi-bin/cvsweb.cgi/sigs/POLICY/POLICY_FTP_Login; \
  sid:2003410; rev:7;)
Event/Alert:
classification-file: /etc/suricata/classification.config
Rule-vars
There are variables that can be used in rules.
Within rules, there is the possibility to set for which IP addresses the rule should be checked and for which it should
not.
This way, only relevant rules will be used. To prevent you from having to set this per rule, there is an option in which
you can set the relevant IP addresses for several rules at once. This option contains the address group vars that will
be passed into the rules. So, after HOME_NET you can enter your home IP address range.
vars:
  address-groups:
    HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12]"  # By using [], it is possible
                                                           # to set complicated variables.
    EXTERNAL_NET: any
    HTTP_SERVERS: "$HOME_NET"  # The $-sign tells that what follows is a variable.
    SMTP_SERVERS: "$HOME_NET"
    SQL_SERVERS: "$HOME_NET"
    DNS_SERVERS: "$HOME_NET"
    TELNET_SERVERS: "$HOME_NET"
    AIM_SERVERS: any

  port-groups:
    HTTP_PORTS: "80"
    SHELLCODE_PORTS: "!80"
    ORACLE_PORTS: 1521
    SSH_PORTS: 22
    SIP_PORTS: "[5060, 5061]"
Host-os-policy
Operating systems differ in the way they process fragmented packets and streams. Suricata handles these anomalies
differently per operating system. It is important to set which operating system your IP addresses use, so Suricata
knows how to process fragmented packets and streams. For example, in stream reassembly there can be packets with
overlapping payloads.
Example 17 Overlapping payloads
In the configuration-file, the operating-systems are listed. You can add your IP-address behind the name of the operating
system you make use of.
host-os-policy:
windows: [0.0.0.0/0]
bsd: []
bsd-right: []
old-linux: []
linux: [10.0.0.0/8, 192.168.1.100, "8762:2352:6241:7245:E000:0000:0000:0000"]
old-solaris: []
solaris: ["::1"]
hpux10: []
hpux11: []
irix: []
macos: []
vista: []
windows2k3: []
Engine-analysis
The option engine-analysis provides information for signature writers about how Suricata organizes signatures internally.
As mentioned before, signatures have zero or more patterns on which they can match. Only one of these patterns is used by the multi pattern matcher (MPM). Suricata determines which pattern will be used unless the fast-pattern rule option is used.
The option engine-analysis creates a new log file in the default log dir. In this file all information about signatures and
patterns can be found so signature writers are able to see which pattern is used and change it if desired.
To create this log file, you have to run Suricata with ./src/suricata -c suricata.yaml --engine-analysis.
engine-analysis:
rules-fast-pattern: yes
Example:
[10703] 26/11/2010 -- 11:41:15 - (detect.c:560) <Info> (SigLoadSignatures)
-- Engine-Analysis for fast_pattern printed to file - /var/log/suricata/rules_fast_pattern.txt
alert tcp any any -> any any (content:"Volume Serial Number"; sid:1292;)
== Sid: 1292 ==
Fast pattern matcher: content
Fast pattern set: no
Fast pattern only set: no
Fast pattern chop set: no
Content negated: no
Original content: Volume Serial Number
Final content: Volume Serial Number
---
alert tcp any any -> any any (content:"abc"; content:"defghi"; sid:1;)
== Sid: 1 ==
Fast pattern matcher: content
Fast pattern set: no
Fast pattern only set: no
Fast pattern chop set: no
Content negated: no
Original content: defghi
Final content: defghi
---
alert tcp any any -> any any (content:"abc"; fast_pattern:only; content:"defghi"; sid:1;)
== Sid: 1 ==
Fast pattern matcher: content
Fast pattern set: yes
Fast pattern only set: yes
Fast pattern chop set: no
Content negated: no
Original content: abc
Final content: abc
---
alert tcp any any -> any any (content:"abc"; fast_pattern; content:"defghi"; sid:1;)
== Sid: 1 ==
Fast pattern matcher: content
Fast pattern set: yes
Fast pattern only set: no
Fast pattern chop set: no
Content negated: no
Original content: abc
Final content: abc
---
alert tcp any any -> any any (content:"abc"; fast_pattern:1,2; content:"defghi"; sid:1;)
== Sid: 1 ==
Fast pattern matcher: content
Fast pattern set: yes
Fast pattern only set: no
Fast pattern chop set: yes
Fast pattern offset, length: 1, 2
Content negated: no
Original content: abc
Final content: bc
profiling:
rules:
enabled: yes
This engine is not used by default. It can only be used if Suricata is compiled with:
--enable-profiling
At the end of each session Suricata displays the profiling statistics, sorted. The sort order can be changed as desired; the choice is between ticks, avgticks, checks, maxticks and matches. The metric of your choice is displayed from high to low.
Suricata tracks the amount of time it takes to check the signatures, counted in ticks. One tick is one CPU cycle, so a 3 GHz CPU produces 3 billion ticks per second.
Besides the number of checks, ticks and matches, the average and the maximum per rule per session are displayed at the end of the line.
The limit option determines the number of signatures for which statistics are shown, based on the sorting.
sort: avgticks
limit: 100
Ticks
Packet Profiling
packets:

  # If set to yes, new packet profiling information will be
  # appended to the stored file.
  append: yes

Packet profiling is enabled by default in suricata.yaml, but it will only do its job if you compiled Suricata with --enable-profiling.
The file in which packet profiling information is stored is packet-stats.log. New information is appended to what was saved there earlier or, if the append option is set to no, the existing file is overwritten.
Per packet, you can send the output to a csv file. This file contains one line for each packet with all profiling information for that packet. This option can only be used if Suricata is built with --enable-profiling and if the packet profiling option is enabled in the yaml.
It is best to use runmode 'single' if you want to profile the speed of the code. With a single thread there is no situation in which two threads have to wait for each other; with two threads, the time threads spend waiting for each other would be included in the packet profiling measurements. For more information see Packet Profiling.
12.1.20 Decoder
Teredo
The Teredo decoder can be disabled. It is enabled by default.
decoder:
# Teredo decoder is known to not be completely accurate
# it will sometimes detect non-teredo as teredo.
teredo:
enabled: true
# ports to look for Teredo. Max 4 ports. If no ports are given, or
# the value is set to 'any', Teredo detection runs on _all_ UDP packets.
ports: $TEREDO_PORTS # syntax: '[3544, 1234]'
Using this default configuration, Teredo detection will run on UDP port 3544. If the ports parameter is missing, or set
to any, all ports will be inspected for possible presence of Teredo.
logging:
# Requires libunwind to be available when Suricata is configured and built.
# If a signal unexpectedly terminates Suricata, displays a brief diagnostic
# message with the offending stacktrace if enabled.
#stacktrace-on-signal: on
Beyond suricata.yaml, other ways to harden Suricata are:
• compilation: enabling ASLR and other exploit mitigation techniques.
• environment: running Suricata on a device that has no direct access to the Internet.
Lua
Suricata 8.0 sandboxes Lua rules by default. The restrictions on the sandbox for Lua rules can be modified in the
security.lua section of the configuration file. Additionally, Lua rules can be completely disabled the same as the
Suricata 7.0 default:
security:
lua:
# Allow Lua rules. Disabled by default.
#allow-rules: false
12.2 Global-Thresholds
Thresholds can be configured in the rules themselves, see Thresholding Keywords. They are often set by rule writers
based on their intelligence for creating a rule combined with a judgement on how often a rule will alert.
Thresholds are tracked in a hash table that is sized according to configuration, see: Thresholding Settings.
threshold/event_filter
Syntax:
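Based on the fields described in the sections below, a threshold/event_filter entry has the following general shape (the type values are confirmed by the examples later in this chapter; track options as for rate_filter):

```
threshold gen_id <gid>, sig_id <sid>, type <threshold|limit|both>, \
  track <by_src|by_dst|by_rule|by_both|by_flow>, count <N>, seconds <T>
```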
rate_filter
Rate filters allow changing of a rule action when a rule matches.
Syntax:
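Putting the fields described below together, a rate_filter entry has the following general shape:

```
rate_filter gen_id <gid>, sig_id <sid>, track <by_src|by_dst|by_rule|by_both|by_flow>, \
  count <N>, seconds <T>, new_action <alert|drop|pass|reject>, timeout <T>
```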
Example:
rate_filter gen_id 1, sig_id 1000, track by_rule, count 100, seconds 60, \
new_action alert, timeout 30
gen_id
Generator id. Normally 1, but if a rule uses the gid keyword to set another value it has to be matched in the gen_id.
sig_id
track
Where to track the rule matches. When using by_src/by_dst the tracking is done per IP address, with the Host table used for storage. When using by_rule it is done globally for the rule. The by_both option tracks per IP pair of source and destination; packets going in opposite directions between the same addresses are tracked as the same pair. The by_flow option tracks the rule matches in the flow.
count
seconds
Time period within which the count needs to be reached to activate the rate_filter
new_action
New action that is applied to matching traffic when the rate_filter is in place.
Values:
<alert|drop|pass|reject>
Note: 'sdrop' and 'log' are supported by the parser but not implemented otherwise.
timeout
Example
Let's say we want to limit incoming connections to our SSH server. The rule 888 below simply alerts on SYN packets to the SSH port of our SSH server. If an IP address triggers this 10 or more times within a minute, the drop rate_filter is set with a timeout of 5 minutes.
Rule:
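The rule itself is not reproduced here; a sketch of what such a rule could look like (the msg is illustrative, port 22 and the SYN flag are assumed from the description above):

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"Incoming SSH connection attempt"; flags:S; sid:888; rev:1;)
```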
Rate filter:
rate_filter gen_id 1, sig_id 888, track by_src, count 10, seconds 60, \
new_action drop, timeout 300
suppress
Suppressions can be used to suppress alerts for a rule or a host/network. Actions performed when a rule matches, such
as setting a flowbit, are still performed.
Syntax:
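A suppress entry has the following general shape; the track/ip part is optional and restricts the suppression to a host or network:

```
suppress gen_id <gid>, sig_id <sid>
suppress gen_id <gid>, sig_id <sid>, track <by_src|by_dst|by_either>, ip <ip|subnet|addressvar>
```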
Examples:
This will make sure the signature 2002087 will never match for src host 209.132.180.67.
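Written out as a suppress entry, that example is:

```
suppress gen_id 1, sig_id 2002087, track by_src, ip 209.132.180.67
```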
Other possibilities/examples:
In the last example above, the by_either tracking means that if either the source ip or destination ip matches
217.110.97.128/25 the rule with sid 2003614 is suppressed.
alert tcp any any -> any 25 (msg:"ET POLICY Inbound Frequent Emails - Possible Spambot Inbound"; \
Suppress
Suppressions can be combined with rules with thresholds/detection_filters with no exceptions.
Each of the rules above will make sure 2002087 doesn't alert when the source of the emails is 209.132.180.67. It will
alert for all other hosts.
This suppression will simply convert the rule to "noalert", meaning it will never alert in any case. If the rule sets a
flowbit, that will still happen.
Threshold/event_filter
When applied to a specific signature, thresholds and event_filters (threshold from now on) will override the signature
setting. This can be useful for when the default in a signature doesn't suit your environment.
threshold gen_id 1, sig_id 2002087, type both, track by_src, count 3, seconds 5
threshold gen_id 1, sig_id 2002087, type threshold, track by_src, count 10, seconds 60
threshold gen_id 1, sig_id 2002087, type limit, track by_src, count 1, seconds 15
Each of these will replace the threshold setting for 2002087 by the new threshold setting.
Note: overriding all gids or sids (by using gen_id 0 or sig_id 0) is not supported. Bug https://redmine.openinfosecfoundation.org/issues/425.
Rate_filter
see https://redmine.openinfosecfoundation.org/issues/425.
This value will be overwritten by specific exception policies whose settings are also defined in the yaml file.
Auto
In IPS mode, the default behavior for most of the exception policies is to fail close. This means dropping the flow,
or the packet, when the flow action is not supported. The default policy for the midstream exception will be ignore if
midstream flows are accepted.
It is possible to disable this default, by setting the exception policies' "master switch" yaml config option to ignore.
In IDS mode, setting auto mode actually means disabling the master-switch, or ignoring the exception policies.
Specific settings
Exception policies are implemented for:
To change any of these, go to the specific section in the suricata.yaml file (for more configuration details, check the
suricata.yaml's documentation).
The possible values for the exception policies, and the resulting behaviors, are:
• drop-flow: disable inspection for the whole flow (packets, payload, application layer protocol), drop the packet
and all future packets in the flow.
• drop-packet: drop the packet.
• reject: same as drop-flow, but reject the current packet as well (see reject action in Rule's Action).
• bypass: bypass the flow. No further inspection is done. Bypass may be offloaded.
• pass-flow: disable payload and packet detection; stream reassembly, app-layer parsing and logging still happen.
• pass-packet: disable detection, still does stream updates and app-layer parsing (depending on which policy
triggered it).
• ignore: do not apply exception policies (default behavior).
The drop, pass and reject values are similar to the rule actions described in Rule actions.
The main difference between IDS and IPS scenarios is that in IPS mode flows can be allowed or blocked (as in with
the PASS and DROP rule actions). Packet actions are not valid, as midstream pick-up is a configuration that affects the
whole flow.
Notes:
• Not valid means that Suricata will error out and won't start.
• REJECT will make Suricata send a Reset-packet unreach error to the sender of the matching packet.
If a given exception policy does not apply for a setting, no related counter is logged.
Stats for application layer errors are available in summarized form or per application layer protocol. As the latter is
extremely verbose, by default Suricata logs only the summary. If any further investigation is needed, it is recommended
to enable per-app-proto exception policy error counters temporarily (for stats configuration).
12.4.1 Variables
snort.conf
8180,8181,8243,8280,8800,8888,8899,9000,9080,9090,9091,9443,9999,11371,55555]
suricata.yaml
vars:
address-groups:
HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12]"
EXTERNAL_NET: "!$HOME_NET"
port-groups:
HTTP_PORTS: "80"
SHELLCODE_PORTS: "!80"
Note that Suricata can automatically detect HTTP traffic regardless of the port it uses, so the HTTP_PORTS variable is not nearly as important as it is with Snort if you use a Suricata-enabled ruleset.
suricata.yaml
Suricata has no specific decoder options. All decoder related alerts are controlled by rules. See #Rules below.
suricata.yaml
Suricata's checksum handling works on-demand. The stream engine checks TCP and IP checksum by default:
stream:
checksum-validation: yes # reject wrong csums
Alerting on bad checksums can be done with normal rules. See #Rules, decoder-events.rules specifically.
# Configure active response for non inline operation. For more information, see README.active
suricata.yaml
Active responses are handled automatically w/o config if rules with the "reject" action are used.
Dropping privileges
snort.conf
# Configure specific UID and GID to run snort as after dropping privs. For more information see snort -h command line options
#
# config set_gid:
# config set_uid:
Suricata
To set the user and group use the --user <username> and --group <groupname> command-line options.
Snaplen
snort.conf
# Configure default snaplen. Snort defaults to MTU of in use interface. For more information see README
#
# config snaplen:
#
Suricata always works at full snap length to provide full traffic visibility.
Bpf
snort.conf
# Configure default bpf_file to use for filtering what traffic reaches snort. For more information see snort -h command line options (-F)
#
# config bpf_file:
#
suricata.yaml
BPF filters can be set per packet acquisition method, with the "bpf-filter: <file>" yaml option and in a file using the -F
command line option.
For example:
pcap:
- interface: eth0
#buffer-size: 16777216
#bpf-filter: "tcp and port 25"
#checksum-checks: auto
#threads: 16
#promisc: no
#snaplen: 1518
# Configure default log directory for snort to log to. For more information see snort -h command line options (-l)
#
# config logdir:
suricata.yaml
default-log-dir: /var/log/suricata/
# Configure DAQ related options for inline operation. For more information, see README.daq
#
# config daq: <type>
# config daq_dir: <dir>
# config daq_mode: <mode>
# config daq_var: <var>
#
# <type> ::= pcap | afpacket | dump | nfq | ipq | ipfw
# <mode> ::= read-file | passive | inline
# <var> ::= arbitrary <name>=<value passed to DAQ
# <dir> ::= path as to where to look for DAQ module so's
suricata.yaml
Suricata has all packet acquisition support built-in. Its configuration format is very verbose.
pcap:
- interface: eth0
#buffer-size: 16777216
#bpf-filter: "tcp and port 25"
#checksum-checks: auto
#threads: 16
#promisc: no
#snaplen: 1518
pfring:
afpacket:
nfq:
ipfw:
Passive vs inline vs reading files is determined by how Suricata is invoked on the command line.
12.4.7 Rules
snort.conf:
In snort.conf a RULE_PATH variable is set, as well as variables for shared object (SO) rules and preprocessor rules.
include $RULE_PATH/local.rules
include $RULE_PATH/emerging-activex.rules
...
suricata.yaml:
In the suricata.yaml the default rule path is set followed by a list of rule files. Suricata does not have a concept of shared
object rules or preprocessor rules. Instead of preprocessor rules, Suricata has several rule files for events set by the
decoders, stream engine, http parser etc.
default-rule-path: /etc/suricata/rules
rule-files:
The equivalent of preprocessor rules are loaded like normal rule files:
rule-files:
- decoder-events.rules
- stream-events.rules
- http-events.rules
- smtp-events.rules
12.5.2 YAML
Add a new section in the main ("master") Suricata configuration file -- suricata.yaml -- named multi-detect.
Settings:
• enabled: yes/no -> is multi-tenancy support enabled
• selector: direct (for unix socket pcap processing, see below), VLAN or device
• loaders: number of loader threads, for parallel tenant loading at startup
• tenants: list of tenants
  – config-path: path from where the tenant yamls are loaded
  – id: tenant id (numeric values only)
  – yaml: separate yaml file with the tenant specific settings
• mappings:
  – VLAN id or device: The outermost VLAN is used to match.
  – tenant id: tenant to associate with the VLAN id or device
multi-detect:
enabled: yes
#selector: direct # direct or vlan
selector: vlan
loaders: 3
tenants:
- id: 1
yaml: tenant-1.yaml
- id: 2
yaml: tenant-2.yaml
- id: 3
yaml: tenant-3.yaml
mappings:
- vlan-id: 1000
tenant-id: 1
- vlan-id: 2000
tenant-id: 2
- vlan-id: 1112
tenant-id: 3
# Set the default rule path here to search for the files.
# if not set, it will look at the current working dir
default-rule-path: /etc/suricata/rules
rule-files:
- rules1
classification-file: /etc/suricata/classification.config
reference-config-file: /etc/suricata/reference.config
HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12]"
EXTERNAL_NET: "!$HOME_NET"
...
port-groups:
HTTP_PORTS: "80"
SHELLCODE_PORTS: "!80"
...
vlan-id
Assign tenants to VLAN ids. Suricata matches the outermost VLAN id with this value. Multiple VLANs can have the
same tenant id. VLAN id values must be between 1 and 4094.
Example of VLAN mapping:
mappings:
- vlan-id: 1000
tenant-id: 1
- vlan-id: 2000
tenant-id: 2
- vlan-id: 1112
tenant-id: 3
The mappings can also be modified over the unix socket, see below.
Note: can only be used if vlan.use-for-tracking is enabled.
device
Assign tenants to devices. A single tenant can be assigned to a device. Multiple devices can have the same tenant id.
Example of device mapping:
mappings:
- device: ens5f0
tenant-id: 1
- device: ens5f1
tenant-id: 3
The mappings are static and cannot be modified over the unix socket.
Note: Not currently supported for IPS.
Note: support depends on a capture method using the 'livedev' API. Currently these are: pcap, AF_PACKET, PF_RING
and Netmap.
register-tenant 1 tenant-1.yaml
register-tenant 2 tenant-2.yaml
register-tenant 3 tenant-3.yaml
unregister-tenant <id>
unregister-tenant 2
unregister-tenant 1
This runs the traffic1.pcap against tenant 1 and it logs into /logs1/, traffic2.pcap against tenant 2 and logs to /logs2/ and
so on.
Registration
Tenants can be mapped to vlan ids.
register-tenant-handler <tenant id> vlan <vlan id>
The registration of tenant and tenant handlers can be done on a running engine.
Reloads
Reloading all tenants:
reload-tenants
reload-tenant 1 tenant-1.yaml
reload-tenant 5
The [yaml path] is optional. If it isn't provided, the original path of the tenant will be used during the reload.
wget http://people.redhat.com/sgrubb/libcap-ng/libcap-ng-0.7.8.tar.gz
tar -xzvf libcap-ng-0.7.8.tar.gz
cd libcap-ng-0.7.8
./configure
make
make install
Download, configure, compile and install Suricata for your particular setup. See Installation. Depending on your environment, you may need to add the --with-libcap_ng-libraries and --with-libcap_ng-includes options during the configure step. e.g.:
./configure --with-libcap_ng-libraries=/usr/local/lib \
--with-libcap_ng-includes=/usr/local/include
Now, when you run Suricata, tell it what user and/or group you want it to run as after startup with the --user and --group
options. e.g. (this assumes a 'suri' user and group):
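A sketch of such an invocation (the config path and capture interface here are assumptions, adjust them to your setup):

```
sudo suricata -c /etc/suricata/suricata.yaml -i eth0 --user suri --group suri
```

Suricata starts with root privileges to set up capture, then drops to the suri user and group.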
You will also want to make sure your user/group permissions are set so Suricata can still write to its log files which are
usually located in /var/log/suricata.
mkdir -p /var/log/suricata
chown -R root:suri /var/log/suricata
chmod -R 775 /var/log/suricata
landlock:
enabled: yes
directories:
write:
- /var/log/suricata/
- /var/run/
read:
- /usr/
- /etc/
- /etc/suricata/
Depending on your running configuration you may have to add some directories. There are two lists you can use: write, to add directories where write access is needed, and read, for directories where read access is needed.
Landlock is not active in some distributions and you may need to activate it at boot by adding lsm=landlock to the Linux command line. For example, on a Debian distribution with at least Linux 5.13, you can edit /etc/default/grub
and update the GRUB_CMDLINE_LINUX_DEFAULT option:
GRUB_CMDLINE_LINUX_DEFAULT="quiet lsm=landlock"
If you are interested in reading more about Landlock, you can use https://docs.kernel.org/userspace-api/landlock.html
as entry point.
12.8.2 Example
A test framework requires Suricata to be capturing before the tests can be carried out. Writing a test.service and ensuring the correct execution order with After=suricata.service forces the unit to be started after suricata.service. This does not, however, ensure that Suricata has fully initialised. Configuring suricata.service with Type=notify instructs the service manager to wait for the notification before starting test.service.
12.8.3 Requirements
This feature is only supported for distributions under the following conditions:
1. Any distribution that runs under systemd
2. Unit file configuration: Type=notify
For notification to the service manager the unit file must be configured as shown in requirement 2. When all requirements are met, the service manager will start and await READY=1 status from Suricata. Otherwise the service manager will treat the service unit as Type=simple and consider it started immediately after the main process ExecStart= has been forked.
To check whether your system runs systemd as init (PID 1):
ps --no-headers -o comm 1
12.9 Includes
A Suricata configuration file (typically /etc/suricata/suricata.yaml) may include other files, allowing a configuration file to be broken into multiple files. The special field name include is used to include one or more files. The contents of the include file are inlined at the level of the include statement. Include fields may also be included at any level within a mapping.
%YAML 1.1
---
HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12]"
vars:
address-groups:
HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12]"
Note
Suricata versions less than 7 required multiple include statements to be specified to include more than one file.
While Suricata 7.0 still supports this it will issue a deprecation warning. Suricata 8.0 will not allow multiple
include statements at the same level as this is not allowed by YAML.
THIRTEEN
REPUTATION
13.1 IP Reputation
13.1.1 IP Reputation Config
IP reputation has a few configuration directives, all disabled by default.
# IP Reputation
#reputation-categories-file: /etc/suricata/iprep/categories.txt
#default-reputation-path: /etc/suricata/iprep
#reputation-files:
# - reputation.list
reputation-categories-file
The categories file mapping numbered category values to short names.
reputation-categories-file: /etc/suricata/iprep/categories.txt
default-reputation-path
Path where reputation files from the "reputation-files" directive are loaded from by default.
default-reputation-path: /etc/suricata/iprep
reputation-files
YAML list of file names to load. In case of an absolute path the file is loaded directly; otherwise the path from "default-reputation-path" is prepended to form the final path.
reputation-files:
- badhosts.list
- knowngood.list
- sharedhosting.list
Hosts
IP reputation information is stored in the host table, so the settings of the host table affect it.
Depending on the number of hosts reputation information is available for, the memcap and hash size may have to be
increased.
Reloads
Sending Suricata a USR2 signal will reload the IP reputation data, along with the normal rules reload.
During the reload the host table will be updated to contain the new data. The iprep information is versioned. When the
reload is complete, Suricata will automatically clean up the old iprep information.
Only the reputation files will be reloaded, the categories file won't be. If categories change, Suricata should be restarted.
File format
The format of the reputation files is described in the IP Reputation Format page.
Categories file
The categories file provides a mapping between a category number, short name, and long description. It's a simple
CSV file:
<id>,<short name>,<description>
Example:
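A hypothetical categories file following that format (the category names are chosen for illustration):

```
1,BadHosts,Known bad hosts
2,Spam,Known spam sources
6,SharedHosting,Known shared hosting hosts
```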
Reputation file
The reputation file lists a reputation score for hosts in the categories. It's a simple CSV file:
<ip>,<category>,<reputation score>
The IP is an IPv4 address in the quad-dotted notation or an IPv6 address. Both IP types support networks in CIDR
notation. The category is the number as defined in the categories file. The reputation score is the confidence that this
IP is in the specified category, represented by a number between 1 and 127 (0 means no data).
Example:
1.2.3.4,1,101
1.1.1.0/24,6,88
If an IP address has a score in multiple categories it should be listed in the file multiple times.
Example:
1.1.1.1,1,10
1.1.1.1,2,10
central database (The Hub) to collect, store and compile updated IP reputation details that are then distributed to user-
side sensor databases (Spokes) for inclusion in user security systems. The reputation data update frequency and security
action taken, is defined in the user security configuration.
The intent of IP Reputation is to allow sharing of intelligence regarding a vast number of IP addresses. This can be
positive or negative intelligence classified into a number of categories. The technical implementation requires three
major efforts; engine integration, the hub that redistributes reputation, and the communication protocol between hubs
and sensors. The hub will have a number of responsibilities. This will be a separate module running on a separate
system as any sensor. Most often it would run on a central database that all sensors already have communication with.
It will be able to subscribe to one or more external feeds. The local admin should be able to define the feeds to be
subscribed to, provide authentication credentials if required, and give a weight to that feed. The weight can be an
overall number or a by category weight. This will allow the admin to minimize the influence a feed has on their overall
reputation if they distrust a particular category or feed, or trust another implicitly. Feeds can be configured to accept
feedback or not and will report so on connect. The admin can override and choose not to give any feedback, but the
sensor should report these to the Hub upstream on connect. The hub will take all of these feeds and aggregate them
into an average single score for each IP or IP Block, and then redistribute this data to all local sensors as configured. It
should receive connections from sensors. The sensor will have to provide authentication and will provide feedback. The
hub should redistribute that feedback from sensors to all other sensors as well as up to any feeds that accept feedback.
The hub should also have an API to allow outside statistical analysis to be done to the database and fed back into the
stream. For instance a local site may choose to change the reputation on all Russian IP blocks, etc.
For more information about IP Reputation see IP Reputation Config, IP Reputation Keyword and IP Reputation Format.
FOURTEEN
INIT SCRIPTS
# suricata
description "Intrusion Detection System Daemon"
start on runlevel [2345]
stop on runlevel [!2345]
expect fork
exec suricata -D --pidfile /var/run/suricata.pid -c /etc/suricata/suricata.yaml -i eth1
FIFTEEN
suricata --build-info
If Suricata is running on a gateway and is meant to protect the computers behind that gateway, you are dealing with the first scenario: forwarding.
If Suricata has to protect the computer it is running on, you are dealing with the second scenario: host (see drawing 2).
These two ways of using Suricata can also be combined.
The easiest rule in case of the gateway-scenario to send traffic to Suricata is:
It is possible to set a queue number. If you do not, the queue number will be 0 by default.
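A sketch of such a rule, using the standard iptables NFQUEUE target (requires root):

```
sudo iptables -I FORWARD -j NFQUEUE
# with an explicit queue number:
sudo iptables -I FORWARD -j NFQUEUE --queue-num 3
```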
Imagine you want Suricata to check, for example, just TCP traffic, all incoming traffic on port 80, or all traffic on destination port 80; you can do so like this:
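Illustrative variants of the NFQUEUE rule for those three cases (sketches; requires root):

```
sudo iptables -I FORWARD -p tcp -j NFQUEUE              # just TCP traffic
sudo iptables -I INPUT -p tcp --dport 80 -j NFQUEUE     # all incoming traffic on port 80
sudo iptables -I FORWARD -p tcp --dport 80 -j NFQUEUE   # all traffic to destination port 80
```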
In this example, Suricata checks all packets for outgoing connections to port 80.
To see if you have set your iptables rules correctly, make sure Suricata is running and enter:
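For example, to list the rules with their packet counters:

```
sudo iptables -vnL
```

Rising counters on the NFQUEUE rules indicate that traffic is being sent to Suricata.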
This description of the use of iptables applies to IPv4. To use it with IPv6, all previously mentioned commands have to start with ip6tables. It is also possible to let Suricata check both kinds of traffic.
There is also a way to use iptables with multiple networks (and interface cards). Example:
The options -i (input) -o (output) can be combined with all previous mentioned options.
If you stop Suricata while these rules are active, internet traffic will not come through. To make the internet work correctly again, first delete all iptables rules.
To erase all iptables rules, enter:
sudo iptables -F
nft> add chain filter IPS { type filter hook forward priority 10;}
nft> add rule filter IPS iif eth0 oif eth1 queue
nft> add rule filter IPS iif eth1 oif eth0 queue
nft add rule filter IPS queue num 3-5 options fanout,bypass
This rule sends matching packets to 3 load-balanced queues starting at 3 and ending at 5. To get the packets in Suricata
with this setup, you need to specify multiple queues on command line:
suricata -q 3 -q 4 -q 5
For example, the following configuration will create a Suricata acting as IPS between interface eth0 and eth1:
af-packet:
- interface: eth0
threads: 1
defrag: no
cluster-type: cluster_flow
cluster-id: 98
copy-mode: ips
copy-iface: eth1
buffer-size: 64535
use-mmap: yes
- interface: eth1
threads: 1
cluster-id: 97
defrag: no
cluster-type: cluster_flow
copy-mode: ips
copy-iface: eth0
buffer-size: 64535
use-mmap: yes
This is a basic af-packet configuration using two interfaces. Interface eth0 will copy all received packets to eth1 because of the copy-* configuration variables:
copy-mode: ips
copy-iface: eth1
copy-mode: ips
copy-iface: eth0
There are some important points to consider when setting up this mode:
• The implementation of this mode depends on the zero copy mode of AF_PACKET, so you need to set use-mmap to yes on both interfaces.
• The MTU on both interfaces has to be equal: the copy from one interface to the other is direct, and packets bigger than the MTU will be dropped by the kernel.
• Set different values of cluster-id on both interfaces to avoid conflicts.
• Any network card offloading that creates datagrams bigger than the physical layer size (such as GRO, LRO, TSO) will result in dropped packets, as the transmit path can not handle them.
• Set stream.inline to auto or yes so Suricata switches to blocking mode.
The copy-mode variable can take the following values:
• ips: the drop keyword is honored and matching packets are dropped.
• tap: no drop occurs, Suricata acts as a bridge
Some specific care must be taken to scale the capture method over multiple threads. Because defrag can't be used (it would generate frames that are too big), the in-kernel load balancing will not be correct: an IP-only fragment will not reach the same thread as the full featured packet of the same flow, because the port information is not present in the fragment.
A solution is to use eBPF load balancing to get an IP pair load balancing without fragmentation. The AF_PACKET
IPS Configuration using multiple threads and eBPF load balancing looks like the following:
af-packet:
  - interface: eth0
    threads: 16
    defrag: no
    cluster-type: cluster_ebpf
    ebpf-lb-file: /usr/libexec/suricata/ebpf/lb.bpf
    cluster-id: 98
    copy-mode: ips
    copy-iface: eth1
    buffer-size: 64535
    use-mmap: yes
  - interface: eth1
    threads: 16
    cluster-id: 97
    defrag: no
    cluster-type: cluster_ebpf
    ebpf-lb-file: /usr/libexec/suricata/ebpf/lb.bpf
    copy-mode: ips
    copy-iface: eth0
    buffer-size: 64535
    use-mmap: yes
The eBPF file /usr/libexec/suricata/ebpf/lb.bpf may not be present on disk. See eBPF and XDP for more
information.
dpdk:
  eal-params:
    proc-type: primary
  interfaces:
    - interface: 0000:3b:00.1
      threads: 4
      promisc: true
      multicast: true
      checksum-checks: true
      checksum-checks-offload: true
      mempool-size: 262143
      mempool-cache-size: 511
      rx-descriptors: 4096
      tx-descriptors: 4096
      copy-mode: ips
      copy-iface: 0000:3b:00.0
    - interface: 0000:3b:00.0
      threads: 4
      promisc: true
      multicast: true
      checksum-checks: true
      checksum-checks-offload: true
      mempool-size: 262143
      mempool-cache-size: 511
      rx-descriptors: 4096
      tx-descriptors: 4096
      copy-mode: ips
      copy-iface: 0000:3b:00.1
      mtu: 3000

threading:
  set-cpu-affinity: yes
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]
    - worker-cpu-set:
        cpu: [ 2,4,6,8,10,12,14,16 ]
mode rings to the network protocol stack and finally, to the network interface card. Suricata receives packets from
the host stack mode rings and, in IPS mode, places packets to be transmitted into the host stack mode rings. Packets
transmitted by Suricata into the host stack mode rings are available for other host OS applications.
Paired network interfaces are specified in the netmap configuration section. For example, the following configuration
will set up Suricata as an IPS between interfaces enp6s0f0 and enp6s0f1:
netmap:
  - interface: enp6s0f0
    threads: auto
    copy-mode: ips
    copy-iface: enp6s0f1
  - interface: enp6s0f1
    threads: auto
    copy-mode: ips
    copy-iface: enp6s0f0
You can specify the threads value; the default value of auto will create a thread for each queue supported by the
NIC. Restrict the thread count by specifying a value, e.g., threads: 1.
This is a basic netmap configuration using two interfaces. Suricata will copy packets between interfaces enp6s0f0
and enp6s0f1 because of the copy-* configuration variables in interface enp6s0f0's configuration:
copy-mode: ips
copy-iface: enp6s0f1
copy-mode: ips
copy-iface: enp6s0f0
The host stack mode feature of Netmap can also be used; host stack mode doesn't require a second network interface.
This example demonstrates host stack mode with a single physical network interface, enp6s0f0:
- interface: enp6s0f0
  copy-mode: ips
  copy-iface: enp6s0f0^
- interface: enp6s0f0^
  copy-mode: ips
  copy-iface: enp6s0f0
SIXTEEN
This guide explains how to work with Suricata in layer 4 inline mode using WinDivert on Windows.
First start by compiling Suricata with WinDivert support. For instructions, see Windows Installation. This documentation
has not yet been updated with WinDivert information, so make sure to add the following flags before configuring
Suricata with configure:
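The flags themselves are not shown at this point in the guide. As a sketch, they follow the pattern Suricata uses for other optional features; the include and library paths below are placeholders, so adjust them to wherever the WinDivert SDK was unpacked, and verify the flag names against ./configure --help:

```shell
./configure --enable-windivert \
    --with-windivert-include=/path/to/WinDivert/include \
    --with-windivert-libraries=/path/to/WinDivert/lib
```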
WinDivert.dll and WinDivert.sys must be in the same directory as the Suricata executable. WinDivert automatically
installs the driver when it is run. For more information about WinDivert, see https://www.reqrypt.org/windivert-doc.
html.
To check if you have WinDivert enabled in your Suricata, enter the following command in an elevated command prompt
or terminal:
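The command itself is missing here; based on Suricata's command-line conventions, a sketch (the --windivert option takes a WinDivert filter string, shown as a placeholder):

```shell
suricata.exe -c suricata.yaml --windivert [filter string]
```

If WinDivert support was not compiled in, Suricata will error out instead of starting the capture.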
The filter is automatically stopped and normal traffic resumes when Suricata is stopped.
A quick start is to examine all traffic, in which case you can use the following command:
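A sketch of such a command, assuming the WinDivert filter string "true", which matches all traffic in WinDivert's filter language:

```shell
suricata.exe -c suricata.yaml --windivert "true"
```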
Suricata User Guide, Release 8.0.0-dev
SEVENTEEN
OUTPUT
17.1 EVE
17.1.1 Eve JSON Output
The EVE output facility outputs alerts, anomalies, metadata, file info and protocol specific records through JSON.
The most common way to use this is through 'EVE', which is a firehose approach where all these logs go into a single
file.
outputs:
# Extensible Event Format (nicknamed EVE) event log in JSON format
- eve-log:
enabled: yes
filetype: regular #regular|syslog|unix_dgram|unix_stream|redis
filename: eve.json
# Enable for multi-threaded eve.json output; output files are amended with
# an identifier, e.g., eve.9.json
#threaded: false
#prefix: "@cee: " # prefix to prepend to each log entry
# the following are valid when type: syslog above
#identity: "suricata"
#facility: local5
#level: Info ## possible levels: Emergency, Alert, Critical,
## Error, Warning, Notice, Info, Debug
#ethernet: no # log ethernet header in events when available
#redis:
# server: 127.0.0.1
# port: 6379
# async: true ## if redis replies are read asynchronously
# mode: list ## possible values: list|lpush (default), rpush, channel|publish, xadd|stream
# ## lpush and rpush are using a Redis list. "list" is an alias for lpush
# include the name of the input pcap file in pcap file processing mode
pcap-file: false
# Community Flow ID
# Adds a 'community-id' field to EVE records. These are meant to give
# records a predictable flow ID that can be used to match records to
# output of other tools such as Zeek (Bro).
#
# Takes a 'seed' that needs to be the same across sensors and tools
# to make the id less predictable.
types:
- alert:
- anomaly:
# Anomaly log records describe unexpected conditions such
# as truncated packets, packets with invalid IP/UDP/TCP
# length values, and other events that render the packet
# invalid for further processing or describe unexpected
# behavior on an established stream. Networks which
# experience high occurrences of anomalies may experience
- files:
force-magic: no # force logging magic on all logged files
# force logging of checksums, available hash functions are md5,
# sha1 and sha256
#force-hash: [md5]
#- drop:
# alerts: yes # log alerts that caused drops
# flows: all # start or all: 'start' logs only a single drop
# # per flow direction. All logs each dropped pkt.
# Enable logging the final action taken on a packet by the engine
# (will show more information in case of a drop caused by 'reject')
# verdict: yes
- smtp:
#extended: yes # enable this for extended logging information
# this includes: bcc, message-id, subject, x_mailer, user-agent
# custom fields logging from the list:
# reply-to, bcc, message-id, subject, x-mailer, user-agent, received,
# x-originating-ip, in-reply-to, references, importance, priority,
# sensitivity, organization, content-md5, date
#custom: [received, x-mailer, x-originating-ip, relays, reply-to, bcc]
# output md5 of fields: body, subject
# for the body you need to set app-layer.protocols.smtp.mime.body-md5
# to yes
#md5: [body, subject]
#- dnp3
- websocket
- ftp
- ftp-data
- rdp
Each alert, http log, etc. will go into this one file: 'eve.json'. This file can then be processed by 3rd-party tools like
Logstash (ELK) or jq.
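Because eve.json holds one standalone JSON object per line, it can be processed line by line from any language. A minimal sketch in Python, tallying records by event_type (the sample lines are synthetic stand-ins, not real Suricata output):

```python
import json
from collections import Counter

def count_event_types(lines):
    """Tally EVE records by event_type; eve.json holds one JSON object per line."""
    counts = Counter()
    for line in lines:
        line = line.strip()
        if line:
            counts[json.loads(line).get("event_type")] += 1
    return counts

# Synthetic stand-ins for real eve.json lines:
sample = [
    '{"event_type": "alert"}',
    '{"event_type": "dns"}',
    '{"event_type": "dns"}',
]
tally = count_event_types(sample)
```

In practice, pass `open("eve.json")` instead of the sample list; the file object iterates line by line without loading the whole log into memory.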
If ethernet is set to yes, then ethernet headers will be added to events if available.
Output types
EVE can output to multiple methods. regular is a normal file. Other options are syslog, unix_dgram, unix_stream
and redis.
Output types:
# ## lpush and rpush are using a Redis list. "list" is an alias for lpush
# ## publish is using a Redis channel. "channel" is an alias for publish
# ## xadd is using a Redis stream. "stream" is an alias for xadd
# key: suricata ## string denoting the key/channel/stream to use (default to suricata)
# stream-maxlen: 100000 ## Automatically trims the stream length to at most
## this number of events. Set to 0 to disable trimming.
## Only used when mode is set to xadd/stream.
# stream-trim-exact: false ## Trim exactly to the maximum stream length above.
## Default: use inexact trimming (inexact by a few
## tens of items)
## Only used when mode is set to xadd/stream.
# Redis pipelining setup. With this, a query is issued only every
# 'batch-size' events. This should lower the latency induced by the network
# connection at the cost of some memory. There is no flushing implemented
Alerts
Alerts are event records for rule matches. They can be amended with metadata, such as the application layer record
(HTTP, DNS, etc) an alert was generated for, and elements of the rule.
Metadata:
- alert:
#payload: yes # enable dumping payload in Base64
#payload-buffer-size: 4kb # max size of payload buffer to output in eve-log
#payload-printable: yes # enable dumping payload in printable (lossy) format
#payload-length: yes # enable dumping payload length, including the gaps
#packet: yes # enable dumping of packet (without stream segments)
#http-body: yes # Requires metadata; enable dumping of http body in Base64
#http-body-printable: yes # Requires metadata; enable dumping of http body in printable format
# metadata:
#rule:
# Log the metadata field from the rule in a structured
# format.
#metadata: true
Anomaly
Anomalies are event records created when packets with unexpected or anomalous values are handled. These events
include conditions such as incorrect protocol values, incorrect protocol length values, and other conditions which render
the packet suspect. Other conditions may occur during the normal progression of a stream; these are termed stream
events and include control sequences with incorrect values or that occur out of the expected sequence.
Anomalies are reported and configured by type:
• Decode
• Stream
• Application layer
Metadata:
- anomaly:
# Anomaly log records describe unexpected conditions such as truncated packets,
# packets with invalid IP/UDP/TCP length values, and other events that render
# the packet invalid for further processing or describe unexpected behavior on
# an established stream. Networks which experience high occurrences of
# anomalies may experience packet processing degradation.
#
# Anomalies are reported for the following:
# 1. Decode: Values and conditions that are detected while decoding individual
# packets. This includes invalid or unexpected values for low-level protocol
# lengths as well.
# 2. Stream: This includes stream related events (TCP 3-way handshake issues,
# unexpected sequence number, etc).
# 3. Application layer: These denote application layer specific conditions that
# are unexpected, invalid or are unexpected given the application monitoring
# state.
#
# By default, anomaly logging is disabled. When anomaly logging is enabled,
# application-layer anomaly reporting is enabled.
#
# Choose one or both types of anomaly logging and whether to enable
# logging of the packet header for packet anomalies.
types:
#decode: no
#stream: no
#applayer: yes
#packethdr: no
HTTP
HTTP transaction logging.
Config:
- http:
extended: yes # enable this for extended logging information
# custom allows additional http fields to be included in eve-log
# the example below adds three additional fields when uncommented
#custom: [Accept-Encoding, Accept-Language, Authorization]
# set this value to one among {both, request, response} to dump all
# http headers for every http request and/or response
# dump-all-headers: [both, request, response]
In the custom option, values from both columns can be used. The HTTP Header column is case insensitive.
DNS
ò Note
DNS records are logged as one entry for the request, and one entry for the response.
YAML:
- dns:
#version: 3
TLS
TLS records are logged one record per session.
YAML:
- tls:
extended: yes # enable this for extended logging information
# custom allows to control which tls fields that are included
# in eve-log
#custom: [subject, issuer, serial, fingerprint, sni, version, not_before, not_after, certificate, chain, ja3, ja3s, ja4]
The default is to log certificate subject and issuer. If extended is enabled, then the log gets more verbose.
By using custom it is possible to select which TLS fields to log. Note that this will disable extended logging.
ARP
ARP records are logged as one entry for the request, and one entry for the response.
YAML:
- arp:
enabled: no
The logger is disabled by default since ARP can generate a large number of events.
MQTT
EVE-JSON output for MQTT consists of one object per MQTT transaction, with some common and various type-
specific fields. Two aspects can be configured:
YAML:
- mqtt:
# passwords: yes # enable output of passwords
# string-log-limit: 1kb # limit size of logged strings in bytes.
# Can be specified in kb, mb, gb. Just a number
# is parsed as bytes. Default is 1KB.
# Use a value of 0 to disable limiting.
# Note that the size is also bounded by
# the maximum parsed message size (see
# app-layer configuration)
By default, passwords are not logged and logged strings are limited to 1KB. Depending on the kind of
context the parser is used in (public output, frequent binary transmissions, ...) this can be configured for regular mqtt
events.
Drops
Drops are event types logged when the engine drops a packet.
Config:
- drop:
alerts: yes # log alerts that caused drops
flows: all # start or all: 'start' logs only a single drop
# per flow direction. All logs each dropped pkt.
# Enable logging the final action taken on a packet by the engine
# (will show more information in case of a drop caused by 'reject')
verdict: yes
Stats
Zero-valued Counters
While the human-friendly stats.log output only logs non-zero counters, by default the EVE stats output logs all
enabled counters, which may lead to fairly verbose logs.
To reduce log file size, one may set null-values to false. Do note that this may impact the visibility of information
for which a zero-valued stats counter is relevant.
Config:
- stats:
# Don't log stats counters that are zero. Default: true
#null-values: false # False will NOT log stats counters: 0
outputs:
- eve-log:
filename: eve-%s.json
The example above adds epoch time to the filename. All the date modifiers from the C library should be supported.
See the man page for strftime for all supported modifiers.
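The expansion can be previewed from any language with strftime support. A quick Python sketch using the template from the example above and an arbitrary fixed timestamp:

```python
from datetime import datetime, timezone

# Expand the same strftime-style modifiers Suricata substitutes into 'filename'.
template = "eve-%Y-%m-%d-%H:%M.json"
ts = datetime(2023, 9, 18, 6, 13, tzinfo=timezone.utc)
filename = ts.strftime(template)
# -> "eve-2023-09-18-06:13.json"
```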
outputs:
- eve-log:
filename: eve.json
threaded: on
This example will cause each Suricata thread to write to its own "eve.json" file. Filenames are constructed by adding
a unique identifier to the filename. For example, eve.7.json.
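Tools that consume these per-thread files can reconstruct the naming scheme; a small sketch of the identifier placement (an illustration of the pattern, not code from Suricata):

```python
from pathlib import Path

def threaded_name(filename: str, thread_id: int) -> str:
    """Insert a thread identifier before the extension, e.g. eve.json -> eve.7.json."""
    p = Path(filename)
    return f"{p.stem}.{thread_id}{p.suffix}"

name = threaded_name("eve.json", 7)
# -> "eve.7.json"
```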
outputs:
- eve-log:
filename: eve-%Y-%m-%d-%H:%M.json
rotate-interval: minute
The example above creates a new log file each minute, where the filename contains a timestamp. Other supported
rotate-interval values are hour and day.
In addition to this, it is also possible to specify the rotate-interval as a relative value. One example is to rotate the
log file each X seconds.
outputs:
- eve-log:
filename: eve-%Y-%m-%d-%H:%M:%S.json
rotate-interval: 30s
The example above rotates eve-log each 30 seconds. This could be replaced with 30m to rotate every 30 minutes, 30h
to rotate every 30 hours, 30d to rotate every 30 days, or 30w to rotate every 30 weeks.
outputs:
- eve-log:
enabled: yes
type: file
filename: eve-ips.json
types:
- alert
- drop
- eve-log:
enabled: yes
type: file
filename: eve-nsm.json
types:
- http
- dns
- tls
So here the alerts and drops go into 'eve-ips.json', while http, dns and tls go into 'eve-nsm.json'.
With the exception of drop, you can use multiple instances of the same logger type; drop can only be used once.
ò Note
The use of independent json loggers such as alert-json-log, dns-json-log, etc. has been deprecated and will be
removed by June 2020. Please use multiple eve-log instances as documented above instead. Please see the deprecation
policy for more information.
File permissions
Log file permissions can be set individually for each logger. filemode can be used to control the permissions of a log
file, e.g.:
outputs:
- eve-log:
enabled: yes
filename: eve.json
filemode: 600
The example above sets the file permissions on eve.json to 600, which means that it is only readable and writable by
the owner of the file.
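The octal mode works the same way as with chmod; a short sketch showing the effect of mode 600 on a scratch file (POSIX semantics assumed):

```python
import os
import stat
import tempfile

# Create a scratch file and restrict it the way 'filemode: 600' would:
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)  # read/write for the owner, nothing for group/other
mode = stat.S_IMODE(os.stat(path).st_mode)
os.unlink(path)
```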
JSON flags
Several flags can be specified to control the JSON output in EVE:
outputs:
- eve-log:
json:
# Sort object keys in the same order as they were inserted
All these flags are enabled by default, and can be modified per EVE instance.
Community Flow ID
Often Suricata is used in combination with other tools like Bro/Zeek. Enabling the community-id option in the eve-log
section adds a new community_id field to each output.
Example:
{
  "timestamp": "2003-12-16T13:21:44.891921+0000",
  "flow_id": 1332028388187153,
  "pcap_cnt": 1,
  "event_type": "alert",
  ...
  "community_id": "1:LQU9qZlK+B5F3KDmev6m5PMibrg=",
  "alert": {
    "action": "allowed",
    "gid": 1,
    "signature_id": 1
  }
}
{
  "timestamp": "2003-12-16T13:21:45.037333+0000",
  "flow_id": 1332028388187153,
  "event_type": "flow",
  "flow": {
    "pkts_toserver": 5,
    "pkts_toclient": 4,
    "bytes_toserver": 338,
    "bytes_toclient": 272,
    "start": "2003-12-16T13:21:44.891921+0000",
    "end": "2003-12-16T13:21:45.346457+0000",
    "age": 1,
    "state": "closed",
    "reason": "shutdown",
    "alerted": true
  },
  "community_id": "1:LQU9qZlK+B5F3KDmev6m5PMibrg="
}
Options
- eve-log:
# Community Flow ID
# Adds a 'community_id' field to EVE records. These are meant to give
# records a predictable flow ID that can be used to match records to
# output of other tools such as Bro.
#
# Takes a 'seed' that needs to be the same across sensors and tools
# to make the id less predictable.
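For cross-checking IDs offline, here is a sketch of version 1 of the algorithm as described in the public community-id specification. This is an independent illustration for IPv4 tuples, not Suricata's implementation:

```python
import base64
import hashlib
import socket
import struct

def community_id_v1(seed, saddr, daddr, proto, sport, dport):
    """Sketch of the version-1 Community Flow ID for IPv4 port-bearing flows."""
    src = socket.inet_aton(saddr)
    dst = socket.inet_aton(daddr)
    # Order the endpoints so both directions of a flow hash to the same ID.
    if (src, sport) > (dst, dport):
        src, dst, sport, dport = dst, src, dport, sport
    data = struct.pack("!H", seed) + src + dst
    data += struct.pack("!BBHH", proto, 0, sport, dport)
    return "1:" + base64.b64encode(hashlib.sha1(data).digest()).decode()

# The same flow hashed in either direction yields one ID:
a = community_id_v1(0, "128.232.110.120", "66.35.250.204", 6, 34855, 80)
b = community_id_v1(0, "66.35.250.204", "128.232.110.120", 6, 80, 34855)
```

The resulting string is the version prefix "1:" followed by the base64 of a SHA-1 digest, which is why IDs from different tools match as long as the seed is the same.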
Multi Tenancy
Suricata can be configured to support multiple tenants with different detection engine configurations. When these
tenants are configured and the detection engine is running then all EVE logging will also report the tenant_id field
for traffic matching a specific tenant.
{
"timestamp": "2017-04-07T22:24:37.251547+0100",
"flow_id": 586497171462735,
"pcap_cnt": 53381,
"event_type": "alert",
"src_ip": "192.168.2.14",
"src_port": 50096,
"dest_ip": "209.53.113.5",
"dest_port": 80,
"proto": "TCP",
"metadata": {
"flowbits": [
"http.dottedquadhost"
]
},
"tx_id": 4,
"alert": {
"action": "allowed",
"gid": 1,
"signature_id": 2018358,
Common Section
All the JSON log types share a common structure:
Field: flow_id
Correlates the network protocol, flow logs EVE data and any evidence that Suricata has logged to an alert event and
that alert's metadata, as well as to fileinfo/file transaction and anomaly logs, if available. The same correlation and
logs are produced for any session/flow, regardless of whether there is an alert.
The ability to correlate EVE logs belonging to a specific session/flow was introduced in 2014 (see commit
f1185d051c21).
Further below, you can see several examples of events logged by Suricata: an alert for an HTTP rule, fileinfo, http,
anomaly, and flow events, all easily correlated using the flow_id EVE field:
$ jq 'select(.flow_id==1676750115612680)' eve.json
{
"timestamp": "2023-09-18T06:13:41.532140+0000",
"flow_id": 1676750115612680,
"pcap_cnt": 130,
"event_type": "alert",
"src_ip": "142.11.240.191",
"src_port": 35361,
"dest_ip": "192.168.100.237",
"dest_port": 49175,
"proto": "TCP",
"pkt_src": "wire/pcap",
"ether": {
"src_mac": "52:54:00:36:3e:ff",
"dest_mac": "12:a9:86:6c:77:de"
},
"tx_id": 1,
"alert": {
"action": "allowed",
"gid": 1,
"signature_id": 2045001,
"rev": 1,
"signature": "ET ATTACK_RESPONSE Win32/LeftHook Stealer Browser Extension Config [...]"
{
"timestamp": "2023-09-18T06:13:33.903924+0000",
"flow_id": 1676750115612680,
"pcap_cnt": 70,
"event_type": "fileinfo",
"src_ip": "192.168.100.237",
"src_port": 49175,
"dest_ip": "142.11.240.191",
"dest_port": 35361,
"proto": "TCP",
"pkt_src": "wire/pcap",
"ether": {
"src_mac": "12:a9:86:6c:77:de",
"dest_mac": "52:54:00:36:3e:ff"
},
"http": {
"hostname": "142.11.240.191",
"http_port": 35361,
"url": "/",
"http_content_type": "text/xml",
"http_method": "POST",
"protocol": "HTTP/1.1",
"status": 200,
"length": 212
},
"app_proto": "http",
"fileinfo": {
"filename": "/",
"gaps": false,
"state": "CLOSED",
"stored": false,
"size": 137,
"tx_id": 0
}
}
{
"timestamp": "2023-09-18T06:13:33.903924+0000",
{
"timestamp": "2023-09-18T06:13:58.882971+0000",
"flow_id": 1676750115612680,
"pcap_cnt": 2878,
"event_type": "anomaly",
"src_ip": "192.168.100.237",
"src_port": 49175,
"dest_ip": "142.11.240.191",
"dest_port": 35361,
"proto": "TCP",
"pkt_src": "wire/pcap",
"ether": {
"src_mac": "12:a9:86:6c:77:de",
"dest_mac": "52:54:00:36:3e:ff"
},
"tx_id": 3,
"anomaly": {
"app_proto": "http",
"type": "applayer",
"event": "UNABLE_TO_MATCH_RESPONSE_TO_REQUEST",
"layer": "proto_parser"
}
}
{
"timestamp": "2023-09-18T06:13:21.216460+0000",
ò Note
It is possible to have even more detailed alert records, by enabling for instance logging http-body, or alert metadata
(alert output).
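The jq selection shown earlier can be reproduced in a few lines of Python; a sketch (the sample records below are synthetic stand-ins for real eve.json lines):

```python
import json

def events_for_flow(lines, flow_id):
    """Collect all EVE records sharing a flow_id, like jq 'select(.flow_id==N)'."""
    records = []
    for line in lines:
        line = line.strip()
        if line:
            record = json.loads(line)
            if record.get("flow_id") == flow_id:
                records.append(record)
    return records

# Synthetic stand-ins for real eve.json lines:
sample = [
    '{"flow_id": 1676750115612680, "event_type": "alert"}',
    '{"flow_id": 1676750115612680, "event_type": "fileinfo"}',
    '{"flow_id": 42, "event_type": "flow"}',
]
correlated = events_for_flow(sample, 1676750115612680)
```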
Event types
The common part has a field "event_type" to indicate the log type.
"event_type":"TYPE"
When an application layer protocol event is detected, the common section will have an app_proto field.
"app_proto": "http"
PCAP fields
"pcap_cnt": 123
pcap_cnt contains the packet number in the pcap. This can be used to look up a packet in Wireshark for example.
"pcap_filename":"/path/to/file.pcap"
pcap_filename contains the file name and location of the pcap that generated the event.
ò Note
the pcap fields are only available on "real" packets, and are omitted from internal "pseudo" packets such as flow
timeout packets.
"alert": {
"action": "allowed",
"gid": 1,
"signature_id": 2024056,
"rev": 4,
"signature": "ET MALWARE Win32/CryptFile2 / Revenge Ransomware Checkin M3",
"category": "Malware Command and Control Activity Detected",
"severity": 1,
"metadata": {
"affected_product": [
"Windows_XP_Vista_7_8_10_Server_32_64_Bit"
],
"attack_target": [
"Client_Endpoint"
],
"created_at": [
Action field
"action":"allowed"
Action is set to "allowed" unless a rule used the "drop" action and Suricata is in IPS mode, or when the rule used the
"reject" action. It is important to note that this does not necessarily indicate the final verdict for a given packet or flow,
since one packet may match on several rules.
Verdict
An object containing info on the final action that will be applied to a given packet, based on all the signatures triggered
by it and other possible events (e.g., a flow drop). For that reason, it is possible for an alert with an action allowed to
have a verdict drop, in IPS mode, for instance, if that packet was dropped due to a different alert.
• Action: alert, pass, drop (the latter only occurs in IPS mode)
• Reject-target: to_server, to_client, both (only occurs for 'reject' rules)
• Reject: an array of strings with possible reject types: tcp-reset, icmp-prohib (only occurs for 'reject' rules)
Example:
"verdict": {
  "action": "drop",
  "reject-target": "to_client",
  "reject": ["icmp-prohib"]
}
Pcap Field
If pcap log capture is active in multi mode, a capture_file key will be added to the event with value being the full path
of the pcap file where the corresponding packets have been extracted.
Fields
• "type": Either "decode", "stream" or "applayer". In rare cases, type will be "unknown". When this occurs, an
additional field named "code" will be present. Events with type "applayer" are detected by the application layer
parsers.
• "event" The name of the anomalous event. Events of type "decode" are prefixed with "decoder"; events of type
"stream" are prefixed with "stream".
• "code" If "type" is "unknown", then "code" contains the unrecognized event code. Otherwise, this field is not
present.
The following field is included when "type" has the value "applayer":
• "layer" Indicates the handling layer that detected the event. This will be "proto_parser" (protocol parser),
"proto_detect" (protocol detection) or "parser."
When packethdr is enabled, the first 32 bytes of the packet are included as a base64-encoded blob in the main part of
the record. This applies to events of "type" "packet" or "stream" only.
Examples
"anomaly": {
"type": "decode",
"event": "decoder.icmpv4.unknown_type"
}
"anomaly": {
"type": "decode",
"event": "decoder.udp.pkt_too_small"
}
"anomaly": {
"type": "decode",
"event": "decoder.ipv4.wrong_ip_version"
}
"anomaly": {
"type": "stream",
"event": "stream.pkt_invalid_timestamp"
}
{
"timestamp": "2016-01-11T05:10:54.612110-0800",
"flow_id": 412547343494194,
"pcap_cnt": 1391293,
"event_type": "anomaly",
"src_ip": "192.168.122.149",
"src_port": 49324,
"dest_ip": "69.195.71.174",
"dest_port": 443,
"proto": "TCP",
"app_proto": "tls",
"anomaly": {
"type": "applayer",
"event": "APPLAYER_DETECT_PROTOCOL_ONLY_ONE_DIRECTION",
"layer": "proto_detect"
}
}
{
"timestamp": "2016-01-11T05:10:52.828802-0800",
"flow_id": 201217772575257,
"pcap_cnt": 1391281,
"event_type": "anomaly",
"src_ip": "192.168.122.149",
"src_port": 49323,
"dest_ip": "69.195.71.174",
"dest_port": 443,
"proto": "TCP",
"tx_id": 0,
"app_proto": "tls",
"anomaly": {
"type": "applayer",
"event": "INVALID_RECORD_TYPE",
"layer": "proto_parser"
}
}
- eve-log:
enabled: yes
type: file #file|syslog|unix_dgram|unix_stream
filename: eve.json
# the following are valid when type: syslog above
#identity: "suricata"
#facility: local5
#level: Info ## possible levels: Emergency, Alert, Critical,
## Error, Warning, Notice, Info, Debug
types:
- alert
- http:
extended: yes # enable this for extended logging information
# custom allows additional http fields to be included in eve-log
# the example below adds three additional fields when uncommented
#custom: [Accept-Encoding, Accept-Language, Authorization]
custom: [accept, accept-charset, accept-encoding, accept-language,
accept-datetime, authorization, cache-control, cookie, from,
max-forwards, origin, pragma, proxy-authorization, range, te, via,
x-requested-with, dnt, x-forwarded-proto, accept-range, age,
allow, connection, content-encoding, content-language,
content-length, content-location, content-md5, content-range,
content-type, date, etags, expires, last-modified, link, location,
proxy-authenticate, referer, refresh, retry-after, server,
The benefit here of using the extended logging is to see, for example, whether this action was a POST, or whether a download
of an executable actually returned any bytes.
It is also possible to dump every header for HTTP requests, responses, or both via the keyword dump-all-headers.
Examples
"http": {
"hostname": "www.digip.org",
"url" :"\/jansson\/releases\/jansson-2.6.tar.gz",
"http_user_agent": "<User-Agent>",
"http_content_type": "application\/x-gzip"
}
In case the hostname shows a port number, such as in case there is a header "Host: www.test.org:1337":
"http": {
"http_port": 1337,
"hostname": "www.test.org",
"url" :"\/this\/is\/test.tar.gz",
"http_user_agent": "<User-Agent>",
"http_content_type": "application\/x-gzip"
}
"http": {
"hostname": "direkte.vg.no",
"url":".....",
"http_user_agent": "<User-Agent>",
"http_content_type": "application\/json",
"http_refer": "http:\/\/www.vg.no\/",
"http_method": "GET",
"protocol": "HTTP\/1.1",
"status":"200",
"length":310
}
"http": {
"hostname": "test.co.uk",
"url":"\/test\/file.json",
"http_user_agent": "<User-Agent>",
"http_content_type": "application\/json",
"http_refer": "http:\/\/www.test.com\/",
"http_method": "GET",
"protocol": "HTTP\/1.1",
ò Note
Suricata 7 style DNS logging can be retained by setting the version field to 2, however this will be removed in
Suricata 9.
Fields
• "tc": Truncation flag in a DNS answer (ex: true if set)
• "rd": Recursion Desired flag in a DNS answer (ex: true if set)
• "ra": Recursion Available flag in a DNS answer (ex: true if set)
• "z": Reserved bit in a DNS answer (ex: true if set)
• "rcode": (ex: NOERROR)
• "ttl": Time-To-Live for this resource record
• "queries": A list of query objects
• "answers": A list of answer objects
• "authorities": A list of authority objects
• "additionals": A list of additional objects
More complex DNS record types may log additional fields for resource data:
• "soa": Section containing fields for the SOA (start of authority) record type
– "mname": Primary name server for this zone
– "rname": Authority's mailbox
– "serial": Serial version number
– "refresh": Refresh interval (seconds)
– "retry": Retry interval (seconds)
– "expire": Upper time limit until zone is no longer authoritative (seconds)
– "minimum": Minimum ttl for records in this zone (seconds)
• "sshfp": section containing fields for the SSHFP (ssh fingerprint) record type
– "fingerprint": Hex format of the fingerprint (ex: 12:34:56:78:9a:bc:de:...)
– "algo": Algorithm number (ex: 1 for RSA, 2 for DSS)
– "type": Fingerprint type (ex: 1 for SHA-1)
• "srv": section containing fields for the SRV (location of services) record type
– "target": Domain name of the target host (ex: foo.bar.baz)
– "priority": Target priority (ex: 20)
– "weight": Weight for target selection (ex: 1)
– "port": Port on this target host of this service (ex: 5060)
One can control which RR types are logged by using the "types" field in the suricata.yaml file. If this field is not
specified, all RR types are logged. More than 50 values can be specified with this field as shown below:
Configuration:
- eve-log:
enabled: yes
type: file #file|syslog|unix_dgram|unix_stream
filename: eve.json
# the following are valid when type: syslog above
#identity: "suricata"
#facility: local5
Examples
Example of a DNS query for the IPv4 address of "twitter.com" (resource record type 'A'):
"dns": {
"version": 3,
"type": "request",
"id": 16000,
"queries": [
{
"rrname": "twitter.com",
"rrtype": "A"
}
]
}
"dns": {
"version": 3,
"type": "answer",
"id": 45444,
"flags": "8180",
"qr": true,
"rd": true,
"dns": {
"version": 3,
"type": "answer",
"id": 18523,
"flags": "8180",
"qr": true,
"rd": true,
"ra": true,
"rcode": "NOERROR",
"grouped": {
"A": [
"192.0.78.24",
"192.0.78.25"
],
"CNAME": [
"suricata.io"
]
}
}
Examples
"ftp": {
"command": "RETR",
"command_data": "100KB.zip",
"reply": [
"Opening BINARY mode data connection for 100KB.zip (102400 bytes).",
"Transfer complete."
],
"completion_code": [
"150",
"226"
],
"ftp": {
"command": "EPRT",
"command_data": "|2|2a01:e34:ee97:b130:8c3e:45ea:5ac6:e301|41813|",
"reply": [
"EPRT command successful. Consider using EPSV."
],
"completion_code": [
"200"
],
"dynamic_port": 41813,
"mode": "active",
"reply_received": "yes"
}
Examples
"ftp_data": {
"filename": "temp.txt",
"command": "RETR"
}
Examples
"tls": {
"subject": "C=US, ST=California, L=Mountain View, O=Google Inc, CN=*.google.com",
"issuerdn": "C=US, O=Google Inc, CN=Google Internet Authority G2"
}
"tls": {
"session_resumed": true
}
"tls": {
"subject": "C=US, ST=California, L=Mountain View, O=Google Inc, CN=*.google.com",
"issuerdn": "C=US, O=Google Inc, CN=Google Internet Authority G2",
"serial": "0C:00:99:B7:D7:54:C9:F6:77:26:31:7E:BA:EA:7C:1C",
"fingerprint": "8f:51:12:06:a0:cc:4e:cd:e8:a3:8b:38:f8:87:59:e5:af:95:ca:cd",
"sni": "calendar.google.com",
"version": "TLS 1.2",
"notbefore": "2017-01-04T10:48:43",
"notafter": "2017-03-29T10:18:00"
}
Example of certificate logging using TLS custom logging (subject, sni, certificate):
"tls": {
"subject": "C=US, ST=California, L=Mountain View, O=Google Inc, CN=*.googleapis.com
"sni": "www.googleapis.com",
"certificate": "MIIE3TCCA8WgAwIBAgIIQPsvobRZN0gwDQYJKoZIhvcNAQELBQAwSTELMA [...]"
}
"tftp": {
"packet": "write",
"file": "rfc1350.txt",
"mode": "octet"
}
"smb": {
"id": 1,
"dialect": "unknown",
"command": "SMB2_COMMAND_CREATE",
"status": "STATUS_SUCCESS",
"status_code": "0x0",
"session_id": 4398046511201,
"tree_id": 1,
"filename": "atsvc",
"disposition": "FILE_OPEN",
"access": "normal",
File/pipe close:
"smb": {
"id": 15,
"dialect": "2.10",
"command": "SMB2_COMMAND_CLOSE",
"status": "STATUS_SUCCESS",
"status_code": "0x0",
"session_id": 4398046511121,
"tree_id": 1
}
"smb": {
"id": 1,
"dialect": "2.??",
"command": "SMB1_COMMAND_NEGOTIATE_PROTOCOL",
"status": "STATUS_SUCCESS",
"status_code": "0x0",
"session_id": 0,
"tree_id": 0,
"client_dialects": [
"PC NETWORK PROGRAM 1.0",
"LANMAN1.0",
"Windows for Workgroups 3.1a",
"LM1.2X002",
"LANMAN2.1",
"NT LM 0.12",
"SMB 2.002",
"SMB 2.???"
"request": {
"native_os": "Unix",
"native_lm": "Samba 3.9.0-SVN-build-11572"
},
"response": {
"native_os": "Windows (TM) Code Name \"Longhorn\" Ultimate 5231",
"native_lm": "Windows (TM) Code Name \"Longhorn\" Ultimate 6.0"
}
DCERPC fields
"smb": {
"id": 4,
"dialect": "unknown",
"command": "SMB2_COMMAND_IOCTL",
"status": "STATUS_SUCCESS",
"status_code": "0x0",
"session_id": 4398046511201,
"tree_id": 0,
"dcerpc": {
"request": "REQUEST",
"response": "RESPONSE",
"opnum": 0,
"req": {
"frag_cnt": 1,
"stub_data_size": 136
},
"res": {
"frag_cnt": 1,
"stub_data_size": 8
},
"call_id": 2
}
}
DCERPC BIND/BINDACK:
"smb": {
"id": 53,
"dialect": "2.10",
"command": "SMB2_COMMAND_WRITE",
"status": "STATUS_SUCCESS",
"status_code": "0x0",
"session_id": 35184439197745,
"tree_id": 1,
"dcerpc": {
"request": "BIND",
"response": "BINDACK",
"interfaces": [
{
"uuid": "12345778-1234-abcd-ef00-0123456789ac",
"version": "1.0",
"ack_result": 2,
"ack_reason": 0
},
{
"uuid": "12345778-1234-abcd-ef00-0123456789ac",
"version": "1.0",
"ack_result": 0,
"ack_reason": 0
},
{
"uuid": "12345778-1234-abcd-ef00-0123456789ac",
"version": "1.0",
• "transaction_id" (hex): the unique id of the transaction, generated by node making the request (a.k.a the querying
node). Same transaction_id is echoed back by responding nodes.
• "client_version" (hex): identifies the type and version of the bittorrent-dht client. Some implementations may be
missing this field.
Extra fields:
error
• "request_type" (string): the type of the request (a.k.a. the query). Included if this packet was a request
• "request": a request (a.k.a. a query) sent by the bittorrent-dht client
– "request.id" (hex): the node ID of the node which sent the request (20 bytes in network byte order)
– "request.target" (hex): the target node ID. Used by the find_node request_type
– "request.info_hash" (hex): info hash of target torrent (20 bytes). Used by the get_peers and an-
nounce_peer request_types
– "request.token" (hex): token key received from previous get_peers request. Used by the announce_peer
request type
– "request.implied_port" (num): 0 or 1, if 1 ignore provided port and use source port of UDP packet.
Used by the announce_peer request_type
– "request.port" (num): port on which peer will download torrent. Used by the announce_peer re-
quest_type
response
node object
Examples:
"bittorrent_dht": {
"transaction_id": "0c17",
"client_version": "5554b50c",
"response": {
"id": "42aeb304a0845b3b9ee089327b48967b8e87b2e2"
}
}
"bittorrent_dht": {
"transaction_id": "05e4",
"client_version": "4c540126",
"request_type": "get_peers",
"request": {
"id": "41aff1580119f074e2f537f231f12adf684f0d1f",
"info_hash": "19a6fcfcba6cc2c6d371eb754074d095adb5d291"
}
}
"bittorrent_dht": {
"transaction_id": "05e4",
"client_version": "555462d6",
"response": {
"id": "19a6f98be177e32e7b5bd77276d529f03e3ba8a9",
"values": [
{
"ip": "45.238.190.2",
"port": 6881
},
{
"ip": "185.70.52.245",
"port": 51215
},
{
"ip": "45.21.238.247",
"port": 55909
},
{
"ip": "62.28.248.195",
"port": 6881
}
],
"token": "c17094641ca8844d711120baecb2b5cf25435614"
}
}
"bittorrent_dht": {
"transaction_id": "44e6",
"client_version": "4c540126",
"request_type": "get_peers",
"request": {
"id": "41aff1580119f074e2f537f231f12adf684f0d1f",
"bittorrent_dht": {
"transaction_id": "44e6",
"response": {
"id": "19a7c8f4f6d14d9f87a67671720633e551f30cb7",
"values": [
{
"ip": "45.22.252.153",
"port": 36798
},
{
"ip": "94.41.206.37",
"port": 30850
},
{
"ip": "84.228.120.50",
"port": 6881
},
{
"ip": "178.81.206.84",
"port": 12373
},
{
"ip": "110.188.93.186",
"port": 22223
}
],
"token": "c897ee539e02a54595b4d7cfb6319ad48e71b282"
}
}
"bittorrent_dht": {
"transaction_id": "aa",
"request_type": "announce_peer",
"request": {
"id": "abcdefghij0123456789",
"info_hash": "mnopqrstuvwxyz123456",
"token": "aoeusnth",
"port": 6881
}
}
"bittorrent_dht": {
"transaction_id": "aa",
"response": {
"id": "mnopqrstuvwxyz123456"
}
}
"bittorrent_dht": {
"transaction_id": "7fe9",
"client_version": "4c540126",
"request_type": "announce_peer",
"request": {
"id": "51bc83f53417a62a40e8a48170cad369a13fef3c",
"info_hash": "19a6fcfcba6cc2c6d371eb754074d095adb5d291",
"token": "cacbef35",
"implied_port": 1,
"port": 54892
}
}
"bittorrent_dht": {
"transaction_id": "7fe9",
"client_version": "4c54012f",
"response": {
"id": "19a66dece45e0288ab75d141e0255738a1ce8508"
}
}
"bittorrent_dht": {
"transaction_id": "aa",
"error": {
"num": 201,
"msg": "A Generic Error Ocurred"
}
}
"bittorrent_dht": {
"transaction_id": "aa",
"error": {
"num": 203,
"msg": "Malformed Packet"
}
}
NTLMSSP fields
"ntlmssp": {
"domain": "VNET3",
"user": "administrator",
"smb": {
"id": 3,
"dialect": "NT LM 0.12",
"command": "SMB1_COMMAND_SESSION_SETUP_ANDX",
"status": "STATUS_SUCCESS",
"status_code": "0x0",
"session_id": 2048,
"tree_id": 0,
"ntlmssp": {
"domain": "VNET3",
"user": "administrator",
"host": "BLU",
"version": "60.230 build 13699 rev 188"
},
"request": {
"native_os": "Unix",
"native_lm": "Samba 3.9.0-SVN-build-11572"
},
"response": {
"native_os": "Windows (TM) Code Name \"Longhorn\" Ultimate 5231",
"native_lm": "Windows (TM) Code Name \"Longhorn\" Ultimate 6.0"
}
}
Kerberos fields
"smb": {
"dialect": "2.10",
"command": "SMB2_COMMAND_SESSION_SETUP",
"status": "STATUS_SUCCESS",
"status_code": "0x0",
"session_id": 35184439197745,
"tree_id": 0,
"kerberos": {
"realm": "CONTOSO.LOCAL",
"snames": [
"cifs",
"DC1.contoso.local"
]
}
}
• "proto_version": The protocol version transported with the ssh protocol (1.x, 2.x)
• "software_version": The software version used by end user
• "hassh.hash": MD5 of hassh algorithms of client or server
• "hassh.string": hassh algorithms of client or server
Hassh must be enabled in the Suricata config file (set 'app-layer.protocols.ssh.hassh' to 'yes').
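A minimal suricata.yaml fragment enabling it could look like this (sketch; surrounding keys abbreviated):

```yaml
app-layer:
  protocols:
    ssh:
      enabled: yes
      hassh: yes
```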
Example of SSH logging:
"ssh": {
"client": {
"proto_version": "2.0",
"software_version": "OpenSSH_6.7",
"hassh": {
"hash": "ec7378c1a92f5a8dde7e8b7a1ddf33d1",
"string": "curve25519-sha256,diffie-hellman-group14-sha256,diffie-hellman-
˓→group14-sha1,ext-info-c",
}
},
"server": {
"proto_version": "2.0",
"software_version": "OpenSSH_6.7",
"hassh": {
"hash": "ec7378c1a92f5a8dde7e8b7a1ddf33d1",
"string": "curve25519-sha256,[email protected],ecdh-sha2-nistp256",
}
}
}
• "state": display state of the flow (include "new", "established", "closed", "bypassed")
• "reason": mechanism that did trigger the end of the flow (include "timeout", "forced" and "shutdown")
• "alerted": "true" or "false" depending if an alert has been seen on flow
Example
"flow": {
"pkts_toserver": 23,
"pkts_toclient": 21,
"bytes_toserver": 4884,
"bytes_toclient": 7392,
"bypassed": {
"pkts_toserver": 10,
"pkts_toclient": 8,
"bytes_toserver": 1305,
"bytes_toclient": 984
},
"start": "2019-05-28T23:32:29.025256+0200",
"end": "2019-05-28T23:35:28.071281+0200",
"age": 179,
"bypass": "capture",
"state": "bypassed",
"reason": "timeout",
"alerted": false
}
The optional "cookie" field is a string identifier the RDP client has chosen to provide.
The optional "flags" field is a list of client directives. Possible values:
• "restricted_admin_mode_required"
• "redirected_authentication_mode_required"
• "correlation_info_present"
With this event, the initial RDP negotiation is complete in terms of tracking and logging.
The session will use TLS encryption.
The "x509_serials" field is a list of observed certificate serial numbers, e.g., "16ed2aa0495f259d4f5d99edada570d1".
Examples
RDP logging:
"rdp": {
"tx_id": 0,
"event_type": "initial_request",
"cookie": "A70067"
}
"rdp": {
"tx_id": 1,
"event_type": "initial_response"
}
"rdp": {
"tx_id": 2,
"event_type": "connect_request",
"client": {
"version": "v5",
"desktop_width": 1152,
"desktop_height": 864,
"color_depth": 15,
"keyboard_layout": "en-US",
"build": "Windows XP",
"client_name": "ISD2-KM84178",
"keyboard_type": "enhanced",
"function_keys": 12,
"product_id": 1,
"capabilities": [
"support_errinfo_pdf"
],
"id": "55274-OEM-0011903-00107"
},
"channels": [
"rdpdr",
"cliprdr",
"rdpsnd"
]
}
"rdp": {
"tx_id": 3,
"event_type": "connect_response"
}
"rdp": {
"tx_id": 0,
"event_type": "initial_request",
"cookie": "AWAKECODI"
}
"rdp": {
"tx_id": 1,
"event_type": "initial_response",
"server_supports": [
"extended_client_data"
],
"protocol": "hybrid"
}
"rdp": {
"tx_id": 2,
"event_type": "tls_handshake",
Examples
"rfb": {
"server_protocol_version": {
"major": "003",
"minor": "007"
},
"client_protocol_version": {
"major": "003",
"minor": "007"
},
"authentication": {
"security_type": 2,
"vnc": {
"challenge": "0805b790b58e967f2b350a0c99de3881",
"response": "aecb26faeaaa62179636a5934bac1078"
},
"security_result": "OK"
},
"screen_shared": false,
"framebuffer": {
Transactions
A single MQTT communication can consist of multiple messages that need to be exchanged between broker and client.
For example, some actions at higher QoS levels (> 0) usually involve a combination of requests and acknowledgement
messages that are linked by a common identifier:
• CONNECT followed by CONNACK
• PUBLISH followed by PUBACK (QoS 1) or PUBREC/PUBREL/PUBCOMP (QoS 2)
• SUBSCRIBE followed by SUBACK
• UNSUBSCRIBE followed by UNSUBACK
The MQTT parser merges individual messages into one EVE output item if they belong to one transaction. In such
cases, the source and destination information (IP/port) reflect the direction of the initial request, but contain messages
from both sides.
Example for a PUBLISH at QoS 2:
{
"timestamp": "2020-05-19T18:00:39.016985+0200",
"flow_id": 1454127794305760,
"pcap_cnt": 65,
"event_type": "mqtt",
"src_ip": "0000:0000:0000:0000:0000:0000:0000:0001",
"src_port": 60105,
"dest_ip": "0000:0000:0000:0000:0000:0000:0000:0001",
"dest_port": 1883,
"proto": "TCP",
"mqtt": {
"publish": {
"qos": 2,
Note that some message types (a.k.a. control packet types), such as PINGREQ and PINGRESP, have no type-specific data, nor do they have information that facilitates grouping into transactions. These will be logged as single items and only contain the common fields listed below.
Common fields
• "connect.protocol_string": Protocol string as defined in the spec, e.g. MQTT (MQTT 3.1.1 and later) or MQIsdp
(MQTT 3.1).
• "connect.protocol_version": Protocol version as defined in the specification:
– protocol version 3: MQTT 3.1
– protocol version 4: MQTT 3.1.1
– protocol version 5: MQTT 5.0
• "connect.flags.username", "connect.flags.password": Set to true if credentials are submitted with the connect
request.
"connect": {
"qos": 0,
"retain": false,
"dup": false,
"protocol_string": "MQTT",
"protocol_version": 5,
"flags": {
"username": true,
"password": true,
"will_retain": false,
"will": true,
"clean_session": true
},
"client_id": "client",
"username": "user",
"password": "pass",
"will": {
"topic": "willtopic",
"message": "willmessage",
"properties": {
"content_type": "mywilltype",
"correlation_data": "3c32aa4313b3e",
"message_expiry_interval": 133,
"payload_format_indicator": 144,
"response_topic": "response_topic1",
"userprop": "uservalue",
"will_delay_interval": 200
}
},
"properties": {
"maximum_packet_size": 11111,
"receive_maximum": 222,
"session_expiry_interval": 555,
"topic_alias_maximum": 666,
"connack": {
"qos": 0,
"retain": false,
"dup": false,
"session_present": false,
"return_code": 0,
"properties": {
"topic_alias_maximum": 10
}
}
"publish": {
"qos": 1,
"retain": false,
"dup": false,
"topic": "topic",
"message_id": 1,
"message": "baa baa sheep",
"properties": {
"content_type": "mytype",
"correlation_data": "3c32aa4313b3e",
"message_expiry_interval": 77,
"payload_format_indicator": 88,
"response_topic": "response_topic1",
"topic_alias": 5,
"puback": {
"qos": 0,
"retain": false,
"dup": false,
"message_id": 1,
"reason_code": 16
}
• "subscribe.message_id": (Only present if QOS level > 0) Message ID for this subscription.
• "subscribe.topics": Array of pairs describing the subscribed topics:
– "subscribe.topics[].topic": Topic to subscribe to.
– "subscribe.topics[].qos": QOS level to apply for when subscribing.
• "subscribe.properties": (Optional, MQTT 5.0) SUBSCRIBE properties set on this request. See 3.8.2.1 in the
spec for more information on SUBSCRIBE properties.
Example of MQTT SUBSCRIBE logging:
"subscribe": {
"qos": 1,
"retain": false,
"dup": false,
"message_id": 1,
"topics": [
{
"topic": "topicX",
"qos": 0
},
{
"topic": "topicY",
"qos": 0
}
]
}
"suback": {
"qos": 0,
"retain": false,
"dup": false,
"message_id": 1,
"qos_granted": [
0,
0
]
}
• "unsubscribe.message_id": (Only present if QOS level > 0) Message ID for this unsubscribe action.
• "unsubscribe.topics": Array of topics to be unsubscribed from.
• "unsubscribe.properties": (Optional, MQTT 5.0) UNSUBSCRIBE properties set on this request. See 3.10.2.1
in the spec for more information on UNSUBSCRIBE properties.
Example of MQTT UNSUBSCRIBE logging:
"unsubscribe": {
"qos": 1,
"retain": false,
"dup": false,
"message_id": 1,
"topics": [
"topicX",
"topicY"
]
}
"unsuback": {
"qos": 0,
"retain": false,
"dup": false,
"message_id": 1
}
• "auth.reason_code": Return code/reason code for this message. See 3.15.2.1 in the spec for more information on
these codes.
• "auth.properties": (Optional, MQTT 5.0) Properties set on this request. See 3.15.2.2 in the spec for more infor-
mation on these properties.
Example of MQTT AUTH logging:
"auth": {
"qos": 0,
"retain": false,
"dup": false,
"reason_code": 16
}
• "auth.reason_code": (Optional) Return code/reason code for this message. See 3.14.2.1 in the spec for more
information on these codes.
• "auth.properties": (Optional, MQTT 5.0) Properties set on this request. See 3.14.2.2 in the spec for more infor-
mation on DISCONNECT properties.
Example of MQTT DISCONNECT logging:
"disconnect": {
"qos": 0,
"retain": false,
"dup": false,
"reason_code": 4,
"properties": {
"session_expiry_interval": 122,
}
}
Messages exceeding the maximum message length limit (config setting app-layer.protocols.mqtt.max-msg-length) will not be parsed entirely, to reduce the danger of denial of service issues. In such cases, only reduced metadata will be included in the EVE-JSON output. Furthermore, since no message ID is parsed, such messages cannot be placed into transactions; hence, each will always appear as a single transaction.
These truncated events will -- besides basic communication metadata -- only contain the following fields:
• "truncated": Set to true if the entry is truncated.
• "skipped_length": Size of the original message.
Example of a truncated MQTT PUBLISH message (with 10000 being the maximum length):
{
"timestamp": "2020-06-23T16:25:48.729785+0200",
"flow_id": 1872904524326406,
"pcap_cnt": 107,
"event_type": "mqtt",
There are two fields, "request" and "response", which can each contain the same set of fields:
• "settings": a list of settings with "name" and "value"
• "headers": a list of headers with either "name" and "value", or "table_size_update", or "error" if any
• "error_code": the error code from GOAWAY or RST_STREAM, which can be "NO_ERROR"
• "priority": the stream priority
Examples
"http2": {
"request": {
"settings": [
{
"settings_id": "SETTINGSMAXCONCURRENTSTREAMS",
"settings_value": 100
},
{
"settings_id": "SETTINGSINITIALWINDOWSIZE",
"settings_value": 65535
}
]
},
"response": {}
}
"http2": {
"request": {
"headers": [
{
"name": ":authority",
"value": "localhost:3000"
},
{
"timestamp": "2021-11-24T16:56:24.403417+0000",
"flow_id": 1960113262002448,
"pcap_cnt": 780,
"event_type": "pgsql",
"src_ip": "172.18.0.1",
"src_port": 54408,
"dest_ip": "172.18.0.2",
"dest_port": 5432,
"proto": "TCP",
"pgsql": {
"tx_id": 4,
"request": {
"simple_query": "select * from rule limit 5000;"
},
"response": {
"field_count": 7,
"data_rows": 5000,
"data_size": 3035751,
"command_completed": "SELECT 5000"
}
}
}
On the wire, PGSQL messages basically follow two types (startup messages and regular messages), but individual messages may have different subfields and/or meanings depending on the message type. Messages are logged based on their type and relevant fields.
We list a few possible message types and what they mean in Suricata. For more details on message types and formats, as well as what each message and field means for PGSQL, check PostgreSQL's official documentation.
Fields
• "response": even when there are several "Response" messages, there is one response field that summarizes all
responses for that transaction. The possible messages will be described in another section.
Request Messages
Requests are sent by the frontend (client), which would be the source of a pgsql flow. Some of the possible request
messages are:
• "startup_message": message sent to start a new PostgreSQL connection
• "password_message": if password output for PGSQL is enabled in suricata.yaml, carries the password sent during
Authentication phase
• "simple_query": issued SQL command during simple query subprotocol. PostgreSQL identifies specific sets of
commands that change the set of expected messages to be exchanged as subprotocols.
• "message": "cancel_request": sent after a query, when the frontend attempts to cancel said query. This
message is sent over a different port, thus bring shown as a different flow. It has no direct answer from the
backend, but if successful will lead to an ErrorResponse in the transaction where the query was sent.
• "message": requests which do not have meaningful payloads are logged like this, where the field value is the
message type
There are several different authentication messages possible, based on selected authentication method. (e.g. the SASL
authentication will have a set of authentication messages different from when md5 authentication is chosen).
Response Messages
Responses are sent by the backend (server), which would be the destination of a pgsql flow. Some of the possible response messages are:
• "authentication_sasl_final": final SCRAM server-final-message, as explained at https://www.postgresql.org/docs/14/sasl-authentication.html#SASL-SCRAM-SHA-256
• "message": Backend responses which do not have meaningful payloads are logged like this, where the field value
is the message type
• "error_response"
• "notice_response"
• "notification_response"
• "authentication_md5_password": a string with the md5 salt value
• "parameter_status": logged as an array
• "backend_key_data"
• "data_rows": integer. When one or many DataRow messages are parsed, the total returned rows
• "data_size": in bytes. When one or many DataRow messages are parsed, the total size in bytes of the data returned
• "command_completed": string. Informs the command just completed by the backend
• "ssl_accepted": bool. With this event, the initial PGSQL SSL Handshake negotiation is complete in terms of
tracking and logging. The session will be upgraded to use TLS encryption
Examples
The two pgsql events in this example represent a rejected SSL handshake and a following connection request where
the authentication method indicated by the backend was md5:
{
"timestamp": "2021-11-24T16:56:19.435242+0000",
"flow_id": 1960113262002448,
"pcap_cnt": 21,
"event_type": "pgsql",
"src_ip": "172.18.0.1",
"src_port": 54408,
"dest_ip": "172.18.0.2",
"dest_port": 5432,
"proto": "TCP",
"pgsql": {
"tx_id": 1,
"request": {
"message": "SSL Request"
},
"response": {
"accepted": false
}
}
}
{
"timestamp": "2021-11-24T16:56:19.436228+0000",
"flow_id": 1960113262002448,
"pcap_cnt": 25,
"event_type": "pgsql",
"src_ip": "172.18.0.1",
"src_port": 54408,
"dest_ip": "172.18.0.2",
"dest_port": 5432,
"proto": "TCP",
"pgsql": {
"tx_id": 2,
"request": {
"protocol_version": "3.0",
"startup_parameters": {
"user": "rules",
"database": "rules",
"optional_parameters": [
{
"application_name": "psql"
},
{
"client_encoding": "UTF8"
}
]
}
},
"response": {
"authentication_md5_password": "Z\\xdc\\xfdf"
{
"pgsql": {
"tx_id": 3,
"response": {
"message": "authentication_ok",
"parameter_status": [
{
"application_name": "psql"
},
{
"client_encoding": "UTF8"
},
{
"date_style": "ISO, MDY"
},
{
"integer_datetimes": "on"
},
{
"interval_style": "postgres"
},
{
"is_superuser": "on"
},
{
"server_encoding": "UTF8"
},
{
"server_version": "13.6 (Debian 13.6-1.pgdg110+1)"
},
{
"session_authorization": "rules"
},
{
"standard_conforming_strings": "on"
},
{
"time_zone": "Etc/UTC"
}
],
"process_id": 28954,
"secret_key": 889887985
}
}
}
Note
In Suricata, the AuthenticationOk message is also where the backend's process_id and secret_key are
logged. These must be sent by the frontend when it issues a CancelRequest message (seen below).
A CancelRequest message:
{
"timestamp": "2023-12-07T15:46:56.971150+0000",
"flow_id": 775771889500133,
"event_type": "pgsql",
"src_ip": "100.88.2.140",
"src_port": 39706,
"dest_ip": "100.96.199.113",
"dest_port": 5432,
"proto": "TCP",
"pkt_src": "stream (flow timeout)",
"pgsql": {
"tx_id": 1,
"request": {
"message": "cancel_request",
"process_id": 28954,
"secret_key": 889887985
}
}
}
Note
As the CancelRequest message is sent over a new connection, the way to correlate it with the proper frontend/flow
from which it originates is by querying on process_id and secret_key seen in the AuthenticationOk event.
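The correlation described in the note can be scripted against EVE output; a minimal Python sketch (the event dicts are abbreviated versions of the two examples above):

```python
def correlate_cancel(events):
    """Pair cancel_request events with the AuthenticationOk event
    carrying the same process_id/secret_key."""
    auth_by_key = {}
    matches = []
    for ev in events:
        pg = ev.get("pgsql", {})
        resp = pg.get("response", {})
        req = pg.get("request", {})
        if resp.get("message") == "authentication_ok":
            auth_by_key[(resp["process_id"], resp["secret_key"])] = ev
        elif req.get("message") == "cancel_request":
            key = (req["process_id"], req["secret_key"])
            # second element is None if the original flow was not seen
            matches.append((ev, auth_by_key.get(key)))
    return matches

events = [
    {"flow_id": 1960113262002448,
     "pgsql": {"response": {"message": "authentication_ok",
                            "process_id": 28954, "secret_key": 889887985}}},
    {"flow_id": 775771889500133,
     "pgsql": {"request": {"message": "cancel_request",
                           "process_id": 28954, "secret_key": 889887985}}},
]
matches = correlate_cancel(events)
```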
References:
• PostgreSQL protocol - Canceling Requests in Progress
• PostgreSQL message format - BackendKeyData
Field Reference
response (object)
request (object)
request.startup_parameters (object)
Fields
• "init_spi", "resp_spi": The Security Parameter Index (SPI) of the initiator and responder.
• "version_major": Major version of the ISAKMP header.
• "version_minor": Minor version of the ISAKMP header.
• "payload": List of payload types in the current packet.
• "exchange_type": Type of the exchange, as numeric values.
• "exchange_type_verbose": Type of the exchange, in human-readable form. Needs extended: yes set in the
ike EVE output option.
• "alg_enc", "alg_hash", "alg_auth", "alg_dh", "alg_esn": Properties of the chosen security association by the
server.
• "ikev1.encrypted_payloads": Set to true if the payloads in the packet are encrypted.
• "ikev1.doi": Value of the domain of interpretation (DOI).
• "ikev1.server.key_exchange_payload", "ikev1.client.key_exchange_payload": Public key exchange payloads of
the server and client.
• "ikev1.server.key_exchange_payload_length", "ikev1.client.key_exchange_payload_length": Length of the pub-
lic key exchange payload.
• "ikev1.server.nonce_payload", "ikev1.client.nonce_payload": Nonce payload of the server and client.
• "ikev1.server.nonce_payload_length", "ikev1.client.nonce_payload_length": Length of the nonce payload.
• "ikev1.client.client_proposals": List of the security associations proposed to the server.
• "ikev1.vendor_ids": List of the vendor IDs observed in the communication.
• "server_proposals": List of server proposals with parameters, if there are more than one. This is a non-standard
case; this field is only present if such a situation was observed in the inspected traffic.
Examples
"ike": {
"version_major": 1,
"version_minor": 0,
"init_spi": "8511617bfea2f172",
"resp_spi": "c0fc6bae013de0f5",
"message_id": 0,
"exchange_type": 2,
"exchange_type_verbose": "Identity Protection",
"sa_life_type": "LifeTypeSeconds",
"sa_life_type_raw": 1,
"sa_life_duration": "Unknown",
"sa_life_duration_raw": 900,
"alg_enc": "EncAesCbc",
"alg_enc_raw": 7,
"alg_hash": "HashSha2_256",
"alg_hash_raw": 4,
"alg_auth": "AuthPreSharedKey",
"alg_auth_raw": 1,
"alg_dh": "GroupModp2048Bit",
"alg_dh_raw": 14,
"sa_key_length": "Unknown",
"sa_key_length_raw": 256,
"alg_esn": "NoESN",
"payload": [
"VendorID",
"Transform",
"Proposal",
"nonce_payload_length": 32,
"proposals": [
{
"sa_life_type": "LifeTypeSeconds",
"sa_life_type_raw": 1,
"sa_life_duration": "Unknown",
"sa_life_duration_raw": 900,
"alg_enc": "EncAesCbc",
"alg_enc_raw": 7,
"alg_hash": "HashSha2_256",
"alg_hash_raw": 4,
"alg_auth": "AuthPreSharedKey",
"alg_auth_raw": 1,
"alg_dh": "GroupModp2048Bit",
"alg_dh_raw": 14,
"sa_key_length": "Unknown",
"sa_key_length_raw": 256
}
]
},
"server": {
"key_exchange_payload": "1e43be52b088ec840ff81865074b6d459b5ca7813b46...",
"key_exchange_payload_length": 256,
"nonce_payload": "04d78293ead007bc1a0f0c6c821a3515286a935af12ca50e08905b15d6c8fcd4
˓→",
"nonce_payload_length": 32
},
"vendor_ids": [
"4048b7d56ebce88525e7de7f00d6c2d3",
"4a131c81070358455c5728f20e95452f",
"afcad71368a1f1c96b8696fc77570100",
"7d9419a65310ca6f2c179d9215529d56",
"cd60464335df21f87cfdb2fc68b6a448",
"90cb80913ebb696e086381b5ec427b1f"
]
},
}
Request/Response fields
Exception fields
Diagnostic fields
MEI fields
Example
"modbus": {
"id": 1,
"request": {
"transaction_id": 0,
"protocol_id": 0,
"unit_id": 0,
"function_raw": 1,
"function_code": "RdCoils",
"access_type": "READ | COILS",
"category": "PUBLIC_ASSIGNED",
"error_flags": "NONE",
},
"response": {
"transaction_id": 0,
"protocol_id": 0,
"unit_id": 0,
"function_raw": 1,
"function_code": "RdCoils",
"access_type": "READ | COILS",
"category": "PUBLIC_ASSIGNED",
"error_flags": "DATA_VALUE",
},
}
• "ja3": The JA3 fingerprint consisting of both a JA3 hash and a JA3 string
• "ja3s": The JA3S fingerprint consisting of both a JA3 hash and a JA3 string
• "ja4": The JA4 client fingerprint for QUIC
Examples
Example of QUIC logging with CYU, JA3 and JA4 hashes (note that the JA4 hash is only an example to illustrate the
format and does not correlate with the others):
"quic": {
"version": 1362113590,
"cyu": [
{
"hash": "7b3ceb1adc974ad360cfa634e8d0a730",
"string": "46,PAD-SNI-STK-SNO-VER-CCS-NONC-AEAD-UAID-SCID-TCID-PDMD-SMHL-ICSL-
˓→NONP-PUBS-MIDS-SCLS-KEXS-XLCT-CSCT-COPT-CCRT-IRTT-CFCW-SFCW"
}
],
"ja3": {
"hash": "324f8c50e267adba4b5dd06c964faf67",
"string": "771,4865-4866-4867,51-43-13-27-17513-16-45-0-10-57,29-23-24,"
},
"ja4": "q13d0310h3_55b375c5d22e_cd85d2d88918"
}
Output Reference
ja3s (object)
ja3 (object)
Fields
• "renewal_time": Time in seconds since client began IP address request or renewal process
• "rebinding_time": Time in seconds before the client begins to renew its IP address lease
• "dns_servers": IP address(es) of servers the client will use for DNS queries
Examples
"dhcp": {
"type":"reply",
"id":755466399,
"client_mac":"54:ee:75:51:e0:66",
"assigned_ip":"100.78.202.125",
"dhcp_type":"ack",
"renewal_time":21600,
"client_id":"54:ee:75:51:e0:66"
}
"dhcp": {
"type":"reply",
"id":2787908432,
"client_mac":"54:ee:75:51:e0:66",
"assigned_ip":"192.168.1.120",
"client_ip":"0.0.0.0",
"relay_ip":"192.168.1.1",
"next_server_ip":"0.0.0.0",
"dhcp_type":"offer",
"subnet_mask":"255.255.255.0",
"routers":["192.168.1.100"],
"hostname":"test",
"lease_time":86400,
"renewal_time":21600,
"rebinding_time":43200,
"client_id":"54:ee:75:51:e0:66",
"dns_servers":["192.168.1.50","192.168.1.49"]
}
Examples
"arp": {
"hw_type": "ethernet",
"proto_type": "ipv4",
"opcode": "request",
"src_mac": "00:1a:6b:6c:0c:cc",
"src_ip": "10.10.10.2",
"dest_mac": "00:00:00:00:00:00",
"dest_ip": "10.10.10.1"
}
"arp": {
"hw_type": "ethernet",
"proto_type": "ipv4",
"opcode": "reply",
"src_mac": "00:1a:6b:6c:0c:cc",
"src_ip": "10.10.10.2",
"dest_mac": "00:1d:09:f0:92:ab",
"dest_ip": "10.10.10.1"
}
Colorize output
DNS NXDOMAIN
Source: https://twitter.com/mattarnao/status/601807374647750657
function log(args)
http_uri = HttpGetRequestUriRaw()
http_host = HttpGetRequestHost()
if http_host == nil then
http_host = "<hostname unknown>"
end
http_host = string.gsub(http_host, "%c", ".")
http_ua = HttpGetRequestHeader("User-Agent")
if http_ua == nil then
http_ua = "<useragent unknown>"
end
http_ua = string.gsub(http_ua, "%c", ".")
timestring = SCPacketTimeString()
ip_version, src_ip, dst_ip, protocol, src_port, dst_port = SCFlowTuple()
file:write (timestring .. " " .. http_host .. " [**] " .. http_uri .. " [**] " ..
http_ua .. " [**] " .. src_ip .. ":" .. src_port .. " -> " ..
dst_ip .. ":" .. dst_port .. "\n")
file:flush()
http = http + 1
end
17.2.2 YAML
To enable the lua output, add the 'lua' output and add one or more scripts like so:
outputs:
- lua:
enabled: yes
scripts-dir: /etc/suricata/lua-output/
scripts:
- tcp-data.lua
- flow.lua
The scripts-dir option is optional. It makes Suricata load the scripts from this directory. Otherwise scripts will be
loaded from the current workdir.
cd /etc/init.d
ls | grep syslog
You should see a file with the word syslog in it, e.g. "syslog", "rsyslogd", etc. Obviously if the name is "rsyslogd" you
can be fairly confident you are running rsyslogd. If unsure or the filename is just "syslog", take a look at that file. For
example, if it was "rsyslogd", run:
less rsyslogd
At the top you should see a comment line that looks something like this:
Locate those files and look at them to give you clues as to what syslog daemon you are running. Also look in the start()
section of the file you ran "less" on and see what binaries get started because that can give you clues as well.
17.3.3 Example
Here is an example where the Suricata sensor is sending syslog messages in rsyslogd format but the SIEM is expecting
and parsing them in a sysklogd format. In the syslog configuration file (usually in /etc with a filename like rsyslog.conf
or syslog.conf), first add the template:
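A template definition belongs just above the forwarding rule; a sketch in legacy rsyslog syntax (the exact format string is illustrative, adjust it to what your SIEM parses):

```
# illustrative template named "sysklogd", referenced by the rule that follows
$template sysklogd, "<%PRI%>%TIMESTAMP% %syslogtag%%msg%"
```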
user.alert @10.8.75.24:514;sysklogd
Of course this is just one example and it will probably be different in your environment depending on what syslog
daemons and SIEM you use but hopefully this will point you in the right direction.
Attention
- http-log:
enabled: yes
filename: http.log
custom: yes # enable the custom logging format (defined by custom format)
customformat: "%{%D-%H:%M:%S}t.%z %{X-Forwarded-For}i %{User-agent}i %H %m %h %u %s %B %a:%p -> %A:%P"
append: no
#extended: yes # enable this for extended logging information
#filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'
And in your http.log file you would get the following, for example:
Attention
- tls-log:
enabled: yes # Log TLS connections.
filename: tls.log # File to store TLS logs.
append: yes
custom: yes # enabled the custom logging format (defined by customformat)
customformat: "%{%D-%H:%M:%S}t.%z %a:%p -> %A:%P %v %n %d %D"
And in your tls.log file you would get the following, for example:
/var/log/suricata/*.log /var/log/suricata/*.json
{
rotate 3
missingok
nocompress
create
sharedscripts
postrotate
/bin/kill -HUP `cat /var/run/suricata.pid 2>/dev/null` 2>/dev/null || true
endscript
}
Note
The above logrotate configuration file depends on the existence of a Suricata PID file. If running in daemon mode
a PID file will be created by default, otherwise the --pidfile option should be used to create a PID file.
In addition to the SIGHUP style rotation discussed above, some outputs support their own time and date based rotation,
however removal of old log files is still the responsibility of external tools. These outputs include:
• Eve
• PCAP log
EIGHTEEN
LUA SUPPORT
Note
Currently, the needs key in the init function differs depending on the usage: output or detection. The list of available functions may also differ.
For output logs, follow the pattern below. (The complete script structure can be seen at Lua Output:)
Do notice that the functions and protocols available for log and match may also vary. DNP3, for instance, is not
available for logging.
18.2.2 packet
Initialize with:
SCPacketTimestamp
Get the packet's timestamp as two numbers: seconds and microseconds elapsed since 1970-01-01 00:00:00 UTC.
function log(args)
local sec, usec = SCPacketTimestamp()
end
SCPacketTimeString
Use SCPacketTimeString to get the packet's time string in the format: 11/24/2009-18:57:25.179869
function log(args)
ts = SCPacketTimeString()
end
SCPacketTuple
SCPacketPayload
p = SCPacketPayload()
18.2.3 flow
function init (args)
local needs = {}
needs["type"] = "flow"
return needs
end
SCFlowTimestamps
Get timestamps (seconds and microseconds) of the first and the last packet from the flow.
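A log function using them might look like this (a sketch; it runs inside Suricata's Lua runtime, and SCLogNotice is assumed to be available for output):

```lua
function log(args)
    -- seconds/microseconds of the flow's first and last packet
    startts, lastts = SCFlowTimestamps()
    SCLogNotice("flow start ts: " .. startts .. " last ts: " .. lastts)
end
```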
SCFlowTimeString
startts = SCFlowTimeString()
SCFlowTuple
SCFlowAppLayerProto
Get alproto as a string from the flow. If an alproto is not (yet) known, it returns "unknown".
Example:
function log(args)
alproto = SCFlowAppLayerProto()
if alproto ~= nil then
print (alproto)
end
end
SCFlowHasAlerts
Returns true if flow has alerts.
Example:
function log(args)
has_alerts = SCFlowHasAlerts()
if has_alerts then
-- do something
end
end
SCFlowStats
Gets the packet and byte counts per flow.
SCFlowId
Gets the flow id.
id = SCFlowId()
Note that simply printing 'id' will likely print it in scientific notation. To avoid that, do:
id = SCFlowId()
idstr = string.format("%.0f ",id)
print ("Flow ID: " .. idstr .. "\n")
18.2.4 http
For output, init with:
function init (args)
local needs = {}
needs["protocol"] = "http"
return needs
end
For detection, use the specific buffer (cf Lua Scripting for Detection for a complete list), as with:
function log(args)
a, o, e = HttpGetResponseBody();
--print("offset " .. o .. " end " .. e)
for n, v in ipairs(a) do
print(v)
end
end
HttpGetRequestHost
Get the host from libhtp's tx->request_hostname, which can either be the host portion of the url or the host portion of
the Host header.
Example:
http_host = HttpGetRequestHost()
if http_host == nil then
http_host = "<hostname unknown>"
end
HttpGetRequestHeader
http_ua = HttpGetRequestHeader("User-Agent")
if http_ua == nil then
http_ua = "<useragent unknown>"
end
HttpGetResponseHeader
server = HttpGetResponseHeader("Server");
print ("Server: " .. server);
HttpGetRequestLine
rl = HttpGetRequestLine();
print ("Request Line: " .. rl);
HttpGetResponseLine
rsl = HttpGetResponseLine();
print ("Response Line: " .. rsl);
HttpGetRawRequestHeaders
rh = HttpGetRawRequestHeaders();
print ("Raw Request Headers: " .. rh);
HttpGetRawResponseHeaders
rh = HttpGetRawResponseHeaders();
print ("Raw Response Headers: " .. rh);
HttpGetRequestUriRaw
http_uri = HttpGetRequestUriRaw()
if http_uri == nil then
http_uri = "<unknown>"
end
HttpGetRequestUriNormalized
http_uri = HttpGetRequestUriNormalized()
if http_uri == nil then
http_uri = "<unknown>"
end
HttpGetRequestHeaders
a = HttpGetRequestHeaders();
for n, v in pairs(a) do
print(n,v)
end
HttpGetResponseHeaders
a = HttpGetResponseHeaders();
for n, v in pairs(a) do
print(n,v)
end
18.2.5 DNS
If your purpose is to create a logging script, initialize the buffer as:
function init (args)
local needs = {}
needs["protocol"] = "dns"
return needs
end
If you are going to use the script for rule matching, choose one of the available DNS buffers listed in Lua Scripting for
Detection and follow the pattern:
DnsGetQueries
dns_query = DnsGetQueries();
if dns_query ~= nil then
for n, t in pairs(dns_query) do
rrname = t["rrname"]
rrtype = t["type"]
print ("QUERY: " .. ts .. " " .. rrname .. " [**] " .. rrtype .. " [**] " ..
"TODO" .. " [**] " .. srcip .. ":" .. sp .. " -> " ..
dstip .. ":" .. dp)
end
end
DnsGetAnswers
dns_answers = DnsGetAnswers();
if dns_answers ~= nil then
for n, t in pairs(dns_answers) do
rrname = t["rrname"]
rrtype = t["type"]
ttl = t["ttl"]
print ("ANSWER: " .. ts .. " " .. rrname .. " [**] " .. rrtype .. " [**] " ..
ttl .. " [**] " .. srcip .. ":" .. sp .. " -> " ..
dstip .. ":" .. dp)
end
end
DnsGetAuthorities
dns_auth = DnsGetAuthorities();
if dns_auth ~= nil then
for n, t in pairs(dns_auth) do
rrname = t["rrname"]
rrtype = t["type"]
ttl = t["ttl"]
print ("AUTHORITY: " .. ts .. " " .. rrname .. " [**] " .. rrtype .. " [**] " ..
ttl .. " [**] " .. srcip .. ":" .. sp .. " -> " ..
dstip .. ":" .. dp)
end
end
DnsGetRcode
rcode = DnsGetRcode();
if rcode == nil then
return 0
end
print (rcode)
DnsGetRecursionDesired
Returns a bool.
18.2.6 TLS
For log output, initialize with:
function init (args)
local needs = {}
needs["protocol"] = "tls"
return needs
end
TlsGetVersion
Get the negotiated version in a TLS session as a string through TlsGetVersion.
Example:
TlsGetCertInfo
Make certificate information available to the script through TlsGetCertInfo.
Example:
TlsGetCertChain
Make certificate chain available to the script through TlsGetCertChain.
The output is an array of certificates, with each certificate being a hash with data and length keys.
Example:
chain = TlsGetCertChain()
TlsGetCertNotAfter
Get the Unix timestamp for the end of the certificate's validity period.
Example:
TlsGetCertNotBefore
Get the Unix timestamp for the beginning of the certificate's validity period.
Example:
TlsGetCertSerial
Get TLS certificate serial number through TlsGetCertSerial.
Example:
TlsGetSNI
Get the Server Name Indication from a TLS connection.
Example:
18.2.7 JA3
JA3 must be enabled in the Suricata config file (set 'app-layer.protocols.tls.ja3-fingerprints' to 'yes').
For log output, initialize with:
function init (args)
local needs = {}
needs["protocol"] = "tls"
return needs
end
Ja3GetHash
Get the JA3 hash (md5sum of JA3 string) through Ja3GetHash.
Example:
Ja3GetString
Get the JA3 string through Ja3GetString.
Example:
Ja3SGetHash
Get the JA3S hash (md5sum of the JA3S string) through Ja3SGetHash.
Examples:
-- matching code
return 0
end
Ja3SGetString
Get the JA3S string through Ja3SGetString.
Examples:
-- matching code
return 0
end
18.2.8 SSH
Initialize with:
function init (args)
local needs = {}
needs["protocol"] = "ssh"
return needs
end
SshGetServerProtoVersion
Get SSH protocol version used by the server through SshGetServerProtoVersion.
Example:
SshGetServerSoftwareVersion
Get SSH software used by the server through SshGetServerSoftwareVersion.
Example:
SshGetClientProtoVersion
Get SSH protocol version used by the client through SshGetClientProtoVersion.
Example:
SshGetClientSoftwareVersion
Get SSH software used by the client through SshGetClientSoftwareVersion.
Example:
HasshGet
Get MD5 of hassh algorithms used by the client through HasshGet.
Example:
HasshGetString
Get hassh algorithms used by the client through HasshGetString.
Example:
HasshServerGet
Get MD5 of hassh algorithms used by the server through HasshServerGet.
Example:
HasshServerGetString
Get hassh algorithms used by the server through HasshServerGetString.
Example:
18.2.9 Files
To use the file logging API, the script's init() function needs to look like:
function init (args)
local needs = {}
needs["type"] = "file"
return needs
end
SCFileInfo
returns fileid (number), txid (number), name (string), size (number), magic (string), md5 in hex (string), sha1 (string),
sha256 (string)
SCFileState
18.2.10 Alerts
Alerts are a subset of the 'packet' logger:
function init (args)
local needs = {}
needs["type"] = "packet"
needs["filter"] = "alerts"
return needs
end
SCRuleIds
SCRuleAction
action = SCRuleAction()
SCRuleMsg
msg = SCRuleMsg()
SCRuleClass
In case of HTTP body data, the bodies are unzipped and dechunked if applicable.
SCStreamingBuffer
function log(args)
-- sb_ts and sb_tc are bools indicating the direction of the data
data, sb_open, sb_close, sb_ts, sb_tc = SCStreamingBuffer()
if sb_ts then
print("->")
else
print("<-")
end
hex_dump(data)
end
function init(args)
local needs = {}
needs["tls"] = tostring(true)
needs["flowint"] = {"tls-cnt"}
return needs
end
Here we define a tls-cnt Flowint that can now be used in output or in a signature via dedicated functions. The access
to the Flow variable is done by index so in our case we need to use 0.
function match(args)
a = SCFlowintGet(0);
if a then
SCFlowintSet(0, a + 1)
else
SCFlowintSet(0, 1)
end
end
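A signature loading such a match script could look like the following (the script filename tls_cnt.lua is a hypothetical example; the lua keyword points at the script):

```
alert tls any any -> any any (msg:"Lua flowint example"; lua:tls_cnt.lua; sid:1000001; rev:1;)
```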
SCFlowintGet
Get the Flowint at index given by the parameter.
SCFlowintSet
Set the Flowint at index given by the first parameter. The second parameter is the value.
SCFlowintIncr
Increment Flowint at index given by the first parameter.
SCFlowintDecr
Decrement Flowint at index given by the first parameter.
SCFlowvarGet
Get the Flowvar at index given by the parameter.
SCFlowvarSet
Set a Flowvar. First parameter is the index, second is the data and third is the length of data.
You can use it to set strings:
function match(args)
a = SCFlowvarGet(0);
if a then
a = tostring(tonumber(a)+1)
SCFlowvarSet(0, a, #a)
else
a = tostring(1)
SCFlowvarSet(0, a, #a)
end
end
18.2.13 Misc
SCThreadInfo
SCLogPath
Expose the log path.
name = "fast_lua.log"
function setup (args)
filename = SCLogPath() .. "/" .. name
file = assert(io.open(filename, "a"))
end
SCByteVarGet
Get the ByteVar at index given by the parameter. These variables are defined by byte_extract or byte_math in Suricata
rules. Only callable from match scripts.
function init(args)
local needs = {}
needs["bytevar"] = {"var1", "var2"}
return needs
end
Here we declare that we will be using the variables var1 and var2. The Byte variables are accessed by
index.
function match(args)
var1 = SCByteVarGet(0)
var2 = SCByteVarGet(1)
end
19 File Extraction
19.1 Architecture
The file extraction code works on top of selected protocol parsers (see supported protocols below). The application
layer parsers run on top of the stream reassembly engine and the UDP flow tracking.
In case of HTTP, the parser takes care of dechunking and unzipping the request and/or response data if necessary.
This means that settings in the stream engine, reassembly engine and the application layer parsers all affect the workings
of the file extraction.
The rule language controls which files are extracted and stored on disk.
Supported protocols are:
• HTTP
• SMTP
• FTP
• NFS
• SMB
• HTTP2
19.2 Settings
stream.checksum_validation controls whether or not the stream engine rejects packets with invalid checksums. Normally
a good idea, but if the network interface performs checksum offloading, a lot of packets may seem to be broken. This
setting is enabled by default, and can be disabled by setting it to "no". Note that checksum handling can be controlled
per interface, see "checksum_checks" in the example configuration.
file-store.stream-depth controls how far into a stream reassembly is done. Beyond this value no reassembly will be
done. This means that after this value the HTTP session will no longer be tracked. By default a setting of 1 Megabyte
is used. 0 sets it to unlimited. If set to no, it is disabled and stream.reassembly.depth is considered. Non-zero values
must be greater than stream.stream-depth to be used.
libhtp.default-config.request-body-limit / libhtp.server-config.<config>.request-body-limit controls how much of the
HTTP request body is tracked for inspection by the http_client_body keyword, but also used to limit file inspection. A
value of 0 means unlimited.
libhtp.default-config.response-body-limit / libhtp.server-config.<config>.response-body-limit is like the request body
limit, only it applies to the HTTP response body.
19.3 Output
19.3.1 File-Store and Eve Fileinfo
There are two output modules for logging information about extracted files. The first is eve.files which is an eve
sub-logger that logs fileinfo records. These fileinfo records provide metadata about the file, but not the actual
file contents.
This must be enabled in the eve output:
- outputs:
- eve-log:
types:
- files:
force-magic: no
force-hash: [md5,sha256]
See Eve (Extensible Event Format) for more details on working with the eve output.
The other output module, file-store stores the actual files to disk.
The file-store module uses its own log directory (default: filestore in the default logging directory) and logs files
using the SHA256 of the contents as the filename. Each file is then placed in a directory named 00 to ff where the
directory shares the first 2 characters of the filename. For example, if the SHA256 hex string of an extracted file starts
with "f9bc6d..." the file will be placed in the directory filestore/f9.
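As an illustration (a shell sketch, not part of Suricata; sha256sum from coreutils is assumed), the on-disk path for an extracted file can be derived from the SHA256 of its contents:

```shell
# Compute where the file-store would place a file with these contents.
sha=$(printf 'hello' | sha256sum | cut -d' ' -f1)   # SHA256 of the contents
dir=$(printf '%s' "$sha" | cut -c1-2)               # first 2 hex characters
echo "filestore/$dir/$sha"
```

For the example contents this prints a path under filestore/2c/.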
The size of a file that can be stored depends on file-store.stream-depth, if this value is reached a file can be
truncated and might not be stored completely. If not enabled, stream.reassembly.depth will be considered.
Setting file-store.stream-depth to 0 permits storing the entire file; here, 0 means "unlimited."
file-store.stream-depth will always override stream.reassembly.depth when filestore keyword is used.
However, it is not possible to set file-store.stream-depth to a value less than stream.reassembly.depth.
Values less than this amount are ignored and a warning message will be displayed.
A protocol parser, like modbus, may allow setting a different store-depth value, which is then used instead of
file-store.stream-depth.
Using the SHA256 for file names allows for automatic de-duplication of extracted files. However, the timestamp of a
preexisting file will be updated if the same file is extracted again, similar to the touch command.
Optionally a fileinfo record can be written to its own file sharing the same SHA256 as the file it references. To
handle recording the metadata of each occurrence of an extracted file, these filenames include some extra fields to
ensure uniqueness. Currently the format is:
<SHA256>.<SECONDS>.<ID>.json
where <SECONDS> is the seconds from the packet that triggered the stored file to be closed and <ID> is a unique ID for
the runtime of the Suricata instance. These values should not be depended on, and are simply used to ensure uniqueness.
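The three fields can be split back out with standard tools; a shell sketch (the record name here is a made-up example):

```shell
# Split a fileinfo record name of the form <SHA256>.<SECONDS>.<ID>.json
fn="2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824.1652376658.1.json"
sha=$(echo "$fn" | cut -d. -f1)    # SHA256 of the stored file
secs=$(echo "$fn" | cut -d. -f2)   # seconds from the closing packet
id=$(echo "$fn" | cut -d. -f3)     # per-run unique ID
echo "sha256=$sha seconds=$secs id=$id"
```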
These fileinfo records are identical to the fileinfo records logged to the eve output.
See File-store (File Extraction) for more information on configuring the file-store output.
Note
This section documents version 2 of the file-store. Version 1 of the file-store has been removed as of Suricata
version 6.
19.4 Rules
Without rules in place no extraction will happen. The simplest rule would be:
alert http any any -> any any (msg:"FILE store all"; filestore; sid:1; rev:1;)
alert http any any -> any any (msg:"FILE PDF file claimed"; fileext:"pdf"; filestore;␣
˓→sid:2; rev:1;)
alert http any any -> any any (msg:"FILE pdf detected"; filemagic:"PDF document";␣
˓→filestore; sid:3; rev:1;)
alert http any any -> any any (msg:"Black list checksum match and extract MD5";␣
˓→filemd5:fileextraction-chksum.list; filestore; sid:4; rev:1;)
alert http any any -> any any (msg:"Black list checksum match and extract SHA1";␣
˓→filesha1:fileextraction-chksum.list; filestore; sid:5; rev:1;)
alert http any any -> any any (msg:"Black list checksum match and extract SHA256";␣
˓→filesha256:fileextraction-chksum.list; filestore; sid:6; rev:1;)
Bundled with the Suricata download is a file with more example rules. In the archive, go to the rules directory and
check the files.rules file.
19.5 MD5
Suricata can calculate MD5 checksums of files on the fly and log them. See Storing MD5 checksums for an explanation
of how to enable this.
- file-store:
enabled: yes # set to yes to enable
dir: filestore # directory to store the files
force-hash: [md5] # force logging of md5 checksums
outputs:
- eve-log:
enabled: yes
filetype: regular #regular|syslog|unix_dgram|unix_stream|redis
filename: eve.json
types:
- files:
force-magic: no # force logging magic on all logged files
# force logging of checksums, available hash functions are md5,
# sha1 and sha256
#force-hash: [md5]
stream:
memcap: 64mb
checksum-validation: yes # reject wrong csums
inline: no # no inline mode
reassembly:
memcap: 32mb
depth: 0 # reassemble all of a stream
toserver-chunk-size: 2560
toclient-chunk-size: 2560
libhtp:
default-config:
personality: IDS
# Can be specified in kb, mb, gb. Just a number indicates
# it's in bytes.
request-body-limit: 0
response-body-limit: 0
Testing
For the purpose of testing we use this rule only in a file.rules (a test/example file):
alert http any any -> any any (msg:"FILE store all"; filestore; sid:1; rev:1;)
This rule will save all the file data for files that are opened/downloaded through HTTP.
Start Suricata (the -S option ONLY loads the specified rule file and disregards any other rules that are enabled in
suricata.yaml):
suricata -c /etc/suricata/suricata.yaml -S file.rules -i eth0
Meta data:
TIME: 05/01/2012-11:09:52.425751
SRC IP: 2.23.144.170
DST IP: 192.168.1.91
FILENAME: /en/US/prod/collateral/routers/ps5855/prod_brochure0900aecd8019dc1f.pdf
- file-store:
version: 2
enabled: no # set to yes to enable
log-dir: files # directory to store the files
force-filestore: no
force-hash: [md5] # force logging of md5 checksums
- file-store:
enabled: yes # set to yes to enable
log-dir: files # directory to store the files
force-magic: no # force logging magic on all stored files
force-hash: [md5] # force logging of md5 checksums
force-filestore: no # force storing of all files
stream-depth: 1mb # reassemble 1mb into a stream, set to no to disable
waldo: file.waldo # waldo file to store the file_id across runs
max-open-files: 0 # how many files to keep open (0 means none)
write-meta: yes # write a .meta file if set to yes
include-pid: yes # include the pid in filenames if set to yes.
- file-store:
version: 2
enabled: yes
dir: filestore
force-hash: [md5]
file-filestore: no
stream-depth: 1mb
max-open-files: 0
write-fileinfo: yes
Refer to the File Extraction section of the manual for information about the format of the file-store directory for file-store
v2.
21 Using Capture Hardware
Results in:
Suricata Configuration:
AF_PACKET support: no
PF_RING support: no
NFQueue support: no
IPFW support: no
DAG enabled: yes
Napatech enabled: no
Start with:
Started up!
21.2 Napatech
21.2.1 Contents
• Introduction
• Package Installation
• Basic Configuration
• Advanced Multithreaded Configuration
21.2.2 Introduction
Napatech packet capture accelerator cards can greatly improve the performance of your Suricata deployment using
these hardware based features:
• On board burst buffering (up to 12GB)
• Zero-copy kernel bypass DMA
• Non-blocking PCIe performance
• Port merging
• Load distribution to up to 128 host buffers
• Precise timestamping
• Accurate time synchronization
The package uses a proprietary shell script to handle the installation process. In either case, gcc, make and the kernel
header files are required to compile the kernel module and install the software.
$ /opt/napatech3/bin/ntstart.sh -m
$ make
$ make install-full
threading:
set-cpu-affinity: no
.
.
.
napatech:
auto-config: yes
streams: ["0-3"]
ports: [all]
hashmode: hash5tuplesorted
Now modify ntservice.ini. You also need to make sure that you have allocated enough host buffers in ntservice.
ini for the streams. It's a good idea to also set the TimeSyncReferencePriority. To do this make the following
changes to ntservice.ini:
HostBuffersRx = [4,16,-1]           # [number of host buffers, Size(MB), NUMA node]
TimeSyncReferencePriority = OSTime  # Timestamp clock synchronized to the OS
Stop and restart ntservice after making changes to ntservice.ini:
$ /opt/napatech3/bin/ntstop.sh
$ /opt/napatech3/bin/ntstart.sh
threading:
set-cpu-affinity: yes
cpu-affinity:
management-cpu-set:
cpu: [ 0 ]
receive-cpu-set:
cpu: [ 0 ]
worker-cpu-set:
cpu: [ all ]
.
.
.
napatech:
auto-config: yes
ports: [all]
hashmode: hash5tuplesorted
Prior to running Suricata in this mode you also need to configure a sufficient number of host buffers on each NUMA
node. So, for example, if you have a two processor server with 32 total cores and you plan to use all of the cores you
will need to allocate 16 host buffers on each NUMA node. It is also desirable to set the Napatech cards time source to
the OS.
To do this make the following changes to ntservice.ini:
$ /opt/napatech3/bin/ntstop.sh -m
$ /opt/napatech3/bin/ntstart.sh -m
In this example we will setup the Napatech capture accelerator to merge all physical ports, and then distribute the
merged traffic to four streams that Suricata will ingest.
The steps for this configuration are:
1. Disable the Napatech auto-config option in suricata.yaml
2. Specify the streams that Suricata is to use in suricata.yaml
3. Create a file with NTPL commands to create the underlying Napatech streams.
First suricata.yaml should be configured similar to the following:
napatech:
auto-config: no
streams: ["0-3"]
Next you need to make sure you have enough host buffers defined in ntservice.ini. It's also a good idea to set up the
TimeSync. Here are the lines to change:
$ /opt/napatech3/bin/ntstop.sh
$ /opt/napatech3/bin/ntstart.sh
Now that ntservice is running we need to execute a few NTPL (Napatech Programming Language) commands to
complete the setup. Create a file with the following commands:
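As a sketch (an assumed minimal layout; adjust ports and stream IDs to your hardware and consult the NTPL reference), such a file merging all ports into streams 0-3 might contain:

```
Delete=All                    # remove any existing NTPL rules
Assign[streamid=(0..3)]=all   # merge all ports and distribute across streams 0-3
```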
$ /opt/napatech3/bin/ntpl -f <my_ntpl_file>
It is possible to specify much more elaborate configurations using this option: simply create the appropriate NTPL
file and attach Suricata to the streams.
napatech:
hardware-bypass: true
ports[0-1,2-3]
Note that these "port-pairings" are also required for IDS configurations as the hardware needs to know on which port(s)
two sides of the connection will arrive.
For configurations relying on optical taps the two sides of the pairing will typically be different ports. For SPAN port
configurations where both upstream and downstream traffic are delivered to a single port both sides of the "port-pair"
will reference the same port.
For example tap configurations have a form similar to this:
ports[0-1,2-3]
ports[0-0,1-1,2-2,3-3]
Note that SPAN and tap configurations may be combined on the same adapter.
There are multiple ways that Suricata can be configured to bypass traffic. One way is to enable stream.bypass in the
configuration file. E.g.:
stream:
bypass: true
When enabled once Suricata has evaluated the first chunk of the stream (the size of which is also configurable) it will
indicate that the rest of the packets in the flow can be bypassed. In IDS mode this means that the subsequent packets of
the flow will be dropped and not delivered to Suricata. In inline operation the packets will be transmitted on the output
port but not delivered to Suricata.
Another way is by specifying the "bypass" keyword in a rule. When a rule is triggered with this keyword then the "pass"
or "drop" action will be applied to subsequent packets of the flow automatically without further analysis by Suricata.
For example given the rule:
drop tcp any 443 <> any any (msg: "SURICATA Test rule"; bypass; sid:1000001; rev:2;)
Once Suricata initially evaluates the first packet(s) and identifies the flow, all subsequent packets from the flow will
be dropped by the hardware, thus saving CPU cycles for more important tasks.
The timeout value for how long to wait before evicting stale flows from the hardware flow table can be specified via the
FlowTimeout attribute in ntservice.ini.
napatech:
inline: enabled
ports[0-1, 2-3]
Will pair ports 0 and 1; and 2 and 3 as peers. Rules can be defined to pass traffic matching a given signature. For
example, given the rule:
pass tcp any 443 <> any any (msg: "SURICATA Test rule"; bypass; sid:1000001; rev:2;)
Suricata will evaluate the initial packet(s) of the flow and program the flow into the hardware. Subsequent packets
from the flow will automatically be shunted from one port to its peer.
21.2.11 Counters
The following counters are available:
• napa_total.pkts - The total of packets received by the card.
• napa_total.byte - The total count of bytes received by the card.
• napa_total.overflow_drop_pkts - The number of packets that were dropped because the host buffers were full.
(I.e. the application is not able to process packets quickly enough.)
• napa_total.overflow_drop_byte - The number of bytes that were dropped because the host buffers were full. (I.e.
the application is not able to process packets quickly enough.)
On flow-aware products the following counters are also available:
• napa_dispatch_host.pkts, napa_dispatch_host.byte:
The total number of packets/bytes that were dispatched to a host buffer for processing by Suricata. (Note: this
count includes packets that may be subsequently dropped if there is no room in the host buffer.)
• napa_dispatch_drop.pkts, napa_dispatch_drop.byte:
The total number of packets/bytes that were dropped at the hardware as a result of a Suricata "drop" bypass rule
or other adjudication by Suricata that the flow packets should be dropped. These packets are not delivered to the
application.
• napa_dispatch_fwd.pkts, napa_dispatch_fwd.byte:
When inline operation is configured this is the total number of packets/bytes that were forwarded as result of a
Suricata "pass" bypass rule or as a result of stream or encryption bypass being enabled in the configuration file.
These packets were not delivered to the application.
• napa_bypass.active_flows:
The number of flows actively programmed on the hardware to be forwarded or dropped.
• napa_bypass.total_flows:
The total count of flows programmed since the application started.
If enable-stream-stats is enabled in the configuration file then, for each stream that is being processed, the following
counters will be output in stats.log:
• napa<streamid>.pkts: The number of packets received by the stream.
• napa<streamid>.bytes: The total bytes received by the stream.
• napa<streamid>.drop_pkts: The number of packets dropped from this stream due to buffer overflow conditions.
• napa<streamid>.drop_byte: The number of bytes dropped from this stream due to buffer overflow conditions.
This is useful for fine-grain debugging to determine if a specific CPU core or thread is falling behind resulting in
dropped packets.
Debugging:
For debugging configurations it is useful to see what traffic is flowing as well as what streams are created and receiving
traffic. There are two tools in /opt/napatech3/bin that are useful for this:
• monitoring: this tool will, among other things, show what traffic is arriving at the port interfaces.
• profiling: this will show host-buffers, streams and traffic flow to the streams.
If Suricata terminates abnormally stream definitions, which are normally removed at shutdown, may remain in effect.
If this happens they can be cleared by issuing the "delete=all" NTPL command as follows:
# /opt/napatech3/bin/ntpl -e "delete=all"
napatech:
# When use_all_streams is set to "yes" the initialization code will query
# the Napatech service for all configured streams and listen on all of them.
# When set to "no" the streams config array will be used.
#
# This option necessitates running the appropriate NTPL commands to create
# the desired streams prior to running Suricata.
#use-all-streams: no
# Stream stats can be enabled to provide fine grain packet and byte counters
# for each thread/stream that is configured.
#
enable-stream-stats: no
Make sure that there are enough host-buffers declared in ntservice.ini to accommodate the number of cores/streams
being used.
21.2.13 Support
Contact a support engineer at: [email protected]
Napatech Documentation can be found at: https://docs.napatech.com (Click the search icon, with no search text, to see
all documents in the portal.)
21.3 Myricom
From: https://blog.inliniac.net/2012/07/10/suricata-on-myricom-capture-cards/
In this guide I'll describe using the Myricom libpcap support. I'm going to assume you installed the card properly,
installed the Sniffer driver and made sure that all works. Make sure dmesg shows that the card is in sniffer mode:
make
sudo make install
Next, configure the amount of ringbuffers. I'm going to work with 8 here, as my quad core + hyper threading has 8
logical CPUs. See below for additional information about the buffer-size parameter.
pcap:
- interface: eth5
threads: 8
buffer-size: 512kb
checksum-checks: no
The 8 threads setting causes Suricata to create 8 reader threads for eth5. The Myricom driver makes sure each of those
is attached to its own ringbuffer.
Then start Suricata as follows:
If you want 16 ringbuffers, update the "threads" variable in the Suricata configuration file to 16 and start Suricata:
Note that the pcap.buffer-size configuration setting shown above is currently ignored when using Myricom cards.
The value is passed through to the pcap_set_buffer_size libpcap API within the Suricata source code. From
Myricom support:
"The libpcap interface to Sniffer10G ignores the pcap_set_buffer_size() value. The call␣
˓→to snf_open() uses zero as the dataring_size which informs the Sniffer library to use␣
˓→a default value or the value from the SNF_DATARING_SIZE environment variable."
The following pull request opened by Myricom in the libpcap project indicates that a future SNF software release could
provide support for setting the SNF_DATARING_SIZE via the pcap.buffer-size yaml setting:
• https://github.com/the-tcpdump-group/libpcap/pull/435
Until then, the data ring and descriptor ring values can be explicitly set using the SNF_DATARING_SIZE and
SNF_DESCRING_SIZE environment variables, respectively.
The SNF_DATARING_SIZE is the total amount of memory to be used for storing incoming packet data. This size
is shared across all rings. The SNF_DESCRING_SIZE is the total amount of memory to be used for storing meta
information about the packets (packet lengths, offsets, timestamps). This size is also shared across all rings.
Myricom recommends that the descriptor ring be 1/4 the size of the data ring, but the ratio can be modified based on
your traffic profile. If not set explicitly, Myricom uses the following default values: SNF_DATARING_SIZE = 256MB,
and SNF_DESCRING_SIZE = 64MB
Expanding on the 16 thread example above, you can start Suricata with a 16GB Data Ring and a 4GB Descriptor Ring
using the following command:
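The ring sizes are given in bytes via the environment variables; a sketch (the capture interface name and Suricata arguments are examples only):

```shell
# 16 GB data ring and 4 GB descriptor ring (the recommended 1/4 ratio)
export SNF_DATARING_SIZE=$((16 * 1024 * 1024 * 1024))
export SNF_DESCRING_SIZE=$((4 * 1024 * 1024 * 1024))
echo "$SNF_DATARING_SIZE $SNF_DESCRING_SIZE"
# then start Suricata as usual, e.g.:
# suricata --pcap=eth5 -D
```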
The bypass implementation relies on one of the most powerful concepts of eBPF: maps. A map is a data structure shared
between user space and kernel space/hardware. It allows user space and kernel space to interact and pass information.
Maps are often implemented as arrays or hash tables that can contain arbitrary key, value pairs.
XDP
XDP provides another Linux native way of optimising Suricata's performance on sniffing high speed networks:
XDP or eXpress Data Path provides a high performance, programmable network data path in the Linux
kernel as part of the IO Visor Project. XDP provides bare metal packet processing at the lowest point in
the software stack which makes it ideal for speed without compromising programmability. Furthermore,
new functions can be implemented dynamically with the integrated fast path without kernel modification.
More info about XDP:
• IOVisor's XDP page
• Cilium's BPF and XDP reference guide
21.4.2 Requirements
You will need a kernel that supports XDP and, for the best performance improvement, a network card that supports
XDP in the driver.
Suricata XDP code has been tested with 4.13.10 but 4.15 or later is necessary to use all features like the CPU redirect
map.
If you are using an Intel network card, you will need to stay with the in-tree kernel NIC drivers. The out-of-tree
drivers do not contain the XDP support.
Having a network card with support for RSS symmetric hashing is a plus; otherwise you will have to use the XDP CPU
redirect map feature.
21.4.3 Prerequisites
This guide has been confirmed on Debian/Ubuntu "LTS" Linux.
Disable irqbalance
irqbalance may cause issues in most setups described here, so it is recommended to deactivate it.
Kernel
You need to run a kernel 4.13 or newer.
libbpf
Suricata uses libbpf to interact with eBPF and XDP
cd libbpf/src/
make && sudo make install
In some cases your system will not find the libbpf library that is installed under /usr/lib64 so you may need to modify
your ldconfig configuration.
./autogen.sh
Then you need to add the eBPF flags to configure and specify the Clang compiler for building all C sources, including
the eBPF programs:
The clang compiler is needed if you want to build eBPF files, as the build is done via a specific eBPF backend available
only in the llvm/clang suite. If you don't want to use Clang for building Suricata itself, you can still specify it
separately, using the --with-clang parameter.
stream:
bypass: true
This will bypass flows as soon as the stream depth is reached.
If you want, you can also bypass encrypted flows by setting encryption-handling to bypass in the app-layer tls section:
app-layer:
protocols:
tls:
enabled: yes
detection-ports:
dp: 443
encryption-handling: bypass
Another solution is to use a set of signatures with the bypass keyword to obtain a selective bypass. Suricata traffic
ID defines flowbits that can be used in other signatures. For instance, one could use:
alert any any -> any any (msg:"bypass Skype"; flowbits:isset,traffic/id/skype; noalert; bypass; sid:1000001; rev:1;)
cp ebpf/vlan_filter.bpf /usr/libexec/suricata/ebpf/
- interface: eth3
threads: 16
cluster-id: 97
cluster-type: cluster_flow # choose any type suitable
defrag: yes
# eBPF file containing a 'filter' function that will be inserted into the
# kernel and used as load balancing function
ebpf-filter-file: /usr/libexec/suricata/ebpf/vlan_filter.bpf
use-mmap: yes
ring-size: 200000
- interface: eth3
threads: 16
cluster-id: 97
cluster-type: cluster_qm # symmetric RSS hashing is mandatory to use this mode
# eBPF file containing a 'filter' function that will be inserted into the
# kernel and used as packet filter function
ebpf-filter-file: /usr/libexec/suricata/ebpf/bypass_filter.bpf
bypass: yes
use-mmap: yes
ring-size: 200000
The constraints on eBPF code for bypass-compliant filters are stricter than for regular filters. The filter must expose
per-CPU array maps named flow_table_v4 and flow_table_v6, with definitions similar to the ones available in bypass_filter.c.
These two maps will be accessed and maintained by Suricata to handle the lists of flows to bypass.
If you are not using VLAN tracking (vlan.use-for-tracking set to false in suricata.yaml) then you also have to set
the VLAN_TRACKING define to 0 in bypass_filter.c.
cp ebpf/lb.bpf /usr/libexec/suricata/ebpf/
Then use cluster_ebpf as the load balancing method in the interface section of af-packet and point the ebpf-lb-file
variable to the lb.bpf file:
- interface: eth3
threads: 16
cluster-id: 97
cluster-type: cluster_ebpf
defrag: yes
# eBPF file containing a 'loadbalancer' function that will be inserted into the
# kernel and used as load balancing function
ebpf-lb-file: /usr/libexec/suricata/ebpf/lb.bpf
use-mmap: yes
ring-size: 200000
cp ebpf/xdp_filter.bpf /usr/libexec/suricata/ebpf/
- interface: eth3
threads: 16
cluster-id: 97
cluster-type: cluster_qm # symmetric hashing is a must!
defrag: yes
# Xdp mode, "soft" for skb based version, "driver" for network card based
# and "hw" for card supporting eBPF.
xdp-mode: driver
xdp-filter-file: /usr/libexec/suricata/ebpf/xdp_filter.bpf
# if the ebpf filter implements a bypass function, you can set 'bypass' to
# yes and benefit from this feature
bypass: yes
use-mmap: yes
ring-size: 200000
# Uncomment the following if you are using hardware XDP with
# a card like Netronome (default value is yes)
# use-percpu-hash: no
XDP bypass is compatible with AF_PACKET IPS mode. Packets from bypassed flows will be sent directly from one
card to the other without going through the kernel network stack.
If you are using hardware XDP offload you may have to set use-percpu-hash to false and build and install the XDP
filter file after setting USE_PERCPU_HASH to 0.
In the XDP filter file, you can set ENCRYPTED_TLS_BYPASS to 1 if you want to bypass the encrypted TLS 1.2 packets in
the eBPF code. Be aware that Suricata will then be blind to packets on port 443 that match the pattern.
If you are not using VLAN tracking (vlan.use-for-tracking set to false in suricata.yaml) then you also have to set
the VLAN_TRACKING define to 0 in xdp_filter.c.
Use in-tree kernel drivers: XDP support is not available in the Intel drivers available on the Intel website.
Enable symmetric hashing:
/sbin/ethtool -X eth3 hkey 6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A equal 16
In the above setup you are free to use any recent set_irq_affinity script. It is available in any Intel x520/710 NIC
driver source download.
NOTE: We use a special low entropy key for the symmetric hashing. More info about the research behind this symmetric
hashing setup is available online.
for i in rx tx tso ufo gso gro lro tx-nocache-copy sg txvlan rxvlan; do
/sbin/ethtool -K eth3 $i off 2>&1 > /dev/null;
done
for proto in tcp4 udp4 ah4 esp4 sctp4 tcp6 udp6 ah6 esp6 sctp6; do
/sbin/ethtool -N eth3 rx-flow-hash $proto sd
done
This command triggers load balancing using only source and destination IPs. This may not be optimal in terms of load-balancing
fairness, but it ensures all packets of a flow will reach the same thread, even in the case of IP fragmentation
(where source and destination ports will not be available for some fragmented packets).
In case your system has more than 64 cores, you need to set CPUMAP_MAX_CPUS to a value greater than this number
in xdp_lb.c and xdp_filter.c.
A sample configuration for pure XDP load balancing could look like
- interface: eth3
threads: 16
cluster-id: 97
cluster-type: cluster_cpu
xdp-mode: driver
It is possible to use xdp_monitor to get information about the behavior of the CPU redirect. This program is available in
the Linux tree under the samples/bpf directory and will be built by the make command. Sample output is the following:
sudo ./xdp_monitor --stats
XDP-event CPU:to pps drop-pps extra-info
XDP_REDIRECT 11 2,880,212 0 Success
XDP_REDIRECT total 2,880,212 0 Success
XDP_REDIRECT total 0 0 Error
cpumap-enqueue 11:0 575,954 0 5.27 bulk-average
cpumap-enqueue sum:0 575,954 0 5.27 bulk-average
cpumap-kthread 0 575,990 0 56,409 sched
cpumap-kthread 1 576,090 0 54,897 sched
Confirm you have the XDP filter engaged in the output (example):
...
...
(runmode-af-packet.c:220) <Config> (ParseAFPConfig) -- Enabling locked memory for mmap on iface eth3
...
...
- interface: eth3
pinned-maps: true
- interface: eth3
pinned-maps: true
pinned-maps-name: ipv4_drop
xdp-filter-file: /usr/libexec/suricata/ebpf/xdp_filter.bpf
If XDP bypass is used in IPS mode, stopping Suricata will trigger an interruption in the traffic. To fix that, the provided
XDP filter xdp_filter.bpf contains a map that will trigger a global bypass if set to 1. You need to use pinned-maps
to benefit from this feature.
To use it you need to set #define USE_GLOBAL_BYPASS 1 (instead of 0) in the xdp_filter.c file, rebuild the eBPF
code, and install the eBPF file in the correct place. If you write 1 at key 0, the XDP filter will switch to global
bypass mode. Set key 0 to value 0 to send traffic to Suricata.
The switch must be activated on all sniffing interfaces. For an interface named eth0 the global switch map will be
/sys/fs/bpf/suricata-eth0-global_bypass.
#define BUILD_CPUMAP 0
/* Increase CPUMAP_MAX_CPUS if ever you have more than 64 CPUs */
#define CPUMAP_MAX_CPUS 64
#define USE_PERCPU_HASH 0
#define GOT_TX_PEER 0
Then build the bpf file with make and install it in the expected place.
The Suricata configuration is rather simple as you need to activate hardware mode and the use-percpu-hash option in
the af-packet configuration of the interface
xdp-mode: hw
use-percpu-hash: no
The load balancing will be done on IP pairs inside the eBPF code, so using cluster_qm as cluster type is a good idea
cluster-type: cluster_qm
As of Linux 4.19, the number of threads must be a power of 2. So set the threads variable of the af-packet interface to a
power of 2, and in the eBPF filter set the following variable accordingly:
#define RSS_QUEUE_NUMBERS 32
The iface-bypassed-stats command will return the number of elements in the IPv4 and IPv6 flow tables for each interface:
# suricatasc
>>> iface-bypassed-stats
Success:
{
...
}
}
The stats entry also contains a stats.flow_bypassed object that has local and capture bytes and packets counters as well
as a bypassed and closed flow counter
{
"local_pkts": 0,
"local_bytes": 0,
"local_capture_pkts": 20,
"local_capture_bytes": 25000,
"closed": 84,
"pkts": 4799,
"bytes": 2975133
}
local_pkts and local_bytes are for Suricata-bypassed flows. This can be because local bypass is used or because the
capture method cannot bypass more flows. pkts and bytes are counters coming from the capture method. They can
take some time to appear due to the accounting at timeout. local_capture_pkts and local_capture_bytes are counters
for packets that are seen by Suricata before the capture method effectively bypasses the traffic. There are almost always
some for each flow because of the buffer in front of Suricata's reading threads.
21.5 Netmap
Netmap is a high speed capture framework for Linux and FreeBSD. In Linux it is available as an external module, while
in FreeBSD 11+ it is available by default.
Linux
On Linux, NETMAP is not included by default. It can be pulled from GitHub. Follow the installation instructions
included in the NETMAP repository.
When NETMAP is installed, add --enable-netmap to the configure line. If the includes are not added to a standard
location, the location can be specified when configuring Suricata.
Example:
Warning:
the interface that netmap reads from will become unavailable for normal network operations. You can lock yourself
out of your system.
IDS
Suricata can be started in 2 ways to use netmap:
suricata --netmap=<interface>
suricata --netmap=igb0
In the above example Suricata will start reading from the igb0 network interface. The number of threads created
depends on the number of RSS queues available on the NIC.
suricata --netmap
In the above example Suricata will take the netmap block from the Suricata configuration and open each of the interfaces
listed.
netmap:
- interface: igb0
threads: 2
- interface: igb1
threads: 4
For the above configuration, both igb0 and igb1 would be opened. With 2 threads for igb0 and 4 capture threads for
igb1.
Warning:
This multi threaded setup only works correctly if the NIC has symmetric RSS hashing. If this is not the case,
consider using the 'lb' method below.
IPS
Suricata's Netmap based IPS mode is based on the concept of creating a layer 2 software bridge between 2 interfaces.
Suricata reads packets on one interface and transmits them on another.
Packets that are blocked by the IPS policy, are simply not transmitted.
netmap:
- interface: igb0
copy-mode: ips
copy-iface: igb1
- interface: igb1
Note:
yaml:
netmap:
- interface: netmap:suricata
threads: 6
startup:
suricata --netmap=netmap:suricata
The interface name as passed to Suricata includes a 'netmap:' prefix. This tells Suricata that it's going to read from
netmap pipes instead of a real interface.
Then Zeek (formerly Bro) can be configured to load 6 instances. Both will get a copy of the same traffic. The number
of netmap pipes does not have to be equal for both tools.
FreeBSD 11
On FreeBSD 11 the named pipe is not available.
starting lb:
lb -i eth0 -p 6
yaml:
netmap:
- interface: netmap:eth0
threads: 6
startup:
suricata --netmap
Note:
Single NIC
When an interface enters NETMAP mode, it is no longer available to the OS for other operations. This can be undesirable
in certain cases, but there is a workaround.
By running Suricata in a special inline mode, the interface will show its traffic to the OS.
netmap:
- interface: igb0
copy-mode: tap
copy-iface: igb0^
- interface: igb0^
copy-mode: tap
copy-iface: igb0
The copy-mode can be either 'tap' or 'ips', where the former never drops packets based on the policies in use, and the
latter may drop packets.
Note:
This set up can also be used to mix NETMAP with firewall setups like pf or ipfw.
VALE switches
VALE is a virtual switch that can be used to create an all-virtual network or a mix of virtual and real NICs.
A simple all virtual setup:
vale-ctl -n vi0
vale-ctl -a vale0:vi0
vale-ctl -n vi1
vale-ctl -a vale0:vi1
We now have a virtual switch "vale0" with 2 ports "vi0" and "vi1".
We can start Suricata to listen on one of the ports:
suricata --netmap=vale0:vi1
Then we can
netmap:
- interface: igb0
copy-mode: tap
copy-iface: igb1
- interface: igb1
copy-mode: tap
copy-iface: igb0
The only difference with the IPS mode is that the copy-mode setting is set to tap.
21.6 AF_XDP
AF_XDP (eXpress Data Path) is a high speed capture framework for Linux that was introduced in Linux v4.18.
AF_XDP aims at improving capture performance by redirecting ingress frames to user-space memory rings, thus
bypassing the network stack.
Note that during af_xdp operation the selected interface cannot be used for regular network usage.
Further reading:
• https://www.kernel.org/doc/html/latest/networking/af_xdp.html
This feature is enabled provided the libraries above are installed; the user does not need to add any additional command
line options.
The command line option --disable-af-xdp can be used to disable this feature.
Example:
./configure --disable-af-xdp
af-xdp:
suricata --af-xdp=<interface>
suricata --af-xdp=igb0
In the above example Suricata will start reading from the igb0 network interface.
af-xdp:
threads: <number>
threads: auto
threads: 8
force-xdp-mode
There are two operating modes employed when loading the XDP program:
• XDP_DRV: Mode chosen when the driver supports AF_XDP
• XDP_SKB: Mode chosen when driver support for AF_XDP is unavailable
XDP_DRV mode is the preferred mode, used to ensure best performance.
af-xdp:
force-xdp-mode: <value> where: value = <skb|drv|none>
force-xdp-mode: drv
force-bind-mode
During binding the kernel will first attempt to use zero-copy (preferred). If zero-copy support is unavailable, it will
fall back to copy mode, copying all packets out to user space.
af-xdp:
force-bind-mode: <value> where: value = <copy|zero|none>
force-bind-mode: zero
For both options, the kernel will attempt the 'preferred' option first and fall back upon failure. Therefore the default
(none) means the kernel has control of which option to apply. By configuring these options the user forces the given
option. Note that if set, the bind will only attempt that option; upon failure the bind fails, i.e. there is no fallback.
mem-unaligned
AF_XDP can operate in two memory alignment modes, these are:
• Aligned chunk mode
• Unaligned chunk mode
Aligned chunk mode is the default option which ensures alignment of the data within the UMEM.
Unaligned chunk mode uses hugepages for the UMEM. Hugepages start at the size of 2MB but they can be as large as
1GB. A lower count of pages (memory chunks) allows faster lookup of page entries. The hugepages need to be allocated
on the NUMA node where the NIC and CPU reside. Otherwise, if the hugepages are allocated only on NUMA node 0
and the NIC is connected to NUMA node 1, the application will fail to start. Therefore, it is recommended to first
find out which NUMA node the NIC is connected to and only then allocate hugepages and set CPU core affinity to
that NUMA node.
Memory assigned per socket/thread is 16MB, so each worker thread requires at least 16MB of free space. As stated
above, hugepages can be of various sizes; consult the OS to confirm with cat /proc/meminfo.
Example
af-xdp:
mem-unaligned: <yes/no>
mem-unaligned: yes
Introduced in Linux v5.11, the SO_PREFER_BUSY_POLL option has been added to AF_XDP to allow true polling
of the socket queues. This feature was introduced to reduce context switching and improve CPU reaction time
during traffic reception.
Enabled by default, this feature will apply the options below unless disabled. The following options are used to
configure this feature.
enable-busy-poll
Enables or disables busy polling.
af-xdp:
enable-busy-poll: <yes/no>
enable-busy-poll: yes
busy-poll-time
Sets the approximate time in microseconds to busy poll on a blocking receive when there is no data.
af-xdp:
busy-poll-time: <time>
busy-poll-time: 20
busy-poll-budget
Budget allowed for batching of ingress frames. Larger values mean more frames can be stored/read. It is recommended
to test this for performance.
af-xdp:
busy-poll-budget: <budget>
busy-poll-budget: 64
Linux tunables
The SO_PREFER_BUSY_POLL option works in concert with the following two Linux knobs to ensure best capture per-
formance. These are not socket options:
• gro-flush-timeout
• napi-defer-hard-irq
The purpose of these two knobs is to defer interrupts and to allow the NAPI context to be scheduled from a watchdog
timer instead.
The gro-flush-timeout indicates the timeout period for the watchdog timer. When no traffic is received for
gro-flush-timeout the timer will exit and softirq handling will resume.
The napi-defer-hard-irq indicates the number of queue scan attempts before exiting to interrupt context. When
enabled, the softirq NAPI context will exit early, allowing busy polling.
af-xdp:
gro-flush-timeout: 2000000
napi-defer-hard-irq: 2
/sbin/ethtool -X eth3 hkey 6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A equal 16
ethtool -x eth3
ethtool -n eth3
In the above setup you are free to use any recent set_irq_affinity script. It is available in any Intel x520/710 NIC
driver source download.
NOTE: We use a special low entropy key for the symmetric hashing. More info about the research behind this symmetric
hashing setup is available online.
capture:
# disable NIC offloading. It's restored when Suricata exits.
# Enabled by default.
#disable-offloading: false
for proto in tcp4 udp4 ah4 esp4 sctp4 tcp6 udp6 ah6 esp6 sctp6; do
/sbin/ethtool -N eth3 rx-flow-hash $proto sd
done
This command triggers load balancing using only source and destination IPs. This may not be optimal in terms of load-balancing
fairness, but it ensures all packets of a flow will reach the same thread, even in the case of IP fragmentation
(where source and destination ports will not be available for some fragmented packets).
21.7 DPDK
21.7.1 Introduction
The Data Plane Development Kit (DPDK) is a set of libraries and drivers that enhance and speed up packet processing
in the data plane. Its primary use is to provide faster packet processing by bypassing the kernel network stack, which
can provide significant performance improvements. For detailed instructions on the basic DPDK setup, please refer to
Suricata.yaml. The following sections contain examples of how to set up DPDK and Suricata for less common
use-cases.
# global check
cat /proc/meminfo
HugePages_Total: 1024
HugePages_Free: 1024
After the termination of Suricata and other hugepage-related applications, if the count of free hugepages is not equal
to the total number of hugepages, it indicates some hugepages were not freed completely. This can be fixed by
removing DPDK-related files from the hugepage-mounted directory (filesystem). It's important to exercise caution
while removing hugepages, especially when other hugepage-dependent applications are in operation, as this action will
disrupt their memory functionality. Removing the DPDK files from the hugepage directory can often be done as:
...
dpdk:
eal-params:
proc-type: primary
vdev: 'net_bonding0,mode=0,slave=0000:04:00.0,slave=0000:04:00.1'
In the DPDK part of suricata.yaml we have added a new parameter to the eal-params section for virtual devices - vdev.
DPDK Environment Abstraction Layer (EAL) can initialize some virtual devices during the initialization of EAL. In
this case, EAL creates a new device of type net_bonding. The suffix after net_bonding is the name of the interface (in
this case zero). Extra arguments are passed after the device name, such as the bonding mode (mode=0). This is the
round-robin mode, as described in the DPDK documentation of the Bond PMD. Members (slaves) of the net_bonding0
interface are appended after the bonding mode parameter.
When the device is specified within EAL parameters, it can be used within Suricata interfaces list. Note that the list
doesn't contain PCIe addresses of the physical ports but instead the net_bonding0 interface. The threading section is also
adjusted according to the items in the interfaces list by enabling set-cpu-affinity and listing the CPUs that should be used
in the management and worker CPU sets.
...
threading:
set-cpu-affinity: yes
cpu-affinity:
- management-cpu-set:
cpu: [ 0 ] # include only these CPUs in affinity settings
- receive-cpu-set:
cpu: [ 0 ] # include only these CPUs in affinity settings
- worker-cpu-set:
cpu: [ 2,4,6,8 ]
...
...
dpdk:
eal-params:
proc-type: primary
interfaces:
- interface: 0000:3b:00.0
interrupt-mode: true
threads: 4
TWENTYTWO
INTERACTING VIA UNIX SOCKET
22.1 Introduction
Suricata can listen to a unix socket and accept commands from the user. The exchange protocol is JSON-based and the
format of the message is generic.
An example script called suricatasc is provided in the source and installed automatically when installing/updating
Suricata.
The unix socket is always enabled by default.
You'll need to have JSON support in Python:
• python-simplejson - simple, fast, extensible JSON encoder/decoder for Python
Debian/Ubuntu:
The creation of the socket is managed by setting enabled to 'yes' or 'auto' under unix-command in Suricata YAML
configuration file:
unix-command:
enabled: yes
#filename: custom.socket # use this to specify an alternate file
The filename variable can be used to set an alternate socket filename. The filename is always relative to the local
state base directory.
Clients are implemented for some programming languages and can be used as code examples to write custom scripts:
• Python: https://github.com/OISF/suricata/blob/master/python/suricata/sc/suricatasc.py (provided with Suricata
and used in this document)
• Perl: https://github.com/aflab/suricatac (a simple Perl client with interactive mode)
• C: https://github.com/regit/SuricataC (a Unix socket mode client in C without interactive mode)
cd python
sudo ./bin/suricatasc
Suricata User Guide, Release 8.0.0-dev
# suricatasc
Command list: shutdown, command-list, help, version, uptime, running-mode, capture-mode,␣
˓→conf-get, dump-counters, iface-stat, iface-list, quit
>>> iface-list
Success: {'count': 2, 'ifaces': ['eth0', 'eth1']}
NOTE: You need to quote commands with more than one argument:
suricata --unix-socket=custom.socket
In this last case, you will need to provide the complete path to the socket to suricatasc. To do so, pass
the filename as the first argument of suricatasc:
suricatasc custom.socket
Once Suricata is started, you can use suricatasc to connect to the command socket and provide different pcap files:
root@tiger:~# suricatasc
>>> pcap-file /home/benches/file1.pcap /tmp/file1
Success: Successfully added file to list
>>> pcap-file /home/benches/file2.pcap /tmp/file2
Success: Successfully added file to list
>>> pcap-file-continuous /home/pcaps /tmp/dirout
Success: Successfully added file to list
You can add multiple files without waiting for each to be processed; they will be sequentially processed and the
generated log/alert files will be put into the directory specified as second argument of the pcap-file command. You need
to provide an absolute path to the files and directory as Suricata doesn't know from where the script has been run. If
you pass a directory instead of a file, all files in the directory will be processed. If using pcap-file-continuous
and passing in a directory, the directory will be monitored for new files being added until you use pcap-interrupt
or delete/move the directory.
To display how many files are waiting to get processed, you can do:
>>> pcap-file-number
Success: 3
>>> pcap-file-list
Success: {'count': 2, 'files': ['/home/benches/file1.pcap', '/home/benches/file2.pcap']}
>>> pcap-current
Success:
"/tmp/test.pcap"
When passing in a directory, you can see last processed time (modified time of last file) in milliseconds since epoch:
>>> pcap-last-processed
Success:
1509138964000
>>> pcap-interrupt
Success:
"Interrupted"
# suricatasc
SND: {"version": "0.1"}
RCV: {"return": "OK"}
>>> iface-list
SND: {"command": "iface-list"}
RCV: {"message": {"count": 1, "ifaces": ["wlan0"]}, "return": "OK"}
Success: {'count': 1, 'ifaces': ['wlan0']}
>>> iface-stat wlan0
SND: {"command": "iface-stat", "arguments": {"iface": "wlan0"}}
There is one thing to be careful about: a Suricata message is sent in multiple send operations. This can result in an
incomplete read on the client side. The worst workaround is to sleep a bit before trying a recv call. Another solution is to
use a non-blocking socket and retry a recv if the previous one has failed.
Pcap-file json format is:
{
"command": "pcap-file",
"arguments": {
"output-dir": "path to output dir",
"filename": "path to file or directory to run",
"tenant": 0,
"continuous": false,
"delete-when-done": false
}
}
output-dir and filename are required. tenant is optional and should be a number, indicating which tenant the file or
directory should run under. continuous is optional and should be true/false, indicating that file or directory should be
run until pcap-interrupt is sent or ctrl-c is invoked. delete-when-done is optional and should be true/false, indicating
that the file or files under the directory specified by filename should be deleted when processing is complete.
delete-when-done defaults to false, indicating files will be kept after processing.
TWENTYTHREE
3RD PARTY INTEGRATION
For TLS traffic where the appliance security policy does not lead to decryption of the traffic, the TLS handshake is
presented to Suricata for analysis and logging.
23.1.5 IPS
When using Suricata in IPS mode with the appliance, some things will have to be considered:
• if Suricata DROPs a packet in the decrypted traffic, this will be seen by the appliance after which it will trigger
a RST session teardown.
• if a packet takes more than one second to process, it will automatically be considered a DROP by the appliance.
This should not happen in normal traffic, but with very inefficient Lua scripts this could perhaps happen. The
appliance can also be configured to wait for 5 seconds.
• When using the Suricata 'replace' keyword to modify data, be aware that the 3.x appliance software will not pass
the modification on to the destination so this will not have any effect. The 4.x appliance software does support
passing on modifications that were made to the unencrypted text, by default this feature is disabled but you can
enable it if you want modifications to be passed on to the destination in the re-encrypted stream. Due to how
Suricata works, the size of the payloads cannot be changed.
TWENTYFOUR
MAN PAGES
24.1 Suricata
24.1.1 SYNOPSIS
suricata [OPTIONS] [BPF FILTER]
24.1.2 DESCRIPTION
suricata is a high performance Network IDS, IPS and Network Security Monitoring engine. Open Source and owned
by a community-run non-profit foundation, the Open Information Security Foundation (OISF).
suricata can be used to analyze live traffic and pcap files. It can generate alerts based on rules. suricata will generate
traffic logs.
When used with live traffic suricata can be passive or active. Active modes are: inline in an L2 bridge setup, inline with
L3 integration with a host firewall (NFQ, IPFW, WinDivert), or out of band using active responses.
24.1.3 OPTIONS
-h
Display a brief usage overview.
-V
Displays the version of Suricata.
-c <path>
Path to configuration file.
--include <path>
Additional configuration files to include. Multiple additional configuration files can be provided and will be
included in the order specified on the command line. These additional configuration files are loaded as if they
existed at the end of the main configuration file.
Example including one additional file:
--include /etc/suricata/other.yaml
-T
Test configuration.
-v
Increase the verbosity of the Suricata application logging by increasing the log level from the default. This option
can be passed multiple times to further increase the verbosity.
• -v: INFO
• -vv: PERF
• -vvv: CONFIG
• -vvvv: DEBUG
This option will not decrease the log level set in the configuration file if it is already more verbose than the level
requested with this option.
-r <path>
Run in pcap offline mode (replay mode), reading the given pcap file. If <path> specifies a directory, all files in
that directory will be processed in order of modified time, maintaining flow state between files.
--pcap-file-continuous
Used with the -r option to indicate that the mode should stay alive until interrupted. This is useful with directories
to add new files and not reset flow state between files.
--pcap-file-recursive
Used with the -r option when the path provided is a directory. This option enables recursive traversal into sub-
directories to a maximum depth of 255. This option cannot be combined with --pcap-file-continuous. Symlinks
are ignored.
--pcap-file-delete
Used with the -r option to indicate that the mode should delete pcap files after they have been processed. This is
useful with pcap-file-continuous to continuously feed files to a directory and have them cleaned up when done.
If this option is not set, pcap files will not be deleted after processing.
--pcap-file-buffer-size <value>
Set read buffer size using setvbuf to speed up pcap reading. Valid values are 4 KiB to 64 MiB. Default value
is 128 KiB. Supported on Linux only.
-i <interface>
After the -i option you can enter the interface card you would like to use to sniff packets from. This option will
try to use the best capture method available. Can be used several times to sniff packets from several interfaces.
--pcap[=<device>]
Run in PCAP mode. If no device is provided the interfaces provided in the pcap section of the configuration file
will be used.
--af-packet[=<device>]
Enable capture of packet using AF_PACKET on Linux. If no device is supplied, the list of devices from the
af-packet section in the yaml is used.
--af-xdp[=<device>]
Enable capture of packet using AF_XDP on Linux. If no device is supplied, the list of devices from the af-xdp
section in the yaml is used.
-q <queue id>
Run inline of the NFQUEUE queue ID provided. May be provided multiple times.
-s <filename.rules>
With the -s option you can set a file with signatures, which will be loaded together with the rules set in the yaml.
It is possible to use globbing when specifying rules files. For example, -s '/path/to/rules/*.rules'
-S <filename.rules>
With the -S option you can set a file with signatures, which will be loaded exclusively, regardless of the rules set
in the yaml.
It is possible to use globbing when specifying rules files. For example, -S '/path/to/rules/*.rules'
-l <directory>
With the -l option you can set the default log directory. If you already have the default-log-dir set in yaml, it will
not be used by Suricata if you use the -l option. It will use the log dir that is set with the -l option. If you do not
set a directory with the -l option, Suricata will use the directory that is set in yaml.
-D
Normally if you run Suricata on your console, it keeps your console occupied. You cannot use it for other
purposes, and when you close the window, Suricata stops running. If you run Suricata as a daemon (using the -D
option), it runs in the background and you will be able to use the console for other tasks without disturbing the
engine.
--runmode <runmode>
With the --runmode option you can set the runmode that you would like to use. This command line option can
override the yaml runmode option.
Runmodes are: workers, autofp and single.
For more information about runmodes see Runmodes in the user guide.
-F <bpf filter file>
Use BPF filter from file.
-k [all|none]
Force (all) the checksum check or disable (none) all checksum checks.
--user=<user>
Set the process user after initialization. Overrides the user provided in the run-as section of the configuration
file.
--group=<group>
Set the process group to group after initialization. Overrides the group provided in the run-as section of the
configuration file.
--pidfile <file>
Write the process ID to file. Overrides the pid-file option in the configuration file and forces the file to be written
when not running as a daemon.
--init-errors-fatal
Exit with a failure when errors are encountered loading signatures.
--strict-rule-keywords[=all|<keyword>|<keywords(csv)>]
Applies to: classtype, reference and app-layer-event.
By default missing reference or classtype values are warnings and not errors. Additionally, loading outdated
app-layer-event events is also not treated as an error, but as a warning instead.
If this option is enabled these warnings are considered errors.
If no value, or the value 'all', is specified, the option applies to all of the keywords above. Alternatively, a comma
separated list can be supplied with the keyword names it should apply to.
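For instance, making only classtype and reference problems fatal might look like this (the rules path is illustrative):

```shell
# Treat missing classtype/reference values as errors while loading rules
suricata --strict-rule-keywords=classtype,reference -S '/path/to/rules/*.rules'
```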
--disable-detection
Disable the detection engine.
--disable-hashing
Disable support for hash algorithms such as md5, sha1 and sha256.
By default hashing is enabled. Disabling hashing will also disable some Suricata features such as the filestore,
ja3, and rule keywords that use hash algorithms.
--dump-config
Dump the configuration loaded from the configuration file to the terminal and exit.
--dump-features
Dump the features provided by Suricata modules and exit. The feature list includes (a subset of) the configuration values and is intended to assist with comparing provided features with those required by one or more rules.
--build-info
Display the build information Suricata was built with.
--list-app-layer-protos
List all supported application layer protocols.
--list-keywords=[all|csv|<kword>]
List all supported rule keywords.
--list-runmodes
List all supported run modes.
--set <key>=<value>
Set a configuration value. Useful for overriding basic configuration parameters. For example, to change the
default log directory:
--set default-log-dir=/var/tmp
This option cannot be used to add new entries to a list in the configuration file, such as a new output. It can only
be used to modify a value in a list that already exists.
For example, to disable the eve-log in the default configuration file:
--set outputs.1.eve-log.enabled=no
Also note that the index values may change as the suricata.yaml is updated.
See the output of --dump-config for existing values that could be modified with their index.
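For example, one way to locate the list index of an output entry before overriding it (the grep pattern is illustrative):

```shell
# Find the index of the eve-log output in the loaded configuration
suricata --dump-config | grep "eve-log"
```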
--engine-analysis
Print reports on analysis of different sections in the engine and exit. See the configuration parameter
engine-analysis for the reports that can be printed.
--unix-socket=<file>
Use file as the Suricata unix control socket. Overrides the filename provided in the unix-command section of the
configuration file.
--reject-dev=<device>
Use device to send out RST / ICMP error packets with the reject keyword.
--pcap-buffer-size=<size>
Set the size of the PCAP buffer (0 - 2147483647).
--netmap[=<device>]
Enable capture of packet using NETMAP on FreeBSD or Linux. If no device is supplied, the list of devices from
the netmap section in the yaml is used.
--pfring[=<device>]
Enable PF_RING packet capture. If no device provided, the devices in the Suricata configuration will be used.
--pfring-cluster-id <id>
Set the PF_RING cluster ID.
--pfring-cluster-type <type>
Set the PF_RING cluster type (cluster_round_robin, cluster_flow).
-d <divert-port>
Run inline using IPFW divert mode.
--dag <device>
Enable packet capture off a DAG card. If capturing off a specific stream, the stream can be selected using a device
name like "dag0:4". This option may be provided multiple times to read from multiple devices and/or streams.
--napatech
Enable packet capture using the Napatech Streams API.
--erf-in=<file>
Run in offline mode reading the specific ERF file (Endace extensible record format).
--simulate-ips
Simulate IPS mode when running in a non-IPS mode.
24.1.5 SIGNALS
Suricata will respond to the following signals:
SIGUSR2
Causes Suricata to perform a live rule reload.
SIGHUP
Causes Suricata to close and re-open all log files. This can be used to re-open log files after they may have
been moved away by log rotation utilities.
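Assuming a pidfile at the path shown (illustrative; see the --pidfile option and the pid-file configuration setting), the signals could be sent like this:

```shell
# Trigger a live rule reload
kill -USR2 "$(cat /var/run/suricata.pid)"

# Re-open log files after external log rotation
kill -HUP "$(cat /var/run/suricata.pid)"
```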
24.1.7 EXAMPLES
To capture live traffic from interface eno1:
suricata -i eno1
To analyze a pcap file:
suricata -r /path/to/capture.pcap
To capture using AF_PACKET and override the flow memcap setting from the suricata.yaml:
suricata --af-packet=eno1 --set flow.memcap=1gb
24.1.8 BUGS
Please visit Suricata's support page for information about submitting bugs or feature requests.
24.1.9 NOTES
• Suricata Home Page
https://suricata.io/
• Suricata Support Page
https://suricata.io/support/
24.2.2 DESCRIPTION
Suricata socket control tool
24.2.3 COMMANDS
shutdown
Shut Suricata instance down.
command-list
List available commands.
help
Get help about the available commands.
version
Print the version of Suricata instance.
uptime
Display the uptime of Suricata.
running-mode
Display running mode. This can either be workers, autofp or single.
capture-mode
Display the capture mode. This can be either of PCAP_DEV, PCAP_FILE, PFRING(DISABLED), NFQ,
NFLOG, IPFW, ERF_FILE, ERF_DAG, AF_PACKET_DEV, NETMAP(DISABLED), UNIX_SOCKET or WIN-
DIVERT(DISABLED).
conf-get <variable>
Get the configuration value for a given variable. The variable can be any of the configuration parameters
in suricata.yaml.
dump-counters
Dump Suricata's performance counters.
ruleset-reload-rules
Reload the ruleset and wait for completion.
reload-rules
Alias for ruleset-reload-rules.
ruleset-reload-nonblocking
Reload ruleset and proceed without waiting.
ruleset-reload-time
Return time of last reload.
ruleset-stats
Display the number of rules loaded and failed.
ruleset-failed-rules
Display the list of failed rules.
register-tenant-handler <id> <htype> [hargs]
Register a tenant handler with the specified mapping.
unregister-tenant-handler <id> <htype> [hargs]
Unregister a tenant handler with the specified mapping.
register-tenant <id> <filename>
Register tenant with a particular ID and filename.
reload-tenant <id> [filename]
Reload a tenant with specified ID. A filename to a tenant yaml can be specified. If it is omitted, the original yaml
that was used to load / last reload the tenant is used.
reload-tenants
Reload all registered tenants by reloading their yaml.
unregister-tenant <id>
Unregister tenant with a particular ID.
add-hostbit <ipaddress> <hostbit> <expire>
Add hostbit on a host IP with a particular bit name and time of expiry.
remove-hostbit <ipaddress> <hostbit>
Remove hostbit on a host IP with specified IP address and bit name.
list-hostbit <ipaddress>
List hostbit for a particular host IP.
reopen-log-files
Reopen the log files. This is to be run after external log rotation.
memcap-set <config> <memcap>
Update memcap value of a specified item.
memcap-show <config>
Show memcap value of a specified item.
memcap-list
List all memcap values available.
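A hedged usage sketch, assuming a running Suricata with the unix socket enabled (the item name and memcap value are illustrative):

```shell
# Inspect and raise the flow engine memcap via the control socket
suricatasc -c "memcap-show flow"
suricatasc -c "memcap-set flow 128mb"

# Trigger a blocking ruleset reload
suricatasc -c ruleset-reload-rules
```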
24.2.5 BUGS
Please visit Suricata's support page for information about submitting bugs or feature requests.
24.2.6 NOTES
• Suricata Home Page
https://suricata.io/
• Suricata Support Page
https://suricata.io/support/
24.3.2 DESCRIPTION
This tool helps control Suricata's features.
24.3.3 OPTIONS
-h
Show the help message and exit.
24.3.4 COMMANDS
suricatactl-filestore(1)
24.3.5 BUGS
Please visit Suricata's support page for information about submitting bugs or feature requests.
24.3.6 NOTES
• Suricata Home Page
https://suricata.io/
• Suricata Support Page
https://suricata.io/support/
24.4.2 DESCRIPTION
This command lets you perform certain operations on the Suricata filestore.
24.4.3 OPTIONS
-h
Show the help message and exit.
24.4.4 COMMANDS
prune [-h|--help] [-n|--dry-run] [-v|--verbose] [-q|--quiet] -d <DIRECTORY> --age <AGE>
Prune files older than a given age.
-d <DIRECTORY> | --directory <DIRECTORY> is a required argument specifying the Suricata filestore directory on which all the specified operations are to be performed.
--age <AGE> is a required argument specifying the age of the files. Files older than the age mentioned with this option shall be pruned.
-h | --help is an optional argument that prints usage information for the command.
-n | --dry-run is an optional argument which makes the utility only print what would happen, without removing anything.
-v | --verbose is an optional argument to increase the verbosity of the command.
-q | --quiet is an optional argument that logs errors and warnings only and keeps silent about everything else.
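For instance, a dry run against a hypothetical filestore path (the age format shown is an assumption; check the command's help output for the accepted units):

```shell
# Show which files older than 7 days would be removed, without deleting anything
suricatactl filestore prune --dry-run -d /var/lib/suricata/filestore --age 7d
```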
24.4.5 BUGS
Please visit Suricata's support page for information about submitting bugs or feature requests.
24.4.6 NOTES
• Suricata Home Page
https://suricata.io/
• Suricata Support Page
https://suricata.io/support/
TWENTYFIVE
ACKNOWLEDGEMENTS
Thank you to the following for their Wiki and documentation contributions that have made this user guide possible:
• Andreas Herz
• Andreas Moe
• Anne-Fleur Koolstra
• Christophe Vandeplas
• Darren Spruell
• David Cannings
• David Diallo
• David Wharton
• Eric Leblond
• god lol
• Haris Haq
• Ignacio Sanchez
• Jason Ish
• Jason Taylor
• Josh Smith
• Juliana Fajardini
• Ken Steele
• Les Syv
• Lukas Sismis
• Mark Solaris
• Martin Holste
• Mats Klepsland
• Matt Jonkman
• Michael Bentley
• Michael Hrishenko
• Nathan Jimerson
Suricata User Guide, Release 8.0.0-dev
• Nicolas Merle
• Peter Manev
• Philipp Buehler
• Philippe Antoine
• Ralph Broenink
• Rob MacGregor
• Russel Fulton
• Shivani Bhardwaj
• Victor Julien
• Vincent Fang
• Zach Rasmor
TWENTYSIX
LICENSES
26.1.1 Preamble
The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU
General Public License is intended to guarantee your freedom to share and change free software--to make sure the
software is free for all its users. This General Public License applies to most of the Free Software Foundation's software
and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered
by the GNU Lesser General Public License instead.) You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to
make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that
you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free
programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender
the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if
you modify it.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the
rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them
these terms so they know their rights.
We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal
permission to copy, distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty
for this free software. If the software is modified by someone else and passed on, we want its recipients to know that
what they have is not the original, so that any problems introduced by others will not reflect on the original authors'
reputations.
Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors
of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this,
we have made it clear that any patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and modification follow.
• b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than
your cost of physically performing source distribution, a complete machine-readable copy of the corresponding
source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for
software interchange; or,
• c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This
alternative is allowed only for noncommercial distribution and only if you received the program in object code
or executable form with such an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for making modifications to it. For an executable
work, complete source code means all the source code for all modules it contains, plus any associated interface definition
files, plus the scripts used to control compilation and installation of the executable. However, as a special exception,
the source code distributed need not include anything that is normally distributed (in either source or binary form) with
the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that
component itself accompanies the executable.
If distribution of executable or object code is made by offering access to copy from a designated place, then offering
equivalent access to copy the source code from the same place counts as distribution of the source code, even though
third parties are not compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License.
Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate
your rights under this License. However, parties who have received copies, or rights, from you under this License will
not have their licenses terminated so long as such parties remain in full compliance.
5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission
to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept
this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate
your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the
Program or works based on it.
6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a
license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You
may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible
for enforcing compliance by third parties to this License.
7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited
to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the
conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to
satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence
you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution
of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy
both it and this License would be to refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the
section is intended to apply and the section as a whole is intended to apply in other circumstances.
It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest
validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution
system, which is implemented by public license practices. Many people have made generous contributions to the wide
range of software distributed through that system in reliance on consistent application of that system; it is up to the
author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted
interfaces, the original copyright holder who places the Program under this License may add an explicit geographical
distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus
excluded. In such case, this License incorporates the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to
time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems
or concerns.
Each version is given a distinguishing version number. If the Program specifies a version number of this License which
applies to it and "any later version", you have the option of following the terms and conditions either of that version or
of any later version published by the Free Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software Foundation.
10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different,
write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to
the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software
generally.
26.1.3 NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PRO-
GRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN
WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITH-
OUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE
ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR
CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY
COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PRO-
GRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPE-
CIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE
THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED IN-
ACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM
TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN
ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
You should have received a copy of the GNU General Public License along with this program; if not, write
to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY
NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w` and `show c` should show the appropriate parts of the General Public License.
Of course, the commands you use may be called something other than `show w' and `show c`; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer"
for the program, if necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes
at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989 Ty Coon, President of Vice
This General Public License does not permit incorporating your program into proprietary programs. If your program
is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If
this is what you want to do, use the GNU Lesser General Public License instead of this License.
f. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied
this Public License.
g. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License,
which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that
the Licensor has authority to license.
h. Licensor means the individual(s) or entity(ies) granting rights under this Public License.
i. NonCommercial means not primarily intended for or directed towards commercial advantage or monetary com-
pensation. For purposes of this Public License, the exchange of the Licensed Material for other material subject
to Copyright and Similar Rights by digital file-sharing or similar means is NonCommercial provided there is no
payment of monetary compensation in connection with the exchange.
j. Share means to provide material to the public by any means or process that requires permission under the Li-
censed Rights, such as reproduction, public display, public performance, distribution, dissemination, communi-
cation, or importation, and to make material available to the public including in ways that members of the public
may access the material from a place and at a time individually chosen by them.
k. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the Euro-
pean Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or
succeeded, as well as other essentially equivalent rights anywhere in the world.
l. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corre-
sponding meaning.
TWENTYSEVEN
Note
Pre-installation requirements
Before you can build Suricata for your system, run the following command to ensure that you have everything you need
for the installation.
export PATH=$PATH:${HOME}/.cargo/bin
cargo install --force cbindgen
Depending on the current status of your system, it may take a while to complete this process.
IPS
By default, Suricata works as an IDS. If you want to use it as an IDS and IPS program, enter:
Suricata
First, it is convenient to create a directory for Suricata. Name it 'suricata' or 'oisf', for example. Open the terminal and
enter:
git clone https://github.com/OISF/suricata.git
Followed by:
cd suricata # cd oisf
./scripts/bundle.sh
Followed by:
./autogen.sh
./configure
make
Auto-setup
You can also use the available auto-setup features of Suricata. Ex:
make install-conf would do the regular "make install" and then it would automatically create/setup all the necessary
directories and suricata.yaml for you.
make install-rules would do the regular "make install" and then it would automatically download and set-up the latest
ruleset from Emerging Threats available for Suricata.
make install-full would combine everything mentioned above (install-conf and install-rules) - and will present you with
a ready to run (configured and set-up) Suricata.
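A sketch of the full flow, assuming the build tree from the previous steps (run the install step with root privileges):

```shell
# Build, then install Suricata together with its config and an initial ET ruleset
./configure && make && sudo make install-full
```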
Post installation
cd suricata/suricata
Next, enter:
git pull
Formatting
clang-format
Note
Before opening a pull request, please also try to ensure it is formatted properly. We use clang-format for this, which
has git integration through the git-clang-format script to only format your changes.
On some systems, it may already be installed (or be installable via your package manager). If so, you can simply run it.
It is recommended to format each commit as you go. However, you can always reformat your whole branch after the
fact.
Note
Depending on your installation, you might have to use the version-specific git clang-format in the com-
mands below, e.g. git clang-format-9, and possibly even provide the clang-format binary with --binary
clang-format-9.
As an alternative, you can use the provided scripts/clang-format.sh that isolates you from the different ver-
sions.
The following command will format the changes in staging, i.e. files you git add-ed:
$ git clang-format
# Or with script:
$ scripts/clang-format.sh cached
The following command will format only the code changed in the most recent commit:
$ git clang-format HEAD^
# Or with script:
$ scripts/clang-format.sh commit
Note that this modifies the files, but doesn't commit them. If the changes are trivial, you'll likely want to run
$ git commit --amend -a
in order to update the last commit with all pending changes.
In case you have multiple commits on your branch already and forgot to format them, you can fix that up as well.
The following command will format every commit in your branch off master and rewrite history using the existing
commit metadata:
$ scripts/clang-format.sh rewrite-branch
Tip: Create a new version of your branch first and run this off the new version.
Note that the above should only be used for rather minimal formatting changes. As mentioned, we prefer that you add
such changes to a dedicated commit for formatting changes.
Note the usage of first_commit_on_your_branch^, not master, to avoid picking up new commits on master in
case you've updated master since you branched.
Check formatting
$ scripts/clang-format.sh check-branch
Add the --diffstat parameter if you want to see the files needing formatting. Add the --diff parameter if you want
to see the actual diff of the formatting change.
Note
Do not reformat whole files by default, i.e. do not use clang-format proper in general.
If you were ever to do so, formatting changes of existing code with clang-format shall be a different commit and must
not be mixed with actual code changes.
$ clang-format -i {file}
Disabling clang-format
There might be times where clang-format's formatting is not pleasing. This mostly happens with macros,
arrays (single or multi-dimensional ones), struct initialization, or manually formatted code.
You can always disable clang-format.
/* clang-format off */
#define APP_LAYER_INCOMPLETE(c, n) (AppLayerResult){1, (c), (n)}
/* clang-format on */
• See http://apt.llvm.org for other releases in case the clang-format version is not found in the default repos.
On fedora:
• Install the clang and git-clang-format packages with
Line length
Indent
...
return TM_ECODE_OK;
}
Use 8 space indentation when wrapping function parameters, loops and if statements.
Use 4 space indentation when wrapping variable definitions.
clang-format:
• AlignAfterOpenBracket: DontAlign
• Cpp11BracedListStyle: false
• IndentWidth: 4
• TabWidth: 8 [llvm]
• UseTab: Never [llvm]
Braces
Functions have the opening brace on a newline:
int SomeFunction(void)
{
    DoSomething();
}
Opening and closing braces go on the same line as the else (also known as a "cuddled else").
if (this) {
    DoThis();
} else {
    DoThat();
}
Structs, unions and enums should have the opening brace on the same line:
union {
    TCPVars tcpvars;
    ICMPV4Vars icmpv4vars;
    ICMPV6Vars icmpv6vars;
} l4vars;

struct {
    uint8_t type;
    uint8_t code;
} icmp_s;

enum {
    DETECT_TAG_TYPE_SESSION,
    DETECT_TAG_TYPE_HOST,
    DETECT_TAG_TYPE_MAX
};
clang-format:
• BreakBeforeBraces: Custom [breakbeforebraces]
• BraceWrapping:
– AfterClass: true
– AfterControlStatement: false
– AfterEnum: false
– AfterFunction: true
– AfterStruct: false
– AfterUnion: false
– AfterExternBlock: true
– BeforeElse: false
– IndentBraces: false
Flow
Don't use conditions and statements on the same line. E.g.
if (a) b = a; // <- wrong

if (a)
    b = a; // <- right
void empty_function(void)
{
}

int short_function(void)
{
    return 1;
}
Don't use unnecessary branching. E.g.:
if (error) {
    goto error;
} else {
    a = b;
}

Can be written as:
if (error) {
    goto error;
}
a = b;
clang-format:
• AllowShortBlocksOnASingleLine: false [llvm]
• AllowShortBlocksOnASingleLine: Never [llvm] (breaking change in clang 10!) [clang10]
• AllowShortEnumsOnASingleLine: false [clang11]
• AllowShortFunctionsOnASingleLine: None
• AllowShortIfStatementsOnASingleLine: Never [llvm]
Alignment
Pointers
void *ptr;
void f(int *a, const char *b);
void (*foo)(int *);
clang-format:
• PointerAlignment: Right
• DerivePointerAlignment: false
struct bla {
    int a;       /* comment */
    unsigned bb; /* comment */
    int *ccc;    /* comment */
};

void alignment()
{
    // multiple consecutive vars
    int a = 13;           /* comment */
    int32_t abc = 1312;   /* comment */
    int abcdefghikl = 13; /* comment */
}
clang-format:
• AlignConsecutiveAssignments: false
• AlignConsecutiveDeclarations: false
• AlignTrailingComments: true
Functions
parameter names
TODO
Function names
static vs non-static
inline
Variables
Names
Scope
TODO
Macros
Macro names are ALL_CAPS_WITH_UNDERSCORES. Enclose parameters in parens on each usage inside the macro.
Align macro values on consecutive lines.
#define MULTILINE_DEF(a, b)         \
    if ((a) > 2) {                  \
        auto temp = (b) / 2;        \
        (b) += 10;                  \
        someFunctionCall((a), (b)); \
    }
clang-format:
• AlignConsecutiveMacros: true [clang9]
• AlignEscapedNewlines: Right
Comments
TODO
Function comments
/**
* \brief Helper function to get a node, creating it if it does not
* exist.
*
* This function exits on memory failure as creating configuration
* nodes is usually part of application initialization.
*
* \param name The name of the configuration node to get.
* \param final Flag to set created nodes as final or not.
*
* \retval The existing configuration node if it exists, or a newly
* created node for the provided name. On error, NULL will be returned.
*/
static ConfNode *ConfGetNodeOrCreate(char *name, int final)
General comments
File names
File names are all lowercase and have a .c, .h or .rs (Rust) extension.
Most files have a _subsystem_ prefix, e.g. detect-dsize.c, util-ip.c
Some cases have a multi-layer prefix, e.g. util-mpm-ac.c
Enums
Use a common prefix for all enum values. Value names are ALL_CAPS_WITH_UNDERSCORES.
Put each enum value on a separate line. Tip: Add a trailing comma to the last element to force "one-value-per-line"
formatting in clang-format.
// right
enum {
    VALUE_ONE,
    VALUE_TWO, // <- force one-value-per-line
};
clang-format:
• AllowShortEnumsOnASingleLine: false [clang11]
switch statements
Switch statements are indented like in the following example, so the 'case' is indented from the switch:
switch (ntohs(p->ethh->eth_type)) {
    case ETHERNET_TYPE_IP:
        DecodeIPV4(tv, dtv, p, pkt + ETHERNET_HEADER_LEN,
                len - ETHERNET_HEADER_LEN, pq);
        break;
Fall through cases will be commented with /* fall through */. E.g.:
switch (suri->run_mode) {
    case RUNMODE_PCAP_DEV:
    case RUNMODE_AFP_DEV:
    case RUNMODE_PFRING:
        /* find payload for interface and use it */
        default_packet_size = GetIfaceMaxPacketSize(suri->pcap_dev);
        if (default_packet_size)
            break;
        /* fall through */
    default:
        default_packet_size = DEFAULT_PACKET_SIZE;
Do not put short case labels on one line. Put opening brace on same line as case statement.
switch (a) {
    case 13: {
        int a = bla();
        break;
    }
    case 15:
        blu();
        break;
    default:
        gugus();
}
clang-format:
• IndentCaseLabels: true
• IndentCaseBlocks: false [clang11]
• AllowShortCaseLabelsOnASingleLine: false [llvm]
• BreakBeforeBraces: Custom [breakbeforebraces]
• BraceWrapping:
– AfterCaseLabel: false (default)
const
TODO
goto
Goto statements should be used with care. Generally, we use it primarily for error handling. E.g.:
    fileext = SCMalloc(sizeof(DetectFileextData));
    if (unlikely(fileext == NULL))
        goto error;

    return fileext;

error:
    if (fileext != NULL)
        DetectFileextFree(fileext);
    return NULL;
}

int goto_style_nested()
{
    if (foo()) {
    label1:
        bar();
    }

label2:
    return 1;
}
clang-format:
• IndentGotoLabels: true (default) [clang10]
Includes
TODO
A .c file shall include its own header first.
clang-format:
• SortIncludes: false
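As a sketch of that convention (the file and header names here are hypothetical, chosen only for illustration):

```c
/* detect-foo.c -- hypothetical module file */
#include "detect-foo.h"   /* a .c file includes its own header first */
/* ... remaining project and system includes follow ... */
```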
Unittests
When writing unittests that use a data array containing a protocol message, please put an explanatory comment that
contains the readable content of the message.
So instead of:
int SMTPProcessDataChunkTest02(void)
{
char mimemsg[] = {0x4D, 0x49, 0x4D, 0x45, 0x2D, 0x56, 0x65, 0x72,
int SMTPParserTest14(void)
{
/* 220 mx.google.com ESMTP d15sm986283wfl.6<CR><LF> */
static uint8_t welcome_reply[] = { 0x32, 0x32, 0x30, 0x20,
Banned functions
Also, check the existing code. If yours is wildly different, it's wrong. Example: https://github.com/oisf/suricata/blob/
master/src/decode-ethernet.c
Note
This changes various parts of Suricata making the suricata binary unsafe for production use.
The targets can be used with libFuzzer, AFL and other fuzz platforms.
Reproducing issues
Extending Coverage
Adding Fuzz Targets
Oss-Fuzz
Suricata is continuously fuzz tested in Oss-Fuzz. See https://github.com/google/oss-fuzz/tree/master/projects/suricata
Table of Contents
• Testing Suricata
– General Concepts
– Unit tests
∗ Code Examples
– Suricata-Verify
– Generating Input
∗ Using real traffic
∗ Crafting input samples with Scapy
∗ Other examples from our Suricata-Verify tests:
∗ Finding Capture Samples
General Concepts
There are a few ways of testing Suricata:
• Unit tests: for independently checking specific functions or portions of code. This guide has specific sections to
further explain those, for C and Rust;
• Suricata-Verify: those are used to check more complex behavior, like the log output or the alert counts for a
given input, where that input is usually comprised of several packets;
• Static and dynamic analysis tools: to help in finding bugs, memory leaks and other issues (like scan-build,
from clang, which is also used for our C formatting checks; or ASAN, which checks for memory issues);
• Fuzz testing: especially good for uncovering existing, often non-trivial bugs. For more on how to fuzz test
Suricata, check Fuzz Testing;
• CI checks: each PR submitted to the project's public repositories will be run against a suite of Continuous Integration workflows, as part of our QA process. Those cover: formatting and commit checks; fuzz tests (CI Fuzz);
and several builds. See our github workflows for details and those in action at https://github.com/OISF/suricata/
actions.
Note
If you can run unit tests or other checks and report failures in our issue tracker, that is rather useful and
appreciated!
The focus of this document is on Unit tests and Suricata-Verify tests, especially on offering some guidance regarding
when to use each type of test, and how to prepare input for them.
Unit tests
Use these to check that specific functions behave as expected, in success and in failure scenarios. They are especially
useful during development, for nom parsers in the Rust codebase, for instance, or for checking that messages or message
parts of a protocol/stream are processed as they should be.
To execute all unit tests (both from C and Rust code), as well as libhtp ones, from the Suricata main directory, run:
make check
Check the Suricata Devguide on Unit Tests - C or Unit tests - Rust for more on how to write and run unit tests, given
that the way to do so differs, depending on the language.
Code Examples
An example from the DNS parser. This checks that the given raw input (note the comments indicating what it means),
once processed by dns_parse_name yields the expected result, including the unparsed portion.
Packet *p = PacketGetFromAlloc();
memset(&dtv, 0, sizeof(DecodeThreadVars));
memset(&tv, 0, sizeof(ThreadVars));
FAIL_IF_NOT(ENGINE_ISSET_EVENT(p, DCE_PKT_TOO_SMALL));
PacketFree(p);
PASS;
}
Suricata-Verify
As mentioned above, these tests are used to check more complex behavior that involve a complete flow, with exchange
of requests and responses. This can be done in an easier and more straightforward way, since one doesn't have to
simulate the network traffic and Suricata engine mechanics - one simply runs it, with the desired input packet capture,
configuration and checks.
A Suricata-verify test can help to ensure that code refactoring doesn't affect protocol logs, or signature detection, for
instance, as this could have a major impact on Suricata users and integrators.
For simpler tests, providing the pcap input is enough. But it is also possible to provide Suricata rules to be inspected,
and have Suricata Verify match for alerts and specific events.
Refer to the Suricata Verify readme for details on how to create this type of test. It suffices to have a packet capture
representative of the behavior one wants to test, and then follow the steps described there.
The Git repository for the Suricata Verify tests is a great source for examples, like the app-layer-template one.
Generating Input
Using real traffic
Having a packet capture for the protocol you want to test, open it in Wireshark and select the specific packet
chosen for the test input, then use the Wireshark option Follow [TCP/UDP/HTTP/HTTP2/QUIC] Stream. This al-
lows for inspecting the whole network traffic stream in a different window. There, it's possible to choose to Show and
save data as C Arrays, as well as to select whether one wants to see the whole conversation or just client or server
packets. It is also possible to reach the same effect by accessing the Analyze->Follow->TCP Stream top menu in
Wireshark. (There are other stream options; the ones available will depend on the type of network traffic captured.)
This option will show the packet data as hexadecimal compatible with C-array style, and easily adapted for Rust, as
well. As shown in the image:
Wireshark can be also used to capture sample network traffic and generate pcap files.
Going through the Suricata-Verify tests readme files, it is also possible to find an assorted collection of pcap generation
possibilities, some with explanations of the how-tos. To list a few:
• http2-range
• http-range
• smb2-delete
• smtp-rset
• http-auth-unrecognized
If you can't capture traffic for the desired protocol from live traffic, or craft something up, you can try finding the type
of traffic you are interested in within public data sets. There's a thread for Sharing good sources of sample captures in
our forum.
./configure --enable-unittests
The unit-test-specific command line options can be found at Command Line Options.
Example: You can run tests specifically on flowbits. This is how you should do that:
suricata -u -U flowbit
It is highly appreciated if you would run unit tests and report failing tests in our issue tracker.
If you want more info about the unittests, regular debug mode can help. This is enabled by adding the configure option:
--enable-debug
SC_LOG_LEVEL=Debug suricata -u
This will be very verbose. You can also add SC_LOG_OP_FILTER to limit the output; it is grep-like:
SC_LOG_LEVEL=Debug SC_LOG_OP_FILTER="something|something else" suricata -u
This example will show all lines (debug, info, and all other levels) that contain either something or something else.
Keep in mind the log level precedence: if you choose Info level, for instance, Suricata won't show messages from the
other levels.
void MyUnitTest(void)
{
int n = 1;
void *p = NULL;
FAIL_IF(n != 1);
FAIL_IF_NOT(n == 1);
PASS;
}
UtRegisterTest("MyUnitTest", MyUnitTest);
where the first argument is the name of the test, and the second argument is the function. Existing modules should
already have a function that registers its unit tests. Otherwise the unit tests will need to be registered. Look for a
module similar to your new module to see how best to register the unit tests or ask the development team for help.
Examples
From conf-yaml-loader.c:
/**
* Test that a configuration section is overridden by subsequent
* occurrences.
*/
static int
ConfYamlOverrideTest(void)
{
char config[] =
"%YAML 1.1\n"
"---\n"
"some-log-dir: /var/log\n"
"some-log-dir: /tmp\n"
"\n"
"parent:\n"
" child0:\n"
" key: value\n"
"parent:\n"
" child1:\n"
" key: value\n"
;
const char *value;
ConfCreateContextBackup();
ConfInit();
/* ... */
PASS;
}
#ifdef UNITTESTS
.
.
.
static int IKEChosenSaParserTest(void)
{
DetectIkeChosenSaData *de = NULL;
de = DetectIkeChosenSaParse("alg_hash=2");
FAIL_IF_NULL(de);
FAIL_IF(de->sa_value != 2);
FAIL_IF(strcmp(de->sa_type, "alg_hash") != 0);
DetectIkeChosenSaFree(NULL, de);
PASS;
}
#endif /* UNITTESTS */
void IKEChosenSaRegisterTests(void)
{
#ifdef UNITTESTS
UtRegisterTest("IKEChosenSaParserTest", IKEChosenSaParserTest);
#endif /* UNITTESTS */
The line above will make rustc compile the Rust side of Suricata and run unit tests in the http2 rust module.
For running all Suricata unit tests from our Rust codebase, just run cargo test.
Note
If you want to understand when to use a unit test, please read the devguide section on Testing Suricata.
In general, it is preferable to have the unit tests in the same file as the code they test, at the end of the file, after all
other functions. Add a tests module, if there isn't one yet, and add the #[test] attribute before the unit test function. It
is also necessary to import (use) the module to test, as well as any other modules used. As seen in the example below:
Example
#[cfg(test)]
mod tests {
use crate::nfs::rpc_records::*;
use nom::Err::Incomplete;
use nom::Needed::Size;
#[test]
fn test_partial_input_ok() {
let buf: &[u8] = &[
0x80, 0x00, 0x00, 0x9c, // flags
0x8e, 0x28, 0x02, 0x7e, // xid
0x00, 0x00, 0x00, 0x01, // msgtype
0x00, 0x00, 0x00, 0x02, // rpcver
0x00, 0x00, 0x00, 0x03, // program
0x00, 0x00, 0x00, 0x04, // progver
0x00, 0x00, 0x00, 0x05, // procedure
];
let expected = RpcRequestPacketPartial {
hdr: RpcPacketHeader {
frag_is_last: true,
frag_len: 156,
xid: 2384986750,
msgtype: 1
},
rpcver: 2,
program: 3,
progver: 4,
procedure: 5
};
let r = parse_rpc_request_partial(buf);
match r {
Ok((rem, hdr)) => {
assert_eq!(rem.len(), 0);
assert_eq!(hdr, expected);
},
_ => { panic!("failed {:?}",r); }
}
}
}
Once that is done, Rust should recognize the new test. If you want to check a single test, run:
cargo test module::tests::test_name
Where tests refers to mod tests. If you know the test name is unique, you can even run:
cargo test test_name
Following the same idea, it is also possible to test specific modules or submodules. For instance:
cargo test nfs::rpc_records
27.2 Contributing
27.2.1 Contributing to Suricata
This guide describes what steps to take if you want to contribute a patch or patchset to Suricata.
Essentially, these are:
1. Agree to and sign our Contribution Agreement
2. Communicate early, and use the preferred channels
3. Claim (or open) a ticket
4. Fork from master
5. Follow our Coding Style
6. Use our Documentation Style
7. Stick to our commit guidelines
8. Add version numbers to your Pull Requests
9. Incorporate Feedback into new PRs
10. [Work merged] Wrap up!
The rest of this document will cover those in detail.
Note
Important!
Before contributing, please review and sign our Contribution Agreement.
Communication is Key!
To clarify questions, discuss or suggest new features, talk about bugs and optimizations, and/or ask for help, it is
important to communicate.
These are our main channels:
• Suricata's issue tracker
• Suricata's forum
• Suricata's Discord server
Note
If you want to add new functionalities (e.g. a new application layer protocol), please ask us first whether we see that
being merged into Suricata or not. This helps both sides understand how the new feature will fit in our roadmap,
and prevents wasting time and motivation with contributions that we may not accept. Therefore, before starting any
code related to a new feature, do request comments from the team about it.
Expectations
If you submit a new feature that is not part of Suricata's core functionalities, it will have the community supported
status. This means we would expect some commitment from you, or the organization who is sponsoring your work,
before we could approve the new feature, as the Suricata development team is pretty lean (and many times overworked).
This means we expect that:
• the new contribution comes with a set of Suricata-verify tests (and possibly unit tests, where those apply), before
we can approve it;
• proof of compatibility with existing keywords/features is provided, when the contribution is for replacing an
existing feature;
• you would maintain the feature once it is approved - or some other community member would do that, in case
you cannot.
Note
Regardless of contribution size or complexity, we expect that you respect our guidelines and processes. We appreci-
ate community contributors: Suricata wouldn't be what it is without them; and the value of our tool and community
also comes from how seriously we take all this, so we ask that our contributors do the same!
If a feature is community supported, the Suricata team will try to spend minimal time on it - to be able to focus on the
core functionalities. If for any reason you're not willing or able to commit to supporting a feature, please indicate this.
The team and/or community members can then consider offering help. It is best to indicate this prior to doing the actual
work, because we will reject features if no one steps up.
It is also important to note that community supported features will be disabled by default, and if a feature brings in
new dependencies (libraries or Rust crates), those will also be optional and disabled by default.
Supporting a feature means to actually maintain it:
• fixing bugs
• writing documentation
• keeping it up to date
• offering end-user support via forum and/or Discord chat
Coding Style
We have a Coding Style that must be followed.
Documentation Style
For documenting code, please follow Rust documentation and/or Doxygen guidelines, according to what your contri-
bution is using (Rust or C).
When writing or updating documentation pages, please:
• wrap lines at 79 (80 at most) characters;
• when adding diagrams or images, we prefer alternatives that can be generated automatically, if possible;
• bear in mind that our documentation is published on Read the Docs and can also be built to pdf, so it is important
that it looks good in such formats.
Rule examples
example-rule
This will present the rule in a box with an easier to read font size, and also allows highlighting specific elements in the
signature, as the names indicate - action, header, options, or emphasize custom portions:
• example-rule-action
• example-rule-header
• example-rule-options
• example-rule-emphasis
When using these, indicate the portion to be highlighted by surrounding it with backticks. Before using them, one has
to invoke the specific role, like so:
.. role:: example-rule-role
It is only necessary to invoke the role once per document. One can see these being invoked in our introduction to the
rule language (see Rules intro).
A rule example like:
.. container:: example-rule
Results in:
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"HTTP GET Request Containing Rule in URI";
flow:established,to_server; http.method; content:"GET"; http.uri; content:"rule"; fast_pattern; classtype:bad-unknown;
sid:123; rev:1;)
Example - emphasis:
.. container:: example-rule
alert ssh any any -> any any (msg:"match SSH protocol version";
:example-rule-emphasis:`ssh.proto;` content:"2.0"; sid:1000010;)
Renders as:
alert ssh any any -> any any (msg:"match SSH protocol version"; ssh.proto; content:"2.0"; sid:1000010;)
Feedback
You'll likely get some feedback. Even our most experienced devs do, so don't feel bad about it.
After discussing what needs to be changed (usually on the PR itself), it's time to go back to "Create your own branch"
and do it all again. This process can iterate quite a few times, as the contribution is refined.
Wrapping up
Merged! Cleanup
Congrats! Your change has been merged into the main repository. Many thanks!
We strongly suggest cleaning up: delete your related branches, both locally and on GitHub - this helps you in keeping
things organized when you want to make new contributions.
Update ticket
You can now put the URL of the merged pull request in the Redmine ticket. Next, mark the ticket as "Closed" or
"Resolved".
Well done! You are all set now.
• Meaningful and short (50 chars max) subject line followed by an empty line
• Naming convention: prefix message with sub-system ("rule parsing: fixing foobar"). If you're
not sure what to use, look at past commits to the file(s) in your PR.
• Description, wrapped at ~72 characters
7. Commits should be individually compilable, starting with the oldest commit. Make sure that
each commit can be built if it and the preceding commits in the PR are used.
8. Commits should be authored with the format: "FirstName LastName <[email protected]>"
Information that needs to be part of a commit (if applicable):
1. Ticket it fixes. E.g. "Fixes Bug #123."
2. Compiler warnings addressed.
3. Coverity Scan issues addressed.
4. Static analyzer error it fixes (cppcheck/scan-build/etc)
Note
When in doubt, check our git history for other messages or changes done to the same module you're working on.
This is a good example of a commit message:
pcap/file: normalize file timestamps
Normalize the timestamps that are too far in the past to epoch.
Bug: #6240.
Pull Requests
A github pull request is actually just a pointer to a branch in your tree. GitHub provides a review interface that we use.
1. A branch can only be used for an individual PR.
2. A branch should not be updated after the pull request is submitted.
3. A pull request always needs a good description (link to issue tracker if related to a ticket).
4. Incremental pull requests need to link to the prior iteration
5. Incremental pull requests need to describe changes since the last PR
6. Link to the ticket(s) that the pull request addresses.
7. When fixing an issue, update the issue status to In Review after submitting the PR.
8. Pull requests are automatically tested using github actions (https://github.com/OISF/suricata/blob/master/
.github/workflows/builds.yml). Failing builds won't be considered and should be closed immediately.
9. Pull requests that change, or add a feature should include a documentation update commit
Tests and QA
As much as possible, new functionality should be easy to QA.
1. Add suricata-verify tests for verification. See https://github.com/OISF/suricata-verify
2. Add unittests if a suricata-verify test isn't possible.
3. Provide pcaps that reproduce the problem. Try to trim the pcap as much as possible, so that it includes only the
minimal set of packets that demonstrate the problem.
4. Provide example rules if the code added new keywords or new options to existing keywords
Note
It is expected that the author will create a new PR with a new version of the patch as described in Pull Requests
Criteria. A PR may be closed as stale if it has not been updated in two months after changes were requested.
A PR may be labeled decision-required if the reviewer thinks the team needs more time to analyze the best approach
to a proposed solution or discussion raised by the PR.
Once in the approved state, PRs are the responsibility of the maintainer, along with the next branches/PRs.
Note
Exceptions
There can be cases where backports may be "missed" -- some issues may not be labeled as needing backports and
some PRs may be merged without an issue.
This guide may be insufficient for some situations. When in doubt, please reach out to the team on the backport
ticket or PR.
Selection overview
All items considered for backports should be reviewed with the following:
• risk estimate: will the change introduce new bugs? Consider the scope and items affected by the change.
• behavioral change: how much the behavior of the system will be changed by the backport. For example,
a small change to decode additional encapsulation protocols may result in more traffic being presented to
Suricata.
• default settings: if the issue alters behavior, can it be made optional, and at what cost?
Redmine: for security and bug fixes, when creating a new Redmine issue, label the Redmine issue with "Needs backport
to x.0", where x.0 is a supported Suricata release, e.g., 7.0.x.
We want to minimize the occurrence of "missed backports" -- that is, work that should be backported but wasn't.
Sometimes this happens when there is no Redmine issue, or the Redmine issue wasn't labeled as needing a backport.
Therefore, we will be periodically reviewing:
• Redmine issues without backport labels, including recently closed issues, to see which require backport
labels.
• PRs without associated Redmine issues. Those requiring backports should be labeled with needs backport.
Then, also periodically, we will create backport issues from those items identified in the previous steps. When doing
so, we will evaluate what are the relevant target backport releases. Some issues reported against master or the current
Suricata release may not apply to older releases.
Note
Commit hashes
We have a CI check that ensures the validity of the cherry-pick line.
Note
Exceptions
Sometimes, the fix for master will not work for the stable or old releases. In such cases, the backporting process
won't be through cherry-picking, but through actually implementing a fix for the specific version.
Create a PR:
Please indicate in the title that this is a backport PR, with something like (7.0.x-backport), and add the related milestone
label.
In the PR description, indicate the backport ticket.
QA
Add suricata-verify PRs when needed. Some existing suricata-verify tests may require version specification changes.
27.3.4 Engines
Flow
Stream
Defrag
Table of Contents
· C code
– Visual context
Baseline
General Concepts
Frame support was introduced with Suricata 7.0. Up until 6.0.x, Suricata's architecture and state of parsers meant that
the network traffic available to the detection engine was just a stream of data, without detail about higher level parsers.
Note
For Suricata, Frame is a generic term that can represent any unit of network data we are interested in, which could
be comprised of one or several records of other, lower level protocol(s). Frames work as "stream annotations",
allowing Suricata to tell the detection engine what type of record exists at a specific offset in the stream.
The application layer parser exposes frames it supports to the detect engine, by tagging them as they're parsed. The
rest works automatically.
In order to allow the engine to identify frames for records of a given application layer parser, thought must be given as
to which frames make sense for the specific protocol you are handling. Some parsers may have clear header and data
fields that form its protocol data unit (pdu). For others, the distinction might be between request and response, only.
Whereas for others it may make sense to have specific types of data. This is better understood by seeing the different
types of frame keywords, which vary on a per-protocol basis.
It is also important to follow naming conventions when defining Frame Types. While a protocol may have strong
naming standards for certain structures, do compare those with what Suricata already has registered:
• hdr: used for the record header portion
• data: is used for the record data portion
• pdu: unless documented otherwise, means the whole record, comprising hdr and data
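As a hedged sketch of these conventions, a parser for a hypothetical protocol "foo" (not an existing Suricata parser; all names here are illustrative) might define its frame types and their rule-facing names like this:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical frame types for a made-up protocol "foo", following the
 * naming conventions above (pdu = hdr + data). */
enum FooFrameTypes {
    FOO_FRAME_PDU,
    FOO_FRAME_HDR,
    FOO_FRAME_DATA,
};

/* Names as they would be exposed to the rule language. */
static const char *FooFrameName(enum FooFrameTypes t)
{
    switch (t) {
        case FOO_FRAME_PDU:
            return "pdu";
        case FOO_FRAME_HDR:
            return "hdr";
        case FOO_FRAME_DATA:
            return "data";
    }
    return NULL;
}
```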
Basic steps
Once the frame types that make sense for a given protocol are defined, the basic steps for adding them are:
• create an enum with the frame types;
• identify the parsing function(s) where application layer records are parsed;
• identify the correct moment to register the frames;
• use the Frame API calls directly or build upon them and use your functions to register the frames;
• register the relevant frame callbacks when registering the parser.
Once these are done, you can enable frame eve-output to confirm that your frames are being properly registered. It is
important to notice that some hard-coded limits could influence what you see in the logs (max size of log output; type
of logging for the payload, cf. https://redmine.openinfosecfoundation.org/issues/4988).
If all the steps are successfully followed, you should be able to write a rule using the frame keyword and the frame
types you have registered with the application layer parser.
Using the SMB parser as example, before frame support, a rule would look like:
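The original rule pair is not reproduced here; the sketch below illustrates the idea (ports, sids and message texts are made up, and it assumes SMB registers an smb1.pdu frame type):

```
# Before frame support: match on the raw TCP stream
alert tcp any any -> any 445 (msg:"SMB1 magic"; content:"|ff|SMB"; sid:1;)

# With frame support: anchor the match to a specific frame type
alert smb any any -> any any (msg:"SMB1 magic"; frame:smb1.pdu; content:"|ff|SMB"; startswith; sid:2;)
```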
Though the steps are the same, there are a few differences when implementing frame support in Rust or in C. The
following sections elaborate on that, as well as on the process itself. (Note that the code snippets have omitted portions
of code that weren't so relevant to this document).
Rust
This section shows how Frame support is added in Rust, using examples from the SIP parser, and the telnet parser.
Define the frame types. The frame types are defined as an enum. In Rust, make sure to derive from the
AppLayerFrameType:
Listing 1: rust/src/sip/sip.rs
#[derive(AppLayerFrameType)]
pub enum SIPFrameType {
Pdu,
RequestLine,
ResponseLine,
RequestHeaders,
ResponseHeaders,
RequestBody,
ResponseBody,
}
Frame registering. Some understanding of the parser will be needed in order to find where the frames should be
registered. It makes sense that it will happen when the input stream is being parsed into records. See when some pdu
and request frames are created for SIP:
Listing 2: rust/src/sip/sip.rs
fn parse_request(&mut self, flow: *const core::Flow, stream_slice: StreamSlice) -> bool {
let input = stream_slice.as_slice();
let _pdu = Frame::new(
flow,
&stream_slice,
input,
input.len() as i64,
SIPFrameType::Pdu as u8,
None,
);
SCLogDebug!("ts: pdu {:?}", _pdu);
match sip_parse_request(input) {
Ok((_, request)) => {
let mut tx = self.new_tx(crate::core::Direction::ToServer);
sip_frames_ts(flow, &stream_slice, &request, tx.id);
tx.request = Some(request);
if let Ok((_, req_line)) = sip_take_line(input) {
tx.request_line = req_line;
}
self.transactions.push_back(tx);
return true;
}
Note
Use the Frame API or build upon them as needed. These are the frame registration functions highlighted above:
Listing 3: rust/src/sip/sip.rs
fn sip_frames_ts(flow: *const core::Flow, stream_slice: &StreamSlice, r: &Request, tx_id: u64) {
let oi = stream_slice.as_slice();
let _f = Frame::new(
flow,
stream_slice,
oi,
r.request_line_len as i64,
SIPFrameType::RequestLine as u8,
Some(tx_id),
);
Register relevant frame callbacks. As these are inferred from the #[derive(AppLayerFrameType)] statement, all
that is needed is:
Listing 4: rust/src/sip/sip.rs
get_frame_id_by_name: Some(SIPFrameType::ffi_id_from_name),
get_frame_name_by_id: Some(SIPFrameType::ffi_name_from_id),
Note
on frame_len
For protocols which search for an end of frame char, like telnet, indicate unknown length by passing -1. Once the
length is known, it must be updated. For those where length is a field in the record (e.g. SIP), the frame is set to
match said length, even if that is bigger than the current input.
The telnet parser has examples of using the Frame API directly for registering telnet frames, and also illustrates how
that is done when length is not yet known:
Listing 5: rust/src/telnet/telnet.rs
fn parse_request(
&mut self, flow: *const Flow, stream_slice: &StreamSlice, input: &[u8],
) -> AppLayerResult {
let mut start = input;
while !start.is_empty() {
if self.request_frame.is_none() {
Listing 6: rust/src/telnet/telnet.rs
match parser::parse_message(start) {
    Ok((rem, request)) => {
        let consumed = start.len() - rem.len();
        if rem.len() == start.len() {
            panic!("lockup");
        }
        start = rem;
StreamSlice contains the input data to the parser, alongside other Stream-related data important in parsing context.
Definition is found in applayer.rs:
Listing 7: rust/src/applayer.rs
pub struct StreamSlice {
input: *const u8,
input_len: u32,
/// STREAM_* flags
flags: u8,
offset: u64,
}
C code
Implementing Frame support in C involves a bit more manual work, as one cannot make use of the Rust derives. Code
snippets from the HTTP parser:
Defining the frame types with the enum means:
Listing 8: src/app-layer-htp.c
enum HttpFrameTypes {
HTTP_FRAME_REQUEST,
HTTP_FRAME_RESPONSE,
};
SCEnumCharMap http_frame_table[] = {
{
"request",
HTTP_FRAME_REQUEST,
},
{
"response",
HTTP_FRAME_RESPONSE,
},
{ NULL, -1 },
};
The HTTP parser uses the Frame registration functions from the C API (app-layer-frames.c) directly for registering
request Frames. Here we also don't know the length yet. The 0 indicates flow direction: toserver, and 1 would be
used for toclient:
Listing 9: src/app-layer-htp.c
Frame *frame = AppLayerFrameNewByAbsoluteOffset(
hstate->f, hstate->slice, consumed, -1, 0, HTTP_FRAME_REQUEST);
if (frame) {
SCLogDebug("frame %p/%" PRIi64, frame, frame->id);
hstate->request_frame_id = frame->id;
AppLayerFrameSetTxId(frame, HtpGetActiveRequestTxID(hstate));
}
SCLogDebug("HTTP request complete: data offset %" PRIu64 ", request_size %" PRIu64,
        hstate->last_request_data_stamp, request_size);
SCLogDebug("frame %p/%" PRIi64 " setting len to %" PRIu64, frame, frame->id,
request_size);
frame->len = (int64_t)request_size;
Register relevant callbacks (note that the actual functions will also have to be written, for C):
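As a self-contained sketch of what such callbacks do (hypothetical names; in Suricata the lookups would go through the SCEnumCharMap helpers rather than hand-rolled loops):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Minimal stand-in for SCEnumCharMap. */
typedef struct EnumCharMap_ {
    const char *name;
    int value;
} EnumCharMap;

static EnumCharMap frame_table[] = {
    { "request", 0 },
    { "response", 1 },
    { NULL, -1 },
};

/* Callback mapping a frame name (as used in rules) to its id. */
static int GetFrameIdByName(const char *name)
{
    for (int i = 0; frame_table[i].name != NULL; i++) {
        if (strcmp(frame_table[i].name, name) == 0)
            return frame_table[i].value;
    }
    return -1; /* unknown name: no log output here, as it may be probed */
}

/* Callback mapping a frame id back to its name, e.g. for EVE output. */
static const char *GetFrameNameById(int id)
{
    for (int i = 0; frame_table[i].name != NULL; i++) {
        if (frame_table[i].value == id)
            return frame_table[i].name;
    }
    return NULL;
}
```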
Note
The GetFrameIdByName functions can be "probed", so they should not generate any output, as that could be misleading
(for instance, Suricata generating a log message stating that a valid frame type is unknown).
Visual context
input and input_len are used to calculate the proper offset, for storing the frame. The stream buffer slides forward,
so frame offsets/frames have to be updated. The relative offset (rel_offset) reflects that:
Start:
[ stream ]
[ frame ...........]
rel_offset: 2
len: 19
Slide:
[ stream ]
[ frame .... .]
rel_offset: -10
len: 19
Slide:
[ stream ]
[ frame ........... ]
rel_offset: -16
len: 19
The way the engine handles stream frames can be illustrated as follows:
Parsers
Callbacks
The API calls callbacks that are registered at the start of the program.
The function prototype is:
Examples
A Rust example:
#[no_mangle]
pub extern "C" fn rs_dns_parse_response_tcp(_flow: *const core::Flow,
state: *mut std::os::raw::c_void,
_pstate: *mut std::os::raw::c_void,
input: *const u8,
input_len: u32,
_data: *const std::os::raw::c_void,
_flags: u8)
-> AppLayerResult
Return Types
APP_LAYER_OK / AppLayerResult::ok()
When a parser returns "OK", it signals to the API that all data has been consumed. The parser will be called again
when more data is available.
APP_LAYER_ERROR / AppLayerResult::err()
Returning "ERROR" from the parser indicates to the API that the parser encountered an unrecoverable error and the
processing of the protocol should stop for the rest of this flow.
Note
This should not be used for recoverable errors. For those events should be set.
APP_LAYER_INCOMPLETE / AppLayerResult::incomplete()
Using "INCOMPLETE" a parser can indicate how much more data is needed. Many protocols use records that have
the size as one of the first parameters. When the parser receives a partial record, it can read this value and then tell the
API to only call the parser again when enough data is available.
consumed is used to indicate how much of the current data has been processed; needed is the number of bytes that the
parser needs on top of what was consumed.
Example:
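As a self-contained sketch of the logic (hypothetical types and field layout; real parsers return APP_LAYER_INCOMPLETE(consumed, needed) in C or AppLayerResult::incomplete(consumed, needed) in Rust): assume a record whose first 4 bytes carry the total record length, big-endian.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for AppLayerResult. */
typedef struct SketchResult_ {
    int incomplete;    /* 1 = need more data */
    uint32_t consumed; /* bytes processed so far */
    uint32_t needed;   /* extra bytes required beyond consumed */
} SketchResult;

/* Parse a record whose first 4 bytes are the big-endian total length. */
static SketchResult ParseRecord(const uint8_t *input, uint32_t input_len)
{
    if (input_len < 4) {
        /* not even the length field yet: consume nothing, ask for header */
        return (SketchResult){ 1, 0, 4 - input_len };
    }
    uint32_t rec_len = ((uint32_t)input[0] << 24) | ((uint32_t)input[1] << 16) |
                       ((uint32_t)input[2] << 8) | (uint32_t)input[3];
    if (input_len < rec_len) {
        /* partial record: only call us again once the rest has arrived */
        return (SketchResult){ 1, 0, rec_len - input_len };
    }
    return (SketchResult){ 0, rec_len, 0 }; /* full record available */
}
```

For a 16-byte record of which only 6 bytes arrived, the parser reports consumed 0 and needed 10, so the API calls it again only when those bytes are available.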
Note
The parser will be called again when the needed data is available OR when the stream ends. In the latter case the data
will be incomplete. It's up to the parser to decide what to do with it in this case.
In some cases it may be preferable to actually support dealing with incomplete records. For example protocols like
SMB and NFS can use very large records during file transfers. Completely queuing these before processing could be a
waste of resources. In such cases the "INCOMPLETE" logic could be used for just the record header, while the record
data is streamed into the parser.
Transactions
Table of Contents
• Transactions
– General Concepts
– How the engine uses transactions
∗ Logging
∗ Rule Matching
– Progress Tracking
∗ In Summary - Transactions and State
– Examples
∗ Enums
∗ API Callbacks
∗ Sequence Diagrams
∗ Template Protocol
– Work In Progress changes
– Common words and abbreviations
General Concepts
For Suricata, transactions are an abstraction that help with detecting and logging. An example of a complete transaction
is a pair of messages in the form of a request (from client to server) and a response (from server to client) in HTTP.
In order to know when to log an event for a given protocol, the engine tracks the progress of each transaction - that is,
when it is complete, or when it reaches a key intermediate state. Transactions also aid during the detection phase, when
dealing with protocols that can have large PDUs (protocol data units), like TCP, by controlling state for partial rule
matching - in the case of rules that mention more than one field.
Transactions are implemented and stored in the per-flow state. The engine interacts with them using a set of callbacks
the parser registers.
Logging
Suricata controls when logging should happen based on transaction completeness. For simpler protocols, such as DNS
or NTP, that will most likely happen once per transaction, at the time of its completion. In other cases, like with HTTP,
this may happen at intermediate states.
In OutputTxLog, the engine will compare current state with the value defined for the logging to happen, per flow
direction (logger->tc_log_progress, logger->ts_log_progress). If state is less than that value, the engine
skips to the next logger. Code snippet from: suricata/src/output-tx.c:
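The snippet itself is not reproduced here; the gist of the gating can be modeled in a standalone sketch (names are invented for this illustration; the real logic lives in OutputTxLog and uses logger->ts_log_progress / logger->tc_log_progress):

```rust
// Illustrative model of per-direction log-progress gating.
struct TxLogger {
    name: &'static str,
    ts_log_progress: i32, // minimum to-server progress to log
    tc_log_progress: i32, // minimum to-client progress to log
}

// Return the loggers that would run for a transaction whose
// per-direction progress is (ts, tc); the others are skipped.
fn loggers_to_run<'a>(loggers: &'a [TxLogger], ts: i32, tc: i32) -> Vec<&'a str> {
    loggers
        .iter()
        .filter(|l| ts >= l.ts_log_progress && tc >= l.tc_log_progress)
        .map(|l| l.name)
        .collect()
}

fn main() {
    let loggers = [
        TxLogger { name: "request-logger", ts_log_progress: 1, tc_log_progress: 0 },
        TxLogger { name: "tx-complete-logger", ts_log_progress: 1, tc_log_progress: 1 },
    ];
    // Request parsed (ts = 1) but no response yet (tc = 0):
    println!("{:?}", loggers_to_run(&loggers, 1, 0)); // ["request-logger"]
}
```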
Rule Matching
Transaction progress is also used by certain keywords to know the minimum state required before a match can be
expected: until that state is reached, Suricata won't even try to look for the patterns.
This can be seen in DetectAppLayerMpmRegister, which takes an int progress parameter, and in
DetectAppLayerInspectEngineRegister, which expects int tx_min_progress, for instance. In the code
snippet, HTTP2StateDataClient, HTTP2StateDataServer and 0 are the values passed to the functions - in the last
example, for FTPDATA, the existence of a transaction implies that a file is being transferred, hence the 0 value.
void DetectFiledataRegister(void)
{
    ...
    DetectAppLayerMpmRegister("file_data", SIG_FLAG_TOSERVER, 2,
            PrefilterMpmFiledataRegister, NULL,
            ALPROTO_HTTP2, HTTP2StateDataClient);
    DetectAppLayerMpmRegister("file_data", SIG_FLAG_TOCLIENT, 2,
            PrefilterMpmFiledataRegister, NULL,
            ALPROTO_HTTP2, HTTP2StateDataServer);
    ...
    DetectAppLayerInspectEngineRegister("file_data",
            ALPROTO_HTTP2, SIG_FLAG_TOCLIENT, HTTP2StateDataServer,
            DetectEngineInspectFiledata, NULL);
    DetectAppLayerInspectEngineRegister(
            "file_data", ALPROTO_FTPDATA, SIG_FLAG_TOSERVER, 0, DetectEngineInspectFiledata, NULL);
    ...
}
Progress Tracking
As a rule of thumb, transactions will follow a request-response model: if a transaction has had a request and a response,
it is complete.
But if a protocol has situations where a request or response won't expect or generate a message from its counterpart,
it is also possible to have uni-directional transactions. In such cases, the transaction is set to complete at the moment
of creation.
For example, DNS responses may be considered as completed transactions, because they also contain the request data,
so all information needed for logging and detection can be found in the response.
In addition, for file transfer protocols, or similar ones where there may be several messages before the file exchange is
completed (NFS, SMB), it is possible to create a level of abstraction to handle such complexity. This could be achieved
by adding phases to the model implemented by the protocol (e.g., protocol negotiation phase (SMB), request parsed
(HTTP), and so on).
This is controlled by implementing progress states. In Suricata, these are enums that are incremented as parsing
progresses. A state starts at 0; the higher its value, the closer the transaction is to completion. Due to how
the engine tracks detection across states, there is an upper limit of 48 on the state progress (it must be < 48).
The engine interacts with transactions' state using a set of callbacks the parser registers. State is defined per flow
direction (STREAM_TOSERVER / STREAM_TOCLIENT).
Examples
This section shares some examples from the Suricata codebase, to help visualize how transactions work and how they
are handled by the engine.
Enums
From src/app-layer-ftp.h:
enum {
    FTP_STATE_IN_PROGRESS,
    FTP_STATE_PORT_DONE,
    FTP_STATE_FINISHED,
};
From src/app-layer-ssl.h:
enum {
    TLS_STATE_IN_PROGRESS = 0,
    TLS_STATE_CERT_READY = 1,
    TLS_HANDSHAKE_DONE = 2,
    TLS_STATE_FINISHED = 3
};
API Callbacks
The progress value at which a transaction is considered complete, per direction, is registered through:
void AppLayerParserRegisterStateProgressCompletionStatus(
        AppProto alproto, const int ts, const int tc)
In Rust parsers, the same values are set on the parser registration struct:
tx_comp_st_ts: 1,
tx_comp_st_tc: 1,
An example from C, registering completion at progress 1 for both directions:
AppLayerParserRegisterStateProgressCompletionStatus(ALPROTO_DCERPC, 1, 1);
src/app-layer-ftp.c:
AppLayerParserRegisterStateProgressCompletionStatus(
        ALPROTO_FTP, FTP_STATE_FINISHED, FTP_STATE_FINISHED);
Sequence Diagrams
An HTTP2 transaction is an example of a bidirectional transaction in Suricata (note that, while HTTP2 may have
multiple streams, those are mapped to transactions in Suricata; they run in parallel, a scenario not shown in this sequence
diagram, which shows a single transaction only):
A TLS handshake is a more complex example, where several messages are exchanged before the transaction is considered
completed:
Template Protocol
Suricata has a template protocol for educational purposes, which has simple bidirectional transactions.
A completed transaction for the template looks like this:
Following are the functions that check whether a transaction is considered completed, for the Template Protocol. Those
are called by the Suricata API. Similar functions exist for each protocol, and may present implementation differences,
based on what is considered a transaction for that given protocol.
In C:
And in Rust:
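The C and Rust snippets are not reproduced here; as a rough standalone sketch of what such a completion check looks like (all names invented, modeled on the request/response pattern, not the actual template code):

```rust
// A bidirectional transaction is complete once both the request and
// the response have been seen. COMPLETE mirrors the value that would
// be passed to the completion status registration for each direction.
const COMPLETE: i32 = 1;

struct SketchTx {
    request_seen: bool,
    response_seen: bool,
}

// What a registered get-progress callback would report per direction.
fn tx_get_progress(tx: &SketchTx, toserver: bool) -> i32 {
    let done = if toserver { tx.request_seen } else { tx.response_seen };
    if done { COMPLETE } else { 0 }
}

fn main() {
    let tx = SketchTx { request_seen: true, response_seen: false };
    // Request side complete, response side still in progress.
    println!("{} {}", tx_get_progress(&tx, true), tx_get_progress(&tx, false)); // 1 0
}
```

The engine compares the returned value against the registered completion status to decide whether the transaction is done for that direction.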
Work In Progress changes
Currently we are working to have files be part of the transaction instead of the per-flow state, as seen in
https://redmine.openinfosecfoundation.org/issues/4444.
Another work in progress is to limit the number of transactions per flow, to prevent Denial of Service (DoS) by quadratic
complexity - a type of attack that may happen with protocols that can have multiple transactions at the same time, such
as HTTP2's so-called streams (see https://redmine.openinfosecfoundation.org/issues/4530).
27.4.4 Detection
27.4.5 Output
Low Level Logging
Suricata's alert, protocol, and other types of output are built up from a set of low level loggers. These loggers include:
• Packet logging (alerts)
• Flow logging
• Transaction logging (application layer)
• File information logging
• File data logging (file extraction)
• Statistics
These low level logging facilities are used to build up Suricata's higher level logging, including EVE, but they can also
be hooked into by plugins or applications using Suricata as a library.
Note
At this time only a C API exists to hook into the low level logging functions.
The Suricata source code contains an example plugin demonstrating how to hook into some of these APIs. See
https://github.com/OISF/suricata/blob/master/examples/plugins/c-custom-loggers/custom-logger.c.
Packet Logging
Flow Logging
Transaction Logging
Attention
Transaction loggers cannot be registered from a plugin at this time; see
https://redmine.openinfosecfoundation.org/issues/7236 for more information.
Stream Logging
Stream logging allows for the logging of streaming data such as TCP reassembled data and HTTP body data. The
provided log function will be called each time a new chunk of data is available.
Stream loggers can be registered with the SCOutputRegisterStreamingLogger function:
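To illustrate the shape of such a logger, here is a standalone model (invented names; not the actual SCOutputRegisterStreamingLogger API): a registered function is invoked once per chunk of streaming data:

```rust
// A chunk-wise "stream logger": called for each new piece of
// streaming data (e.g. TCP reassembled data or an HTTP body).
type StreamLogFn = fn(log: &mut Vec<u8>, chunk: &[u8]);

// An example logger that simply appends each chunk.
fn append_chunk(log: &mut Vec<u8>, chunk: &[u8]) {
    log.extend_from_slice(chunk);
}

// Simulates the engine delivering reassembled chunks to the logger.
fn deliver(logger: StreamLogFn, chunks: &[&[u8]]) -> Vec<u8> {
    let mut log = Vec::new();
    for chunk in chunks {
        // The registered function sees each chunk as it arrives.
        logger(&mut log, chunk);
    }
    log
}

fn main() {
    let out = deliver(append_chunk, &[b"GET /".as_slice(), b" HTTP/1.1".as_slice()]);
    println!("{}", String::from_utf8_lossy(&out)); // GET / HTTP/1.1
}
```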
File Logging
File-data Logging
27.5 LibSuricata
27.5.1 Using Suricata as a Library
The ability to turn Suricata into a library that can be utilized in other tools is currently a work in progress, tracked by
Redmine Ticket #2693: https://redmine.openinfosecfoundation.org/issues/2693.
A related effort is Suricata plugins, also in progress and tracked by Redmine Ticket #4101:
https://redmine.openinfosecfoundation.org/issues/4101.
27.6 Upgrading
27.6.1 Upgrading 7.0 to 8.0
EVE File Types
• The ThreadInit function will now be called in both threaded and non-threaded modes. This simplifies the
initialization for EVE filetypes, as they can use the same flow of execution for both modes. To upgrade, either
remove the call to ThreadInit from Init, or move per-thread setup code from Init to ThreadInit.
• Many of the function arguments to the callbacks have been made const where it made sense.
Please see the latest example EVE filetype plugin for an up to date example.
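As a standalone sketch of the upgrade (invented trait and names, not the actual plugin API), the idea is that per-thread setup moves out of the one-time init, since ThreadInit now runs in both modes:

```rust
// Hypothetical shape of an EVE filetype after the change. ThreadInit
// now also runs in non-threaded mode (as a single "thread 0"), so
// per-thread setup lives there rather than in Init.
trait EveFileTypeSketch {
    // One-time, global setup only.
    fn init(&mut self);
    // Per-thread setup; in non-threaded mode this is called once.
    fn thread_init(&mut self, thread_id: usize) -> String;
}

struct ExampleFileType;

impl EveFileTypeSketch for ExampleFileType {
    fn init(&mut self) {
        // No per-thread state here any more.
    }
    fn thread_init(&mut self, thread_id: usize) -> String {
        // E.g. derive a per-thread output name.
        format!("eve.{}.json", thread_id)
    }
}

fn main() {
    let mut ft = ExampleFileType;
    ft.init();
    println!("{}", ft.thread_init(0)); // eve.0.json
}
```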
Once the Suricata release distribution file has been downloaded, the PGP signature should be verified. This can be
done using the GPG application, which is usually available on Linux/BSD systems without having to manually install any
additional packages. For Mac or Windows systems, installation packages can be found at https://gnupg.org/.
Depending on the trust level assigned to the OISF signing keys, something similar to the following output should be
seen:
Suricata User Guide, Release 8.0.0-dev
This indicates a valid signature and that the signing key is trusted.
Note
This indicates that the OISF signing key was not imported into the local GPG keyring.
Note
This indicates that the OISF signing key was imported and the signatures are valid, but either the keys have not
been marked as trusted OR the keys are possibly a forgery.
If there are questions regarding the validity of the downloaded file, the OISF team can be reached at security @
oisf.net (remove the spaces around the @ before sending).