Sound The Alarm - Detection and Response

Module 1 – Intro to detection and response

The Incident Response Lifecycle


Frameworks help organizations develop a standardized approach to their incident response
process, so that incidents are managed in an effective and consistent way. There are many
different types of frameworks that organizations can adopt and modify according to their needs.

We'll focus on the NIST CSF. The five core functions of the NIST CSF are: identify, protect, detect,
respond, and recover.

This course will explore the last three functions of this framework: detect, respond, and recover.

The NIST incident response lifecycle is another NIST framework with additional
substeps dedicated to incident response. It begins with preparation. Next, detection and analysis,
and then containment, eradication and recovery, and finally post-incident activity.
One thing to note is that the incident lifecycle isn't a linear process.
It's a cycle, which means that steps can overlap as new discoveries are made.

1. Preparation: the planning and training process


The organization takes action to ensure it has the correct tools and resources in place:

 Set up uniform company email conventions.


 Create a collaborative, ethical environment where employees feel comfortable
asking questions.
 Provide cybersecurity training on a quarterly basis.

2. Detection and analysis: the detect and assess process


Security professionals create processes to detect and assess incidents:

 Identify signs of an incident


 Filter external emails to flag messages containing attachments such as
voicemails.
 Have an incident response plan to reference.

3. Containment, eradication, and recovery: the minimize and mitigate process


Security professionals and stakeholders collaborate to minimize the impact of the
incident and mitigate any operational disruption.

 Communicate with the sender to confirm the origin of the voice message.


 Provide employees with an easy way to report and contain suspicious messages.

4. Post-incident activity: the learning process


New protocols, procedures, playbooks, etc. are implemented to help reduce any similar
incidents in the future.

 Update the playbook to highlight additional red flags employees should be aware of.
 Review processes and workflows related to permissions and adjust oversight of
those permissions.
According to NIST, an incident is "an occurrence that actually or imminently jeopardizes, without
lawful authority, the confidentiality, integrity, or availability of information or an information
system; or constitutes a violation or imminent threat of violation of law, security policies, security
procedures, or acceptable use policies."

It's important to understand that all security incidents are events, but not all events are security
incidents. What are events?
An event is an observable occurrence on a network, system, or device.

Incident response operations


Incident response teams
Computer security incident response teams, or CSIRTs, are a specialized group of security
professionals that are trained in incident management and response.
Security professionals involved in a CSIRT typically include three key security-related roles:

1. Security analyst

2. Technical lead

3. Incident coordinator

Security analyst
The job of the security analyst is to continuously monitor an environment for any security
threats. This includes:

 Analyzing and triaging alerts

 Performing root-cause investigations

 Escalating or resolving alerts

If a critical threat is identified, then analysts escalate it to the appropriate team lead, such as
the technical lead.

Technical lead
The job of the technical lead is to manage all of the technical aspects of the incident response
process, such as applying software patches or updates.

Incident coordinator
The job of the incident coordinator is to coordinate with the relevant departments during a
security incident. Responding to an incident requires cross-collaboration with nonsecurity
professionals, and the incident coordinator ensures those teams stay updated and that
response processes are followed.

Other roles
Depending on the organization, many other roles can be found in a CSIRT, including a
dedicated communications lead, a legal lead, a planning lead, and more.
Security operations center
A security operations center (SOC) is an organizational unit dedicated to monitoring
networks, systems, and devices for security threats or attacks. A SOC is involved in various
types of blue team activities, such as network monitoring, analysis, and response to incidents.

SOC organization
A SOC is composed of SOC analysts, SOC leads, and SOC managers. Each role has its own
respective responsibilities. SOC analysts are grouped into three different tiers.

Tier 1 SOC analyst


The first tier is composed of the least experienced SOC analysts who are known as level 1s
(L1s). They are responsible for:

 Monitoring, reviewing, and prioritizing alerts based on criticality or severity

 Creating and closing alerts using ticketing systems

 Escalating alert tickets to Tier 2 or Tier 3

Tier 2 SOC analyst


The second tier comprises the more experienced SOC analysts, or level 2s (L2s). They are
responsible for:

 Receiving escalated tickets from L1 and conducting deeper investigations

 Configuring and refining security tools

 Reporting to the SOC Lead

Tier 3 SOC lead


The third tier of a SOC is composed of the SOC leads, or level 3s (L3s). These highly
experienced professionals are responsible for:
 Managing the operations of their team

 Exploring methods of detection by performing advanced detection techniques, such as
malware analysis and forensics

 Reporting to the SOC manager

SOC manager
The SOC manager is at the top of the pyramid and is responsible for:

 Hiring, training, and evaluating the SOC team members

 Creating performance metrics and managing the performance of the SOC team

 Developing reports related to incidents, compliance, and auditing

 Communicating findings to stakeholders such as executive management

Incident response plans


Incident response plans have:
 Incident response procedures. These are step-by-step instructions on how to respond to
incidents.
 System information. These are things like network diagrams, data flow diagrams, logging,
and asset inventory information.
 And other documents like contact lists, forms, and templates.

Incident response tools


 detection and management tools
 documentation tools
 investigative tools

The value of documentation


Documentation is any form of recorded content that is used for
a specific purpose.
This can be audio, digital, or handwritten instructions, and even videos.

Word processors are a common way to document.


Some popular tools to use are Google Docs, OneNote, Evernote, and Notepad++.
Ticketing systems like Jira can also be used to document and track incidents.
Lastly, Google Sheets, audio recorders, cameras, and
handwritten notes are also tools you can use to document.

Intrusion detection and Prevention systems


An intrusion detection system, or IDS,
works in a very similar way to home intrusion sensors.
An intrusion detection system
is an application that monitors
system and network activity, and
produces alerts on possible intrusions.

Intrusion prevention systems, or IPS,


have all the same capabilities as an IDS,
but they can do more.
They monitor system activity for intrusions and take action to stop them.

Many tools have the ability to perform


the function of both IDS and IPS.
Some popular tools are Snort,
Zeek, Kismet, Sagan, and Suricata.

Overview of IDS tools


An intrusion detection system (IDS) is an application that monitors system activity and alerts on
possible intrusions. An IDS does not stop or prevent the activity. Instead, security professionals will
investigate the alert and act to stop it, if necessary.

Detection categories
As a security analyst, you will investigate alerts that an IDS generates. There are four types of
detection categories you should be familiar with:

1. A true positive is an alert that correctly detects the presence of an attack.

2. A true negative is a state where there is no detection of malicious activity. This is
when no malicious activity exists and no alert is triggered.

3. A false positive is an alert that incorrectly detects the presence of a threat. This is
when an IDS identifies an activity as malicious, but it isn't. False positives are an
inconvenience for security teams because they spend time and resources investigating
an illegitimate alert.

4. A false negative is a state where the presence of a threat is not detected. This is when
malicious activity happens but an IDS fails to detect it. False negatives are dangerous
because security teams are left unaware of legitimate attacks that they can be
vulnerable to.

Overview of IPS tools


An intrusion prevention system (IPS) is an application that monitors system activity for
intrusive activity and takes action to stop the activity.

Overview of EDR tools


Endpoint detection and response (EDR) is an application that monitors an endpoint for
malicious activity. EDR tools are installed on endpoints. Remember that an endpoint is any
device connected on a network. Examples include end-user devices, like computers, phones,
tablets, and more.
EDR tools monitor, record, and analyze endpoint system activity to identify, alert, and
respond to suspicious activity. Unlike IDS or IPS tools, EDRs collect endpoint activity data
and perform behavioral analysis to identify threat patterns happening on an endpoint.
Behavioral analysis uses the power of machine learning and artificial intelligence to
analyze system behavior to identify malicious or unusual activity. EDR tools also use
automation to stop attacks without the manual intervention of security professionals.

Tools like Open EDR®, Bitdefender™ Endpoint Detection and Response, and FortiEDR™ are
examples of EDR tools.

Alert and event management with SIEM and SOAR tools

The SIEM process


The SIEM process consists of three critical steps:

1. Collect and aggregate data

2. Normalize data

3. Analyze data

First, SIEM tools collect and aggregate data.


This data is typically in the form of logs, which are basically a record of
all the events that happened on a given source. During the first step, the SIEM collects event data
from various sources like firewalls, servers, routers, and more. This data, also known as logs,
contains event details like timestamps, IP addresses, and more.

Next, SIEM tools normalize data. SIEM tools collect data from many different sources. This data
must be transformed into a single format so that it can be easily processed by the SIEM.
Normalization takes the raw data that the SIEM has collected and cleans it up by
removing nonessential attributes so that only what's relevant is included.

Finally, the normalized data gets analyzed according to configured rules.


The SIEM analyzes the normalized data against a rule set to detect any possible
security incidents, which then get categorized or reported as alerts for
security analysts to review.
Note: A part of the analysis process includes correlation. Correlation involves the comparison of
multiple log events to identify common patterns that indicate potential security threats.
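To make the normalize step concrete, here is a minimal Python sketch, not from the course, that converts two hypothetical log entries from different sources into one shared format. All field names and inputs here are invented for illustration:

import json

def normalize_syslog(line):
    # Assumes a simplified "timestamp host message" layout (hypothetical)
    timestamp, host, message = line.split(" ", 2)
    return {"timestamp": timestamp, "source": host, "message": message}

def normalize_json(line):
    # Assumes the source emits JSON with "time", "device", and "event" keys (hypothetical)
    record = json.loads(line)
    return {"timestamp": record["time"], "source": record["device"], "message": record["event"]}

# Both entries end up in the same format, ready for rule-based analysis
print(normalize_syslog("05:45:15 server1 User1 authenticated successfully"))
print(normalize_json('{"time": "05:46:02", "device": "fw1", "event": "Connection blocked"}'))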
SIEM tools
There are many SIEM tools. The following are some SIEM tools commonly used in the
cybersecurity industry:

 AlienVault® OSSIM™

 Chronicle

 Elastic

 Exabeam

 IBM QRadar® Security Intelligence Platform

 LogRhythm

 Splunk

Security orchestration, automation, and response, or


SOAR, is a collection of applications, tools, and
workflows that uses automation to respond to security events.
While SIEM tools collect, analyze, and report on security events for
security analysts to review, SOAR automates analysis and
response to security events and incidents.
SOAR can also be used to track and manage cases.

Module 2 – Network Monitoring & Analysis


Understanding Network traffic
Network traffic is the amount of data that moves across a network, while network data is
the data that's transmitted between devices on a network.

By knowing what's normal, you can easily spot what's abnormal.


We can detect traffic abnormalities through observation to spot indicators
of compromise, also known as IoCs, which are observable evidence that
suggests signs of a potential security incident.

Flow analysis
Flow refers to the movement of network communications and includes information related to
packets, protocols, and ports.
Malicious actors can use protocols and ports that are not commonly used to maintain
communications between the compromised system and their own machine. These
communications are what's known as command and control (C2), which are the techniques used
by malicious actors to maintain communications with compromised systems.

Packet payload information


Network packets contain components related to the transmission of the packet. This includes
details like source and destination IP address, and the packet payload information, which is the
actual data that’s transmitted.

Organizations can monitor the payload information of packets to uncover unusual activity, such as
sensitive data transmitting outside of the network, which could indicate a possible data exfiltration
attack.

Temporal patterns
Network packets contain information relating to time. This information is useful in
understanding time patterns. For example, a company operating in North America
experiences bulk traffic flows between 9 a.m. to 5 p.m., which is the baseline of normal
network activity. If large volumes of traffic are suddenly outside of the normal hours of
network activity, then this is considered off baseline and should be investigated.

Organizations may deploy a network operations center (NOC), which is an organizational unit
that monitors the performance of a network and responds to any network disruption, such as a
network outage. While a SOC is focused on maintaining the security of an organization through
detection and response, a NOC is responsible for maintaining network performance, availability,
and uptime.

Network monitoring tools


Network monitoring can be automated or performed manually. Some common network
monitoring tools can include:

 Intrusion detection systems (IDS)

 Network protocol analyzers, also known as packet sniffers, are tools designed to capture
and analyze data traffic within a network.

Capture and View Network Traffic


Packets and packet captures
Previously in the program,
you learned that when data is sent,
it's divided into packets.
Just like an addressed envelope in the mail,
packets contain delivery information which
is used to route it to its destination.
This information includes a sender
and receiver's IP address,
the type of packet that's being sent, and more.
Packets can provide lots of information about
the communications happening between
devices over a network.
A network protocol analyzer, or packet sniffer,
is a tool designed to capture and
analyze data traffic within a network.
As a security analyst,
you'll use packet sniffers to inspect
packets for indicators of compromise.

A packet capture, or p-cap, is a file containing data packets intercepted
from an interface or network. It's sort of like intercepting an envelope in the mail.

Network protocol analyzers


Network protocol analyzers (packet sniffers) are tools designed to capture and analyze data
traffic within a network. Examples of network protocol analyzers include tcpdump,
Wireshark, and TShark.

How network protocol analyzers work


1. First, packets must be collected from the network via the Network Interface Card
(NIC), which is hardware that connects computers to a network, like a router. NICs
receive and transmit network traffic, but by default they only listen to network traffic
that’s addressed to them. To capture all network traffic that is sent over the network, a
NIC must be switched to a mode that has access to all visible network data packets. In
wireless interfaces this is often referred to as monitoring mode, and in other systems it
may be called promiscuous mode. This mode enables the NIC to have access to all
visible network data packets, but it won’t help analysts access all packets across a
network. A network protocol analyzer must be positioned in an appropriate network
segment to access all traffic between different hosts.
2. The network protocol analyzer collects the network traffic in raw binary format.
Binary format consists of 0s and 1s and is not as easy for humans to interpret. The
network protocol analyzer takes the binary and converts it so that it’s displayed in a
human-readable format, so analysts can easily read and understand the information.
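As a hypothetical illustration of this conversion step, the following Python sketch uses the scapy library (assuming it is installed and a file named capture.pcap exists) to read raw packet data and print a human-readable summary of each packet:

from scapy.all import rdpcap

# rdpcap parses the raw binary packet data stored in the capture file
packets = rdpcap("capture.pcap")

for packet in packets:
    # summary() renders each packet's key fields as one human-readable line
    print(packet.summary())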

Capturing packets
Packet sniffing is the practice of capturing and inspecting data packets across a network. A
packet capture (p-cap) is a file containing data packets intercepted from an interface or
network. Packet captures can be viewed and further analyzed using network protocol
analyzers. For example, you can filter packet captures to only display information that's most
relevant to your investigation, such as packets sent from a specific IP address.

P-cap files can come in many formats depending on the packet capture library that’s used.
Each format has different uses and network tools may use or support specific packet capture
file formats by default. You should be familiar with the following libraries and formats:

1. Libpcap is a packet capture library designed to be used by Unix-like systems, like
Linux and macOS®. Tools like tcpdump use Libpcap as the default packet capture
file format.

2. WinPcap is an open-source packet capture library designed for devices running
Windows operating systems. It's considered an older file format and isn't
predominantly used.

3. Npcap is a packet capture library designed by the developers of the port scanning
tool Nmap that is commonly used in Windows operating systems.

4. PCAPng is a modern file format that can simultaneously capture packets and store
data. Its ability to do both explains the “ng,” which stands for “next generation.”

Pro tip: Analyzing your home network can be a good way to practice using these tools.

Interpret network communications with packets


Reexamine the fields of a packet header
The internet layer accepts and
delivers packets for the network.
It's also the layer where the Internet Protocol operates
as the foundation for all communications on the internet.
It's responsible for making
sure packets reach their destinations.

IP packets contain headers.


Headers contain the data fields essential to
the transfer of data to its intended destination.
Different protocols use different headers.

Let's start with the Version field, which


specifies which version of IP is being used,
either IPv4 or IPv6.
Referring back to our mail analogy,
the Version field is like the different classes of mail,
like priority, express, or regular.

Next, IHL stands for Internet Header Length.


This field specifies the length of
the IP header plus any options.

The next field, ToS stands for Type of Service.


This field tells us if
certain packets should be treated with different care.
For example, think of ToS like
a fragile sticker on a mailed package.

Next is the Total Length field,


which identifies the length of the entire packet,
including the headers and the data.
This can be compared to the dimensions
and weight of an envelope.

The next three fields,


Identification, Flags,
and Fragment Offset,
deal with information related to fragmentation.
Fragmentation is when an IP packet
gets broken up into chunks,
which then get transmitted over the wire and
reassembled when they arrive at their destination.

The TTL field stands for Time to Live.


Like its name suggests,
this field determines how long
a packet can live before it gets dropped.

The Protocol field specifies the protocol in use by
providing a value that corresponds to the protocol.
For example, TCP is represented by 6.
This is similar to including the number
of a house in a postal address.

The Header Checksum stores a value called a checksum,


which is used to determine if
any errors have occurred in the header.

The Source Address specifies the source IP address and


the Destination Address specifies
the destination IP address.
This is just like the sender and
receiver's contact information found on an envelope.

The Options field is not required and is commonly


used for network troubleshooting
rather than common traffic.
If it's used, the header length increases.
It's like purchasing postal insurance for an envelope.
IPv6
IPv6 adoption has been increasing because of its large address space. There are eight fields in
the header:

 Version: This field indicates the IP version. For an IPv6 header, IPv6 is used.

 Traffic Class: This field is similar to the IPv4 Type of Service field. The Traffic
Class field provides information about the packet's priority or class to help with
packet delivery.

 Flow Label: This field identifies the packets of a flow. A flow is the sequence of
packets sent from a specific source.

 Payload Length: This field specifies the length of the data portion of the packet.

 Next Header: This field indicates the type of header that follows the IPv6 header
such as TCP.

 Hop Limit: This field is similar to the IPv4 Time to Live field. The Hop Limit limits
how long a packet can travel in a network before being discarded.

 Source Address: This field specifies the source address of the sender.

 Destination Address: This field specifies the destination address of the receiver.

Wireshark
Wireshark is an open-source network protocol analyzer. It uses a graphical user interface
(GUI), which makes it easier to visualize network communications for packet analysis
purposes. Wireshark has many features to explore that are beyond the scope of this course.
You'll focus on how to use basic filtering to isolate network packets so that you can find what
you need.

Display filters
Wireshark's display filters let you apply filters to packet capture files. Here, you'll focus on
display filtering syntax and filtering for protocols, IP addresses, and ports.

Comparison operators
You can use different comparison operators to locate specific header fields and values.
Comparison operators can be expressed using either abbreviations or symbols. For example,
the filter ip.src == 8.8.8.8 using the == equal symbol is identical to the filter ip.src eq 8.8.8.8
using the eq abbreviation.

Pro tip: You can combine comparison operators with Boolean logical operators like and
and or to create complex display filters. Parentheses can also be used to group
expressions and to prioritize search terms.

The following comparison operators, shown with their equivalent abbreviations, can be used
for display filtering:

 == (eq): equal

 != (ne): not equal

 > (gt): greater than

 < (lt): less than

 >= (ge): greater than or equal to

 <= (le): less than or equal to

Contains operator
The contains operator is used to filter packets that contain an exact match of a string of text.
For example, the filter http contains "moved" displays all HTTP streams that match the
keyword "moved".
Matches operator
The matches operator is used to filter packets based on the regular expression (regex) that's
specified. Regular expression is a sequence of characters that forms a pattern. You'll explore
more about regular expressions later in this program.
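For example, a hypothetical filter such as http.host matches "acme\.(org|com)" would display HTTP packets whose host field matches the regular expression, that is, packets whose host contains acme.org or acme.com.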

Filter toolbar
You can apply filters to a packet capture using Wireshark's filter toolbar. In this example, dns
is the applied filter, which means Wireshark will only display packets containing the DNS
protocol.

Pro tip: Wireshark uses different colors to represent protocols. You can customize colors and
create your own filters.

Filter for protocols


Protocol filtering is one of the simplest ways you can use display filters. You can simply
enter the name of the protocol to filter. For example, to filter for DNS packets simply type
dns in the filter toolbar. Here is a list of some protocols you can filter for:

 dns

 http

 ftp

 ssh

 arp

 telnet

 icmp

Filter for an IP address


You can use display filters to locate packets with a specific IP address.

For example, if you would like to filter packets that contain a specific IP address use ip.addr,
followed by a space, the equal == comparison operator, and the IP address. Here is an
example of a display filter that filters for the IP address 172.21.224.2:

ip.addr == 172.21.224.2

To filter for packets originating from a specific source IP address, you can use the ip.src
filter. Here is an example that looks for the 10.10.10.10 source IP address:

ip.src == 10.10.10.10

To filter for packets delivered to a specific destination IP address, you can use the ip.dst
filter. Here is an example that searches for the 4.4.4.4 destination IP address:
ip.dst == 4.4.4.4

Filter for a MAC address


You can also filter packets according to the Media Access Control (MAC) address. As a
refresher, a MAC address is a unique alphanumeric identifier that is assigned to each physical
device on a network.

Here's an example:

eth.addr == 00:70:f4:23:18:c4

Filter for ports


Port filtering is used to filter packets based on port numbers. This is helpful when you want to
isolate specific types of traffic. For example, DNS traffic uses TCP or UDP port 53, so filtering
for port 53 will list traffic related to DNS queries and responses only.

To filter for a UDP port:

udp.port == 53

Likewise, you can filter for TCP ports as well:

tcp.port == 25

Follow streams
Wireshark provides a feature that lets you filter for packets specific to a protocol and view
streams. A stream or conversation is the exchange of data between devices using a protocol.
Wireshark reassembles the data that was transferred in the stream in a way that's simple to
read.

Resources

 To learn more about Wireshark's full features and capabilities, explore the Wireshark
official user guide.

Packet captures with tcpdump


Tcpdump is a popular network analyzer.
It's pre-installed on many Linux distributions and can be
installed on most Unix-like operating systems, like macOS.
You can easily capture and monitor
network traffic such as TCP,
IP, ICMP, and many more.
Tcpdump is a command line tool.
This means that it does not have
a graphical user interface.

The command we ran is: sudo tcpdump -i any -v -c 1.

We're using sudo because the Linux account we're logged


in on doesn't have the permission to run tcpdump.
Then, we specify tcpdump to start tcpdump
and -i to specify which
interface we want to sniff traffic on.
The -v stands for verbose,
which displays detailed packet information.
The -c stands for count,
which specifies how many packets tcpdump will capture.
Here we've specified one.

The first field is the packet's timestamp,


which details the specific time of the packet travel.
It begins with hours, minutes,
seconds, and fractions of a second.

Next, IP is listed as the Version field.


It's listed as IP,
which means it's IPv4.
The verbose option has given us
more details about the IP packet fields,
such as protocol type and
the length of the packet.

The first field, ToS stands for Type of Service.


Recall that this tells us if
certain packets should be treated with different care.
This is represented by a value in hexadecimal.

The TTL field is Time to Live,


which tells us how long a packet can
travel across a network before it gets dropped.
The next three fields are
Identification, Offset, and Flags,
which provide three fields with
information relating to fragmentation.
These fields provide instructions on how
to reassemble packets in the correct order.
For example, the DF beside Flags stands for Don't Fragment.

Next, the proto is the Protocol field.


It specifies the protocol in use and also
provides us with the value that
corresponds to the protocol.
Here the protocol is tcp,
which is represented by the number 6.
The last field, length, is the Total Length of the packet,
including the IP header.
Next, we can observe
the IP addresses that are communicating with each other.
The direction of the arrow
indicates the direction of the traffic flow.
The last piece of the IP address
indicates the port number or name.

Next, the cksum or checksum field corresponds to


the Header Checksum, which stores a value that's
used to determine if
any errors have occurred in the header.
Here, it's telling us it's correct with no errors.

The remaining fields are related to TCP.


For example, Flags indicate TCP flags.
The P is the push flag, and
the period indicates it's an ACK flag.
This means that the packet is pushing out data.

Note: Before you can begin capturing network traffic, you must identify which network interface
you'll want to use to capture packets from. You can use the -D flag to list the network interfaces
available on a system.
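For example, the following command prints the numbered list of interfaces that tcpdump can capture from:

sudo tcpdump -D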

Options
With tcpdump, you can apply options, also known as flags, to the end of commands to filter
network traffic. Short options are abbreviated and represented by a hyphen and a single
character like -i. Long options are spelled out using a double hyphen like --interface.
Tcpdump has over fifty options that you can explore using the manual page. Here, you’ll
examine a couple of essential tcpdump options including how to write and read packet
capture files.

Note: Options are case sensitive. For example, a lowercase -w is a separate option with a
different use than the option with an uppercase -W.

Note: tcpdump options that are written using short options can be written with or without a
space between the option and its value. For example, sudo tcpdump -i any -c 3 and sudo
tcpdump -iany -c3 are equivalent commands.

-w
Using the -w flag, you can write or save the sniffed network packets to a packet capture file
instead of just printing it out in the terminal. This is very useful because you can refer to this
saved file for later analysis. In this command, tcpdump is capturing network traffic from all
network interfaces and saving it to a packet capture file named packetcapture.pcap:

sudo tcpdump -i any -w packetcapture.pcap


-r
Using the -r flag, you can read a packet capture file by specifying the file name as a
parameter. Here is an example of a tcpdump command that reads a file called
packetcapture.pcap:
sudo tcpdump -r packetcapture.pcap
-v
As you’ve learned, packets contain a lot of information. By default, tcpdump will not print
out all of a packet's information. This option, which stands for verbose, lets you control how
much packet information you want tcpdump to print out.

There are three levels of verbosity you can use depending on how much packet information
you want tcpdump to print out. The levels are -v, -vv, and -vvv. The level of verbosity
increases with each added v. The verbose option can be helpful if you’re looking for packet
information like the details of a packet’s IP header fields. Here’s an example of a tcpdump
command that reads the packetcapture.pcap file with verbosity:

sudo tcpdump -r packetcapture.pcap -v


-c
The -c option stands for count. This option lets you control how many packets tcpdump will
capture. For example, specifying -c 1 will only print out one single packet, whereas -c 10
prints out 10 packets. This example is telling tcpdump to only capture the first three packets it
sniffs from any network interface:

sudo tcpdump -i any -c 3


-n
By default, tcpdump will perform name resolution. This means that tcpdump automatically
converts IP addresses to names. It will also resolve ports to commonly associated services
that use these ports. This can be problematic because tcpdump isn’t always accurate in name
resolution. For example, tcpdump can capture traffic from port 80 and automatically
translates port 80 to HTTP in the output. However, this is misleading because port 80 isn’t
always going to be using HTTP; it could be using a different protocol.

Additionally, name resolution uses what’s known as a reverse DNS lookup. A reverse DNS
lookup is a query that looks for the domain name associated with an IP address. If you
perform a reverse DNS lookup on an attacker’s system, they might be alerted that you are
investigating them through their DNS records.

Using the -n flag disables this automatic mapping of numbers to names and is considered to
be best practice when sniffing or analyzing traffic. Using -n will not resolve hostnames,
whereas -nn resolves neither hostnames nor ports. Here's an example of a tcpdump
command that reads the packetcapture.pcap file with verbosity and disables name resolution:

sudo tcpdump -r packetcapture.pcap -v -n


Pro tip: You can combine options together. For example, -v and -n can be combined as -vn.
But, if an option accepts a parameter right after it like -c 1 or -r capture.pcap then you can’t
combine other options to it.

Expressions
Using filter expressions in tcpdump commands is also optional, but knowing how and when
to use filter expressions can be helpful during packet analysis. There are many ways to use
filter expressions.
You can also use boolean operators like and, or, or not to further filter network traffic for
specific IP addresses, ports, and more. The example below reads the packetcapture.pcap file
and combines two expressions ip and port 80 using the and boolean operator:

sudo tcpdump -r packetcapture.pcap -n 'ip and port 80'

Pro tip: You can use single or double quotes to ensure that tcpdump executes all of the
expressions. You can also use parentheses to group and prioritize different expressions.
Grouping expressions is helpful for complex or lengthy commands. For example, the
command ip and (port 80 or port 443) tells tcpdump to prioritize executing the filters
enclosed in the parentheses before filtering for IPv4.
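Putting these pieces together, a hypothetical command that captures only IPv4 web traffic on any interface, with name resolution disabled, might look like this:

sudo tcpdump -i any -n 'ip and (port 80 or port 443)'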

Module 3 – Incident Detection and Verification


The detection and analysis phase of the lifecycle
Detection refers to the prompt discovery of security events, and analysis involves the investigation
and validation of alerts.

Cybersecurity incident detection methods


Threat hunting
Threats evolve and attackers advance their tactics and techniques. With threat hunting, the
combination of active human analysis and technology is used to identify threats like
fileless malware.

Note: Threat hunting specialists are known as threat hunters. Threat
hunters perform research on emerging threats and attacks and then determine the
probability of an organization being vulnerable to a particular attack. Threat hunters use a
combination of threat intelligence, indicators of compromise, indicators of attack, and
machine learning to search for threats in an organization.

Threat intelligence
Threat intelligence is evidence-based threat information that provides context about existing or
emerging threats. It can be difficult for organizations to efficiently manage large volumes of
threat intelligence.
Organizations can leverage a threat intelligence platform (TIP) which is an application that collects,
centralizes, and analyzes threat intelligence from different sources. TIPs provide a centralized
platform for organizations to identify and prioritize relevant threats and improve their security
posture.

Cyber deception
Cyber deception involves techniques that deliberately deceive malicious actors with the goal
of increasing detection and improving defensive strategies.

Honeypots are an example of an active cyber defense mechanism that uses deception technology.
Honeypots are systems or resources that are created as decoys vulnerable to attacks with the
purpose of attracting potential intruders.
Indicators of compromise
Indicators of compromise (IoCs) are observable evidence that suggests signs of a potential
security incident. IoCs chart specific pieces of evidence that are associated with an attack, like a
file name associated with a type of malware. You can think of an IoC as evidence that points to
something that's already happened, like noticing that a valuable item has been stolen from inside of a
car.

Indicators of attack (IoA) are the series of observed events that indicate a real-time incident. IoAs
focus on identifying the behavioral evidence of an attacker, including their methods and
intentions.

Essentially, IoCs help to identify the who and what of an attack after it's taken place, while IoAs
focus on finding the why and how of an ongoing or unknown attack. For example, observing a
process that makes a network connection is an example of an IoA. The filename of the process and
the IP address that the process contacted are examples of the related IoCs.

Note: Indicators of compromise are not always a confirmation that a security incident has
happened. IoCs may be the result of human error, system malfunctions, and other reasons not
related to security.

Pyramid of Pain
It’s important for security professionals to understand the different types of indicators of
compromise so that they can quickly and effectively detect and respond to them.

David J. Bianco created the concept of the Pyramid of Pain, with the goal of improving how
indicators of compromise are used in incident detection.

The Pyramid of Pain captures the relationship between indicators of compromise and the level of
difficulty that malicious actors experience when indicators of compromise are blocked by security
teams. It lists the different types of indicators of compromise that security professionals use to
identify malicious activity.
1. Hash values: Hashes that correspond to known malicious files. These are often used
to provide unique references to specific samples of malware or to files involved in an
intrusion.

2. IP addresses: An internet protocol address like 192.168.1.1

3. Domain names: A web address such as www.google.com

4. Network artifacts: Observable evidence created by malicious actors on a network.
For example, information found in network protocols such as User-Agent strings.

5. Host artifacts: Observable evidence created by malicious actors on a host. A host is
any device that's connected on a network. For example, the name of a file created by
malware.

6. Tools: Software that’s used by a malicious actor to achieve their goal. For example,
attackers can use password cracking tools like John the Ripper to perform password
attacks to gain access into an account.

7. Tactics, techniques, and procedures (TTPs): This is the behavior of a malicious
actor. Tactics refer to the high-level overview of the behavior. Techniques provide
detailed descriptions of the behavior relating to the tactic. Procedures are highly
detailed descriptions of the technique. TTPs are the hardest to detect.

The power of crowdsourcing


Crowdsourcing is the practice of gathering information using public input and collaboration.
Threat intelligence platforms use crowdsourcing to collect information from the global
cybersecurity community.

Examples of information-sharing organizations include Information Sharing and Analysis Centers
(ISACs), which focus on collecting and sharing sector-specific threat intelligence to companies
within specific industries like energy, healthcare, and others. Open-source intelligence (OSINT) is
the collection and analysis of information from publicly available sources to generate usable
intelligence. OSINT can also be used as a method to gather information related to threat actors,
threats, vulnerabilities, and more.

VirusTotal
VirusTotal is a service that allows anyone to analyze suspicious files, domains, URLs, and
IP addresses for malicious content. VirusTotal also offers additional services and tools for
enterprise use. This reading focuses on the VirusTotal website, which is available for free and
non-commercial use.

Other tools
There are other investigative tools that can be used to analyze IoCs. These tools can also
share the data that's uploaded to them with the security community.

Jotti malware scan


Jotti's malware scan is a free service that lets you scan suspicious files with several antivirus
programs. There are some limitations to the number of files that you can submit.
Urlscan.io
Urlscan.io is a free service that scans and analyzes URLs and provides a detailed report
summarizing the URL information.

CAPE Sandbox
CAPE Sandbox is an open source service used to automate the analysis of suspicious files.
Using an isolated environment, malicious files such as malware are analyzed and a
comprehensive report outlines the malware behavior.

MalwareBazaar
MalwareBazaar is a free repository for malware samples. Malware samples are a great source
of threat intelligence that can be used for research purposes.

Response and recovery


The triage process
Triage is the prioritizing of incidents according to their level of importance or urgency. The
triage process helps security teams evaluate and prioritize security alerts and allocate
resources effectively so that the most critical issues are addressed first.

The triage process consists of three steps:

1. Receive and assess

2. Assign priority

3. Collect and analyze


Business continuity planning


Similar to an incident response plan, a business continuity plan (BCP) is a document that outlines
the procedures to sustain business operations during and after a significant disruption. A BCP
helps organizations ensure that critical business functions can resume or can be quickly restored
when an incident occurs.

Recovery strategies
When an outage occurs due to a security incident, organizations must have a functional
recovery plan in place to resolve the issue and get systems fully operational. BCPs can
include strategies for recovery that focus on returning to normal operations. Site resilience is
one example of a recovery strategy.

There are three types of recovery sites used for site resilience:

 Hot sites: A fully operational facility that is a duplicate of an organization's primary
environment. Hot sites can be activated immediately when an organization's primary
site experiences failure or disruption.
 Warm sites: A facility that contains a fully updated and configured version of the hot
site. Unlike hot sites, warm sites are not fully operational and available for immediate
use but can quickly be made operational when a failure or disruption occurs.

 Cold sites: A backup facility equipped with some of the necessary infrastructure
required to operate an organization's site. When a disruption or failure occurs, cold
sites might not be ready for immediate use and might need additional work to be
operational.

The post-incident activity phase of the lifecycle


The post-incident activity phase entails the process of
reviewing an incident to identify areas for improvement during incident handling.
During this phase of the lifecycle, different types of documentation get
updated or created.
One of the critical forms of documentation that gets created is the final report.
The final report is documentation that provides a comprehensive review of
an incident.
It includes a timeline and details of all events related to the incident and
recommendations for future prevention.
Incidents provide organizations and their security teams with an opportunity to learn from
what happened and prioritize ways to improve the incident handling process.

This is typically done through a lessons learned meeting, also known as a post-mortem. A
lessons learned meeting includes all involved parties after a major incident. Depending on the
scope of an incident, multiple meetings can be scheduled to gather sufficient data.

Module 4 – Logs, IDS, and SIEM tools


Logs
Since different types of devices and systems can create logs,
there are different log data sources in an environment.
 Network: Network logs are generated by network devices like firewalls, routers, or
switches.

 System: System logs are generated by operating systems like Chrome OS™,
Windows, Linux, or macOS®.

 Application: Application logs are generated by software applications and contain
information relating to the events occurring within the application, such as a
smartphone app.

 Security: Security logs are generated by various devices or systems such as antivirus
software and intrusion detection systems. Security logs contain security-related
information such as file deletion.
 Authentication: Authentication logs are generated whenever authentication occurs
such as a successful login attempt into a computer.

Log details
Generally, logs contain a date, time, location, action, and author of the action. Here is an
example of an authentication log:

Login Event [05:45:15] User1 Authenticated successfully

Logs contain information and can be adjusted to contain even more information. Verbose
logging records additional, detailed information beyond the default log recording. Here is an
example of the same log above but logged as verbose.

Login Event [2022/11/16 05:45:15.892673] auth_performer.cc:470 User1 Authenticated successfully from device1 (192.168.1.2)

Log management is the process of collecting, storing, analyzing, and disposing of log data.

What to log
The issue with overlogging
Log retention

Organizations might operate in industries with regulatory requirements. For example, some
regulations require organizations to retain logs for set periods of time and organizations can
implement log retention practices in their log management policy.

Organizations that operate in the following industries might need to modify their log
management policy to meet regulatory requirements:

 Public sector industries, subject to regulations like the Federal Information Security
Modernization Act (FISMA)

 Healthcare industries, subject to regulations like the Health Insurance Portability and
Accountability Act of 1996 (HIPAA)

 Financial services industries, subject to regulations such as the Payment Card Industry
Data Security Standard (PCI DSS), the Gramm-Leach-Bliley Act (GLBA), and the
Sarbanes-Oxley Act of 2002 (SOX)

Log protection

Along with management and retention, the protection of logs is vital in maintaining log
integrity. It’s not unusual for malicious actors to modify logs in attempts to mislead security
teams and to even hide their activity.

Storing logs in a centralized log server is a way to maintain log integrity. When logs are
generated, they get sent to a dedicated server instead of getting stored on a local machine.
This makes it more difficult for attackers to access logs because there is a barrier between the
attacker and the log location.
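As a hypothetical illustration, on Linux systems running rsyslog, a single configuration line like the following forwards all logs over TCP to a central log server (the hostname here is made up):

*.* @@logserver.example.com:514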
Variations of logs
Let's explore some commonly used log formats.

Syslog
One of the most commonly used log formats is Syslog. Syslog is both a protocol and a log
format. As a protocol, it transports and writes logs: the syslog protocol is used to transport logs
to a centralized log server for log management, and it uses port 514 for plaintext logs and
port 6514 for encrypted logs. As a log format, a syslog entry contains three sections: a header,
structured-data, and a message.

The header contains data fields like the Timestamp, the Hostname, the Application name, and
the Message ID. The structured-data portion contains additional data information in key-value
pairs; in the example below, eventSource is a key that specifies the data source of the log,
which is the value Application. Lastly, the message component contains the detailed log
message about the event; in the example below, "This is a log entry!" is the message.

Here is an example of a syslog entry that contains all three components: a header,
followed by structured-data, and a message:

<236>1 2022-03-21T01:11:11.003Z virtual.machine.com evntslog - ID01 [user@32473 iut="1" eventSource="Application" eventID="9999"] This is a log entry!

Priority (PRI)
The priority (PRI) field indicates the urgency of the logged event and is contained within angle
brackets. In this example, the priority value is <236>. Generally, the lower the priority level,
the more urgent the event is.

Note: Syslog headers can be combined with JSON and XML formats. Custom log formats
also exist.

JavaScript Object Notation (JSON)


Let's explore another common log format you might encounter as a security analyst. JavaScript
Object Notation, more popularly known as JSON, is a text-based format designed to be easy to
read and write. It also uses key-value pairs to structure data.

In a JSON log, curly brackets represent the beginning and end of an object. The object is the
data that's enclosed between the brackets. It's organized using key-value pairs where each key
has a corresponding value separated by colons. For example, in the entry below, the first key
is Alert and its value is Malware. JSON is known for its simplicity and easy readability.
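The entry below is a reconstruction for illustration; only the Alert key and its Malware value come from the original example, and the remaining fields are hypothetical:

{
    "Alert": "Malware",
    "Alert code": 1090,
    "severity": 10
}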

XML (eXtensible Markup Language)


XML (eXtensible Markup Language) is a language and a format used for storing and
transmitting data. XML is a native file format used in Windows systems. Instead of key-value
pairs, XML uses tags and other keys to structure data. XML syntax uses the following:

 Tags

 Elements

 Attributes

Here is an example of an XML log entry, where each element's value is enclosed in tags:

<Event> <EventID>4688</EventID> <Version>5</Version> </Event>

CSV (Comma Separated Value)


Finally, Comma Separated Values, or CSV, is a format that uses separators like commas to
separate data values. In a CSV log, the different data fields are separated with commas.
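As a hypothetical illustration (the field layout is made up for this example), a CSV version of the earlier authentication log might look like this:

2022/11/16 05:45:15,device1,192.168.1.2,User1,Authenticated successfully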

CEF (Common Event Format)


Common Event Format (CEF) is a log format that uses key-value pairs to structure data
and identify fields and their corresponding values. The CEF syntax is defined as containing
the following fields:

CEF:Version|Device Vendor|Device Product|Device Version|Signature ID|Name|Severity|Extension

Fields are all separated with a pipe character |. However, anything in the Extension part of the
CEF log entry must be written in a key-value format. Syslog is a common method used to
transport logs like CEF. When Syslog is used a timestamp and hostname will be prepended to
the CEF message. Here is an example of a CEF log entry that details malicious activity
relating to a worm infection:

Sep 29 08:26:10 host CEF:1|Security|threatmanager|1.0|100|worm successfully stopped|10|src=10.0.0.2 dst=2.1.2.2 spt=1232

Here is a breakdown of the fields:

 Syslog Timestamp: Sep 29 08:26:10

 Syslog Hostname: host

 Version: CEF:1

 Device Vendor: Security

 Device Product: threatmanager

 Device Version: 1.0

 Signature ID: 100


 Name: worm successfully stopped

 Severity: 10

 Extension: This field contains data written as key-value pairs. There are two IP
addresses, src=10.0.0.2 and dst=2.1.2.2, and a source port number spt=1232.
Extensions are not required and are optional to add.

This log entry contains details about a Security application called threatmanager that
successfully stopped a worm from spreading from the internal network at 10.0.0.2 to the
external network at 2.1.2.2, from source port 1232. A high severity level of 10 is reported.

Note: Extensions and syslog prefix are optional to add to a CEF log.

Overview of IDS
Security monitoring with detection tools
Telemetry is the collection and transmission of data for analysis.
While logs record events occurring on systems,
telemetry describes the data itself.
For example, packet captures
are considered network telemetry.
For security professionals,
logs and telemetry are sources of
evidence that can be used to
answer questions during investigations.

Host-based intrusion detection system


A host-based intrusion detection system (HIDS) is an application that monitors the activity
of the host on which it's installed. A HIDS is installed as an agent on a host. A host is also
known as an endpoint, which is any device connected to a network like a computer or a
server.

Network-based intrusion detection system


A network-based intrusion detection system (NIDS) is an application that collects and
monitors network traffic and network data. NIDS software is installed on devices located at
specific parts of the network that you want to monitor. The NIDS application inspects
network traffic from different devices on the network. If any malicious network traffic is
detected, the NIDS logs it and generates an alert.

Detection techniques
The two types of detection techniques that are commonly used by IDS technologies are signature-
based analysis and anomaly-based analysis.

Signature-based analysis
Signature analysis, or signature-based analysis, is a detection method that is used to find
events of interest. A signature is a pattern that is associated with malicious activity.
Signatures can contain specific patterns like a sequence of binary numbers, bytes, or even
specific data like an IP address.

Advantages
 Low rate of false positives: Signature-based analysis is very efficient at detecting
known threats because it is simply comparing activity to signatures. This leads to
fewer false positives. Remember that a false positive is an alert that incorrectly
detects the presence of a threat.

Disadvantages

 Signatures can be evaded: Signatures are unique, and attackers can modify their
attack behaviors to bypass the signatures. For example, attackers can make slight
modifications to malware code to alter its signature and avoid detection.

 Signatures require updates: Signature-based analysis relies on a database of
signatures to detect threats. Each time a new exploit or attack is discovered, new
signatures must be created and added to the signature database.

 Inability to detect unknown threats: Signature-based analysis relies on detecting
known threats through signatures. Unknown threats can't be detected, such as new
malware families or zero-day attacks, which are exploits that were previously
unknown.

Anomaly-based analysis
Anomaly-based analysis is a detection method that identifies abnormal behavior. There are
two phases to anomaly-based analysis: a training phase and a detection phase. In the training
phase, a baseline of normal or expected behavior must be established. Baselines are
developed by collecting data that corresponds to normal system behavior. In the detection
phase, the current system activity is compared against this baseline. Activity that happens
outside of the baseline gets logged, and an alert is generated.

Advantages

 Ability to detect new and evolving threats: Unlike signature-based analysis, which
uses known patterns to detect threats, anomaly-based analysis can detect unknown
threats.

Disadvantages

 High rate of false positives: Any behavior that deviates from the baseline can be
flagged as abnormal, including non-malicious behaviors. This leads to a high rate of
false positives.

 Pre-existing compromise: If an attacker is already present during the training phase, their malicious behavior becomes part of the baseline. This can lead to missing a pre-existing attacker.

Components of a detection signature

NIDS rules consist of three components: an action, a header, and rule options.

The action is the first item specified in a signature. It determines the action to take if the rule criteria are met. Actions differ across NIDS rule languages, but some common actions are: alert, pass, or reject. For example, if a rule specifies to alert on suspicious network traffic that establishes an unusual connection to a port, the IDS will inspect the traffic packets and send out an alert.

The header defines the signature's network traffic. This includes information such as source and destination IP addresses, source and destination ports, protocols, and traffic direction.

The rule options let you customize signatures with additional parameters. Typically, rule options are separated by semicolons and enclosed in parentheses.
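
As an illustration, here is a hypothetical rule with the three components labeled (the rule itself, including its sid value, is made up for this example):

alert tcp $HOME_NET any -> $EXTERNAL_NET 4444 (msg:"Outbound connection to port 4444"; sid:1000001; rev:1;)

 Action: alert
 Header: tcp $HOME_NET any -> $EXTERNAL_NET 4444, which specifies the protocol, the source and destination addresses and ports, and the traffic direction.
 Rule options: everything inside the parentheses, here the msg, sid, and rev options.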

Examine Suricata logs


In Suricata, alerts and events are output in a format known as EVE JSON. EVE stands for Extensible Event Format and JSON stands for JavaScript Object Notation. JSON uses key-value pairs, which simplifies both searching and extracting text from log files.

Suricata generates two types of log data: alert logs and network telemetry logs. Alert logs contain information that's relevant to security investigations. Usually this is the output of signatures which have triggered an alert. For example, a signature that detects suspicious traffic across the network generates an alert log that captures details of that traffic.
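
For reference, a trimmed, illustrative EVE JSON alert record might look like the following (all values are made up; the field names match those used in the jq examples later in this document):

{"timestamp":"2024-01-15T09:12:03.405127+0000","flow_id":1234567890123456,"event_type":"alert","src_ip":"10.0.2.15","dest_ip":"142.250.68.46","proto":"TCP","alert":{"signature":"GET on wire","signature_id":12345,"rev":3}}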

Introduction to Suricata
Suricata is an open-source intrusion detection system, intrusion prevention system, and
network analysis tool.

Suricata features
There are three main ways Suricata can be used:

 Intrusion detection system (IDS): As a network-based IDS, Suricata can monitor network traffic and alert on suspicious activities and intrusions. Suricata can also be set up as a host-based IDS to monitor the system and network activities of a single host like a computer.
 Intrusion prevention system (IPS): Suricata can also function as an intrusion
prevention system (IPS) to detect and block malicious activity and traffic. Running
Suricata in IPS mode requires additional configuration such as enabling IPS mode.
 Network security monitoring (NSM): In this mode, Suricata helps keep networks safe by producing and saving relevant network logs. Suricata can analyze live network traffic and existing packet capture files, and it can create and save full or conditional packet captures. This can be useful for forensics, incident response, and for testing signatures.

Rules
Rules or signatures are used to identify specific patterns, behavior, and conditions of network
traffic that might indicate malicious activity.

Suricata uses signature analysis, which is a detection method used to find events of interest. Signatures consist of three components:

 Action: The first component of a signature. It describes the action to take if network
or system activity matches the signature. Examples include: alert, pass, drop, or reject.

 Header: The header includes network traffic information like source and destination
IP addresses, source and destination ports, protocol, and traffic direction.

 Rule options: The rule options provide you with different options to customize
signatures.

Note: The terms rule and signature are synonymous.

Note: Rule order refers to the order in which rules are evaluated by Suricata. Although rules are written in a particular order in a rules file, Suricata by default evaluates them by action type in this order: pass, drop, reject, and alert. Rule order affects the final verdict of a packet, especially when conflicting actions such as a drop rule and an alert rule both match the same packet.

Custom rules
Although Suricata comes with pre-written rules, it is highly recommended that you modify or
customize the existing rules to meet your specific security requirements.

A configuration file is a file used to configure the settings of an application. Configuration files let you customize exactly how you want your IDS to interact with the rest of your environment.

Suricata's configuration file is suricata.yaml, which uses the YAML file format for its syntax and structure.
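
For example, a minimal excerpt of suricata.yaml might tell Suricata where to load rules from (the paths and file names shown here are illustrative, not a complete or authoritative configuration):

default-rule-path: /etc/suricata/rules

rule-files:
  - suricata.rules
  - custom.rules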

Log files
There are two log files that Suricata generates when alerts are triggered:
 eve.json: The eve.json file is the standard Suricata log file. This file contains detailed
information and metadata about the events and alerts generated by Suricata stored in
JSON format. For example, events in this file contain a unique identifier called
flow_id which is used to correlate related logs or alerts to a single network flow,
making it easier to analyze network traffic. The eve.json file is used for more detailed
analysis and is considered to be a better file format for log parsing and SIEM log
ingestion.

 fast.log: The fast.log file is used to record minimal alert information, including basic IP address and port details about the network traffic. The fast.log file is used for basic logging and alerting; it is considered a legacy file format and is not suitable for incident response or threat hunting tasks.

The main difference between the eve.json file and the fast.log file is the level of detail that
is recorded in each. The fast.log file records basic information, whereas the eve.json file
contains additional verbose information.

alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"GET on wire"; flow:established,to_server; content:"GET"; http_method; sid:12345; rev:3;)

Let's further examine the rule options in our example:

 The msg: option provides the alert text. In this case, the alert will print out the
text “GET on wire”, which specifies why the alert was triggered.
 The flow:established,to_server option determines that packets from the client to the
server should be matched. (In this instance, a server is defined as the device
responding to the initial SYN packet with a SYN-ACK packet.)
 The content:"GET" option tells Suricata to look for the word GET in the content of
the http.method portion of the packet.
 The sid:12345 (signature ID) option is a unique numerical value that identifies the
rule.
 The rev:3 option indicates the signature's revision which is used to identify the
signature's version. Here, the revision version is 3.
To summarize, this signature triggers an alert whenever Suricata observes the text GET as the
HTTP method in an HTTP packet from the home network going to the external network.

sudo suricata -r sample.pcap -S custom.rules -k none

Now you’ll further examine the options in the command:

 The -r sample.pcap option specifies an input file to mimic network traffic. In this case, the sample.pcap file.
 The -S custom.rules option instructs Suricata to use the rules defined in
the custom.rules file.
 The -k none option instructs Suricata to disable all checksum checks.
As a refresher, checksums are a way to detect if a packet has been modified in transit.
Because you are using network traffic from a sample packet capture file, you won't
need Suricata to check the integrity of the checksum.
1. Use the cat command to display the entries in the eve.json file:
cat /var/log/suricata/eve.json

The output returns the raw content of the file. You'll notice that there is a lot of data returned
that is not easy to understand in this format.

2. Use the jq command to display the entries in an improved format:
jq . /var/log/suricata/eve.json | less

You can use the lowercase f and b keys to move forward or backward through the output.
Also, if you enter a command incorrectly and it fails to return to the command-line prompt,
you can press CTRL+C to stop the process and force the shell to return to the command-line
prompt.

The jq tool is very useful for processing JSON data, however, a full explanation of its
capabilities is outside of the scope of this lab.

3. Press Q to exit the less output and return to the command-line prompt.

4. Use the jq command to extract specific event data from the eve.json file:
jq -c "[.timestamp,.flow_id,.alert.signature,.proto,.dest_ip]" /var/log/suricata/eve.json

Note: The jq command above extracts the fields specified in the list in the square brackets
from the JSON payload. The fields selected are the timestamp (.timestamp), the flow id
(.flow_id), the alert signature or msg (.alert.signature), the protocol (.proto), and the
destination IP address (.dest_ip).
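
An alert event extracted this way produces one compact array per event; an illustrative output line (with made-up values) might look like:

["2024-01-15T09:12:03.405127+0000",1234567890123456,"GET on wire","TCP","142.250.68.46"]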

5. Use the jq command to display all event logs related to a specific flow_id from
the eve.json file. The flow_id value is a 16-digit number and will vary for each of the
log entries. Replace X with any of the flow_id values returned by the previous query:

jq "select(.flow_id==X)" /var/log/suricata/eve.json
Note: A network flow refers to a sequence of packets between a source and destination that
share common characteristics such as IP addresses, protocols, and more. In cybersecurity,
network traffic flows help analysts understand the behavior of network traffic to identify and
analyze threats. Suricata assigns a unique flow_id to each network flow. All logs from a
network flow share the same flow_id. This makes the flow_id field a useful field for
correlating network traffic that belongs to the same network flows.
Overview of SIEM
SIEM process overview
Previously, you covered the SIEM process. As a refresher, the process consists of three steps:

1. Collect and aggregate data: SIEM tools collect event data from various data sources.

2. Normalize data: Event data that's been collected becomes normalized. Normalization
converts data into a standard format so that data is structured in a consistent way and
becomes easier to read and search. While data normalization is a common feature in
many SIEM tools, it's important to note that SIEM tools vary in their data
normalization capabilities.

3. Analyze data: After the data is collected and normalized, SIEM tools analyze and
correlate the data to identify common patterns that indicate unusual activity.
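
To make normalization concrete, here is a hypothetical before-and-after example (the raw line mimics a Linux authentication log, and the normalized field names are illustrative, not specific to any SIEM):

Raw event: Jan 15 09:12:03 server01 sshd[4321]: Failed password for alex from 203.0.113.7 port 52144

Normalized event: {"timestamp":"2024-01-15T09:12:03Z","host":"server01","app":"sshd","action":"login_failure","user":"alex","src_ip":"203.0.113.7","src_port":52144}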

Log ingestion
Data is required for SIEM tools to work effectively. SIEM tools must first collect data using
log ingestion. Log ingestion is the process of collecting and importing data from log
sources into a SIEM tool. Data comes from any source that generates log data, like a
server.

Log forwarders
A common way that organizations collect log data is to use log forwarders. Log forwarders
are software that automate the process of collecting and sending log data. Some
operating systems have native log forwarders.
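
For example, on many Linux hosts the native rsyslog daemon can act as a log forwarder. A single line in /etc/rsyslog.conf (the collector hostname and port below are placeholder assumptions) forwards all logs to a SIEM collector over TCP:

# Forward all facilities and severities to the SIEM collector over TCP
# (siem.example.com and port 514 are placeholder values)
*.* @@siem.example.com:514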

Resources
Here are some resources if you’d like to learn more about the log ingestion process for
Splunk and Chronicle:

 Guide on getting data into Splunk

 Guide on data ingestion into Chronicle

Query for events with Chronicle


We'll explore using Chronicle's search field to locate an event. Chronicle uses the YARA-L language to define rules for detection. It's a computer language used to create rules for searching through ingested log data. The default method of search is UDM (Unified Data Model) search, which searches through normalized data.
Splunk searches
Splunk has its own querying language called Search Processing Language (SPL). SPL is
used to search and retrieve events from indexes using Splunk’s Search & Reporting app.
An SPL search can contain many different commands and arguments. Here is an example of a basic SPL search that queries an index for failed events:

index=main fail

 index=main: This is the beginning of the search command that tells Splunk to retrieve events from an index named main. An index stores event data that's been collected and processed by Splunk.

 fail: This is the search term. Splunk retrieves events from the main index that contain the term fail anywhere in the event data.
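
SPL searches can also pipe results into additional commands. As a brief sketch (the index name is carried over from the example above, and host is one of Splunk's default fields), the following counts matching events per host:

index=main fail | stats count by host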

Here are some resources if you'd like to learn more about searching for events with Splunk and Chronicle:

 Splunk’s Search Manual on how to use the Splunk search processing language (SPL)

 Chronicle's quickstart guide on the different types of searches
