Module 6
System Investigation, Network
and Mobile Forensics
Dr. Rasika Naik
Mrs. Arti Sawant
Live Data Collection
The primary step of any digital investigation is to collect information and then
decide on the initial response strategy.
The goal of an initial response is twofold:
● Confirm there is an incident
● Retrieve the system’s volatile data that will no longer be there after
you power off the system.
Collecting Volatile Data on WINDOWS
Steps to use for data collection:
1. Execute a trusted cmd.exe.
2. Record the system time and date.
3. Determine who is logged in to the system (and remote-access users, if applicable).
4. Record modification, creation, and access times of all files.
5. Determine open ports.
6. List applications associated with open ports.
7. List all running processes.
8. List current and recent connections.
9. Record the system time and date.
10. Document the commands used during the initial response.
Collecting Volatile Data on UNIX/LINUX
1. Execute a trusted shell.
2. Record the system time and date.
3. Determine who is logged on to the system.
4. Record modification, creation, and access times of all files.
5. Determine open ports.
6. List applications associated with open ports.
7. Determine the running processes.
8. List current and recent connections.
9. Record the system time.
10. Record the steps taken.
11. Record cryptographic checksums.
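The Unix steps above can be sketched as a small collection script. This is a minimal sketch only: the /tmp/ir output path and file names are illustrative assumptions, and in a real response you would run trusted binaries from your own media and write the results to removable storage, not the suspect disk.

```shell
#!/bin/sh
# Minimal sketch of the Unix volatile-data collection steps above.
# /tmp/ir is an illustrative output location for this demo.
OUT=/tmp/ir
mkdir -p "$OUT"

date > "$OUT/01_date_start.txt"                    # step 2: system time and date
who  > "$OUT/02_who.txt"                           # step 3: logged-on users
ls -lR /etc > "$OUT/03_mac_times.txt" 2>/dev/null  # step 4 (whole file system in practice)
( ss -ltnp || netstat -anp ) > "$OUT/04_ports.txt" 2>/dev/null  # steps 5-6, 8: ports/connections
ps aux > "$OUT/05_ps.txt"                          # step 7: running processes
date > "$OUT/06_date_end.txt"                      # step 9: time again at the end
```

Each command's output lands in its own numbered file, which also documents the steps taken (step 10).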
Investigating Windows systems
We assume that you have performed the following tasks:
▼ Conducted an initial response and confirmed that further investigation is necessary
■ Consulted with legal counsel.
▲ Performed a forensic duplication of the evidence drive, using Safeback, EnCase, or
another imaging tool.
You will need a formal approach to investigating the system, because a disorganized
approach will lead to mistakes and overlooked evidence. This chapter outlines many of the
steps you will need to take to unearth the evidence for proving or disproving allegations.
Conducting a Windows Investigation
After you’ve set up your forensic workstation with the proper tools and recorded the
low-level partition data from the target image, you are ready to conduct your
investigation. The following basic investigative steps are required for a formal
examination of a target system:
1. Review all pertinent logs.
2. Perform keyword searches.
3. Review relevant files.
4. Identify unauthorized user accounts or groups.
5. Identify rogue processes and services.
6. Look for unusual or hidden files/directories.
7. Check for unauthorized access points.
8. Examine jobs run by the Scheduler service.
9. Analyze trust relationships.
10. Review security identifiers.
Where Evidence Resides on a Windows System
● Volatile data in kernel structures
● Slack space, where you can obtain remnants of previously deleted files that
are otherwise unrecoverable
● Free or unallocated space, where you can obtain previously deleted files, including
damaged or inaccessible clusters
● The logical file system
● The event logs
● The Registry, which you should think of as an enormous log file
● Application logs not managed by the Windows Event Log Service
● The swap files, which harbor information that was recently located in system RAM
(named pagefile.sys on the active partition)
● Special application-level files, such as Internet Explorer’s Internet history files
(index.dat), Netscape’s fat.db, the history.hst file, and the browser cache
● Temporary files created by many applications
● The Recycle Bin (a hidden, logical file structure where recently deleted items can
be found)
● The printer spool
● Sent or received email, such as the .pst files for Outlook mail
1. Reviewing All Pertinent Logs
The Windows NT, 2000, and XP operating systems maintain three separate log
files: the System log, Application log, and Security log. By reviewing these logs,
you may be able to obtain the following information:
● Determine which users have been accessing specific files
● Determine who has been successfully logging on to a system
● Determine who has been trying unsuccessfully to log on to a system
● Track usage of specific applications
● Track alterations to the audit policy
● Track changes to user permissions (such as increased access)
Logs on a Live System:
Windows provides a utility called Event Viewer to access the audit logs on a
local host.
Select Start | Programs | Administrative Tools | Event Viewer to open Event
Viewer.
Where to Look for Evidence
● Windows can log the creation and termination of each process on
the system. To enable this feature, you set the audit policy to
monitor the success and failure of detailed tracking.
● When a process is created, it is given a process ID (PID) that is
unique to the process. (Resource Monitor)
● With detailed tracking turned on, you can determine every process a
user executes on the system by reviewing the following event IDs:
○ 592 A new process has been created
○ 593 A process has exited
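Once a log has been dumped to text (see the next slide), these event IDs can be filtered with standard tools. The whitespace-separated dump format below is an illustrative assumption; real psloglist/dumpel output varies with the options used.

```shell
# Sketch: filtering a text dump of the Security log for process-tracking
# events. secdump.txt is a fabricated example dump, not real tool output.
cat > secdump.txt <<'EOF'
6/4/2023 22:14:15 592 Security SYSTEM A new process has been created: nc.exe
6/4/2023 22:20:01 593 Security SYSTEM A process has exited: nc.exe
6/4/2023 22:21:30 528 Security bob Successful logon
EOF

# Keep only process creation (592) and exit (593) events; the event ID
# is the third whitespace-separated field in this fabricated format.
awk '$3 == 592 || $3 == 593' secdump.txt
```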
Event Log Dumps
To dump the log
PsLogList
dumpel.exe
Elogdmp.exe
To transfer file
Netcat
cryptcat
Offline Investigation of Logs
To view the event logs from an offline system, you must obtain copies of the
secevent.evt, appevent.evt, and sysevent.evt files from the forensic duplicate.
These log files are usually stored in the default location of
\%systemroot%\System32\Config.
You can obtain these files via a DOS boot disk (with NTFS for DOS if the file
system is NTFS) or via a Linux boot disk with the appropriate kernel to mount
NTFS drives, or simply extract them from your forensic image.
● Once you recover the three .evt files, you can view the log files on your
forensic workstation.
● In Event Viewer, select Log | Open and specify the path to the copied .evt
files.
● You select the log type (Security, Application, or System) when choosing
the .evt file to review.
● It is possible, although unlikely, that your forensic workstation will not be
able to read the imported event logs. In this case, perform the following
steps to access the logs:
1. Disable the EventLog service on the forensic workstation by opening
Control Panel | Services and selecting Disable for the EventLog option. (This
change will not be effective until you reboot the workstation.)
2. Use the User Manager to change the forensic workstation’s audit policy to
log nothing at all. This will prevent your forensic workstation from writing to
the evidence Security log.
3. Reboot the forensic workstation, and then verify that the EventLog service
is not on by viewing Control Panel | Services.
4. Place the evidence .evt files into the \%systemroot%\System32\Config
directory. Since Event Viewer automatically defaults to populating the
three .evt files in \%systemroot%\System32\Config, you will need to either
rename the forensic workstation’s .evt files or overwrite whatever log files
your system was currently using.
5. Use Control Panel | Services to start the EventLog service by selecting
Manual Start and then starting the EventLog service.
6. Start Event Viewer. You will now be able to view the evidence event logs.
Event Log Drawbacks
1. The default Security log settings capture few events of interest.
2. Review is time consuming and difficult.
3. It is impossible to identify remote connections with event log data alone.
4. Maximum log size and retention time limit what is recorded.
2. Performing Keyword Searches
● During investigations into possession of intellectual property or proprietary
information, sex offenses, and practically any case involving text-based
communication, it is important to perform string searches of the subject’s
hard drive.
● Many different keywords can be critical to an investigation, including user
IDs, passwords, sensitive data (code words), known filenames, and
subject-specific words (for example, marijuana, mary jane, bong, and dope).
● String searches can be conducted on the logical file structure or at the
physical level to examine the contents of an entire drive.
● Commonly used disk-search utilities include dtSearch, offered by dtSearch Corp.
● Such utilities perform the search from a physical level.
● EnCase has a string-search capability that can be run against the evidence image
file that it creates (a physical-level string search).
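The physical-level idea can be sketched with GNU grep run directly against a raw image file. The image below is fabricated for the demo; a real one would come from dd or EnCase.

```shell
# Sketch: a physical-level keyword search on a raw image with GNU grep.
# evidence.img stands in for a forensic image; we fabricate a small one.
printf 'padpadpad' > evidence.img
printf 'the password is marijuana' >> evidence.img
printf 'morepadding' >> evidence.img

# -a: treat the binary image as text, -b: print the byte offset,
# -o: print only the matching string
grep -abo 'marijuana' evidence.img
```

The reported byte offset can then be mapped back to a cluster and file in the image.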
3. Reviewing Relevant Files
● Windows systems write input and output to so many files at a time that
almost all actions taken on the system leave some trace of their occurrence.
● Windows has tempfiles, cache files, a Registry that keeps track of recently
used files, a Recycle Bin that maintains deleted files, and countless other
locations where runtime data is stored. (regedit, %temp%)
● It is important to recognize files by their extensions as well as by their true
file headers (if possible). At a minimum, you need to know
what .doc, .tmp, .log, .txt, .wpd, .gif, .exe, and .jpg files are.
● Although EnCase provides viewing capability for many file types, it doesn’t
cover everything.
● So, even if you’re using this forensic utility, you may also need a
comprehensive file viewer, such as Quickview Plus (by JASC Software).
● Modern Windows systems can also use third-party viewers such as QuickLook.
The Registry
● The Registry can reveal the software installed in the past, the security
configuration of the machine, DLL trojans and startup programs, and the most
recently used (MRU) files for many different applications.
The Registry consists of five root keys or root handles (also called hives):
● HKEY_CLASSES_ROOT
● HKEY_CURRENT_USER
● HKEY_LOCAL_MACHINE (abbreviated as HKLM)
● HKEY_USERS
● HKEY_CURRENT_CONFIG
The five hives are made from four major files on the system: SAM, SECURITY,
SOFTWARE, and SYSTEM. The default location for these files is the \WINNT\
System32\Config directory.
The Registry on a Live System
To review the contents of the Registry, use the Registry Editor (Regedit).
Deleted Files and Data
In general, there are four ways to recover deleted data:
● Using undelete tools (e.g., R-Undelete)
● Restoring files located in the Recycle Bin
● Recovering .tmp files (%tmp%)
● Using low-level tools to repair the file system (e.g., Norton Utilities and
File Scavenger)
Web Browser Files
● Employees need access to the Internet at work, but many companies do not
want their employees spending the majority of their work hours shopping,
surfing, trading stocks, chatting, or downloading pornography on company
systems.
● These activities require the use of web browsers.
● Web browsers such as Netscape and Internet Explorer maintain log files.
● Both browsers record browsing history and track sites that were recently
visited. They also maintain a cache that contains a certain amount of the actual
files and web pages recently viewed.
Broken Links
● Another important step is to check for broken links on the system.
● Checking links can also help you determine what software had been on a
system.
● Links are used to associate a desktop shortcut or a Start menu item with an
application or a document.
● Manually removing applications or documents does not remove the links that
were created for them.
● Users may delete files but forget to delete the desktop icon on the system.
● The NTRK tool chklnks.exe is excellent for unearthing files that were once
installed but now are nowhere to be found.
4. Identifying Unauthorized User Accounts or Groups
There are several ways to audit user accounts and user groups on a live system:
● Look in the User Manager for unauthorized user accounts (during a live
system response).
● Use usrstat from the NTRK to view all domain accounts on a domain
controller, looking for suspicious entries.
● Examine the Security log using Event Viewer, filtering for event ID 624
(addition of a new account), 626 (user account enabled), 636 (changing an
account group), and 642 (user account changed).
5. Identifying Rogue Processes
● Identifying rogue processes is much simpler when reviewing a live system.
● Since most rogue processes listen for network connections or sniff the
network for cleartext user IDs and passwords, these processes are easier to
find when they are executing.
● Several tools get information about running processes:
○ pslist lists the name of each running process.
○ ListDLLs provides the full command-line arguments for each running process.
○ Fport shows which processes are listening on which ports.
6. Looking for Unusual or Hidden Files
● Once an attacker gains unlawful access to a Windows system, she needs to
hide her files for later use.
● Once an insider chooses to perform unauthorized or unacceptable deeds on his
system, he may choose to make a few files “invisible.”
● Both of these attackers can take advantage of NTFS file streams to hide data
behind legitimate files
● NTFS has a feature, originally developed on the Macintosh Hierarchical File
System (HFS), to store multiple instances of file data under one file entry.
7. Checking for Unauthorized Access Points
● One of the biggest differences between Windows NT and Unix systems is
that NT does not allow remote-command-line–level access across a
network without the use of external utilities.
● This changed dramatically with Windows 2000, which comes with a Telnet
Server for remote-command administration.
● Any service that allows some degree of remote access could provide an
entry point to unwanted intruders.
● In addition to built-in and third-party applications, trojans may provide
such services.
These services include the following
● Terminal server
● SQL/Oracle
● Third-party telnet daemons on Windows NT
● Windows 2000 Telnet Server
● Third-party FTP daemons
● Web servers (such as Apache and IIS)
● Virtual network computing (TCP port 5800) and PC Anywhere (TCP port
5631)
● Remote-access services (PPP and PPTP)
● X Servers
● When responding to victim systems, you must identify the access points
to the system to determine how access was obtained.
● Tools such as netstat are critical for identifying the access points to a
system.
● They use API calls to read the contents of kernel and user space TCP
and UDP connection tables. If you intend to capture this information,
you will need to allow the restored image to boot.
● If you performed this step during the live system review, before the
system was shut down for imaging, compare the results of the two
operations.
8. Examining Jobs Run by the Scheduler Service
● A common plan by attackers is to have a scheduled event start backdoor
programs for them, change the audit policy, or perhaps scheduled wiping of
files. (right-clicking (My) Computer > Manage > [System Tools] > Task
Scheduler.)
● Malicious scheduled jobs are typically scheduled by using the at or soon
utility. The at command, with no command-line arguments, will show any
jobs that have been scheduled. An example of the kind of scheduled event
you do not want to occur on your system: netcat sending a command shell to
a remote system every Monday evening at 7:30.
9. Analyzing Trust Relationships
● Trust relationships among domains can certainly increase the scope of a
compromise should a valid user ID and password be stolen by an attacker.
● Access to one machine may mean logical access to many others.
● Trust relationships may increase the scope of a compromise and raise the
severity of the incident.
● Unfortunately, determining trust within a Windows domain is not as simple
as it is in the Unix environment.
Analyzing Trust Relationships contd..
Windows NT supports nontransitive, or one-way, trust.
This means that access and services are provided in one direction only.
If your NT domain trusts another domain, that domain does not in turn trust yours.
Therefore, users on the trusted domain can use services on your domain, but not vice versa.
Windows 2000 can provide a two-way, or transitive, trust relationship. Domains located
within an Active Directory forest require two-way trusts to communicate properly.
For example, in Windows 2000 Active Directory Services, if Domain A trusts Domain B,
and Domain B trusts Domain C, then Domain A trusts Domain C. This relationship is
shown in Figure 12-14.
10. Reviewing Security Identifiers (SIDs)
● To establish the actions of a specific user ID, you may need to compare SIDs
found on the victim machine with those at the central authentication authority.
● The SID is used to identify a user or a group uniquely. Each system has its
own identifier, and each user has his own identifier on that system.
● The computer identifier and the user identifier are combined to make the SID.
Thus, SIDs can uniquely identify user accounts. SIDs do not apply to share
security.
● For example, the following is a SID that belongs to the administrator
account:
S-1-5-21-917267712-1342860078-1792151419-500
The S denotes the series of digits as a SID.
The 1 is the revision level, the 5 is the identifier-authority value, and
21-917267712-1342860078-1792151419 comprises the subauthority values. The
500 is the relative identifier (RID).
● Access to shares is accomplished with usernames and passwords.
● However, SIDs do apply when remote access to a domain is
provided.
● A SID with the server’s unique sequence of numbers is placed in
the Registry of the workstation after the first successful logon to
that server.
● Therefore, SIDs can be the digital fingerprints that prove that a
remote system was used to log on to a machine and access a
domain.
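The structure of the example SID above can be broken apart mechanically. A minimal sketch with awk, using the SID from the slide:

```shell
# Sketch: splitting the example SID into its components with awk.
sid='S-1-5-21-917267712-1342860078-1792151419-500'

echo "$sid" | awk -F- '{
    print "revision:  " $2   # SID revision level
    print "authority: " $3   # identifier-authority value
    print "RID:       " $NF  # relative identifier; 500 = built-in Administrator
}'
```

Comparing the machine portion (the subauthority values) across systems is what lets a SID act as the "digital fingerprint" described above.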
Investigating Unix Systems
STEPS IN A UNIX INVESTIGATION
■Review all pertinent logs
■ Perform keyword searches
■ Review relevant files
■ Identify unauthorized user accounts or groups
■ Identify rogue processes
■ Check for unauthorized access points
■ Analyze trust relationships
▲ Check for kernel module rootkits
1. REVIEWING PERTINENT LOGS
Unix operating systems have a variety of log files that can yield important clues
during incident response.
Not only are system activities such as logons, startups, and shutdowns logged,
but also events associated with Unix network services.
Most log files are located in a common directory, usually /var/log.
Some logs are placed in nonintuitive locations, such as /etc.
Additionally, not all log files are even on the system in question. You may find
pertinent logs on a network server or security device, such as a firewall or an
IDS.
Network Logging
The syslog configuration file controls which types of messages are sent to which logs
Each line in the configuration file contains three fields:
● The facility field denotes the subsystem that produced the log file.
● The priority field indicates the severity of the log. There are eight priority levels:
debug, info, notice, warning, err, crit, alert, and emerg.
● The action field specifies how the log will be recorded. The action could be the
name of a log file or even the IP address of a remote logging host.
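The three fields look like this in a typical syslog configuration file; the entries and the remote host name are illustrative, not taken from a real system:

```
# facility.priority       action
authpriv.*                /var/log/secure
mail.info                 /var/log/maillog
*.emerg                   *
auth.*                    @loghost.example.com
```

The last line sends authentication messages to a remote logging host, the setup recommended below for tamper-resistant logs.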
Jun 4 22:14:15 server1 sshd[41458]: Failed password for root from 10.0.2.2 port
22 ssh2
/var/log/syslog
Remote Syslog Server Logs
● The log files generated locally by the syslog daemon are text files that are
usually world-readable but writable only by root.
● This means that any attacker who has gained administrator-level access can
easily modify the syslog log file: removing selected entries, modifying
selected entries, or adding misleading entries.
● These modifications are nearly impossible to detect.
● If you suspect that an attacker has gained root-level access on the system
where the logs are stored, do not trust the logs.
● The only way to tell for certain if an attacker modified the log files is to
perform redundant logging to a secure, remote syslog server.
TCP Wrapper Logging
Another extremely valuable program that uses syslog is TCP Wrappers. TCP
Wrappers is a host-based access control for TCP and UDP services. Any
connection attempts to “wrapped” services are logged via syslog.
Notice that the log entry provides a lot of valuable information: the time and
date of the attempted logon, the hostname (victim), the service (sshd), the
account (root), and the IP address of the system that attempted to log on.
Here is example that shows how a successful connection to a service is
recorded:
Apr 26 20:36:59 victim in.tftpd[524]: connect from 10.10.10.10
Other Network Logs
A typical FTP transfer log (xferlog) entry provides the following information:
▼ The time and date that the transfer occurred
■ The number of seconds that the transfer took (1)
■ The remote host (10.1.1.1)
■ The number of bytes transferred
■ The name of the transferred file
■ The type of file transfer (b for binary)
■ A special action flag (_ indicates no special action)
■ The direction of transfer (o represents outgoing; i is incoming)
■ The access mode (r is for real, as opposed to anonymous or guest)
■The username (chris)
■ The service name (ftp)
■ The authentication method (0 for none)
■ The user ID (* indicates none available)
▲ The status of the transfer (c for complete)
Host Logging
Unix provides a variety of log files that track host operations. Some
of the more useful logs record
● su command execution
● logged-on users
● logon attempts
● and cron job (scheduled program) execution.
User Activity Logging
Along with logons, other types of user activities are recorded in Unix
logs.
● Process accounting logs record commands run by users (viewed with the
lastcomm or acctcom command).
● Shell history files record the commands executed by users
(cat ~/.bash_history).
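Reviewing a recovered history file is usually a keyword scan. A minimal sketch, in which both the history file contents and the keyword list are fabricated for illustration:

```shell
# Sketch: scanning a copied shell history file for suspicious commands.
# history.copy stands in for ~/.bash_history recovered from the image.
cat > history.copy <<'EOF'
ls -la /tmp
wget http://10.0.2.2/rootkit.tgz
tar xzf rootkit.tgz
make install
EOF

# -n: show line numbers, -E: extended regex; the keyword list is illustrative
grep -nE 'wget|nc |tftp|chattr|rootkit' history.copy
```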
2. PERFORMING KEYWORD SEARCHES
● Keyword searches are a critical part of almost every incident response
investigation, ranging from email harassment to remote network
compromise cases.
● Keywords can be a wide range of ASCII strings, including an attacker’s
backdoor password, a username, a MAC address, or an IP address.
● You can conduct keyword searches on the logical file structure or at the
physical level, examining the contents of an entire drive.
String Searches with grep
The powerful, flexible grep command is a primary tool for string searches. To perform a
string search within a file, use the grep command as follows
[root@lucky]# grep root /etc/passwd
root:x:0:0:root:/root:/bin/bash
The line in the passwd file with the string root inside appears as output. The passwd file is a
text file.
grep on a binary file:
[root@lucky]# grep PROMISC /sbin/ifconfig
Binary file /sbin/ifconfig matches
This time, the string does not appear. Instead, you see a notification that a file of
type binary has a matching entry. If you want to see the match, use the -a option
to handle binary files
[root@lucky]# grep -a PROMISC /sbin/ifconfig
[NO FLAGS] UP BROADCAST DEBUG LOOPBACK POINTOPOINT NOTRAILERS
RUNNING NOARP PROMISC ALLMULTI SLAVE MASTER MULTICAST DYNAMIC
Solaris system:
$ strings /sbin/ifconfig | grep NOTRAILERS
NOTRAILERS
File Searches with find
Another useful command for string searches is find. You can use the find command to find any
filename that matches a regular expression.
[root@aplinux /]# find / -name "\.\.\." -print
/home/mugge/MDAc/temp/.../root/..
The find command is helpful for many searches.
It can search a file system for files that match a wide variety of characteristics, including modification
or access time, owner of file, string inside a file, string in the name of the file, and so on.
You can also use find in combination with other commands, such as strings or grep, using the
powerful exec feature.
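The exec feature mentioned above can be sketched as follows; the directory tree, file names, and keyword are fabricated for the demo:

```shell
# Sketch: combining find with grep via -exec. We look for recently
# modified files that contain a keyword of interest.
mkdir -p tree/sub
echo 'backdoor password: open sesame' > tree/sub/notes.txt
echo 'nothing of interest here'       > tree/readme.txt

# -type f: regular files, -mtime -1: modified within the last day,
# -exec ... {} +: run grep once over the batch of matching files;
# grep -l prints only the names of files that contain the string
find tree -type f -mtime -1 -exec grep -l 'backdoor' {} +
```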
3. REVIEWING RELEVANT FILES
● We use a few techniques to help identify which files are likely to be
relevant to any given incident.
● These techniques include identifying relevant files by their time/date
stamps and by the information gained during the initial response to
Unix.
● We also search configuration and system files commonly abused by
attackers.
Incident Time and Time/Date Stamps
The atime, or access time, is the last time that a file or directory was
accessed.
● This includes even read access (such as cat filename).
● The mtime, or modification time, records the last time a file was
modified.
● The ctime, or inode change time, is similar to the mtime, but it records the
last time the inode was changed. This value changes with events such as
changing permissions or ownership.
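All three stamps can be read with GNU stat; the scratch file here is created purely for illustration.

```shell
# Sketch: inspecting the three Unix time/date stamps with GNU stat.
touch demo.txt
stat -c 'atime: %x' demo.txt   # last access (even a read updates this)
stat -c 'mtime: %y' demo.txt   # last content modification
stat -c 'ctime: %z' demo.txt   # last inode change

chmod 600 demo.txt             # updates ctime (inode change), not mtime
stat -c 'ctime after chmod: %z' demo.txt
```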
Special Files
● SUID and SGID Files
● Unusual and Hidden Files and Directories
● Configuration Files
● Startup Files
● Tmp Directory
4. IDENTIFYING UNAUTHORIZED USER ACCOUNTS OR GROUPS
● Attackers will often modify account and group information on victim
systems.
● This modification can come in the form of additional accounts or
escalations in privilege of current accounts.
● The goal is usually to create a backdoor for future access.
● You should audit user and group accounts on suspected victim systems
to validate that an attacker did not manipulate this information.
● Auditing Unix system account information is a straightforward process.
User Account Investigation
User information is stored in the /etc/passwd file.
This is a text file that you can easily review through a variety of mechanisms.
Every user on a Unix system has an entry in the /etc/passwd file. A typical entry
looks like this:
lester:x:512:516:Lester Pace:/home/lester:/bin/bash
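A common audit check is looking for extra UID-0 entries, which would grant root privileges to another account name. A minimal sketch, where passwd.copy and the toor entry are fabricated for illustration:

```shell
# Sketch: auditing a recovered passwd file for unauthorized UID-0 accounts.
# passwd.copy stands in for /etc/passwd from the suspect system.
cat > passwd.copy <<'EOF'
root:x:0:0:root:/root:/bin/bash
lester:x:512:516:Lester Pace:/home/lester:/bin/bash
toor:x:0:0:backdoor:/tmp:/bin/sh
EOF

# Field 3 (colon-separated) is the UID; any UID-0 account besides root is suspect
awk -F: '$3 == 0 && $1 != "root" {print $1}' passwd.copy
```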
Group Account Investigation
Group accounts use the group ID shown in the /etc/passwd file as well as the
/etc/group file.
A typical /etc/group file looks like this:
$ cat /etc/group
root::0:root,ashunn
bin::2:root,bin,daemon
sys::3:root,bin,sys,adm
adm::4:root,adm,daemon
uucp::5:root,uucp
5. IDENTIFYING ROGUE PROCESSES
● Identifying rogue processes is much easier when examining a live
system.
● During the initial investigation, you should have recorded all
listening ports and running processes.
● You should carefully examine the running processes to verify their
validity.
● Also review all binaries associated with listening services and
running processes to ensure that they have not been modified. For example:
ps aux | grep sendmail
6. CHECKING FOR UNAUTHORIZED ACCESS POINTS
● Some of the most common access points that we have seen intruders take
advantage of include X Servers, FTP, telnet, TFTP, DNS, sendmail, finger,
SNMP, IMAP, POP, HTTP, and HTTPS
● As you conduct your investigation of the Unix system, you will need to
examine all network services as potential access points.
● Network services could be vulnerable, allowing intruders access to your
system, or network services could already be trojaned by a successful
intruder.
Examine every potential access point to ensure that it is configured
securely and has the latest patches or software version. Compare
checksums with known-good versions of each application to verify
that the programs have not been trojaned
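The checksum comparison above can be sketched with sha256sum. In practice the known-good list comes from vendor media or a trusted baseline, not the suspect host; the file and "tampering" here are fabricated for the demo.

```shell
# Sketch: verifying a binary against a known-good checksum list.
mkdir -p bin.good
printf 'trusted binary contents' > bin.good/ifconfig

sha256sum bin.good/ifconfig > known_good.sha256   # record the baseline
printf 'trojaned!' >> bin.good/ifconfig           # simulate tampering

# -c re-checks each listed file against its recorded hash;
# it prints FAILED and exits nonzero on any mismatch
sha256sum -c known_good.sha256 || echo 'TAMPERING DETECTED'
```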
7. ANALYZING TRUST RELATIONSHIPS
● Trust relationships are usually configured through files such as
/etc/hosts.equiv or any .rhosts file in a user’s home directory.
● Trust relationships can be established with ssh through shared keys and
through NFS shares.
● Furthermore, firewalls and host-based access controls such as TCP Wrappers
are often configured to let certain source IP addresses communicate with
protected hosts, another form of trust.
● Investigate all possible trust relationships to determine if they played a part in
the incident.
8. DETECTING TROJAN LOADABLE KERNEL MODULES
● Loadable kernel modules (LKMs), or kernel extensions, are found on the
various flavors of Linux, BSD, and Solaris.
● They extend the capabilities of the base operating system kernel, typically to
provide additional support within the operating system for device and file system
drivers. LKMs can be dynamically loaded by a user with root-level access, and
they run at the kernel level instead of at a normal user-process level. Several
intrusion-based LKMs have been developed, and once a malicious user obtains
privileged access to your system, she can install one.
● Some common malicious LKMs include Adore, Knark, and Itf. These LKMs
provide several capabilities for attackers, such as providing remote root access
and hiding files, processes, and services.
LKMs on Live Systems
● Detecting trojan LKMs on a live system can be complicated because these
tools actually intercept system calls (such as ps or directory listing) to provide
false information.
● They are specifically designed to prevent detection with traditional response
methods.
● However, in many cases, you can find them by combining externally executed
commands with local commands to detect anomalies or discrepancies.
● An example would be an external port scan compared to a port scan
performed directly on the local suspect system
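The external-versus-local comparison above can be sketched with comm: a port visible only from the outside suggests the local tools are being lied to. Both port lists here are fabricated; real ones would come from nmap and from netstat/ss on the suspect host.

```shell
# Sketch: comparing an external port scan with the local port listing.
printf '22\n80\n31337\n' | sort > external_scan.txt
printf '22\n80\n'        | sort > local_listing.txt

# comm -23 prints lines only in the first file: ports that are
# externally visible but hidden from local tools
comm -23 external_scan.txt local_listing.txt
```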
LKM Detection Utilities
● The chkrootkit Utility
Nelson Murilo’s chkrootkit detects several rootkits, worms, and LKMs.
● The KSTAT Utility
The KSTAT utility provides several functions useful for the detection of trojan
LKMs.
Questions
1. Why are external log files important during investigation? Give at least two
reasons.
2. What is the difference between mtime and ctime?
3. Why are operating system features that automatically start programs important
to attackers? Name a few ways that Unix systems automatically start programs.
4. What challenges might an investigator encounter when reviewing utmp and
wtmp logs?
Investigating Applications
Techniques
● Memory analysis: Extract running application data from RAM.
● File system analysis: Investigate installed applications, logs, and
artifacts.
● Registry analysis (Windows): Find application traces, settings, and
execution history.
● Process monitoring: Detect malicious activity and unauthorized
executions.
Investigating Applications contd..
Tools
● Process Monitor (ProcMon) – Tracks real-time application behavior.
● Autoruns – Analyzes startup programs and hidden applications.
● Volatility – Memory forensics (RAM analysis).
● Process Explorer – Monitors running processes and detects
anomalies.
● FTK Imager – Captures forensic images of applications and artifacts.
Investigating Web Browser
Techniques:
● Cache and history analysis: Retrieve visited sites, downloads, and
cached pages.
● Cookie analysis: Find authentication tokens and tracking data.
● Session storage inspection: Check for active logins and saved
credentials.
● Artifact extraction: Examine SQLite databases and IndexedDB storage.
Investigating Web Browser contd..
Tools:
● Browser History Examiner – Extracts and analyzes browser activity.
● Hindsight – Chrome forensics tool for investigating browser artifacts.
● Web Historian – Collects and visualizes browsing history.
● Wireshark – Captures and analyzes HTTP/S traffic for browser activity.
● Fiddler – Intercepts and analyzes web traffic.
Email Tracing
Techniques
● Header analysis: Identify sender IP, email servers, and routes.
● Metadata extraction: Examine timestamps, message IDs, and
encoding.
● Attachment inspection: Check for malware or phishing payloads.
● SMTP logs analysis: Trace email transactions on servers.
● Phishing detection: Investigate fraudulent or spoofed emails.
Email Tracing contd..
Tools
● Wireshark – Captures SMTP, IMAP, and POP3 traffic for email
analysis.
● Email Header Analyzer (MxToolbox) – Decodes email headers.
● ExifTool – Extracts metadata from email attachments.
● MailXray – Deep analysis of email metadata and embedded objects.
● Google Admin Toolbox MessageHeader – Analyzes Gmail headers.
dd Command
● The dd command in Linux is a command-line tool used for copying
and converting data at a low level.
● It can be used to duplicate entire disks, create disk images, backup
partitions, and even write files like ISO images to USB drives.
● The dd command operates by copying data byte-by-byte, giving you
the ability to control the copying process down to the smallest level.
● dd Command Syntax:
dd if=[input file] of=[output file][options]
dd Command contd..
Practical Use Cases for the dd command
1. To Backup The Entire Hard Disk
# dd if=/dev/sda of=/dev/sdb
2. To Backup a Partition
# dd if=/dev/hda1 of=~/partition.img
3. To Create An Image of a Hard Disk
# dd if=/dev/hda of=~/hdadisk.img
4. To Create CDROM Backup
# dd if=/dev/cdrom of=tgsservice.iso bs=2048
5. Creating a Bootable USB Drive
#dd if=linux.iso of=/dev/sdb bs=4M
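The semantics of the use cases above can be demonstrated safely on a small scale: dd copies byte-for-byte regardless of block size, so the copy compares identical to the source. The file names are stand-ins for the device paths in the examples.

```shell
# Sketch: dd on a small scale. disk.raw stands in for /dev/sda.
dd if=/dev/zero of=disk.raw bs=1024 count=1024 2>/dev/null  # fake 1 MiB "disk"
dd if=disk.raw  of=disk.img bs=4096 2>/dev/null             # image it

# cmp exits 0 only if the two files are byte-for-byte identical
cmp disk.raw disk.img && echo 'images are identical'
```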
dcfldd Command
● dcfldd (Department of Defense Computer Forensics Lab dd) is an enhanced version of the
traditional dd command.
● It is specifically designed for forensic imaging, secure data wiping, and disk cloning while
providing additional features like hashing, logging, and progress monitoring.
● Why Use dcfldd Over dd?
Compared to dd, dcfldd provides:
✅ Progress monitoring (shows the percentage of completion)
✅ Multiple output support (write to multiple destinations simultaneously)
✅ Hashing on the fly (MD5, SHA-1, SHA-256, etc.)
✅ Verification (ensures copied data integrity)
✅ Logging (records the imaging process for forensic purposes)
dcfldd Command contd..
Basic Syntax:
dcfldd if=<input> of=<output> [options]
Common Uses of dcfldd
● Creating a Forensic Disk Image (With Hashing)
dcfldd if=/dev/sda of=/backup.img hash=sha256 hashlog=hash.txt
● Cloning a Hard Drive
dcfldd if=/dev/sda of=/dev/sdb bs=4M
● Wiping a Disk (Secure Data Deletion)
dcfldd if=/dev/zero of=/dev/sda bs=1M
● Writing an ISO File to a USB Drive
dcfldd if=linux.iso of=/dev/sdb bs=4M
● Splitting the Output Across Multiple Locations
dcfldd if=/dev/sda of=backup1.img of=backup2.img
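Where dcfldd is unavailable, its hash-on-the-fly idea can be approximated with standard tools: tee duplicates the stream, so the image is written while sha256sum fingerprints the very same bytes in a single pass. A sketch on a scratch file (names illustrative):

```shell
#!/bin/sh
# Stand-in "evidence drive" so the example is safe to run anywhere.
printf 'evidence drive contents' > drive.bin

# One pass: dd reads, tee writes the image, sha256sum hashes the stream.
dd if=drive.bin bs=512 2>/dev/null | tee drive.img | sha256sum > hash.txt

# The recorded hash must match a hash of the finished image file.
sha256sum drive.img
cat hash.txt
```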
Foremost tool
● Foremost is a forensic program to recover lost files based on their headers,
footers, and internal data structures.
● Foremost can work on image files, such as those generated by dd, Safeback,
EnCase, etc., or directly on a drive. The headers and footers can be specified
by a configuration file or you can use command line switches to specify
built-in file types.
● Installed size: 102 KB
How to install: sudo apt install foremost
● Basic Syntax:
foremost -i <input> -o <output> [options]
-i <input> → Specifies the input file (disk, partition, or image)
-o <output> → Defines the output directory for recovered files
Foremost tool contd..
Common uses of foremost
1. Recover Deleted Files from a Disk
foremost -i /dev/sdb1 -o /recovered_files
2. Recover Files from a Disk Image
foremost -i disk_image.img -o /recovered_files
3. Recover Specific File Types
foremost -t jpg,pdf,mp4 -i /dev/sda1 -o /recovered_files
4. Running in Verbose Mode for More Details
foremost -v -i /dev/sda1 -o /recovered_files
5. Recovering Data from a USB Drive
foremost -i /dev/sdb -o /usb_recovery
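The header/footer carving idea behind foremost can be shown in miniature with standard tools. The "image" below is fabricated: a JPEG-style header (ff d8 ff e0), the ASCII marker JFIF, a payload, and a footer (ff d9), buried in junk bytes:

```shell
#!/bin/sh
# Build a fake raw image (octal escapes: \377\330\377\340 = ff d8 ff e0,
# \377\331 = ff d9). Offsets below are specific to this fabricated data.
printf 'junkjunk\377\330\377\340JFIFpayload\377\331trailing' > raw.img

# grep -abo reports the byte offset of each match, even in binary data.
off=$(grep -abo 'JFIF' raw.img | head -n 1 | cut -d: -f1)

# The header begins 4 bytes before the JFIF marker; carve header..footer
# (17 bytes in this toy image) out of the raw data with dd.
start=$((off - 4))
dd if=raw.img of=carved.jpg bs=1 skip=$start count=17 2>/dev/null
ls -l carved.jpg
```

Real carvers do the same thing at scale: scan for known signatures, then cut out the byte ranges between them without consulting the file system.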
Scalpel tool
scalpel is a file carving tool used for recovering deleted files from storage media such
as hard drives, USB drives, and disk images. It is similar to foremost but offers more
customization and efficiency.
scalpel works by searching for file headers and footers in raw disk data and
reconstructing deleted files without relying on the file system.
Key Features of scalpel
✅ Recovers deleted files without needing a file system
✅ Faster than foremost due to multi-threaded processing
✅ Highly configurable via /etc/scalpel/scalpel.conf
✅ Supports a wide range of file types (JPEG, PDF, DOCX, MP4, etc.)
✅ Works on raw disk images and physical devices
Scalpel tool contd..
Basic Syntax:
scalpel <input_file_or_device> -o <output_directory>
Common Uses of scalpel
1. Recover All Deleted Files from a Disk
scalpel /dev/sdb1 -o /recovered_files
2. Recover Files from a Disk Image
scalpel disk_image.img -o /recovered_files
3. Recover Specific File Types
Before running scalpel, edit the configuration file:
sudo nano /etc/scalpel/scalpel.conf
Then run: scalpel /dev/sda1 -o /recovered_files
4. Running in Verbose Mode for More Details
scalpel -v /dev/sda1 -o /recovered_files
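Entries in /etc/scalpel/scalpel.conf follow a columnar format: extension, case sensitivity, maximum carve size, header, and footer. The lines below are a sketch based on commonly cited signatures; verify them against the sample entries shipped in your own scalpel.conf before uncommenting:

```
# extension  case  max-size    header                      footer
jpg          y     200000000   \xff\xd8\xff\xe0\x00\x10    \xff\xd9
pdf          y     5000000     %PDF                        %EOF\x0d  REVERSE
```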
Understanding Network Intrusions and Attacks
1. Intrusions versus Attacks
It is important for investigators to understand the difference between an intrusion and an attack, because whether there was an actual unauthorized entry into the network or system can be a significant element in proving a criminal offense.
2. Recognizing Direct versus Distributed Attacks
A direct attack is launched from a computer used by the attacker (often after
pre-intrusion/attack tools, such as port scanners, are used to find potential victims).
Compared to a direct attack, a distributed attack is more complex.
This type of attack involves multiple victims: not only the target of the attack, but also the intermediary remote systems, controlled by the attacker, from which the attack is launched.
Understanding Network Intrusions and Attacks
3. Automated Attacks
Attacks carried out by a computer program, rather than by the attacker physically performing each phase of the attack sequence, are called automated attacks.
4. Accidental Attacks
Sometimes, intrusions and attacks are genuinely unintentional.
The user who appears to have sent a virus via e-mail is frequently a victim of the attack himself/herself.
In numerous cases, large numbers of virus attacks are introduced accidentally or unknowingly.
Even when a lower level of culpability is present, some acts are still considered criminal.
5. Preventing Intentional Internal Security Breaches
A security breach is an event in which data, applications, services, networks, and/or devices are accessed without authorization by bypassing their underlying security mechanisms. It occurs when an individual or an application illegally enters a private, confidential, or otherwise restricted logical IT perimeter.
Internal attackers are more dangerous for several reasons:
a. People inside the network generally know more about the company, the network, the layout of the buildings, and normal working processes, which makes it easier for them to gain access without detection.
b. Internal attackers generally have at least some degree of legitimate access and may find it easy to determine passwords and loopholes in the current security system.
c. Internal attackers know what actions will cause the most damage and what information resides on the network.
In a high-security environment, steps should be taken to prevent this kind of theft. For example:
• Install computers without floppy drives, or even totally diskless workstations.
• Apply system or group policies that prevent users from installing software.
• Lock PC cases and block physical access to serial ports, USB ports, and other connection points, so that removable media devices cannot be connected.
6.Preventing Unauthorized External Intrusions
Unauthorized intrusion can be defined as an attack in which the attacker gains access to the system by means of various hacking or cracking techniques.
7. Planning for Firewall Failures
The planning must take into consideration the possibility that the firewall will fail:
a. If intruders do get in, what is the contingency plan?
b. How can they reduce the amount of damage attackers can do?
c. How can the most sensitive or valuable data be protected?
When considering maintenance and testing and examining firewall failure, organizations
should ask the following questions:
d. When was the last time the firewall rule set was fully verified?
e. When was the firewall rule set updated?
f. When was the last time the firewall was fully tested?
g. When was the last time the firewall rule set was optimized?
Understanding Network Intrusions and Attacks
8.External Intruders with Internal Access
External intruders with internal access are outsiders who physically break into your facility to gain access to your network. Such an intruder is not a true insider, because he or she is not authorized to be there and does not have a valid account on the network.
9. Recognizing the “Fact of the Attack”
To recognize that an attack is happening, IDSs use two methods:
a. Pattern recognition: Examining files, network traffic, sequences in RAM, or other data for recurring or recognizable signs of attack, such as unexplained increases in file size or particular character strings.
b. Effect recognition: Recognizing the results of an attack, such as a system crash caused by overload or an unexpected reboot for no reason. (A number of TCP/IP exploits rely on fragmented packets; such exploits are called "frag attacks.") Effect recognition is more difficult, because the "effects" often look like normal network traffic or like problems caused by hardware or software faults.
Understanding Network Intrusions and Attacks
10. Identifying and Categorizing Attack Types
The attack type refers to how an intruder gains access to your computer or network and what the attacker does once he/she has gained entry. Some of the more common types of attacks include DoS attacks, scanning and spoofing, nuke attacks, distribution of malicious code, and social engineering attacks. When you have a basic understanding of how each type of attack works, you will be well equipped to safeguard against it.
It is useful to sort these different intrusions and attacks into classifications:
• Pre-intrusion/attack activities
• Password-cracking methods
• Technical exploits
• Malicious code attacks
Analyzing Network Traffic
• Network forensics analysis, like any other forensic investigation, presents
many challenges
• The first challenge is related to traffic data sniffing.
• Depending on the network configuration and security measures where the
sniffer is deployed, the tool may not capture all desired traffic data.
• To solve this issue, the network administrator should use a span port on
network devices in multiple places of the network.
• One tedious task in network forensics is data correlation.
• Data correlation can be either causal or temporal.
• For the latter case, timestamps should be logged as well.
Collecting Network-Based Evidence
Capturing network communications is a serious and essential step when examining suspected crimes
or exploitations.
1. What is Network-Based Evidence?
Collecting network-based evidence involves setting up a computer system to carry out network
monitoring, setting up the network monitor, and assessing the efficiency of the network monitor.
2. What are the Goals of Network Monitoring?
Network monitoring is not planned to prevent attacks. Instead, it permits investigators to complete a
number of tasks:
• Confirm or dismiss suspicions surrounding an alleged computer security incident.
• Collect additional evidence and information.
• Verify the scope of a compromise.
• Identify additional parties involved.
• Determine a timeline of events occurring on the network.
• Ensure compliance with a desired activity.
Collecting Network-Based Evidence
3. Types of Network Monitoring
Network monitoring consists of several different types of data collection:
• Event monitoring: Event monitoring is based on rules or thresholds operating on the network monitoring platform.
• Trap and trace monitoring: Noncontent monitoring records the session or transaction data summarizing the network activity. Law enforcement refers to such noncontent monitoring as a pen register or a trap and trace.
• Full content monitoring: Full content monitoring produces data that contains the raw packets collected from the wire. It offers the highest fidelity, because it represents the actual communication passed between computers on a network.
Collecting Network-Based Evidence
4. Setting Up a Network Monitoring System
Hardware- and software-based network diagnostic tools, IDS sensors, and
packet capture utilities all have their dedicated purposes. Creating a successful network monitoring system involves the following steps:
• Define your goals for performing the network surveillance.
• Make sure that you have the proper legal standing to perform the monitoring
activity.
• Obtain and implement proper hardware and software.
• Ensure the security of the platform, both by electronic and by physical means.
• Confirm the appropriate placement of the monitor on the network.
• Assess your network monitor.
Collecting Network-Based Evidence
A. Deciding Your Goals
The first step for carrying out network investigation is to know why you are doing it in the
first place.
Decide what you intend to achieve, like:
• Watch traffic to and from a specific host.
• Observe traffic to and from a specific network.
• Observe a specific individual’s actions.
• Confirm intrusion attempts.
• Look for specific attack signatures.
• Focus on the use of a specific protocol.
B. Choosing Appropriate Hardware
How much data your system can collect depends on the following three parameters:
1. CPU type
2. RAM amount
3. Hard drive
Collecting Network-Based Evidence
C. Choosing Appropriate Software
Possibly the most difficult challenge in assembling a network monitor is choosing its software.
Here are some factors that can affect which software you choose:
• Which host operating system will you use?
• Do you want to permit remote access to your monitor or access your monitor only at the console?
• Do you want to implement a silent network sniffer?
• Do you need portability of the capture files?
• What are the technical skills of those responsible for the monitor?
• How much data traverses the network?
It is important to choose appropriate:
• Operating system
• Remote access
• Silent sniffers
• Data file formats
Collecting Network-Based Evidence
D. Deploying the Network Monitor
The placement of the network monitor is perhaps the most important issue in setting up an investigation system. Newer devices and network technologies, like network switches, VLANs, and multiple data-rate networks, have created new challenges for investigators. The typical goal of network investigation is to capture all activity relating to a specific target system.
E. Evaluating Your Network Monitor
When carrying out network monitoring, you cannot just start tcpdump and walk away from the console. You will want to check that the disk is not filling rapidly, verify that the packet capture program is running properly, and see what kind of load the network monitor is carrying.
Collecting Network-Based Evidence
5. Performing a Trap and Trace
You can accomplish a trap and trace by using free, standard tools like tcpdump. Tcpdump and
WinDump capture files have the same binary format, so you can capture traffic using tcpdump
and view it using WinDump.
• Initiate a trap and trace with tcpdump at the command line, or perform a trap and trace with WinDump on the Windows operating system.
• Create a trap-and-trace output file. It is easier to create a permanent output file than to view the data live on the console. Without an output file, the information is lost the minute you terminate your tcpdump or WinDump process. The UNIX cat command can be used to view the capture file.
6. Using TCPDUMP for Full Content Monitoring
We conduct full content monitoring for computer security incident response; tcpdump is the tool used for full content monitoring.
• Finding Full Content Data
While monitoring the system, we collect as much of the relevant traffic as possible.
• Maintaining Your Full Content Data Files
Collecting Network-Based Evidence
7. Collecting Network-Based Log Files
When responding to an incident, make sure that you examine all the potential sources of evidence.
Some examples are:
• Routers, firewalls, servers, IDS sensors, and other network devices may preserve logs
that record network- based events.
• DHCP servers record network access when a PC requests an IP lease.
• These sources of evidence present some unique challenges for the investigator:
a) The network-based logs are stored in many formats.
b) These logs may originate from several different operating systems.
c) These logs may require special software to access and read.
d) These logs are geographically dispersed and sometimes use an inaccurate current time.
The main challenge for investigators is in tracing all these logs and associating them.
Evidence Handling
There should be some rules and regulations for performing forensic investigation.
• Rule 1. An examination should never be performed on the original media.
• Rule 2. A copy is made onto forensically sterile media. New media should always be used if
available.
• Rule 3. The copy of the evidence must be an exact, bit-by-bit copy (Sometimes referred to
as a bit-stream copy).
• Rule 4. The computer and the data on it must be protected during the acquisition of the
media to ensure that the data is not modified (Use a write blocking device when possible).
• Rule 5. The examination must be conducted in such a way as to prevent any modification
of the evidence.
• Rule 6. The chain of the custody of all evidence must be clearly maintained to provide an
audit log of whom might have accessed the evidence and at what time.
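Rules 3 to 6 can be rehearsed on a scratch file: make a bit-for-bit copy, then record hashes so any later modification is detectable. Filenames here are illustrative; on real media the input would be a device attached behind a write blocker.

```shell
#!/bin/sh
# Scratch "evidence" so the exercise never touches real media.
printf 'original evidence' > evidence.bin

# Rule 3: an exact, bit-by-bit copy onto fresh media.
dd if=evidence.bin of=working_copy.bin bs=512 2>/dev/null

# Rules 5 and 6: record acquisition hashes; re-running the check after any
# examination proves (or disproves) that nothing was modified.
sha256sum evidence.bin working_copy.bin > acquisition_hashes.txt
sha256sum -c acquisition_hashes.txt
```

The hash log doubles as a chain-of-custody artifact: anyone auditing the case can re-verify the copy against the recorded values.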
Investigating Routers
• Routers can be tools for investigators, as well as targets of attack or stepping stones for attackers.
• To allow investigators to resolve complex network incidents, routers can
provide valuable information and evidence.
• Routers lack the data storage and functionality of many of the other
technologies we have examined in previous chapters, and thus they are less
likely to be the ultimate target of attacks.
• During network penetrations, routers are more likely to be springboards for attackers. The information stored on routers, such as passwords, routing tables, and network block information, makes routers a valuable first step for attackers bent on penetrating internal networks.
Investigating Routers
1. Obtaining Volatile Data Prior to Powering Down
• Establishing a Router Connection
• Saving the Router Configuration
2. Finding the Proof
We categorize the types of incidents that involve routers as:
a. Direct compromise
b. Routing table manipulation
c. Theft of information
d. Denial of service
Mobile Forensics
Introduction- What is Mobile forensics?
● It is a branch of digital forensics concerned with recovering different kinds of evidence and analyzing different types of data from mobile phones.
● It helps investigators identify and reach criminals.
● Need- Mobile devices store a vast amount of personal and sensitive information,
including call logs, text messages, photos, videos, location data, and application data,
making them valuable sources of evidence in criminal investigations, cybersecurity
incidents, and legal cases.
● Mobile forensics involves extracting and analyzing data from mobile devices,
including deleted files, application data, GPS data, call logs, text messages, and
photographs and videos.
● Mobile forensics emphasizes the importance of collecting and analyzing data in a way
that ensures the integrity of the evidence and its admissibility in court.
Data types on mobile device
● Call logs and text messages
● Contacts and address book
● Media files (photos, videos, audio)
● Application data
● Location data (GPS, Wi-Fi)
● Internet history
● Deleted data
Procedures to be performed at the crime scene
● Move all people away from the crime scene.
● Sketch or photograph the scene.
● Record the status and location of each device.
● Avoid any activity that could harm the integrity of the evidence.
● Collect all the evidence, including mobile phones and related devices.
Procedure for Mobile forensics
● Preservation
● Acquisition
● Examination
● Analysis
● Reporting
1. Preservation
● Securing the mobile device in a forensically sound manner, preventing any
tampering or data loss
● The device is taken into custody and secured to prevent unauthorized access
or modifications
● A detailed record is made of the device's condition, including its serial
number, model, and any visible damage.
● Chain of Custody: A clear record is maintained of who has had access to
the device and when, ensuring the integrity of the evidence.
2. Acquisition
● To create a forensic image of the device's data, ensuring a complete and
unaltered copy for analysis
● Relevant data, such as text messages, call logs, contacts, photos, and
application data, is extracted from the device.
● The forensic image and extracted data are stored securely, ensuring their
integrity and availability for analysis.
3. Examination
● To obtain the digital evidence from the mobile phone.
● The evidence may be plainly visible or hidden, and is uncovered using scientific methods.
4. Analysis
● To examine the extracted data for relevant information and evidence related to the investigation.
● Forensic analysts review the extracted data, looking for patterns, keywords,
and other relevant information
● The data is interpreted in the context of the investigation, identifying
potential evidence and relationships.
● A comprehensive report is prepared, outlining the findings of the analysis
and the evidence discovered.
5. Reporting
● To present the findings of the mobile forensics analysis in a clear and
concise manner, suitable for legal proceedings or other purposes
● A detailed report is prepared, summarizing the analysis process, findings,
and conclusions.
● The report, along with the forensic image and extracted data, is presented as
evidence in legal proceedings or other investigations
● Expert Testimony: In some cases, forensic analysts may be called upon to
provide expert testimony regarding the analysis and findings.