
Unit-2

COMPUTER FORENSIC EVIDENCE & CAPTURE


Data Recovery Defined
 Data recovery is the process in which highly trained engineers evaluate and extract data from damaged
media and return it in an intact format.
 Many people, even computer experts, fail to recognize data recovery as an option during a data crisis. But
it is possible to retrieve files that have been deleted and passwords that have been forgotten or to recover
entire hard drives that have been physically damaged.
Data Back-up and Recovery
Back-up Obstacles
 Back-up Window: The back-up window is the period of time when back-ups can be run. The back-up
window is generally timed to occur during nonproduction periods when network bandwidth and CPU
utilization are low.
 Network bandwidth: If a network cannot handle the impact of transporting hundreds of gigabytes of data
over a short period of time, the organization’s centralized backup strategy is not viable.
 System throughput: Three I/O bottlenecks are commonly found in traditional backup schemes (a rough
sizing sketch follows at the end of this list). These are:
1. The ability of the system being backed up to push data to the backup server
2. The ability of the backup server to accept data from multiple systems simultaneously
3. The available throughput of the tape device(s) onto which the data is moved
 Lack of Resources: Many companies fail to make appropriate investments in data protection until it is too
late.
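
As a rough illustration of how the back-up window and these bottlenecks interact, the following Python sketch estimates whether a given amount of data fits in a window. All of the figures (data volume, window length, per-stage throughput) are made-up assumptions for illustration only; the slowest stage sets the effective rate.

    # Rough back-up sizing sketch; all figures are illustrative assumptions.
    DATA_GB = 500                 # data to back up (assumed)
    WINDOW_HOURS = 6              # available back-up window (assumed)

    client_push_mb_s = 80         # rate the backed-up system can push data (assumed)
    server_ingest_mb_s = 120      # rate the backup server can accept data (assumed)
    tape_write_mb_s = 60          # throughput of the tape device(s) (assumed)

    # The slowest of the three I/O paths limits the whole backup.
    effective_mb_s = min(client_push_mb_s, server_ingest_mb_s, tape_write_mb_s)
    hours_needed = (DATA_GB * 1024) / effective_mb_s / 3600

    print(f"Effective throughput: {effective_mb_s} MB/s")
    print(f"Hours needed: {hours_needed:.1f} (window: {WINDOW_HOURS})")
    print("Backup fits in the window" if hours_needed <= WINDOW_HOURS
          else "Backup does not fit; a different strategy is needed")
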
The Role of Back-up in Data Recovery
There are many factors that affect back-up. For example:
 Storage costs are decreasing: The cost per megabyte of primary (online) storage has fallen
dramatically over the past several years and continues to do so as disk drive technologies advance.
 Systems have to be on-line continuously: Because systems must be continuously online, the dilemma
becomes that you can no longer take files offline long enough to perform backup.
 The role of Back-up has changed: The role of backup now includes the responsibility for recovering
user errors and ensuring that good data has been saved and can quickly be restored.
CONVENTIONAL TAPE BACK-UP IN TODAY’S MARKET
 A typical tape management system consists of a dedicated workstation with the front-end interfaced to
the network and the back-end controlling a repository of tape devices.
 The media server runs tape management software.
 It can administer backup devices throughout an enterprise and can run continuous parallel backups and
restores.
 An alternative to tape backup is to physically replicate or mirror all data and keep two copies online at
all times.
 The advantage is that the data does not have to be restored, so there are no issues with immediate data
availability.
ISSUES WITH TODAY’S BACK-UP
 NETWORK BACKUP creates network performance problems. Using the production network to
carry backup data, as well as for normal user data access, can severely overburden today’s busy network
resources.
 OFFLINE BACKUP affects data accessibility. The time that the host is offline for data backup must
be minimized. This requires extremely high-speed, continuous parallel backup of the raw image of the
data.
 LIVE BACKUPS allow data access during the backup process but affect performance. The downside
to the live backup is that it puts a tremendous burden on the host.
 MIRRORING doesn’t protect against user error and replication of bad data. Fully replicated online
data sounds great, albeit at twice the cost per megabyte of a single copy of online data.

NEW ARCHITECTURES AND TECHNIQUES ARE REQUIRED


 Backup at extremely high speed is required. Recovery must be available at the file level. The time that
systems are offline for back-up must be eliminated.
 Remote hot recovery sites are needed for immediate resumption of data access. Backup of critical data is
still required to ensure against data errors and user errors.
 To achieve effective backup and recovery, the decoupling of data from its storage space is needed.
 It is necessary to develop techniques to journal modified pages, so that journaling can be invoked within
the primary storage device, without host intervention.
 Part of the primary storage area must be set aside for data to be backed up. This area must be as large as
the largest backup block. We should have fast nonrandom restoration of critical data.
The Data Recovery Solution
SHRINKING EXPERTISE, GROWING COMPLEXITY
 The complex systems that have evolved over the past 30 years must be monitored, managed, controlled,
and optimized. But most of today's bright young graduates have had little exposure to mainframe
concepts.
 Backups often take place while an application is running. Application changes take place on the fly. If
an outage occurs, the company stands to lose tens of thousands of dollars an hour.
FAILURES:
 Disk storage is more reliable than ever, but hardware failures are still possible.
 A simple mistake can be made by an application programmer, system programmer, or operations person.
Logic errors in programs or application of the wrong update at the wrong time can result in a system
crash or worse.
 Disasters really do occur! Floods, tornadoes, earthquakes, tsunamis, and even terrorism can strike.
We must be ready.
BUDGETS AND DOWNTIME
 We have fewer resources (people, processing power, time, and money) to do more work than
ever before, and we must keep our expenses under control.
 Systems must remain available to make money and serve customers. Downtime is much too
expensive to be tolerated.
RECOVERY: THINK BEFORE YOU BACK-UP
 One of the most critical data-management tasks involves recovering data in the event of a problem.
 You must evaluate your preparations, make sure that all resources are available in usable condition,
automate processes as much as possible, and make sure you have the right kind of resources.
Evaluate your preparation
If all of the resources (image copies, change accumulations, and logs) are available at recovery time,
these preparations certainly allow for a standard recovery. Finding out at recovery time that some critical
resource is missing can be disastrous!
Don’t let your resources fall through the cracks
Identifying different types of conditions is critical to ensuring a successful recovery. Checking your
assets to make sure they’re ready should be part of your plan.
Automated Recovery
 With proper planning and automation, recovery is made possible, reliance on specific personnel is
reduced, and the human-error factor is nearly eliminated.
 Data integrity and your business rely on building recovery job control language (JCL). In the event of a
disaster, the Information Management System (IMS) recovery control (RECON) data sets must be
modified in preparation for the recovery.
 Cleaning your RECON data sets can take hours if done manually, and it’s an error-prone process.
Make Recoveries Efficient
 Multithreading tasks shorten the recovery process.
 Recovering multiple databases with one pass through your log data certainly will save time.
 Taking image copies, rebuilding indexes, and validating pointers concurrently with the recovery
process further reduce downtime.
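
As a minimal Python sketch of why multithreading shortens recovery, the example below restores several databases concurrently rather than one at a time. The database names and the restore_database function are hypothetical placeholders, not part of IMS or any vendor tool.

    # Minimal sketch: recover several databases concurrently instead of serially.
    # restore_database() and the database names are hypothetical placeholders.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def restore_database(name: str) -> str:
        """Stand-in for a real restore step (apply image copy, then logs)."""
        time.sleep(2)                       # simulate I/O-bound restore work
        return f"{name}: restored"

    databases = ["CUSTDB", "ORDERDB", "INVDB", "PAYDB"]

    start = time.time()
    with ThreadPoolExecutor(max_workers=4) as pool:
        for result in pool.map(restore_database, databases):
            print(result)
    print(f"Elapsed: {time.time() - start:.1f} s (roughly 8 s if run serially)")
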
Take Back-ups
 The first step to a successful recovery is the backup of your data.
 Your goal in backing up data is to do so quickly, efficiently, and usually with minimal impact to
your customers.
 You might need only very brief outages to take instant copies of your data, or you might have
intelligent storage devices that allow you to take a snapshot of your data.
 Both methods call for tools to assist in the management of resources.
BACK-UP AND RECOVERY SOLUTION
BMC Software has developed a model called the Back-up and Recovery Solution (BRS) for the
Information Management System (IMS) product.
Image Copy
 BRS contains an Image Copy component to help manage your image copy process.
 BRS can take batch, on-line (fuzzy), or incremental image copies; Snapshot copies; or Instant Snapshot
copies.
 The Image Copy component of BRS offers a variety of powerful features: dynamic allocation of all
input and output data sets, stacking of output data sets, high performance access methods (faster I/O),
copying by volume, compression of output image copies, and database group processing, all while
interfacing with DBRC and processing asynchronously.
Change Accumulation
 The BRS Change Accumulation component takes advantage of multiple engines, large virtual storage
resources, and high-speed channels and controllers that are available in many environments.
 Use of multiple task control block (TCB) structures enables overlapping of as much processing as
possible, reducing both elapsed and CPU time.
Recovery
 The BRS Recovery component, which functionally replaces the IMS Database Recovery utility for
full-function (DL/I) databases and data-entry databases (DEDBs), allows recovery of multiple
databases with one pass of the log and change accumulation data sets while dynamically allocating all
data sets required for recovery.
 BRS recovers multiple databases to any point in time. BRS can determine the best choice for a
Point-in-Time (PIT) recovery. BRS provides full DBRC support and includes the following components:
RECOVERY MANAGER
o Recovery Manager component lets you automate and synchronize recoveries across applications
and databases by creating meaningful groups of related databases and creating optimized JCL to
perform the recovery of these groups.
o Recovery Manager component provides a positive response for the IMS commands that are used
to deallocate and start your databases.
o Recovery Manager component fully automates the process of cleaning the RECON data sets for
restart following a disaster recovery.
o Recovery Manager component also allows you to test your recovery strategy and notifies you
when media errors have jeopardized your recovery resources.
POINTER CHECKING
BRS offers the capability to verify the validity of database pointers through the Concurrent Pointer
Checking function for both full-function databases and Fast Path data-entry databases (DEDBs).
INDEX REBUILD
If indexes are ever damaged or lost, the Index Rebuild function of BRS allows you to rebuild them rather than
recover them.
RECOVERY ADVISOR
 The Recovery Advisor component of BRS allows you to monitor the frequency of your image copies and
change accumulations.
 It helps you to determine whether all your databases are being backed up. By using any of the many back-up
and recovery tools available, you can better manage your world and be ready to recover!
EVIDENCE COLLECTION AND DATA SEIZURE
Why Collect Evidence?
The simple reasons for collecting evidence are:
 Future Prevention: Without knowing what happened, you have no hope of ever being able to stop
someone else from doing it again.
 Responsibility: The attacker is responsible for the damage done, and the only way to bring him to justice
is with adequate evidence to prove his actions. The victim has a responsibility to the community.
Information gathered after a compromise can be examined and used by others to prevent further attacks.
Collection Options
Once a compromise has been detected, you have two options:
 Pull the system off the network and begin collecting evidence: In this case you may find that you have
insufficient evidence or, worse, that the attacker left a dead man's switch that destroys any evidence once
the system detects that it is offline.
 Leave it online and attempt to monitor the intruder: you may accidentally alert the intruder while
monitoring and cause him to wipe his tracks any way necessary, destroying evidence as he goes.
Obstacles
 Computer transactions are fast, they can be conducted from anywhere, can be encrypted or anonymous,
and have no intrinsic identifying features such as handwriting and signatures to identify those responsible.
 Any paper trail of computer records they may leave can be easily modified or destroyed, or may be only
temporary.
 Auditing programs may automatically destroy the records left when computer transactions are
finished with them.
 Investigating electronic crimes will always be difficult because of the ease of altering the data and the fact
that transactions may be done anonymously.
 The best we can do is to follow the rules of evidence collection and be as assiduous as possible.
Types of Evidence
 Real Evidence: Real evidence is any evidence that speaks for itself without relying on anything else. In
electronic terms, this can be a log produced by an audit function— provided that the log can be shown to
be free from contamination.
 Testimonial Evidence: Testimonial evidence is any evidence supplied by a witness. As long as the
witness can be considered reliable, testimonial evidence can be almost as powerful as real evidence.
 Hearsay: Hearsay is any evidence presented by a person who was not a direct witness. Hearsay is
generally inadmissible in court and should be avoided.
The Rules of Evidence
1. Admissible: Admissible is the most basic rule. The evidence must be able to be used in court.
2. Authentic: You must be able to show that the evidence relates to the incident in a relevant way.
3. Complete: It’s not enough to collect evidence that just shows one perspective of the incident.
4. Reliable: Your evidence collection and analysis procedures must not cast doubt on the evidence’s
authenticity and veracity.
5. Believable: The evidence you present should be clearly understandable and believable to a jury.

Using the preceding five rules, we can derive some basic do’s and don’ts:
 Minimize handling and corruption of original data: Once you’ve created a master copy of the original
data, don’t touch it or the original. Any changes made to the originals will affect the outcomes of any
analysis later done to copies.
 Account for any changes and keep detailed logs of your actions: Sometimes evidence alteration is
unavoidable. In these cases, it is absolutely essential that the nature, extent, and reasons for the changes be
documented.
 Comply with the five rules of evidence: Following these rules is essential to guaranteeing successful
evidence collection.
 Do not exceed your knowledge: If you ever find yourself "out of your depth," either go and learn more
before continuing (if time is available) or find someone who knows the territory.
 Follow your local security policy: If you fail to comply with your company’s security policy, you may
find yourself with some difficulties.
 Capture as accurate an image of the system as possible: Capturing an accurate image of the system is
related to minimizing the handling or corruption of original data.
 Be prepared to testify: If you’re not willing to testify to the evidence you have collected, you might as
well stop before you start. No one is going to believe you if they can’t replicate your actions and reach the
same results.
 Work fast: The faster you work, the less likely the data is going to change. Volatile evidence may vanish
entirely if you don't collect it in time. If multiple systems are involved, work on them in parallel.
 Proceed from volatile to persistent evidence: Always try to collect the most volatile evidence first.
 Don't shut down before collecting evidence: You should never, ever shut down a system before you collect
the evidence. Not only do you lose any volatile evidence, but also the attacker may have trojaned the
startup and shutdown scripts, plug-and-play devices may alter the system configuration, and temporary
file systems may be wiped out.
 Don’t run any programs on the affected system: The attacker may have left trojaned programs and
libraries on the system; you may inadvertently trigger something that could change or destroy the evidence
you’re looking for.
Volatile Evidence
Always try to collect the most volatile evidence first. An example order of volatility would be:
1. Registers and cache
2. Routing tables
3. ARP cache
4. Process table
5. Kernel statistics and modules
6. Main memory
7. Temporary file systems
8. Secondary memory
9. Router configuration
10. Network topology
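
A minimal Python sketch of collecting some of this volatile data in roughly that order on a Unix-like host follows. The commands shown (netstat, arp, ps) are assumptions that vary by platform, and in real casework they should be trusted, statically linked copies run from your own read-only media rather than the suspect system's binaries, with output written to removable nonvolatile media.

    # Illustrative only: capture some volatile data in rough order of volatility.
    # Commands and output paths are assumptions; use trusted binaries in practice.
    import subprocess
    from datetime import datetime, timezone

    COMMANDS = [                      # most volatile first
        ("routing_tables", ["netstat", "-rn"]),
        ("arp_cache",      ["arp", "-an"]),
        ("process_table",  ["ps", "aux"]),
    ]

    for label, cmd in COMMANDS:
        stamp = datetime.now(timezone.utc).isoformat()
        try:
            out = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
            # In practice, write to removable nonvolatile media, not the local disk.
            with open(f"{label}.txt", "w") as fh:
                fh.write(f"# {stamp}  {' '.join(cmd)}\n{out.stdout}")
        except (OSError, subprocess.TimeoutExpired) as err:
            print(f"{label}: could not collect ({err})")
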

General Procedure
 Identification of Evidence: You must be able to distinguish between evidence and junk data.
 Preservation of Evidence: The evidence you find must be preserved as close as possible to its
original state.
 Analysis of Evidence: Analysis requires in-depth knowledge of what you are looking for and how to get it.
 Presentation of Evidence: The manner of presentation is important, and it must be understandable by a
layman to be effective.
Collection and Archiving
Once we've developed a plan of attack and identified the evidence that needs to be collected, the collection
and archiving of that evidence can begin.
Logs and Logging: You should run some kind of system logging function. It is important to keep these logs
secure and to back them up periodically. Messages and logs from programs can be used to show what damage
an attacker did.
Monitoring: By monitoring we can gather statistics, watch for irregular activity, and trace where an attacker is
coming from and what he is doing. Unusual activity or the sudden appearance of unknown users should be
considered definite cause for closer inspection. You should display a disclaimer stating what monitoring is done
when users log on.
Methods of Collection
There are two basic forms of collection: freezing the scene and honeypotting.
Freezing the Scene
 It involves taking a snapshot of the system in its compromised state. You should then start to collect
whatever data is important onto removable nonvolatile media in a standard format.
 All data collected should have a cryptographic message digest created, and those digests should be
compared to the originals for verification.
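
A minimal Python sketch of creating such digests with SHA-256 and verifying a copy against its original is shown below; the file paths are placeholders.

    # Minimal sketch: hash collected files and verify a copy against the original.
    # The file paths are placeholder assumptions.
    import hashlib

    def sha256_of(path: str, chunk_size: int = 1024 * 1024) -> str:
        """Return the SHA-256 hex digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    original_digest = sha256_of("collected/system.log")   # assumed path
    copy_digest     = sha256_of("archive/system.log")     # assumed path
    print("MATCH" if original_digest == copy_digest else "MISMATCH", original_digest)
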
Honeypotting
 It is the process of creating a replica system and luring the attacker into it for further monitoring.
 The placement of misleading information and the attacker’s response to it is a good method for
determining the attacker’s motives.
Artifacts
 There is almost always something left behind by the attacker, be it code fragments, trojaned
programs, running processes, or sniffer log files. These are known as artifacts.
 Never attempt to analyze an artifact on the compromised system.
 Artifacts are capable of anything, and we want to make sure their effects are controlled.
Collection Steps
1. Find the Evidence: Use a checklist. Not only does it help you to collect evidence, but it also can be
used to double-check that everything you are looking for is there.
2. Find the Relevant Data: Once you’ve found the evidence, you must figure out what part of it is
relevant to the case.
3. Create an Order of Volatility: The order of volatility for your system is a good guide and
ensures that you minimize loss of uncorrupted evidence.
4. Remove external avenues of change: It is essential that you avoid alterations to the original data.
5. Collect the Evidence: Collect the evidence using the appropriate tools for the job.
6. Document everything: Collection procedures may be questioned later, so it is important that you
document everything you do. Timestamps, digital signatures, and signed statements are all important.
Controlling Contamination: The Chain of Custody
 Once the data has been collected, it must be protected from contamination. Originals should never
be used in forensic examination; verified duplicates should be used.
 A good way of ensuring that data remains uncorrupted is to keep a chain of custody. This is a detailed
list of what was done with the original copies once they were collected.
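
One simple way to keep such a list is an append-only log in which every handling step records who did what to which item, when, and the item's hash at that moment. The Python sketch below illustrates the idea; the field names, file paths, and handler name are illustrative assumptions, not a prescribed format.

    # Minimal chain-of-custody sketch: append one JSON record per handling step.
    # Field names, paths, and the handler name are illustrative assumptions.
    import hashlib
    import json
    from datetime import datetime, timezone

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(1024 * 1024), b""):
                h.update(chunk)
        return h.hexdigest()

    def record_custody(log_path: str, item: str, action: str, handler: str) -> None:
        entry = {
            "utc_time": datetime.now(timezone.utc).isoformat(),
            "item": item,
            "sha256": sha256_of(item),
            "action": action,           # e.g. "duplicated", "transported", "examined"
            "handler": handler,
        }
        with open(log_path, "a") as log:     # append-only custody log
            log.write(json.dumps(entry) + "\n")

    record_custody("custody.jsonl", "images/disk01.dd", "duplicated", "examiner_01")
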
Analysis
 Once the data has been successfully collected, it must be analyzed to extract the evidence you wish
to present and to rebuild what actually happened.
Time
 To reconstruct the events that led to your system being corrupted, you must be able to create a
timeline.
 Never, ever change the clock on an affected system.
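
A minimal Python sketch of building such a timeline, merging timestamped events from several sources and normalizing them to UTC, might look like the following; the sample events are made up.

    # Minimal timeline sketch: merge events from several sources, sort by UTC time.
    # The sample events are made up for illustration.
    from datetime import datetime, timezone

    events = [
        ("web_log",  "2024-03-01T02:14:31+00:00", "POST /admin/login"),
        ("firewall", "2024-03-01T02:14:09+00:00", "inbound connection from 203.0.113.7"),
        ("syslog",   "2024-03-01T02:15:02+00:00", "new user account created"),
    ]

    timeline = sorted(
        (datetime.fromisoformat(ts).astimezone(timezone.utc), source, message)
        for source, ts, message in events
    )
    for when, source, message in timeline:
        print(f"{when.isoformat()}  [{source:8}] {message}")
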

Forensic Analysis of Back-ups


 When we analyze back-ups, it is best to have a dedicated host for the job. We need a dedicated host
which is secure, clean, and isolated from any network for analyzing back-ups.
 Document everything you do. Ensure that what you do is repeatable and capable of always giving the
same results.
Reconstructing the Attack
 After collecting the data, we can attempt to reconstruct the chain of events leading to and
following the attacker’s break-in.
 We must correlate all the evidence we have gathered.
 Include all of the evidence we've found when reconstructing the attack, no matter how small it is.
Searching and Seizing
 There is no one methodology for performing a computer forensic investigation and analysis.
 There are too many variables for there to be just one way.
 Some of the typical variables that come to mind include operating systems; software applications;
cryptographic algorithms and applications; and hardware platforms.
 But moving beyond these obvious variables, other equally challenging variables arise: law,
international boundaries, publicity, and methodology.
There are a few widely accepted guidelines for computer forensic analysis:
 A computer forensic examiner is impartial. Our job is to analyze the media and report our findings with
no presumption of guilt or innocence.
 The media used in computer forensic examinations must be sterilized before each use.
 A true image (bit stream) of the original media must be made and used for the analysis.
 The integrity of the original media must be maintained throughout the entire investigation.
Before the Investigation
 For the sake of argument, assume that you have skilled technicians in-house and a top-notch lab: the right
equipment, the right computer forensic tools, and so on.
 District attorneys may require more documentation on the chain of evidence handling.
 When you have a case arise, you know what is required and can work the case from the inception in
support of these requirements.
Methodology Development
 Define your methodology, and work according to it.
 Here methodology defines a method, a set of rules: guidelines that are employed by a discipline.
Document Everything
The chain of evidence is so important in computer forensic investigations. If
resources allow, have two computer forensic personnel assigned to each case every
step of the way. Important in the documentation are the dates and times that steps were taken,
the names of those involved, and under whose authority the steps were taken.
Evidence Search and Seizure
Prior to search and seizure, you should already have the proper documents filled out, as well as
permission from the authority to search and seize the suspect's machine.
Step 1: Preparation
 You should check all media that is to be used in the examination process.
Document the wiping and scanning process.
 Check to make sure that all computer forensic tools are licensed for use and all lab
equipment is in working order.
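
A minimal Python sketch of verifying that a piece of target media reads back as all zeros after wiping is shown below; the device path is an assumption, and raw device access normally requires administrator privileges.

    # Minimal sketch: verify that target media reads back as all zeros after wiping.
    # The device path is an assumption; raw access normally requires root privileges.

    def is_zero_wiped(device: str, chunk_size: int = 1 << 20) -> bool:
        """Return True only if every byte on the device reads back as zero."""
        with open(device, "rb") as dev:
            while chunk := dev.read(chunk_size):
                if any(chunk):           # any nonzero byte means the wipe failed
                    return False
        return True

    print("wiped" if is_zero_wiped("/dev/sdb") else "NOT wiped")   # assumed device
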
Step 2: Snapshot
 You should photograph the scene, whether it is a room in a home or in a business.
You should also take notes about the scene.
 Take advantage of your investigative skills here.
 Note pictures, personal items, and the like. Photograph the actual evidence. For
example, suppose the evidence is a PC in a home office.
 Take a photograph of the monitor. Remove the case cover carefully and
photograph the internals.
Step 3: Transport
 If you have the legal authority to transport the evidence to your lab, you should pack
the evidence securely.
 Photograph/videotape and document the handling of evidence leaving the scene to the
transport vehicle and from transport vehicle to the lab examination facility.
Step 4: Examination
 You should prepare the acquired evidence for examination in your lab.
 There are many options as to which tool to use to image the drive.
 You could use EnCase, the Unix dd command, ByteBack, or SafeBack.
 It is wise to have a variety of tools in your lab.
 Each of these tools has its respective strengths.
 The important note to remember here is: Turn off virus-scanning software.
 We must record the time and date of the CMOS clock.
 Do not boot the suspect machine.
 When making the image, make sure that the tool you use does not access the file
system of the target evidence media.
 After making the image, seal the original media in an electrostatic-safe container,
catalog it, and initial the container.
 Finally, the examination of the acquired image begins.
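
Whichever imaging tool is chosen, the goal is a bit-for-bit copy whose hash matches the source media. The Python sketch below illustrates the idea by reading the raw device directly rather than through its file system; the device and output paths are assumptions, and in real casework a hardware write blocker and a validated imaging tool such as dd or EnCase would be used.

    # Illustrative bit-stream imaging sketch: copy a raw device to an image file
    # while hashing both sides. Paths are assumptions; use a hardware write blocker
    # and a validated imaging tool in real casework.
    import hashlib

    def image_device(device: str, image_path: str, chunk_size: int = 1 << 20) -> str:
        """Copy the raw device to image_path, returning the SHA-256 of the source."""
        source_hash = hashlib.sha256()
        with open(device, "rb") as src, open(image_path, "wb") as dst:
            while chunk := src.read(chunk_size):
                source_hash.update(chunk)
                dst.write(chunk)
        return source_hash.hexdigest()

    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            while chunk := fh.read(chunk_size):
                digest.update(chunk)
        return digest.hexdigest()

    source_digest = image_device("/dev/sdb", "evidence/disk01.dd")   # assumed paths
    image_digest  = sha256_of("evidence/disk01.dd")
    print("verified" if source_digest == image_digest else "HASH MISMATCH", source_digest)
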
