Ch8 Cryptology

With enough time, resources, and motivation, hackers can successfully attack most
cryptosystems and reveal the information. So, a more realistic goal of cryptography is to
make obtaining the information too work-intensive or time-consuming to be worthwhile to the
attacker.

The algorithm (publicly known), the set of rules also known as the cipher, dictates how
enciphering and deciphering take place.

In encryption, the key (also known as cryptovariable) is a value that comprises a large
sequence of random bits. Is it just any random number of bits crammed together? Not really.
An algorithm contains a keyspace, which is a range of values that can be used to construct a
key. When the algorithm needs to generate a new key, it uses random values from this
keyspace. The larger the keyspace, the more available values that can be used to represent
different keys—and the more random the keys are, the harder it is for intruders to figure them
out.

A large keyspace allows for more possible keys. (Today, we commonly use key sizes of
128, 256, 512, or even 1,024 bits and larger.) So a key size of 512 bits would provide 2^512
possible combinations (the keyspace).
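As a small illustration of drawing a key from its keyspace (a sketch only, not tied to any specific algorithm in this chapter), Python's standard secrets module can generate a 256-bit key uniformly at random:

    import secrets

    key = secrets.token_bytes(32)   # 32 bytes = 256 bits, one of 2^256 possible values
    print(key.hex())                # prints the key as hexadecimal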

A cryptosystem is made up of at least the following:
 Software
 Protocols
 Algorithms
 Keys

Cryptosystems can provide the following services:


 Confidentiality - Renders the information unintelligible except by authorized entities.
 Integrity - Ensures that data has not been altered in an unauthorized manner since it was
created, transmitted, or stored.
 Authentication - Verifies the identity of the user or system that created the information.
 Authorization - Provides access to some resource to the authenticated user or system.
 Nonrepudiation - Ensures that the sender cannot deny sending the message.

Today we use cryptography to ensure the integrity of data, to authenticate messages, to
confirm that a message was received, to provide access control, and much more.

The strength of an encryption method comes from the algorithm, the secrecy of the key, the
length of the key, and how they all work together within the cryptosystem. When strength is
discussed in encryption, it refers to how hard it is to figure out the algorithm or key,
whichever is not made public. Attempts to break a cryptosystem usually involve processing
an amazing number of possible values in the hopes of finding the one value (key) that can be
used to decrypt a specific message. The strength of an encryption method correlates to
the amount of necessary processing power, resources, and time required to break the
cryptosystem or to figure out the value of the key.

Another name for cryptography strength is work factor, which is an estimate of the effort and
resources it would take an attacker to penetrate a cryptosystem.

Important elements of encryption are to use an algorithm without flaws, use a large key size,
use all possible values within the keyspace selected as randomly as possible, and protect the
actual key.

A one-time pad is a perfect encryption scheme because it is considered unbreakable if
implemented properly. It uses a pad made up of random values.

This encryption process uses a binary mathematical function called exclusive-OR, usually
abbreviated as XOR.

The one-time pad encryption scheme is deemed unbreakable only if the following things are
true about the implementation process:
 The pad must be used only one time.
 The pad must be at least as long as the message.
 The pad must be securely distributed and protected at its destination.
 The pad must be made up of truly random values.
Each possible pair of entities that might want to communicate in this fashion must receive, in
a secure fashion, a pad that is as long as, or longer than, the actual message. This type of key
management can be overwhelming and may require more overhead than it is worth.
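As a minimal sketch of the XOR-based pad operation described above (the pad here comes from a cryptographically secure generator, standing in for the truly random values a real one-time pad requires):

    import secrets

    message = b"ATTACK AT DAWN"
    pad = secrets.token_bytes(len(message))                      # pad at least as long as the message

    ciphertext = bytes(m ^ p for m, p in zip(message, pad))      # encrypt: XOR each byte with the pad
    recovered  = bytes(c ^ p for c, p in zip(ciphertext, pad))   # decrypt: XOR with the same pad again

    assert recovered == message                                  # XORing with the same pad twice restores the plaintext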

The one-time pad, though impractical for most modern applications, is the only perfect cryptosystem.

The cryptographic life cycle is the ongoing process of identifying your cryptography needs,
selecting the right algorithms, provisioning the needed capabilities and services, and
managing keys.

How can you tell when your algorithms (or choice of keyspaces) are about to go stale?
You need to stay up to date with the cryptologic research community. They are the best
source for early warning that things are going sour. Typically, research papers
postulating weaknesses in an algorithm are followed by academic exercises in breaking
the algorithm under controlled conditions, which are then followed by articles on how it is
broken in general cases. When the first papers come out, it is time to start looking for
replacements.

Cryptographic Methods
Symmetric Key Cryptography
The key has dual functionality in that it can carry out both encryption and decryption
processes. Symmetric keys are also called secret keys, because this type of encryption relies on
each user to keep the key a secret and properly protected. If an intruder were to get this key,
he could decrypt any intercepted message encrypted with it.

If 10 people needed to communicate securely with each other using symmetric keys, then 45
keys would need to be kept track of. If 100 people were going to communicate, then 4,950
keys would be involved. The equation used to calculate the number of symmetric keys
needed is
N(N – 1)/2 = number of keys
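The formula can be checked directly with a trivial sketch:

    def symmetric_keys_needed(n: int) -> int:
        # N(N - 1)/2 pairwise keys for N communicating parties
        return n * (n - 1) // 2

    print(symmetric_keys_needed(10))    # 45
    print(symmetric_keys_needed(100))   # 4950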

Use an out-of-band method to transfer the key (e.g., save the key on a thumb drive and walk it
over to deliver it to the other user).

Symmetric cryptosystems can provide confidentiality, but they cannot provide authentication
or nonrepudiation.

Symmetric cryptosystems are very fast and can be hard to break. Compared with asymmetric
systems, symmetric algorithms scream in speed. They can relatively quickly encrypt and
decrypt large amounts of data that would take an unacceptable amount of time to encrypt and
decrypt with an asymmetric algorithm. It is also difficult to uncover data encrypted with a
symmetric algorithm if a large key size is used. For many of our applications that require
encryption, symmetric key cryptography is the only option.

 Block ciphers, which work on blocks of bits.
 Stream ciphers, which work on one bit at a time.
Block Ciphers
The message is divided into blocks of bits. These blocks are then put through mathematical
functions, one block at a time.

A strong cipher contains the right level of two main attributes: confusion and diffusion.
Confusion is commonly carried out through substitution, while diffusion is carried out by using
transposition (scrambled, or diffused). For a cipher to be considered strong, it must contain
both of these attributes to ensure that reverse-engineering is basically impossible. The
randomness of the key values and the complexity of the mathematical functions dictate
the level of confusion and diffusion involved.
Confusion pertains to making the relationship between the key and resulting ciphertext as
complex as possible.
Diffusion, on the other hand, means that a single plaintext bit has influence over several of
the ciphertext bits. Changing a plaintext value should change many ciphertext values, not just
one. In fact, in a strong block cipher, if one plaintext bit is changed, it will change every
ciphertext bit with the probability of 50 percent.
If an algorithm does not exhibit the necessary degree of the avalanche effect, then the
algorithm is using poor randomization. This can make it easier for an attacker to break the
algorithm.
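A rough way to observe the avalanche effect is to flip a single plaintext bit and count how many ciphertext bits change. The sketch below assumes the third-party PyCryptodome package is installed (providing Crypto.Cipher.AES); it uses single-block ECB purely for the demonstration, not as a recommended mode:

    from Crypto.Cipher import AES
    import secrets

    key = secrets.token_bytes(16)
    block = secrets.token_bytes(16)                         # one 128-bit plaintext block
    flipped = bytes([block[0] ^ 0x01]) + block[1:]          # flip the lowest bit of the first byte

    ct1 = AES.new(key, AES.MODE_ECB).encrypt(block)
    ct2 = AES.new(key, AES.MODE_ECB).encrypt(flipped)

    diff_bits = sum(bin(a ^ b).count("1") for a, b in zip(ct1, ct2))
    print(diff_bits)                                        # typically close to 64 of the 128 bits differ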

The key dictates which S-boxes are to be used when scrambling the original message from
readable plaintext to encrypted nonreadable ciphertext. Each S-box contains the different
substitution methods that can be performed on each block.

Stream Ciphers
A plaintext bit will be transformed into a different ciphertext bit each time it is encrypted.
Stream ciphers use keystream generators, which produce a stream of bits that is XORed with
the plaintext bits to produce ciphertext.
Similar to the one-time pad, in a stream algorithm the individual bits created by the keystream
generator are XORed with the bits of the message to encrypt them.

Initialization Vectors
Initialization vectors (IVs) are random values that are used with algorithms to ensure patterns
are not created during the encryption process. They are used with keys and do not need
to be encrypted when being sent to the destination. If IVs are not used, then two
identical plaintext values that are encrypted with the same key will create the same
ciphertext. Providing attackers with these types of patterns can make their job easier in
breaking the encryption method and uncovering the key.
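The effect of an IV can be sketched with a stream cipher such as ChaCha20 (assuming the third-party PyCryptodome package): encrypting the same plaintext with the same key but different nonces yields different ciphertexts, so no pattern is exposed.

    from Crypto.Cipher import ChaCha20
    import secrets

    key = secrets.token_bytes(32)
    plaintext = b"identical plaintext"

    ct_a = ChaCha20.new(key=key, nonce=secrets.token_bytes(12)).encrypt(plaintext)
    ct_b = ChaCha20.new(key=key, nonce=secrets.token_bytes(12)).encrypt(plaintext)

    print(ct_a != ct_b)   # True: different nonces produce different keystreams and ciphertexts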
A strong and effective stream cipher contains the following characteristics:
 Easy to implement in hardware - Complexity in the hardware design makes it more
difficult to verify the correctness of the implementation and can slow it down.
 Long periods of no repeating patterns within keystream values - Bits generated by
the keystream are not truly random in most cases, which will eventually lead to the
emergence of patterns; we want these patterns to be rare.
 A keystream not linearly related to the key - If someone figures out the keystream
values, that does not mean she now knows the key value.
 Statistically unbiased keystream (as many zeroes as ones) - There should be
no dominance in the number of zeroes or ones in the keystream.

Stream ciphers require a lot of randomness and encrypt individual bits at a time. This requires
more processing power than block ciphers require, which is why stream ciphers are better
suited to be implemented at the hardware level. Because block ciphers do not require as
much processing power, they can be easily implemented at the software level.

Asymmetric Key Cryptography


No one other than the owner should have access to a private key.
If confidentiality is the most important security service to a sender, she would encrypt the file
with the receiver’s public key. This is called a secure message format because it can only be
decrypted by the person who has the corresponding private key.
If authentication is the most important security service to the sender, then she would encrypt
the data with her private key (open message format). This provides assurance to the receiver
that the only person who could have encrypted the data is the individual who has possession
of that private key.

Asymmetric algorithms are slower than symmetric algorithms because they use much more
complex mathematics to carry out their functions, which requires more processing time.
Although they are slower, asymmetric algorithms can provide authentication and
nonrepudiation, depending on the type of algorithm being used. Asymmetric systems also
provide for easier and more manageable key distribution than symmetric systems
and do not have the scalability issues of symmetric systems.

Diffie-Hellman Algorithm
First asymmetric key agreement algorithm
The algorithm allows for key agreement (but not key exchange; with key exchange
functionality, the sender encrypts the symmetric key with the receiver’s public key
before transmission), and it does not provide encryption or digital signature functionality.
The original Diffie-Hellman algorithm is vulnerable to a man-in-the-middle attack,
because no authentication occurs before public keys are exchanged.
The countermeasure to this type of attack is to have authentication take place before
accepting someone’s public key.
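The key agreement math can be sketched with deliberately tiny, insecure numbers (real deployments use primes of 2,048 bits or more):

    p, g = 23, 5                       # public prime modulus and generator (toy values)

    a, b = 6, 15                       # Alice's and Bob's private values, never transmitted
    A = pow(g, a, p)                   # Alice sends g^a mod p = 8
    B = pow(g, b, p)                   # Bob sends   g^b mod p = 19

    shared_alice = pow(B, a, p)        # (g^b)^a mod p
    shared_bob   = pow(A, b, p)        # (g^a)^b mod p
    assert shared_alice == shared_bob == 2   # both sides derive the same shared secret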

RSA
RSA is a public key algorithm that is the most popular when it comes to asymmetric
algorithms. RSA is a worldwide de facto standard and can be used for digital signatures, key
exchange, and encryption. It was developed in 1978 at MIT and provides authentication as
well as key encryption.
The security of this algorithm comes from the difficulty of factoring large numbers into their
original prime numbers. The public and private keys are functions of a pair of large
prime numbers.
Using its one-way function, RSA provides encryption and signature verification, and the
inverse direction performs decryption and signature generation.
RSA has been implemented in applications; in operating systems; and at the hardware level in
network interface cards, secure telephones, and smart cards. RSA can be used as a key
exchange protocol, meaning it is used to encrypt the symmetric key to get it securely to its
destination. RSA has been most commonly used with the symmetric algorithm AES. So, when
RSA is used as a key exchange protocol, a cryptosystem generates a symmetric key to be
used with the AES algorithm. Then the system encrypts the symmetric key with the receiver’s
public key and sends it to the receiver. The symmetric key is protected because only the
individual with the corresponding private key can decrypt and extract the symmetric key.
RSA’s mathematics are based on the difficulty of factoring a large integer into its two prime
factors.
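A hedged sketch of this key exchange, assuming the third-party PyCryptodome package (RSA with OAEP padding wrapping a fresh AES session key):

    from Crypto.PublicKey import RSA
    from Crypto.Cipher import PKCS1_OAEP
    import secrets

    receiver_key = RSA.generate(2048)                     # receiver's key pair
    session_key = secrets.token_bytes(32)                 # fresh 256-bit AES key

    # Sender: wrap the symmetric key with the receiver's public key
    wrapped = PKCS1_OAEP.new(receiver_key.publickey()).encrypt(session_key)

    # Receiver: unwrap with the private key, then use the AES key for bulk data
    unwrapped = PKCS1_OAEP.new(receiver_key).decrypt(wrapped)
    assert unwrapped == session_key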
One-Way Functions
A one-way function is a mathematical function that is easier to compute in one direction than
in the opposite direction.
The work factor is the amount of time and resources it would take for someone to break an
encryption method.

Elliptic Curve Cryptography

Elliptic curves have two properties that are useful for cryptography. The first is that they are
symmetrical about the X axis. This means that the top and bottom parts of the curve are
mirror images of each other. The second useful property is that a straight line will intersect
them in no more than three points.

An elliptic curve cryptosystem (ECC) is a public key cryptosystem that can be described by a
prime number (the equivalent of the modulus value in RSA), a curve equation, and a public
point on the curve.
One differing factor is ECC’s efficiency. ECC is more efficient than RSA and any other
asymmetric algorithm. To illustrate this, an ECC key of 256 bits offers the equivalent
protection of an RSA key of 3,072 bits.

Quantum Cryptography
Quantum key distribution (QKD) is a system that generates and securely distributes
encryption keys of any length between two parties. Though we could, in principle, use
anything that obeys the principles of quantum mechanics, photons (the tiny particles that
make up light) are the most convenient particles to use for QKD.
It turns out photons are polarized, or spin, in ways that can be described as vertical,
horizontal, diagonal left (–45°), and diagonal right (+45°).
Two types of filters are commonly used in QKD. The first is rectilinear and allows
vertically and horizontally polarized photons through. The other is a (you guessed it) diagonal
filter, which allows both diagonally left and diagonally right polarized photons through.
It is important to note that the only way to measure the polarization on a photon is to
essentially destroy it: either it is blocked by the filter if the polarizations are different or it is
absorbed by the sensor if it makes it through.

Key distillation - the two parties discard wrong guesses and keep the remaining sequence of
bits. They now have a shared secret key through this process.
• Made up of truly random values - Quantum mechanics deals with attributes of
matter and energy that are truly random, unlike the pseudo-random numbers we
can generate algorithmically on a traditional computer.
• Used only one time - Since QKD solves the key distribution problem, it allows us to
transmit as many unique keys as we want, reducing the temptation (or need) to reuse keys.
• Securely distributed to its destination - If someone attempts to eavesdrop on the key
exchange, they will have to do so actively in a way that, as we’ve seen, is pretty much
guaranteed to produce evidence of their tampering.
• Secured at sender’s and receiver’s sites - OK, this one is not really addressed by QKD
directly, but anyone going through all this effort would presumably not mess this one up,
right?
• At least as long as the message - Since QKD can be used for arbitrarily long key streams,
we can easily generate keys that are at least as long as the longest message we’d like to
send.
The maximum range for QKD is just over 500 km over fiber-optic wires. While space-to-ground
QKD has been demonstrated using satellites and ground stations, drastically increasing the
reach of such systems, it remains extremely difficult due to atmospheric interference.

Hybrid Encryption Methods


Symmetric algorithms are fast but have some drawbacks (lack of scalability, difficult key
management, and the fact that they provide only confidentiality). Asymmetric algorithms do
not have these drawbacks but are very slow.
A symmetric algorithm creates keys used for encrypting bulk data, and an asymmetric
algorithm creates keys used for automated key distribution. Each algorithm has its pros and
cons, so using them together can be the best of both worlds.

Session Keys
A session key is a single-use symmetric key that is used to encrypt messages between two
users during a communication session. A session key is no different from the symmetric key
described in the previous section, but it is only good for one communication session between
users.
A session key provides more protection than static symmetric keys because it is valid for only
one session between two computers. If an attacker were able to capture the session key, she
would have a very small window of time to use it to try to decrypt messages being passed
back and forth.
So if an eavesdropper happens to figure out one session key, that does not mean she has
access to all other messages you write and send off.
When this session is done, each computer tears down any data structures it built to enable
this communication to take place, releases the resources, and destroys the session key.

Integrity
Hash algorithms are required to successfully detect intentional and unintentional
unauthorized modifications to data.

Hashing Functions
A one-way hash is a function that takes a variable-length string (a message) and produces a
fixed-length value called a hash value.
The hashing algorithm is not a secret—it is publicly known. The secrecy of the one-way
hashing function is its “one-wayness.” The function is run in only one direction, not in
reverse.
One-way hash functions are never used in reverse; they create a hash value and call it a day.
The receiver does not attempt to reverse the process at the other end, but instead runs the
same hashing function one way and compares the two results. Keep in mind that hashing is
not the same thing as encryption; you can’t “decrypt” a hash.
The goal of using a one-way hash function is to provide a fingerprint of the message.
A strong one-way hash function should not provide the same hash value for two or more different
messages. If a hashing algorithm takes steps to ensure it does not create the same hash
value for two or more messages, it is said to be collision free.
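A minimal fingerprint check with Python's standard hashlib (the receiver reruns the same one-way function and compares, never reversing it):

    import hashlib

    message = b"transfer $100 to account 42"
    digest_sent = hashlib.sha256(message).hexdigest()

    # ... message and digest travel to the receiver ...
    digest_recomputed = hashlib.sha256(message).hexdigest()
    print(digest_recomputed == digest_sent)   # True only if the message was not altered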

MD5
It produces a 128-bit hash, but the algorithm is subject to collision attacks, and is
therefore no longer suitable for applications like digital certificates and signatures that require
collision attack resistance. It is still commonly used for file integrity checksums, such
as those required by some intrusion detection systems, as well as for forensic
evidence integrity.

SHA
SHA was designed to be used in digital signatures.

Attacks Against One-Way Hash Functions


If the algorithm does produce the same value for two distinctly different messages, this is
called a collision. An attacker can attempt to force a collision, which is referred to as a
birthday attack.

How many people must be in the same room for the chance to be greater than even that
another person has the same birthday as you? Answer: 253
How many people must be in the same room for the chance to be greater than even that at
least two people share the same birthday? Answer: 23

The birthday paradox can apply to cryptography as well. Since any random set of 23
people most likely (at least a 50 percent chance) includes two people who share a birthday,
by extension, if a hashing algorithm generates a message digest of 60 bits, there is a high
likelihood that an adversary can find a collision using only 2^30 inputs.
The main way an attacker can find the corresponding hashing value that matches a specific
message is through a brute-force attack.
If the output of a hashing algorithm is n bits, then finding a message through a brute-force attack
that results in a specific hash value would require hashing 2^n random messages. To take this
one step further, finding two messages that hash to the same value would require review of
only 2^(n/2) messages.
Hash algorithms usually use message digest sizes (the value of n) that are large enough to
make collisions difficult to accomplish, but they are still possible. An algorithm that has 256-
bit output, like SHA-256, may require approximately 2^128 computations to break. This means
there is a less than 1 in 2^128 chance that someone could carry out a successful birthday attack.
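The arithmetic behind these figures can be checked directly (a sketch of the birthday bound, not an attack):

    import math

    # Probability that at least two of 23 people share a birthday (~0.507)
    p_shared = 1 - math.prod((365 - i) / 365 for i in range(23))
    print(p_shared)

    n = 256                         # digest size in bits (e.g., SHA-256)
    collision_work = 2 ** (n // 2)  # ~2^128 hashes to expect a collision (birthday attack)
    preimage_work  = 2 ** n         # ~2^256 hashes to match one specific digest
    print(collision_work < preimage_work)   # True: collisions are far cheaper than preimages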

Message Digest
A one-way hashing function takes place without the use of any keys.

Message Authentication Code


To use an HMAC function instead of just a plain hashing algorithm, the sender concatenates a
symmetric key with her message and puts the result through a hashing function, generating a
MAC value. She then sends just the message with the attached MAC value; the sender does not
send the symmetric key with the message.
The message is not encrypted in an HMAC function, so there is no confidentiality being
provided.
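A minimal HMAC sketch with Python's standard hmac module; the shared key shown here is only a placeholder:

    import hmac, hashlib

    shared_key = b"pre-shared secret key"          # known to sender and receiver only
    message = b"order #1234: ship 10 units"

    mac_sent = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

    # Receiver recomputes the MAC with the same shared key and compares
    mac_check = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
    print(hmac.compare_digest(mac_sent, mac_check))   # True if the message was not altered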

Digital Signatures
A MAC can ensure that a message has not been altered, but it cannot ensure that it comes
from the entity that claims to be its source.
A digital signature is a hash value that has been encrypted with the sender’s private key.
Since this hash can be decrypted by anyone who has the corresponding public key, it verifies
that the message comes from the claimed sender and that it hasn’t been altered.
RSA and DSA are the best-known and most widely used digital signature algorithms. Unlike
RSA, DSA can be used only for digital signatures, and DSA is slower than RSA in signature
verification. RSA can be used for digital signatures, encryption, and secure distribution of
symmetric keys.
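A hedged sketch of signing and verifying with RSA, assuming the third-party PyCryptodome package (the message digest is signed with the private key and verified with the public key):

    from Crypto.PublicKey import RSA
    from Crypto.Signature import pkcs1_15
    from Crypto.Hash import SHA256

    signer_key = RSA.generate(2048)
    message = b"I agree to the terms"

    signature = pkcs1_15.new(signer_key).sign(SHA256.new(message))   # sign the digest with the private key

    try:
        pkcs1_15.new(signer_key.publickey()).verify(SHA256.new(message), signature)
        print("signature valid")            # message came from the key holder and was not altered
    except ValueError:
        print("signature invalid")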
Public Key Infrastructure
A public key infrastructure (PKI) consists of programs, data formats, procedures,
communication protocols, security policies, and cryptosystems.
PKI establishes and maintains a high level of trust within an environment. It can provide
confidentiality, integrity, nonrepudiation, authentication, and even authorization.
The central concept in PKI is the digital certificate, but it also requires certificate authorities,
registration authorities, and effective key management.

Digital Certificates
The certificate includes the serial number, version number, identity information, algorithm
information, lifetime dates, and the signature of the issuing authority
There is nothing keeping anyone from issuing a self-signed certificate, in which the subject
and issuer are one and the same. While this might be allowed, it should be treated as very
suspicious when dealing with external entities.
We need a reputable third party to verify subjects’ identities and issue their certificates.

Certificate Authorities
A certificate authority (CA) is a trusted third party that vouches for the identity of a subject,
issues a certificate to that subject, and then digitally signs the certificate to assure its
integrity.
The CA takes liability for the authenticity of that subject.
When a person requests a certificate, a registration authority (RA) verifies that individual’s
identity and passes the certificate request off to the CA. The CA constructs the certificate,
signs it, sends it to the requester, and maintains the certificate over its lifetime.
All browsers have several well-known CAs configured by default. Most are configured to trust
dozens or hundreds of CAs.
The CA is responsible for creating and handing out certificates, maintaining them, and
revoking them if necessary. Revocation is handled by the CA, and the revoked certificate
information is stored on a certificate revocation list (CRL).
This list is maintained and periodically updated by the issuing CA. A certificate may be
revoked because the key holder’s private key was compromised or because the CA
discovered the certificate was issued to the wrong person.
CRL is the mechanism for the CA to let others know this information.
By default, web browsers do not check a CRL to ensure that a certificate is not revoked. So when you
are setting up a secure connection to an e-commerce site, you could be relying on a certificate that has
actually been revoked. Not good.
The Online Certificate Status Protocol (OCSP) is being used more and more rather than the
cumbersome CRL approach.
If OCSP is implemented, it does this work automatically in the background. It carries out real-
time validation of a certificate and reports back to the user whether the certificate is “valid,
invalid, or unknown”. OCSP checks the CRL that is maintained by the CA. So the CRL is still
being used, but now we have a protocol developed specifically to check the CRL during a
certificate validation process.

Registration Authorities
The RA cannot issue certificates, but can act as a broker between the user and the CA. When
users need new certificates, they make requests to the RA, and the RA verifies all necessary
identification information before allowing a request to go to the CA.
A PKI may be made up of the following entities and functions:
 Certification authority
 Registration authority
 Certificate repository
 Certificate revocation system
 Key backup and recovery system
 Automatic key update
 Management of key histories
 Timestamping
 Client-side software

PKI supplies the following security services:


 Confidentiality
 Access control
 Integrity
 Authentication
 Nonrepudiation
Another important component that must be integrated into a PKI is a reliable time source that
provides a way for secure timestamping. This comes into play when true nonrepudiation is required.

Key Management
The keys must be generated, destroyed, and recovered properly. Key management can be
handled through manual or automatic processes.
The Kerberos authentication protocol (which we will describe in Chapter 17) uses a Key
Distribution Center (KDC) to store, distribute, and maintain cryptographic session and secret
keys. This provides an automated method of key distribution. The computer that wants to
access a service on another computer requests access via the KDC. The KDC then generates a
session key to be used between the requesting computer and the computer providing the
requested resource or service.

Key Management Principles


Keys are at risk of being lost, destroyed, or corrupted. Backup copies should be available and
easily accessible when required. If data is encrypted and then the user accidentally loses the
necessary key to decrypt it, this information would be lost forever if there were not a backup
key to save the day.

Rules for Keys and Key Management


 The key length should be long enough to provide the necessary level of protection.
 Keys should be stored and transmitted by secure means.
 Keys should be random, and the algorithm should use the full spectrum of the
keyspace.
 The key’s lifetime should correspond with the sensitivity of the data it is protecting. (Less
secure data may allow for a longer key lifetime, whereas more sensitive data might
require a shorter key lifetime.)
 The more the key is used, the shorter its lifetime should be.
 Keys should be backed up or escrowed in case of emergencies.
 Keys should be properly destroyed when their lifetime comes to an end.

Key escrow - When two or more entities are required to reconstruct a key for key recovery
processes, this is known as multiparty key recovery. You can use an approach called m-of-n
control (or quorum authentication), in which you designate a group of (n) people as recovery
agents and only need a subset (m) of them for key recovery.
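A hedged sketch of m-of-n recovery using Shamir secret sharing, assuming the third-party PyCryptodome package (here any 3 of 5 recovery agents can reconstruct a 128-bit key):

    from Crypto.Protocol.SecretSharing import Shamir
    import secrets

    key = secrets.token_bytes(16)              # the 128-bit secret to be escrowed
    shares = Shamir.split(3, 5, key)           # five shares, any three reconstruct the key

    recovered = Shamir.combine(shares[:3])     # three agents present their shares
    assert recovered == key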
Key management best practices can be found in NIST Special Publication 800-57, Part
1 Revision 5, Recommendation for Key Management: Part 1 – General.

Attacks Against Cryptography


Eavesdropping and sniffing data as it passes over a network are considered passive attacks
because the attacker is not affecting the protocol, algorithm, key, message, or any parts of
the encryption system.
Altering messages, modifying system files, and masquerading as another individual are acts
that are considered active attacks because the attacker is actually doing something instead of
sitting back and gathering data. Passive attacks are usually used to gain information prior to
carrying out an active attack.
The common attack vectors in cryptography are key and algorithm, implementation, data,
and people.

Key and Algorithm Attacks


Brute Force
Sometimes, all it takes to break a cryptosystem is to systematically try all possible keys until
you find the right one.
Ciphertext-Only Attacks
The attacker has the ciphertext of one or more messages, each of which has been encrypted
using the same encryption algorithm and key. The attacker’s goal is to discover the key used
in the encryption process. Once the attacker figures out the key, she can decrypt all other
messages encrypted with the same key.
It is the hardest attack to carry out successfully because the attacker has so little information
about the encryption process.
Known-Plaintext Attacks
The attacker has the plaintext and corresponding ciphertext of one or more messages and
wants to discover the key used to encrypt the message(s) so that he can decipher and read
other messages. This attack can leverage known patterns in message composition (e.g., a
standard confidentiality disclaimer).
Rather than having to cryptanalyze the entire message, the attacker can focus on the part of
it that is known.
Chosen-Plaintext Attacks
The attacker has the plaintext and ciphertext, but can choose the plaintext that gets
encrypted to see the corresponding ciphertext. This gives the attacker more power and
possibly a deeper understanding of the way the encryption process works so that she can
gather more information about the key being used.
The attacker has a copy of the plaintext of the message, because she wrote it, and a copy of
the ciphertext.
Chosen-Ciphertext Attacks
The attacker can choose the ciphertext to be decrypted and has access to the resulting
decrypted plaintext.
This is a harder attack.
The attacker may need to have control of the system that contains the cryptosystem.
The attacker can carry out one of these attacks and, depending upon what she gleaned from that
first attack, modify her next attack. This is the process of reverse engineering or cryptanalysis attacks.

Differential Cryptanalysis
A differential cryptanalysis attack looks at ciphertext pairs generated by encryption of
plaintext pairs with specific differences and analyzes the effect and result of those
differences.
The attacker takes two messages of plaintext and follows the changes that take place to the
blocks as they go through the different S-boxes. (Each message is being encrypted with the
same key.) The differences identified in the resulting ciphertext values are used to map
probability values to different possible key values. The attacker continues this process with
several more sets of messages and reviews the common key probability values. One key
value will continue to show itself as the most probable key used in the encryption processes.
Since the attacker chooses the different plaintext messages for this attack, it is considered a
type of chosen-plaintext attack.

Frequency Analysis
A frequency analysis, also known as a statistical attack, identifies statistically significant
patterns in the ciphertext generated by a cryptosystem. For example, the number of zeroes
may be significantly higher than the number of ones. This could show that the pseudorandom
number generator (PRNG) in use may be biased. If keys are taken directly from the
output of the PRNG, then the distribution of keys would also be biased. The statistical
knowledge about the bias could be used to reduce the search time for the keys.

Implementation Attacks
Implementation flaws are system development defects that could compromise a real system,
and implementation attacks are the techniques used to exploit these flaws.

Source Code Analysis


Source code analysis involves reviewing the code, ideally as part of a large team of researchers,
and looking for bugs. Through a variety of software auditing techniques (which we will cover in
Chapter 25), source code analysis examines each line of code and branch of execution to
determine whether it is vulnerable to exploitation. This is most practical when
the code is open source or you otherwise have access to its source.

Reverse Engineering
Another approach to discovering implementation flaws in cryptosystems involves taking a
product and tearing it apart to see how it works.
There are a number of ways in which you can disassemble those binaries and get code that is
pretty close to the source code. Software reverse engineering requires a lot more effort and
skill than regular source code analysis, but it is more common than most would think.
Hardware reverse engineering means the researcher is directly probing integrated
circuit (IC) chips and other electronic components. In some cases, chips are actually peeled
apart layer by layer to show internal interconnections and even individual bits that are set in
memory structures. This approach oftentimes requires destroying the device as it is
dissected, probed, and analyzed.
The effort, skill, and expense required can sometimes yield implementation flaws that
would be difficult or impossible to find otherwise.

Side-Channel Attacks
We can review facts and infer the value of an encryption key. For example, we could
detect how much power consumption is used for encryption and decryption (the fluctuation of
electronic voltage). We could also intercept the radiation emissions released and then
calculate how long the processes took. Looking around the cryptosystem, or its attributes and
characteristics, is different from looking into the cryptosystem and trying to defeat it through
mathematical computations.
An attacker could measure power consumption, radiation emissions, and the time it takes for
certain types of data processing. With this information, he can work backward by reverse-
engineering the process to uncover an encryption key or sensitive data. A power analysis attack,
for example, reviews the amount of power consumed during cryptographic operations. This type
of attack has been successful in uncovering confidential information from smart cards.

Fault Injection
Fault injection attacks attempt to cause errors in a cryptosystem in an attempt to recover or
infer the encryption key.
Though this attack is fairly rare, it received a lot of attention in 2001 after it was shown to be
effective after only one injection against RSA using the Chinese Remainder Theorem (RSA-
CRT).

Other Attacks
A big concern in distributed environments is the replay attack, in which an attacker captures
some type of data and resubmits it with the hopes of fooling the receiving device into thinking
it is legitimate information. Many times, the data captured and resubmitted is authentication
information, and the attacker is trying to authenticate herself as someone else to gain
unauthorized access.
Pass the hash is a well-known replay attack that targets Microsoft Windows Active Directory
(AD) single sign-on environments.
Any user with local admin rights can dump LSASS (the process responsible for verifying user
logins, handling password changes, and managing access tokens) memory from a Windows
computer and recover password hashes for any user who has recently logged into that system.
Timestamps and sequence numbers are two countermeasures to replay attacks. Packets
can contain sequence numbers, so each machine will expect a specific number on each
receiving packet. If a packet has a sequence number that has been previously used, that is an
indication of a replay attack. Packets can also be timestamped. A threshold can be set on
each computer to only accept packets within a certain timeframe. If a packet is received that
is past this threshold, it can help identify a replay attack.
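An illustrative sketch (not a protocol from the text) of combining both countermeasures on the receiving side; the names and window size are hypothetical:

    import time

    SEEN_SEQUENCE_NUMBERS = set()
    WINDOW_SECONDS = 30                    # acceptable clock skew / delivery delay

    def accept_packet(seq: int, timestamp: float) -> bool:
        """Accept only fresh, timely packets; replays are rejected."""
        if seq in SEEN_SEQUENCE_NUMBERS:
            return False                   # sequence number already used: likely a replay
        if abs(time.time() - timestamp) > WINDOW_SECONDS:
            return False                   # outside the accepted timeframe
        SEEN_SEQUENCE_NUMBERS.add(seq)
        return True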

Man-in-the-Middle
Insert yourself into the process by which secure connections are established. In man-in-the-
middle (MitM) attacks, threat actors intercept an outbound secure connection request from
clients and relay their own requests to the intended servers, terminating both and acting as a
proxy.
The attacker is invisibly sitting in the middle of two separate connections. From this vantage
point, the attacker can relay information from one end to the other, perhaps copying some of it
(e.g., credentials, sensitive documents) for later use, or selectively modify the information sent
from one end to the other.

Social Engineering Attacks


Social engineering attacks can be carried out through deception, persuasion, coercion
(rubber-hose cryptanalysis), or bribery (purchase-key attack).

Ransomware
Ransomware is a type of malware that typically encrypts victims’ files and holds them ransom
until a payment is made to an account controlled by the attacker.
After the initial compromise, however, the ransomware may be able to move laterally
across the victim’s network, infecting other hosts.
