Risk Management Framework 2.0
by
Brent F. Richey
MASTER OF SCIENCE
Ames, Iowa
2016
DEDICATION
This thesis is dedicated to the military service members, academics, role models, and
family members that have shaped my views of the world over the past thirty-six years. These
personal relationships have opened doors, provided counsel, broadened my horizons, instilled
values, and ultimately resulted in who I am. The military leaders and academic mentors to
whom I would primarily like to dedicate this thesis are: LTC Matthew Hopper, LTC Jeremy
Bartel, COL Miles Brown, COL Eric Walker, LTC John Williams, LTC Robert Hensley,
MAJ Natalie Bynum, MAJ Frank Krammer, CSM Andy Frye, 1SG John Stockton, MAJ(Ret)
Russell Fenton, LTC Reginald Harris, LTC Dexter Nunnally, COL Barton Lawrence, SFC
(Ret) Cornelius Williams, MSG(Ret) Johnny Escamilla, 1SG(Ret) Dallas Gasque, MAJ
Diane Johnson-Windau, COL (Ret) Eric Albert, SFC Shane Baxter, MAJ Adrienne Crosby,
MAJ Ronnie Slack, CSM Ira Russey, Dr. Jim Davis, Mr. Kenneth Peters, Dr. Evan Hart, Dr.
Peter Li, Dr. Tom Daniels, Dr. Dan Zhu, Dr. Doug Jacobson, and Dr. Yong Guan.
My faith is deeply personal and equally as important as the arts and sciences, and the
following men have set examples and contributed perspectives that have been influential in
this realm: BP Alfonzo Huezo, BP Michael Peden, Bishop Jim Cannon, Bishop Mark
Anderson, Bishop John Albright, Bishop James Reed, LtCol (Ret) Robert Wolfertz, Shane
Van Wey, Chaplain (CPT) Jeff Pyun, Dr. Scott Mere, and my father David Richey.
Last but not least, I would like to dedicate this work to my family members who have
always shown nothing less than unconditional love and support. There are too many
members to mention here, however I would like to individually dedicate this work to my
grandfather, Vernon Richey, who truly understood the value of evaluating risk and would
ACKNOWLEDGMENTS
I would like to thank my committee members Dr. Doug Jacobson and Dr. James Cannon for their guidance and
support throughout the course of this research. In addition, I would like to acknowledge my
colleagues, the faculty of the Information Assurance Center (IAC), and the staff for making
my journey at Iowa State University such a rewarding experience. My journey here might not
have been possible without the willingness of the IAC’s administrative coordinator and
director, Ginny Anderson and Dr. Jacobson, to accommodate my unique situation during the
application process. I am also grateful for the counsel of Iowa
State University alumni Dr. David Carlson and MAJ Bryan Plass for recommending this
program when deciding which academic path to pursue. The graduate course that provided
the most benefit to this study was “Legal and Ethics in Information Assurance” taught by
ISU Lecturer Kenneth Peters, and I am thankful for all the effort he put into developing such
a broadening course. I would also like to thank recent ISU graduate Alec Poczatek for
sharing his extensive knowledge on attack graph theory, and how his research applied to my
project.
I would especially like to offer my sincere gratitude to the U.S. Army for providing the
generous opportunity to further my education and making this investment in me. Within the
U.S. Army, I would like to individually thank LTC Matthew Hopper, MAJ (Ret) Russell
Fenton, MAJ Mitchell Hockenbury, MAJ Bradley Denisar, and LTC Jeremy Bartel for their
ABSTRACT
The quantification of risk has received a great deal of attention in recently published
literature, and there is an opportunity for the DoD to take advantage of what information is
available to improve its current risk assessment and management processes. The critical
elements absent in the current process are the objective assessment of likelihood as part of the
whole risk scenario and a visual representation or acknowledgement of uncertainty. The
proposed framework synthesizes selected elements of various theories and axiomatic
approaches in order to: (1) simultaneously examine multiple
objectives of the organization, (2) limit bias and subjectivity during the assessment process
by converting subjective risk contributors into quantitative values using tools that measure
the attack surface and adversarial effort, (3) present likelihood and impact as real-time
objective variables that reflect the state of the organization and are grounded on sound
scientific and mathematical principles, (4) communicate risk across all tiers of the organization
(strategic, operational, and tactical) with maximum transparency, (5) achieve greater
representation of the real scenario and strive to model future scenarios, (6) adapt to the
preferred granularity, dimensions, and discovery of the decision maker, and (7) improve the
decision maker’s ability to select the optimal alternative by reducing the decision to
rational logic. The proposed solution is what I term "Risk Management Framework 2.0", and
the expected results of this modernized framework are reduced complexity, improved
optimization, and more effective management of risk within the organization. This study also
strives to improve transparency and cross-level communication, and to keep members
operating within the bounds of acceptable risk.
CHAPTER 1. INTRODUCTION
There is a vital need for monumental change in how we communicate, assess, and
model risk within the federal government. U.S. Army Defensive Cyberspace Operations
Capability Developer and retired Army officer, Russell Fenton, endorses this argument by
stating - “the current risk framework, as developed by the National Institute of Standards and
Technology (NIST), is outdated and no longer effective in assessing true risk factoring
threats, vulnerabilities, and other variables in a way that decision-makers can confidently
understand the environment and envision how certain choices have the potential to impact
future operations.” [R. Fenton, personal communication, 2016] This study aims to interpret
the basic contributors of risk as quantifiable variables and present an alternative framework
as to how the federal government can more effectively govern their risk management
programs while improving upon the quality of risk considerations in the future. My proposed
solution is termed “Risk Management Framework 2.0” to indicate that this proposal shares
many of the underpinnings of the current federal risk management framework, which it
extends by assessing risk in the cyberspace domain using quantitative variables; a strategy for
the post-modern world.
The evaluation of risk is, and will continue to be, essential to our survival as a
species. Without assessing risk, the human species would have never progressed to becoming
the near-masters of our environment that we are today. There are only assumptions as to
when our human ancestors first assessed risk, but conceivably this primitive assessment
attempted to compare the consequences of remaining in the trees versus the benefits of
climbing down to explore the surroundings for resources. Since this postulated moment,
human strategies for determining and assessing risks have evolved tremendously. Recorded
history indicates that our later ancestors relied on mystics, oracles, or religion for guidance
when confronted with a decision between multiple scenarios or options. Tools, such as bones
or dice, were even devised to assist decision-making when neither mysticism nor religion
could provide assistance. The word “probability” originates from the Latin word probabilis,
which means “worthy of approval”. “Worthiness” in Roman times was judged by the
outcome of a roll of dice or knucklebones. Probability was eventually understood in the 17th
Century courts of Europe as a measure of proof in a legal case. The meaning further evolved
into “a measure of the weight of empirical evidence as to the chance of a particular outcome”.
[32] Due to the statistical and inferential powers of probability, recent history has seen an
unprecedented growth and an insatiable determination to control our futures. This explosion
becomes evident simply by observing the increasing frequency of risk discussions across
literature. A word search using Google’s Ngram Viewer [33] illustrates this upsurge by
querying the use of the word “risk” in literature versus use of “danger” from 1800-2000. The
former rises sharply, while the latter’s usage has seen a gradual decline over the 20th Century.
(See Figures 1
and 2) Risk and danger were two entirely dissimilar words in earlier history, yet they have
become more synonymous with one another (with “risk” even replacing “danger”) as we
approached the mid-twentieth century. The trending of “risk” in literature appears to
correspond with rapid advances in our ability to quantify it, and these advances
assist us in modeling variables that impact the level or likelihood of danger imposed by a
hazard thus allowing us to compare alternative courses of action that might result in less
severe or more gainful outcomes. Powerful algorithms and increasing computing capabilities
have made it possible to perform a variety of analyses to make sense of the data, and as the
prospect of risk quantification became increasingly manifest, organizations have made huge
investments in unearthing any utility that could provide them with an advantage over their
competitors or adversaries.
This quantification of risk has received a great deal of attention from academia over
the past two decades, and there is an opportunity for the Department of Defense (DoD) to
improve upon its current risk assessment and management processes, which have changed
little since the introduction of the traditional
risk matrix in MIL-STD 882 (1993). [34] A proposed framework, successive to the current
RMF, would synthesize selected elements of various theories and visualization techniques in
order to objectively assess likelihood and uncertainty variables, and to reduce the results of
the analysis to a point that decision-makers are able to make a decision chiefly based on
rational logic with the aid of a collaborative Decision Support System (DSS). There are
significant challenges to implementing such a framework within the DoD, but this study
suggests these challenges can be overcome with the introduction of a modernized framework.
As stated in the opening and subsequent paragraphs, the current risk management
approach used by the DoD could greatly benefit from the lessons learned throughout this
study. The current approach creates opportunities for the assessor to inject significant bias
and subjectivity into the analysis, is loosely grounded on sound scientific or mathematical
principles, and fails to present the results to the decision-maker in a tailored form that enables
him or her to make the most qualified decision. (These issues will be discussed further and
solutions provided in Chapters 3 and 4.) These issues combine to significantly hinder our
ability to effectively manage risk. Less informed decision-making, stemming from unsound
analyses and misguided perceptions, has resulted in some of the most severe unintended
consequences in history costing countless lives due to miscalculations of risk by both society
and organizational entities. Scientific and organizational experts prefer that society view the
world as either random or probabilistic to assign accountability, but the world actually
represents something more chaotic. Calculation aside, simply observing risk can cause
slight changes in risk variables that could have dramatically different outcomes and chaotic
distributions as time progresses. A sound risk management program should minimize the
effect observation has on the likelihoods and consequences of assessing risk, and adopt
adaptive procedures and processes that help organizations march forward into a more stable
future. One published maturity model measures an organization’s maturity level across six
different business domains, with “risk approach”
being one of them. [35] (See Table 1) An organization achieves level four (benefit
optimization) when it can “accept and manage business risks based on a Return on
Investment (ROI) model”, and has achieved level five (competitive advantage) when it can
“take calculated risks for a competitive advantage”. Most organizations strive to reach these
highest levels of maturity in their risk management programs, but unfortunately many
programs are inhibited at levels two and three due to subjective or rigid risk management
strategies. The envy of all organizations is those that have been identified as High
Reliability Organizations (HROs). What distinguishes HROs from other organizations is
that they have achieved nearly flawless operations while maintaining high risk and complex
systems. Experts have been unable to clearly determine how HROs achieve these results, but
they assume it is due to their ability to maintain “bureaucratic structures that are hierarchical
and rigid during routine operations, but flat and flexible during times of crisis.” [15, p. 149]
By making optimization a top priority for their risk management programs and mirroring the
practices of HROs, organizations can better position themselves to manage risk effectively.
This chapter is prefaced with an ancient Arabic proverb that categorizes the four types of
mindsets in an organization: the wise, the ignorant, those asleep, and the fools. Any proposed
framework should attempt to make the wise wiser, educate the ignorant, motivate others, and
force the organization to address any uncertainty in the data. This is a lofty goal and littered
with challenges, but there are champions in the cybersecurity discipline that address steps to
correcting our current mindsets. Cybersecurity expert Chris Williams at Leidos Inc. has
offered four axioms for defending against modern threats: (1) assume an intelligent attacker
will eventually defeat all defensive measures, (2)
design defenses to detect and delay attacks so defenders have time to respond, (3) layer
defenses to contain attacks and provide redundancy in protection, and (4) use an active
defense to catch and repel attacks after they start, but before they can succeed. [27] This
study will attempt to build upon these axioms as fundamental objectives in changing the way
risk is socially and individually perceived, constructed, and managed. Most contemporary
strategies oppose these axioms and have led to unsustainable levels of complexity across
both social and organizational risk management programs. New York University Professor of
Sociology, David Garland, has been quoted as saying, “Risk society is our late modern world
spinning out of control”, and followed up by asking the question: “Would we do better to
slow progress and evaluate ourselves?” [14, p. 6] Few would argue that the rise of
neoliberalism, or laissez-faire economic liberalism, has not served as part of the driving force
behind this trend as opportunities are seized by those who have mastered the art and science
of risk taking. Unfortunately, like matter, risk cannot be eliminated, only redistributed.
This was the case in the U.S. Financial Collapse of 2008 where opportunists were making
irresponsible and risky financial decisions, and the consequences of these decisions were
redistributed to many who took no part in these immoral activities, thus causing an erosion of
trust and faith in many foundations.
Perhaps taking a fundamentally different approach could help restore this faith?
The next chapter reviews published literature on the quantification of risk, various theories
that support this study, and NIST guidelines within the current Risk Management Framework
(RMF). The chapter that follows it attempts to demonstrate the need for monumental change
in preparation for the era succeeding Modernism. The final
two chapters will provide a solution to the current RMF, conceptualized in the form of a
Decision Support System, and then articulate a logical and coherent course for
implementation. It is worth supposing that there may be superior strategies for addressing the
problems identified in this thesis; however, any chosen strategy “in the end must link risk to
mission assurance” to realize the ultimate objectives of any organization. [R. Fenton, personal communication, 2016]
“All models are wrong, but some are useful.” ~ George Box, 1987
Organizations that have been able to perfect the art and science of risk management
ultimately become masters within their domains. Oxford mathematician Marcus du Sautoy
said that “predicting the future is the ultimate power.” [28] This concept can be illustrated
through one of the stories told of Christopher Columbus when he landed on the shores of the
Americas in 1504, and needed provisions for his men. He communicated to the natives that if
they did not supply his men these provisions their god would become angry and remove the
Moon. What Columbus had, and the natives did not, was knowledge of the lunar cycle. As
Columbus predicted, those present witnessed a lunar eclipse and the natives had no other
explanation than to trust that their god was in fact mad at them for not coming to his aid. [36]
This single example demonstrates the asymmetrical advantage that can exist between
opponents due to one actor’s ignorance, the influence uncertainty can have on the selected
actions, and how empirical observations can be used to model the future.
The primary objectives of this study are to (1) develop a framework that better meets
the needs and functions for managing risks in the cyber domain, (2) demonstrate that
objectivity increases decision-power and program reliability in the evaluation of risk, and (3)
identify approaches that help organizations determine the root causes of risk while evaluating the
current framework as compared to the proposed framework. Before the problems this thesis
attempts to address can be fully described, certain concepts must be understood first. This
chapter begins by discussing various ways risk is measured and modeled, and how
organizations use this information to manage risk within them. It then progresses into what
roles uncertainty plays in risk management, and how experts have attempted to reduce its
impact on informed decision making. The chapter also reviews several alternative methods
that could have application within the proposed framework. After these methods have been
discussed, we will describe various social theories, risk management approaches, and
abstraction techniques that led to the development of a framework most suited to addressing
the many systemic problems highlighted throughout this study. The chapter will conclude by
reviewing the history of the NIST organization and their guidelines for assessing and
managing risk within their recommended framework, then provides a preliminary survey of
Risk assessments generally attempt to answer three fundamental questions: (1) What
can go wrong? (2) What is the likelihood that it would go wrong? (3) What are the
consequences? [9] These questions are answered by identifying and prioritizing risk factors,
assigning values to those variables that contribute to the likelihood of an event occurring, and
determining how those events impact mission or business operations. This study feels that all
three questions can be mathematically answered, and then judged through subjective means.
Mathematically answering these questions is critical because the results can be validated in
most cases. Though this effort is especially challenging to those responsible for producing
these answers, their efforts can be alleviated by first understanding what is exactly at risk and
how that information translates to organizational risk. This investigation leads the
organization to developing its own understanding of risk, and how it can most appropriately be managed.
2.1.1 Risk.
One of the most significant challenges in any risk management program lies in the
measurement of risk variables, and combining those variables in such a way that risk
potential can be coherently modeled. Risk experts have similar definitions of risk with minor
differences, but the term can be broadly defined as the measured certainty of an outcome due
to an event or activity, where these outcomes result from sequences of cause and effect. The
recognition of “causes” and their “effects” is critical because identifying those relationships
enables organizations to truly engage risk in manners aligned with this thesis. Organizations
use risk measurements to help guide decisions and develop courses of action that minimize
the organization’s risk, and these risk factors can be prioritized as first and second order
factors. [6] First order factors are those that dominate all others, and serve as the primary
driving force in the assessment, such as funding, personnel, or maturity. Second order
factors are those intrinsic and extrinsic attributes that are derived from the dominating
factors, such as quality (derived from funding), intellectual capital (derived from personnel),
or lifecycle (derived from maturity). The list of risk factors can be infinite, but it is critical to
have a filtering mechanism that differentiates between first-order and second-order factors.
Identifying risk factors also depends on the granularity and scope of the decision. Due to the
broad range of risk factors and the challenge of prioritizing them, this study suggests that risk
factors should focus more on addressing the consequential impacts of a threat rather than the
threat event itself. For example, an organization should address the risk of degraded
communications vs. a Denial of Service (DoS) event because that second-order risk factor
speaks to the business impact while a DoS lacks that specificity as to what is considered “true
risk” to the organization. This objective of identifying true risk factors can be achieved by
In studying the relationships between risk and its impact to the mission of the
and/or uncertainty are gained. [9] The central relationship in cyberspace is between the
defender and the adversary, and that relationship is most evident in the outcomes (rewards or
rather one-to-many. This is a difficult task, and should mainly focus on objective
imperfect, but they can still be statistically significant in the form of probabilities. [10]
Probabilities are heavily dependent on the relational perceptions between the subject and
object [11], and even known probabilities represent a distinct relationship between the
assessor and the event [23]. Establishing relationships between human activities and
consequences is a complex challenge [15], but this challenge can be reduced by driving down
to the primary roots of those relationships. Regardless of what relationships are revealed,
describing the concept of risk to an audience is articulating the relationship between realities.
An organization must then estimate its amount of risk exposure in order to allocate resources
to reducing this exposure to a cost-
effective level. One recommendation is given by Jack V. Michaels, Ph.D., where he gives the
following rule of thumb: “invest one dollar in risk avoidance for every four dollars of risk
exposure”. [6, p. 1] Even though this general rule has some utility, the challenge still remains
in quantifying the amount of risk exposure an organization faces and how to appropriately
manage perceptions of decision-makers so that investment dollars are optimally spent. The
first step in overcoming this challenge is to develop the most accurate estimation of risk, and
then model the data in such a way that best communicates the amount of exposure.
There are primarily two approaches to describing risk, and they are through
qualitative or quantitative analysis. [11] Qualitative analysis is subjective by nature and uses
a person’s expert opinion to estimate the amounts of risk, while quantitative analysis is based
on measurable, numerical data. Qualitative inputs are generated by enlisting the expertise of
various professionals to assign a qualitative
category to a relationship between variables that expresses their highly regarded and trusted
opinions. On the other hand, there are a variety of ways to generate quantitative inputs. For
example, an assessor could extract statistical relevance from historical records, measure the
attack surface of a system, or assign costs to variables that directly link to an individual’s
utilities. In either approach, both suppose that overall risk equals the likelihood of an adverse
event combined with the magnitude of impact
resulting from that event. As a general rule, all variables should be assessed holistically to
describe the totality of a scenario that a system or organization faces in the event of an
outcome.
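The short sketch below is purely illustrative and is not part of the thesis; it shows one common convention for combining the two fundamental variables, multiplying a likelihood estimate by an impact estimate to obtain an expected exposure. The scenario names and numbers are hypothetical, and Python is used for readability.

    # Illustrative sketch only: combine a likelihood estimate with an impact
    # estimate to score competing risk scenarios. Values are hypothetical.
    scenarios = {
        "degraded communications": {"likelihood": 0.30, "impact_cost": 250_000},
        "data exfiltration":       {"likelihood": 0.05, "impact_cost": 4_000_000},
    }

    for name, s in scenarios.items():
        exposure = s["likelihood"] * s["impact_cost"]   # expected cost of the scenario
        print(f"{name}: expected exposure = ${exposure:,.0f}")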
Author Yacov Haimes maintains the notion that measuring risk is an empirical,
quantitative, and scientific activity while determining the acceptability of risk is a qualitative
and political activity [9]. Risk analysis is considered scientific in that scientific principles are
applied when determining the likelihood of an event and its severity of harm. These
principles include seeking answers, experimenting and testing, and forming and modifying
hypotheses. A discovery process that solely relies on qualitative inputs cannot adhere to these
well-established principles.
As identified by NIST in [3], both approaches present their own set of challenges.
The major challenge presented by qualitative analysis is in the opinions and experiences of
the assessor, and how that individual chooses to present the results of the analysis. Two
challenges presented by quantitative analysis are the interpretation of the
results and the potential for substantial uncertainty in the values. [3] Yacov Haimes adds an
additional challenge to quantitative assessments in that all sources of risk against a system
must be evaluated. [9] The list of sources could be endless, but this statement by Haimes
could be interpreted as a holistic collection of all relevant information using the chosen data
collection technique. Haimes suggests in [9] that these challenges can be reduced through
improved data and analytical techniques with the intent of minimizing measurement errors or
uncertainty. Fortunately there are tools and techniques in development that will be able to
effectively collect all the information for a data set in the near future. One technique with
this potential uses an algorithm to measure the attack surface of a particular system. [8] Once
all the selected
variables have been measured using comparable techniques, then the assessor can aggregate
the data into a model that provides the maximum amount of insight to the decision problem.
and/or information technology within an organization. [2] The existing relationships between
costs and resources support the idea that it must be theoretically possible to quantify risk
because costs can be quantified, and the goal of reducing costs to the organization by
avoiding negative impacts is the primary driving force behind performing risk assessments.
Any risk model under development must function by starting from a threshold of
measure of potentially detrimental cost(s) to the organization and determines how each
variable contributes to the potential gains or losses of a risk management decision. [5]
Furthermore, using the word “cost” synonymously with “risk” aids the discussion and helps
The essence of assessing risk is to reduce risk exposure to a cost-effective level [6],
therefore it is critical that discretion is taken in selecting those variables that will perform to
the maximum expectations of the decision-maker. The selection of risk variables depends on
the structure of the optimization problem, and the mathematical properties of the decision
primarily determine which variables will fit in the model. The sources of variable
information are from historical data, theoretical considerations, and expert analysis. [21] The
two variables fundamental to all risk assessments are likelihood and impact, and there are
supporting variables to enable the assessor to estimate these fundamental values, such as time,
cost, and a resource’s functional worth. Deciding which variables support the optimization
problem and
mathematical properties of the decision ultimately depends on the variables’ potential for
abstraction into some sort of business intelligence that can be used by the decision-making
hierarchy. [9]
The primary variables evaluated throughout this study are those that demonstrate the
greatest support in evaluating risks in the cyberspace domain, and they are: vulnerability,
likelihood, impact, and uncertainty. Vulnerability has the greatest influence on how an
organization should invest its resources [5], and represents an organization’s exposure/
sensitivity to danger or some measure of resilience. [14] Crime analysis shows that victims
aren’t targeted at random; they are targeted based upon their perceived vulnerability. [28] This
adversarial perception can be shaped by many factors, and will change over time. Russell
Fenton agrees that vulnerability assessments should include elements of time and some
quantifiable value. [R. Fenton, personal communication, 2016] These quantifiable values can
be derived from measuring the attack surface, compiling an adversary benefit structure for
completing some objective, and showing how these two variables morph over time. The
Moving Target Defense (MTD) strategy discussed later includes a measurement
called the Damage-Effort Ratio (DER) that functions in a similar way. [8] (Both MTD and DER
assessed vulnerability of a targeted system and the adversary’s reward of attacking the target,
instead represent a degree of certainty in the data since likelihood has been applied as a state
variable in the proposed risk model. Though likelihoods are often represented in probabilities
in a traditional sense, there are significant problems with assigning probabilities; for
example, consider the position of statistician Bruno de Finetti, who said that “probability does
not exist” because probabilities primarily depend on the relationship between the object and
subject, and they are all based on subjective judgment. Determinism is a
fundamental concept of this study, and a model cannot be deterministic and probabilistic at the same time,
therefore probability must not be considered objectively or represent likelihood within this
framework.
Impact is the second fundamental variable used to model risk, and this study defines
this variable as a measure of the effect an outcome can have on business or mission processes
because truly “knowing the impact enhances Impact Analysis” as stated by Haimes. [9, p.
346] Impact isn’t one single measurement, but a combination of one-to-many relationships
between threats and impacts [6] that represent the state of the organization following an
event. There are far more hazards than there are impacts, and an assessor will never have
reasonable time to develop a comprehensive list of all known hazards. Therefore it is optimal
to represent impact as a consequential cost against the organization if outcomes (A) or (B)
were to occur. Michaels suggests two formulas that can achieve this goal, Risk Time Estimate
(RTE) and Risk Cost Estimates (RCE). [6] (These calculations will be explained in Section
2.4.1.) RCE/RTE (or impact) combined with a measure of the organization’s susceptibility
(i.e. likelihood) provides the most functional and dynamic representation of the true risk
scenario. Uncertainty serves as the greatest challenge in estimating risk. Probability can serve as a measure of
uncertainty in this framework, but there are significant challenges within determining
probabilities as previously mentioned. Due to the variabilities that exist in any measure of
certainty, this study hopes to achieve separation between the likelihood and uncertainty
variables by presenting both as independent contributors to the risk model. There are a
variety of ways to model uncertainty in the context of this study, such as: Uncertainty
extensive, and any universally accepted model should provide the option of selecting
different methods that are optimally suited to the mathematical properties of the decision
problem. Before proceeding, it is important to note the difference between uncertainty and
deep uncertainties. Deep uncertainties are those risk components that will never be able to be
described scientifically [22], and all remaining uncertainties are those variables that have
measurable limits to their precisions. Deep uncertainties are isolated earlier in the proposed
framework through the Info-Gap Procedure, and all remaining uncertainties are quantified by
establishing statistical relevance in the data through the variety of means mentioned above.
Modeling risk is crucial in making sense of the data [16], and attempts to uncover the
logical relationship between hypothesis and evidence (or data distributions) by isolating a
causal agent from intervening variables. [21] Modeling also enables us to look into the past
and predict the future as shown in the Christopher Columbus example. [28] The process for
developing a model is fairly standard, and begins with determining the specific needs for
modeling a scenario. Once the need has been firmly established, the problem is formulated
within the higher tiers of the organization. After these needs have been well established and
agreed upon, subordinate tiers within the organization begin model construction by collecting
and analyzing data pertinent to the problem. Following model construction, the model is
validated and run, and the findings are analyzed to determine if further refinements are
required or if the findings will be implemented. [6] There are a variety of modeling
techniques and practices to build upon in order to develop a model that fits each decision
problem. Jack V. Michaels provides three broad categories of models in [6]: network,
decision, and cost/risk analysis models. Additionally, there are several ways to model the
presentation of the data: iconic, symbolic, and analog representations. This study is primarily
focused on analog representations, using one of the following mathematical model types for
risk simulations: linear or non-linear, deterministic or probabilistic, static, dynamic,
distributed, or lumped. This study concludes that the ideal
model combination for assessing and communicating risk within cyberspace is a cost/risk
analysis model with analog representations that supports non-linear, deterministic, and
dynamic mathematical techniques for variable formulation. This study suggests analog
representations are best suited for the proposed model since they allow greater
reproducibility, and all risk decisions within the DoD are associated with some sort of
organizational cost metric. Non-linear simulation techniques show the greatest potential for
the proposed model due to an object’s random behavior patterns. The model should be
dynamic and dependent upon changes within the environment. Once the optimal solution is
discovered, then that solution can be implemented. [9] Other options that may be taken into
account are discussed in [9].
Observation itself influences the behaviors, actions, and perceptions of subjects and objects
within the environment. [10] The usefulness of a model
is dependent on how closely it represents the system [9], and simply observing objects within
cyberspace can have an indirect impact on their behaviors and actions. This effect is referred
to as the Hawthorne Effect [10] or the Beijing Butterfly Effect [11]. Deploying an intrusion
detection system (IDS) is an example of these effects in a situation where the IDS becomes
detectable by an intelligent adversary. Any discovery by either the defender or the adversary
of the other’s presence could drastically alter their tactics as the Exchange Principle applies
to both offensive and defensive actors. Additionally, closely monitoring the state of the
organization can affect the decision process. The Markov Decision Process is one such
framework that implies a subject’s observation can affect his or her decisions when an agent
(or subject) makes assumptions on the next state based on the observed state of things, and
ultimately receives a reward for selecting that action. In most cases, the reward is a
deterministic function of both the observed state of the organization and the outcomes
associated with certain actions. [19] The Expected Utility Function also shows that
observation has an impact on our decision processes through the formula: {EU(a|o) =
Σ P(s_i|a,o) U(s_i)}. The formula states that a decision-maker’s expected utility (EU) for a
particular action-observation combination (a|o) equals the sum of probabilities for the states
of the world as observed (o) while taking action (a), weighted by our preferences over the
space of outcomes, which are represented by U(s_i). [19] This accepted formula demonstrates that our
personal utilities are encoded by our observation of the state of things over time. One simple
example that illustrates this concept is the future value of money over time based on previous
observations of rising inflation. All of the negative effects of observation can be limited by
taking objective measurements of risk variables and removing threat assessments from the risk
model.
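A minimal sketch of the expected utility calculation just described is shown below. It is not part of the thesis: the states, probabilities, and utilities are hypothetical, and the scenario (an IDS alert followed by a decision to isolate a subnet) is only an illustration of the formula EU(a|o) = Σ P(s_i|a,o) U(s_i).

    # Minimal sketch of EU(a|o) = sum_i P(s_i|a,o) * U(s_i); all values hypothetical.
    def expected_utility(p_states, utilities):
        # p_states: P(s_i | a, o) for each state s_i; utilities: U(s_i)
        return sum(p * utilities[s] for s, p in p_states.items())

    # Observation o: an IDS alert; action a: isolate the affected subnet.
    p_states = {"contained": 0.7, "spreading": 0.2, "false alarm": 0.1}
    utility  = {"contained": 10.0, "spreading": -50.0, "false alarm": -2.0}

    print(expected_utility(p_states, utility))   # 0.7*10 + 0.2*(-50) + 0.1*(-2) = -3.2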
that organizations encounter” [6, p. 1], and is continuous over the lifecycle of the project [9].
Organizations perform this function through the careful selection of management techniques,
strategies, and standard practices or policies. Challenges organizations often face in their risk
management programs include adapting their behaviors as risks morph over time. Due to these
requirements and challenges, solid risk
management programs must start with the full commitment of the organization’s top
management. Where top management often fails is when it focuses on managing people
versus managing the system. [6] In addition to appropriately managing the system, decision-
makers must understand the value of the risk assessment, be made aware of the effects of
their decisions, ensure availability and credibility of the analysis, and take into account any
biases that exist in their organizations. [9] Author and academic, Clayton Christensen, said
that “solving challenges in life requires a deep understanding of what causes what to happen”
[18, p. 16], and this understanding enables the decision-maker to select the best strategies and
apply the best theories to implement a framework that provides maximum insight to the
decision problem.
Risk frameworks should manage from the top, and assess from the bottom. [6] The
technical side of risk management should answer the “who, when, how”, while the business
side should focus on addressing the “who and where”. [6] Risk frameworks must be capable
of aggregating and dividing processes because risks in each subsystem within the
organization ultimately determines the risk of the overall system being assessed. [9] The
program (or framework) must also engage the entire organization and ensure maximum
transparency for maximum success. [6, 9] This objective is made possible by allowing every
member an opportunity to make a contribution to the decision process which gives them the
sense of inclusion in their organization, and also allows members to increase their
competence through the individual process of making their own judgments. [10] The authors
of [7] suggest that the “People” domain is most vital among the STOPE (Strategy,
Technology, Organization, People, and Environment) domains, and the needs of the
decision-maker are the most important of all people in the organization; of most interest to
the decision-maker is how the decision variables link to his or her utilities.
A goal for risk management is reducing the decision to logic, and selecting the
alternative with the highest utility. [15] An understanding of the decision-maker’s utilities
enables the organization to effectively quantify potential gains and losses [10], however both
still remain subject to interpretation. For this reason, the decision must be defined in the
decision-maker’s terms as appropriate [15]. The challenges with leveraging utilities in risk
management are due to mobility, inherent subjective properties, and the uniqueness of
possible utility forms. One example of utility mobilization occurs when the willingness to
gamble decreases as the odds increase. Another example is reduced satisfaction experienced
from a second glass of water versus the first glass. Both of these examples illustrate the Law
of Diminishing Marginal Utility [15], but utilities can also increase as well. There are
basically two groups of utilities, and those are decision and experience utility. Decision
Utility is a computation about the expected future utility that is predicted, while Experienced
Utility is a utility chosen when an outcome has been previously experienced. [11] In the
absence of probabilities (which is what this study hopes to limit), the Subjective Expected
Utility Function may be used as developed by L. J. Savage. [15] Assuming the decision-
maker adheres to axioms of rationality, this function combines a personal utility function with a personal (subjective) probability distribution over outcomes.
Risk management boils down to people and processes, and how they all support the
decision-maker. An assessment may engage a diverse body of people, but the decision still
rests with the top decision-maker at the conclusion of a risk assessment. For that reason, the
problem must be described in such a way that it enables
selecting courses of action that are optimally suited to the aims of the decision-maker.
Unfortunately there will always be some knowledge gaps throughout the decision process,
but gaining an enhanced understanding of uncertainty and fully acknowledging its existence
enables organizations to overcome many of the posed limitations stemming from these gaps.
People naturally form assumptions and perceptions that fill voids created by uncertainty.
Yacov Haimes says that “uncertainty colors the decision-making process”, and personally
feels that the need for assessing risk increases with less knowledge of a system. [9, p. 27]
Because of this significance,
uncertainty lies at the heart of risk management. Haimes defines uncertainty as “the inability
to determine the true state of a system where no reasonable probabilities can be assigned to
outcomes” [9, p. 227], while risk expert Sir David Spiegelhalter simply equates risk with
uncertainty. [11] In the context of this study, uncertainty occurs when variables cannot be
described with concrete objectivity or fluctuations in the variables must be explained through
statistical inferences. [9] Unfortunately for all organizations, uncertainty is inherent in the
risk management process, and it should be expressed quantitatively when optimal. [6] Despite
the limitations, uncertainty can also create an advantage for the cyber-defenders, as described
by the Moving Target Defense strategy, which uses uncertainty to complicate and deter
attacks. [8] Before one can harness the potential benefits of uncertainty, a deeper and broader
understanding of it is required.
The Uncertainty Principle states that there are limits to precision, therefore
uncertainty will always exist. [14] Applying this principle is fundamental in risk management
because risk models can create the illusion that probabilities are highly reliable
measurements. Probabilities can be deceptive when accepted as fact, but there are ways to
account for this. Uncertainty stems from a variety of sources, and John Adams has identified
four primary sources in [10]:
threats, and initial conditions. Sir David Spiegelhalter says that there are “deeper
uncertainties” of which the organization should be made aware. These deeper uncertainties
are those that science may never be able to solve, and Spiegelhalter proposes that quantitative
and qualitative measurements must be brought together. This synthesis of objective and
subjective measurements would acknowledge the limitations of math and engage the social
sciences to critique. [22] Frank Knight proposes that the use of the word “uncertainty” be
reserved for what cannot be measured quantitatively. When organizations decide to tackle
uncertainty, they must allocate resources to reducing uncertainty to the maximum extent practical.
Yacov Haimes prefaced his discussion on uncertainty in [9] by quoting Alvin Toffler:
“it is better to have a general and an incomplete map, subject to revision and correction, than
to have no map at all.” [p. 4] Fortunately improved data and analytical techniques can help
manage uncertainty, and modeling intricate details among subsystems improves the process.
As discussed in Section 2.1, a risk model could apply the Principle of Insufficient Reason as a
means to substitute for unknown probabilities. This
principle was developed by early mathematicians Jacob Bernoulli and Pierre Simon Laplace,
and works under the assumption that no probabilities are known. To overcome this gap, each
event is simply assigned a probability equal to 1 divided by the number of possible outcomes
for that single event. [17] There are additional substitutions for probability, such as an analysis
of the measurable attributes of a system. Uncertainty can also be displaced by decomposing a
complex system into smaller subsystems, which improves the
understanding of the system and corresponding sources of risk or uncertainty. [9] If top
management insists on including probabilities, Bayes’ Theorem is one option with potential
application within the proposed framework’s risk model. The theorem states that the
probability that an event A occurs, given that condition B is true, equals the probability of A
multiplied by the probability of observing B given that A is true, divided by the probability of
B; {P(A|B) =
P(A) P(B|A) / P(B)}. As an example, suppose the probability of an email server failing (A)
due to a virus introduction (B) equals the probability of an email server failing at some point
(50%) multiplied by the probability that a virus could cause an email server to fail (20%).
This number is then divided by the probability of a virus introduction to the system (30%)
which would result in a 33% probability of the scenario occurring (reproduced in the short
sketch below). Confidence Intervals and similar statistical techniques provide additional ways
to express the uncertainty in such estimates.
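The sketch below simply mirrors the numbers from the worked example above and, for contrast, the 1/n assignment of the Principle of Insufficient Reason mentioned earlier; it is illustrative only and not part of the thesis.

    # Sketch of the Bayes' Theorem example above, using the numbers from the text.
    p_a         = 0.50   # P(A): the email server fails at some point
    p_b_given_a = 0.20   # P(B|A): a virus could cause the server to fail
    p_b         = 0.30   # P(B): a virus is introduced to the system

    p_a_given_b = p_a * p_b_given_a / p_b    # P(A|B) = P(A) P(B|A) / P(B)
    print(round(p_a_given_b, 2))             # 0.33 -> roughly a 33% probability

    # Related fallback (Principle of Insufficient Reason, Section 2.1): with no
    # other information, assign each of n possible outcomes a probability of 1/n.
    n_outcomes = 4
    print(1 / n_outcomes)                    # 0.25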
Despite the challenges compounded by including uncertainty within the model, this
infusion actually improves the effectiveness of the model. Ultimately, the best way to reduce
uncertainty is by ensuring the model representing risk is as accurate to the real situation as
possible. Meeting this challenge also entails managing the expectations, perceptions, and fear of blame within
the organization. Carlo C. Jaeger expressed doubts in [15] that one particular structure can
manage these issues as organizations grapple with uncertainty and attempt to model an
uncertain future. Frameworks help organizations guide the risk management process and keep
individuals on task. A good
framework enables decision-makers to frame the problem to be analyzed, engages the entire
organization in the process, and produces results that enable decision-makers to develop the
best strategy for tackling the problem. Michaels says in [6] that it is impossible to entirely
evaluate risk deterministically, but that programs should put determinism to its limits.
Determinism is a field of study that associates causes with effects, and suggests that every
single outcome can be traced by analyzing sequences of events. This study focuses on
assessing risk as a deterministic problem, and the following frameworks (or methodologies) support this approach.
A framework is not necessarily a formula or a model style, but a unified attack vector
that represents the mindset of members that apply and operate within the framework. The
following frameworks and methodologies each contribute elements of the “unified
attack vector” most suitable to managing risk in the cyberspace domain: Pre-Mortem
Analysis, the Technical Risk Management framework, the Gordon-Loeb Model, the Monte
Carlo Method, the Risk Filtering, Ranking, and Management framework, and methodologies
supporting the Moving Target Defense (MTD) strategy.
Pre-Mortem Analysis is one such methodology that is forward looking and seeks to
identify vulnerabilities in a plan by imagining a project has failed before it starts. This
process begins by gathering the team that will participate in the analysis. The team then
decides on a catastrophic outcome to analyze. Once this outcome has been selected, team
members individually generate reasons for failure and consolidate their lists. Simulations are
then run, and the team revisits the original plan for optimization. Subsequent reviews are
conducted as the plan matures. The Technical Risk Management framework, in contrast,
begins by identifying potential impacts to the
subcomponents of a complex system. Those impacts with the greatest magnitude are then
selected for analysis. Then a team analyzes those impacts to determine all the possible events
or activities that could lead to the outcome. Once these possibilities are generated, then a
likelihood variable is assigned to each one and prescribed a corrective action. The next step
involves calculating the Risk Time and Cost Estimates for each outcome. The Risk Time and
Cost Estimates are derived by adding the baseline estimates with a combination of
corrective-action time/cost and the appropriate risk determinate factors. Baselines are
estimates of time or cost to complete a task in the absence of hazards, and Risk Determinate
Factors (RDF) are quantified measures that serve as estimates of risk exposure. RDFs
are computed by dividing previous risk cost/time estimates by the actual costs/times, and the
result is a value between zero and one if costs or times were underestimated. These estimates are
then rank-ordered and the team decides what controls could prevent these outcomes from
occurring or which could reduce the outcomes. The last step involves packaging the analysis
in a form that presents the advantages of selecting one course of action over another to upper
management. [6]
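One plausible reading of the Risk Cost Estimate description above is "hazard-free baseline plus corrective-action cost weighted by a Risk Determinate Factor." The sketch below follows that reading rather than Michaels' exact formulas from [6], and every figure in it is hypothetical.

    # One plausible reading of the Risk Cost Estimate (RCE) description above,
    # not Michaels' exact formulas from [6]. All figures are hypothetical.
    def risk_determinate_factor(previous_estimate, actual):
        # RDF: previous estimate divided by actual cost/time; below 1 when the
        # earlier estimate was too low (i.e., the cost was underestimated).
        return previous_estimate / actual

    def risk_cost_estimate(baseline_cost, corrective_action_cost, rdf):
        # Hazard-free baseline plus corrective-action cost weighted by the RDF.
        return baseline_cost + corrective_action_cost * rdf

    rdf = risk_determinate_factor(previous_estimate=80_000, actual=100_000)   # 0.8
    rce = risk_cost_estimate(baseline_cost=500_000, corrective_action_cost=50_000, rdf=rdf)
    print(rce)   # 540000.0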
The Gordon-Loeb Model tries to determine the optimal amount to invest to protect a
given set of information. Three parameters characterize the information set (S), and they are:
(L) the loss resulting from a breach, (t) the likelihood of the event occurring, and (v) the
vulnerability, or probability that an attack results in a
negative impact. Both (t) and (v) are values representing a probability residing
between 0 and 1. The monetary investment in security needed to protect the resource is
denoted by (z). There are three assumptions with this model: (1) any information set that is
completely invulnerable requires zero investment, (2) zero investment in security makes the
information set inherently vulnerable to an event or activity, and (3) increasing investments
in security increases security at a decreasing rate. The Expected Net Benefits from an
Investment in Information Security are given by {ENBIS(z) = [v - S(z, v)] L - z}, where
S(z, v) is the probability of a breach after investing z. Gordon and Loeb suggest that
organizations should invest in security only up until the point that marginal benefits equal
marginal costs, and that most risk-neutral organizations will need to invest approximately
37% of the expected loss of a breach into their information security programs. [5]
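To make the investment logic concrete, the sketch below evaluates ENBIS(z) over a grid of candidate investments. The breach probability function S(z, v) = v / (a*z + 1)**b is one of the function classes discussed in the Gordon-Loeb literature, and every parameter value here is hypothetical; this is an illustration, not the thesis's own model.

    # Sketch of the ENBIS calculation above; parameters are hypothetical.
    def breach_probability(z, v, a=1e-5, b=1.0):
        # Probability of a breach after investing z; decreases as z grows.
        return v / (a * z + 1) ** b

    def enbis(z, v, L):
        # ENBIS(z) = [v - S(z, v)] * L - z
        return (v - breach_probability(z, v)) * L - z

    v, L = 0.6, 1_000_000                       # vulnerability and loss from a breach
    candidates = range(0, 500_001, 1_000)       # candidate investment levels
    z_best = max(candidates, key=lambda z: enbis(z, v, L))
    print(z_best, round(enbis(z_best, v, L)))   # optimal z stays well below v*L
    # Gordon and Loeb's result: optimal z never exceeds ~37% (1/e) of the expected loss.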
The Monte Carlo Method (MCM) is one methodology that proves useful when all
others are impossible to use. The basic idea behind MCM is to encode random numbers with
some governing probability distribution. The process begins by tabulating
variates of interest, which could be a set of metrics used to describe one domain within the
problem; for example, server off-line = time or frequency for individual previous events. Then
the assessor derives a cumulative distribution score based on each metric within the set, with
each range running from the last metric input through the next. For example, suppose a server
went off-line on two occasions, 60s and 80s respectively, which would compute a range for
the second metric as 60 - 79. The next step requires the assessor to generate pseudorandom
numbers into a table and to select one random number that falls within the previous range that
associates with an actual measurement from the list of variates. An example would be an
association of
the event with a down time of 80s with a randomly generated number of 77. Finally the
assessor would use the list of random numbers to perform statistical analysis on the complete
list of associated random numbers, with mean and standard deviation being the most popular.
[6]
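A stripped-down version of this idea is sketched below. The call to random.choice stands in for the text's table of ranges and pseudorandom numbers, and the observed downtimes are hypothetical; the point is simply that random draws mapped onto empirical observations can be summarized statistically.

    # Stripped-down sketch of the Monte Carlo idea above: map pseudorandom draws
    # onto an empirical distribution of observed downtimes, then summarize.
    import random
    import statistics

    observed_downtimes = [60, 80, 80, 120, 45]          # past outages, seconds (hypothetical)

    random.seed(42)                                     # reproducible draws
    samples = [random.choice(observed_downtimes) for _ in range(10_000)]

    print(round(statistics.mean(samples), 1), round(statistics.stdev(samples), 1))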
Another supporting framework is Risk Filtering, Ranking, and Management (RFRM), which was developed for NASA with application towards the Space Shuttle
program in the 1990s. RFRM is more philosophical than mechanical, forces a decision
problem to focus on the actual contributors to risk, and comprises eight broad phases. In
Phase I, a Hierarchical Holographic Model (HHM) is developed that organizes and presents
a complete set of system risk categories or requirements for success. Increasing the levels
within this structure improves the level of detail for analysis. Phase II filters risk scenarios or
requirements according to the preferences of the decision-maker, and is achieved through the
experiences and knowledge of that collective or individual. Phase III uses a traditional risk
matrix that compares likelihood and impact to provide a severity index for sub-scenarios to
the primary scenarios developed through the HHM. Phase IV reflects on the ability of each
scenario developed in Phase III to defeat any one of the three defensive properties of a
system, which are: resilience, robustness, and redundancy. Those scenarios that are
determined as able to defeat the system are then further evaluated against established criteria
that relate to those abilities. For example, a virus could impact the robustness of a system and
have the following characteristics: undetectable, cascading effects, or a high persistence. This
scenario would be one of the sub-scenarios falling under the degraded operations risk
scenario developed in Phase I. To complete Phase V, Bayes’ Theorem is used to quantify the
likelihood of each scenario based on available evidence, and is especially useful in modeling
when there are many sources of uncertainty. Phase V ends by filtering out those scenarios
that linger above established thresholds. Phase VI is the risk management phase where
assessors ask the questions “what can be done” and “what options are available” to defeat
these filtered scenarios. Cost is a major factor in this phase and guides much of the decision-
making, and trade-off analysis is conducted to evaluate the various options. This phase is
also where selections are made on which options provide the maximum benefits. Phase VII
determines the actual benefits of the options selected in Phase VI, and looks to determine if
there are any relationships or interdependencies between scenarios discarded during the
filtering process or whether policy options will be effective. The final phase is Phase VIII,
operational feedback; the information collected during this phase will help guide
decision-making in the future. [9]
The final supportive methodology this study mentions in detail is computation of the
Damage-Effort Ratio (DER) to determine the likelihood of a cyber-attack; it was suggested
as part of a Moving Target Defense (MTD) strategy, which was funded by the Army
Research Office and developed in 2012. [8] The entire strategy covered under MTD is
beyond the scope of this study, and this review will limit itself to the DER component of
MTD which strives to quantify likelihoods as weights represented by a system’s DER. DER
is a combination of the damage an adversary could inflict on a system (or reward) and the
amount of effort that would be required by the adversary in order to be successful. The
simplified theory behind this approach is that the likelihood of an attack against a system is
relative to the system’s vulnerabilities and the reward gained by the adversary, and is
basically a cost-benefit analysis performed on behalf of the adversary. This approach is only
This assessment begins by measuring the number of entry/exit points (M), channels used to
connect to a system (C), and untrusted data resources (I) comprising a particular system.
This combined measurement provides a weighted amount of effort an adversary must expend
in order to break the system, and helps compute the attack surface. Then each resource is
assigned attributes ranging from method privilege (m), to access rights (a), or to channel
protocols (c). Each of these attributes represents a numerical value such that one parameter
within the set of access rights (a) is greater than another and benefits the adversary; for
example, root > non-root. This list is non-exhaustive, and the selection of attributes is a
priority that sits with management, but these are primarily the attributes used. By evaluating
these attributes against the previous measurements, the total attack surface is a combination
of the total contributions of methods (M), channels (C), and data resources (I) within the
system’s environment. In respect to the computing environment (Es), a system’s (s) attack
surface is a set of the sets of methods, channels, and untrusted data resources that contribute
or can be represented by {MEs, CEs, IEs}. By combining ratios {(M:m + M:a + M:c) + (C:m +
C:a + C:c) + (I:m + I:a + I:c)}, the result equals the total DER for a system. Attack surface
measurements first showed potential to estimate the likelihood of an attack scenario through
research conducted by Michael Howard of Microsoft where he proved that systems operating
at elevated privileges were more likely to be attacked than those operating as general users. [8]
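To illustrate the flavor of the computation, the toy sketch below sums a damage-potential-to-effort ratio over methods, channels, and untrusted data items. The resource names, attribute weights, and scoring scheme are hypothetical simplifications of the DER approach in [8], not the actual metric.

    # Toy sketch of a damage-effort style score; a simplification of [8].
    # Higher "privilege" means more damage potential for the adversary; higher
    # "effort" means more work required (e.g. authentication, trusted channel).
    methods  = [{"name": "admin_api",  "privilege": 5, "effort": 2},
                {"name": "public_api", "privilege": 1, "effort": 1}]
    channels = [{"name": "ssh",  "privilege": 4, "effort": 3},
                {"name": "http", "privilege": 2, "effort": 1}]
    data     = [{"name": "upload_dir", "privilege": 3, "effort": 1}]

    def der(resources):
        # Sum each resource's damage-potential-to-effort ratio.
        return sum(r["privilege"] / r["effort"] for r in resources)

    total_der = der(methods) + der(channels) + der(data)
    print(round(total_der, 2))   # larger values suggest a more attractive target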
This study does not recommend full implementation of any of the above-mentioned
frameworks or methodologies in the proposed framework, but each will contribute various
elements to the final product which will be discussed in Chapter 4. Pre-Mortem Analysis
shows great promise as a broad strategy, and studies have shown that “prospective hindsight
increases the ability to correctly identify reasons for future outcomes by 30%”. [29] However,
Pre-Mortem Analysis is only a
blanket strategy and doesn’t offer specifics for assessing risk at the lower tiers of an
organization. The Technical Risk Management strategy has much potential to help
organizations succeed in their technical risk management programs, and this study feels that
calculating Risk Cost/Time Estimates best quantifies the impacts an organization could face
based on its exposure to risk factors. The Gordon-Loeb Model best supports this study by
justifying to top management that there should be upper and lower limits to how much the
organization should be willing to invest in its security program, and that the benefit curve in
increasing security investments is logarithmic. The Monte Carlo Method shows that it is
possible to derive some sort of probability surrounding an event when no probabilities are
even known. The Risk Filtering, Ranking, and Management framework supports this study’s
belief that the assessment should begin by identifying critical scenarios that could impact
business or mission processes the most, and that Bayes’ Theorem could have application in
developing probabilities for conditional scenarios or when there are significant sources of
uncertainty. Lastly, computing a system's DER has shown potential to serve as a quantifiable
measure of attack likelihood for a particular system. These supporting methods and frameworks cannot solve the problems
on their own, but each contributes the elements just described. The following subchapter aims to complement and support the
selected attributes by reviewing various theories and approaches that explain how these
attributes will enable the proposed framework to function as intended and provide all the capabilities required.
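Because the Gordon-Loeb point above is easy to misread, a small numerical sketch may help. It uses one breach-probability function of the general form associated with the model; the function choice, the alpha and beta parameters, the grid search, and all numbers are illustrative assumptions rather than figures from [5].

```python
# Illustrative sketch of the Gordon-Loeb style trade-off: find the security
# investment z that maximizes expected net benefit ENBIS(z) = [v - S(z)] * L - z.
# The breach-probability function S and every parameter value are assumptions
# chosen for illustration, not figures from the work cited as [5].

v = 0.6          # baseline probability of a breach with no additional investment
L = 1_000_000    # monetary loss if a breach occurs
alpha, beta = 0.00001, 1.0

def S(z: float) -> float:
    """Remaining breach probability after investing z (one commonly used functional form)."""
    return v / (alpha * z + 1) ** beta

def enbis(z: float) -> float:
    """Expected net benefit of an investment of z."""
    return (v - S(z)) * L - z

# Coarse grid search over candidate investment levels.
candidates = range(0, 500_001, 1_000)
best_z = max(candidates, key=enbis)
print(f"Expected loss vL      : {v * L:,.0f}")
print(f"Best investment found : {best_z:,} (ENBIS = {enbis(best_z):,.0f})")
print(f"Fraction of expected loss invested: {best_z / (v * L):.2%}")
```

Under these made-up numbers the optimum lands near a quarter of the expected loss, which is consistent with the study's point that the marginal benefit of additional security spending flattens out rather than growing without bound.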
As Clayton Christensen states, "robust theories are able to explain what has and
will occur across the hierarchy", and sometimes multiple theories are required to provide
insight into the same problem. [18, p. 5] The methodologies, models, and frameworks
mentioned in the previous sections are methods and guidelines for conducting risk
assessments, but how a decision matures and gets reached lies in the theories applied to the
framework. This study supports the research of four primary theories that will explain “how”
and “why” the proposed framework will function as expected, and they are: Culture Theory,
Decision Theory, Rational Action Theory, and Game Theory. By synthesizing these four
theories and using cutting-edge abstraction techniques, this study can achieve its goal of delivering an optimized risk management framework.
Culture Theory.
Culture Theory attempts to describe the application of cultural filters and social
responses in how risk is constructed, perceived, and managed. Cultural filters help us piece
together evidence to support our beliefs, and the two main filters are rewards and costs. [10]
Humans are constantly bombarded with information, and without filtering this information
we could quickly become overwhelmed. The media plays a significant role in shaping how
we perceive the world and assess the likelihoods of events occurring. This perception can
have positive and negative effects. An example of a positive effect is the general
public becoming aware of water quality problems and combining their voices to press governmental
leaders to take action. A documented example of a negative effect occurred when fewer
people traveled by airplane following 9-11 believing their lives were safer on the road, which
resulted in 1,595 extra road fatalities. [11] Donald Rumsfeld has even been quoted as saying
that "belief in inevitability can be the cause of it". [12] British anthropologist Mary
Douglas has identified and defined four components of our society: (1) the
fatalists that believe nature is unpredictable, (2) the individualists that believe nature is
predictable and stable, (3) the egalitarians that believe nature is fragile and unforgiving, and
(4) the hierarchies that believe regulation is needed to keep nature in balance. [10] All
culture types are required to hold society in equilibrium, and oftentimes failure or catastrophe
is needed to restore balance between the four. This study primarily examines culture theory
as it pertains to hierarchies and their management of risk, and how society influences their
behaviors.
The key characteristics of risk, as perceived and managed by hierarchies, are that
hierarchies use science and technology to reduce risk, subordinates tend to take risks greater
than necessary due to ignorance or incompetence, and the number of risk accidents relates to
the amounts of risk accepted. [10] Hierarchies also believe cost-benefit tools will help them
make the most responsible decisions, and they use codes of conduct, policies, and regulations
to control their environments and hold these institutions together. [14] In fact, much of the
scientific research conducted on risk is sanctioned by hierarchies in the expectation that they
can more effectively manage risk. [10] Hierarchies must be able to distinguish themselves
from other hierarchies because unified objective functions within a single organization help
create a culture that reinforces the intent and priorities of the top-level management. In
addition to maintaining cohesiveness and a common culture, hierarchies must also manage
how they are perceived by society for survival. In governmental hierarchical organizations,
as mentioned by Christopher Hood for the Center for Analysis of Risk and Regulation, “the
sort of risk that tends to matter most in government is the risk of blame”. [11, p. 63]
Governmental organizations tend to select courses of action that will result in the least blame
if failure occurs rather than the optimal solution. However, there are several benefits
provided by the risk of blame: blame is a central regulator of human conduct and of social
restraint, it creates incentives to behave appropriately, and most importantly it keeps institutions accountable to those they serve.
Culture Theory thus provides developmental guidance for the framework by stipulating the need for a cost-benefit
component, by suggesting that scientific innovation will foster optimism, and by acknowledging that the possibility of blame shapes how hierarchies decide.
Decision Theory.
Decision Theory supposes that there always exists some rational purpose behind a decision.
Descriptive Decision Theory tries to explain how people make decisions through empirical and
experimental observations, while Normative Decision Theory sets apart rational and irrational
decisions. A decision is considered rational if the decision-maker selects the decision with
greatest reason at the time of the decision. [17] Sometimes a decision is solely based on
intuition because there is no evidence or previous experiences to base the decision upon, but
the decision-maker assumes the risk because the results of the decision could possibly lead to
more productive decisions in the future. Decision Theory also provides several possible
approaches to collective decision-making. There must always be one member of a collective that has the final decision, and
Social Choice Theory explains how collective decision-making combines the inputs of various individuals to reach
a decision that maximizes benefits to everyone. [17] Voting procedures are an example of
how Social Choice Theory is used to elect an official that presides over the governing body.
Making decisions as a collective is the only way various cultures and beliefs can aggregate to
form one unified decision. [17] Decisions are typically reached using a Decision Matrix that
compares act, state, and outcome variables. These variables are arranged in a table that
determines the outcome of a combination of states and actions. Outcomes are compared
against others and then the most rational decision is reached by selecting a combination of
acts and states that results in the most desirable outcome. [17] The superior outcome
is identified by applying the Dominance Principle: alternative (a) is preferred over alternative (b)
if and only if the value of the outcome {a, state} is greater than the value of {b, state} in every state.
The Optimism-Pessimism Rule considers both the best and worst outcomes, and the alternative is chosen by weighting the two.
The rule can be described through the formula (α = decision-maker's degree of optimism,
min = worst outcome, max = best outcome): alternative (a) is preferred over alternative (b)
iff α * max(a) + (1 - α) * min(a) > α * max(b) + (1 - α) * min(b). [17] Transitivity is a
reasoning shortcut used to speed the process or provide reliable assumptions, and states that if
alternative (a) is superior to (b) and (b) is superior to (c), then (a) must be superior to (c). [17]
The concepts just described support this study by providing various means to assist the
decision-maker in comparing alternatives upon presentation of the analysis, and supports the
belief that processes should aggregate to a single point for final decision-making.
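To make the decision rules above concrete, the following minimal sketch applies the Dominance Principle and the Optimism-Pessimism Rule to a small decision matrix; the alternatives, states, and payoff numbers are hypothetical, and the functions are my own illustration of the rules summarized from [17].

```python
# Minimal sketch of two decision rules over a decision matrix.
# Rows are alternatives, columns are states of the world; entries are outcome values.
# All names and numbers are hypothetical illustrations of the rules cited as [17].

payoffs = {
    "patch_now":   [8, 6, 7],   # value of each outcome under states s1, s2, s3
    "patch_later": [5, 6, 4],
    "do_nothing":  [9, 1, 2],
}

def dominates(a: str, b: str) -> bool:
    """Dominance Principle: a dominates b if a is at least as good in every state
    and strictly better in at least one."""
    pa, pb = payoffs[a], payoffs[b]
    return all(x >= y for x, y in zip(pa, pb)) and any(x > y for x, y in zip(pa, pb))

def optimism_pessimism(alternative: str, alpha: float) -> float:
    """Optimism-Pessimism Rule: alpha * best outcome + (1 - alpha) * worst outcome."""
    values = payoffs[alternative]
    return alpha * max(values) + (1 - alpha) * min(values)

print(dominates("patch_now", "patch_later"))        # True: better or equal in every state
alpha = 0.3                                          # a fairly pessimistic decision-maker
ranked = sorted(payoffs, key=lambda alt: optimism_pessimism(alt, alpha), reverse=True)
print(ranked)                                        # alternatives ordered by the rule
```

Under these made-up payoffs the two rules agree, but with a more optimistic α the Optimism-Pessimism Rule can begin to favor the riskier alternative, which is exactly the kind of preference information the study argues should come from the decision-maker rather than the assessor.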
Rational Action Theory.
Rational Action Theory suggests that the intention behind an action should be morally
constructed, benefit mankind, and lead actors to a state of equilibrium. Rational Action
Theory also combines a model of individual action and social interaction. [15] Rational
Action Theory makes the following assumptions that actors can: choose between different
possible actions, assign likelihoods to outcomes, order actions and outcomes by preferences,
Rational Action Theory shows that empirical evidence has little effect when it comes
understanding of logic. [15] Such is the case when a person is faced with deciding between
their religious convictions (or passions) and a socially-approved rational decision. Rational
decision-making is a very individual process, and neither science nor technology can offer
much support. Rational Action Theory criticizes technical risk assessments by claiming:
undesirable effects are subjective, human activities and consequential relationships are
complex, the institutional structure of managing risks is prone to organizational failure, and
high impact/low probability events are perceived as equal to low impact/high probability
events. [15] Any imprecision in probabilities or likelihood estimates poses a problem for risk
assessments under this theory. Rational Action Theory also states that the decision on the value of any thresholds should be
independent of the numerical analysis of risk. [15] These assumptions support this study’s
suggestion that the decision variables should reflect the preferences and utilities of decision-
makers in order for them to make the decisions they feel are best for the organization, and
that an organization's defense strategy should focus on its own state of security when presented with significant uncertainty about the threat.
Game Theory.
Game Theory ties into Rational Action Theory in that it studies mathematical models
of strategic interaction, which can be used to choose among alternatives where external forces are at play or to select business strategies
where competition is a factor. [6] In cyberspace, there are two competitors: the defender and
the adversary (or threat). Game Theory can be applied to level the playing field in
cyberspace, and return some symmetry between opponents. There are many game types, but
this study primarily concerns itself with instance-based learning and non-cooperative games.
In game theory, both opponents are either trying to maximize gains or predict an
opponent's next move, and non-cooperative games can model the interactions between the security analyst and the attacker. [8] Instance-based learning games
are based on Instance-Based Learning Theory (IBLT), which explains how one opponent
learns and makes decisions from accumulated experiences. [8] IBLT suggests the network defender should focus on the
recognition, diagnosis, and comprehension of threats while monitoring a set of network events. [8] The
core of game theory is focusing on the consequences of decisions [15], and this mindset is
critical to this thesis. The key assumptions under Game Theory are that all players are
rational actors, all players know all players are rational, and players select the alternative that
dominates others (or according to the Dominance Principle). [17] In the end and as proposed
by John F. Nash Jr., rational players will do whatever they can to ensure the least regret. [17]
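The "least regret" idea can be illustrated with a short sketch. Minimax regret is used here as one common way to operationalize regret, not as the specific procedure of [17], and the payoff matrix and all numbers are hypothetical.

```python
# Minimal sketch of a minimax-regret choice between defender strategies.
# Rows are defender alternatives, columns are possible attacker behaviors;
# entries are payoffs to the defender. All values are hypothetical.

payoffs = {
    "harden_perimeter":    [7, 2, 5],
    "monitor_and_respond": [7, 5, 4],
    "accept_risk":         [9, 0, 3],
}

n_states = 3
# Regret = best achievable payoff in that column minus the payoff actually received.
best_per_state = [max(p[s] for p in payoffs.values()) for s in range(n_states)]
regret = {alt: [best_per_state[s] - p[s] for s in range(n_states)] for alt, p in payoffs.items()}

# A least-regret player picks the alternative whose worst-case regret is smallest.
choice = min(regret, key=lambda alt: max(regret[alt]))
print(choice, regret[choice])
```

In this toy case the balanced strategy wins because its worst-case regret is lowest, mirroring the intuition quoted above that rational players protect themselves against the outcome they would most regret.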
Current risk management frameworks used for assessing the risk exposure of federal IT
systems rely on prescriptive, probabilistic, and progressive approaches, which are failing organizations in their attempts to effectively manage risk. The
proposed framework takes a proscriptive, deterministic, and reflexive approach instead. A proscriptive approach will assess risk as if
failure has already occurred, help organizations identify those risk factors that lie at the heart
of the problem, and establish risk thresholds prior to a risk decision by condemning values
that are unacceptable to the organization. Deterministic approaches look at the circular
relationships between causes and effects, and avoid assigning probabilities to risk
scenarios by assuming failures will inevitably occur. This approach will shift intelligence
priorities from assessing the unknown to quantifying what is known about the environment.
Reflexive approaches link outcomes with the actions taken by an organization as opposed to
linking the threats to the outcomes, which will help organizations reflect on and evaluate the
effectiveness of their own actions and policies. A framework that takes these contrasting approaches and applies the theories mentioned in
Section 2.4.2 will produce a framework that is optimized for the 21st century, and will help restore
the trust or faith in institutions held by those they serve, which is eroding quickly as mentioned earlier.
Of the contrasting approaches, the prescriptive approach is the more dangerous one, and has the potential to lead to the most severe unintended consequences. The
reason for this danger is that decisions are made without viewing the scenario beyond the
point of decision; had the longer-term consequences been visible, the prescribed action often wouldn't have been chosen. Prescriptive measures require imagination, and imaginations
have potential to run far outside the scopes of reality. When former Secretary of Defense
Donald Rumsfeld was asked during a congressional hearing about his greatest fear, he replied
by saying his greatest fear is “the danger that we can be surprised because of a failure of
imagining what might happen in the world”. [12] In the case of the Iraq War (2003-2011), an
action (or military campaign) was prescribed to prevent Saddam Hussein from becoming a
greater threat to the United States or its allies. The perceived casus belli, or justification for
war, led to many long-term unintended consequences that are still being managed today. A
proscriptive analysis would have developed optimal reactionary measures to be taken after a
threat was introduced by Iraq rather than developing measures to contain the threat before it
became realized. Sir David Spiegelhalter agrees that prescriptive approaches can lead to
problems, observing that a risk "statement can be illuminating, but could get too complex and over-prescriptive, which has
possibly been a problem with otherwise admirable [approaches]." [22, p. 8] Politics have
much to do with selecting prescriptive approaches as the federal government is under intense
pressure to ensure threats do not find themselves in the "backyard" of U.S. citizens. [14]
This pressure is intensified by social amplification mechanisms, as described by the Social Amplification of Risk Framework (SARF), and the
media has increased society's sensitivity to risk. LTC Matthew Hopper feels that there is, in
fact, a greater sensitivity to risk which has occurred because “a single voice has the potential
to become as loud as the remaining 999,999 voices, and society has become so burdened
with risk possibilities that it no longer puts as much effort into mitigation as required". [M. Hopper, personal communication, 2016]
Niklas Luhmann describes society as a communicative system [14], and communicative logic (or translation) helps stabilize society
[16]. Since both Niklas Luhmann and Joost Van Loon have theoretically explained how
communicative logic is developed and stabilizes society, the next step is to examine a framework that describes how that logic translates risk.
SARF is one such framework; it attempts to deconstruct how risk is translated from its source,
amplified or attenuated through repeated iterations, and then finally ripples across society to become
perceived impacts. [14] Mass media is a prime example that validates these assumptions
under SARF and Luhmann’s theory. Mass media is a major contributor to how risk is
perceived by the public as this is the primary medium for how many people create much of
their own knowledge about the world, and media outlets are under intense pressure to
increase their viewership by grabbing people’s attention in ways that approach the border of
unethical journalism. This problem is compounded when several outlets rely on a single news
agency to produce the information, and these multiple outlets spin a single interpretation of
the facts which creates an impression that the analysis being reported is truth. An example of
how both social amplification and the media can play a role in deciding outcomes is by
looking at a 2003 Washington Post poll that showed 69% of the American public believed
Iraq was connected to the September 2001 terrorist attack on the United States even though
the federal government never stated such a connection existed. [12] This belief generated
public support that placed pressure on politicians to select a prescriptive course of action that
would prevent a future attack on American soil. How did 69% come to believe that Saddam
Hussein had a role in 9-11 when there was no substantial evidence to support this belief?
Perhaps the seeds of this belief were planted through some opinion from a news outlet
contributor which then ran through the various processes within SARF and cycled through
the news agencies repeatedly until the majority developed the same beliefs. This example
demonstrates the strength of the relationship between socially manufactured perceptions and the
courses of action ultimately chosen. A proscriptive approach would instead establish, in advance, the
courses of action to be taken following the realization of an event or system failure by condemning unacceptable values.
Prescriptive measures are proactive, while proscriptive measures are reactionary, focusing on resilience,
robustness, and redundancy, the capabilities a threat must overcome to defeat a system,
as identified by risk management expert Yacov Haimes in [9]. Focusing on reinforcing those
capabilities from a proscriptive standpoint will reduce the effects that the media, social
amplification, and manufactured perceptions have on risk decisions. Probabilistic approaches assign likelihoods
to the data collected, which enables organizations to make predictions on the likelihood of
future events, while deterministic approaches avoid probabilities and see outcomes as
sequences of "if-then" statements. Joost Van Loon says that trying to describe cyber-risks in
probabilistic terms is a futile attempt, and suggests instead that organizations ask what can be done about them.
A deterministic approach assumes that the likelihoods of outcomes are inevitable and dependent on the conditions of
the system being analyzed. Deterministic approaches eliminate the need to assess the
probabilities of an outcome or the effect a threat's capabilities could have on risk variables.
Probability estimates complicate the modeling process due to the small proportion of knowns to unknowns, and excluding nonessential or
deeply uncertain variables from the framework optimizes the modeling process. Probabilistic
approaches only optimize the modeling process if the proportion of “knowns” is significantly
greater than the amount of “unknowns”, but the case is entirely the opposite in cyberspace.
Therefore, the conditions that assign some quantity to the threat variable must all be the same
because the safest assumption is that all cyber threats/entities have equal capabilities due to
the existence of Advanced Persistent Threats (APTs), internet anonymity, and the threat’s
complete disregard for legal boundaries. It is nearly impossible to assign any precise or
accurate probabilities that a cyber threat will result in a specific outcome because the
uncertainty far exceeds the knowledge available. Probabilities are also dependent on the distribution of the data that
enables statistical analysis to be performed, but Marcus du Sautoy claims that the world isn't
random; it is chaotic. Chaos Theory states that it is impossible to predict the future, but it can
be managed under the right assumptions. [28] In addition, focusing on consequential impacts
allows the analysis to consider a variety of outcomes beyond physical harm alone. Since deterministic models examine
relationships, an example of how this multi-objective analysis could benefit risk management
is by determining the impact one responsive strategy would have on lost production and revenue,
as opposed to a probabilistic model that could only illustrate the probability of a virus resulting in a variety of disasters.
A probabilistic approach would require separate estimates for each of these relationships, meanwhile all of these relationships can be viewed within a single deterministic model.
Another issue with probability estimates is the relational dependence between the
subject and object; a relationship that is based on unprovable assumptions. [11] Deterministic
outcomes are determined by the relationships between various objects being modeled based
on known information or strong assumptions, and then communicated by the subject. Though
the subject still plays a role in interpreting the data, the deterministic approach focuses on managing the system as opposed to the "futile attempt"
to make accurate predictions of probable scenarios described earlier by Van Loon. [16]
Gordon and Loeb clearly state that there will never be a simple procedure to determine
probabilities of threats or impacts for IT systems [5], and any attempt to determine these are
plagued with challenges. Since there exists a relationship between many threats and one
outcome [6], probabilities must be assessed for each threat to provide a general probability
for the outcome. A deterministic model would instead exclude the threat variable and assume
the probabilities for each outcome are the same, and alternately measure environmental
conditions that facilitate a threat event by exploring the relationships between those risk factors.
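A minimal sketch of that contrast is shown below; the conditions, rules, and numbers are hypothetical illustrations rather than part of any cited model. A probabilistic model asks for a likelihood up front, while a deterministic model walks known environmental conditions through "if-then" rules and treats the triggered outcome as given.

```python
# Minimal sketch contrasting the two modes discussed above. The conditions,
# rules, and numbers are hypothetical illustrations, not part of the cited models.

# Probabilistic style: assign a likelihood to a threat event (subjective in practice).
p_threat_event = 0.2          # an assessor's guess; this is the step the study questions

# Deterministic style: chain "if-then" statements over observable conditions only.
environment = {
    "internet_facing": True,
    "unpatched_critical_cve": True,
    "mfa_enforced": False,
}

def outcome(env: dict) -> str:
    """Walk the known conditions through simple if-then rules and return the outcome."""
    if env["internet_facing"] and env["unpatched_critical_cve"]:
        return "compromise assumed inevitable; prioritize containment and recovery"
    if not env["mfa_enforced"]:
        return "credential abuse assumed; prioritize detection"
    return "no deterministic trigger; continue monitoring"

print(outcome(environment))
```

The deterministic sketch never asks how likely the adversary is to act; it only asks which known conditions are present, which is the shift in emphasis the paragraph above describes.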
The optimization potential for a deterministic model increases with any reductions in
uncertainty, and “determinism should be pushed to its very limits” as adamantly stated by
Jack V. Michaels. [5, p. 77] Haimes claims that uncertainties can prevent risk models from
taking on a deterministic form in most cases, but creates a catch-22 by adding that
“uncertainty contains no reasonable probabilities”. [9, p. 227] Assuming all threats have
equal capabilities, that all outcomes are equally inevitable at some point, and that an
organization's own conditions determine those outcomes positions the deterministic approach as the more attractive and optimal alternative. Purely deterministic models must
also include a problematic element that critiques the quality of the data. Regardless of
the approach taken, research out of Aarhus University (Denmark) suggests that a future society will lose faith in our ability to
calculate risk unless we also include a reflexive approach in our risk management programs.
[14]
Progressive approaches also struggle to keep pace with innovation because major innovations occur in nearly half the amount of time it takes to
optimize a management process, assuming the ratio remains at five years versus ten. [15] These
approaches operate under the assumption that policies and processes, developed under the
skills of subject matter experts, can control the fate of the organization by containing or
reducing risks. However, this "containment" only creates more complex risks as organizations grow their risk
specialization within their bureaucratic structures. [16] Organizations must deconstruct this complexity to improve risk identification
and management. A reflexive approach would reduce this complexity and improve
optimization by evaluating the effectiveness of current policies via reflection through the
lenses of outside actors, by further establishing checks and balances, and by conforming to
people rather than forcing people to conform to the institution [15]. Van Loon says that
“when we engage cyber-risks reflexively, we transcend the matter at hand” [16, p. 161], and
this transcendence enables organizations to discover the root issues with current cyber
policies and reflect on the ways they conduct business in the cyber domain. Additionally,
subjects need to see themselves in the results of the analysis, but current analysis techniques
only view results from the perspective of the organization. [15] Renowned sociologist Ulrich Beck champions this reflexive view
[16], and says progressive modernization is the causal agent behind turning our society into a "risk society."
The belief that science and technology can reduce the uncertainties in risk has caused
organizations to develop progressive strategies that attempt to avoid the consequences of risk
rather than focusing on resilience and robustness. Several of the alternative methodologies
mentioned earlier in subchapter 2.4 have reflexive properties in addition to resilience and
robustness focuses. A technique by Haimes suggests that decomposing a system can reduce
complexity when larger problems are broken up to analyze problems of lesser granularity. [9] Phase four of the RFRM risk
management approach reflects on the ability of each outcome to defeat the three primary
defensive properties of a system (resilience, robustness, and redundancy). [9] DER is also
reflexive in that assessors view a system’s attack surface from the perspective of an
adversary and the amount of effort they are willing to sacrifice to complete their objectives.
[8] Each of these methodologies supports the notion that reflexive elements are essential to
strategies that help organizations better manage risk through scientific analysis of the relationships within their environments.
Prior to 1988, NIST was known as the National Bureau of Standards, which was
founded in 1901 to provide standard weights and measures in the United States. NIST
reports to the U.S. Department of Commerce, which reports to the Executive Branch's Office of
Management and Budget. According to its official website, NIST's mission is to "promote
U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality
of life.” NIST was chartered in 2009 to lead the effort in developing a new risk management
framework, and formed the Joint Task Force Transformation Initiative Interagency Working
Group to meet the requirements of FISMA (Federal Information Security Management Act),
NIST, and the office of the Secretary of Commerce. The NIST “Risk Management
Framework (RMF) for DoD IT” was formally adopted by DoD Instruction 8510.01 in March
of 2014, and replaced the Department of Defense Information Assurance Certification and
Accreditation Process (DIACAP) which served as a risk management framework since 2006.
Prior to DIACAP, the DoD relied on the Defense IT Security C&A Process (DITSCAP) as a
certification and accreditation framework, administered by DISA (the Defense
Information Systems Agency) under the authority of the Secretary of Defense. All three
frameworks work (or worked) to achieve a standardized and generally applied manner of
assessing and authorizing federal IT systems. NIST documents the RMF in Special
Publication 800-37, and provides guidance for conducting risk assessments in Special
Publication 800-30. The goals of the RMF are to improve security, strengthen processes, and
encourage reciprocity among federal agencies. NIST has established three organizational
tiers at which the RMF operates: Tier 1 - Governance, Tier 2 - Mission Business Processes,
and Tier 3 - Information Systems. Tier 1 views risk from a strategic perspective, Tier 3
assesses risk at the tactical level, and Tier 2 bridges the upper and lower tiers. The RMF is
comprised of six well-defined steps that are expected to correspond to system development
lifecycles (or run parallel to the DoD Acquisition Process, abbreviated as DAP), and take into
account dependencies among other systems being assessed. The process begins in Step 1 by
categorizing the information system (IS) by the IS Owner (ISO), who represents the users’
community and is responsible for procurement, compliance, and developing the security
plan. This step corresponds to the development of DAP’s program acquisition information
assurance strategy. Step 2 involves selecting security controls based on the classification of
the system, and these are selected by the CIO and approved by the Authorizing Official (AO)
who is accountable for all security risks dealing with the system being evaluated. System
security baselines are specified under this step in DAP. Security controls are implemented by
the ISO in Step 3 according to the architecture and requirements set by the CIO. DAP
translates these security controls into design requirements that will be furnished to the
program managers for the system being evaluated, and approved at various design review
points. The implemented security controls are assessed in Step 4, and the Security Control
Assessor packages the assessment in the Security Assessment Report (SAR) as an executive
summary provided to the ISO and AO. DAP develops their own test and evaluation criteria
during this phase to ensure the implemented controls allow the system to function as
required. Once the SAR and Plan of Action and Milestones (POA&M) have been reviewed
by the AO, the system is authorized and the risk is accepted by the AO in Step 5. DAP
officials conduct operational tests and evaluations based on the criteria set in the previous
step during this phase. Step 6 is the final step, which involves a routine monitoring of
security controls by conducting ongoing assessments and ensuring the system continually operates at an acceptable level of risk.
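For readers who prefer a compact view, the sketch below restates the six steps and roles just described as a simple data structure; the wording of each step is a paraphrase of this walkthrough, not official SP 800-37 text.

```python
# Compact summary of the six RMF steps described above, expressed as a data
# structure. Step wording is a paraphrase of this chapter's walkthrough, not
# official SP 800-37 text.

RMF_STEPS = [
    (1, "Categorize the information system", "Information System Owner (ISO)"),
    (2, "Select security controls",          "CIO, approved by the Authorizing Official (AO)"),
    (3, "Implement security controls",       "ISO, per CIO architecture and requirements"),
    (4, "Assess security controls",          "Security Control Assessor (produces the SAR)"),
    (5, "Authorize the system",              "AO (reviews SAR and POA&M, accepts the risk)"),
    (6, "Monitor security controls",         "Ongoing assessment by the organization"),
]

for number, step, responsible in RMF_STEPS:
    print(f"Step {number}: {step:<35} -> {responsible}")
```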
NIST Special Publication 800-30 structures the supporting risk assessment process by assigning a qualitative category (high, medium, or low) to each risk variable, with those
risk variables being: threat, vulnerability/previous conditions, impact, and likelihood. There
are four basic steps to the Risk Assessment process: (1) Prepare by identifying the purpose,
scope, assumptions, constraints, and sources of the assessment. The risk model is also
selected during this step, which provides the assessor with the risk variables to be addressed
and the relationships of those factors. (2) Conduct the assessment by identifying threat
sources, threat events, the likelihoods of those events to occur, the impact resulting from a
threat against a known vulnerability. After doing so, determine the severity of risk by plotting
the impact and likelihood within a risk matrix. (3) Communicate the Results to executive
management, and (4) maintain the risk assessment for subsequent reviews, audits, or future
assessments [3]. A chart illustrating these processes and the relationships between variables
is provided in Appendix B.
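A minimal sketch of the qualitative matrix lookup implied by step (2) follows; the matrix values are illustrative defaults rather than the exact tables published in [3].

```python
# Minimal sketch of the qualitative risk-matrix lookup used in step (2) above.
# The matrix below is an illustrative default, not the exact table from SP 800-30.

RISK_MATRIX = {                      # (likelihood, impact) -> risk level
    ("low", "low"): "low",       ("low", "medium"): "low",       ("low", "high"): "medium",
    ("medium", "low"): "low",    ("medium", "medium"): "medium", ("medium", "high"): "high",
    ("high", "low"): "medium",   ("high", "medium"): "high",     ("high", "high"): "high",
}

def assess(likelihood: str, impact: str) -> str:
    """Plot a threat event's likelihood and impact in the matrix and return its risk level."""
    return RISK_MATRIX[(likelihood, impact)]

print(assess("medium", "high"))   # -> "high"
```

The limited number of bins in such a lookup is exactly the ranking and ordering problem this study raises later when many distinct scenarios collapse into the same classification.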
NIST also points to a scoring system in Special Publication 800-30, which is called the Common Vulnerability Scoring System (CVSS). [43] CVSS is an
open industry standard for assessing the severity of computer system security vulnerabilities
that was launched in 2005 and has since been revised twice. Under this system, assessors
calculate severity scores using a formula dependent on a variety of metrics that estimate the
exploitability of a system and the corresponding impact. Vulnerabilities are labeled as “low”
if the score is between 0-3.9, “medium” if the score is between 4-6.9, and “high” if between
7-10. Quantitative inputs are still selected based on subjective judgment, but this is an
attempt to produce results with greater precision based on six metric calculations. Risk
management firm Risk Based Security (RBS) has been openly critical of CVSS, citing a lack
of granularity that results in vectors and scores that are not able to distinguish between
certain vulnerabilities or risk profiles. RBS also feels that the scoring system requires too
much knowledge about the exact impacts caused by vulnerabilities. [44] CVSS is mentioned
here to demonstrate the federal government's awareness of the limitations posed by purely qualitative categorizations of risk.
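The score banding quoted above is easy to express directly; the helper below is a small illustration of that mapping, using the 0-3.9 / 4-6.9 / 7-10 thresholds from the text, and is not part of the CVSS specification itself.

```python
# Small illustration of the CVSS score-to-severity banding quoted above.
# Thresholds follow the 0-3.9 / 4-6.9 / 7-10 breakdown in the text; the helper
# itself is an illustrative convenience, not part of the CVSS specification.

def cvss_severity(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score < 4.0:
        return "low"
    if score < 7.0:
        return "medium"
    return "high"

print(cvss_severity(5.3))   # -> "medium"
print(cvss_severity(9.8))   # -> "high"
```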
In addition to the quantification of risk, uncertainty doesn’t receive the full attention it
deserves and should be represented as its own independent variable. John Adams describes
the relationship between risk and uncertainty by saying that “a world without risk is a world
without uncertainty”. [10, p. 17] NIST acknowledges that uncertainty is inherent in the
evaluation of risk and indirectly expresses it as a level of confidence against risk variables in
[3], but uncertainty should be expressed quantitatively and independently if possible. Though
analysts occasionally describe uncertainty in numerical terms, the inputs are often subjective
in nature, which "colors the decision process" as observed by Yacov Haimes [9, p. 27]. This
study primarily promotes the use of objective variables in the assessment of risk, but as
one author observes, combining "perspectives has potential to address the limitations of mathematical models and employ the
social sciences to close any gaps”. [22, p. 2] A functional model must combine both
qualitative and quantitative techniques to accurately and completely describe the scenario,
but those inputs must be separated so that each is independent of the other to avoid any
misrepresentation. This distinction can be achieved by creating a third dimension within the
risk model, but the model NIST recommends within its framework only supports two
dimensions whose inputs are either subjective or objective. All the authors examined throughout this study believe any complete model must account for uncertainty explicitly.
The true state of the system will never be fully described unless the issue of
uncertainty is appropriately addressed, and every account of uncertainty within the model
actually improves the overall process. [9] The NIST approach conducts assessments as if the
state of the world is static throughout and some period after the assessment. Assessors have
to overcome this limitation by making predictions on the future state of the object being
assessed, but as Heisenberg's Uncertainty Principle suggests, "elements of any system within
an environment are constantly in motion and move in response to attempts to measure it". [10, p. 29]
The NIST-governed assessor must take into account future variability, the latency period of
threats, and the compounded effects of threat combinations in addition to the state of things.
[10] Muddling through all these complexities makes it extremely challenging for
a human mind to account for every required variable and produce a precise estimate of risk
exposure. These complexities amass to create a problem so perplexing that even a
team of people may never be able to fully solve. Robert Glass's Law of Complexity illustrates
how added complexity compounds the problem, as every 25% increase in problem complexity
brings a 100% increase in solution complexity. [38] Sometimes the uncertainty becomes
so great that the assessor lacks the confidence to even assign a subjective weight to a risk
variable. This situation essentially forces the assessor to either ignore the risk contributor or
assign a weight randomly; for example, through MCM. There are significant issues with
either option, and it would be preferable to derive such weights through mathematical processes and remove the burden from the assessors of expressing
their confidence in the analysis. As an example, suppose one assessor is correct and
expresses a high degree of confidence in his prediction about the future while another is
extremely cautious in his expressions. The correct assessor is initially perceived as ludicrous
and any request for funding is limited to the understanding of the decision-makers, but the
conservative assessor is viewed more favorably and receives his full request for funding.
Following a catastrophe, both assessors will be criticized for either failing to clearly
articulate their confidence or for being too extreme. Both scenarios are linked to uncertainties
that exist among all actors, and approval is ultimately up to the decision-makers. These two
examples go to show that decision-makers should be presented with factual data while
representing uncertainty numerically in a separate dimension so they can form their own
conclusions and assume responsibility for the decision with a higher degree of assurance. By
using the current approach, in the end, the decision-makers are left to choose between their
own incomplete understandings of the problem or to settle with the assessor's confidence in the analysis.
Another systematic issue with the current framework is that it fails to unify the
organization under a single decision process, and there is little transparency for how
decisions are reached at each level. The current process creates more of an "us versus them" situation,
or functional siloing, where a department's needs take priority over organizational needs
and conflicts of interest exist internally within tiers of the organization. These issues could be
reduced by increasing transparency across the organization, and by developing a process that
maximizes participation and communication between decision levels. U.S. Army commander
LTC Matthew Hopper agrees that people in organizations can sometimes become slightly
detached from the organization’s priorities and primary mission as risk management
processes continue. [M. Hopper, personal communication, 2016] The organization’s chosen
risk management process needs to function as though participants are part of a team, and
it should connect decision-making such that cross-level processes operate as a network, and the flow of
communication is constant. Input taken at each level creates a more accurate picture and
meaning of the world as seen from various views. This process can be supported through
greater transparency and constant communication across the organization. [14]
Another problem with the current framework recommended by NIST is the lack of
opportunity for decision-makers to introduce their own criteria for establishing thresholds or
to define risk variables according to values that link to their utilities. The current model is
already set by the parameters provided, and the thresholds are already defined as to what
constitutes “high, medium, or low”. Decision-makers are ultimately the ones who accept
responsibility for a decision, and it should be up to them to decide what values are considered
unacceptable versus predefined risk categories of the output. Chauncey Starr wrote in Science
(1969) that "there is no absolute threshold for when risks of a technology should
be accepted"; deciding on a threshold should rely more on values and ethics than on facts.
[14, p. 34] There are cases when risk variables plot in the “high” category, but a high risk
decision is still made. Would it not be better if the criteria were set as either acceptable or
unacceptable up-front? Failure to clearly establish defined acceptability criteria can result in
extremely unfortunate outcomes or cause a relaxing of ethics, as was the case with Ford's
1971 decision regarding the Pinto. The NHTSA (National Highway Traffic Safety
Administration) established safety criteria that required all vehicles be able to withstand an
impact at 30mph without leaking fuel. During Ford’s testing, it was revealed that the Pinto
was a fire risk if another object collided with its rear under the thresholds set by NHTSA. A
cost-benefit comparison showed that the cost of modifying the design would exceed the total
amount of lawsuits Ford would face over the lifecycle of the brand. [39] This example clearly
illustrates that determining thresholds at the conclusion of the analysis is just as illogical as it is dangerous.
A final problem with the current model is how risk variables are presented to the
decision-maker. The NIST model categorizes risk variables in advance, and then it is up to
decision-makers to mentally convert the results into their own utilities which is extremely
challenging. Utilities are variables that quantitatively represent factors that contribute to an
individual’s meaning of success [15], and are the basis for how decision-makers determine
which alternative is preferred over another by linking the presented decision variables with
their own utilities. By clearly defining decision-makers’ utilities up-front, assessors can tailor
the analysis to them and reduce the load of conceptualizing these interpretations which could
help guide the analysis. The current model helps describe risk exposures, but fails to link risk
variables to a decision-maker’s utility, which might encourage him or her to explore the data
further or recognize the effects one alternative decision could have on another by modifying
the parameters. With the current model, the assessor must manually perform a completely
new analysis with each environmental change or with each introduction of a new variable
addition to these approaches, pushing objective assessments of cyber risk to its limits
decisions, reduce overall complexity, and function optimally for organizations in the 21st
century.
“We should not be victims of risk, but active managers of it.” ~ Mary Beard, 2010
Just as Mary Beard’s quote suggests in [11, p. 15], organizations are increasingly
finding themselves as the victims of risk management rather than successful managers of
risk. Many institutions have unintentionally become victims by developing programs that
allow assessors to make subjective determinations on the risk likelihoods and impacts to
which their organizations are exposed. These subjective judgments combine individual
biases, perceptions, and/or personal convictions to create inputs that vary greatly from
individual to individual. To demonstrate the variability between expert opinions, one case study
asked climate scientists to assign categories that express their confidence in modeled data prepared for a conference. His
study found that the scientists’ judgments lacked consistency and that each interpreted the
categories differently. [22] Though expert opinions generally vary, obtaining the correct
answer is achievable through collective reasoning if there are sufficient participants included
in the survey. Marcus du Sautoy demonstrated this possibility in his documentary "The
Code” by asking 160 individuals to guess the number of jelly beans in a jar of over five
thousand. Though some guesses were either grossly over or under-estimated, the group of
160 were able to collectively estimate within a very small margin of error the accurate
number of jelly beans in the jar, just as he had predicted. [28] This is the problem with
subjective judgments made by a single assessor or a small team: they cannot make decisions with any real precision. NIST recommends the use of qualitative over
quantitative assessments due to the simplicity, cost effectiveness, and lesser need for
expertise in making those assessments. [3] NIST makes this recommendation under the
assumption that only a single individual or a small team will assess an exposure, but it would
require a far greater number of assessors to precisely estimate risk exposure for any system.
The trade-off might be a more cost-effective tool for assessing risk, but subjective analytical
techniques lack the precision needed to accurately assess and forecast risk.
The primary purpose of this study is to suggest an alternative to the current RMF used
by the DoD that maximizes the use of objective variables to minimize complexity and enable
decision-makers to make the most rational decision based on logic as opposed to shaping
decisions around subjective judgment; the second purpose is to optimize the risk management process. Optimization is the goal of minimizing costs while maximizing gains,
and the assumption behind an optimization problem is that the consequences of decisions are
taken into account for future policy options. [9] In order for this framework to achieve its two
primary purposes, it must meet several goals. A proposed framework would synthesize
selected elements of Game Theory, Cultural Theory, Rational Action Theory, Decision
Theory, and effective visualization techniques in order to: (1) simultaneously examine
multiple objectives of the organization, (2) limit bias and subjectivity during the assessment
process by converting subjective risk contributors into quantitative values using tools that
measure the attack surface and adversarial effort, (3) present likelihood and impact as real-
time objective variables that reflect the state of the organization and are grounded on sound data,
(4) unify decision-making across the organization's three tiers (strategic, operational, and tactical) with maximum transparency, (5) achieve greater
representation of the real scenario and strive to model future scenarios, (6) adapt to the
preferred granularity, dimensions, and discovery of the decision maker, and (7) improve the
decision-maker’s ability to select the most optimal alternative by reducing the decision to
rational logic. The final product will be a recommended successor to the current framework,
"RMF 2.0", companioned with a Decision Support System (DSS) to aid decision-makers
within the bounds of the framework. A detailed implementation for achieving these seven goals is presented in Chapter 4.
The failures of all large empires and organizations have resulted either from defeat by
a superior opponent or from expanding beyond what they could effectively manage. LTC Matthew Hopper feels organizations reach a tipping point when
“they can no longer win against their opponents”. [M. Hopper, personal communication,
2016] Extending capabilities enables organizations to seize advantages and succeed over their
competitors, but those capabilities come at a cost. Glass's Law of Complexity also states that there is a 100% increase in complexity for every 25% increase in
capability, and increases in complexity cause increases in risk as well. [38] Hierarchical
organizations expand their bureaucratic structures to manage both the additional complexities
and risks induced from increasing their capabilities. The DoD is such an organization, and
there are extensive challenges within the management of risk by hierarchical organizations.
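Taken at face value, and only as an illustration of the compounding rather than a claim from [38], the law implies that capability growth is quickly dominated by complexity growth:

```latex
% Illustrative compounding under the 25%-capability / 100%-complexity relationship.
\[
  \text{capability after } n \text{ steps} = 1.25^{\,n}, \qquad
  \text{complexity after } n \text{ steps} = 2^{\,n}
\]
\[
  n = 4:\quad 1.25^{4} \approx 2.4\times \text{ capability}, \qquad 2^{4} = 16\times \text{ complexity}
\]
```

Under this reading, roughly doubling capability carries an order-of-magnitude increase in complexity, which is why complexity, and the risk it induces, acts as the binding constraint on expansion for hierarchical organizations like the DoD.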
There are steps that these types of organizations can take to reduce these challenges, but the
underlying tension remains: the organization tries to meet the expectations of its customer base or community while achieving its
principal objectives in a cost-effective manner. The organization must also exert sufficient
influence to shape the conduct and behaviors of its employees without driving them away.
Without any limitations, hierarchies would expand to the point where they start to represent a
burden to the people they serve. When decisions must be made, the organization should select the alternative that meets the requirements of a Pareto improvement, where a decision
can be made that makes the maximum number of people better off and no one worse off. [10]
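A Pareto-style comparison is simple to express; the sketch below checks whether one alternative makes at least one stakeholder better off without making anyone worse off relative to the status quo. The stakeholder names and utility numbers are hypothetical.

```python
# Minimal sketch of a Pareto-improvement check between a status quo and an
# alternative. Stakeholder names and utility values are hypothetical.

status_quo  = {"operations": 5, "security": 3, "finance": 4}
alternative = {"operations": 6, "security": 3, "finance": 5}

def pareto_improvement(new: dict, old: dict) -> bool:
    """True if no stakeholder is worse off and at least one is better off."""
    no_one_worse   = all(new[k] >= old[k] for k in old)
    someone_better = any(new[k] > old[k] for k in old)
    return no_one_worse and someone_better

print(pareto_improvement(alternative, status_quo))   # -> True
```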
Though this is the goal, it is often not the case as there remain some losers even though the
decision reached is for the good of the majority. Disproportional and unfair outcomes occur
because organizations are diverse, and every member has his or her own set of opinions and
values. This diversity presents a challenge to organizations when decisions must be made.
Organizations can attempt to unify people by creating a climate and culture that is understood
by all, but even then decisions remain shaped by the risk of blame. Federal organizations tend to develop strategies that avoid blame, and this
avoidance can powerfully shape the structure, processes, and activities within them. [11]
Occasionally the more rational choice is also the choice that could result in the greatest
blame if unintended consequences were to occur, but blame avoidance can outweigh any
positive credit if the rational choice were chosen. [11] A proposed risk management strategy
must enable federal organizations to select the most rational decision and avoid the blame,
and this can be achieved by basing the decision off of pure logic by describing decision
variables in objective terms which can then be shown numerically to be the superior option.
Subjective assessments currently used by the federal government cannot achieve this, so
appointed officials or military officers are indirectly forced by their constituents to avoid
blame. These decision-makers can achieve greater flexibility and shift the opprobrium by
altering the conditions of the risk analysis and the environments of their organizations. The
conditions needing the most focus are rationale, logic, and the objectification of risk
variables, which open organizations to approaches that may solve many of their most difficult problems.
The 1983 Royal Society said “there is a need for better estimates of actual risk based
on direct observations of what happens in society” [10, p. 8], and Lord Kelvin said “anything
that exists, exists in some quantity and can therefore be measured." [10, p. 10] Assessing
risk objectively (or quantitatively) differs from subjective (or qualitative) analysis by
measuring the exposure an organization faces and the likelihood that a negative consequence will occur due to an activity,
event, or behavior. Additionally, objective modeling will permit new observations to update
the risk model without having to rerun the assessment process. Objective assessments also
provide the opportunity to compare gains vs. losses, and to evaluate the most optimal and
logical decision among a number of alternatives. Optimizing the decision is selecting the
alternative that imposes a minimal cost while maximizing benefits for the organization [16],
which is fundamental in rational action and decision theories as stated by the Maximax Rule.
Decision-makers also tend to prefer objective assessments over subjective ones due to the ability to weight the likelihoods of future successes as given
by Prospect Theory [15], and one of the primary objectives of this study’s proposed model is
to maximize the analysis and presentation according to the preferences of the decision-maker
over time. LTC Matthew Hopper agrees that decision-makers ultimately prefer quantitative
over qualitative assessments, though he feels both have use in making recommendations to
superiors. LTC Hopper also feels that there should be separations between subjective and
objective analyses to prevent one from influencing the other, and quantitative breaks should
be clearly defined if used in the model. Whichever model is chosen, “it should enable
decision-makers to put their energy into the right things”. [M. Hopper, personal
communication, 2016]
Imagine a person standing within proximity of a ramp's release point with a large metal ball at the top, and the
person had to choose between estimates of where the ball should land based on mathematical
analysis or another’s “expert” guess. The rational individual would select the mathematical
analysis because a tested equation has shown the strongest evidence for performing as
expected. Of course many other variables outside the equation might not be taken into
account (for example: wind speed, temperature, etcetera), but these variables are minor and
an assessor could still estimate within close proximity of where the ball would land. This
example illustrates the gap in precision: purely subjective assessments are incapable of measuring risk outcomes with any acceptable precision. In
addition to precision, the proximity of the subject to the object has an effect on the decision
made. The person at the end of the ramp would care little how the trajectory of the ball is
modeled if the person were far outside any hazardous range of the impact area. Subjective
determinations are heavily influenced by an individual's proximity to the risk factor, which
creates additional problems within an organization as conflicts of interest arise. The problem
becomes even further compounded when multiple subjective variables are combined, which
causes the model to venture further and further from reality. Even in reasonable cases where
subjectivity can be applied, there are still too many situations where poor definition or
ignorance prevents the assessor from making an estimate with any high level of
confidence. [11] When confidence is challenged, implementing a Brier Scoring Rule can help
calibrate people for decision-making and discourage them from exaggerating their confidence.
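As a brief illustration of how a Brier score rewards calibrated confidence, the sketch below compares two hypothetical assessors; the forecasts and outcomes are invented, and the score itself is simply the mean squared difference between stated probabilities and what actually happened.

```python
# Minimal sketch of Brier scores for two hypothetical assessors.
# Each forecast is a probability assigned to an event; outcome is 1 if the
# event occurred, 0 if it did not. Lower scores indicate better calibration.

def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes      = [1, 0, 0, 1, 0]
overconfident = [0.95, 0.90, 0.05, 0.99, 0.90]   # bold, sometimes badly wrong
cautious      = [0.60, 0.40, 0.35, 0.65, 0.40]   # hedged, rarely far off

print(f"Overconfident assessor: {brier_score(overconfident, outcomes):.3f}")
print(f"Cautious assessor     : {brier_score(cautious, outcomes):.3f}")
```

Tracking such scores over repeated assessments penalizes exaggerated confidence rather than caution, which speaks to the two-assessor funding scenario discussed earlier in this chapter.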
Qualitative categories, when raw numbers are not included, can also create a false sense of security or doom. If the true probability of a
catastrophe is 51%, then an assessor might assign a “very likely” qualitative category to that
outcome based on the assessor’s perception or intrinsic value system. A “very likely”
estimate communicated to the decision-maker, without raw numbers, might cause him to
interpret this category in a probability range between 70-90% based on his own utility
functions or prior experiences. This scenario removes some ability from the decision-maker
to make a rational and logical decision to accept or disapprove an alternative for which he or
she will have to take responsibility in the event the decision leads to a catastrophe. A
subjective assessment of risk by a single assessor functions properly when there is proof
beyond reasonable doubt that an event or activity will lead to a specific outcome. [14]
Unfortunately, this is rarely the case when risk is assessed against federal IT systems as a
high degree of uncertainty and variability exist among the variables that contribute to cyber-
risks. Another significant limitation with subjective assessments involves how to assign accountability for the resulting
opinions. The expertise of the individual submitting the opinion could be challenged, but it is
ultimately the decision-maker that must fall on the proverbial sword for the decision made.
Therefore subjective assessments expose the decision-maker to undue risk, but if the decision
were based on objective data, then blame would shift to the distribution, calculation errors, or
plain misfortune. The intelligent decision-maker understands the risk associated with
subjective assessments, and most will exercise additional caution when presented with
alternatives. This caution often results in decisions with greater reliability that might have not
been the most rational or optimal choice, but would protect him or her from the consequences
of assuming greater risk. Aggressive strategies are occasionally needed to gain superiority
over the competition, but decision-makers take on additional risk if they accept strategies
built on subjective estimates. A final limitation of subjective assessments is the inability to rank and order multiple risk scenarios without a
broad range of qualitative breaks. Typical models in federal risk assessments only assign
three or four categories to likelihoods and impacts which will result in many scenarios having
the same risk classification. An objective assessment results in scenarios with specific
numerical assignments that are unique among others, whereas the subjective alternative leaves many scenarios indistinguishable.
Though subjective assessments are severely limited and challenged in their approaches, they still have a place when objective data is unavailable.
Just as with subjective assessment, objective assessments come with their own
challenges, and these must be overcome before quantitative modeling can provide reliable results.
Objective analysis requires the information about the variables to be known as historical facts or observable, and the data
distribution must be such that analysis of the information can make strong assumptions based
on it. When these conditions cannot be met, subjective assessment methods must make up the deficit. Such is the case when human lives are at
stake, or a decision-maker is trying to decide between what they are willing to pay or willing
to accept. Ethical conflicts may prevent the assignment of values to a single human life,
however an accepted value can be placed on the ability to affect probability of death. The
formula for determining the value of a human life is r = x/Δp (equivalently, r = (1/Δp)x), where r is the implied value, Δp
is the change in the probability of death, and x is the amount a person is willing to pay (or accept) for that change;
for example, a hypothetical willingness to pay x = $200 to reduce Δp by 1/10,000 implies r = $2,000,000.
(Note: this method breaks down as Δp approaches zero.) [10] This formula is one example of how ethical decisions, often only
reached through subjective mechanisms, can be reasoned through objective means. Often in
the cyber-domain, there isn't enough information available, or the variability is too great, to
support strong statistical assumptions. In addition, the variables must have some sort of relationship or link to the decision-makers' utility in order
for them to interpret the results into some form that supports a rational and logical decision,
and a model is only as good as how nearly it represents the real world. Modeling the real
world within cyberspace is a challenging endeavor, and Van Loon says that “there are no
pure forms of cyber risks; there are only mutations and deviations” which we can only sense
“in its distribution or consequences”. [16, p. 160] Tackling this challenge, in an objective
sense, will require risk management programs to consolidate vast amounts of information
and to focus the analysis on assessing the consequences of an impact rather than describing the threats themselves.
Another challenge was described in the beginning of this chapter, where confidences varied in modeled data
relating to climate change. There will always be a point where an objective risk assessment
must transition to subjectivity for final judgment despite the level of detail in an objective
analysis. This is the case when socioeconomic values, politics, or intelligence must
synthesize with the objective analysis of the risk variables to produce decision thresholds
according to the needs of the organization and the decision-maker’s instrumental rationality.
[17] Objective functions set by the decision-maker only succeed if there are common values
and agreements across the organization, and this creates an additional burden to the
organization, which must invest large amounts of energy into shaping its culture to counter
the broad range of diversity that often hinders decision-making. Sometimes this effort is
unachievable, and objectives must be reduced to measurements, such as money and time, that
have a universal understanding to ensure risk exposure is universally perceived across the
organization. A model that overcomes these mentioned challenges and supports the objective
assessment of technological or cyber risks will enable organizations to progress into the post-modern era.
3.4 Post-Modernism
Prior to the modern age, the industrial revolution (or pre-modernism) was
characterized by man discovering ways to harness the elements of nature and his collective
capabilities to expand industrial output, and to make products and goods accessible to the masses.
The modern age, in turn, has been characterized by developing
techniques to control the effects of natural or human phenomena. According to Ulrich Beck,
the newfound ability to control these phenomena is what ultimately created our "risk-society",
which as David Garland warns is dangerously “spinning out of control”. [14] Many of these
risks generated through modernization never eventually become realized, yet they are
managed at great cost. One example occurred during the rush to ensure computers didn't crash globally following the changeover of
computer clocks at the start of the new millennium, a rush that could have resulted in less panic if
society were more reflexive in its modernization strategy. Reflexive Modernization is when
we confront the risks that are fabricated and introduced by society [14], and this
confrontation is necessary to reduce the complexities that hinder progress. Eventually this
phase will pass, and society will transition to post-modernism, where we begin to have
honest conversations across domains, questioning the actual value of progress, and analyzing all the information collected
from previous lessons to form a new model for dealing with the uncertainties and risks
society will face. This new model or paradigm will signal the end of modernism, and usher in
a new school of thought to help us cope with the 21st Century and the Post-Modern period.
In modern times, perceptions of new threats and catastrophes are being conceived
more quickly than society has time to react and manage these realizations. Society places an
enormous amount of pressure on politicians to solve our problems because the complexity of
problems has become so great that individuals are incapable of assessing risks alone. The
evaluation of food and drugs is one such example where government regulators are needed to verify the safety of products, because citizens have neither the expertise nor the time to reach these conclusions for themselves on top of all the other threats that pose a hazard. Once a
food or drug has been classified as “safe”, citizens can consume these products with
reasonable assurance that their safety has been essentially guaranteed. One of the primary
motivations behind modernism is to overcome any scarcity or need, but this effort produced technologies that came with unanticipated costs. [16] This cycle
created a surplus of risks and costs that now far exceed the original benefits those
technologies were intended to bring. In addition to coping with these risks and costs, citizens
must develop skills to utilize these new technologies. This began around the mid-20th
Century when magazines began including sections that provided demonstrations on how to
put these new technologies to use. Since this point, individuals have developed personalized approaches to self-education. This Google- and Wikipedia-fueled boom helped expand this revolution, and created a new
phenomenon called the “Death of Expertise”. [40] The “death of expertise” is incredibly
dangerous and represents the collapse of dialogue and trust between professionals and non-
professionals. The dangers of this collapse become evident when diseases once thought to be under control re-emerge because segments of the public follow the advice of bloggers rather than physicians. Social media has also enabled individuals to amplify their voices and discount the
advice of experts by "becoming as loud as the opposing 999,999". [M. Hopper, personal communication, 2016] The elements of social deconstruction and globalization have combined to form the modern process of individualization, which has expanded fundamental freedoms, and this process has hindered society's ability to collectively reflect by detaching individuals "from a person's social environment, and embedding into a world ordered and revealed by technology". [16, p. 26] There have been several responses to modernism, and two of those
are unfettered capitalism and religious fundamentalism. [15] Both oppose modernism, but
ironically embrace the capabilities introduced by it. Modernism has also led to an erosion of trust between science and religion, where the two are in constant competition; this erosion has led people to trust neither and to seek out answers for themselves, which has further accelerated the process of individualization. This cycle must end, otherwise the social fabric
that unifies us all will fall apart, but there are academics who believe the qualities that
characterize a post-modern period will enable society to find stability and security for the
future. Before we can get there, there are several challenges that must be overcome.
The first challenge is that risks are being generated faster than society's ability to contain them, and this rate of growth in risks corresponds to advances in technology. Deconstructing large systems, bureaucracies, and identities could also help to reduce complexity by turning larger
problems into smaller ones and solving those smaller ones individually. A model that adjusts
to the granularity of the problem is one such technique. Post-Normal Science, as suggested in the risk literature, would address a second challenge by presenting facts and values to the public as opposed to assumptions, because trust in the experts is lost when their positions or conclusions shift. Post-normal science would also alleviate the challenges of individualization, because as part of this process people desire to see the information for themselves, self-educate on the risk
variables, and then compare their own analysis against the recommendations or conclusions
of experts. This cross-comparison leads into the third and final transitional challenge where
society must find a common balance between non-professionals and professionals. The need
for experts that specialize in various specialties will never disappear, but neither will the
public’s desire to understand how these experts formed their conclusions. Accepting the
following terms between the common person and the expert will enable both to succeed in a
post-modern world: (1) the expert isn’t right 100% of the time, (2) the expert is more likely
than the common person to be right in their area of specialization, (3) expertise is a result of years of education and experience, (4) the common person's knowledge of a specialty cannot surpass that of the expert simply by browsing the web, and (5) the common
person’s analysis will always have less value than the expert’s in the eyes of an outsider. [40]
When asked how capable common people are in drawing effective conclusions for
themselves using the information available, LTC Hopper feels that "reliance on expertise will always be necessary for good decision-making and is crucial in making sound decisions". [M. Hopper, personal communication, 2016] By acknowledging these challenges and developing a framework that synchronizes the multiple risk management theories and
approaches mentioned throughout this study, it will become possible to create a model that is
equally functional, objective, and optimal for those assessing risk in the post-modern world.
“Good examples are like bells calling to worship.” ~ Old Scandinavian Saying
The framework proposed in this chapter builds on a concept that demonstrates potential for gained efficiency and optimization. [15] An optimal
risk management framework must include a model such that the three fundamental questions
of risk management are most efficiently and accurately answered, and again they are: (1)
what can go wrong, (2) what is the likelihood that it could go wrong, and (3) what are the consequences if it does go wrong? The current framework provides an effective model that reveals the answers, yet the framework unintentionally introduces
problems that increase complexity, propagate uncertainties, and decrease the ability to
forecast the consequences of available policy options. Additionally, the current framework
requires far more assessors to precisely estimate risk exposures than are available,
fails to address uncertainties that could further describe the true state of the system being
analyzed, fails to unify the organization under a single decision process, lacks the
opportunity for decision-makers to create their own thresholds or define variables according
to their terms, and doesn’t allow the decision-maker to abstract or explore the data in real-
time, in order to gain insight into the dependencies between risk management options.
Abating these problems can be accomplished by maximizing the use of objective variables and applying selected assumptions under culture, decision, rational, and game theories to develop a framework that will meet the needs of hierarchical organizations today and in the
foreseeable future. This chapter also introduces a Decision Support System (DSS) concept
with potential to aid implementation, maximize transparency and communication, and keep
members operating within the bounds of the framework. The synthesis of these processes,
theories, and an accompanying DSS merge to present a successor to the current RMF
outlined in NIST SP 800-37; the successor being “Risk Management Framework 2.0”.
The primary objectives of the RMF developed by NIST are to improve information
security, strengthen processes, and encourage reciprocity among federal agencies. [2] The
proposed framework builds upon these objectives, and introduces concepts that enable
organizations to: (1) simultaneously examine multiple objectives, (2) limit bias and
subjectivity during the assessment process by converting subjective risk contributors into
quantitative values using tools that measure the attack surface and adversarial effort, (3)
present likelihood and impact as real-time objective variables that reflect the state of the
organization and are grounded on sound mathematical and scientific principles, (4) aggregate contributions from every tier with maximum transparency, (5) achieve greater representation of the real scenario and strive to model future
scenarios, (6) adapt to the preferred granularity, dimensions, and discovery of the decision
maker, and (7) improve the decision maker’s ability to select the most optimal alternative by
reducing the decision to rational logic. Proposed solutions to meeting these additional
requirements will be provided in the following section to show how objective risk assessment strategies, through a combined approach, enable the framework to reduce complexity and uncertainty.
4.1 Requirements
The current framework examines organizational objectives through subjective assessment techniques, while the methods and model proposed here will convert organizational priorities into consequential impacts that have potential to prevent the organization from meeting its objectives. A deterministic model will enable the organization to fully understand the scenario by revealing how each of these risk factors reciprocates across the organization based on selected policy options, and applying assumptions under Culture Theory can help organizations discover solutions to
managing these conflicts of interest. One assumption is that each member under the culture
theory model possesses a small window of truth and can contribute to the analysis.
Contributions from every member of the framework will ensure proposed policy options reflect the interests of the whole organization. Multiple objectives must be taken into account for everyone to feel connected to the analysis and the results, otherwise members of the framework become detached from the problem based on perceived exclusion. The organization must maintain a culture that is conducive to finding a balance or agreement on the values of the organizational objectives. This culture can be reinforced through the framework by including a mechanism that causes members to repeatedly engage the mission and priorities of the organization. Together, these elements of the risk analysis strategy make the examination of multiple objectives a viable and optimal
modeling solution.
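As a minimal illustration of the conversion of priorities into consequential impacts, the sketch below (in Python, using hypothetical mission functions and requirements not drawn from this study) inverts each requirement for success into the impact that would prevent it:

    # Hypothetical requirements for success, grouped by business or mission function
    REQUIREMENTS = {
        "process payroll": ["HR database available", "payroll records intact"],
        "serve customers": ["web portal reachable", "customer data confidential"],
    }

    def invert_to_risk_factors(requirements):
        """Pre-mortem inversion: each requirement for success becomes the
        consequential impact (risk factor) that would prevent that success."""
        return {function: ["loss of: " + req for req in reqs]
                for function, reqs in requirements.items()}

    for function, factors in invert_to_risk_factors(REQUIREMENTS).items():
        print(function, "->", factors)

The sketch is only meant to show the direction of the conversion; the actual decomposition would follow the organization's own mission statements and priorities.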
Subjectivity is created through the uncertainties, biases, and perceptions that plague
risk assessments, and every metric evaluated in the current risk model is based on the assumptions of the assessor. The proposed model limits evaluation to empirical evidence (such as event logs, damage reports, or attack surface), and by shifting the focus from threats to consequences it removes much of this subjectivity. Information-gap analysis is one such process included in the framework; it proposes that the decision-making body
avoids modeling risk factors that are highly uncertain (for example, a threat’s capabilities),
and instead focuses on satisfying critical requirements and exploiting favorable opportunities. Perceptions of risk are often sought through social discourses, and the Social Amplification of Risk Framework (SARF) provides an explanation for how risk is translated from its initial realization through resultant outcomes. A study of the SARF process benefits the model by developing an understanding of the
subjective filters that influence how risk is interpreted and communicated across the
organization, and ensuring that only objective observations influence any modeled severity of
risk. Perhaps the most amplified and exaggerated variable in risk analysis is the threat
variable, and assuming all cyber threats have equal capabilities ensures more robust and
resilient strategies are developed and implemented. Perceptions of risk cause members to
develop prescriptive measures that are expected to mitigate an outcome, but these measures
create a false sense of security. The alternative is to develop proscriptive measures that result
from a pre-mortem analysis, where members assume a failure has occurred and determine the
steps needed to return the organization to full operational capacity. The framework supports a
proscriptive strategy by addressing these catastrophes prior to the analysis rather than during
or towards the conclusion of the analysis. Objective analysis does have its limits in
cyberspace, and this study suggests representing these variabilities (or lesser degrees of certainty) using statistical methods such as Confidence Intervals, Expected Values, Poisson Distributions, or the Monte Carlo Method for numerical
representation. All of these methods have potential to represent the strength of data
distributions within the proposed model, and the decision as to which method has the greatest value is left to the decision-makers, who may select from any offered recommendations or draw on their prior experiences. A Brier Scoring Rule
could be added to the framework to supplement an effort to manage the perceptions of its
members, by penalizing assessors for valuing risk variables outside the scope of reality. In summary, the proposed framework will limit subjectivity and bias by restricting modeling to objective variables, managing the
perceptions and expectations of members (or penalizing them), and by taking a proscriptive
approach.
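As a hedged illustration of how such a penalty could work, the short sketch below computes a Brier score for an assessor's stated confidences against what actually occurred; the forecasts and outcomes are hypothetical:

    def brier_score(forecasts, outcomes):
        """Mean squared difference between stated confidences (0..1) and observed
        outcomes (1 = the event occurred, 0 = it did not). Lower is better;
        persistently high scores flag over- or under-confident assessors."""
        assert len(forecasts) == len(outcomes)
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # An assessor who claimed near-certainty about events that mostly never occurred
    print(brier_score([0.9, 0.8, 0.95, 0.7], [0, 1, 0, 0]))   # ~0.56, a poorly calibrated record

How the score would be folded into rewards or penalties is left to the organization; the point is only that calibration can be measured rather than assumed.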
Any effort to model risk must be grounded on sound mathematical and scientific
principles, and an assumption under Culture Theory states that hierarchical organizations
operate under the conviction that science and technology give them the ability to control
phenomena that threaten them. Since scientific principles are grounded on factual
observations, any effort to model risk must therefore be based on a current or historical
analysis of objective risk variables. The three variables considered objectively in the
proposed model are: vulnerability, likelihood, and impact; excluded from the model is any threat dimension due to its subjective and/or random properties. (See Table 2) All of the variables provided in NIST SP 800-30 are evaluated solely based on assumptions, while the proposed
variables are objectively derived measures of each contributor to the risk scenario.
Vulnerability is determined by comparing the attack surface of a resource against the amount of effort needed to defeat that resource. Likelihoods are assigned by multiplying the resource's vulnerability with its value to the adversary, and impact is expressed in terms of recovery time or cost estimates if the catastrophic outcomes were to occur. These
recovery cost/time estimates are reduced by combining any redundant capabilities for an
organization's net recovery cost. Plotting the likelihood (X) and impact (Y) variables models the state of the organization as if the threat outcomes had occurred. Each of these variables is synchronized with a cyber-event database or other repository to reflect the most accurate state of the organization. This
framework does so in order to answer the Royal Society’s call for “better estimates of actual
risk based on direct observation" by making every effort to limit the evaluation of risk to directly observable variables.
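The relationships just described might be sketched as follows; the attribute names and numbers below are illustrative assumptions, not values defined by this study:

    from dataclasses import dataclass

    @dataclass
    class Resource:
        name: str
        attack_surface: float    # objective measure of exposed interfaces
        defeat_effort: float     # effort needed to defeat the resource
        adversary_value: float   # relative value of the resource to an adversary
        recovery_cost: float     # time or money needed to restore the resource

    def damage_effort_ratio(r):
        # DER = (attack surface / effort to defeat) * value to the adversary
        return (r.attack_surface / r.defeat_effort) * r.adversary_value

    def risk_point(r):
        # X = likelihood (DER-based), Y = impact (recovery time or cost)
        return damage_effort_ratio(r), r.recovery_cost

    web_server = Resource("web-server", attack_surface=12.0, defeat_effort=4.0,
                          adversary_value=0.5, recovery_cost=36.0)
    print(risk_point(web_server))    # -> (1.5, 36.0)

Each such point could then be plotted on the likelihood-impact plane described above and refreshed whenever the synchronized data source changes.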
The framework must also function such that risk is seen from the perspectives of others to construct a larger or smaller
picture of the risk scenario. This is achieved by maximizing transparency throughout the
decision process and promoting maximum communication across tiers within the
organization. Every member of the framework must be part of the process, and able to realize
how their input contributes to other processes within the framework. The framework
achieves this goal by unifying the process under a single DSS that is server-based and accessible to every member. An approach considered reflexive would assume that successful decision-makers attempt to view
the problem from the perspectives of others, and that members of the organization desire to
see themselves in the results of the analysis. Under a single platform that unifies the decision
process, members at different tiers can witness processes at levels below and above them,
and understand how their inputs impact the results of the analysis as presented to the
decision-maker. Members could feel like they are being marginalized by the process or the
decision-maker could fear the reactions of subordinates to a decision, but a transparent process helps mitigate both of these concerns.
The validity of a model is the level at which it accurately represents the real system or
scenario, and the ability to model future scenarios is dependent on any distributions that can
be extracted from past data and/or to the degree direct observations affect future outcomes.
The RFRM framework suggests that a Hierarchical Holographic Model (HHM) is the most
accurate representation of the current risk situation [9], and includes an established set of
requirements that enable the organization to succeed. This procedure is also the first phase of
the RFRM methodology, and is carried over to the framework proposed by this study where
those requirements are inversed as consequential impacts that prevent success. There is a
general belief that complexity brings the model closer to reality, but this assumption is a
common fallacy. Assumptions fill gaps that objective analyses cannot fill, and both must be handled with care. One technique for making assumptions in the absence of exploitable data distributions is by using the Principle of Indifference, where an equal likelihood is assigned to each outcome by dividing one by the number of possible outcomes for a single event. This assumption is also a weaker substitute for studying the relationships between previous organizational states, actions, and outcomes
versus the relationships between a future threat and outcome. Assuming that the next state
depends on the current state and actions chosen is a hypothesis under the Markov Decision
Process, and helps model the future by maximizing the use of prior information to predict
outcomes. [19] An additional challenge with determining the future is to reduce the effects
observation has on the outcome or actor perpetuating the threat as a result of the Heisenberg
Uncertainty Principle or the Hawthorne Effect. A framework that considers the fourth axiom,
under the four “Next Generation Cyberdefense Axioms” proposed by Chris Williams where
active defenses are configured such that attackers may enter a system but cannot proceed
beyond a certain point, would limit the effects observation has on a threat. [27] Additionally,
creating a greater distance between the subject and a threat-object reduces this impact.
Greater distance is achieved by excluding the threat variable, and focusing on the conditions that make undesirable outcomes possible. Intelligence agencies should still gather as much information on threats as possible and
perform threat analysis in parallel with a risk analysis for risk-control development and
evaluation as opposed to allowing threat variables to influence the risk model. All of these techniques combine to produce a model where the most undesirable outcomes are conditionally guaranteed, the current scenario is
more accurately represented, and modeling the future is a less resource-intensive effort.
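As a brief sketch of the Markov and Principle of Indifference assumptions named above, the code below estimates a next-state distribution from a hypothetical history of (state, action, next state) observations, falling back to an even spread when no prior data exist:

    from collections import Counter

    # Hypothetical history of (current state, action taken, resulting state) observations
    HISTORY = [
        ("degraded", "restore-backup", "operational"),
        ("degraded", "restore-backup", "operational"),
        ("degraded", "restore-backup", "degraded"),
        ("degraded", "isolate-segment", "degraded"),
    ]
    ALL_STATES = ["operational", "degraded", "failed"]

    def next_state_estimate(history, state, action, all_states):
        """Estimate the next-state distribution for (state, action) from prior
        observations, per the Markov assumption that the next state depends only
        on the current state and the action chosen; with no prior data, fall back
        to the Principle of Indifference and spread the estimate evenly."""
        counts = Counter(s2 for s1, a, s2 in history if (s1, a) == (state, action))
        if not counts:
            return {s: 1 / len(all_states) for s in all_states}
        total = sum(counts.values())
        return {s: counts.get(s, 0) / total for s in all_states}

    print(next_state_estimate(HISTORY, "degraded", "restore-backup", ALL_STATES))
    # -> operational: 2/3, degraded: 1/3, failed: 0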
6. Adapt to the preferred granularity, dimensions, and discovery of the decision maker.
A model that adapts to the preferences and utilities of the decision-makers will
compel active engagement by them. By establishing values for utilities (or diminishing
utilities) and through an assumption under Rational Action Theory, an action should be
chosen that is optimal to the decision-maker’s preferences. Preferences include how the
model presents the analysis, where thresholds are set, and how variables are defined. This
study proposes that the optimal opportunity to establish preferences is prior to the analysis to
ensure variables can be evaluated as they link to the decision-makers’ utilities. The decision-
makers will have a list of options for representing their utilities, and three examples of these
options are an expression of his or her subjective utility, expected utility, or an application of
their optimism-pessimism estimates. The decision-makers also have the option of removing
risk variables that they feel lack relevance or relation to the decision. It is fair to leave this
choice up to decision-makers, since they will have to answer for any outcomes, but the
variables must be clearly defined before any are selected or removed for relevance. A model
primarily tailored to decision-makers will encourage exploration when the results of the
analysis are presented, which will enable them to fully understand the magnitude of the
problem, and discover dependencies between conditional outcomes and the impact current
decisions could have on future policy options. This exploration is made exclusively possible
by incorporating a DSS that allows the decision-maker to modify the parameters or visually
compare options within a single model. Deeper exploration is allowed by adjusting the
granularity of various risk scenarios and deconstructing the agreed upon risk factors into
contingent risk factors that form the aggregate scenario. Potential for this deeper exploration
and more will be demonstrated in the DSS concept later in this chapter.
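As one hedged illustration of how these preference options could be encoded, the sketch below applies the Optimism-Pessimism Rule mentioned above; the payoff values and weighting are hypothetical:

    def optimism_pessimism(outcomes, alpha):
        """Hurwicz-style Optimism-Pessimism Rule: score an alternative as a
        weighted blend of its best and worst outcomes, where alpha is the
        decision-maker's degree of optimism (0 = fully pessimistic)."""
        return alpha * max(outcomes) + (1 - alpha) * min(outcomes)

    # Hypothetical payoffs (e.g., avoided recovery costs) for two candidate controls
    control_a = [5, 20, 60]     # high upside, poor worst case
    control_b = [15, 25, 30]    # modest upside, better worst case
    alpha = 0.2                 # a fairly pessimistic decision-maker
    print(optimism_pessimism(control_a, alpha))   # ~16: dragged down by the worst case
    print(optimism_pessimism(control_b, alpha))   # ~18: preferred at low optimism

The same pattern could be swapped for a subjective or diminishing utility function, since the framework leaves the choice of representation to the decision-maker.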
In order to meet the final requirement, organizations must maximize the use of
objective variables and apply a cost function to responsibly compare alternatives. The cost
function must produce a variable that has universal value to everyone operating within the
framework, and the two most universally understood values are time and money. Game
Theory assumes that rational players select the alternative that ensures the least regret and
dominates all others, and Rational Action Theory assumes that a rational ordering of actions
is based on the decision-maker’s understanding of logic. The most logical alternative is also
the most optimal, and the optimal alternative is the one that imposes minimal cost while
maximizing benefits for the organization. The proposed framework presumes discovering
minimal cost is achieved by selecting the most cost-effective control that meets the aims of
the decision-makers, and maximum benefits are achieved by implementing controls that
reciprocate among multiple risk factors and reduce risk severity below the thresholds
established prior to the analysis. Assumptions under Game Theory are applied where the assessor acts on behalf of the adversary, performing a cost-benefit analysis (i.e. DER) on behalf of the threat to determine the likelihood of a successful attack, and assumptions under Decision Theory are then applied to compare alternative courses of action. Combinations of DER and risk time/cost estimates
represent the state of the organization for each scenario, and comparing these states with
various available courses of action reduces the decision to rational logic. Alternative courses
of action are compared using the Dominance Principle, an assumption under Decision
Theory, where one action and state combination is proven to dominate another. The
decision with the least regret is the choice that remains most logical and optimal over periods
of time, and this is achieved by demonstrating the potential that one decision could influence
another with respect to time. Ultimately, exchanging probabilities for DER to represent likelihood gives decision-makers the flexibility to make a decision according to their own preferences and understanding of the
scenario.
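A minimal sketch of the Dominance Principle comparison described above, using hypothetical recovery-time estimates for two controls across three consequential-impact states:

    def dominates(costs_a, costs_b):
        """Course of action A dominates B if it is no worse in every state and
        strictly better in at least one (lower cost is better here)."""
        assert len(costs_a) == len(costs_b)
        no_worse = all(a <= b for a, b in zip(costs_a, costs_b))
        strictly_better = any(a < b for a, b in zip(costs_a, costs_b))
        return no_worse and strictly_better

    # Hypothetical recovery-time estimates (hours) for two controls across three
    # consequential-impact states produced by Tier 2
    control_a = [10, 24, 40]
    control_b = [12, 24, 55]
    print(dominates(control_a, control_b))   # True: A is never worse and sometimes better

When neither alternative dominates, the decision falls back to the decision-maker's stated utilities and thresholds rather than to an arbitrary tie-break.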
RMF 2.0 is a decision and approval process that can be initiated by anyone within the
organization where the problem is presented to upper management for a decision on whether
to pursue a course of deeper analysis. This is a cyclical process where Tier 1 first establishes
priorities and risk thresholds, Tier 2 then produces risk factors that threaten business or
mission processes, and Tier 3 assesses risk based on the current state and conditions of the
organization. Following Tier 3’s assessment, the analysis is presented to Tier 2 for review of
the current security plan and the opportunity to prepare recommendations for Tier 1. Tier 1
reviews the analysis and Tier 2’s recommendations, and then makes a final decision based on
what is presented. (See Figure 3) These steps are described in much greater detail in the
following section.
The nine steps of the framework reveal the susceptibility of the organization to a risk scenario and the expected performance of various
security or policy options, thus allowing decision-makers to abstract the information and
make comparisons according to their preferences. (See Table 3) This process begins at Step 1
by identifying a need for risk analysis and defining the subject or system to be analyzed, including an explanation of the system's security requirements. These definitions enable Tier 1 to process Step 2, which involves establishing priorities to be kept in mind throughout the analysis and setting the decision-maker's utilities and thresholds. Highly uncertain risk variables are removed from the process through an information-gap analysis during this
step. (Reference Info-Gap Procedure) Once priorities and preferences have been established
at Tier 1, the form moves to Tier 2, which develops in Step 3 the consequential impacts most severe to operations in the form of an HHM. This analysis is
conducted from a pre-mortem perspective that assumes failure has occurred, and determines
the consequential impacts that led to this failure. These impacts become the risk factors that
guide the analysis conducted at Tier 3. Step 4 involves determining all the organizational
resources that support the business or mission functions at risk, and assigning a likelihood
variable to each of those resources through an objective assessment of each individual DER.
Once likelihoods have been evaluated, Tier 3 then links each existing security control or policy from NIST SP 800-53 ("Security and Privacy Controls for Federal Information Systems") to either its contribution to modifying the adversary's DER or its contribution to reducing the recovery cost
following a failure related to one of the risk factors developed in Step 3. Tier 3 then conducts
an impact analysis in Step 6 where the impact is described as a Risk Time (or Cost) Estimate
for the amount of time or money it would take to recover from any one of the consequential
impacts developed by Tier 2 in Step 3. Once the impact analysis is complete, Tier 3 submits
the analysis to Tier 2 for review. It is at Step 7 where Tier 2 reviews the current security
plan, and compares existing security controls against available controls. Additional controls
are selected or existing ones removed in this step as Tier 2 determines which controls are
optimally suited to defending the organization against a particular risk factor. Tier 2 then
submits their recommendations to Tier 1 for final approval at Step 8. This step is when the
most optimal course of action is revealed to the decision-maker as all decision variables and
tacit knowledge merge to form a personal aggregated assessment of the risk scenario. Step 9
is a continuous process, following final approval, where the security plan is constantly
reflected upon for changes in the environment or conditions that might alter the assessments
of impact or likelihood. A suggested synchronization strategy between DAP and RMF 2.0 is provided in Table 3.
Table 3. - NIST RMF vs. DAP vs. RMF 2.0 (w/ Applicable Organizational Tiers)
The proposed risk model’s output is projected onto a two-dimensional plane, where
the X and Y axes represent Likelihood and Impact respectively. (See Figure 4) The two
additional dimensions of Uncertainty (or statistical inferences) and Time are represented as
(z) and (t) independently so as to not disassociate the former variables from the data source.
Uncertainty is graphed as a radius extending from the {X,Y} plot of the risk factor being
evaluated that represents the variability among the objective data. If the decision-maker
prefers, there are alternative methods to measure uncertainty which will be provided later in
this chapter. Time is a fourth dimension that represents the projected change in risk over
time, and there is also an opportunity for Tier 2 to plot periods of time that cover critical
missions where risk is expected to peak. Classes of risk are categorized as either "High, Medium, or Low", and definitions of these thresholds are unique to each analysis as decided by the decision-maker. This form of presentation supports communicating and articulating risk to top-tier decision-makers. This study suggests that
abstracting the data according to these guidelines mentioned above will help: (1) gain the
attention of audiences, (2) enable audience members to retain information, and (3) influence
behaviors within the organization. The proposed model achieves the first aim by illustrating
trends in risk along lines of magnitude and clearly identifying those risk scenarios that are
most unacceptable to the organization. The model achieves the second aim through its
simplicity and its clearly defined thresholds on a two dimensional plane. The third aim is
achieved through impactful and effective presentation that provides every member operating
within the framework with an understanding of the significance of each policy selected.
A common presentation of the data also reduces biases or disagreements. Modeling risk in these manners will additionally enable the
model to illuminate hidden patterns, hold attention, inspire, and promote exploration. [25]
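As an illustration of one statistical option for the uncertainty radius described above, the sketch below resamples a set of hypothetical recovery-cost observations (a simple Monte Carlo bootstrap) to size the radius drawn around a plotted risk factor:

    import random
    import statistics

    # Hypothetical recovery-cost observations (hours) for one risk factor
    observations = [20, 26, 31, 24, 40, 22, 28]

    def uncertainty_radius(data, trials=10_000, seed=1):
        """Monte Carlo (bootstrap-style) estimate of the spread of the mean:
        resample with replacement, recompute the mean each time, and use the
        standard deviation of those means as the plotted radius."""
        rng = random.Random(seed)
        means = [statistics.mean(rng.choices(data, k=len(data))) for _ in range(trials)]
        return statistics.stdev(means)

    center_y = statistics.mean(observations)     # impact value plotted on the Y axis
    print(round(center_y, 1), round(uncertainty_radius(observations), 1))
    # impact plotted near 27 hours with a radius of roughly 2 to 3 hours

Confidence intervals, expected values, or a Poisson fit could be substituted here depending on the representation the decision-maker selected before the analysis.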
The primary decision-maker would not be able to select the most optimal alternative
if the model were not able to abstract various relationships or were not tailored to his or her
preferences. These preferences for model presentation are established prior to the analysis,
and the model presents risk such that the scenario can be viewed from multiple perspectives as the decision-maker explores the data. The preferred representation of uncertainty (or statistical inference) and the criteria that define the thresholds are also both established
by the decision-maker prior to the analysis. Defining these model attributes beforehand helps
guide and shape subordinates’ perceptions as if they were viewing the model through the lens
of the decision-maker. Monetary cost is one of the most critical decision variables, and this
model aligns cost thresholds with acceptability thresholds by estimating the amount of monetary capital needed to shift all the variables in one risk category to the lesser threshold.
Modeling risk according to the preferences of the decision-makers will generate greater engagement with, and trust in, the results of the analysis.
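A hedged sketch of the cost-threshold alignment just described; the controls, costs, and risk-factor coverage below are hypothetical, and the greedy selection is only one possible way to estimate the capital required:

    # Hypothetical candidate controls: (name, annual cost, set of High risk factors
    # the control is expected to push below the High threshold)
    CANDIDATES = [
        ("offsite backups",   40_000, {"data loss"}),
        ("redundant uplink",  25_000, {"network outage"}),
        ("managed detection", 90_000, {"data loss", "network outage"}),
    ]

    def cost_to_clear_high(candidates, high_risks):
        """Greedy estimate of the capital needed to move every High risk factor
        into the next lower category: repeatedly buy the control with the best
        coverage-per-dollar until nothing in the High band remains."""
        remaining, spend = set(high_risks), 0
        while remaining:
            name, cost, covered = max(candidates,
                                      key=lambda c: len(c[2] & remaining) / c[1])
            if not covered & remaining:
                break   # nothing left can be mitigated by the available controls
            spend += cost
            remaining -= covered
        return spend

    print(cost_to_clear_high(CANDIDATES, {"data loss", "network outage"}))   # -> 65000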
The value of this new framework is that it addresses the NIST model's oversights and more accurately answers the three questions fundamental to all risk assessments. Returning to the Gartner
Maturity Model mentioned in Chapter 1, this framework enables organizations to select the
most optimal cyber-defense strategy among a number of alternatives where the competitive
advantage is assumed over adversaries. The optimal strategy is selected by comparing the costs and benefits of the available alternatives; an advantage is not reached when one opponent is able to defeat their opponents outright, but when one opponent has a strategy that the others cannot improve upon. The framework also unifies the organization by maximizing communication and transparency between different hierarchical levels in the decision process, and by
enabling members to see how their analytical contributions impact the risk model.
Unification is furthered through a mutual understanding of the priorities and values of the
decision-maker. Since the organization’s state is objectively described and values are
universally understood, there is little opportunity for the assessor to inject personal bias or
subjectivity into the analysis. Objective analysis also reduces uncertainty and complexity, which grow when assumptions are allowed to drive the analysis as opposed to the data speaking for themselves. These benefits and more are all achieved when the framework is reinforced by an accompanying DSS, which serves as a companion to the proposed framework to aid implementation, unify the organization under a
single decision process, record all transactions for later inspection or review, and keep
members operating within the bounds of the framework. The three fundamental components
to a DSS are the database, the model, and the user interface. [41] The DSS must take several
design factors into consideration in order to be effective: (1) work analysis, (2) memory
limitations, (3) attention spans, (4) cognitive processing abilities, and (5) decision-making
abilities. [19] Work analysis is how the DSS frames and explains the decision variables such
that they enable the problem to be most accurately described. The DSS must also ensure that
processes are compartmentalized such that analysts and decision-makers aren’t overwhelmed
or become confused by the problem because too much readily available information can
exceed the memory or attention capacity of members. The DSS must also provide visual aids
that explain the variables or train members on some of the complexities associated with the
programming. Additionally, the DSS must reinforce current procedures that correlate to the
way members execute processes, and this issue is corrected by implementing a forcing
mechanism that causes all members of the framework to revisit previous processes and the
decision-maker’s set of priorities before proceeding. The DSS considers all these challenges
and guides organizational members through the nine-step framework proposed under RMF
2.0 to ultimately enable organizations to determine the most optimal defense strategies for their environments.
The RMF 2.0 DSS is a server-based user-friendly collaborative tool that is completely
transparent and accessible to everyone operating within the framework. The initiator creates
the form using the template provided by first naming the decision process according to the
nature of the problem to be analyzed. (See Figure 5) Next, the initiator includes the most
current mission statement for the organization which will be revisited by every member that
provides input or reviews this document. The initiator also provides the points of contact for
each member representing the tier’s approval authority in Step 1 - “Select ORG Tier”. These
names are synchronized with an email database so that the leadership at each level is notified when the form requires their input. Each member that opens the form must return to Step 1 to select their operational tier so that the mission and priorities listed in
the next step are reinforced. Once the form has been prepared and leadership at each tier has
been identified, the form is passed to an appointed representative in Tier 1 to insert the
priorities and utilities of the highest decision-maker in Step 2 - “Set Priorities and DM
Utility”. Utility Factor can include a mathematical formula (for example, Subjective Utility
Function or Diminishing Utility) that diminishes or increases values over time, or any other
intrinsic value system of the representative’s choice. Another utility option is to apply the
Optimism-Pessimism Rule where the decision-maker offers his or her degree of optimism (or
confidence) to be applied to each objective variable. Once Step 2 is complete, the form
remains with Tier 1 so that the decision-maker can define risk and cost thresholds for the
most relevant objective variables in Step 3 - “Set Risk Thresholds”. The decision-maker also
has the option of selecting how uncertainty will be represented in the modeled output, with options such as Confidence Intervals to illustrate certainty based on either historical records or expert opinion. For
convenience, the decision-maker also has the option of using handles to drag or expand each
risk category as opposed to manually entering the acceptability criteria. The flexibility in
Step 3 might not always be exercised, but according to LTC Matthew Hopper “the ability to
customize the analysis might not be needed with each decision, but the ability would still be
preferred.” [M. Hopper, personal communication, 2016] Once the decision-maker has
established the parameters of the assessment and is comfortable with the representation, the
Tier 1 authority clicks “Approve” and the form is routed to Tier 2 to process Step 4 - “Set
Timeframe and Risk Factors”. (See Figure 6) This step includes setting a reasonable
timeframe for the analysis, developing risk factors that threaten mission or business
processes, and optionally identifying critical missions during the assessment period. The risk
factors are developed using a proscriptive approach that views risk outcomes as failures
through pre-mortem analysis. There is no limit to the number of risk factors or critical
missions that can be identified, and each input created is displayed within the model. Tier 2
also has the option of selecting a number of glyphs for each risk factor that best connects the
subject to the issue; for example, a building glyph to represent a risk factor of
physical damage to headquarters. Once Tier 2 has reached an agreement on the risk factors of
Step 4, the appointed authority clicks “Approve” and the form is routed to Tier 3 to complete
Step 5 - “Assess Vulnerability and Likelihood”. The goal of Step 5 is to determine the
likelihood that each resource that supports the business or mission process identified in Step
4 can be defeated. This likelihood is not a probability, but a numerical representation of the
Damage-Effort Ratio (DER) and relative value for that particular resource. The DER is a
vulnerability measure of the attack surface of a resource divided by the amount of effort
needed to defeat it, and then multiplied by the value of the resource to the adversary; value
can be assessed manually or through an attack-graph algorithm that measures the attack
surface and adversarial effort for network-connected resources. (See Figure 7)
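The attack-graph measurement is not specified in detail here, but as one hedged sketch, adversarial effort could be approximated as the fewest exploit steps needed to reach a resource in a directed attack graph; the graph and node names below are hypothetical:

    from collections import deque

    # Hypothetical attack graph: edges point from a foothold to the next
    # resource an adversary could reach by exploiting one weakness.
    ATTACK_GRAPH = {
        "internet":   ["web-server"],
        "web-server": ["app-server"],
        "app-server": ["database"],
        "database":   [],
    }

    def exploit_effort(graph, entry, target):
        """Approximate adversarial effort as the fewest exploit steps
        (shortest path) needed to reach the target from the entry point."""
        seen, queue = {entry}, deque([(entry, 0)])
        while queue:
            node, steps = queue.popleft()
            if node == target:
                return steps
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + 1))
        return float("inf")   # target unreachable from this entry point

    print(exploit_effort(ATTACK_GRAPH, "internet", "database"))   # -> 3

A production attack-graph tool would weight edges by exploit difficulty rather than counting hops, but the shape of the calculation is the same.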
Once Tier 3 has completed Step 5, the assessor "Sets Existing Controls" in Step 6. (See Figure 9) The
assessor begins this step by clicking on the "Control Worksheet" button, which opens a separate document that lists all the controls provided in NIST SP 800-53 and provides space to manually enter custom security controls. (See Figure 8) Clicking on the "Control
Reciprocity” button creates an additional column in the worksheet that enables the assessor
to show reciprocity between single controls and multiple risk factors. The worksheet also
provides the expected performance of each control as to how effective they are at preventing
defeats. Once the form is completed, the assessor enters the number of security devices
deployed and the number of personnel (PAX) required to administer that control. The
assessor clicks “Approve” once an inventory of security controls is complete, and moves
onto Step 7 - "Assess Impact and Review". Tier 3 assesses impact as a function of the recovery time or cost imposed upon the organization based upon previous occurrences. If the impact hasn't occurred before,
then the assessor collaborates with other departments to determine what it would cost the
organization to return to operational capacity should the event take place. As impacts are
assessed, these variables are combined with the likelihood estimates produced during Step 5
to model the risk scenarios provided by Tier 2. The assessor can also view the total
investment of the security program as a combination of monetary and human capital. Once
the assessor is confident in the accuracy of his or her input, the assessor clicks “Approve”
and “Submit for Review” which routes the form to Tier 2 to develop recommended controls
in Step 8 - “Review and Recommend Controls”. (See Figure 10) The Tier 2 representative
opens up a control worksheet similar to the worksheet used by Tier 3 to compare alternatives
and create recommendations. (See Figure 11) The primary difference between the two
worksheets is that Tier 2’s worksheet shows the annual cost of each control. A side-by-side
cost comparison is shown in the process window that compares the current security
investment and the additional investment required to implement the recommended controls.
The model shows the change in future risk, and is based on the demonstrated performance of
various controls. Once Tier 2 has approved their recommendations, the form is routed to Tier
1 to complete Step 9 - “Finalize Assessment”. In this portion of the document, the decision-
maker is presented with a table showing the recommended controls to reduce the impact of
various risk factors and a model showing comparisons between recommendations and the
current security plan. Decision-makers can open Tier 2’s control worksheet and modify their
recommendations based on a deeper analysis of the variables that contribute to the risk
scenario. Decision-makers can also select, de-select, or create additional dimensions at this
point to help them gain a better perspective of the scenarios represented in the model. They
can also double click on each risk factor to adjust the model’s granularity showing how the
individual resources plot on the graph according to their likelihoods and impacts following
a selected course of action. Decision-makers can also switch between alternative representations of uncertainty to compare the estimates of various methods. Once the top decision-maker is
comfortable with the analysis and fully understands the potential impacts of his or her
decisions, then the form is finalized and available for everyone to review. The form and all
transactions are archived to enable future inspection, review, or further updates. Since the
document is synchronized with a cyber-event database, it can live and update risk according
to any changes occurring in the environment that could affect the risk variables.
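The routing just walked through could be captured in the DSS as a simple ordered structure; the sketch below is illustrative only and uses the quoted step names and responsible tiers from the walkthrough:

    # Illustrative encoding of the RMF 2.0 DSS routing: (step, responsible tier, action)
    RMF2_ROUTING = [
        (1, "Initiator", "Select ORG Tier"),
        (2, "Tier 1",    "Set Priorities and DM Utility"),
        (3, "Tier 1",    "Set Risk Thresholds"),
        (4, "Tier 2",    "Set Timeframe and Risk Factors"),
        (5, "Tier 3",    "Assess Vulnerability and Likelihood"),
        (6, "Tier 3",    "Set Existing Controls"),
        (7, "Tier 3",    "Assess Impact and Review"),
        (8, "Tier 2",    "Review and Recommend Controls"),
        (9, "Tier 1",    "Finalize Assessment"),
    ]

    def next_step(current_step):
        """Return the tier and action that receive the form after approval."""
        for step, tier, action in RMF2_ROUTING:
            if step == current_step + 1:
                return tier, action
        return None   # Step 9 approved: the form is finalized and archived

    print(next_step(4))   # -> ('Tier 3', 'Assess Vulnerability and Likelihood')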
The primary goals of the DSS are to facilitate decision-making, educate decision-
makers on the various contributors to risk, and enable them to explore the model to find the
most optimal solution while keeping members operating within the bounds of the framework.
Many of the processes that model the data and perform analytics are invisibly executed by automated systems, and there are guides throughout the form that educate members of the
framework on the various risk variables and available security controls. Much of the analysis
is objectively collected from various databases that are synchronized with the DSS to
maintain authorizations, evaluate the effectiveness of controls, and model the risks. The
reliance on automated data collection processes reduces complexity by lessening the need for
assumptions; every input is based off historical data, empirical observations, or a thorough
cost-analysis. Integrating a DSS into the framework additionally reduces the complexity of
the risk assessment by enabling everyone to visualize their contributions to the risk model, an approach far superior to the current methods recommended by NIST. The following
chapter will compare both the NIST framework and the DSS-reinforced framework in a
detailed evaluation of twenty characteristics generally agreed upon by risk experts as essential.
“In the end, risk is linked to mission assurance.” ~ Russell Fenton, 2016
The proposed framework is aimed at reducing complexity, improving decision-making, and improving collaboration for those hierarchical organizations evaluating risk in the post-modern world. Implementing this framework would require a complete overhaul of the way risk is perceived and appropriately managed by organizations, but whichever organization adopts it first will set the standard that others will be required to meet.
The purposes of this chapter are to provide an evaluation of the proposed and NIST
risk management frameworks, identify a number of the challenges organizations may face
upon adoption, and finally provide a sequential course for eventual adoption. The evaluation
portion of this chapter compares the two frameworks in a side-by-side comparison across
twenty characteristics that have generally been agreed upon as most essential by risk experts
to demonstrate superiority. The next portion highlights some of the many analytical and
hierarchical challenges that could hinder implementation, and how these can be responsibly
overcome. The implementation section lays out a logical five-phase "way forward" strategy for transitioning to an objective, proscriptive, and reflexive approach to managing risk. This thesis and chapter will conclude
by providing a few of my own personal reflections developed throughout the course of this
project, explaining what initially inspired this effort, and lastly sharing my optimism in the
event that the DoD sees the same importance I see in shifting the way risks are assessed in the cyber domain.
How does the proposed framework improve upon the one recommended by NIST? In addition to meeting NIST's three primary objectives and the seven key
requirements outlined in Chapter 4, the framework must exhibit twenty characteristics that
this study suggests are necessary to make any proposal a viable successor.
These characteristics represent those framework properties considered by the experts as most critical. The current
framework recommended by NIST meets seven of the twenty characteristics, while the
proposed framework was designed to satisfy all twenty. These twenty characteristics attempt to capture the qualities of an effective framework, and each is addressed in turn below.
1. Easily understood: The NIST RMF and RMF 2.0 both meet the first characteristic of an effective
framework, but NIST’s risk model fails to reveal the relationships between current decisions
and future impacts resulting from those decisions. This analysis is achieved in the proposed
model by demonstrating the change in risk over time relative to the future state of the organization.
2. Holistic and comprehensive by addressing and prioritizing HW, SW, ORG, Environmental,
and Human failures/risks: The two frameworks both attempt to meet the second
characteristic by addressing and prioritizing all known sources of risk, except the proposed
framework tries to combine many risk scenarios under one consequential impact to business
or mission functions.
3. Represents the “real world”, hierarchal structure of the ORG, and behaviors over space
and time: The NIST framework fails to meet the third characteristic, while RMF 2.0 treats impacts to business or mission functions as the actual risk to the organization, under a single model that represents the consequences of multiple risk scenarios as they affect every tier of the hierarchical structure over space and time.
4. Takes into account all non-inferior and inferior scenarios and solutions: RMF 2.0 meets
the fourth characteristic by accounting for all scenarios through a proscriptive and pre-
mortem analysis by assuming failure has occurred and considering all the contributing factors
to those scenarios. These failures are the results of many scenarios, and every available
control (or solution) is assessed for its ability to manage the consequences. Additionally, the
incorporation of attack-graph algorithms will analyze and enumerate almost all potential scenarios.
5. Based on firm technological foundations and uses all available tools in risk measuring
approaches: The fifth characteristic is met by RMF 2.0 by measuring the current or previous states of the organization and by using all available measurement tools and techniques to compare alternative solutions. The proposed framework
is scientific in that variables are constructed based on objective observations, and analyzed
using accepted mathematical techniques to illuminate patterns or distributions that can help predict future outcomes.
6. Uses mathematic and scientific principles to determine likelihoods of risk scenarios: The
NIST approach is completely based on subjective assumptions, while RMF 2.0 collects objective observations and applies mathematical and scientific techniques against the data to prove statistical relevance or measure the degree of certainty in sound
assumptions.
7. Involves the entire ORG in the ID and mitigation of risk, and motivates and rewards
personnel: Both frameworks meet the seventh characteristic, but RMF 2.0 is better positioned to reward or motivate personnel because their contributions or best judgments become realized in the final analysis.
8. Recognizes the importance of top-level commitment and publishes policies related to risk
management: Both frameworks meet the eighth characteristic, but RMF 2.0 requires greater commitment and involvement from top-level leadership.
9. Strives to meet multiple objectives and balances competing objectives; avoids lumpiness:
RMF 2.0 meets the ninth characteristic and evaluates multiple objectives simultaneously
through a deterministic and objective approach, and allows everyone to contribute to the
analysis. Lumpiness is avoided by providing space in the DSS to input every contributor to
risk and providing the opportunity to deconstruct each risk factor into individual resources.
10. Separates and clearly defines state (quantitative) variables and decision (qualitative)
variables: The tenth characteristic is met by RMF 2.0 by creating independent dimensions for quantitative state variables and qualitative decision variables.
11. Effectively enables Organization to reduce risk to acceptable levels based on Risk
Determinate Factors (RDF): RMF 2.0 achieves the eleventh characteristic, where RDFs are defined as the consequential impacts to business or mission functions. RDFs are used during impact analysis to assist in calculating risk time or cost estimates.
12. Resistant to the subjectivity and perspectives of the analyst and presents only the facts to
the decision maker: The twelfth characteristic is met by RMF 2.0 by limiting the analysis to objective variables and reserving subjective analysis for its own dimensions during final decision-
making.
13. Represents uncertainty as a distinct decision variable: RMF 2.0 succeeds in the thirteenth characteristic by presenting a variety of
mathematical options for optimally representing uncertainty within the model. This third
dimension is modeled such that the decision-maker can estimate the range of consequences for each risk factor.
14. Practical, logically sound, adherent of evidence, and open to evaluation: Both
frameworks achieve characteristic fourteen, but RMF 2.0 achieves greater adherence to evidence by grounding its analysis in objective observations.
15. Compatible and effectively communicates risk across the entire organization: Both
frameworks achieve characteristic fifteen, but RMF 2.0 extends this capability by unifying
the process under a collaborative and completely transparent DSS. The DSS is also outfitted
with guides and allows decision-makers at each level to explore the analysis for more
informed decision-making.
16. Maximizes transparency throughout the decision process: Both frameworks attempt to achieve characteristic sixteen, but RMF 2.0 maximizes transparency by allowing everyone with access to the DSS-generated form to view every process executed at every level.
17. Presents the optimal amount of information to decision makers to enable them to make
best decisions for the ORG: The NIST framework fails to achieve characteristic seventeen by
presenting unprovable assumptions to the decision-makers, while RMF 2.0 allows decision-
makers to determine the optimal amount of information, based on objective variables, needed to make the best decisions for the organization.
18. Appropriate weights are assigned to contributing factors and optimization coincides with
model construction: NIST lacks characteristic eighteen due to the inability to assign specific
weights to variables, while RMF 2.0 models risk based on objective variables that provide
specific measurements. RMF 2.0 also views the problem from an optimization perspective
where alternatives are selected based on their ability to provide maximum gains at a
minimum cost.
19. Effectively weighs costs versus benefits, indexes severity appropriately, and takes into account the decision-maker's risk tolerance: RMF 2.0 meets the nineteenth characteristic by introducing a cost variable that compares the organizational expenses associated with each cyber-defense alternative, indexes severity against thresholds that are tailored to the decision-maker's willingness to accept risk, and deploys an optimization approach to select among the alternatives.
20. Innovative, based on explicit assumptions and premises, and attuned to the needs and
policies of the organization: The twentieth and final characteristic is met by RMF 2.0
through integration of a flexible and collaborative DSS, which automates the objective
assessment of risk variables as they impact the current and future states of the organization.
5.2 Challenges
Author Clayton Christensen has said that “solving challenges in life requires deep
understanding of 'what' causes 'what' to happen". [18, p. 16] In order to gain this deep understanding, this study applied multiple theories to explore the problem to its fullest and discover the relationships between:
(a) conditions and outcomes, (b) the assessor and the object being analyzed, (c) current
decisions and future impacts, and (d) between the organization and the environment. The
discovery challenges posed against these relationships can be broken down into the following
three categories: (1) objectivity, (2) analytics, and (3) hierarchical organization challenges.
Each of these categories will be explored further in the following three subsections.
Objective analysis (1) can describe the four relationships mentioned above, but this
approach in the cyberspace domain is challenging due to the many uncertainties present and
the difficulty of determining returns on investment (ROI) in cyber security. In order for federal organizations to justify security investments under an objective risk management strategy, members need to operate
under the assumptions that: (1) true risk is the impact to business or mission functions; not
the scenario itself, (2) an assessment of threat capabilities greatly increases the complexity
and reduces the reliability of a risk assessment, (3) the most rational and logical alternative is
the one that can be quantitatively shown to be superior according to the decision-maker’s
preferences, and (4) the likelihood of a successful cyber-attack is conditional and relative to
the state of the organization rather than the offensive capabilities of the threat. Creating and
instilling these mindsets for any organization willing to adopt the proposed framework will
be a great challenge, but this study expects organizations will achieve greater success in their
risk management program by implementing an objective risk assessment strategy. The DoD
will not be able to view their ROI in a manner similar to the private sector, but can instead
view returns in the discovery of an optimal security implementation and any capital savings
gained.
The analytical challenge (2) to an objective assessment is modeling the data in such a
way that the relationships between the four relational pairs become revealed to members of
the framework for effective decision-making at each level. The primary challenges to
analytics, relative to RMF 2.0, are: model reliability, sufficient data, observation effects, and
the development of scientific controls to validate modeled data. Model reliability is heavily influenced by factors such as coincidental vs. intentional impacts, deep uncertainties, the percentage of threats captured by
identified consequential impacts, and the ability of different permutations to produce similar
results. Further research will be required to develop strategies that effectively normalize the collected data. Sufficient data means a volume of empirical observations that is reasonable for statistical analysis, and currently there are many
barriers that inhibit a trustworthy analysis. Overcoming this challenge will require cooperation and data sharing across organizations.
Observation is the third challenge to objective risk analysis, and research has shown that just as observing an Amazonian tribe can result in behavior modification, so can observing a threat in the cyber
domain. The opposite is true for observing natural threats (e.g. earthquakes, tornadoes, etc.), which do not modify their behavior when observed. An earthquake with an epicenter located under a data center can have the same business impact as a
virus, however the same control could reduce the impact of both threats. A study must be
commissioned to determine the appropriate distances between the assessor and the threat to limit these observation effects. The final analytical challenge is developing scientific controls for the analysis, and these controls validate the effectiveness of selected alternatives and model
reliability following implementation. Deploying these controls is incredibly risky and may
require the organization to purposely expose itself to the adversary, but there are safer means of doing so through active defenses. Unfortunately there is one such case where applying the fourth cyber-
defense axiom proved fatal, and this occurred during the U.S. Office of Personnel
Management (OPM) data breach of 2015 where a second attacker executed its attack
undetected while the primary attacker was being closely monitored by OPM and the U.S. Government.
Carlos Jaeger has said that the most serious challenge for assessing risk in the post-
modern period will be making rational decisions collectively without sacrificing individual freedoms. The hierarchical organization challenges arise in the relationships between the four relational pairs (or actors) of: (a) conditions and outcomes, (b) assessors and objects, (c) decisions and future impacts, and (d) organizations and environments. These challenges can be addressed by shifting priority from blame avoidance to rationality, acknowledging and not penalizing
ignorance, and reducing the outsourcing of capabilities. People will always feel excluded
from a process, but organizations must ensure everyone is able to make a contribution to
solving the problem. The organizational member that makes the largest contribution is the
decision-maker, and he or she must have an accurate understanding of their utilities and
acceptance criteria for the proposed framework to function properly. The prospect of blame
will influence Actor 1’s decisions against Actor 2 involving all relational pairs, but blame
can become less of an influence if decisions are reached objectively and rationally. In
addition to blame, the social penalty of revealing one's ignorance is a second fear. There
will always be some degree of ignorance between relational actors, but organizations should
view ignorance in a positive light and see these as opportunities to grow. Some ignorance is unavoidable and should be expected rather than punished. Finally, organizations must forecast what capabilities they can honestly afford to outsource, and this forecast determines how much expertise must be retained within the organization.
This framework was developed specifically as a means for the DoD to assess risk
against federal IT systems within the cyberspace domain, and the problems mentioned
throughout this study only become magnified if organizations continue to address and manage risk using current methods. The accelerating pace of innovation also introduces additional risks, because this acceleration doesn't permit sufficient time for
organizations to evaluate policies that are expected to effectively manage these innovations,
and it causes hierarchical organizations to expand to points that they can no longer “win” in
their engagements with the adversary. Though developed specifically for the DoD, this
framework can be applied to a variety of organizations in the private sector. The DoD has
served as a vehicle for fundamental change in the past and choosing to implement this
framework could pave a path for other organizations to approach cyber-risks in a more
efficient and effective manner. The framework can be implemented in five stages that (1)
evaluates policies and processes, (2) reinforces culture and corrects the mindsets of
organizational members, (3) consolidates risk data from multiple sources and develops the
DSS, (4) evaluates the effectiveness of the proposed framework, and (5) synchronizes risk management processes across the organization.
The first stage is a reflexive process where the organization evaluates all current and
past policies for effectiveness through deep collaborative discussion and analysis. Effective
policies are cost-efficient and optimally suited to the needs of the organization, and often
ineffective policies are maintained because there isn’t a suitable alternative available,
substantial investments have been appropriated, or political momentum has already been
generated. Evaluation involves asking those difficult questions that could potentially result in
embarrassment or regret for the architects or decision-makers associated with the ineffective
policies, but these outcomes could be avoided if the organization establishes a period of
honest reflection and amnesty in an open forum where everyone’s opinion has equal value.
At a minimum, every policy (or process) should be evaluated by asking the following seven
questions:
1. Have our risk management policies improved our organization's three primary functions?
2. Do the policies link to the preferences, priorities, and aims of the decision-maker?
3. Were policy decisions made on the best logic and rationale available, or were they driven by blame avoidance or political momentum?
4. Can we simplify our risk management processes in a more cost-efficient and effective
manner while still meeting the objectives and priorities of the organization?
5. Did we provide everyone the opportunity to contribute to the analysis, and does
everyone involved in the process understand how the decision was reached?
6. How effective were current or previous policies in reducing risk exposures, and how was that effectiveness measured?
7. Do current policies correct the root problems that expose the organization to various
second-order problems?
Once the organization has reflected on its policies and processes, efforts should be
directed towards strengthening its culture and unifying the mindsets that influence how risk is assessed. Culture is the glue that binds an organization's members, and it ensures that policies and processes are executed in accordance with a spirit characteristic of the organization's leadership. This study highlighted
previous research by social scientists suggesting that hierarchical cultures operate under the assumption that science and technology give them control over various hazards; organizations must detach from this belief and pursue a management program that assesses risk as if those hazard scenarios will occur, in keeping with the "fatalist" mentality. Organizations
also need to acknowledge that probabilities are a weak method for making rational and
logical decisions in the cyber domain, and should shift to deterministic approaches.
Organizations also need to take a different approach to filling gaps in knowledge: avoid unsolvable problems entirely and instead focus on assessing risk variables that can be
measured objectively, which helps remove bias and subjectivity from the analysis. Diversity presents challenges to organizational cultures, but organizations can use their diversity to their advantage. Regardless of how organizations decide to define their cultures, generating loyalty must be a top priority. Finally, the organization must strive to correct the mindsets of those participating in the decision process.
The major flaws with current mindsets are the beliefs that a threat must not penetrate defense
perimeters at all cost, blame is the worst consequence of any outcome, and threat capabilities
have the greatest influence on risk exposure. Organizations can start correcting the mindsets of their members by first adopting the "Four Axioms of Cyberdefense" in [27], where efforts to create an impenetrable defense are abandoned in favor of defenses that delay an attack so that security personnel can respond appropriately. A second approach is to prevent
the fear of blame from being the primary influence in organizational decision-making. This
fear will eventually wane simply through the adoption of objective assessments. Perhaps the
most difficult mindset challenge is the proposal that eliminating the threat variable from a
risk model will improve decisions regarding actions in cyberspace. Organizations must
choose between optimization and threat intelligence as the first priority, and members must be
shown that the conditional state of the organization plays a much larger role in the frequency
and magnitude of catastrophic outcomes than any capabilities the threat possesses. For
example, the United States has offensive cyber capabilities far superior to those of the Democratic
People’s Republic of Korea (DPRK), but which country is perceived as the greater threat to
the Republic of Korea (ROK)? The obvious answer is the DPRK, because the DPRK benefits
from any damage inflicted upon the ROK while the U.S. lacks any incentive to use its
capabilities against them. This example shows that DER (or an organization’s state of
security against adversarial rewards) is a far more valid measurement of likelihood for an adverse outcome than threat capability alone. Every organizational member must also be brought into the decision process, such that the organization begins to take on the qualities of a cognitive
system where every person represents a communicative node in the architecture. Members of
this system must be calibrated to make optimal and rational decisions in order to function
properly in the post-modern world. A Brier Scoring exercise could help calibrate members
for rational decision-making by showing them discrepancies between their estimates and observed outcomes. Organizations themselves have entered the post-modern world, and in order to succeed in this era, they need to emphasize transparency and problem deconstruction, and cautiously engage those risks that are fabricated by society. This stage can be executed alongside the first stage, and it will take approximately six months.
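To make such a calibration exercise concrete, the sketch below scores a set of hypothetical probability estimates with the Brier rule; the function and data are illustrative only and are not part of the proposed framework.

def brier_score(forecasts, outcomes):
    # Mean squared difference between forecast probabilities (0-1) and
    # outcomes (1 = the event occurred, 0 = it did not). Lower is better;
    # a forecaster who always answers 0.5 scores 0.25 on binary events.
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must be the same length")
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical exercise: an analyst's probability estimates for five incidents
# versus whether each incident actually occurred.
estimates = [0.9, 0.7, 0.2, 0.5, 0.1]
occurred = [1, 1, 0, 0, 0]
print(f"Brier score: {brier_score(estimates, occurred):.3f}")

Members whose scores improve across repeated exercises can be treated as better calibrated for the rational decision-making the framework expects.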
The third stage consolidates risk data from multiple sources and requires a massive consolidation of event logs to reduce the number of unreported cases, improve
threat recognition, and refine heuristic analysis. Federally mandated reporting requirements
will result in larger data collections, and consolidating these data collections will produce
trends that enable assessors to estimate the effectiveness of security controls. Without
jeopardizing national security or competitive advantages, this consolidation will require full
cooperation and maximum transparency between federal agencies and the private sector in order to amass the amount of data required to determine how conditions affect outcome likelihoods. Currently there is only partial cooperation between these sectors, which has led to severe gaps in our understanding of how cyber-risks relate to outcomes.
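As a toy illustration of the kind of trend such a consolidation could expose, the sketch below merges two hypothetical event logs and counts incidents by the control family that failed; the log format and field names are invented for the example.

from collections import Counter

# Hypothetical, already-sanitized event records from two reporting sources.
agency_log = [
    {"source": "agency-A", "failed_control": "access control"},
    {"source": "agency-A", "failed_control": "boundary protection"},
]
private_log = [
    {"source": "company-B", "failed_control": "access control"},
    {"source": "company-B", "failed_control": "access control"},
]

# Consolidate the logs and count how often each control family failed; across
# many organizations these counts become the trend data assessors need to
# estimate how well individual controls actually perform.
consolidated = agency_log + private_log
failures = Counter(event["failed_control"] for event in consolidated)
for control, count in failures.most_common():
    print(f"{control}: {count} reported incident(s)")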
Prior to developing a DSS designed to keep members operating within the bounds of
the framework, the right team needs to be assembled to guarantee a product that meets the
expectations and requirements mentioned throughout this study. At a minimum, the team
needs to be composed of: a tenacious project manager with vision and a complete understanding of decision theory and risk management, an application developer with previous experience developing Java Server Pages (or whichever platform is chosen) that communicate with database management systems and the data accessed by the DBMS, and a database administrator who understands the security requirements and applications of the program. The DBMS will need to be robust and capable
of managing framework processes and querying necessary information. This DBMS will
interface with the DSS to maintain authorizations, populate security control worksheets, and
provide the algorithms with the needed information to compute values for objective risk assessments. It is estimated that this stage will take approximately 1-2 years to develop mechanisms to consolidate data and pass the needed legislation, and approximately six months to develop a working DSS.
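As a rough sketch of how the DSS might pull objectively measured control data from the DBMS, the example below uses an in-memory SQLite database with a hypothetical security_controls table; the table layout, control identifiers, effectiveness values, and the simple averaging rule are illustrative assumptions rather than the proposed design.

import sqlite3

# Hypothetical schema: one row per security control on an assessed system,
# each with an objectively measured effectiveness value between 0 and 1.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE security_controls (
                    system_id TEXT, control_id TEXT, effectiveness REAL)""")
conn.executemany("INSERT INTO security_controls VALUES (?, ?, ?)",
                 [("HR-DB", "AC-2", 0.90),
                  ("HR-DB", "SI-4", 0.60),
                  ("HR-DB", "SC-7", 0.75)])

# The DSS would query the DBMS to populate a security control worksheet and
# aggregate the measured values into one objective score for the decision-maker.
row = conn.execute("""SELECT AVG(effectiveness) FROM security_controls
                      WHERE system_id = ?""", ("HR-DB",)).fetchone()
print(f"Aggregate control effectiveness for HR-DB: {row[0]:.2f}")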
The fourth stage involves evaluating the proposed framework for gained efficiencies, using the six success indicators commonly applied to information system projects. Those indicators are: (1) system quality, (2) information quality, (3) use, (4) user satisfaction, (5) individual impact, and (6) organizational impact. [30] System quality could be evaluated by comparing the framework's computed estimates against the actual results to prove the framework functioned as intended. The expectation is
that the framework and companion DSS are highly reliable, functional, and more accurate than the current approach. Measuring attack surfaces consistently remains a challenge, as concluded by Alec Poczatek, who found that various attack-graphing tools produced varying estimates on the same network topology. [13] There are tools in development that are expected to meet the expectations of security professionals, such as MulVAL, which was developed by Xinming Ou, Sudhakar Govindavajhala, and Andrew W. Appel. [31] Until attack-graphing tools have overcome their limitations, these algorithms must be simplified such that the results agree across various platforms. Occasionally there are cases when the attack surface must be manually assessed. As such, manual computations must be calibrated so they produce results similar to those of the chosen attack-graph algorithm to prevent distortions. This calibration will
require much evaluation to ensure weights are fairly assigned to all resources at risk. If the
evaluation proves that attack surfaces can be effectively measured, distortions can be
managed, and the DSS formulas compute risks such that they reasonably compare to the real
world, then the framework has proven superior to the current approach. The usage indicator measures how the framework and its DSS are actually applied to organizational problem-solving. This criterion can be evaluated through "C-level" test subjects and their
individual analysis of the product’s usefulness where they are surveyed on their impressions
of comprehensiveness and the potential ability to make rational and logical decisions in
cyberspace. User satisfaction is the third indicator, and it is vital in any potential adoption.
This criterion would require a similar survey conducted on many individuals across all levels
of the organization, whose feedback would be used to refine the framework and determine
any deficiencies. Individual impact is the fourth indicator, and it is measured by the benefits
the proposed framework provides over the current framework to individual people. This impact could be measured through metrics such as time saved, precision gained, and the transparency of the proposed framework. These metrics speak volumes about how the
proposed framework magnifies those intrinsic and extrinsic goals of the individual; for
example, performing better at their jobs and their ability to maximize contributions to the
overall success of the organization. This last metric leads into the final indicator, which is the organizational impact.
This study has already suggested that less human capital will be required to produce precise
estimates, the framework can function across a variety of domains, and objective analysis is a
far more effective approach to assessing risk than subjective approaches. This criterion is
heavily dependent on the previous four measurements to determine overall effectiveness, and it would benefit from including several case studies to evaluate the framework as a final demonstration of its superiority. This stage will take approximately six months and involve many people across the organization.
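One simple way to perform the calibration described above is to fit the manual assessments to a chosen attack-graph tool's output on systems scored both ways, then map new manual scores onto the tool's scale; the paired scores below are hypothetical, and a linear fit is only one of several plausible calibration rules.

# Hypothetical paired scores for four systems assessed both ways.
tool_scores = [0.80, 0.45, 0.60, 0.30]      # attack-graph algorithm output
manual_scores = [0.70, 0.50, 0.55, 0.40]    # analyst's manual assessments

# Fit manual ~ a * tool + b by ordinary least squares (closed form).
n = len(tool_scores)
mean_t = sum(tool_scores) / n
mean_m = sum(manual_scores) / n
a = (sum((t - mean_t) * (m - mean_m) for t, m in zip(tool_scores, manual_scores))
     / sum((t - mean_t) ** 2 for t in tool_scores))
b = mean_m - a * mean_t

# Map a new manual assessment onto the tool's scale by inverting the fit.
new_manual = 0.65
calibrated = (new_manual - b) / a
print(f"fit: manual ~ {a:.2f} * tool + {b:.2f}; calibrated score = {calibrated:.2f}")

If the fitted relationship drifts over time, the weights assigned to the resources at risk should be re-evaluated, as noted above.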
The final stage involves synchronizing DoD Acquisition Processes (DAP) with the
RMF to allow the two efforts to operate in parallel for further gains in organizational
efficiency. As opposed to the five-step parallel process that coincides with the NIST RMF,
this study proposes a nine-step DAP as outlined in Table 5. The first DAP step corresponds with RMF 2.0 Step 1, where the appropriate program manager is assigned to develop the acquisition strategy for the system being developed, introduced, or reassessed, along with its Acquisition Category (ACAT). Army Acquisition Corps (AAC) officer MAJ Mitchell Hockenbury suggests adopting the long-term funding strategy used for programs such as the F-35 Joint Strike Fighter. [M. Hockenbury, personal communication, 2016] Once Tier 1 defines its acceptance criteria, the program manager can then suggest minimum security baselines that associate
with the classification or category of the requested system. The identification of risk factors
by Tier 2 helps the program manager forecast the appropriations needed to support the
program. While Tier 3 assesses likelihoods and impacts, the program manager develops test
and evaluation criteria that will later be operationally tested as Tier 3 evaluates existing
controls. The program manager performs cross-mission (or function) analysis while Tier 3
conducts an impact analysis to prioritize expenditures across the organization. After a cross-
mission analysis, the program manager develops the procurement strategy that corresponds to
Tier 2’s recommendations for control enhancement. Once Tier 1 approves the final security
plan, the program manager presents a Plan of Action and Milestones (POA&M) that outlines
timeframes and provides a breakdown of costs to implement the security plan. Following
Step 8 and final approval by the decision-maker, the program manager decides on a life-cycle
management program to set retirement periods for various systems. MAJ Hockenbury supports this proposal, but cautions that funds are released conditionally and not in one lump sum as Table 5 suggests in the third step. [M. Hockenbury, personal communication, 2016] It is impossible
for me to estimate the length of time required to synchronize DAP with the proposed
framework since the responsibility will fall upon the Army Acquisition Corps to complete
the analysis involved in this stage; the suggested synchronization strategy is merely a recommendation.
5.4 Conclusion
This study was partly inspired by authors Steven Levitt and Stephen Dubner who
demonstrated the need to look beyond common rationality to determine the true causes
behind various social phenomena in their 2005 book “Freakonomics”. A former boss also
planted seeds that inspired this undertaking, and those seeds were his convictions that (1)
directly observing behavior can indirectly affect outcomes and (2) a risk analysis is only as sound as its underlying assumptions. I also share David Garland's conclusion that the world is in fact "spinning out of control," as indicated by our insatiable determination to control our futures, which has further amplified
our expectations and divided our beliefs. I strongly feel that my one-year-old daughter, Nora, deserves to live in a world better than the one being prepared for her, and this world is
becoming one where agencies are executing actions with severe consequences based on
faulty premises and unsubstantiated assumptions. Perhaps society should abandon this
determination and engage risk with the mentality once applied by the Romans. I am not
suggesting that we revert to the use of oracles or dice to decide upon courses of action, but
rather that we aggressively confront risk in a manner similar to their approach. Doing so will
enable society to engage risk on a more productive level and help reveal the root causes
leading to risk scenarios. The current methods of modeling risk have reached such a point of complexity and subjectivity that they are no longer effective for a modern society. Scientific and technological advancements are partly to blame for innovative acceleration and the decline in social values. Prior to these
advancements, society primarily relied on religion to maintain stability and provide guidance
when faced with problems; severe outcomes were believed to merely be “acts of God” and
any blasphemies resulted in harsh social penalties. With each advancement, society’s
commitment to faith has eroded little by little, and the voids have been replaced with
accepted rationality and scientific theories. This cultural renaissance will only accelerate,
which I believe will make the management of risk an even greater challenge. The “War on
Drugs” is one such example that shows how an erosion of moral values and technology have
only exacerbated our problems. This war has not demonstrated its expected success because of a chosen strategy in which each opponent seeks superior capabilities and intelligence rather than attacking the true contributing factor: the increasing demand for and consumption of drugs. I have always championed
science and technology, but there also needs to be a resurgence in the moral values that have always been promoted by various fundamental ideologies if we are to succeed. Moral analysis and
science must complement each other to combat destructive social issues, and abandoning
either destabilizes the effort. In addition to moral and ethical values, discretion should be
exercised when including probabilities within the risk equation simply because bad outcomes
will occur regardless of any strategy. The Romans didn’t concern themselves with “chance”
and they assumed outcomes were due to the "will of the gods," yet they were highly successful. Today, we err by placing our whole trust in science and technology to succeed over our opponents.
I am optimistic for the future, and I believe organizations will eventually recognize that grounding their decision-making processes in rationality based on direct observations will result in superior outcomes over the long term. Effective rationality will require
consensus across organizations. A rational explanation for the increasing frequency of chaos
in the world often points the finger at advancements in technology, but the true contributors
to this volatility are the users of this technology. Users often lack patience to completely
evaluate new technologies, make decisions based upon false senses of security, and develop misplaced expectations of what those technologies can deliver. These errors must be corrected in order to enter the post-modern world. The optimistic side of me
envisions a post-modern world where terror becomes far less frequent, there is a greater
understanding between various cultures, and we experience progress never before witnessed.
This progress would first require organizations to surrender their attempt to control nature and society, and to adapt to them in a responsible and logical manner. True progress will be achieved when organizations abandon any prospect of absolute control (outside of their own nature), and this abandonment will result in fewer introduced risks as they harness the contested ideas that move the country forward. This framework provides part of the solution
to correcting those mindsets and flawed approaches that hinder progress. Progress will be accelerated by a reflexive approach to objectively modeling and assessing risk that additionally reduces complexity and subjectivity. This framework cannot do it alone, and it will require elements of multiple theories to explain how and why the approach meets expectations. Many of the theories used throughout this study were
developed by social scientists, and social science will be essential to critiquing risk models
that describe risk from an objective position, as stated by David Spiegelhalter in [22].
The proposed framework demonstrates significant gains in efficiency and effectiveness, but the decision to implement such a framework is ultimately up to the decision-makers, because they primarily assume risk for their organizations. Regardless of the methods chosen to assess risk, organizations must synchronize their acquisition programs with risk analysis processes that determine how much they should invest in their cybersecurity programs. But we have an even greater need to revisit
our fundamental approaches to assessing risk in the cyberspace domain. When I approached
my thesis professor about my proposal, Dr. Davis's advice was to combine all my ideas and work through them as if inside a "black box." I knew three things beforehand: (1) any proposed framework must
implement a model that assesses risk from an objective perspective, and it’s impossible to
accurately quantify cyber threats, (2) the model’s output must direct the decision-making
body to the most optimal security options, and (3) cyber-risks are far more complex than
traditional risks and cannot be assessed in the same manner. The results from that “black
box” session eventually materialized into the concepts described throughout this work. It is
my sincere hope that this proposal for a new risk management framework results in some
adaptation that will influence how the DoD assesses cyber-risk in the future.
BIBLIOGRAPHY
1. DoD Instruction 8510.01, "Risk Management Framework (RMF) for DoD Information Technology (IT)", DoD, March 2014.
Precis: Establishes the RMF for DoD IT (referred to in this instruction as "the RMF"), establishes associated cybersecurity policy, and assigns responsibilities for executing and maintaining the RMF.
2. NIST Special Publication 800-37, “Guide for Applying the Risk Management
Framework to Federal Information Systems”, NIST, February 2010.
Precis: Provides guidelines for applying the Risk Management Framework to federal
information systems to include conducting the activities of security categorization,
security control selection and implementation, security control assessment,
information system authorization, and security control monitoring.
3. NIST Special Publication 800-30, “Guide for Conducting Risk Assessments”, NIST,
September 2012.
4. NIST Special Publication 800-53, “Security and Privacy Controls for Federal
Information Systems and Organizations”, NIST, April 2013.
Precis: This publication has been developed by NIST to further its statutory
responsibilities under the Federal Information Security Management Act (FISMA),
Public Law (P.L.) 107-347. NIST is responsible for developing information security
standards and guidelines, including minimum requirements for federal information
systems.
Precis: This article presents an economic model that determines the optimal amount
to invest to protect a given set of information using the Gordon-Loeb Model. The
model takes into account the vulnerability of the information to a security breach and
the potential loss should such a breach occur.
6. “Technical Risk Management”, Jack V. Michaels, Prentice Hall, Upper Saddle River,
NJ, 1996.
Precis: A practical basis for the identification, quantification, and control of risk
factors in the production of hardware, software, and services in both the public and
private sectors, and covers: Risk Principles and Metrics, Risk Formulation and
Modelling, and Risk Control.
9. “Risk Modeling, Assessment, and Management 2nd Edition”, Yacov Y. Haimes, John
Wiley and Sons, Inc., Hoboken, NJ, 2004.
Precis: Focuses on the philosophical, conceptual, and decision making aspects of risk
analysis, and then covers the theory and methodologies that define the state of the art
of risk analysis. Seeks to balance the quantitative and empirical dimensions of risk
assessment and management with the more qualitative and normative aspects of
decision-making under risk and uncertainty.
Precis: Risk compensation postulates that everyone has a risk thermostat and that
safety measures that do not affect the setting of the thermostat will be circumvented
by behaviour that re-establishes the level of risk with which people were originally
comfortable. It explains why, for example, motorists drive faster after a bend in the
road is straightened. Cultural theory explains risk-taking behaviour by the operation
of cultural filters.
11. "Risk", Layla Skinns, Michael Scott, and Tony Cox, Cambridge University Press, Cambridge, UK, 2011.
Precis: Set of public lectures from the 2010 Darwin College Lecture Series where a
variety of leading experts on the subject brought their own meaning of risk from their
respective fields.
12. “The Unknown Known”, Errol Morris (Film). Anchor Bay Entertainment, Beverly
Hills, CA, 2014.
15. “Risk, Uncertainty, and Rational Action”, Carlo C Jaeger, Ortwin Renn, Eugene A
Rosa, and Thomas Webler, Earthscan Publications Ltd., London, UK, 2001.
Precis: Four of the world's leading risk researchers present a fundamental critique of
the prevailing approaches to understanding and managing risk - the 'rational actor
paradigm'. They show how risk studies must incorporate the competing interests,
values, and rationalities of those involved and find a balance of trust and acceptable
risk.
16. “Risk and Technological Culture”, Joost Van Loon, Routledge, New York, NY, 2002.
Precis: Demonstrates how new technologies are transforming the character of risk
and examines the relationship between technological culture and society through
substantive chapters on topics such as waste, emerging viruses, communication
technologies and urban disorders.
17. “An Introduction to Decision Theory”, Martin Peterson, Cambridge University Press,
New York, 2009.
18. “How Will You Measure Your Life”, Clayton M. Christensen, HarperCollins
Publishers, New York, NY, 2012.
Precis: Drawing upon his business research, he offered a series of guidelines for
finding meaning and happiness in life. He used examples from his own experiences to
explain how high achievers can all too often fall into traps that lead to unhappiness.
20. “Forecasting and Management of Technology”, Alan L. Porter, John Wiley and Sons
Inc., Canada, 1991.
Precis: The innovations to technology management in the last 17 years: the Internet;
the greater focus on group decision-making including process management and
mechanism design; and desktop software that has transformed the analytical
capabilities of technology managers. Included in this book will be 5 case studies from
various industries that show how technology management is applied in the real
world.
21. "Modeling, Measuring, and Managing Risk", Georg Ch. Pflug and Werner Römisch, World Scientific Publishing Co., Covent Garden, London, 2007.
22. “Don't Know, Can't Know: Embracing Deeper Uncertainties When Analyzing Risks”,
David J. Spiegelhalter, Hauke Riesch, The Royal Society, October 2011. Retrieved
from: http://rsta.royalsocietypublishing.org/content/369/1956/4730. Accessed June
2016.
23. “Information Visualization for Science and Policy: Engaging Users and Avoiding
Bias”, Greg J. Mcinerny, Min Chen, Robin Freeman, David Gavaghan, Miriah
Meyer, Francis Rowland, David J. Spiegelhalter, Mortiz Stefaner, Geizi Tessarolo,
Joaquin Hortal, Trends in Ecology and Evolution, Vol.29(3), pgs.148-157, March
2014.
Precis: Features studies concerning the presentation and impact of risk information,
and attempts to answer why different representations of risk, apparently describing
the same information, can tell such different stories to people.
26. “Why Risk is Risky Business”, David J. Spiegelhalter, New Scientist, Issue 2721,
Vol. 203, pgs. 20-21, August 2009.
Precis: The author asserts that reaction to risk entails a manipulation between the
emotional and analytical parts of the mind. He mentions some of the unintended
consequences of overreacting to some notable world events, including the September
11, 2001 terrorist attacks and the warnings issued regarding the link between third-
generation oral contraceptives and deep vein thrombosis. He notes the awareness of
senior civil servants about the tendency to overreact to risk.
27. “Enterprise Cybersecurity”, Scott Donaldson, Stanley Siegel, Chris Williams, and
Abdul Aslam, Springer LLC, New York, NY, 2015.
Precis: Explains at both strategic and tactical levels how to accomplish the mission
of leading, designing, deploying, operating, managing, and supporting cybersecurity
capabilities in an enterprise environment.
28. "The Code w/ Marcus du Sautoy" (film), Stephen Cooter, Acorn Media, 2014.
Precis: Convinced there is a mathematical formula that can identify patterns and
connect everything we see around us, author and Oxford University Professor
Marcus du Sautoy goes in search of a mysterious hidden code that can unlock the
very laws of the universe.
31. "MulVAL: A Logic-based Network Security Analyzer", Xinming Ou, Sudhakar Govindavajhala, and Andrew W. Appel, Proceedings of the 14th USENIX Security Symposium, 2005.
Precis: This study shows how to achieve the goals of integrating formal vulnerability
specifications and scale to networks by presenting MulVAL, an end-to-end framework
and reasoning system that conducts multi-host, multistage vulnerability analysis on a
network.
32. "Probability", Wikipedia. Retrieved from: https://en.wikipedia.org/wiki/Probability.
Precis: Wikipedia page that explains the history, theory, applications, and
mathematical treatments of probability.
33. “Google Books Ngram Viewer”, Google Inc. Retrieved from: http://books.google.
com/ngrams. Accessed July 2016.
Precis: Google Books Ngram Viewer is an online search engine that charts
frequencies of any set of comma-delimited search strings using a yearly count of n-
grams found in sources printed between 1500 and 2008.
35. “What Does Good IT Governance Look Like”, Heather Colella, Gartner Academy for
Leadership Development, Jan. 29, 2013.
Precis: Presentation given to describe “good and great” governance, what are some
of the best practices, and provide examples of tools that can be used to meet
enterprise needs.
Precis: Wikipedia page that describes Christopher Columbus's history, his many
voyages, and his contributions to history.
Precis: Presentation given at the 2009 SOA conference that covered the Service-
Oriented Architecture problem, the complexities, and how to manage it. Robert
Glass’ Law of Complexity is provided here to demonstrate how adding complexity
compounds organizational and engineering problems.
39. “Case: The Ford Pinto”, Moral Issues in Business, 8th Edition (p. 83-86). Retrieved
from: https://philosophia.uncg.edu/phi361-metivier/module-2-why-does-business-
need-ethics/case-the-ford-pinto/. Accessed August 2016.
Precis: Discussion on the dilemma Ford faced regarding the 1971 Pinto on whether
to comply with NHTSA standards or assume the risk of non-compliance.
40. “The Death of Expertise”, Tom Nichols, The Federalist, January 2014. Retrieved
from: http://thefederalist.com/2014/01/17/the-death-of-expertise/. Accessed August
2016.
Precis: Tom Nichols shares his observations of the “death of expertise”: a Google-
fueled, Wikipedia-based, blog-sodden collapse of any division between professionals
and laymen, students and teachers, knowers and wonderers. He provides suggestions
on how this problem can be addressed.
41. "Decision Support System", Wikipedia. Retrieved from: https://en.wikipedia.org/wiki/Decision_support_system.
Precis: Wikipedia page that describes the history, components, and applications of
Decision Support Systems.
42. “Congressional Report Slams OPM on Data Breach”, Brian Krebs, Krebs on Security,
September 2016. Retrieved from: http://krebsonsecurity.com/2016/09/congressional-
report-slams-opm-on-data-breach/. Accessed September 2016.
Precis: Provides an update following the conclusion of the investigation involving the
2015 U.S. Office of Personnel Management (OPM) data breach. Includes a brief
summary of the attack and the conclusions of the investigation team.
43. “NVD Common Vulnerability Scoring System v2”, National Institute of Standards
and Technology. Retrieved from: https://nvd.nist.gov/cvss.cfm. Accessed August
2016.
Precis: CVSS provides an open framework for communicating the characteristics and
impacts of IT vulnerabilities. Its quantitative model ensures repeatable accurate
measurement while enabling users to see the underlying vulnerability characteristics
that were used to generate the scores.
44. “The CVSSv2 Shortcomings, Faults, and Failures Formulation”, Carsten Eiram (Risk
Based Security) and Brian Martin (Open Security Foundation). Retrieved from: https://www.riskbasedsecurity.com/reports/CVSS-ShortcomingsFaultsandFailures.pdf.
Accessed August 2016.
APPENDIX A. GLOSSARY
2. Access Rights - The permissions that are granted to a user, or to an application, to read,
write and erase files in the computer.
4. Advanced Persistent Threats (APTs) - A set of stealthy and continuous computer hacking
processes, often orchestrated by human(s) targeting a specific entity for business or
political motives, using sophisticated techniques.
9. Attack Surface - The sum of the different points (the "attack vectors") where an
unauthorized user (the "attacker") can try to enter data to or extract data from an
environment. Part of the analysis performed by an Attack-Graph algorithm.
11. Bayes Theorem - Describes the probability of an event, based on conditions that might be related to the event; P(A|B) = P(A) P(B|A) / P(B), where P = probability and A and B = events.
12. Beijing Butterfly Effect - The sensitive dependence on initial conditions in which a small
change in one state of a deterministic nonlinear system can result in large differences
in a later state.
13. Best Business Practices - A set of methods or techniques that have been generally accepted as superior to any alternatives because they produce results superior to those achieved by other means or because they have become a standard way of doing things; also used to maintain quality as an alternative to mandatory legislated standards, based on self-assessment or benchmarking.
14. Blame - To assign responsibility for a fault or wrong; a function of Perceived Avoidable Harm (or Loss) and Perceived Responsibility over time.
15. Brier Scoring Rule - Can be thought of as a measure of the "calibration" of a set of
probabilistic predictions, and is applicable to tasks in which predictions must assign
probabilities to a set of mutually exclusive discrete outcomes.
17. Casus Belli - Latin term for an event or action that justifies a war.
18. Catch-22 - A dilemma or difficult circumstance from which there is no escape because of
mutually conflicting or dependent conditions.
19. Causal Agent - Any entity that produces an effect or is responsible for events or results.
20. Channel Protocol - The protocol used to access a channel; it imposes restrictions on the data exchange allowed over the channel (e.g., a TCP socket).
21. Channels - A means to connect to a system and send (receive) data to (from) a system.
22. Chaos Theory - A field of study that examines the behavior of dynamical systems that
are highly sensitive to initial conditions, and whose future behavior is fully
determined by their initial conditions, with no random elements involved.
23. Common Vulnerability Scoring System (CVSS) - A free and open industry standard for
assessing the severity of computer system security vulnerabilities. CVSS attempts to
assign severity scores to vulnerabilities, allowing responders to prioritize responses
and resources according to threat.
24. Compartmentalization - Creating logical boundaries among information sets that limit
access to information to persons or other entities who need to know it in order to
perform certain tasks.
26. Confidence Interval - An observed interval (i.e., it is calculated from the observations), in principle different from sample to sample, that frequently includes the value of an unobservable parameter of interest if the experiment is repeated. Formula: sample mean ± (critical value × (stdDev / square root of #observations)).
28. Cultural Filters - Filters that select and construe evidence to support established biases,
and especially true when data is contested, ambiguous, or inconclusive; the two forms
are Rewards and Costs.
29. Culture Theory - The branch of comparative anthropology and semiotics that seeks to
define the heuristic concept of culture in operational and/or scientific terms.
30. Cyber - Prefix derived from the Greek root of "cybernetics," which means "governance." Defined in 1948 by Norbert Wiener as "the scientific study of control and communication in the animal and the machine." Now often implies, in the contemporary sense, "control of any system using technology."
31. Cybersecurity - The state of being protected against the criminal or unauthorized use of
electronic data, or the measures taken to achieve this.
32. Cyberspace - the notional environment in which communication over computer networks
occurs.
33. Damage-Effort Ratio (DER) - The contribution of a resource to the attack surface based
on the level of harm the attacker can cause to the system in using the resource in an
attack and the effort the attacker spends to acquire the necessary access rights in order
to be able to use the resource in an attack.
37. Decision Matrix - A technique used to rank the multi-dimensional options of an option set; consists of a set of criteria against which options are scored and summed to produce a total score that can then be ranked.
38. Decision Support System - A computer-based information system that supports business
or organizational decision-making activities.
39. Decision Theory - The study of strategies for optimal decision-making between options
involving different risks or expectations of gain or loss depending on the outcome.
40. Decision Variables - The variables within a model that one can control.
41. Denial of Service Attack (DoS) - An attempt to make a machine or network resource
unavailable to its intended users, such as to temporarily or indefinitely interrupt or
suspend services of a host connected to the Internet.
42. Descriptive Decision Theory - Explains and predicts how people make decisions;
empirical and experimental.
43. Determinism - A philosophical doctrine that all events transpire in virtue of some
necessity and are therefore inevitable. All data is known before analysis.
44. Deterministic - Relating to the philosophical doctrine that all events, including human
action, are ultimately determined by causes regarded as external to the will.
45. Defense Information Systems Agency (DISA) - Provides information technology (IT)
and communications support to the President, Vice President, Secretary of Defense,
the military services, the combatant commands, and any individual or system
contributing to the defense of the United States.
46. DoD Component - Includes OSD; the Chairman, Joint Chiefs of Staff and the Joint Staff;
the DoD Inspector General; the Military Departments including the Coast Guard
when assigned to the Department of the Navy; the Defense Agencies; DoD Field
Activities; the Combatant Commands; Washington Headquarters Services (WHS),
the Uniformed Services University of the Health Sciences (USUHS), and all non-
appropriated fund instrumentalities.
47. Dominance Principle - Where one alternative can be ranked as superior to another
alternative for a broad class of decision-makers. It is based on shared preferences
regarding sets of possible outcomes and their associated probabilities. Strong Dominance: ai ≻ aj iff v(ai, sm) ≥ v(aj, sm) for every state sm, with strict inequality for at least one sm.
49. Egalitarian - Of, relating to, or believing in the principle that all people are equal and
deserve equal rights and opportunities.
50. Empirical Observations - The knowledge (or source of knowledge) acquired by means of
the senses, particularly by observation and experimentation.
51. Entry/Exit Points - The methods in a system's codebase that receive data from the system's environment are the entry points, and the methods that send data to the system's environment are the exit points.
52. Exchange Principle - Principle that the perpetrator of a crime will bring something into
the crime scene and leave with something from it, and that both can be used as
forensic evidence. Also known as Locard’s Principle.
54. Expected Utility Function - A hypothesis that states that a decision-maker's expected utility for a particular action-observation combination equals the sum, over the possible states of the world si, of the probability of each state given the action taken (a) and the observation made (o), weighted by the preference (utility) assigned to that state, U(si); in formula form, EU(a|o) = Σ P(si|a,o) U(si).
55. Expected Value - In probability theory, the expected value of a random variable is intuitively the long-run average value of repetitions of the experiment it represents. In decision theory, it is computed as Σ (outcome value × outcome probability).
56. Fatalist - Someone who believes that all events or actions are subjugated to fate.
57. Federal Information Security Management Act of 2002 (FISMA) - An act requiring each federal
agency to develop, document, and implement an agency-wide program to provide
information security for the information and information systems that support the
operations and assets of the agency, including those provided or managed by another
agency, contractor, or other source.
58. Framework - A strategy for prioritizing and sharing information about the security risks
to an information technology (IT) infrastructure.
60. Game Theory - The study of mathematical models of conflict and cooperation between
intelligent rational decision-makers.
61. Gartner - An American research and advisory firm providing information technology
related insight.
63. Glass's Law of Complexity - A law attributed to American software engineer Robert L. Glass which supposes that every 25% increase in capability yields a 100% increase in complexity.
64. Globalization - The process of international integration arising from the interchange of
world views, products, ideas, and other aspects of culture.
65. Granularity - The extent to which a larger entity is subdivided, or the extent to which
groups of smaller indistinguishable entities have joined together to become larger
distinguishable entities.
66. Hawthorne Effect - The alteration of behavior by the subjects of a study due to their
awareness of being observed.
68. Hierarchy - A system or organization in which people or groups are ranked one above
the other according to status or authority.
69. Hierarchical Holographic Model (HHM) - A model that organizes and presents a
complete set of system risk categories or requirements for success.
70. High Reliability Organizations (HRO) - An organization that has succeeded in avoiding
catastrophes in an environment where normal accidents can be expected due to risk
factors and complexity.
71. Impact - The measured consequences of an event or activity, and the influence on
individual functions of an organization.
73. Individualism - A social theory favoring freedom of action for individuals over collective
or state control.
74. Individualization - A process where emphasis is placed on the moral worth of the
individual. Individualists promote the exercise of one's goals and desires, value
independence and self-reliance, and advocate that the interests of the individual
should take precedence over the state or a social group.
77. Instrumental Rationality - The decision-maker’s personal aims or objectives for the
organization that are used to guide decisions.
78. Intergovernmental Panel on Climate Change (IPCC) - A scientific body established at the request of member governments, dedicated to the task of providing the world with an objective, scientific view of climate change and its political and economic impacts.
79. Internet Computing Technologies (ICT) - More commonly expanded as information and communications technology; an extended term for information technology (IT) which stresses the role of unified communications and the integration of
telecommunications (telephone lines and wireless signals), computers, and necessary
enterprise software, middleware, storage, and audio-visual systems, which enable
users to access, store, transmit, and manipulate information.
81. Law of Diminishing Marginal Utility - A law of economics stating that as a person
increases consumption of a product, while keeping consumption of other products
constant, there is a decline in the marginal utility that person derives from consuming
each additional unit of that product.
82. Likelihood - A measurement that describes the potential of an event or activity to result
in a particular outcome.
83. Luhmann’s Abstract Theory - Theory that proposes society is a (1) Communicative
System that operates much like a human consciousness, where there is a constant
flow of communication and there is a (2) Cognitive System where there is a constant
flow of thoughts. Both observe something.
84. Man-in-the-Middle Attack - An attack where the attacker secretly relays and possibly
alters the communication between two parties who believe they are directly
communicating with each other.
85. Markov Decision Process (MDP) - A mathematical framework for modeling decision
making in situations where outcomes are partly random and partly under the control
of a decision maker. MDPs are useful for studying a wide range of optimization
problems solved via dynamic programming and reinforcement learning.
86. Method Privilege - The method an attacker uses to elevate privileges on a computing
system.
88. Modernism - In general, includes the activities and creations of those who felt the
traditional forms of art, architecture, literature, religious faith, philosophy, social
organization, activities of daily life, and even the sciences, were becoming ill-fitted to
their tasks and outdated in the new economic, social, and political environment of an
emerging fully industrialized world.
89. Monte Carlo Method (MCM) - A broad class of computational algorithms that rely on
repeated random sampling to obtain numerical results. They are often used in
physical and mathematical problems and are most useful when it is difficult or
impossible to use other mathematical methods.
90. Monumental Change - Emerging events that change everything at the level of their
scope which can be fact, theory, paradigm, episteme, ontos, existence, or absolute.
91. Moving Target Defense (MTD) - A new strategy motivated by the asymmetric costs
borne by cyber defenders that takes an advantage afforded to attackers and reverses it
to advantage defenders instead.
94. Non-Linear - A mathematical function or relationship whose output is not directly proportional to its input, i.e., one whose graph is not a straight line.
95. Normative Decision Theory - Yields prescriptions for what are rationally correct
decisions (what one ought to do).
96. Objectivity - A philosophical concept where the state or quality of being true is outside of
a subject's individual biases, interpretations, feelings, and imaginings.
97. Office of Management and Budget - The largest office within the Executive Office of
the President of the United States (EOP) that is responsible for producing the
President's Budget. OMB also measures the quality of agency programs, policies, and
procedures to see if they comply with the president's policies and coordinates inter-
agency policy initiatives.
100. Optimization - The action of making the best or most effective use of a situation or
resource.
103. Pareto Improvement - A change in the allocation of resources that makes at least one party better off without making any other party worse off. Pareto efficiency is obtained when no further such improvements exist, i.e., when one party's situation cannot be improved without making another party's situation worse. Pareto efficiency does not imply equality or fairness.
104. Plan of Action and Milestones (POA&M) - A permanent record that identifies tasks to
be accomplished in order to resolve security weaknesses. Required for any
accreditation decision that requires corrective actions, it specifies resources required
to accomplish the tasks enumerated in the plan and milestones for completing the
tasks.
105. Poisson Distribution - A discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space if these events occur with a known average rate and independently of the time since the last event. Formula: P(k events in interval) = (λ^k e^(−λ)) / k!, where λ = average number of events per interval, e = Euler's number, and k! = the factorial of k.
107. Post-Normal Science - A novel approach for the use of science on issues with uncertain
facts, disputed values, high stakes, and urgent decisions. Also described as the stage
where we are today, where all the comfortable assumptions about science, its
production and its use, are in question.
110. Principle of Insufficient Reason - States that if the (n) possibilities are
indistinguishable except for their names, then each possibility should be assigned a
probability equal to 1/n.
111. Probabilis - Latin foundation for English word probability; translates to “a measure of
the authority of a witness in a legal case, and often correlated with the witness's
nobility”.
114. Proscriptive - A strategy that develops measures that condemn an event or activity;
relates to pre-mortem analysis.
115. Prospect Theory - A behavioral economic theory that describes the way people choose
between probabilistic alternatives that involve risk, where the probabilities of
outcomes are known. The theory states that people make decisions based on the
potential value of losses and gains rather than the final outcome, and that people
evaluate these losses and gains using certain heuristics.
116. Psychometric - A field of study concerned with the theory and technique of
psychological measurement, and with the objective measurement of skills and
knowledge, abilities, attitudes, personality traits, and educational achievement.
117. Rational Action Theory - A framework for understanding and often formally modeling
social and economic behavior. The basic premise of rational action theory is that
aggregate social behavior results from the behavior of individual actors, each of
whom is making their individual decisions. The theory therefore focuses on the
determinants of the individual choices.
118. Rationality - The quality or state of being reasonable, based on facts or reason.
Rationality implies the conformity of one's beliefs with one's reasons to believe, or of
one's actions with one's reasons for action. Determining optimality for rational
behavior requires a quantifiable formulation of the problem, and making several key
assumptions.
119. Reciprocity - The practice of exchanging things with others for mutual benefit,
especially privileges granted by one country or organization to another.
120. Redundancy - The duplication of critical components or functions of a system with the
intention of increasing reliability of the system, usually in the form of a backup or
fail-safe.
121. Reflexive - Of a method or theory in the social sciences that takes account of itself, or of the effect of the personality or presence of the subject on what (i.e., the object) is being investigated. A reflexive relationship is bidirectional, with the cause and the effect affecting one another such that neither can be assigned purely as cause or effect. A high level of social reflexivity would be defined by an individual shaping their own norms, tastes, politics, desires, and so on.
123. Resilience - A design objective that reinforces the ability to absorb or avoid damage
without suffering complete failure.
124. Risk - The measured certainty of an outcome due to an event or activity where these
outcomes result as sequences of cause and effect.
125. Risk Avoidance - A risk assessment technique that entails eliminating hazards, activities
and exposures that place an organization's valuable assets at risk.
126. Risk Cost Estimate - The cost of corrective action as compared to a baseline estimate. Example: Risk Cost Estimate = Risk Cost (0.5 × BCE = $2.50) + Baseline Cost Estimate ($5.00) = $7.50.
127. Risk Determinate Factors (RDF) - A computation performed by dividing previous risk
cost/time estimates by the actual costs/times; it is a number between 0-1 if costs or
times were underestimated. Can be used to refine the estimates of risk times or costs.
128. Risk Factors - A feature or characteristic that contributes to the risk exposure for an
organization.
130. Risk Time Estimate - A quantified measure of the corrective action time as compared to a baseline estimate. Example: Risk Time Estimate = Risk Time (0.5 × BTE = 12 months) + Baseline Time Estimate (24 months) = 36 months.
131. Risk-Society - A society increasingly preoccupied with the future (and also with safety),
which generates the notion of risk, and serves as a systematic way of dealing with
hazards and insecurities induced and introduced by modernization itself.
132. Robustness - A design concept that reduces variation in a product without eliminating
the causes of the variation.
134. Royal Society - The oldest and most prestigious scientific society in Britain. It was
formed by followers of Francis Bacon to promote scientific discussion, especially in
the physical sciences, and received its charter from Charles II in 1662.
137. Social Choice Theory - A theoretical framework for analysis of combining individual
opinions, preferences, interests, or welfares to reach a collective decision or social
welfare in some sense.
138. Socioeconomic - Relating to or concerned with the interaction of social and economic
factors.
139. State Variables - One of the variables used to describe the state of a dynamical system.
Each state variable corresponds to one of the coordinates of the underlying state
space.
140. Strategic - Relating to the identification of long-term or overall aims and interests and
the means of achieving them.
142. Subjectivity - Some information, idea, situation, or physical thing considered true only
from the perspective of a subject or subjects.
143. Tactical - Of, relating to, or constituting actions carefully planned to gain a specific end.
144. Technical Risk Management - The executive function of controlling hazards and perils.
145. Trade-Off Analysis - An analysis strategy that employs Pareto Principles to compare
options where one quality or aspect is lost in return for a gained quality or aspect.
147. U.S. Department of Commerce - The Cabinet department of the United States
government concerned with promoting economic growth with a mission to promote
job creation and improved living standards for all Americans by creating an
infrastructure that promotes economic growth, technological competitiveness, and
sustainable development. Among its tasks are gathering economic and demographic
data for business and government decision-making, and helping to set industrial
standards.
151. Utility - A measure of preferences over some set of goods and services, or the measured
contribution of something to someone’s definition of success.
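The short sketch below gives hypothetical worked examples for several of the formulas defined above (the Damage-Effort Ratio in entry 33, Expected Utility in entry 54, the Poisson distribution in entry 105, and the Risk Cost/Time Estimates and Risk Determinate Factor in entries 126, 127, and 130); all input values are invented for illustration and do not come from this study.

import math

# Damage-Effort Ratio (entry 33): per-resource damage potential divided by the
# attacker's effort; the attack surface is summarized here as the sum of ratios.
resources = {"public web form": (6.0, 1.0),
             "admin SSH channel": (9.0, 6.0),
             "backup file share": (7.0, 3.5)}
attack_surface = sum(damage / effort for damage, effort in resources.values())
print(f"Attack surface (sum of DERs): {attack_surface:.2f}")

# Expected Utility (entry 54): EU(a|o) = sum over states si of P(si|a,o) * U(si).
p_given_ao = [0.70, 0.25, 0.05]      # P(si | action a, observation o)
utilities = [100, 40, -200]          # U(si)
eu = sum(p * u for p, u in zip(p_given_ao, utilities))
print(f"EU(a|o) = {eu:.1f}")

# Poisson distribution (entry 105): P(k events) = lambda^k * e^(-lambda) / k!.
lam, k = 2.0, 3
print(f"P({k} events) = {lam**k * math.exp(-lam) / math.factorial(k):.4f}")

# Risk Cost/Time Estimates (entries 126 and 130) and the Risk Determinate
# Factor (entry 127): estimate / actual, below 1 when the risk was underestimated.
bce, bte, risk_factor = 5.00, 24, 0.5
risk_cost_estimate = risk_factor * bce + bce       # $7.50
risk_time_estimate = risk_factor * bte + bte       # 36 months
actual_cost = 10.00
rdf = risk_cost_estimate / actual_cost
print(f"Risk Cost Estimate ${risk_cost_estimate:.2f}, Risk Time Estimate "
      f"{risk_time_estimate:.0f} months, cost RDF {rdf:.2f}")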
APPENDIX B.