
UNCERTAINTY ANALYSIS METHODS

Sankaran Mahadevan
Email: [email protected]
Vanderbilt University, School of Engineering
Consortium for Risk Evaluation with Stakeholders Participation, III
Nashville, TN 37235

and

Sohini Sarkar
Email: [email protected]
Vanderbilt University, School of Engineering
Consortium for Risk Evaluation with Stakeholders Participation, III
Nashville, TN 37235

November 2009
CBP-TR-2009-002, Rev. 0

CONTENTS

LIST OF FIGURES

LIST OF ABBREVIATIONS AND ACRONYMS

LIST OF NOMENCLATURE

ABSTRACT

1.0 INTRODUCTION

2.0 INPUT UNCERTAINTY QUANTIFICATION
    2.1 Physical Variability
        2.1.1 Modeling Variability in System Properties
        2.1.2 Modeling Variability in External Conditions
        2.1.3 Stationary External Processes
        2.1.4 Non-Stationary External Processes
    2.2 Data Uncertainty
        2.2.1 Sparse Statistical Data
        2.2.2 Measurement Error
        2.2.3 Data Available in Interval Format

3.0 UNCERTAINTY PROPAGATION METHODS
    3.1 Propagation of Physical Variability
    3.2 Propagation of Data Uncertainty
    3.3 Surrogate Models
        3.3.1 Stochastic Response Surface Method
        3.3.2 Kriging or Gaussian Process Models
    3.4 Sensitivity Analysis Methods
    3.5 Multi-Physics Models
    3.6 Model Error Quantification
        3.6.1 Solution Approximation Error
        3.6.2 Model Form Error

4.0 MODEL CALIBRATION, VALIDATION AND EXTRAPOLATION
    4.1 Model Calibration
    4.2 Model Validation
    4.3 Confidence Assessment in Extrapolation

5.0 PROBABILISTIC PERFORMANCE ASSESSMENT
    5.1 Individual Criteria
    5.2 Multiple Criteria

6.0 CONCLUSION

7.0 REFERENCES

LIST OF FIGURES

Figure 1. Uncertainty Quantification, Propagation and Management Framework
Figure 2. Precipitation Data for Aiken, SC (National Oceanic and Atmospheric Administration)
Figure 3. Example of Physical Variability Propagation
Figure 4. Gaussian Process Model With Uncertainty Bounds
Figure 5. Bayes Network
Figure 6. First-order Reliability Method


LIST OF ABBREVIATIONS AND ACRONYMS

ARIMA Autoregressive Integrated Moving Average


CDF Cumulative Distribution Function
FORM First-order Reliability Method
GP Gaussian Process
IID Independent and Identically Distributed
KLE Karhunen-Loeve Expansion
LHS Latin Hypercube Sampling
MCMC Markov Chain Monte Carlo
MPP Most Probable Point
PA Performance Assessment
PCE Polynomial Chaos Expansion
PDF Probability Density Function
RE Richardson Extrapolation
SRSM Stochastic Response Surface Method
SRV Standard Random Variable
S-N Stress vs. Number of Cycles
SORM Second-order Reliability Method
V&V Verification and Validation


LIST OF NOMENCLATURE

B                  Vector of Reliability Indices
B                  Bayes Factor
B^i                Backward Operator of Order i
C(x1,x2)           Covariance Function of a Random Process
N(μ,σ²)            Gaussian Distribution With Mean μ and Variance σ²
P(H0), P(Ha)       Prior Probabilities of Null and Alternative Hypotheses
P(H0|D), P(Ha|D)   Probabilities of H0 and Ha Given Observed Data D
P(D|H0), P(D|Ha)   Probabilities of Observing Data D Given H0 and Ha
Pf                 Probability of Failure
R                  Correlation Matrix
X                  Vector of Input Random Variables
c, c1              Constants
fi(x)              Eigenfunctions of C(x1,x2)
fX(x)              Joint Probability Density Function of X
g(X)               Model Output
k                  Regulatory Requirement
yexp               Experimental Observation
ytrue              True Value of the Parameter
zt, zt−i           Observations at the t-th and (t−i)-th Time Steps
∇^d, ∇^D           Backward Difference Operators of Degree d and D
Φ(B,R)             Standard Normal Multivariate CDF
ΦP(·)              Polynomial of Order P
ΘQ(·)              Polynomial of Order Q
α                  Unit Gradient Vector of the Limit State in Standard Normal Space
β                  Reliability Index
εd                 Discretization Error
εexp               Measurement Error
εh                 Input Parameter Error
εmodel             Model Form Error
εnum               Numerical Solution Error
εobs               Overall Prediction Error
εs                 Stochastic Analysis Error
εt                 Error at the t-th Time Step
εt−i               Error at the (t−i)-th Time Step
η                  Vector of Random Variables in Uncorrelated Standard Normal Space
λi                 Eigenvalues of C(x1,x2)
φi                 Coefficients of the ARIMA Model
φp(·)              Polynomial of Order p
θi                 Coefficients of the ARIMA Error Model
θq(·)              Polynomial of Order q
Y(x,χ)             Random Process Dependent on Spatial Coordinate x and an Event χ
Ȳ(x)               Mean of the Random Process Y(x,χ)
ξi                 Sets of Uncorrelated Standard Normal Random Variables


UNCERTAINTY ANALYSIS METHODS


Sankaran Mahadevan
Sohini Sarkar
Vanderbilt University, School of Engineering
Consortium for Risk Evaluation with Stakeholders Participation, III
Nashville, TN 37235

ABSTRACT

This report surveys available analysis techniques to quantify the uncertainty in performance assessment (PA) arising from various sources. Three sources of uncertainty – physical variability, data uncertainty, and model error – are considered. The uncertainty quantification methods are described in the context of four types of analyses needed, namely, (1) quantification of uncertainty in the inputs to the PA models, (2) propagation of input uncertainty through the PA models, (3) model error quantified through verification and validation activities, and (4) probabilistic PA. Random variable and random process descriptions of physical variability are outlined. Methods for handling data uncertainty through flexible families of probability distributions, confidence bounds, interval analysis and Bayesian analysis are described. Useful surrogate modeling and sensitivity analysis techniques for efficient uncertainty propagation analysis are discussed, as well as methods to quantify the various sources of model error. Statistical hypothesis testing techniques (both classical and Bayesian) are discussed for the validation of PA models, and a Bayesian approach to quantify the confidence in model prediction with respect to field conditions is developed. First-order approximations as well as efficient Monte Carlo sampling techniques for probabilistic PA are described.

1.0 INTRODUCTION

Uncertainty quantification is important in assessing and predicting performance of complex engineering systems, especially in the absence of adequate experimental or real-world data. Simulation of complex physical systems involves multiple levels of modeling ranging from the material to component to subsystem to system. Interacting models and simulation codes from multiple disciplines (multiple physics) may be required, with iterative analyses between some of the codes. As the models are integrated across multiple disciplines and levels, the problem becomes more complex and assessing the predictive capability of the overall system model becomes more difficult. Many factors contribute to the uncertainty in the prediction of the system model, including variability in model input variables, modeling errors, assumptions and approximations, measurement errors, and sparse and imprecise data.

The overall goal of this report is to discuss possible methods and tools for quantifying uncertainty. Sources of uncertainty are listed below:

• Physical variability
• Data uncertainty
• Model error

Physical variability: This type of uncertainty, also referred to as aleatory or irreducible uncertainty, arises from natural or inherent random variability of physical processes and variables, due to many factors such as environmental and operational variations, construction processes, and quality control. This type of uncertainty is present both in system properties (e.g., material strength, porosity, diffusivity, geometry variations, reaction rates) and external influences and demands on the system (e.g., concentration of chemicals, temperature, humidity, mechanical loads). As a result, in model-based prediction of system behavior, there is uncertainty regarding the precise values for model parameters and model inputs, leading to uncertainty about the precise values of the model output. Such quantities are represented in engineering analysis as random variables, with statistical parameters such as mean values, standard deviations, and distribution types estimated from observed data or in some cases assumed. Variations over space or time are modeled as random processes.

Data uncertainty: This type of uncertainty falls under the category of epistemic uncertainty (i.e., knowledge or information uncertainty) or reducible uncertainty (i.e., the uncertainty is reduced as more information is obtained). Data uncertainty occurs in different forms. In the case of a quantity treated as a random variable, the accuracy of the statistical distribution parameters depends on the amount of data available. If the data is sparse, the distribution parameters themselves are uncertain and may need to be treated as random variables. On the other hand, information may be imprecise or qualitative, and it is not easy to treat this type of uncertainty through random variables. In some cases, data regarding some variables may only be available as a range of values, based on expert opinion. Non-probabilistic representations such as fuzzy sets and evidence theory are available for describing such uncertainties. Measurement error (either in the laboratory or in the field) is another important source of data uncertainty.

Model error: This results from approximate mathematical models of the system behavior and from numerical approximations during the computational process, resulting in two types of error in general: solution approximation error, and model form error.

The performance assessment (PA) of a complex system involves the use of numerous analysis models, each with its own assumptions and approximations. The errors from the various analysis components combine in a complicated manner to produce the overall model error. This is also referred to as model bias.

The roles of several types of uncertainty in the use of model-based simulation for performance assessment can be easily illustrated with the following example. Consider the probability of an undesirable event denoted by g(X) < k, which can be computed from

P(g(X) < k) = \int_{g(X) < k} f_X(x) \, dx    (1)

where: X is the vector of input random variables, f_X(x) is the joint probability density function of X, g(X) is the model output, and k is the regulatory requirement in performance assessment.

Every term on the right hand side of Equation (1) has uncertainty. There is inherent variability represented by the vector of random variables X, data uncertainty (due to inadequate data) regarding the distribution type and distribution parameters of f_X(x), and model errors in the computation of g(X). Thus it is necessary to systematically identify the various sources of uncertainty and develop the framework for including them in the overall PA uncertainty quantification.

The uncertainty analysis methods covered in this report are grouped along four major steps of analysis that are needed for probabilistic PA:

• Input uncertainty quantification
• Uncertainty propagation analysis
• Model uncertainty quantification (calibration, verification, validation, and extrapolation)
• Probabilistic performance assessment
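Before detailing these steps, Equation (1) can be made concrete with a small Monte Carlo sketch in Python. The limit state g, the input distributions, and the limit k below are hypothetical stand-ins, not models from this report; the point is only that P(g(X) < k) is estimated by sampling when the integral is intractable.

import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Hypothetical model output g(X) for two input random variables.
    return x[:, 0] ** 2 + 10.0 * x[:, 1]

k = 8.0            # regulatory requirement (assumed value)
n = 100_000        # number of Monte Carlo samples
# Assumed input distributions for X = (X1, X2).
x = np.column_stack((rng.normal(2.0, 0.3, n),
                     rng.lognormal(-1.0, 0.5, n)))
p = np.mean(g(x) < k)                       # estimate of P(g(X) < k)
se = np.sqrt(p * (1.0 - p) / n)             # standard error of the estimate
print(f"P(g(X) < k) = {p:.4f} +/- {se:.4f}")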

A brief summary of the analysis methods covered in the four steps is provided below:

Input uncertainty quantification: Physical variability of parameters can be quantified through random variables by statistical analysis. Parameters that vary in time or space are modeled as random processes or random fields with appropriate correlation structure. Data uncertainty that leads to uncertainty in the distribution parameters and distribution types can be addressed using confidence intervals and Bayesian statistics. Methods to include several sources of data uncertainty, namely, sparse data, interval data and measurement error, are discussed.

Uncertainty propagation analysis: Both classical and Bayesian probabilistic approaches can be investigated to propagate uncertainty between individual sub-models and through the overall system model. To reduce the computational expense, surrogate models can be constructed using several different techniques. Methods for sensitivity analysis in the presence of uncertainty are discussed.

Model uncertainty quantification (calibration, verification, validation, and extrapolation): Model calibration is the process of adjusting model parameters to obtain good agreement between model predictions and experimental observations (McFarland, 2008). Both classical and Bayesian statistical methods are discussed for model calibration with available data. One particular concern is how to properly integrate different types of data, available at different levels of the model hierarchy. Assessment of the "correct" implementation of the model is called verification, and assessment of the degree of agreement of the model response with the available physical observation is called validation (McFarland, 2008). Model verification and validation activities help to quantify model error (both model form error and solution approximation error). A possible Bayesian approach is discussed for quantifying the confidence in model extrapolation from laboratory conditions to field conditions.

Probabilistic performance assessment: Limit-state-based reliability analysis methods are discussed to help quantify the PA results in a probabilistic manner. Methods are also discussed to compute the confidence bounds in probabilistic PA results. Monte Carlo simulation with high-fidelity analysis modules is computationally expensive; hence surrogate (or abstracted) models are frequently used with Monte Carlo simulation. In that case, the uncertainty or error introduced by the surrogate model also needs to be quantified.

Figure 1 shows the four stages, within a conceptual framework for systematic quantification, propagation and management of various types of uncertainty. The methods discussed in this report address all the four steps shown in Figure 1. While uncertainty has been dealt with using probabilistic as well as non-probabilistic (e.g., fuzzy sets, possibility theory, evidence theory) formats in the literature, this report will focus only on probabilistic analysis, mainly because the mathematics of probabilistic computation are very well established, whereas the non-probabilistic methods are still under development and generally result in interval computations that are expensive when applied to large problems with many variables.

The different stages of analysis in Figure 1 are not strictly sequential. For example, stage 3 (verification and validation – commonly denoted as V&V) appears after system analysis and uncertainty propagation. However, it is almost impossible to perform V&V on the system scale, because of extrapolation in time and space; therefore V&V is usually done for the sub-models. Also, several of the inputs to the overall system model may be calibrated based on the results of sub-model analysis, sensitivity analysis, and V&V activities. Thus the four stages in Figure 1 simply group together the different types of analysis, and might occur in different sequences for different problems and different sub-models.

[Figure 1 is a flow diagram: physical variability (aleatory) and data uncertainty (epistemic) feed input uncertainty quantification (stage 1); these propagate through the system analysis model to give output uncertainty, including model error (stage 2); model calibration, extrapolation, and V&V against test data quantify model uncertainty (stage 3); the results support probabilistic PA (stage 4), which feeds risk management and design changes.]

Figure 1. Uncertainty Quantification, Propagation and Management Framework

__________________________

¹ The box Data in the input uncertainty quantification stage includes laboratory data, historical field data, literature sources, and expert opinion.
² The box Design Changes may refer to conceptual, preliminary, or detailed design, depending on the development stage.
³ The boxes Design Changes and Risk Management are outside the scope of this report, although they are part of the overall uncertainty framework.

Uncertainty analysis methods currently used in PA activities are discussed in another Cementitious Barriers Partnership report. The quantification of uncertainty in current PAs is limited to quantifying the probability distributions of key parameters. A more comprehensive implementation of uncertainty quantification for environmental PAs has been hampered by the numerous sources of uncertainty and the long time durations considered in the PAs. The methods presented in this report provide a basis for advancing the current state of the art in uncertainty quantification of environmental PAs.

The remainder of this report is organized as follows: Section 2 discusses methods to quantify the uncertainty in the inputs to the system analysis model, addressing both physical variability and data uncertainty. Model error is addressed in Sections 3 and 4.

2.0 INPUT UNCERTAINTY QUANTIFICATION

2.1 Physical Variability

Examples of cementitious barrier model input variables with physical variability (i.e., inherent, natural variability) include:

• Material properties (e.g., mechanical, thermal, porosity, permeability, diffusivity)
• Geometrical properties (e.g., structural dimensions, concrete cover depth)
• External conditions (e.g., mechanical loading, boundary conditions, physical processes such as freeze-thaw, chemical processes such as carbonation, chloride or sulfate attack)

Many uncertainty quantification studies have only focused on quantifying and propagating the inherent variability in the input parameters. Well-established statistical (both classical and Bayesian) methods are available for this purpose.

2.1.1 Modeling Variability in System Properties

In probabilistic analysis, the sample-to-sample variations (random variables) in the parameters are addressed by defining them as random variables with probability density functions (PDFs). This assumes that the system/material is homogeneous on a macroscale. For example, chloride ion diffusivity has been modeled using a lognormal distribution (Hong, 2000; Gulikers, 2006; Rafiq et al., 2004; Chen, 2006), and water-cement ratio has been modeled using a normal distribution (Chen, 2006) and uniform and triangular distributions (Kong et al., 2002).

Some parameters may vary not only from sample to sample (as is the case for random variables), but also in the spatial or time domain. Parameter variation over time and space can be modeled as random processes or random fields. For example, concrete cover depth and compressive strength have been modeled as random fields using squared exponential correlation functions (Stewart and Mullard, 2007).

Some well known methods for simulating random processes are spectral representation (SR) (Gurley, 1997), Karhunen-Loeve expansion (KLE) (Ghanem and Spanos, 2003; Huang et al., 2007; Mathelin et al., 2005), and polynomial chaos expansion (PCE) (Huang et al., 2007; Mathelin et al., 2005; Red-Horse and Benjamin, 2004). The PCE method has been used to represent the stochastic model output as a function of stochastic inputs.

Consider an example of representing a random process using KLE, expressed as

Y(x, \chi) = \bar{Y}(x) + \sum_{i=1}^{\infty} \sqrt{\lambda_i} \, \xi_i(\chi) f_i(x)    (2)

where: \bar{Y}(x) is the mean of the random process Y(x,χ), λi and fi(x) are eigenvalues and eigenfunctions of C(x1,x2), and ξi(χ) is a set of uncorrelated standard normal random variables (x is a space or time coordinate, and χ is an index representing different realizations of the random process).

Using Equation (2), realizations of the random process Y(x,χ) can be easily simulated by generating samples of the random variables ξi(χ), and these realizations of Y(x,χ) can be used as inputs to PA.
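As a numerical illustration of Equation (2), the minimal Python sketch below simulates realizations of a Gaussian random process on a one-dimensional grid. The exponential covariance function, correlation length, constant mean, and truncation order are assumptions made for the example; the eigenvalues and eigenfunctions are obtained from the discretized covariance matrix.

import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)                       # space (or time) grid
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)   # assumed covariance C(x1, x2)

# Eigenvalues and eigenfunctions of the discretized covariance matrix,
# reordered so the largest eigenvalues come first.
lam, f = np.linalg.eigh(C)
lam, f = lam[::-1], f[:, ::-1]

m = 20                                  # truncation order of the expansion
y_mean = np.full_like(x, 10.0)          # assumed mean function Y_bar(x)
xi = rng.standard_normal((5, m))        # SRV samples xi_i for 5 realizations
# Equation (2), truncated at m terms:
Y = y_mean + xi @ (np.sqrt(lam[:m])[:, None] * f[:, :m].T)
print(Y.shape)                          # 5 realizations on 200 grid points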

2.1.2 Modeling Variability in External Conditions

Some boundary conditions (e.g., temperature and moisture content) might exhibit a recurring pattern over shorter periods and also a trend over longer periods. An example of variability in an external condition, i.e., rainfall, is illustrated in Figure 2. It is evident from the figure that the rainfall data has a pattern over a period of 1 year and a downward trend over a number of years. These can be numerically represented by a seasonal model using an autoregressive integrated moving average (ARIMA) method generally used for linear¹ nonstationary² processes (Box et al., 1994). This method can be used to predict the temperature or the rainfall magnitudes in the future so that they can be used in the durability analysis of the structures under future environmental conditions.

Figure 2. Precipitation Data for Aiken, SC (National Oceanic and Atmospheric Administration)

2.1.3 Stationary External Processes

For a stationary process³, the ARIMA method expresses the observation at the t-th time step in terms of the observations at previous time steps as

z_t = c + \sum_{i=1}^{p} \phi_i z_{t-i} + \varepsilon_t    (3)

where: z_t and z_{t-i} are observations at the t-th and (t−i)-th time steps, c is a constant, the \phi_i's are coefficients, and \varepsilon_t is the error between the observed and the predicted values at the t-th time step.

Assuming that the error at the t-th time step is also dependent on the errors at previous time steps, \varepsilon_t can also be expressed as

\varepsilon_t = c_1 + \sum_{i=1}^{q} \theta_i \varepsilon_{t-i}    (4)

where: c_1 is a constant and the \theta_i's are coefficients.

Using a backward operator B such that B^i z_t = z_{t-i} and combining Eqs. (3) and (4) results in Equation (5):

\phi_p(B) z_t = \theta_q(B) \varepsilon_t    (5)

where: \phi_p(B) and \theta_q(B) are polynomials of p-th and q-th order. The coefficients of the polynomials can be determined using the least-squares method.

__________________________

¹ The current observation can be expressed as a linear function of past observations.
² A process is said to be non-stationary if its probability structure varies with the time or space coordinate.
³ A process is said to be stationary if its probability structure does not vary with the time or space coordinate.
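The least-squares determination of coefficients mentioned above can be sketched as follows for the autoregressive part, Equation (3). The synthetic series, the order p, and the noise level are assumptions for the example; in practice a time-series package would fit the full multiplicative model of Equation (9).

import numpy as np

rng = np.random.default_rng(2)
# Synthetic stationary series: z_t = 5 + 0.6 z_{t-1} + noise (for illustration).
z = np.empty(500)
z[0] = 12.0
for t in range(1, z.size):
    z[t] = 5.0 + 0.6 * z[t - 1] + rng.normal(0.0, 1.0)

p = 2  # assumed AR order
# Least-squares fit of z_t = c + phi_1 z_{t-1} + ... + phi_p z_{t-p} + eps_t
A = np.column_stack([np.ones(z.size - p)] +
                    [z[p - i:z.size - i] for i in range(1, p + 1)])
coef, *_ = np.linalg.lstsq(A, z[p:], rcond=None)
print("c =", np.round(coef[0], 3), "phi =", np.round(coef[1:], 3))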

2.1.4 Non-Stationary External Processes

A random non-stationary process fluctuates about a mean value that exhibits a specific pattern. If the differences in levels of fluctuation are considered, the process can be simulated using the same method as for stationary processes. For example, differentiating a second order polynomial twice will result in a constant. Thus, a non-stationary process of d-th degree can be expressed as

\phi_p(B) \nabla^d z_t = \theta_q(B) \varepsilon_t    (6)

where: \nabla^d is called the backward difference operator of the d-th degree.

If the process exhibits patterns over a shorter period (s) and a trend over a longer period, the process can be expressed as

\Phi_P(B^s) \nabla_s^D z_t = \Theta_Q(B^s) \varepsilon_t    (7)

where: \Phi_P(B^s) and \Theta_Q(B^s) are polynomials of order P and Q, B^s z_t = z_{t-s}, and D is the order of differentiation.

A similar model may be used to relate the current error (error between observation and model prediction at the t-th time step) to the previous errors (errors between observations and model predictions at previous time steps) as

\phi_p(B) \nabla^d \varepsilon_t = \theta_q(B) a_t    (8)

where: \phi_p(B) and \theta_q(B) are polynomials of order p and q, d is the order of differentiation, and a_t is a white noise process.

The final model is obtained by combining Eqs. (7) and (8) as

\phi_p(B) \Phi_P(B^s) \nabla^d \nabla_s^D z_t = \theta_q(B) \Theta_Q(B^s) a_t    (9)

Eq. (9) is referred to as a general multiplicative model of order (p × d × q) × (P × D × Q)_s. This method can be used to simulate a seasonal process.

It may also be important to quantify the statistical correlations between some of the input random variables. Many previous studies on uncertainty quantification simply assume either zero or full correlation, in the absence of adequate data. A Bayesian approach may be pursued for this purpose, as described in subsection 2.2.

2.2 Data Uncertainty

A Bayesian updating approach is described below to quantify uncertainty due to inadequate statistical data and measurement errors (εexp). This is consistent with the framework proposed in Figure 1, and is used to update the statistics of different physical variables and their distribution parameters. The prior distributions are based on available data and expert judgment, and these are updated as more data becomes available through experiments, analysis, or real-world experience.

2.2.1 Sparse Statistical Data

For any random variable that is quantitatively described by a probability density function, there is always uncertainty in the corresponding distribution parameters due to small sample size. As testing and data collection activities are performed, the state of knowledge regarding the uncertainty changes, and a Bayesian updating approach can be implemented. For example, suppose we decide that an input variable X follows a Gaussian distribution N(μ,σ²) with μ and σ estimated from the data.

There is uncertainty in the normal distribution assumption, as well as in the estimates of the distribution parameters μ and σ, depending on the sample size. In the Bayesian approach, μ and σ are also treated as random variables, and their statistics are updated based on new data. However, we do not know the distribution of μ and σ a priori, so we may assume, for example, a Gaussian distribution for μ and a Gamma distribution for the precision σ⁻² as an initial guess, and then do a Bayesian update after more data is collected.

The Bayesian approach also applies to joint distributions of multiple random variables, which also helps to include the uncertainty in correlations between the variables. A prior joint distribution is assumed (or individual distributions and correlations are assumed), and then updated as data becomes available.


Instead of assuming a well known prior distribution form (e.g., uniform, normal) for sparse data sets, either empirical distribution functions or flexible families of distributions based on the data can be constructed. A bootstrapping⁴ technique can then be used to quantify the uncertainty in the distribution parameters. The empirical distribution function is constructed by ranking the observations from lowest to highest value, and assigning a probability value to each observation.

Examples of flexible distribution families include the Johnson family, Pearson family, gamma distribution, and stretched exponential distribution. The use of the Johnson family distribution has been explored by Marhadi et al., 2008, and extended to quantify the uncertainty in distribution parameters by McDonald et al., 2009. In constructing the Johnson family distribution, the available data is used to calculate the first four moments, and then the distribution form is chosen based on the values of the four moments. A jack-knife procedure is used to estimate the uncertainty in the distribution parameters, based on repeated estimation by leaving out one or more data points in each estimation.

__________________________

⁴ Bootstrapping is a data-based simulation method for statistical inference by re-sampling from an existing data set (Efron et al., 1994).
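A minimal sketch of the bootstrapping idea for sparse data follows; the seven sample values are hypothetical. Resampling with replacement and re-estimating the parameters each time gives an empirical picture of the uncertainty in μ and σ themselves.

import numpy as np

rng = np.random.default_rng(3)
data = np.array([11.2, 9.8, 10.5, 12.1, 10.0, 9.4, 11.7])  # sparse sample

n_boot = 5000
mu_b = np.empty(n_boot)
sigma_b = np.empty(n_boot)
for b in range(n_boot):
    resample = rng.choice(data, size=data.size, replace=True)
    mu_b[b] = resample.mean()
    sigma_b[b] = resample.std(ddof=1)

# 90% bootstrap intervals for the distribution parameters.
print("mu:   ", np.percentile(mu_b, [5, 95]).round(2))
print("sigma:", np.percentile(sigma_b, [5, 95]).round(2))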

2.2.2 Measurement Error

The measured quantity yexp usually deviates from the unknown true value ytrue due to the uncertainties in the test setup, equipment, environment, and operator. For example, large errors in the measurement of expansion due to sulfate attack can be seen in the experiments performed by Ferraris et al., 1997. The measurement error εexp can be expressed as yexp = ytrue + εexp. The measurement error in each input variable in many studies (e.g., Barford, 1985) is assumed to be independent and identically distributed (IID) with zero mean and an assumed variance, i.e., εexp ~ N(0,σ²exp). Due to the measurement uncertainty, the distribution parameter σexp cannot be obtained as a deterministic value. Instead, it is a random variable with a prior density τ(σexp). Thus, when new data is available after testing, the distribution of σexp can be easily updated using the Bayes theorem.
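The Bayes update of σexp just described can be sketched with a discretized (grid) posterior, which avoids any conjugate-prior assumption. The prior τ(σexp) and the replicate measurement errors below are hypothetical, and scipy is an assumed dependency.

import numpy as np
from scipy import stats

sigma = np.linspace(0.01, 2.0, 400)                 # grid of sigma_exp values
prior = stats.lognorm.pdf(sigma, s=0.5, scale=0.5)  # assumed prior tau(sigma_exp)

# Hypothetical replicate measurement errors, eps_exp ~ N(0, sigma_exp^2).
eps = np.array([0.12, -0.30, 0.05, 0.22, -0.18])
like = np.array([stats.norm.pdf(eps, 0.0, s).prod() for s in sigma])

post = prior * like            # Bayes theorem (unnormalized posterior)
dx = sigma[1] - sigma[0]
post /= post.sum() * dx        # normalize on the grid
print("posterior mean of sigma_exp:", (sigma * post).sum() * dx)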

Another way to represent measurement error εexp is through an interval only, and not as a random variable. In that case, one can only say the true value ytrue lies in the interval [yexp − εexp, yexp + εexp] without any probability distribution assigned to εexp. Methods to include data in interval format are discussed next.

2.2.3 Data Available in Interval Format

Some quantities in the system model may not have probabilistic representation, since data may be sparse or may be based on expert opinion. Some experts might only provide information about a range of possible values for some model input variable. Representations such as fuzzy sets, possibility theory, and evidence theory have been used. This report is focused on probabilistic methods to include interval data.

Transformations have been proposed from a non-probabilistic to probabilistic format, through the maximum likelihood approach (Langley, 2000; Ross et al., 2002). Such transformations have attracted the criticism that information is either added or lost in the process. Two ways to address the criticism are: (1) construct empirical distribution functions based on interval data collected from multiple experts or experiments (Ferson et al., 2007); or (2) construct flexible families of distributions with bounds on distribution parameters based on the interval data, without forcing a distribution assumption (McDonald et al., 2008). These can then be treated as random variables with probability distribution functions and combined with other random variables in a Bayesian framework to quantify the overall system model uncertainty. The use of families of distributions will result in multiple probability distributions for the output, representing the contributions of both physical variability and data uncertainty.

3.0 UNCERTAINTY PROPAGATION METHODS

In this section, methods to quantify the contributions of different sources of uncertainty and error as they propagate through the system analysis model, including the contribution of model error, are discussed, in order to quantify the overall uncertainty in the system model output.

This section will cover two issues: (1) quantification of model output uncertainty, given input uncertainty (both physical variability and data uncertainty), and (2) quantification of model error (due to both model form selection and solution approximations).

Several uncertainty analysis studies, including a study with respect to the Yucca Mountain high-level waste repository, have recognized the distinction between physical variability and data uncertainty (Helton and Sallaberry, 2009a & 2009b). As a result, these methods evaluate the variability in an inner loop calculation and data uncertainty in an outer loop calculation. Another example is provided by Holdren et al., 2006 in a baseline risk assessment study with respect to the Idaho Cleanup Project, where contributions of different sources of uncertainty are separately analyzed, such as from inventory, infiltration, sorption characteristics, model calibration, and simulation periods.

3.1 Propagation of Physical Variability

Various probabilistic methods (e.g., Monte Carlo simulation and first-order or second-order analytical approximations) have been studied for the propagation of physical variability in model inputs and model parameters, expressed through random variables and random processes or fields. Stochastic finite element methods (e.g., Ghanem and Spanos, 2003; Haldar and Mahadevan, 2000) have been developed for single discipline problems in structural, thermal, and fluid mechanics. An example of such propagation is shown in Figure 3. Several types of combinations of system analysis model and statistical analysis techniques are available (a Monte Carlo sketch of the first option appears at the end of this subsection):

• Monte Carlo simulation with the deterministic system analysis as a black-box (e.g., Robert and Casella, 2004) to estimate model output statistics or probability of regulatory compliance;
• Monte Carlo simulation with a surrogate model to replace the deterministic system analysis model (e.g., Ghanem and Spanos, 2003; Isukapalli et al., 1998; Xiu and Karniadakis, 2003; Huang et al., 2007), to estimate model output statistics or probability of regulatory compliance;
• Local sensitivity analysis using finite difference, perturbation or adjoint analyses, leading to estimates of the first-order or second-order moments of the output (e.g., Blischke and Murthy, 2000); and
• Global sensitivity and effects analysis, and analysis of variance in the output (e.g., Box et al., 1978).

These techniques are generic, and can be applied to multi-physics analysis with multiple component modules as in the PA of cementitious barriers. However, most applications of these techniques have only considered physical variability. The techniques need to include the contribution of data uncertainty and model error to the overall model prediction uncertainty.
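A sketch of the first combination listed above, Monte Carlo simulation around a black-box deterministic analysis: the stand-in model and input distributions below are hypothetical, and in practice each model call would be a full simulation run.

import numpy as np

rng = np.random.default_rng(4)

def system_model(x):
    # Stand-in for the deterministic system analysis (hypothetical form).
    return np.exp(0.1 * x[:, 0]) + x[:, 1] ** 2

n = 20_000
# Physical variability in the inputs (assumed distributions).
X = np.column_stack((rng.normal(5.0, 1.0, n),
                     rng.uniform(0.5, 1.5, n)))
Y = system_model(X)

k = 3.0   # assumed regulatory limit
print("output mean/std:", Y.mean().round(3), Y.std().round(3))
print("P(Y > k):", np.mean(Y > k))   # probability of non-compliance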

[Figure 3 is a schematic of stochastic finite element propagation: probabilistic inputs (random processes K(xi) for boundary conditions and F(xi) for mechanical vibration; random fields E(xi) for material properties, H(xi) for thermal loads, and G(xi) for geometric properties) are propagated through finite element analysis of a thermal protection panel under dynamic loads, accounting for spatial and temporal variability of system properties and loads and for material degradation, to give probabilistic outputs: stress, strain, and displacement.]

Figure 3. Example of Physical Variability Propagation

Computational effort is a significant issue in practical applications, since these techniques involve a number of repeated runs of the system analysis model. The system analysis may be replaced with an inexpensive surrogate model in order to achieve computational efficiency; this is discussed in Section 3.3. Efficient Monte Carlo techniques have also been pursued to reduce the number of system model runs, including Latin hypercube sampling (LHS) (McKay et al., 1979; Farrar et al., 2003) and importance sampling (Mahadevan and Raghothamachar, 2000; Zou et al., 2003).

3.2 Propagation of Data Uncertainty

Three types of data uncertainty were discussed in Section 2. Sparse point data results in uncertainty about the parameters of the probability distributions describing quantities with physical variability. In that case, uncertainty propagation analysis takes a nested implementation. In the outer loop, samples of the distribution parameters are randomly generated, and for each set of sampled distribution parameter values, probabilistic propagation analysis is carried out as in Section 3.1. This results in the computation of multiple probability distributions of the output, or confidence intervals for the estimates of probability of non-compliance in PA.

In the case of measurement error, choice of the uncertainty propagation technique depends on how the measurement error is represented. If the measurement error is represented as a random variable, it is simply added to the measured quantity, which is also a random variable due to physical variability. Thus a sum of two random variables may be used to include both physical variability and measurement error in a quantity of interest. If the measurement error is represented as an interval, one way to implement probabilistic analysis is to represent the interval through families of distributions or upper and lower bounds on probability distributions, as discussed in Section 2.2.3. In that case, multiple probabilistic analyses, using the same nested approach as in the case of sparse data, can be employed to generate multiple output distributions or confidence intervals for the model output. The same approach is possible for interval variables that are only available as a range of values, as in the case of expert opinion.
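A sketch of the nested implementation just described: the outer loop samples the uncertain distribution parameters (data uncertainty), the inner loop propagates physical variability given those parameters, and the result is a spread of probability estimates rather than a single number. All distributions and the limit k are assumed for illustration.

import numpy as np

rng = np.random.default_rng(5)

def model(x):
    return x ** 2 + 1.0                 # stand-in system model

n_outer, n_inner, k = 200, 2000, 30.0   # k: assumed regulatory limit
p_fail = np.empty(n_outer)
for j in range(n_outer):
    # Outer loop: sample the distribution parameters themselves
    # (epistemic uncertainty from sparse data).
    mu = rng.normal(5.0, 0.4)
    sigma = rng.uniform(0.8, 1.2)
    # Inner loop: physical variability given the sampled parameters.
    x = rng.normal(mu, sigma, n_inner)
    p_fail[j] = np.mean(model(x) > k)

# Family of results -> confidence bounds on the non-compliance probability.
print(np.percentile(p_fail, [5, 50, 95]).round(3))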


Propagation of uncertainty is conceptually very simple, but computationally quite expensive to implement, especially when both physical variability and data uncertainty are to be considered. The presence of both types of uncertainty requires a nested implementation of uncertainty propagation analysis (simulation of data uncertainty in the outer loop and simulation of physical variability in the inner loop). If the system model runs are time-consuming, then uncertainty propagation analysis could be prohibitively expensive. One way to overcome the computational hurdle is to use an inexpensive surrogate model to replace the detailed system model, as discussed next.

3.3 Surrogate Models

Surrogate models (also known as response surface models) are frequently used to replace the expensive system model, and used for multiple simulations to quantify the uncertainty in the output. Many types of surrogate modeling methods are available, such as linear and nonlinear regression, polynomial chaos expansion, Gaussian process modeling (e.g., Kriging model), splines, moving least squares, support vector regression, relevance vector regression, neural nets, or even simple look-up tables. For example, Goktepe et al., 2006 used neural network and polynomial regression models to simulate expansion of concrete specimens under sulfate attack. All surrogate models require training or fitting data, collected by running the full-scale system model repeatedly for different sets of input variable values. Selecting the sets of input values is referred to as statistical design of experiments, and there is extensive literature on this subject. Two types of surrogate modeling methods are discussed below that might achieve computational efficiency while maintaining high accuracy in output-uncertainty quantification. The first method expresses the model output in terms of a series expansion of special polynomials such as Hermite polynomials, and is referred to as the stochastic response surface method (SRSM). The second method expresses the model output through a Gaussian process, and is referred to as Gaussian process modeling.

3.3.1 Stochastic Response Surface Method

The common approach for building a surrogate or response surface model is to use least squares fitting based on polynomials or other mathematical forms based on physical considerations. In SRSM, the response surface is constructed by approximating both the input and output random variables and fields through series expansions of standard random variables (e.g., Isukapalli et al., 1998; Xiu and Karniadakis, 2003; Huang et al., 2007). This approach has been shown to be efficient, stable, and convergent in several structural, thermal, and fluid flow problems. A general procedure for SRSM is as follows (a worked one-dimensional sketch appears after this list):

• Representation of random inputs (either random variables or random processes) in terms of Standard Random Variables (SRVs) by K-L expansion, as in Equation (2).

• Expression of model outputs in chaos series expansion. Once the inputs are expressed as functions of the selected SRVs, the output quantities can also be represented as functions of the same set of SRVs. If the SRVs are Gaussian, the output can be expressed as a Hermite polynomial chaos series expansion in terms of Gaussian variables. If the SRVs are non-Gaussian, the output can be expressed by a general Askey chaos expansion in terms of non-Gaussian variables (Ghanem and Spanos, 2003).

• Estimation of the unknown coefficients in the series expansion. The improved probabilistic collocation method (Isukapalli et al., 1998) is used to minimize the residual in the random dimension by requiring the residual at the collocation points equal to zero. The model outputs are computed at a set of collocation points and used to estimate the coefficients. These collocation points are the roots of the Hermite polynomial of a higher order. This way of selecting collocation points would capture points from regions of high probability (Tatang et al., 1997).

• Calculation of the statistics of the output that has been cast as a response surface in terms of a chaos expansion. The statistics of the response can be estimated with the response surface using either Monte Carlo simulation or analytical approximation.
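A one-dimensional sketch of the SRSM procedure above, using probabilists' Hermite polynomials from numpy: the stand-in model, the chaos order, and the simple least-squares solve at the collocation points (roots of the next-higher-order Hermite polynomial) are illustrative assumptions.

import numpy as np
from numpy.polynomial import hermite_e as He

def model(xi):
    # Stand-in for the expensive system model, driven by one Gaussian SRV.
    return np.exp(0.5 * xi)

order = 3
# Collocation points: roots of the Hermite polynomial of order (order + 1),
# which lie in the high-probability region of the standard normal SRV.
pts = He.hermeroots([0.0] * (order + 1) + [1.0])

# Solve for the chaos coefficients from model runs at the collocation points.
V = He.hermevander(pts, order)
coef, *_ = np.linalg.lstsq(V, model(pts), rcond=None)

# Output statistics from the inexpensive response surface.
xi = np.random.default_rng(6).standard_normal(100_000)
y = He.hermeval(xi, coef)
print("surrogate mean:", y.mean().round(4), " exact mean:", round(np.exp(0.125), 4))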

roots of the Hermite polynomial of a higher order. one-dimensional Gaussian process model. Note how
This way of selecting collocation points would the uncertainty bounds are related to both the close-
capture points from regions of high probability ness to the training points, as well as the shape of the
(Tatang et al., 1997). curve.

• Calculation of the statistics of the output that has The basic idea of the GP model is that the output
been cast as a response surface in terms of a chaos quantities are modeled as a group of multivariate
expansion. The statistics of the response can be normal random variables. A parametric covariance
estimated with the response surface using either function is then constructed as a function of the
Monte Carlo simulation or analytical approxima- inputs. The covariance function is based on the idea
tion. that when the inputs are close together, the correla-
tion between the outputs will be high. As a result, the
3.3.2 Kriging or Gaussian Process Models uncertainty associated with the model prediction is
small for input values that are close to the training
Gaussian process (GP) models have several features points, and large for input values that are not close to
that make them attractive for use as surrogate mod- the training points. In addition, the GP model may in-
els. The primary feature of interest is the ability of corporate a systematic trend function, such as a linear
the model to “account for its own uncertainty.” That or quadratic regression of the inputs (in the notation
is, each prediction obtained from a Gaussian process of Gaussian process models, this is called the mean
model also has an associated variance, or uncertainty. function, while in Kriging it is often called a trend
This prediction variance primarily depends on the function). The effect of the mean function on predic-
closeness of the prediction location to the training tions that interpolate the training data is small, but
data, but it is also related to the functional form of the when the model is used for extrapolation, the predic-
response. For example, see Fig. 4, which depicts a tions will follow the mean function very closely.

interpolation
95% confidence intervals
observations

Figure 4. Gaussian Process Model With Uncertainty Bounds
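A sketch of a one-dimensional GP surrogate in the spirit of Figure 4, using scikit-learn (an assumed dependency); the training function and points are hypothetical. The predicted standard deviation grows away from the training data, which is the self-reported uncertainty discussed above.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):
    return np.sin(3.0 * x) + 0.5 * x      # stand-in for the system model

X_train = np.array([[0.1], [0.4], [0.5], [0.9], [1.3]])   # a few training runs
y_train = expensive_model(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3))
gp.fit(X_train, y_train)

# Predictions carry their own uncertainty: small near training data,
# large when extrapolating (e.g., at x = 2.0).
X_test = np.array([[0.45], [1.0], [2.0]])
mean, std = gp.predict(X_test, return_std=True)
for xv, m, s in zip(X_test.ravel(), mean, std):
    print(f"x = {xv:.2f}: {m:+.3f} +/- {1.96 * s:.3f}")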


Within the GP modeling technique, it is also possible to adaptively select the design of experiments to achieve very high accuracy. The method begins with an initial GP model built from a very small number of samples, and then one intelligently chooses where to generate subsequent samples to ensure the model is accurate in the vicinity of the region of interest. Since the GP model provides the expected value and variance of the output quantity, the next sample may be chosen in the region of highest variance, if the objective is to minimize the prediction variance. The method has been shown to be both accurate and computationally efficient for arbitrarily shaped functions (Bichon et al., 2007).

3.4 Sensitivity Analysis Methods

Sensitivity analysis serves several important functions: (1) identification of dominant variables or sub-models, thus helping to focus data collection resources efficiently; (2) identification of insignificant variables or sub-models of limited significance, helping to reduce the size of the problem and computational effort; and (3) quantification of the contribution of solution approximation error. Both local and global sensitivity analysis techniques are available to investigate the quantitative effect of different sources of variation (physical parameters, models, and measured data) on the variation of the model output. The primary benefit of sensitivity analysis to uncertainty analysis is to enable the identification of which physical parameters have the greatest influence on the output (Campolongo et al., 2000; Saltelli et al., 2000). An analysis of the impact of the parametric uncertainty is conducted to weed out those parameters that have an insignificant effect upon the system output. For example, Chen (2006) performed sensitivity analysis to identify the important parameters affecting the service life of concrete structures.

Three sensitivity analysis methods are factor screening, local sensitivity analysis, and global sensitivity analysis. Factor screening determines which parameters have the greatest impact on the system output variability, by evaluating the output at the extreme values within the ranges of the parameters. Local sensitivity analysis utilizes first-order derivatives of system output quantities with respect to the parameters. It is usually performed for a nominal set of parameter values. Global sensitivity analysis typically uses statistical sampling methods, such as Latin Hypercube Sampling, to determine the total uncertainty in the system output and to apportion that uncertainty among the various parameters. Classical and Bayesian statistical analysis techniques, including the analysis of variance and differential sensitivity analysis, can be pursued to assess the global influence of an input parameter on an output variable by sampling from each input parameter's probability density function or from intervals of possible values.
3.5 Multi-Physics Models

In the past decade, different approaches have been proposed to quantify the uncertainty for individual physical models or simulation codes (e.g., Glimm and Sharp, 1999; Hanson, 1999; Devolder et al., 2002; Bae et al., 2003; Hanson and Hemez, 2003; Oberkampf et al., 2003; Millman et al., 2006; Witteveen and Bijl, 2006). For example, Hanson (1999) proposed a Bayesian probabilistic method for quantifying uncertainties in simulation predictions. Bae et al. (2003) used evidence theory to handle epistemic uncertainty about a structural system. Mathelin et al. (2004) and Witteveen and Bijl (2006) applied a polynomial chaos-based stochastic method for uncertainty propagation in numerical simulations. However, these existing approaches have not accounted for the uncertainty quantification in multiple modules of the system model, where the challenge is to combine data (available from different sources, in different formats) and model predictions regarding different physical phenomena (e.g., diffusion, chemical reaction, and mechanical damage), thus using all available information to quantify the overall prediction uncertainty. Urbina and Mahadevan (2009) have recently proposed a Bayes network approach to uncertainty quantification in multi-physics models.

3.6 Model Error Quantification

Model errors may relate to governing equations, boundary and initial condition assumptions, loading description, and approximations or errors in solution algorithms (e.g., truncation of higher order terms, finite element discretization, curve-fitting models for material damage such as the S-N curve). Overall model error may be quantified by comparing model prediction and experimental observation, properly accounting for uncertainties in both. This overall error measure combines both model form and solution approximation errors, and so it needs to be considered in two parts. Numerical errors in the model prediction can be quantified first, using sensitivity analysis, uncertainty propagation analysis, discretization error quantification, and truncation (residual) error quantification. The measurement error in the input variables can be propagated to the prediction of the output. The error in the prediction of the output due to the measurement error in the input variables is approximated by using a first-order sensitivity analysis (Rebba et al., 2006). Then the model form error can be quantified based on all the above errors, following the approach illustrated for a heat transfer problem by Rebba et al. (2006).

3.6.1 Solution Approximation Error

Several components of prediction error, such as discretization error (denoted by εd) and uncertainty propagation analysis error (εs), can be considered. Several methods to quantify the discretization error in finite element analysis are available in the literature. However, most of these methods do not quantify the actual error; instead, they only quantify some indicator measures to facilitate adaptive mesh refinement. The Richardson extrapolation (RE) method comes closest to quantifying the actual discretization error (Richards, 1997). (In some applications, the model is run with different levels of resolution, until an acceptable level of accuracy is achieved; formal error quantification may not be required.)

Errors in uncertainty propagation analysis (εs) are method-dependent, i.e., sampling error occurs in Monte Carlo methods, and truncation error occurs in response surface methods (either conventional or polynomial chaos-based). For example, sampling error could be assumed to be a Gaussian random variable with zero mean and variance given by σ²/N, where N is the number of Monte Carlo runs and σ² is the original variance of the model output (Rubinstein, 1981). The truncation error is simply the residual error in the response surface.

Rebba et al. (2006) used the above concept to construct a surrogate model for finite element discretization error in structural analysis, using the stochastic response surface method. Gaussian process models may also be employed for this purpose. Both options are helpful in quantifying the solution approximation error.

3.6.2 Model Form Error

The overall prediction error is a combination of errors resulting from numerical solution approximations and model form selection. A simple way is to express the total observed error (difference between prediction and observation) as the sum of the following error sources:

\varepsilon_{obs} = \varepsilon_{num} + \varepsilon_{model} - \varepsilon_{exp}    (10)

where: εnum, εmodel, and εexp represent numerical solution error, model form error, and output measurement error, respectively.

However, solution approximation error results from multiple sources and is probably a nonlinear combination of various errors such as discretization error, round-off and truncation errors, and stochastic analysis errors. One option is to construct a regression model consisting of the individual error components (Rebba et al., 2006).

The residual of such a regression analysis will include the model form error (after subtracting the experimental error effects). By denoting εobs as the difference between the data and prediction, i.e., εobs = yexp − ypred, we can construct the following relation by considering a few sources of numerical solution error (Rebba et al., 2006):

\varepsilon_{obs} = f(\varepsilon_h, \varepsilon_d, \varepsilon_s) + \varepsilon_{model} - \varepsilon_{exp}    (11)

where: εh, εd, and εs represent output error due to input parameter measurement error, finite element discretization error, and uncertainty propagation analysis error, respectively, all of which contribute to numerical solution error.

Rebba et al. (2006) illustrated the estimation of model form error using the above concept for a one-dimensional heat conduction problem, assuming a linear form of Eq. (11). However, the function f(εh, εd, εs) is nonlinear, and may be approximated through a response surface with respect to the three error variables, using a polynomial chaos expansion. The quantity εmodel − εexp is simply the residual error of such a response surface. Thus the distribution of model error εmodel is quantified by knowing the distributions of the residual error and the measurement error.
extrapolation of model prediction from laboratory
Note that the above approach to quantifying model conditions to field conditions.
form error is only within the context of model
validation—where actual data is available from 4.1 Model Calibration
targeted validation experiments—and compared with
corresponding model predictions. In the context of Two types of statistical techniques may be pursued
PA, however, the concern is with extrapolation in for model calibration uncertainty, the least squares ap-
time and space, and no direct comparison is possible proach, and the Bayesian approach. The least squares
between prediction and observation (at the time when approach estimates the values of the calibration
the PA is done). Quantifying the model errors during parameters that minimize the discrepancy between
extrapolation is difficult, and a Bayesian methodology model prediction and experimental observation.
might need to be pursued within restrictive assump- This approach can also be used to calibrate surrogate
tions (e.g., no change in physics). The Bayesian ap- models or low-fidelity models, based on high-fidelity
proach is discussed in Section 4. runs, by treating the high-fidelity results similar to
experimental data.
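A minimal sketch of the least squares approach is shown below, assuming a hypothetical one-parameter degradation model and synthetic observations; the actual system model would replace the model function.

```python
# Least-squares calibration of a single model parameter theta against data.
import numpy as np
from scipy.optimize import least_squares

def model(theta, t):
    # Hypothetical stand-in for the simulation code being calibrated
    return 1.0 - np.exp(-theta * t)

t_obs = np.array([1.0, 2.0, 5.0, 10.0, 20.0])     # observation times
y_obs = np.array([0.10, 0.17, 0.38, 0.63, 0.85])  # synthetic observations

result = least_squares(lambda p: model(p[0], t_obs) - y_obs, x0=[0.05])
print("calibrated theta:", result.x[0])
```

To calibrate a surrogate or low-fidelity model against high-fidelity runs, y_obs would simply hold the high-fidelity results instead of experimental data.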


The second approach is Bayesian calibration (Kennedy and O'Hagan, 2001). This approach is flexible and allows different forms for the calibration factor, and it has been illustrated for a heat transfer example problem (McFarland et al., 2007; McFarland, 2008).

In the literature, several researchers have calibrated their models using experimental results, especially when the phenomenon being modeled is complicated and the model is based on simplifying assumptions. For example, while modeling the degradation of concrete structures under sulfate attack, Tixier and Mobasher (2003) calibrated two parameters (the reaction rate constant and the fraction of porosity available for solid product deposition), and Krajcinovic et al. (1992) calibrated one parameter (the reaction rate constant).

4.2 Model Validation

Model validation involves comparing prediction with observation data (either historical or experimental) when both have uncertainty. Since there is uncertainty in both model prediction and experimental observation, it is necessary to pursue rigorous statistical techniques to perform model validation assessment rather than simple graphical comparisons, provided data is even available for such comparisons. Statistical hypothesis testing is one approach to quantitative model validation under uncertainty, and both classical and Bayesian statistics have been explored. Classical hypothesis testing is a well-developed statistical method for accepting or rejecting a model based on an error statistic (see, e.g., Trucano et al., 2001; Hills and Trucano, 2002; Paez and Urbina, 2002; Hills and Leslie, 2003; Rutherford and Dowding, 2003; Dowding et al., 2004; Chen et al., 2004; Oberkampf and Barone, 2006). Validation metrics have been investigated in recent years based on Bayesian hypothesis testing (Zhang and Mahadevan, 2003; Mahadevan and Rebba, 2005; Rebba and Mahadevan, 2006), reliability-based methods (Rebba and Mahadevan, 2008), and risk-based decision analysis (Jiang and Mahadevan, 2007 & 2008).

In Bayesian hypothesis testing, prior probabilities are assigned to the null and alternative hypotheses, P(H0) and P(Ha) respectively, such that P(H0) + P(Ha) = 1. Here H0: model error < allowable limit, and Ha: model error > allowable limit. When data D is obtained, the probabilities are updated as P(H0 | D) and P(Ha | D) using the Bayes theorem:

P(H0 | D) / P(Ha | D) = [P(D | H0) / P(D | Ha)] × [P(H0) / P(Ha)]   (12a)

A Bayes factor (Jeffreys, 1961) B is then defined as the ratio of the likelihoods of observing D under H0 and Ha, i.e., the first term in the square brackets on the right-hand side of Eq. (12a). If B > 1, the data gives more support to H0 than to Ha. Also, the confidence in H0, based on the data, comes from the posterior null probability P(H0 | D), which can be rearranged from Eq. (12a) as

P(H0 | D) = P(H0)B / [P(H0)B + 1 − P(H0)]   (12b)

Typically, in the absence of prior knowledge, equal probabilities may be assigned to each hypothesis, and thus P(H0) = P(Ha) = 0.5. The posterior null probability then simplifies to B/(B + 1). Thus a B value of 1.0 represents 50% confidence in the null hypothesis being true.

Bayesian hypothesis testing is also able to account for uncertainty in the distribution parameters, as mentioned in Section 2.2. For such problems, the validation metric (the Bayes factor) itself becomes a random variable. In that case, the probability of the Bayes factor exceeding a specified value can be used as the decision criterion for model acceptance/rejection.

Notice that model validation only refers to the situation when controlled, targeted experiments are performed to evaluate model prediction, and both the model runs and the experiments are done under the same set of input and boundary conditions. The validation is done only by comparing the outputs of the model and the experiment.
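A minimal sketch of the Bayes factor computation in Eqs. (12a) and (12b) is given below, assuming Gaussian measurement error; the data, the error standard deviation, and the prior interval under the alternative hypothesis are all hypothetical.

```python
# Bayes factor and posterior null probability for model validation data.
import numpy as np
from scipy import stats
from scipy.integrate import quad

d = np.array([0.04, -0.01, 0.03, 0.02, 0.05])  # observed errors (synthetic)
sigma = 0.03                                   # measurement error std (assumed)

def likelihood(mu):
    return np.prod(stats.norm.pdf(d, loc=mu, scale=sigma))

p_d_h0 = likelihood(0.0)                            # H0: model error = 0
p_d_ha, _ = quad(lambda mu: likelihood(mu) / 0.4,   # Ha: error ~ U(-0.2, 0.2)
                 -0.2, 0.2)

B = p_d_h0 / p_d_ha                  # Bayes factor, bracketed term in Eq. (12a)
post_h0 = 0.5 * B / (0.5 * B + 0.5)  # Eq. (12b) with P(H0) = 0.5, i.e. B/(B+1)
print("Bayes factor:", B, "-> P(H0 | D):", post_h0)
```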


Once the model is calibrated, verified, and validated, it may be investigated for confidence in extrapolating to field conditions different from laboratory conditions. This is discussed in the next section.

4.3 Confidence Assessment in Extrapolation

The Bayesian approach can also be used for assessing the confidence in extrapolating model prediction from laboratory conditions to field conditions, from lower-resolution to higher-resolution analysis, and from the lower level to the higher level in system analysis, through the construction of a Bayes network (Jensen and Jensen, 2001). Bayes networks are directed acyclic graphical representations with nodes to represent the random variables and arcs to show the conditional dependencies among the nodes. Data in any one node can be used to update the statistics of all other nodes. This property makes the Bayes network a powerful tool to extrapolate model confidence from laboratory conditions to field conditions (Mahadevan and Rebba, 2005). After computing the posterior distribution of the output under field conditions through the Bayes network, the confidence in the prediction can be calculated similarly to Section 4.2, using the Bayes factor.

Markov chain Monte Carlo (MCMC) simulation is used for numerical implementation of the Bayesian updating analysis. Several efficient sampling techniques are available for MCMC, such as Gibbs sampling, the Metropolis algorithm, and the Metropolis-Hastings algorithm (Gilks et al., 1996); a minimal sketch is given after Figure 5.

Figure 5 shows an illustrative Bayes network for confidence extrapolation. An ellipse represents a random variable and a rectangle represents observed data. A solid-line arrow represents a conditional probability link, and a dashed-line arrow represents the link of a variable to its observed data, if available. The probability densities of the variables Ω, z, and y are updated using the validation data Y. The updated statistics of Ω, z, and y are then used to estimate the updated statistics of the decision variable d (i.e., the assessment metric). In addition, both model prediction and predictive experiments are related to the input variables X via the physical parameters Φ. Note that there is no observed data available for d; yet the confidence in the prediction of d can be calculated by making use of the observed data in several other nodes and propagation of posterior statistics through the Bayes network.

The Bayes network thus links the various simulation codes and corresponding experimental observations to facilitate two objectives: (1) uncertainty quantification and propagation and (2) extrapolation of confidence assessment from the validation domain to the application domain.

Figure 5. Bayes Network (node labels in the original figure: X, Φ, Ω, z, Z, y, Y, and d)
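As referenced above, a minimal random-walk Metropolis sketch of the Bayesian updating step is given below; the prior, the likelihood, and the data are hypothetical placeholders for the node relationships of an actual Bayes network.

```python
# Random-walk Metropolis sampling of a posterior (Bayesian updating step).
import numpy as np
from scipy import stats

y_obs = np.array([1.2, 0.8, 1.1, 0.9, 1.3])   # synthetic observed data Y

def log_posterior(theta):
    log_prior = stats.norm.logpdf(theta, loc=1.0, scale=0.5)
    log_like = stats.norm.logpdf(y_obs, loc=theta, scale=0.2).sum()
    return log_prior + log_like

rng = np.random.default_rng(1)
theta, chain = 1.0, []
for _ in range(10_000):
    candidate = theta + rng.normal(0.0, 0.1)   # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(candidate) - log_posterior(theta):
        theta = candidate                      # accept; otherwise keep current
    chain.append(theta)

posterior = np.array(chain[2000:])             # discard burn-in samples
print("posterior mean:", posterior.mean(), "std:", posterior.std())
```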


5.0 PROBABILISTIC PERFORMANCE ASSESSMENT

Several methods are available in the reliability methods literature to efficiently perform probabilistic performance assessment, as fast alternatives to expensive Monte Carlo simulation. Performance assessment can be conducted with respect to single or multiple requirements. Efficient reliability analysis techniques that are based on first-order or second-order approximations or adaptive importance sampling can be used for this purpose. When multiple requirements are defined, computation of the overall probability of satisfying multiple performance criteria requires integration over a multidimensional space defined by unions and intersections of individual events (of satisfaction or violation of individual criteria).

An important observation here is that the same methods that are described here for reliability analysis can also be used to compute the cumulative distribution function (CDF) of the output, which may be of more general interest with respect to uncertainty quantification of model output. The term reliability analysis here refers only to computing the probability of exceeding or not meeting a single threshold value, which is a special case of constructing the entire CDF.

This section discusses methods for probabilistic performance assessment with respect to individual criteria (Section 5.1) and multiple criteria (Section 5.2).

5.1 Individual Criteria

Probabilistic performance assessment can be based on the concept of a limit state that defines the boundary between success and failure for a system (Haldar and Mahadevan, 2000). The limit state function, g, is derived from a system performance criterion and formulated such that g ≤ 0 indicates failure. If the input parameters in the system analysis are uncertain, so will be the predicted value of g. The probability of system failure, i.e., P(g ≤ 0), may be obtained from the volume integral under the joint probability density function of the input random variables over the failure domain as

Pf = ∫…∫_{g(x) ≤ 0} fX(x1, x2, …, xn) dx1 dx2 … dxn   (13)

where:

Pf is the probability of failure, and fX is the joint probability density of the random variable vector X with n elements; the vector x represents a single realization of X. Note that the integral is taken over the failure domain, i.e., where g ≤ 0, so Pf = P(g ≤ 0).

The basic Monte Carlo simulation method evaluates the above integral by drawing random samples from the distributions of the variables X and evaluating whether g ≤ 0 in each run. The failure probability is then simply the number of samples with g ≤ 0 divided by the total number of samples. While this technique is very simple to implement, it is also very expensive for problems with low failure probability.

The First-Order Reliability Method (FORM) approximately estimates the failure probability as Pf = Φ(−β), where β is the minimum distance from the origin to the limit state in the space of uncorrelated standard normal variables5, as shown in Figure 6 (Hasofer and Lind, 1974). The minimum distance point on the limit state is referred to as the most probable point (MPP), and β is referred to as the reliability index. Finding the MPP is an optimization problem:

Minimize ||η||, subject to g(η) = 0   (14)

where η is the vector of random variables in the space of uncorrelated standard normal variables, and ||η|| denotes the norm of that vector.

__________________________

5 In general, a set of random variables x may be non-normal and correlated, but these may be transformed to an uncorrelated standard normal space (i.e., the space of normal random variables with zero mean and unit standard deviation) via a transformation T, i.e., η = T(x).
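A minimal sketch contrasting basic Monte Carlo with the FORM approximation of Eqs. (13) and (14) is given below, for a simple hypothetical limit state already expressed in standard normal space.

```python
# Failure probability by basic Monte Carlo and by FORM for g = 3 - x1 - x2.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def g(eta):
    return 3.0 - eta[0] - eta[1]   # hypothetical limit state, standard normals

# Basic Monte Carlo: fraction of samples falling in the failure domain g <= 0
rng = np.random.default_rng(2)
samples = rng.standard_normal((1_000_000, 2))
pf_mc = np.mean(g(samples.T) <= 0)

# FORM: minimize ||eta||^2 subject to g(eta) = 0, then Pf ~ Phi(-beta)
res = minimize(lambda e: e @ e, x0=np.array([1.0, 1.0]),
               constraints={"type": "eq", "fun": g})
beta = np.sqrt(res.fun)
print("Monte Carlo Pf:", pf_mc, "FORM Pf:", stats.norm.cdf(-beta))
```

For this linear limit state the FORM result is exact; for nonlinear limit states it is a first-order approximation at the MPP.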


Figure 6. First-order Reliability Method

Several optimization techniques, such as Newton search (Rackwitz and Fiessler, 1978) and sequential quadratic programming (Schittkowski, 1983), can be used to find the MPP. Second-order reliability methods (SORM) are also available for higher accuracy; these take into account the curvature of the limit state in the failure probability calculation (e.g., Breitung, 1984; Tvedt, 1990). Compared to basic Monte Carlo simulation, FORM and SORM require many fewer iterations to converge to the MPP, and thus drastically reduce the computational expense.

5.2 Multiple Criteria

When a PA is conducted with respect to multiple requirements, the overall system-level probability of meeting the requirements is calculated through unions or intersections of individual failure probabilities.

In the case of unions (i.e., the system fails if any one of the individual criteria is not met), the failure probability is

PF,series = P{∪k [gk(x) ≤ 0]}   (15)

This system failure probability may be computed using either Monte Carlo simulation or by extending the results of the first-order approximation in Section 5.1. Let B be the vector of reliability indices for each of the limit states, and let the elements of the matrix R be the dot products of the corresponding α vectors (the unit gradient vector of each limit state at the MPP in standard normal space) obtained from the FORM analysis for each limit state. Then the system failure probability in the above equation can be approximated as 1 − Φ(B, R), where Φ(B, R) is the standard normal multivariate CDF with correlation matrix R.
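A minimal sketch of this first-order series-system approximation is shown below, with hypothetical reliability indices and α vectors standing in for actual FORM results; it also evaluates the parallel-system counterpart given in Eq. (16) below.

```python
# First-order series and parallel system failure probability approximations.
import numpy as np
from scipy.stats import multivariate_normal

B = np.array([2.0, 2.5])            # component reliability indices (assumed)
alpha1 = np.array([0.8, 0.6])       # unit gradient vectors at the MPPs,
alpha2 = np.array([0.6, 0.8])       #   as would be returned by FORM
rho = alpha1 @ alpha2               # correlation between the limit states
R = np.array([[1.0, rho], [rho, 1.0]])

joint = multivariate_normal(mean=np.zeros(2), cov=R)
pf_series = 1.0 - joint.cdf(B)      # Eq. (15): P(union of failures)
pf_parallel = joint.cdf(-B)         # Eq. (16): P(intersection of failures)
print("series Pf:", pf_series, "parallel Pf:", pf_parallel)
```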


Closed-form representations of Φ(B, R) exist for the bivariate case (Dunnett and Sobel, 1954). If more than two limit states are considered, then one may elect to use bounding formulae (Ditlevsen, 1979), importance sampling methods (e.g., Mahadevan and Dey, 1998; Ambartzumian et al., 1997), multiple linearizations (Hohenbichler and Rackwitz, 1987), or a moment-based approximation (Pandey, 1998). For nonlinear limit states, the joint failure domain may be identified through an iterative linearization procedure (Mahadevan and Shi, 2001).

Similar concepts can be applied when the system failure is defined through intersections of individual failures (i.e., the system fails only if all the individual criteria are not met). In that case, the failure probability is

PF,parallel = P{∩k [gk(x) ≤ 0]}   (16)

Again, the failure probability of the parallel system can be calculated either by Monte Carlo simulation or from the results of the FORM analysis of its components as Φ(−B, R). In case the FORM-based estimate is too approximate, Monte Carlo simulation can be used for higher accuracy, but with a large number of simulations. Efficient sampling techniques such as importance sampling (Mahadevan and Dey, 1998) may be used to reduce the computational expense.

In some cases, the overall system failure definition may not be a simple union or intersection of individual failures, but may need to be represented as a combination of unions and intersections. In most cases, the system will not necessarily be in one of two states (failed or safe), but in one of several levels of performance or degradation. Accounting for the evolution of system states through time considerably increases the computational effort. The effort increases further when iterative multi-physics analysis is necessary, as in the case of several simultaneously active degradation processes. One option is to use first-order, second-moment approximations to B and R (Mahadevan and Smith, 2006) to reduce the computational expense, but at the cost of accuracy. A trade-off between accuracy and computational expense may be necessary.

An important observation is that the probability calculations described in Sections 5.1 and 5.2 are only with respect to physical variability, represented by the random variables X. The presence of data uncertainty and model errors makes the probability estimates themselves uncertain. Thus one can construct confidence bounds on the CDF of the output based on a nested two-loop analysis. In the outer loop, realizations of the variables representing information uncertainty (such as the distribution parameters of the probability distributions) and model errors are generated, and for each such realization, the output CDF is constructed in the inner loop. The collection of the resulting multiple CDFs is then used to construct the confidence bounds on the CDF. This nested implementation can become computationally demanding; in that case, a single-loop implementation that simultaneously performs both the outer-loop and inner-loop analyses may be pursued (McDonald et al., 2009).
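A minimal sketch of the nested two-loop analysis is given below; the model, the distribution parameter uncertainty, and the sample sizes are hypothetical.

```python
# Nested two-loop construction of confidence bounds on the output CDF.
import numpy as np

rng = np.random.default_rng(3)
y_grid = np.linspace(-2.0, 8.0, 101)
cdfs = []

for _ in range(200):                     # outer loop: distribution parameters
    mu = rng.normal(1.0, 0.2)            # uncertain mean of input X (assumed)
    sigma = abs(rng.normal(0.5, 0.1))    # uncertain std of input X (assumed)
    x = rng.normal(mu, sigma, 2000)      # inner loop: physical variability
    y = x**2 + x                         # stand-in for the system model output
    cdfs.append(np.mean(y[:, None] <= y_grid, axis=0))

cdfs = np.array(cdfs)
lower, upper = np.percentile(cdfs, [2.5, 97.5], axis=0)  # CDF confidence band
print("95% band width at y = 1:", (upper - lower)[np.searchsorted(y_grid, 1.0)])
```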


6.0 CONCLUSION

Uncertainty quantification in performance assessment involves consideration of three sources of uncertainty: inherent variability, information uncertainty, and model errors. This report described available methods to quantify the uncertainty in model-based prediction due to each of these sources, and addressed them in four stages: input characterization based on data; propagation of uncertainties and errors through the system model; model calibration, validation, and extrapolation; and performance assessment. Flexible distribution families were discussed to handle sparse data and interval data. Autoregressive models were discussed to handle time dependence. Methods to quantify model errors resulting from both model form selection and solution approximation were discussed. Bayesian methods were discussed for model calibration, validation, and extrapolation. An important issue is computational expense when iterative analysis between multiple codes is necessary. Uncertainty quantification multiplies the computational effort of deterministic analysis by an order of magnitude. Therefore the use of surrogate models and first-order approximations of overall output uncertainty were described to reduce the computational expense.

Many of the methods described in this report have been applied to mechanical systems that are small in size or time-independent, and the uncertainties considered were not very large. None of these simplifications is available in the case of long-term performance assessment of engineered barriers for radioactive waste containment, and real-world data to validate long-term model predictions is not available. Thus the extrapolations are based on laboratory data or limited-term observations, and come with large uncertainty. Therefore the benefit of uncertainty quantification is not so much in predicting failure probability or similar measures, but in facilitating engineering decision making, such as comparing different design and analysis options, and allocating resources for uncertainty reduction through further data collection and/or model refinement.


7.0 REFERENCES

Ambartzumian, R, Der Kiureghian, A, Ohanian, V & Sukiasian, H 1997, 'Multinormal probability by sequential conditioned importance sampling', in Advances in Safety and Reliability, Proceedings of ESREL '97, Vol. 2, 17-20 June, Lisbon, pp. 1261-1268.

Bae, H-R, Grandhi, RV & Canfield, RA 2003, 'Uncertainty quantification of structural response using evidence theory', AIAA Journal, 41(10), 2062-2068.

Barford, NC 1985, Experimental Measurements: Precision, Error, and Truth, Wiley, New York.

Bichon, BJ, Eldred, MS, Swiler, LP, Mahadevan, S & McFarland, JM 2007, 'Multimodal reliability assessment for complex engineering applications using efficient global optimization', Proceedings, 9th AIAA Non-Deterministic Approaches Conference, Waikiki, Hawaii.

Blischke, WR & Murthy, DNP 2000, Reliability: Modeling, Prediction, and Optimization, Wiley, New York.

Box, GEP, Hunter, WG & Hunter, JS 1978, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, Wiley-Interscience.

Box, GEP, Jenkins, GM & Reinsel, GC 1994, Time Series Analysis: Forecasting and Control, 3rd edition, Prentice Hall, Englewood Cliffs, New Jersey.

Breitung, K 1984, 'Asymptotic approximations for multinormal integrals', Journal of Engineering Mechanics, 110(3), 357-366.

Campolongo, F, Saltelli, A, Sorensen, T & Tarantola, S 2000, 'Hitchhiker's guide to sensitivity analysis', in Sensitivity Analysis, A Saltelli, K Chan & EM Scott (eds), John Wiley & Sons, pp. 15-47.

Chen, D 2006, Computational framework for durability assessment of reinforced concrete structures under coupled deterioration processes, PhD dissertation, Department of Civil and Environmental Engineering, Vanderbilt University.

Chen, W, Baghdasaryan, L, Buranathiti, T & Cao, J 2004, 'Model validation via uncertainty propagation and data transformation', AIAA Journal, 42(7), 1406-1415.

Devolder, B, Glimm, J, Grove, JW, Kang, Y, Lee, Y, Pao, K, Sharp, DH & Ye, K 2002, 'Uncertainty quantification for multiscale simulations', ASME Journal of Fluids Engineering, 124(1), 129-141.

Ditlevsen, O 1979, 'Narrow reliability bounds for structural systems', Journal of Structural Mechanics, 7(4), 453-472.

Dowding, KJ, Hills, RG, Leslie, I, Pilch, M, Rutherford, BM & Hobbs, ML 2004, Case Study for Model Validation: Assessing a Model for Thermal Decomposition of Polyurethane Foam, Sandia National Laboratories Tech. Rep. SAND2004-3632, Albuquerque, New Mexico.

Dunnett, CW & Sobel, M 1954, 'A bivariate generalization of Student's t-distribution, with tables for certain special cases', Biometrika, 41(1/2), 153-169.

Efron, B & Tibshirani, RJ 1994, An Introduction to the Bootstrap, Chapman & Hall/CRC.

Farrar, CR, Sohn, H, Hemez, FM, Anderson, MC, Bement, MT, Cornwell, PJ, Doebling, SW, Schultze, JF, Lieven, N & Robertson, AN 2003, Damage Prognosis: Current Status and Future Needs, Tech. Rep. LA-14051-MS, Los Alamos National Laboratory, Los Alamos, New Mexico.

Ferraris, CF, Clifton, JR, Stutzman, PE & Garboczi, EJ 1997, 'Mechanisms of degradation of Portland cement-based systems by sulfate attack', in Mechanisms of Chemical Degradation of Cement-Based Systems, London, pp. 185-192.

Ferson, S, Kreinovich, V, Hajagos, J, Oberkampf, W & Ginzburg, L 2007, Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty, Sandia National Laboratories Tech. Rep. SAND2007-0939, Albuquerque, New Mexico.

Ghanem, R & Spanos, P 2003, Stochastic Finite Elements: A Spectral Approach, Springer-Verlag, New York.

Gilks, WR, Richardson, S & Spiegelhalter, DJ 1996, Markov Chain Monte Carlo in Practice, Interdisciplinary Statistics Series, Chapman and Hall, Boca Raton, Florida.

Glimm, J & Sharp, DH 1999, 'Prediction and the quantification of uncertainty', Physica D, 133(1-4), 152-170.

Goktepe, AB, Inan, G, Ramyar, K & Sezer, A 2006, 'Estimation of sulfate expansion level of PC mortar using statistical and neural approaches', Construction and Building Materials, 20, 441-449.

Gulikers, J 2006, 'Considerations on the reliability of service life predictions using a probabilistic approach', Journal de Physique IV, 136, 233-241.

Gurley, KR 1997, 'Modeling and simulation of non-Gaussian processes', PhD thesis, University of Notre Dame.

Haldar, A & Mahadevan, S 2000, Probability, Reliability and Statistical Methods in Engineering Design, J. Wiley & Sons, New York.

Hanson, KM 1999, 'A framework for assessing uncertainties in simulation predictions', Physica D, 133(1-4), 179-188.

Hanson, KM & Hemez, FM 2003, 'Uncertainty quantification of simulation codes based on experimental data', Proceedings of the 41st AIAA Aerospace Sciences Meeting, January 6-9, Reno, Nevada.

Hasofer, AM & Lind, NC 1974, 'Exact and invariant second moment code format', Journal of the Engineering Mechanics Division, ASCE, 100(EM1), 111-121.

Helton, JC & Sallabery, CJ 2009a, 'Conceptual basis for the definition and calculation of expected dose in performance assessments for the proposed high-level radioactive waste repository at Yucca Mountain, Nevada', Reliability Engineering and System Safety, 94, 677-698.

Helton, JC & Sallabery, CJ 2009b, 'Computational implementation of sampling-based approaches to the calculation of expected dose in performance assessments for the proposed high-level radioactive waste repository at Yucca Mountain, Nevada', Reliability Engineering and System Safety, 94, 699-721.

Hills, RG & Leslie, I 2003, Statistical Validation of Engineering and Scientific Models: Validation Experiments to Application, Sandia National Laboratories Tech. Rep. SAND2003-0706, Albuquerque, New Mexico.

Hills, RG & Trucano, TG 2002, Statistical Validation of Engineering and Scientific Models: A Maximum Likelihood Based Metric, Sandia National Laboratories Tech. Rep. SAND2001-1783, Albuquerque, New Mexico.

Hohenbichler, M & Rackwitz, R 1987, 'First-order concepts in systems reliability', Structural Safety, 1(3), 177-188.

Holdren, KJ, Anderson, DL, Becker, BH, Hampton, NL, Koeppen, LD, Magnussen, SO & Sondrup, AJ 2006, Remedial Investigation and Baseline Risk Assessment of Operable Unit 7-13/14, U.S. Department of Energy, Idaho Operations Office.

Hong, HP 2000, 'Assessment of reliability of aging reinforced concrete structures', Journal of Structural Engineering, 126(12), 1458-1465.

Huang, S, Mahadevan, S & Rebba, R 2007, 'Collocation-based stochastic finite element analysis for random field problems', Probabilistic Engineering Mechanics, 22, 194-205.

Isukapalli, SS, Roy, A & Georgopoulos, PG 1998, 'Stochastic response surface methods (SRSMs) for uncertainty propagation: Application to environmental and biological systems', Risk Analysis, 18(3), 351-363.

Jeffreys, H 1961, Theory of Probability, 3rd edition, Oxford University Press, London.

Jensen, FV & Jensen, FB 2001, Bayesian Networks and Decision Graphs, Springer-Verlag, New York.

Jiang, X & Mahadevan, S 2007, 'Bayesian risk-based decision method for model validation under uncertainty', Reliability Engineering and System Safety, 92(6), 707-718.

Jiang, X & Mahadevan, S 2008, 'Bayesian validation assessment of multivariate computational models', Journal of Applied Statistics, 35(1), 49-65.

Kennedy, MC & O'Hagan, A 2001, 'Bayesian calibration of computer models (with discussion)', Journal of the Royal Statistical Society, Series B, 63(3), 425-464.

Kong, JS, Ababneh, AN, Frangopol, DM & Xi, Y 2002, 'Reliability analysis of chloride penetration in saturated concrete', Probabilistic Engineering Mechanics, 17, 305-315.

Krajcinovic, D, Basista, M, Mallick, K & Sumarac, D 1992, 'Chemo-micromechanics of brittle solids', Journal of the Mechanics and Physics of Solids, 40(5), 965-990.

Langley, RS 2000, 'A unified approach to the probabilistic and possibilistic analysis of uncertain systems', ASCE Journal of Engineering Mechanics, 126, 1163-1172.

Mahadevan, S & Dey, A 1998, 'Ductile system reliability analysis using adaptive importance sampling', Structural Safety, 20(2), 137-154.

Mahadevan, S & Raghothamachar, P 2000, 'Adaptive simulation for system reliability analysis of large structures', Computers and Structures, 77(6), 725-734.

Mahadevan, S & Rebba, R 2005, 'Validation of reliability computational models using Bayes networks', Reliability Engineering and System Safety, 87(2), 223-232.

Mahadevan, S & Shi, P 2001, 'Multiple linearization method for nonlinear reliability analysis', ASCE Journal of Engineering Mechanics, 127(11), 1165-1173.

Mahadevan, S & Smith, N 2006, 'Efficient first-order reliability analysis of multidisciplinary systems', International Journal of Reliability and Safety, 1(1/2), 137-154.

Marhadi, KS, Venkataraman, S & Pai, S 2008, 'Quantifying uncertainty in statistical distribution of small sample data using Bayesian inference of unbounded Johnson distribution', Proceedings, 49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Schaumburg, Illinois.

Mathelin, L, Hussaini, MY, Zang, TA & Bataille, F 2004, 'Uncertainty propagation for a turbulent, compressible nozzle flow using stochastic methods', AIAA Journal, 42(8), 1169-1176.

Mathelin, L, Hussaini, MY & Zang, TA 2005, 'Stochastic approaches to uncertainty quantification in CFD simulations', Numerical Algorithms, 38, 209-236.

McDonald, M, Zaman, K & Mahadevan, S 2008, 'Uncertainty quantification and propagation for multidisciplinary system analysis', 12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Paper No. 134794, Victoria, British Columbia, Canada.

McDonald, M, Zaman, K & Mahadevan, S 2009, 'Representation and first-order approximations for propagation of aleatory and distribution parameter uncertainty', AIAA-2009-2250, 50th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 4-7 May, Palm Springs, California.

McFarland, J, Mahadevan, S, Swiler, L & Giunta, A 2007, 'Bayesian calibration of the QASPR simulation', Proceedings of the 9th AIAA Non-Deterministic Approaches Conference, Honolulu, Hawaii.

McFarland, JM 2008, Uncertainty analysis for computer simulations through validation and calibration, PhD dissertation, Vanderbilt University, Nashville, Tennessee.

McKay, MD, Conover, WJ & Beckman, RJ 1979, 'A comparison of three methods for selecting values of input variables in the analysis of output from a computer code', Technometrics, 21, 239-245.

Millman, DR, King, PI, Maple, RC, Beran, PS & Chilton, LK 2006, 'Uncertainty quantification with a B-spline stochastic projection', AIAA Journal, 44(8), 1845-1853.

Nigam, NC 1983, Introduction to Random Vibrations, MIT Press.

Oberkampf, WL & Barone, MF 2006, 'Measures of agreement between computation and experiment: Validation metrics', Journal of Computational Physics, 217(1), 5-36.

Oberkampf, WL, Trucano, TG & Hirsch, Ch 2003, Verification, Validation, and Predictive Capabilities in Computational Engineering and Physics, Sandia National Laboratories Tech. Rep. SAND2003-3769, Albuquerque, New Mexico.

Paez, TL & Urbina, A 2002, 'Validation of mathematical models of complex structural dynamic systems', Proceedings of the Ninth International Congress on Sound and Vibration, Orlando, Florida.

Pandey, MD 1998, 'An effective approximation to evaluate multinormal integrals', Structural Safety, 20(1), 51-67.

Rackwitz, R & Fiessler, B 1978, 'Structural reliability under combined load sequences', Computers and Structures, 9(5), 489-494.

Rafiq, MI, Chryssanthopoulos, MK & Onoufriou, T 2004, 'Performance updating of concrete bridges using proactive health monitoring methods', Reliability Engineering and System Safety, 86(3), 247-256.

Rebba, R 2005, Model validation and design under uncertainty, PhD dissertation, Vanderbilt University, Nashville, Tennessee.

Rebba, R & Mahadevan, S 2006, 'Model predictive capability assessment under uncertainty', AIAA Journal, 44(10), 2376-2384.

Rebba, R & Mahadevan, S 2008, 'Computational methods for model reliability assessment', Reliability Engineering and System Safety, 93, 1197-1207.

Rebba, R, Mahadevan, S & Huang, S 2006, 'Validation and error estimation of computational models', Reliability Engineering and System Safety, 91(10-11), 1390-1397.

Red-Horse, JR & Benjamin, AS 2004, 'A probabilistic approach to uncertainty quantification with limited information', Reliability Engineering and System Safety, 85, 183-190.

Richards, SA 1997, 'Completed Richardson extrapolation in space and time', Communications in Numerical Methods in Engineering, 13(7), 558-573.

Robert, CP & Casella, G 2004, Monte Carlo Statistical Methods, 2nd edition, Springer-Verlag, New York.

Ross, TJ, Booker, JM & Parkinson, WJ 2002, Fuzzy Logic and Probability Applications: Bridging the Gap, Society for Industrial and Applied Mathematics, Philadelphia, Pennsylvania.

Rubinstein, RY 1981, Simulation and the Monte Carlo Method, Wiley, New York.

Rutherford, BM & Dowding, K 2003, An Approach to Model Validation and Model-Based Prediction: Polyurethane Foam Case Study, Sandia National Laboratories Tech. Rep. SAND2003-2336, Albuquerque, New Mexico.

Saltelli, A, Chan, K & Scott, EM 2000, Sensitivity Analysis, John Wiley & Sons.

Schittkowski, K 1983, 'On the convergence of a sequential quadratic programming method with an augmented Lagrangian search direction', Mathematische Operationsforschung und Statistik, Series Optimization, 14, 197-216.

Stewart, MG & Mullard, JA 2007, 'Spatial time-dependent reliability analysis of corrosion damage and the timing of first repair for RC structures', Engineering Structures, 29, 1457-1464.

Tatang, MA, Pan, W, Prinn, RG & McRae, GJ 1997, 'An efficient method for parametric uncertainty analysis of numerical geophysical models', Journal of Geophysical Research, 102(D18), 21925-21932.

Tixier, R & Mobasher, B 2003, 'Modeling of damage in cement-based materials subjected to external sulfate attack. II: Comparison with experiments', Journal of Materials in Civil Engineering, 15(4), 314-322.

Trucano, TG, Easterling, RG, Dowding, KJ, Paez, TL, Urbina, A, Romero, VJ, Rutherford, BM & Hills, RG 2001, Description of the Sandia Validation Metrics Project, Sandia National Laboratories Tech. Rep. SAND2001-1339, Albuquerque, New Mexico.

Tvedt, L 1990, 'Distribution of quadratic forms in normal space: application to structural reliability', Journal of Engineering Mechanics, ASCE, 116(6), 1183-1197.

Urbina, A & Mahadevan, S 2009, 'Uncertainty quantification in hierarchical computational model development', Proceedings, 12th AIAA Non-Deterministic Approaches Conference, Palm Springs, California.

Witteveen, J & Bijl, H 2006, 'Using polynomial chaos for uncertainty quantification in problems with nonlinearities', Paper No. AIAA-2006-2066, Proceedings, 47th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, and 14th AIAA/ASME/AHS Adaptive Structures Conference, Newport, Rhode Island.

Xiu, D & Karniadakis, GE 2003, 'Modeling uncertainty in flow simulations via generalized polynomial chaos', Journal of Computational Physics, 187(1), 137-167.

Zhang, R & Mahadevan, S 2003, 'Bayesian methodology for reliability model acceptance', Reliability Engineering and System Safety, 80(1), 95-103.

Zou, T, Mahadevan, S & Mourelatos, Z 2003, 'Reliability-based evaluation of automotive wind noise quality', Reliability Engineering and System Safety, 82(2), 217-224.
