Uncertainty Analysis Techniques
Sankaran Mahadevan
Email: [email protected]
Vanderbilt University, School of Engineering
Consortium for Risk Evaluation with Stakeholders Participation, III
Nashville, TN 37235
and
Sohini Sarkar
Email: [email protected]
Vanderbilt University, School of Engineering
Consortium for Risk Evaluation with Stakeholders Participation, III
Nashville, TN 37235
November 2009
CBP-TR-2009-002, Rev. 0
ABSTRACT
This report surveys available analysis techniques to quantify the uncertainty in performance assessment (PA) arising from various sources. Three sources of uncertainty – physical variability, data uncertainty, and model error – are considered. The uncertainty quantification methods are described in the context of four types of analyses needed, namely, (1) quantification of uncertainty in the inputs to the PA models, (2) propagation of input uncertainty through the PA models, (3) model error quantified through verification and validation activities, and (4) probabilistic PA. Random variable and random process descriptions of physical variability are outlined. Methods for handling data uncertainty through flexible families of probability distributions, confidence bounds, interval analysis and Bayesian analysis are described. Useful surrogate modeling and sensitivity analysis techniques for efficient uncertainty propagation analysis are discussed, as well as methods to quantify the various sources of model error. Statistical hypothesis testing techniques (both classical and Bayesian) are discussed for the validation of PA models, and a Bayesian approach to quantify the confidence in model prediction with respect to field conditions is developed. First-order approximations as well as efficient Monte Carlo sampling techniques for probabilistic PA are described.
1.0 INTRODUCTION

… construction processes, and quality control. This type of uncertainty is present both in system properties (e.g., material strength, porosity, diffusivity, geometry variations, reaction rates) and external influences and demands on the system (e.g., concentration of chemicals, temperature, humidity, mechanical loads). As a result, in model-based prediction of system behavior, there is uncertainty regarding the precise values for model parameters and model inputs, leading to uncertainty about the precise values of the model output. Such quantities are represented in engineering analysis as random variables, with statistical parameters such as mean values, standard deviations, and distribution types estimated from observed data or in some cases assumed. Variations over space or time …

The performance assessment (PA) of a complex system involves the use of numerous analysis models, each with its own assumptions and approximations. The errors from the various analysis components combine in a complicated manner to produce the overall model error. This is also referred to as model bias.

The roles of several types of uncertainty in the use of model-based simulation for performance assessment can be easily illustrated with the following example. Consider the probability of an undesirable event denoted by g(X) < k, which can be computed from

    P(g(X) < k) = ∫_{g(X) < k} f_X(x) dx    (1)
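As a simple illustration of Equation (1), the sketch below estimates P(g(X) < k) by basic Monte Carlo sampling. The limit-state function g and the input distributions are hypothetical placeholders chosen for the example, not quantities prescribed by this report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical performance function g(X) and threshold k (illustration only):
# X1 ~ lognormal (e.g., a diffusivity-like input), X2 ~ normal (e.g., a cover depth).
def g(x1, x2):
    return x2 - 10.0 * x1          # placeholder limit-state-style response

k = 0.0
n = 100_000
x1 = rng.lognormal(mean=0.0, sigma=0.3, size=n)
x2 = rng.normal(loc=12.0, scale=2.0, size=n)

# P(g(X) < k) is estimated as the fraction of samples falling in the event
# region, i.e., a sampling approximation of the integral in Equation (1).
p = np.mean(g(x1, x2) < k)
print(f"Estimated P(g(X) < k) = {p:.4f}")
```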
A brief summary of the analysis methods covered in the four steps is provided below:

Input uncertainty quantification: Physical variability of parameters can be quantified through random variables by statistical analysis. Parameters that vary in time or space are modeled as random processes or random fields with appropriate correlation structure. Data uncertainty that leads to uncertainty in the distribution parameters and distribution types can be addressed using confidence intervals and Bayesian statistics. Methods to include several sources of data uncertainty, namely, sparse data, interval data and measurement error, are discussed.

Uncertainty propagation analysis: Both classical and Bayesian probabilistic approaches can be investigated to propagate uncertainty between individual sub-models and through the overall system model. To reduce the computational expense, surrogate models can be constructed using several different techniques. Methods for sensitivity analysis in the presence of uncertainty are discussed.

Model uncertainty quantification (calibration, verification, validation, and extrapolation): Model calibration is the process of adjusting model parameters to obtain good agreement between model predictions and experimental observations (McFarland, 2008). Both classical and Bayesian statistical methods are discussed for model calibration with available data. One particular concern is how to properly integrate different types of data, available at different levels of the model hierarchy. Assessment of the “correct” implementation of the model is called verification, and assessment of the degree of agreement of the model response with the available physical observation is called validation (McFarland, 2008). Model verification and validation activities help to quantify model error (both model form error and solution approximation error). A possible Bayesian approach is discussed for quantifying the confidence in model extrapolation from laboratory conditions to field conditions.

Probabilistic performance assessment: Limit-state-based reliability analysis methods are discussed to help quantify the PA results in a probabilistic manner. Methods are also discussed to compute the confidence bounds in probabilistic PA results. Monte Carlo simulation with high-fidelity analysis modules is computationally expensive; hence surrogate (or abstracted) models are frequently used with Monte Carlo simulation. In that case, the uncertainty or error introduced by the surrogate model also needs to be quantified.

Figure 1 shows the four stages, within a conceptual framework for systematic quantification, propagation and management of various types of uncertainty. The methods discussed in this report address all four steps shown in Figure 1. While uncertainty has been dealt with using probabilistic as well as non-probabilistic (e.g., fuzzy sets, possibility theory, evidence theory) formats in the literature, this report will focus only on probabilistic analysis, mainly because the mathematics of probabilistic computation are very well established, whereas the non-probabilistic methods are still under development and generally result in interval computations that are expensive when applied to large problems with many variables.

The different stages of analysis in Figure 1 are not strictly sequential. For example, stage 3 (verification and validation – commonly denoted as V&V) appears after system analysis and uncertainty propagation. However, it is almost impossible to perform V&V on the system scale, because of extrapolation in time and space; therefore V&V is usually done for the sub-models. Also, several of the inputs to the overall system model may be calibrated based on the results of sub-model analysis, sensitivity analysis, and V&V activities. Thus the four stages in Figure 1 simply group together the different types of analysis, and might occur in different sequences for different problems and different sub-models.
[Figure 1. Conceptual framework for systematic quantification, propagation, and management of the various types of uncertainty, spanning the four stages discussed in this report; the figure includes boxes for Data, Design Changes, and Risk Management, and identifies physical variability as aleatory.]
__________________________
1 The box Data in the input uncertainty quantification stage includes laboratory data, historical field data, literature sources, and expert opinion.
2 The box Design Changes may refer to conceptual, preliminary, or detailed design, depending on the development stage.
3 The boxes Design Changes and Risk Management are outside the scope of this report, although they are part of the overall uncertainty framework.
Many uncertainty quantification studies have only focused on quantifying and propagating the inherent variability in the input parameters. Well-established statistical (both classical and Bayesian) methods are available for this purpose.

2.1.1 Modeling Variability in System Properties

In probabilistic analysis, the sample-to-sample variations in the parameters are addressed by defining them as random variables with probability density functions (PDFs). This assumes that the system/material is homogeneous on a macroscale. For example, chloride ion diffusivity has been modeled using a lognormal distribution (Hong, 2000; Gulikers, 2006; Rafiq et al., 2004; Chen, 2006), and water-cement ratio has been modeled using a normal distribution (Chen, 2006) and uniform and triangular distributions (Kong et al., 2002).

Some parameters may vary not only from sample to sample (as is the case for random variables), but also in the spatial or time domain. Parameter variation over time and space can be modeled as random processes or random fields. For example, concrete cover depth and compressive strength have been modeled as random fields using squared exponential correlation functions (Stewart and Mullard, 2007).

Some well known methods for simulating random processes are spectral representation (SR) (Gurley, 1997), Karhunen-Loeve expansion (KLE) (Ghanem and Spanos, 2003; Huang et al., 2007; Mathelin et al., 2005), and polynomial chaos expansion (PCE) (Huang et al., 2007; Mathelin et al., 2005; Red-Horse and Benjamin, 2004). The PCE method has been used to represent the stochastic model output as a function of stochastic inputs.

Consider an example of representing a random process using KLE, expressed as

    Y(x, χ) = Ȳ(x) + ∑_{i=1}^{∞} √λ_i ξ_i(χ) f_i(x)    (2)

where:

Ȳ(x) is the mean of the random process Y(x, χ), λ_i and f_i(x) are eigenvalues and eigenfunctions of the covariance function C(x1, x2), and ξ_i(χ) is a set of uncorrelated standard normal random variables (x is a space or time coordinate, and χ is an index representing different realizations of the random process).

Using Equation (2), realizations of the random process Y(x, χ) can be easily simulated by generating samples of the random variables ξ_i(χ), and these realizations of Y(x, χ) can be used as inputs to PA.

2.1.2 Modeling Variability in External Conditions

Some boundary conditions (e.g., temperature and moisture content) might exhibit a recurring pattern over shorter periods and also a trend over longer periods. An example of variability in an external condition, i.e., rainfall, is illustrated in Figure 2. It is evident from the figure that the rainfall data has a pattern over a period of 1 year and a downward trend over a number of years. These can be numerically represented by a seasonal model using an autoregressive integrated moving average (ARIMA) method generally used for linear¹ nonstationary² processes (Box et al., 1994). This method can be used to predict the temperature or the rainfall magnitudes in the future so that they can be used in the durability analysis of the structures under future environmental conditions.
__________________________
1 The current observation can be expressed as a linear function of past observations.
2 A process is said to be non-stationary if its probability structure varies with the time or space coordinate.
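Returning to Equation (2), the following sketch simulates realizations of a Gaussian random process by a truncated, discretized KLE. The squared-exponential covariance and all numerical values (mean, variance, correlation length, truncation at 20 terms) are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretized sketch of Equation (2): Karhunen-Loeve expansion of a Gaussian
# random process with an assumed squared-exponential covariance.
x = np.linspace(0.0, 1.0, 200)            # space (or time) coordinate
mean = np.full_like(x, 5.0)               # Y-bar(x), assumed constant mean
sigma, L = 1.0, 0.2
C = sigma**2 * np.exp(-(x[:, None] - x[None, :])**2 / L**2)

# Eigenvalues/eigenvectors of the discretized covariance play the role of
# lambda_i and f_i(x); keep the dominant terms of the expansion.
lam, f = np.linalg.eigh(C)
idx = np.argsort(lam)[::-1][:20]
lam, f = lam[idx], f[:, idx]

# One realization per row: Y(x, chi) = Y-bar(x) + sum sqrt(lambda_i) xi_i(chi) f_i(x)
xi = rng.standard_normal((50, len(idx)))  # uncorrelated standard normal xi_i
Y = mean + xi @ (np.sqrt(np.maximum(lam, 0.0)) * f).T
print(Y.shape)                            # (50 realizations, 200 grid points)
```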
Figure 2. Precipitation Data for Aiken, SC (National Oceanic and Atmospheric Administration)
2.1.3 Stationary External Processes

For a stationary process³, the ARIMA method expresses the observation at the t-th time step in terms of the observations at previous time steps as

    z_t = c + ∑_{i=1}^{p} φ_i z_{t−i} + ε_t    (3)

where:

z_t and z_{t−i} are observations at the t-th and (t−i)-th time steps, c is a constant, the φ_i's are coefficients, and ε_t is the error between the observed and the predicted values at the t-th time step.

Assuming that the error at the t-th time step is also dependent on the errors at previous time steps, ε_t can also be expressed as

    ε_t = c₁ + ∑_{i=1}^{q} θ_i ε_{t−i}    (4)

where:

c₁ is a constant and the θ_i's are coefficients.

Using a backward operator B such that B^i z_t = z_{t−i} and combining Eqs. (3) and (4) results in Equation (5):

    φ_p(B) z_t = θ_q(B) ε_t    (5)

where:

φ_p(B) and θ_q(B) are polynomials of p-th and q-th order. The coefficients of the polynomials can be determined using the least-squares method.

2.1.4 Non-Stationary External Processes

A random non-stationary process fluctuates about a mean value that exhibits a specific pattern. If the differences in levels of fluctuation are considered, the process can be simulated using the same method as for stationary processes. For example, differentiating a second order polynomial twice will result in a constant. Thus, a non-stationary process of d-th degree can be expressed as

    φ_p(B) ∇^d z_t = θ_q(B) ε_t    (6)
__________________________
3 A process is said to be stationary if its probability structure does not vary with the time or space coordinate.
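The sketch below illustrates Equations (3)-(5) in miniature: the constant c and the coefficients φ_i of an AR(p) model are determined by least squares, as noted above for the polynomial coefficients. The synthetic series is a hypothetical stand-in for observed records such as temperature or rainfall.

```python
import numpy as np

rng = np.random.default_rng(2)

# Generate a synthetic stationary series with known AR(2) structure,
# then recover c and phi_i from the data alone.
true_phi = [0.6, 0.3]
z = np.zeros(500)
for t in range(2, 500):
    z[t] = 1.0 + true_phi[0] * z[t - 1] + true_phi[1] * z[t - 2] \
           + rng.standard_normal()

p = 2
# Regression matrix: each row holds [1, z_{t-1}, ..., z_{t-p}].
A = np.column_stack([np.ones(len(z) - p)] +
                    [z[p - i:len(z) - i] for i in range(1, p + 1)])
coef, *_ = np.linalg.lstsq(A, z[p:], rcond=None)
c, phi = coef[0], coef[1:]
print("c =", round(c, 3), "phi =", np.round(phi, 3))

# One-step-ahead prediction z_t = c + sum phi_i z_{t-i};
# epsilon_t in Equation (3) is the residual of this fit.
z_hat = A @ coef
eps = z[p:] - z_hat
```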
If the process exhibits patterns over a shorter period (s) and a trend over a longer period, the process can be expressed as

    Φ_P(B^s) ∇_s^D z_t = Θ_Q(B^s) ε_t    (7)

where:

Φ_P(B^s) and Θ_Q(B^s) are polynomials of order P and Q, B^s z_t = z_{t−s}, and D is the order of differentiation.

A similar model may be used to relate the current error (the error between observation and model prediction at the t-th time step) to the previous errors (errors between observations and model predictions at previous time steps) as

    φ_p(B) ∇^d ε_t = θ_q(B) a_t    (8)

where:

φ_p(B) and θ_q(B) are polynomials of order p and q, d is the order of differentiation, and a_t is a white noise process.

The final model is obtained by combining Eqs. (7) and (8) as

    φ_p(B) Φ_P(B^s) ∇^d ∇_s^D z_t = θ_q(B) Θ_Q(B^s) a_t    (9)

Eq. (9) is referred to as a general multiplicative model of order (p × d × q) × (P × D × Q)_s. This method can be used to simulate a seasonal process.

It may also be important to quantify the statistical correlations between some of the input random variables. Many previous studies on uncertainty quantification simply assume either zero or full correlation, in the absence of adequate data. A Bayesian approach may be used for this purpose.

2.2 Data Uncertainty

A Bayesian updating approach is described below to quantify uncertainty due to inadequate statistical data and measurement errors (ε_exp). This is consistent with the framework proposed in Figure 1, and is used to update the statistics of different physical variables and their distribution parameters. The prior distributions are based on available data and expert judgment, and these are updated as more data becomes available through experiments, analysis, or real-world experience.

2.2.1 Sparse Statistical Data

For any random variable that is quantitatively described by a probability density function, there is always uncertainty in the corresponding distribution parameters due to small sample size. As testing and data collection activities are performed, the state of knowledge regarding the uncertainty changes, and a Bayesian updating approach can be implemented. For example, suppose we decide that an input variable X follows a Gaussian distribution N(μ, σ²) with μ and σ estimated from the data.

There is uncertainty in the normal distribution assumption, as well as in the estimates of the distribution parameters μ and σ, depending on the sample size. In the Bayesian approach, μ and σ are also treated as random variables, and their statistics are updated based on new data. However, we do not know the distribution of μ and σ a priori, so we may assume a Gaussian distribution for μ and a Gamma distribution for 1/σ² as an initial guess, for example, and then do a Bayesian update after more data is collected.

The Bayesian approach also applies to joint distributions of multiple random variables, which also helps to include the uncertainty in correlations between the
variables. A prior joint distribution is assumed (or individual distributions and correlations are assumed), and then updated as data becomes available.

Instead of assuming a well known prior distribution form (e.g., uniform, normal) for sparse data sets, either empirical distribution functions or flexible families of distributions based on the data can be constructed. A bootstrapping⁴ technique can then be used to quantify the uncertainty in the distribution parameters. The empirical distribution function is constructed by ranking the observations from lowest to highest value, and assigning a probability value to each observation.

Examples of flexible distribution families include the Johnson family, Pearson family, gamma distribution, and stretched exponential distribution. The use of the Johnson family distribution has been explored by Marhadi et al., 2008, and extended to quantify the uncertainty in distribution parameters by McDonald et al., 2009. In constructing the Johnson family distribution, the available data is used to calculate the first four moments, and then the distribution form is chosen based on the values of the four moments. A jack-knife procedure is used to estimate the uncertainty in the distribution parameters, based on repeated estimation by leaving out one or more data points in each estimation.

2.2.2 Measurement Error

Measurement error in many studies (e.g., Barford, 1985) is assumed to be independent and identically distributed (IID) with zero mean and an assumed variance, i.e., ε_exp ~ N(0, σ²_exp). Due to the measurement uncertainty, the distribution parameter σ_exp cannot be obtained as a deterministic value. Instead, it is a random variable with a prior density τ(σ_exp). Thus, when new data is available after testing, the distribution of σ_exp can be easily updated using the Bayes theorem.

Another way to represent measurement error ε_exp is through an interval only, and not as a random variable. In that case, one can only say the true value y_true lies in the interval [y_exp − ε_exp, y_exp + ε_exp] without any probability distribution assigned to ε_exp. Methods to include data in interval format are discussed next.

2.2.3 Data Available in Interval Format

Some quantities in the system model may not have probabilistic representation, since data may be sparse or may be based on expert opinion. Some experts might only provide information about a range of possible values for some model input variable. Representations such as fuzzy sets, possibility theory, and evidence theory have been used. This report is focused on probabilistic methods to include interval data.
__________________________
4 Bootstrapping is a data-based simulation method for statistical inference by re-sampling from an existing data set (Efron et al., 1994).
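A minimal sketch of the bootstrapping idea defined in footnote 4, applied to the sparse-data problem of Section 2.2.1: re-sampling a small data set to quantify the uncertainty in the estimated distribution parameters. The 15 synthetic observations and their lognormal generating distribution are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for a sparse set of test observations.
data = rng.lognormal(mean=1.0, sigma=0.4, size=15)

B = 5000
boot_mean = np.empty(B)
boot_std = np.empty(B)
for b in range(B):
    # Re-sample the existing data set with replacement and re-estimate
    # the distribution parameters each time.
    resample = rng.choice(data, size=data.size, replace=True)
    boot_mean[b] = resample.mean()
    boot_std[b] = resample.std(ddof=1)

# The spread of the re-estimated parameters reflects the data (epistemic)
# uncertainty; e.g., a 90% bootstrap interval on the mean:
lo, hi = np.percentile(boot_mean, [5, 95])
print(f"mean estimate {data.mean():.3f}, "
      f"90% bootstrap interval [{lo:.3f}, {hi:.3f}]")
```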
Flexible families of distributions can be constructed by estimating bounds on the distribution parameters based on the interval data, without forcing a distribution assumption (McDonald et al., 2008). These can then be treated as random variables with probability distribution functions and combined with other random variables in a Bayesian framework to quantify the overall system model uncertainty. The use of families of distributions will result in multiple probability distributions for the output, representing the contributions of both physical variability and data uncertainty.

3.0 UNCERTAINTY PROPAGATION METHODS

In this section, methods to quantify the contributions of different sources of uncertainty and error as they propagate through the system analysis model, including the contribution of model error, are discussed, in order to quantify the overall uncertainty in the system model output.

This section will cover two issues: (1) quantification of model output uncertainty, given input uncertainty (both physical variability and data uncertainty), and (2) quantification of model error (due to both model form selection and solution approximations).

Several uncertainty analysis studies, including a study with respect to the Yucca Mountain high-level waste repository, have recognized the distinction between physical variability and data uncertainty (Helton and Sallaberry, 2009a & 2009b). As a result, these methods evaluate the variability in an inner loop calculation and data uncertainty in an outer loop calculation. Another example is provided by Holdren et al., 2006 in a baseline risk assessment study with respect to the Idaho Cleanup Project, where contributions of different sources of uncertainty are separately analyzed, such as from inventory, infiltration, sorption characteristics, model calibration, and simulation periods.

3.1 Propagation of Physical Variability

Various probabilistic methods (e.g., Monte Carlo simulation and first-order or second-order analytical approximations) have been studied for the propagation of physical variability in model inputs and model parameters, expressed through random variables and random processes or fields. Stochastic finite element methods (e.g., Ghanem and Spanos, 2003; Haldar and Mahadevan, 2000) have been developed for single discipline problems in structural, thermal, and fluid mechanics. An example of such propagation is shown in Figure 3. Several types of combinations of system analysis model and statistical analysis techniques are available:

• Monte Carlo simulation with the deterministic system analysis as a black-box (e.g., Robert and Casella, 2004) to estimate model output statistics or probability of regulatory compliance (see the sketch after this list);
• Monte Carlo simulation with a surrogate model to replace the deterministic system analysis model (e.g., Ghanem and Spanos, 2003; Isukapalli et al., 1998; Xiu and Karniadakis, 2003; Huang et al., 2007), to estimate model output statistics or probability of regulatory compliance;
• Local sensitivity analysis using finite difference, perturbation or adjoint analyses, leading to estimates of the first-order or second-order moments of the output (e.g., Blischke and Murthy, 2000); and
• Global sensitivity and effects analysis, and analysis of variance in the output (e.g., Box et al., 1978).

These techniques are generic, and can be applied to multi-physics analysis with multiple component modules as in the PA of cementitious barriers. However, most applications of these techniques have only considered physical variability. The techniques need to include the contribution of data uncertainty and model error to the overall model prediction uncertainty.
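The first option in the list above (Monte Carlo simulation with the deterministic analysis treated as a black box) can be sketched as follows; the placeholder model, the input distributions, and the compliance limit are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# The "model" stands in for a deterministic PA module; in practice it would
# be an external code invoked once per input sample.
def model(diffusivity, cover_depth):
    return cover_depth**2 / (4.0 * diffusivity)   # placeholder response

n = 20_000
D = rng.lognormal(mean=-2.0, sigma=0.3, size=n)   # assumed input variability
c = rng.normal(loc=0.05, scale=0.005, size=n)

# Output statistics and a compliance probability estimated from the samples.
y = model(D, c)
print("mean =", y.mean(), "std =", y.std())
print("P(y < limit) =", np.mean(y < 0.002))       # hypothetical regulatory limit
```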
[Figure 3. Example of propagation of physical variability: a thermal protection panel subjected to dynamic loads, analyzed by stochastic finite element analysis, accounting for spatial and temporal variability of system properties and loads and for material degradation. Random processes/fields: boundary conditions K(x_i), mechanical vibration F(x_i), material properties E(x_i), thermal loads H(x_i), geometric properties G(x_i); outputs: stress σ(x_i), strain ε(x_i), displacement δ(x_i).]
Computational effort is a significant issue in practical applications, since these techniques involve a number of repeated runs of the system analysis model. The system analysis may be replaced with an inexpensive surrogate model in order to achieve computational efficiency; this is discussed in Section 3.3. Efficient Monte Carlo techniques have also been pursued to reduce the number of system model runs, including Latin hypercube sampling (LHS) (Mckay et al., 1979; Farrar et al., 2003) and importance sampling (Mahadevan and Raghothamachar, 2000; Zou et al., 2003).

3.2 Propagation of Data Uncertainty

Three types of data uncertainty were discussed in Section 2. Sparse point data results in uncertainty about the parameters of the probability distributions describing quantities with physical variability. In that case, uncertainty propagation analysis takes a nested implementation. In the outer loop, samples of the distribution parameters are randomly generated, and for each set of sampled distribution parameter values, probabilistic propagation analysis is carried out as in Section 3.1. This results in the computation of multiple probability distributions of the output, or confidence intervals for the estimates of probability of non-compliance in PA.

In the case of measurement error, the choice of the uncertainty propagation technique depends on how the measurement error is represented. If the measurement error is represented as a random variable, it is simply added to the measured quantity, which is also a random variable due to physical variability. Thus a sum of two random variables may be used to include both physical variability and measurement error in a quantity of interest. If the measurement error is represented as an interval, one way to implement probabilistic analysis is to represent the interval through families of distributions or upper and lower bounds on probability distributions, as discussed in Section 2.2.3. In that case, multiple probabilistic analyses, using the same nested approach as in the case of sparse data, can be employed to generate multiple output distributions or confidence intervals for the model output. The same approach is possible for interval variables that are only available as a range of values, as in the case of expert opinion.
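A sketch of the nested (double-loop) implementation described at the beginning of this subsection: the outer loop samples the uncertain distribution parameters, and the inner loop propagates physical variability through a placeholder model. All distributions and numerical values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def model(x):
    return np.exp(0.1 * x)          # placeholder black-box response

outer, inner = 50, 2000
quantiles = []
for _ in range(outer):
    # Outer loop: sample distribution parameters (data uncertainty).
    mu = rng.normal(10.0, 0.5)
    sigma = rng.uniform(1.0, 2.0)
    # Inner loop: propagate physical variability as in Section 3.1.
    x = rng.normal(mu, sigma, size=inner)
    quantiles.append(np.percentile(model(x), 95))

# The spread of the 95th-percentile estimate across the outer loop gives
# confidence bounds attributable to data uncertainty.
print("95th percentile ranges over",
      (round(min(quantiles), 3), round(max(quantiles), 3)))
```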
… roots of the Hermite polynomial of a higher order. This way of selecting collocation points would capture points from regions of high probability (Tatang et al., 1997).

• Calculation of the statistics of the output that has been cast as a response surface in terms of a chaos expansion. The statistics of the response can be estimated with the response surface using either Monte Carlo simulation or analytical approximation.

3.3.2 Kriging or Gaussian Process Models

Gaussian process (GP) models have several features that make them attractive for use as surrogate models. The primary feature of interest is the ability of the model to “account for its own uncertainty.” That is, each prediction obtained from a Gaussian process model also has an associated variance, or uncertainty. This prediction variance primarily depends on the closeness of the prediction location to the training data, but it is also related to the functional form of the response. For example, see Fig. 4, which depicts a one-dimensional Gaussian process model. Note how the uncertainty bounds are related to both the closeness to the training points, as well as the shape of the curve.

The basic idea of the GP model is that the output quantities are modeled as a group of multivariate normal random variables. A parametric covariance function is then constructed as a function of the inputs. The covariance function is based on the idea that when the inputs are close together, the correlation between the outputs will be high. As a result, the uncertainty associated with the model prediction is small for input values that are close to the training points, and large for input values that are not close to the training points. In addition, the GP model may incorporate a systematic trend function, such as a linear or quadratic regression of the inputs (in the notation of Gaussian process models, this is called the mean function, while in Kriging it is often called a trend function). The effect of the mean function on predictions that interpolate the training data is small, but when the model is used for extrapolation, the predictions will follow the mean function very closely.
[Figure 4. A one-dimensional Gaussian process model: observations, the interpolating prediction, and 95% confidence intervals.]
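A minimal one-dimensional GP surrogate in the spirit of Fig. 4 can be built as follows; scikit-learn is used here as one possible implementation (the report does not prescribe a library), and the training function is a hypothetical stand-in for an expensive simulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)

# Hypothetical training data (in practice, runs of the expensive model).
X_train = rng.uniform(0.0, 10.0, size=(8, 1))
y_train = np.sin(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

# Each prediction carries its own standard deviation: small near the training
# points, large away from them, as described above.
X_new = np.linspace(0.0, 10.0, 5).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
for x, m, s in zip(X_new.ravel(), mean, std):
    print(f"x={x:5.2f}  prediction={m:6.3f}  95% band=+/-{1.96 * s:.3f}")
```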
Within the GP modeling technique, it is also possible to adaptively select the design of experiments to achieve very high accuracy. The method begins with an initial GP model built from a very small number of samples, and then one intelligently chooses where to generate subsequent samples to ensure the model is accurate in the vicinity of the region of interest. Since the GP model provides the expected value and variance of the output quantity, the next sample may be chosen in the region of highest variance, if the objective is to minimize the prediction variance. The method has been shown to be both accurate and computationally efficient for arbitrarily shaped functions (Bichon et al., 2007).

3.4 Sensitivity Analysis Methods

The sensitivity of the model output to an input parameter may be assessed simply by evaluating the output at the extreme values within the ranges of the parameters. Local sensitivity analysis utilizes first-order derivatives of system output quantities with respect to the parameters. It is usually performed for a nominal set of parameter values. Global sensitivity analysis typically uses statistical sampling methods, such as Latin Hypercube Sampling, to determine the total uncertainty in the system output and to apportion that uncertainty among the various parameters. Classical and Bayesian statistical analysis techniques, including the analysis of variance and differential sensitivity analysis, can be pursued to assess the global influence of an input parameter on an output variable by sampling from each input parameter's probability density function or from intervals of possible values.
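As one concrete instance of sampling-based global sensitivity analysis, the sketch below estimates first-order Sobol indices with the common pick-freeze scheme, apportioning output variance among the inputs; this particular estimator, the test function, and the input distributions are illustrative choices, not methods prescribed by the report.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical test function; analytically, inputs 1 and 2 dominate.
def model(x):
    return x[:, 0] + 0.5 * x[:, 1]**2 + 0.1 * x[:, 2]

n, d = 50_000, 3
A = rng.standard_normal((n, d))          # two independent input sample sets
B = rng.standard_normal((n, d))

yA, yB = model(A), model(B)
varY = np.var(yA)
for i in range(d):
    Ci = B.copy()
    Ci[:, i] = A[:, i]                   # "freeze" input i from A, resample the rest
    yC = model(Ci)
    # First-order index: share of output variance explained by input i alone.
    Si = (np.mean(yA * yC) - np.mean(yA) * np.mean(yB)) / varY
    print(f"first-order index S_{i + 1} = {Si:.3f}")
```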
3.6 Model Error Quantification

Model errors may relate to governing equations, boundary and initial condition assumptions, loading description, and approximations or errors in solution algorithms (e.g., truncation of higher order terms, finite element discretization, curve-fitting models for material damage such as the S-N curve). Overall model error may be quantified by comparing model prediction and experimental observation, properly accounting for the uncertainties in both. This overall error measure combines both model form and solution approximation errors, and so it needs to be considered in two parts. Numerical errors in the model prediction can be quantified first, using sensitivity analysis, uncertainty propagation analysis, discretization error quantification, and truncation (residual) error quantification. The measurement error in the input variables can be propagated to the prediction of the output. The error in the prediction of the output due to the measurement error in the input variables is approximated by using a first-order sensitivity analysis (Rebba et al., 2006). Then the model form error can be quantified based on all the above errors, following the approach illustrated for a heat transfer problem by Rebba et al. (2006).

3.6.1 Solution Approximation Error

Several components of prediction error, such as discretization error (denoted by ε_d) and uncertainty propagation analysis error (ε_s), can be considered. Several methods to quantify the discretization error in finite element analysis are available in the literature. However, most of these methods do not quantify the actual error; instead, they only quantify some indicator measures to facilitate adaptive mesh refinement. The Richardson extrapolation (RE) method comes closest to quantifying the actual discretization error (Richards, 1997). (In some applications, the model is run with different levels of resolution until an acceptable level of accuracy is achieved; formal error quantification may not be required.)

Errors in uncertainty propagation analysis (ε_s) are method-dependent, i.e., sampling error occurs in Monte Carlo methods, and truncation error occurs in response surface methods (either conventional or polynomial chaos-based). For example, sampling error could be assumed to be a Gaussian random variable with zero mean and variance given by σ²/N, where N is the number of Monte Carlo runs and σ² is the original variance of the model output (Rubinstein, 1981). The truncation error is simply the residual error in the response surface.

Rebba et al. (2006) used the above concept to construct a surrogate model for finite element discretization error in structural analysis, using the stochastic response surface method. Gaussian process models may also be employed for this purpose. Both options are helpful in quantifying the solution approximation error.

3.6.2 Model Form Error

The overall prediction error is a combination of errors resulting from numerical solution approximations and model form selection. A simple way is to express the total observed error (the difference between prediction and observation) as the sum of the following error sources:

    ε_obs = ε_num + ε_model − ε_exp    (10)

where:

ε_num, ε_model, and ε_exp represent numerical solution error, model form error, and output measurement error, respectively.

However, solution approximation error results from multiple sources and is probably a nonlinear combination of various errors such as discretization error, round-off and truncation errors, and stochastic analysis errors. One option is to construct a regression model consisting of the individual error components (Rebba et al., 2006).
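A sketch of Richardson extrapolation for discretization error, using three hypothetical solutions at mesh sizes h, h/2, and h/4 (in practice these would come from repeated finite element runs):

```python
import numpy as np

# f_h holds a solution quantity computed at three successively refined meshes
# (hypothetical values chosen for illustration).
h = np.array([0.4, 0.2, 0.1])
f_h = np.array([1.1052, 1.0279, 1.0071])

# Observed order of convergence from the three solutions:
p = np.log((f_h[0] - f_h[1]) / (f_h[1] - f_h[2])) / np.log(2.0)

# Extrapolated (approximately mesh-independent) value and the estimated
# discretization error eps_d at the finest mesh:
r = 2.0 ** p
f_exact = f_h[2] + (f_h[2] - f_h[1]) / (r - 1.0)
eps_d = f_exact - f_h[2]
print(f"observed order p = {p:.2f}, extrapolated value = {f_exact:.4f}, "
      f"discretization error estimate = {eps_d:.2e}")
```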
The second approach is Bayesian calibration (Kennedy and O'Hagan, 2001). This approach is flexible and allows different forms for the calibration factor, and it has been illustrated for a heat transfer example problem (McFarland and Mahadevan, 2007; McFarland, 2008).

In the literature, several researchers have calibrated their models using experimental results, especially if the phenomenon being modeled is complicated and the model is based on simplifying assumptions. For example, Tixier and Mobasher (2003) calibrated two parameters (reaction rate constant and fraction of porosity available for solid product deposition), and Krajcinovic et al. (1992) calibrated one parameter (reaction rate constant), while modeling the degradation of concrete structures under sulfate attack.

4.2 Model Validation

Model validation involves comparing prediction with observation data (either historical or experimental) when both have uncertainty. Since there is uncertainty in both model prediction and experimental observation, it is necessary to pursue rigorous statistical techniques to perform model validation assessment rather than simple graphical comparisons, provided data is even available for such comparisons. Statistical hypothesis testing is one approach to quantitative model validation under uncertainty, and both classical and Bayesian statistics have been explored. Classical hypothesis testing is a well-developed statistical method for accepting or rejecting a model based on an error statistic (see e.g., Trucano et al., 2001; Hills and Trucano, 2002; Paez and Urbina, 2002; Hills and Leslie, 2003; Rutherford and Dowding, 2003; Dowding et al., 2004; Chen et al., 2004; Oberkampf and Barone, 2006). Validation metrics have been investigated in recent years based on Bayesian hypothesis testing (Zhang and Mahadevan, 2003; Mahadevan and Rebba, 2005; Rebba and Mahadevan, 2006), reliability-based methods (Rebba and Mahadevan, 2008), and risk-based decision analysis (Jiang and Mahadevan, 2007 & 2008).

In Bayesian hypothesis testing, prior probabilities are assigned to the null and alternative hypotheses, P(H0) and P(Ha) respectively, such that P(H0) + P(Ha) = 1. Here H0: model error < allowable limit, and Ha: model error > allowable limit. When data D is obtained, the probabilities are updated as P(H0 | D) and P(Ha | D) using the Bayes theorem. Then a Bayes factor (Jeffreys, 1961) B is defined as the ratio of the likelihoods of observing D under H0 and Ha, i.e., the first term in the square brackets on the right hand side of

    P(H0 | D) / P(Ha | D) = [P(D | H0) / P(D | Ha)] × [P(H0) / P(Ha)]    (12a)

If B > 1, the data gives more support to H0 than to Ha. Also, the confidence in H0, based on the data, comes from the posterior null probability P(H0 | D), which can be rearranged from Eq. (12a) as

    P(H0 | D) = P(H0) B / [P(H0) B + 1 − P(H0)]    (12b)

Typically, in the absence of prior knowledge, equal probabilities may be assigned to each hypothesis and thus P(H0) = P(Ha) = 0.5. The posterior null probability can then be further simplified to B/(B + 1). Thus a B value of 1.0 represents 50% confidence in the null hypothesis being true.

Bayesian hypothesis testing is also able to account for uncertainty in the distribution parameters, as mentioned in Section 2.2. For such problems, the validation metric (Bayes factor) itself becomes a random variable. In that case, the probability of the Bayes factor exceeding a specified value can be used as the decision criterion for model acceptance/rejection.

Notice that model validation only refers to the situation when controlled, targeted experiments are performed to evaluate model prediction, and both the model runs and the experiments are done under the same set of input and boundary conditions. The validation is done only by comparing the outputs of the model and the experiment. Once the model is calibrated,
verified, and validated, it may be investigated for confidence in extrapolating to field conditions different from laboratory conditions. This is discussed in the next section.

… updating analysis. Several efficient sampling techniques are available for MCMC, such as Gibbs sampling, the Metropolis algorithm, and the Metropolis-Hastings algorithm (Gilks et al., 1996).
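A minimal version of the Metropolis algorithm mentioned above, sketched for the Bayesian update of a normal mean; the data, the prior, and the proposal width are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical observations of a quantity with known measurement sigma.
data = np.array([9.2, 10.1, 9.8, 10.5, 9.9])
sigma = 1.0

def log_post(mu):
    log_like = -0.5 * np.sum((data - mu) ** 2) / sigma**2
    log_prior = -0.5 * (mu - 10.0) ** 2 / 5.0**2     # assumed N(10, 5^2) prior
    return log_like + log_prior

samples = np.empty(20_000)
mu = 10.0
for i in range(samples.size):
    prop = mu + 0.5 * rng.standard_normal()          # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                                    # accept the proposed move
    samples[i] = mu

post = samples[2000:]                                # discard burn-in
print(f"posterior mean = {post.mean():.3f}, posterior std = {post.std():.3f}")
```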
5.0 PROBABILISTIC PERFORMANCE ASSESSMENT

Several methods are available in the reliability methods literature to efficiently perform probabilistic performance assessment, as fast alternatives to expensive Monte Carlo simulation. Performance assessment can be conducted with respect to single or multiple requirements. Efficient reliability analysis techniques that are based on first-order or second-order approximations or adaptive importance sampling can be used for this purpose. When multiple requirements are defined, computation of the overall probability of satisfying multiple performance criteria requires integration over a multidimensional space defined by unions and intersections of individual events (of satisfaction or violation of individual criteria).

An important observation here is that the same methods that are described here for reliability analysis can also be used to compute the cumulative distribution function (CDF) of the output, which may be of more general interest with respect to uncertainty quantification of model output. The term reliability analysis here refers only to computing the probability of exceeding or not meeting a single threshold value, which is a special case of constructing the entire CDF.

This section will discuss methods for probabilistic performance assessment with respect to individual criteria (Section 5.1) and multiple criteria (Section 5.2).

5.1 Individual Criteria

Probabilistic performance assessment can be based on the concept of a limit state that defines the boundary between success and failure for a system (Haldar and Mahadevan, 2000). The limit state function, g, is derived from a system performance criterion and formulated such that g < 0 indicates failure. If the input parameters in the system analysis are uncertain, so will be the predicted value of g. The probability of system failure, i.e., P(g < 0), may be obtained from the volume integral under the joint probability density function of the input random variables over the failure domain as

    P_f = ∫ ⋯ ∫_{g ≤ 0} f_X(x1, x2, …, xn) dx1 dx2 ⋯ dxn    (13)

where:

P_f is the probability of failure, f_X is the joint probability density of a random variable vector X with n elements, and the vector x represents a single realization of X. Note that the integral is taken over the failure domain, where g ≤ 0, so P_f = P(g ≤ 0).

The basic Monte Carlo simulation method evaluates the above integral by drawing random samples from the distributions of the variables X, and by evaluating whether g ≤ 0 in each run. Then the failure probability is simply the number of samples with g ≤ 0 divided by the total number of samples. While this technique is very simple to implement, it is also very expensive for problems with low failure probability.

The First Order Reliability Method (FORM) approximately estimates the failure probability as P_f = Φ(−β), where β is the minimum distance from the origin to the limit state in the space of uncorrelated standard normal variables⁵, as shown in Figure 6 (Hasofer and Lind, 1974). The minimum distance point on the limit state is referred to as the most probable point (MPP), and β is referred to as the reliability index. Finding the MPP is an optimization problem:

    Minimize ‖η‖, subject to g(η) = 0    (14)

where:

η is the vector of random variables in the space of uncorrelated standard normal variables, and ‖η‖ denotes the norm of that vector.
__________________________
5 In general, a set of random variables x may be non-normal and correlated, but these may be transformed to an uncorrelated standard normal space (i.e., the space of random normal variables with zero mean and unit standard deviation) via a transformation T, i.e., η = T(x).
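The MPP search of Equation (14) can be sketched with a general-purpose constrained optimizer (sequential quadratic programming, one of the techniques cited below); the two-variable linear limit state is a hypothetical example already expressed in standard normal space.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical limit state in standard normal space; failure when g <= 0.
def g(eta):
    return 3.0 - eta[0] - 0.5 * eta[1]

# Equation (14): minimize ||eta|| subject to g(eta) = 0.
res = minimize(lambda eta: np.linalg.norm(eta),
               x0=np.array([1.0, 1.0]),
               method="SLSQP",
               constraints={"type": "eq", "fun": g})

# The minimizer is the MPP; its norm is the reliability index beta,
# and the FORM estimate is Pf ~= Phi(-beta).
beta = np.linalg.norm(res.x)
print(f"MPP = {np.round(res.x, 3)}, beta = {beta:.3f}, "
      f"Pf = {norm.cdf(-beta):.4e}")
```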
[Figure 6. The most probable point (MPP) and the reliability index β in the space of uncorrelated standard normal variables.]

Several optimization techniques, such as Newton search (Rackwitz and Fiessler, 1978) and sequential quadratic programming (Schittkowski, 1983), can be used to find the MPP. Second-order reliability methods (SORM) are also available for higher accuracy; these take into account the curvature of the limit state in the failure probability calculation (e.g., Breitung, 1984; Tvedt, 1990). Compared to basic Monte Carlo simulation, FORM and SORM require many fewer iterations to converge to the MPP, and thus drastically reduce the computational expense.

5.2 Multiple Criteria

When a PA is conducted with respect to multiple requirements, the overall system-level probability of meeting the requirements is calculated through unions or intersections of individual failure probabilities.

In the case of unions (i.e., the system fails if any one of the individual criteria is not met), the failure probability is

    P_F,series = P[∪_k {g_k(x) ≤ 0}]    (15)

This system failure probability may be computed using either Monte Carlo simulation, or by extending the results of the first-order approximation in Section 5.1. Let B be the vector of reliability indices for each of the limit states, and let the elements of the matrix R be the dot products of the corresponding α vectors (the unit gradient vector of the limit state at the MPP in standard normal space) obtained from the FORM analysis for each limit state. Then the system failure probability in the above equation can be approximated as 1 − Φ(B, R), where Φ(B, R) is the standard normal multivariate CDF with correlation matrix R. Closed-form representations of Φ(B, R) exist for the
bivariate case (Dunnett and Sobel, 1954). If more than two limit states are considered, then one may elect to use bounding formulae (Ditlevsen, 1979), importance sampling methods (e.g., Mahadevan and Dey, 1998; Ambartzumian et al., 1998), multiple linearizations (Hohenbichler and Rackwitz, 1987), or a moment-based approximation (Pandey, 1998). For nonlinear limit states, the joint failure domain may be identified through an iterative linearization procedure (Mahadevan and Shi, 2001).

Similar concepts can be applied when the system failure is defined through intersections of individual failures (i.e., the system fails only if all the individual criteria are not met). In that case, the failure probability is

    P_F,parallel = P[∩_k {g_k(x) ≤ 0}]    (16)

Again, the failure probability of the parallel system can be calculated either by Monte Carlo simulation, or from the results of the FORM analysis of its components as Φ(−B, R). In case FORM-based estimation is too approximate, Monte Carlo simulation can be used for higher accuracy, but with a large number of simulations. Efficient sampling techniques such as importance sampling (Mahadevan and Dey, 1998) may be used to reduce the computational expense.

In some cases, the overall system failure definition may not be a simple union or intersection of individual failures, but may need to be represented as combinations of unions and intersections. In most cases, the system will not necessarily be in one of two states (failed or safe), but in one of several levels of performance or degradation. Accounting for the evolution of system states through time considerably increases the computational effort. The effort increases further when iterative multi-physics analysis is necessary, as in the case of several simultaneously active degradation processes. One option is to use first-order, second-moment approximations to B and R (Mahadevan and Smith, 2006) to reduce the computational expense, but at the cost of accuracy. A trade-off between accuracy and computational expense may be necessary.

An important observation to note is that the probability calculations described in Sections 5.1 and 5.2 are only with respect to physical variability, represented by the random variables X. The presence of data uncertainty and model errors makes the probability estimates themselves uncertain. Thus one can construct confidence bounds on the CDF of the output, based on a nested two-loop analysis. In the outer loop, realizations of the variables representing information uncertainty (such as distribution parameters of the probability distributions) and model errors are generated, and for each such realization, the output CDF is constructed in the inner loop. The collection of the resulting multiple CDFs is then used to construct the confidence bounds on the CDF. This nested implementation can become computationally demanding; in that case, a single-loop implementation that simultaneously performs both outer loop and inner loop analyses may be pursued (McDonald et al., 2009).

6.0 CONCLUSION

Uncertainty quantification in performance assessment involves consideration of three sources of uncertainty – inherent variability, information uncertainty, and model errors. This report described available methods to quantify the uncertainty in model-based prediction due to each of these sources, and addressed them in four stages – input characterization based on data; propagation of uncertainties and errors through the system model; model calibration, validation and extrapolation; and performance assessment. Flexible distribution families were discussed to handle sparse data and interval data. Autoregressive models were discussed to handle time dependence. Methods to quantify model errors resulting from both model form selection and solution approximation were discussed. Bayesian methods were discussed for model calibration, validation and extrapolation. An important issue is computational expense, when iterative analysis
between multiple codes is necessary. Uncertainty quantification multiplies the computational effort of deterministic analysis by an order of magnitude. Therefore the use of surrogate models, and first-order approximations of overall output uncertainty, were described to reduce the computational expense.

Many of the methods described in the report have been applied to mechanical systems that are small in size, or time-independent, and the uncertainties considered were not very large. None of these simplifications is available in the case of long-term performance assessment of engineered barriers for radioactive waste containment, and real-world data to validate long-term model predictions is not available. Thus the extrapolations are based on laboratory data or limited-term observations, and come with large uncertainty. Therefore the benefit of uncertainty quantification is not so much in predicting failure probability or similar measures, but in facilitating engineering decision making, such as comparing different design and analysis options, and allocating resources for uncertainty reduction through further data collection and/or model refinement.
REFERENCES
Ferson, S, Kreinovich, V, Hajagos, J, Oberkampf, W & Ginzburg, L 2007, Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty, Sandia National Laboratories Tec. Rep. Sand. No. 2003-0939, Albuquerque, New Mexico.

Ghanem, R & Spanos, P 2003, Stochastic Finite Elements: A Spectral Approach, Springer-Verlag, New York.

Gilks, WR, Richardson, S & Spiegelhalter, DJ 1996, Markov Chain Monte Carlo in Practice, Interdisciplinary Statistics Series, Chapman and Hall, Boca Raton, Florida.

Glimm, J & Sharp, DH 1999, 'Prediction and the quantification of uncertainty', Physica D, 133(1-4), 152–170.

Goktepe, AB, Inan, G, Ramyar, K & Sezer, A 2006, 'Estimation of sulfate expansion level of PC mortar using statistical and neural approaches', Construction and Building Materials, 20, 441–449.

Gulikers, J 2006, 'Considerations on the reliability of service life predictions using a probabilistic approach', Journal de Physique IV, 136, 233-241.

Gurley, KR 1997, 'Modeling and simulation of non-Gaussian processes', PhD thesis, University of Notre Dame, April 1997.

Haldar, A & Mahadevan, S 2000, Probability, Reliability and Statistical Methods in Engineering Design, J. Wiley & Sons, New York.

Hanson, KM 1999, 'A framework for assessing uncertainties in simulation predictions', Physica D, 133(1-4), 179–188.

Hanson, KM & Hemez, FM 2003, 'Uncertainty quantification of simulation codes based on experimental data', Proceedings of the 41st AIAA Aerospace Sciences Meeting, Jan. 6-9, 2003, Reno, Nevada.

Hasofer, AM & Lind, NC 1974, 'Exact and invariant second moment code format', Journal of the Engineering Mechanics Division, ASCE, 100(EM1), 111-121.

Helton, JC & Sallaberry, CJ 2009a, 'Conceptual basis for the definition and calculation of expected dose in performance assessments for the proposed high-level radioactive waste repository at Yucca Mountain, Nevada', Reliability Engineering and System Safety, 94, 677-698.

Helton, JC & Sallaberry, CJ 2009b, 'Computational implementation of sampling-based approaches to the calculation of expected dose in performance assessments for the proposed high-level radioactive waste repository at Yucca Mountain, Nevada', Reliability Engineering and System Safety, 94, 699-721.

Hills, RG & Leslie, I 2003, Statistical validation of engineering and scientific models: Validation experiments to application, Sandia National Laboratories Tec. Rep. Sand. No. 2003-0706, Albuquerque, New Mexico.

Hills, RG & Trucano, TG 2002, Statistical validation of engineering and scientific models: A maximum likelihood based metric, Sandia National Laboratories Tec. Rep. Sand. No. 2001-1783, Albuquerque, New Mexico.

Hohenbichler, M & Rackwitz, R 1987, 'First-order concepts in systems reliability', Structural Safety, 1(3), 177–188.

Holdren, KJ, Anderson, DL, Becker, BH, Hampton, NL, Koeppen, LD, Magnussen, SO & Sondrup, AJ 2006, Remedial investigation and baseline risk assessment of operable unit 7-13/14, U.S. Department of Energy, Idaho Operations Office.
Hong, HP 2000, 'Assessment of reliability of ageing reinforced concrete structures', Journal of Structural Engineering, 126(12), 1458–1465.

Huang, S, Mahadevan, S & Rebba, R 2007, 'Collocation-based stochastic finite element analysis for random field problems', Probabilistic Engineering Mechanics, 22, 194–205.

Isukapalli, SS, Roy, A & Georgopoulos, PG 1998, 'Stochastic response surface methods (SRSMs) for uncertainty propagation: Application to environmental and biological systems', Risk Analysis, 18(3), 351–363.

Jeffreys, H 1961, Theory of Probability, 3rd ed., Oxford University Press, London.

Jensen, FV & Jensen, FB 2001, Bayesian Networks and Decision Graphs, Springer-Verlag, New York.

Jiang, X & Mahadevan, S 2007, 'Bayesian risk-based decision method for model validation under uncertainty', Reliability Engineering and System Safety, 92(6), 707–718.

Jiang, X & Mahadevan, S 2008, 'Bayesian validation assessment of multivariate computational models', Journal of Applied Statistics, 35(1), 49-65.

Kennedy, MC & O'Hagan, A 2001, 'Bayesian calibration of computer models (with discussion)', Journal of the Royal Statistical Society, Series B, 63(3), 425–464.

Kong, JS, Ababneh, AN, Frangopol, DM & Xi, Y 2002, 'Reliability analysis of chloride penetration in saturated concrete', Probabilistic Engineering Mechanics, 17, 305–315.

Krajcinovic, D, Basista, M, Mallick, K & Sumarac, D 1992, 'Chemo-micromechanics of brittle solids', Journal of the Mechanics and Physics of Solids, 40(5), 965-990.

Langley, RS 2000, 'A unified approach to the probabilistic and possibilistic analysis of uncertain systems', ASCE Journal of Engineering Mechanics, 126, 1163-1172.

Mahadevan, S & Dey, A 1998, 'Ductile system reliability analysis using adaptive importance sampling', Structural Safety, 20(2), 137–154.

Mahadevan, S & Raghothamachar, P 2000, 'Adaptive simulation for system reliability analysis of large structures', Computers and Structures, 77(6), 725–734.

Mahadevan, S & Rebba, R 2005, 'Validation of reliability computational models using Bayes networks', Reliability Engineering and System Safety, 87(2), 223–232.

Mahadevan, S & Shi, P 2001, 'Multiple linearization method for nonlinear reliability analysis', ASCE Journal of Engineering Mechanics, 127(11), 1165–1173.

Mahadevan, S & Smith, N 2006, 'Efficient first-order reliability analysis of multidisciplinary systems', International Journal of Reliability and Safety, 1(1/2), 137–154.

Marhadi, KS, Venkataraman, S & Pai, S 2008, 'Quantifying uncertainty in statistical distribution of small sample data using Bayesian inference of unbounded Johnson distribution', Proceedings, 49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Schaumburg, Illinois.

Mathelin, L, Hussaini, MY, Zhang, TA & Bataille, F 2004, 'Uncertainty propagation for a turbulent, compressible nozzle flow using stochastic methods', AIAA Journal, 42(8), 1169–1176.

Mathelin, L, Hussaini, MY & Zang, TA 2005, 'Stochastic approaches to uncertainty quantification in CFD simulations', Numerical Algorithms, 38, 209–236.
McDonald, M, Zaman, K & Mahadevan, S 2008, 'Uncertainty quantification and propagation for multidisciplinary system analysis', 12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Paper No. 134794, Victoria, British Columbia, Canada.

McDonald, M, Zaman, K & Mahadevan, S 2009, 'Representation and first-order approximations for propagation of aleatory and distribution parameter uncertainty', AIAA-2009-2250, 50th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 4-7 May 2009, Palm Springs, California.

McFarland, JM 2008, Uncertainty analysis for computer simulations through validation and calibration, PhD dissertation, Vanderbilt University, Nashville, TN.

McFarland, J, Mahadevan, S, Swiler, L & Giunta, A 2007, 'Bayesian calibration of the QASPR simulation', Proceedings of the 9th AIAA Non-Deterministic Approaches Conference, Honolulu, Hawaii.

Mckay, MD, Conover, WJ & Beckman, RJ 1979, 'A comparison of three methods for selecting values of input variables in the analysis of output from a computer code', Technometrics, 21, 239–245.

Millman, DR, King, PI, Maple, RC, Beran, PS & Chilton, LK 2006, 'Uncertainty quantification with a B-spline stochastic projection', AIAA Journal, 44(8), 1845–1853.

Nigam, NC 1983, Introduction to Random Vibrations, MIT Press.

Oberkampf, WL & Barone, MF 2006, 'Measures of agreement between computation and experiment: Validation metrics', Journal of Computational Physics, 217(1), 5–36.

Oberkampf, WL, Trucano, TG & Hirsch, Ch 2003, Verification, Validation, and Predictive Capabilities in Computational Engineering and Physics, Sandia National Laboratories Tec. Rep. Sand. No. 2003-3769, Albuquerque, New Mexico.

Paez, TL & Urbina, A 2002, 'Validation of mathematical models of complex structural dynamic systems', Proceedings of the Ninth International Congress on Sound and Vibration, Orlando, Florida.

Pandey, MD 1998, 'An effective approximation to evaluate multinormal integrals', Structural Safety, 20(1), 51–67.

Rackwitz, R & Fiessler, B 1978, 'Structural reliability under combined load sequences', Computers and Structures, 9(5), 489–494.

Rafiq, MI, Chryssanthopoulos, MK & Onoufriou, T 2004, 'Performance updating of concrete bridges using proactive health monitoring methods', Reliability Engineering and System Safety, 86(3), 247-256.

Rebba, R 2005, Model Validation and Design under Uncertainty, PhD dissertation, Vanderbilt University, Nashville, TN, USA.

Rebba, R & Mahadevan, S 2006, 'Model predictive capability assessment under uncertainty', AIAA Journal, 44(10), 2376–2384.

Rebba, R & Mahadevan, S 2008, 'Computational methods for model reliability assessment', Reliability Engineering and System Safety, 93, 1197–1207.

Rebba, R, Mahadevan, S & Huang, S 2006, 'Validation and error estimation of computational models', Reliability Engineering and System Safety, 91(10-11), 1390–1397.
Red-Horse, JR & Benjamin, AS 2004, 'A probabilistic approach to uncertainty quantification with limited information', Reliability Engineering and System Safety, 85, 183–190.

Richards, SA 1997, 'Completed Richardson extrapolation in space and time', Communications in Numerical Methods in Engineering, 13(7), 558–573.

Robert, CP & Casella, G 2004, Monte Carlo Statistical Methods, 2nd ed., Springer-Verlag, New York.

Ross, TJ, Booker, JM & Parkinson, WJ 2002, Fuzzy Logic and Probability Applications: Bridging the Gap, Society for Industrial and Applied Mathematics, Philadelphia, PA.

Rubinstein, RY 1981, Simulation and the Monte Carlo Method, Wiley, New York.

Rutherford, BM & Dowding, K 2003, An approach to model validation and model-based prediction—polyurethane foam case study, Sandia National Laboratories Tec. Rep. Sand. No. 2003-2336, Albuquerque, New Mexico.

Saltelli, A, Chan, K & Scott, EM 2000, Sensitivity Analysis, John Wiley & Sons.

Schittkowski, K 1983, 'On the convergence of a sequential quadratic programming method with an augmented Lagrangian search direction', Mathematische Operationsforschung und Statistik, Series Optimization, 14, 197–216.

Stewart, MG & Mullard, JA 2007, 'Spatial time-dependent reliability analysis of corrosion damage and the timing of first repair for RC structures', Engineering Structures, 29, 1457-1464.

Tatang, MA, Pan, W, Prinn, RG & McRae, GJ 1997, 'An efficient method for parametric uncertainty analysis of numerical geophysical models', Journal of Geophysical Research, 102(D18), 21925–21932.

Tixier, R & Mobasher, B 2003, 'Modeling of damage in cement-based materials subjected to external sulfate attack. II: Comparison with experiments', Journal of Materials in Civil Engineering, 15(4), 314-322.

Trucano, TG, Easterling, RG, Dowding, KJ, Paez, TL, Urbina, A, Romero, VJ, Rutherford, BM & Hills, RG 2001, Description of the Sandia validation metrics project, Sandia National Laboratories Tec. Rep. Sand. No. 2001-1339, Albuquerque, New Mexico.

Tvedt, L 1990, 'Distribution of quadratic forms in normal space – application to structural reliability', Journal of Engineering Mechanics, ASCE, 116(6), 1183-1197.

Urbina, A & Mahadevan, S 2009, 'Uncertainty quantification in hierarchical computational model development', Proceedings, 12th AIAA Non-Deterministic Approaches Conference, Palm Springs, California.

Witteveen, J & Bijl, H 2006, 'Using polynomial chaos for uncertainty quantification in problems with nonlinearities', Paper No. AIAA-2006-2066, Proceedings, 47th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 14th AIAA/ASME/AHS Adaptive Structures Conference, Newport, Rhode Island.

Xiu, D & Karniadakis, GE 2003, 'Modeling uncertainty in flow simulations via generalized polynomial chaos', Journal of Computational Physics, 187(1), 137-167.

Zhang, R & Mahadevan, S 2003, 'Bayesian methodology for reliability model acceptance', Reliability Engineering and System Safety, 80(1), 95–103.

Zou, T, Mahadevan, S & Mourelatos, Z 2003, 'Reliability-based evaluation of automotive wind noise quality', Reliability Engineering and System Safety, 82(2), 217–224.