
Structural Reliability

Thuong Van DANG

May 28, 2018

1 / 41
2 / 41
Introduction to Structural Reliability

Concept of Limit State and Reliability

Review of Probability Theory

First Order Second Moment Method (FOSM)

First Order Reliability Method (FORM)

Second Order Reliability Method (SORM)

Monte Carlo Method

3 / 41
Introduction to Structural Reliability

Civil engineering facilities such as bridges, buildings, power plants, dams and offshore platforms are all intended to contribute to the benefit and quality of life. Therefore it is important that the benefit of the facility can be identified considering all phases of the life of the facility, i.e. including design, manufacturing, construction, operation and eventually decommissioning.
For many years it has been assumed in the design of structural systems that all loads and strengths are deterministic. The strength of an element was determined in such a way that it exceeded the load with a certain margin. The ratio between the strength and the load was denoted the safety factor. This number was considered a measure of the reliability of the structure.

4 / 41
Introduction to Structural Reliability

In codes of practice for structural systems, values for loads, strengths and safety factors are prescribed.
However, uncertainties in the loads, in the strengths and in the modeling of the systems mean that, in a number of situations, methods based on probabilistic techniques have to be used.
A structure is usually required to have a satisfactory performance in its expected lifetime, i.e. it is required that it does not collapse or become unsafe and that it fulfills certain functional requirements.
Generally, structural systems have a rather small probability that they do not function as intended.

5 / 41
Introduction to Structural Reliability

The study of structural reliability is concerned with the calculation


and prediction of the probability of limit state violation for an
engineered structural system at any stage during its life.

The objective of any structural design is to ensure safety and


economy of the structure operating under a given environment.

Capacity (C) > Demand (D)

As long as this condition is satisfied, the safety of the structure is ensured for the intended purpose for which the structure is built. Besides this, designers also ensure that there is an optimal use of the materials, which, in turn, ensures economy.

6 / 41
Concept of Limit State and Reliability

The design variables may be grouped into those associated with the Capacity, i.e. C = X1 X2 X3 ... Xp, and the remaining 'q' variables associated with the Demand, i.e. D = Xp+1 Xp+2 Xp+3 ... Xp+q, leading to p + q = n. If the variables are deterministic, the design may be iteratively carried out to reach an optimal value of them which, in turn, will ensure all three fundamental requirements stated earlier. The problem becomes more complex when the design variables are random in nature, i.e. only defined by their pdf. Let us consider only two random variables, C and D, which are independent.

The limit state equation can be represented in the form:

g(X) = Capacity − Demand = C − D = 0

The limit state divides the design space into safe and unsafe regions.

Figure: Limit state showing the safe region (g(X) > 0), the unsafe region (g(X) < 0) and the limit state g(X) = 0 in the (x1, x2) design space


From this figure one can conclude that there may be infinite combinations of C and D that fall on the limit state, i.e. satisfy the design. So, unlike deterministic design, it is a rather tough job for the designer to choose the optimal value. An intuitive solution to this problem is to select the point that has the least probability of failure. This gives rise to an important question: how to assess the point on the limit state in the first place? To answer this question, consider the pdfs fC(c) and fD(d) of the two random variables.

It is important to note that the limit state does not define a unique failure function, i.e. the limit state can be described by a number of equivalent failure functions. In structural reliability, the limit state function usually results from a mechanical analysis of the structure.
7 / 41
Concept of Limit State and Reliability
I Reliability
The reliability of an engineering design is the probability that it meets certain demands under given conditions for a specified period of time. Reliability is often described as the ability of a component or system to function at a specified moment or interval of time.
There are several definitions of reliability in national and international documents. In ISO 2394, reliability is the ability of a structure to comply with given requirements under specified conditions during the intended life for which it was designed.
Eurocode provides the description: reliability is the ability of a structure or a structural member to fulfill the specified requirements, including the design working life, for which it has been designed; it is usually expressed in probabilistic terms.
8 / 41
Uncertainty, Reliability, Safety, Risk

9 / 41
Levels of Reliability Methods

Generally, methods to measure the reliability of a structure can be


divided into four groups:
I Level I methods: The uncertain parameters are modeled by
one characteristic value. For example in codes based on the
partial safety factor concept (load and resistance factor
formats).
I Level II methods: The uncertain parameters are modeled by
the mean values and the standard deviations, and by the
correlation coefficients between the stochastic variables. The
stochastic variables are implicitly assumed to be normally
distributed. The reliability index method is an example of a
level II method.

10 / 41
Levels of Reliability Methods

I Level III methods: The uncertain quantities are modeled by


their joint distribution functions. The probability of failure is
estimated as a measure of the reliability.
I Level IV methods: In these methods, the consequences
(cost) of failure are also taken into account and the risk
(consequence multiplied by the probability of failure) is used
as a measure of the reliability. In this way, different designs
can be compared on an economic basis taking into account
uncertainty, costs and benefits.

11 / 41
Review of Probability Theory
Events and basic probability rules
An event E is defined as a subset of the sample space Ω (all possible outcomes of a random quantity). The failure event E of e.g. a structural element can be modeled by E = {R ≤ S}, where R is the strength and S is the load.
The probability of failure is the probability:
Pf = P(E) = P(R ≤ S).
Axioms of probability:
I Axiom 1: for any event E:

0 ≤ P(E) ≤ 1 (1)

I Axiom 2: for the sample space:

P(Ω) = 1 (2)

I Axiom 3: for mutually exclusive events E1, E2, ..., Em:

P(E1 ∪ E2 ∪ ... ∪ Em) = P(E1) + P(E2) + ... + P(Em) (3)
12 / 41
Review of Probability Theory
The conditional probability of an event E1 given another event E2 is defined by:

P(E1|E2) = P(E1 ∩ E2) / P(E2) (4)

If E1 and E2 are statistically independent:

P(E1 ∩ E2) = P(E1) P(E2) (5)

Bayes' theorem: for mutually exclusive events E1, E2, ..., Em that partition the sample space,

P(Ei|A) = P(A|Ei) P(Ei) / P(A) = P(A|Ei) P(Ei) / Σ_{j=1}^{m} P(A|Ej) P(Ej) (6)

where A is an event.
13 / 41
Review of Probability Theory
Continuous stochastic variables
Consider a continuous stochastic variable X. The distribution function of X is denoted FX(x) and gives the probability FX(x) = P(X ≤ x). The probability density function fX(x) is defined by:

fX(x) = dFX(x)/dx (7)

The expected value is defined by:

µ = ∫_{−∞}^{∞} x fX(x) dx (9)

The variance σ² is defined by:

σ² = ∫_{−∞}^{∞} (x − µ)² fX(x) dx (10)

where σ is the standard deviation.
14 / 41


Review of Probability Theory
I Normal distribution
The normal distribution often occurs in practical applications because the sum of a large number of statistically independent random variables converges to a normal distribution (known as the central limit theorem). The normal distribution can be used to represent material properties and fatigue uncertainty. It is also used in stress-strength interference models in reliability studies.
The cumulative distribution function of a normal random variable X with mean µX and standard deviation σX is given by:

FX(x) = Φ((x − µX)/σX) = ∫_{−∞}^{x} 1/(σX √(2π)) exp[−½ ((t − µX)/σX)²] dt (12)

where Φ(u) is the standardized distribution function for a normally distributed stochastic variable with expected value µ = 0 and standard deviation σ = 1.
15 / 41
Review of Probability Theory

16 / 41
Review of Probability Theory

I Lognormal distribution
The cumulative distribution function for a lognormal stochastic variable X with expected value µX and standard deviation σX is denoted LN(µX, σX), and is defined by:

FX(x) = Φ((ln x − µY)/σY) = ∫_{−∞}^{ln x} 1/(σY √(2π)) exp[−½ ((t − µY)/σY)²] dt (13)

where
σY = √(ln((σX/µX)² + 1)) and µY = ln µX − ½ σY² are the standard deviation and expected value of the normally distributed stochastic variable Y = ln X.
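As an illustration (not part of the original slides), a minimal Python sketch of these relations with assumed values µX = 10 and σX = 2, converting them to the parameters of Y = ln X and evaluating the lognormal CDF through the standard normal distribution:

```python
import numpy as np
from scipy.stats import norm

# Assumed mean and standard deviation of the lognormal variable X (illustrative values)
mu_X, sigma_X = 10.0, 2.0

# Parameters of Y = ln(X), following the relations on this slide
sigma_Y = np.sqrt(np.log((sigma_X / mu_X) ** 2 + 1.0))
mu_Y = np.log(mu_X) - 0.5 * sigma_Y ** 2

def lognormal_cdf(x):
    """CDF of X evaluated through the standard normal distribution."""
    return norm.cdf((np.log(x) - mu_Y) / sigma_Y)

print(sigma_Y, mu_Y, lognormal_cdf(10.0))
```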

17 / 41
Review of Probability Theory

I Exponential distribution
The exponential distribution is the most widely used distribution in reliability and risk assessment. It is the only continuous distribution with a constant hazard rate and is used to model the useful life of many engineering systems. The exponential distribution is closely related to the (discrete) Poisson distribution: if the number of failures per unit time is Poisson distributed, then the time between failures follows the exponential distribution. The cumulative distribution function is given by:

F(x; λ) = 1 − e^{−λx} for x ≥ 0, and F(x; λ) = 0 for x < 0 (14)

where λ > 0 is the rate (inverse scale) parameter.

18 / 41
Review of Probability Theory

I Weibull distribution
The Weibull distribution is a continuous probability distribution. It is named after the Swedish mathematician Waloddi Weibull, who described it in detail in 1951, although it was first identified by Fréchet (1927) and first applied by Rosin & Rammler (1933) to describe a particle size distribution.
The cumulative distribution function is given by:

F(x; k, λ) = 1 − e^{−(x/λ)^k} for x ≥ 0, and F(x; k, λ) = 0 for x < 0 (15)

where λ ∈ (0, +∞) and k ∈ (0, +∞) are the scale and shape parameters.

19 / 41
Review of Probability Theory

20 / 41
Review of Probability Theory
Conditional distributions
The conditional density function for X1 given X2 is defined by:

fX1|X2(x1|x2) = fX1,X2(x1, x2) / fX2(x2) (16)

X1 and X2 are statistically independent if fX1|X2(x1|x2) = fX1(x1), implying that:

fX1,X2(x1, x2) = fX1(x1) fX2(x2) (17)

Covariance and correlation

The covariance between X1 and X2 is defined by:

Cov[X1, X2] = E[(X1 − µ1)(X2 − µ2)] (18)

It is seen that

Cov[X1, X1] = Var[X1] = σ1² (19)


21 / 41
Review of Probability Theory
Covariance and correlation
The correlation coefficient between X1 and X2 is defined by:

ρX1,X2 = Cov[X1, X2] / (σ1 σ2) (20)

If ρX1,X2 = 0 then X1 and X2 are uncorrelated, but not necessarily statistically independent.
For a stochastic vector X = (X1, X2, ..., Xn) the covariance matrix is defined by:

C = [ Var[X1]       Cov[X1, X2]   ...   Cov[X1, Xn]
      Cov[X2, X1]   Var[X2]       ...   Cov[X2, Xn]
      ...           ...           ...   ...
      Cov[Xn, X1]   Cov[Xn, X2]   ...   Var[Xn]     ]   (21)
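For illustration only (the values and sample size below are assumptions, not from the slides), a short numpy sketch that estimates a covariance matrix and the correlation coefficient from samples:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two correlated samples, purely illustrative
x1 = rng.normal(10.0, 2.0, size=10_000)
x2 = 0.5 * x1 + rng.normal(0.0, 1.0, size=10_000)

C = np.cov(x1, x2)                            # 2x2 covariance matrix
rho = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])    # correlation coefficient
print(C)
print(rho)
```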

22 / 41
First Order Second Moment Method (FOSM)

The First Order Second Moment (FOSM) method is based on a first order Taylor approximation of the performance function. It uses only second moment statistics (means and covariances) of the random variables. Consider a case where C and D are normally distributed and statistically independent variables. µC and µD are the means and σC and σD are the standard deviations of C and D, respectively.
Then the mean of g(X) is:

µg = µC − µD (22)

and the standard deviation of g(X) is:

σg = √(σC² + σD²) (23)

23 / 41
First Order Second Moment Method (FOSM)

So the failure probability is:

pf = P(g(X) < 0) = Φ((0 − (µC − µD)) / √(σC² + σD²)) = Φ(−µg/σg) (24)

where β is a measure of the reliability for the limit state g(X). It was proposed by Cornell and hence is termed Cornell's reliability index:

β = µg/σg (25)
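A minimal numerical sketch of this two-variable case; the means and standard deviations below are assumed example values, not taken from the slides:

```python
from math import sqrt
from scipy.stats import norm

# Assumed statistics of capacity C and demand D (illustrative only)
mu_C, sigma_C = 120.0, 15.0
mu_D, sigma_D = 80.0, 20.0

mu_g = mu_C - mu_D                       # mean of g = C - D
sigma_g = sqrt(sigma_C**2 + sigma_D**2)  # standard deviation of g

beta = mu_g / sigma_g                    # Cornell reliability index
pf = norm.cdf(-beta)                     # probability of failure
print(beta, pf)
```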

Consider the case of a generalized performance function of many random variables X1, X2, ..., Xn.

24 / 41
First Order Second Moment Method (FOSM)
Expanding this performance function about the mean gives:

LSF = g(µX1, µX2, ..., µXn) + Σ_{i=1}^{n} (∂g/∂Xi)(Xi − µXi) + ½ Σ_{i=1}^{n} Σ_{j=1}^{n} (∂²g/∂Xi∂Xj)(Xi − µXi)(Xj − µXj) + ... (26)

where the derivatives are evaluated at the mean values.

Considering the first two terms in the Taylor series expansion and taking expectations on both sides, one can show that:

µg = E[g(X)] = g(µ1, µ2, ..., µn) (28)

Var(g) = σg² = Σ_{i=1}^{n} Σ_{j=1}^{n} (∂g/∂Xi)(∂g/∂Xj) Cov(Xi, Xj) (29)

where Cov(Xi, Xj) is the covariance of Xi and Xj.
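A sketch of this mean-value FOSM estimate using finite-difference gradients; the performance function and statistics are borrowed from the FORM example later in the slides and are otherwise an assumption:

```python
import numpy as np
from scipy.stats import norm

def g(x):
    # Illustrative performance function (same form as the later FORM example)
    return x[0] * x[1] - 1500.0

mu = np.array([33.2, 50.0])            # mean values of X1, X2
cov = np.diag([2.1**2, 2.6**2])        # independent variables -> diagonal covariance

# Finite-difference gradient of g evaluated at the mean point
eps = 1e-6
grad = np.array([(g(mu + eps * e) - g(mu - eps * e)) / (2 * eps) for e in np.eye(2)])

mu_g = g(mu)                           # Eq. (28)
sigma_g = np.sqrt(grad @ cov @ grad)   # Eq. (29)
beta = mu_g / sigma_g
print(beta, norm.cdf(-beta))
```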


25 / 41
First Order Reliability Method (FORM)
I Reliability index
In 1974 Hasofer & Lind proposed a definition of the reliability index which is invariant with respect to the mathematical formulation of the safety margin.

Figure: An example transformation to standard normal space

The reliability index β is defined as the smallest distance from the origin O of the U-space to the failure surface g(U) = 0.
26 / 41
First Order Reliability Method (FORM)
The reliability index is thus determined by the optimization problem:

β = min_{g(U)=0} √(Σ_{i=1}^{n} Ui²) (30)

The reliability index is also known as an equivalent value to the probability of failure, formally defined as the negative value of a standardized normal variable corresponding to the probability of failure Pf:

β = −Φ⁻¹(Pf) (31)

where Φ⁻¹ denotes the inverse standardized normal distribution function.
For independent variables of any distribution, the principle of the transformation into standard normal space consists of writing the equality of the distribution functions:

Φ(u) = FX(x) ⇒ u = Φ⁻¹(FX(x)) (32)

27 / 41
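The two relations on this slide can be checked with a few lines of Python (a sketch; the lognormal parameters in the last step are assumed for illustration):

```python
from scipy.stats import norm, lognorm

beta = 3.8                     # an example target reliability index
pf = norm.cdf(-beta)           # probability of failure corresponding to beta
beta_back = -norm.ppf(pf)      # beta recovered from pf, Eq. (31)
print(pf, beta_back)

# Isoprobabilistic transformation of a hypothetical lognormal X into U-space, Eq. (32)
x = 12.0
u = norm.ppf(lognorm.cdf(x, s=0.2, scale=10.0))
print(u)
```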
First Order Reliability Method (FORM)
I Probability of failure
The probability of failure is an important term in the theory of structural reliability. It is assumed that X is the vector of random variables that influence a system's load (L) and resistance (R); the limit state function is formulated in terms of these basic variables and given as:

g(X) = g(X1, X2, ..., Xn) = R − L (33)

In the general sense, the probability of failure based on the given limit state for a time-invariant reliability problem is:

Pf = P[g(X) ≤ 0] = ∫_{g(X)≤0} fX(x) dx (34)

where fX(x) is the joint probability density function of X.
28 / 41
Figure: Probability of failure versus reliability index
29 / 41
First Order Reliability Method (FORM)
The FORM (First Order Reliability Method) is one of the basic and
very efficient reliability methods. The FORM method is used as a
fundamental procedure by a number of software products for the
reliability analysis of structures and systems. It is also mentioned
in EN 1990 [1] that the design values are based on the FORM
reliability method.
The procedure for applying the Rackwitz and Fiessler algorithm for reliability calculation can be listed as follows:
I Step 1: Write the limit state function g(X) = 0 in terms of the basic variables. Transform the limit state function g(X) into standard normal space g(U).
I Step 2: Assume an initial value of the design point U0 (the mean value).
I Step 3: Calculate the gradient vector

∇g(u) = (∂g/∂u1, ..., ∂g/∂un) (35)

30 / 41
First Order Reliability Method (FORM)
I Step 4: Calculate an improved guess of the design point. With the unit normal vector

α = −∇g(u) / |∇g(u)| (36)

the iterated point u^{i+1} is obtained as:

u^{i+1} = (u^T α) α + α g(u) / |∇g(u)| (37)

I Step 5: Calculate the corresponding reliability index:

β^{i+1} = √((u^{i+1})^T u^{i+1}) (38)

I Step 6: If convergence in β, i.e. |β^{i+1} − β^i| ≤ 10⁻³, then stop; else set i = i + 1 and go to Step 3.
31 / 41
First Order Reliability Method (FORM)

Example: Suppose that the performance function of a problem is defined by:

g(X) = X1 X2 − 1500

where X1 follows a normal distribution with mean µX1 = 33.2 and standard deviation σX1 = 2.1, and X2 follows a normal distribution with mean µX2 = 50 and standard deviation σX2 = 2.6.
Using FORM, find the reliability index and the probability of failure.
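One possible implementation of the Rackwitz-Fiessler (HL-RF) iteration for this example is sketched below. Since X1 and X2 are independent normal variables, the transformation to standard normal space is simply xi = µi + σi ui; the finite-difference gradient is an implementation choice, not prescribed by the slides.

```python
import numpy as np
from scipy.stats import norm

mu = np.array([33.2, 50.0])
sigma = np.array([2.1, 2.6])

def g_u(u):
    x = mu + sigma * u                      # back-transformation for independent normals
    return x[0] * x[1] - 1500.0

def grad_g_u(u, eps=1e-6):
    return np.array([(g_u(u + eps * e) - g_u(u - eps * e)) / (2 * eps) for e in np.eye(2)])

u = np.zeros(2)                             # Step 2: start at the mean point (u = 0)
beta_old = 0.0
for _ in range(100):
    grad = grad_g_u(u)                      # Step 3: gradient vector
    alpha = -grad / np.linalg.norm(grad)    # Step 4: unit normal vector, Eq. (36)
    u = (u @ alpha) * alpha + alpha * g_u(u) / np.linalg.norm(grad)  # Eq. (37)
    beta = np.linalg.norm(u)                # Step 5: reliability index, Eq. (38)
    if abs(beta - beta_old) <= 1e-3:        # Step 6: convergence check
        break
    beta_old = beta

print(beta, norm.cdf(-beta))                # reliability index and failure probability
```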

32 / 41
Second Order Reliability Method (SORM)
In reality, limit states are often highly non-linear in standard normal space, and hence a first order approximation may introduce significant error in the reliability index evaluation. Thus, a better approximation by second order terms is required for a highly non-linear limit state.
The procedure to apply the algorithm of Breitung:
I Step 1: The initial orthogonal matrix T0 is evaluated from the direction cosines, evaluated as explained in FORM under the Rackwitz and Fiessler algorithm:

T0 = [ 1    0    ...  0
       0    1    ...  0
       ...  ...  ...  ...
       α1   α2   ...  αn ]   (39)

where α1, α2, ..., αn are the direction cosines of the unit gradient vector at the Most Probable failure Point (MPP).
33 / 41
Second Order Reliability Method (SORM)
I Step 2: The matrix T0 = [t01, t02, ..., t0n]^T is modified using the Gram-Schmidt orthogonalization procedure as:

tk = t0k − Σ_{i=k+1}^{n} [(ti t0k^T)/(ti ti^T)] ti (40)

where tk are the row vectors of the modified orthogonal matrix T = [t1, t2, ..., tn]^T and k ranges over n, n−1, n−2, ..., 2, 1. The rotation matrix is produced by normalizing these vectors to unit length.
I Step 3: An orthogonal transformation of the random variables X into Y is evaluated using the orthogonal matrix T (also known as the rotation matrix): Y = TX. Again using the orthogonal matrix T, another matrix A is evaluated as:

A = [aij] = (T H T^T)ij / |∇G*|,  i, j = 1, 2, ..., n − 1 (41)

where |∇G*| is the length of the gradient vector and H represents the matrix of second derivatives (Hessian) of the limit state in standard normal space at the design point.
34 / 41
Second Order Reliability Method (SORM)
The Hessian matrix is:

H = [ ∂²g/∂u1²      ∂²g/∂u1∂u2   ...  ...
      ∂²g/∂u2∂u1    ∂²g/∂u2²     ...  ...
      ...           ...          ...  ...
      ...           ...          ...  ∂²g/∂un² ]   (42)

I Step 4: The last row and last column of the A matrix and the last row of the Y vector are eliminated to account for the fact that the last variable yn coincides with β computed in FORM:

yn = β + ½ y^T A y (43)

Now the size of the coefficient matrix A is reduced to (n−1)×(n−1), and the main curvatures κi are given by the eigenvalues of matrix A.
35 / 41
Second Order Reliability Method (SORM)

I Step 5: Compute the failure probability Pf using Breitung's equation:

Pf = Φ(−β) ∏_{i=1}^{n−1} (1 + β κi)^{−1/2} (44)

where κi are the main curvatures of the limit state surface at the design point.

Since the approximation of the performance function in SORM is better than in FORM, SORM is generally more accurate than FORM. However, since SORM requires the second order derivatives, it is not as efficient as FORM when the derivatives are evaluated numerically: if the number of performance function evaluations is used to measure efficiency, SORM needs more function evaluations than FORM.

Figure: Comparison of FORM and SORM
36 / 41
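Given β from FORM and the main curvatures κi, Breitung's correction is a one-line computation; the sketch below uses assumed values for β and the curvatures.

```python
import numpy as np
from scipy.stats import norm

beta = 2.5                           # reliability index from FORM (assumed value)
kappa = np.array([0.10, -0.05])      # main curvatures at the design point (assumed)

pf_form = norm.cdf(-beta)                                  # FORM estimate
pf_sorm = pf_form * np.prod((1.0 + beta * kappa) ** -0.5)  # Breitung correction, Eq. (44)
print(pf_form, pf_sorm)
```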
Monte Carlo Method
Monte Carlo simulation is a statistical analysis tool widely used in both non-engineering and engineering fields. Monte Carlo simulation is also suitable for solving complex engineering problems because it can deal with a large number of random variables, various distribution types, and highly non-linear engineering models. Different from a physical experiment, Monte Carlo simulation performs random sampling and conducts a large number of experiments on the computer. The following steps are required in the simulation process:
I Step 1: Sampling of the input random variables (generating samples of the random variables).
The purpose of sampling the input random variables X = (X1, X2, ..., Xn) is to generate samples that represent the distributions of the input variables from their cdfs FXi(xi), i = 1, 2, ..., n.

37 / 41
Monte Carlo Method
The samples of the random variables will then be used as inputs to
the simulation experiments.
I Step 2: Numerical experimentation
Suppose that N samples of each random variable are generated; then all the samples of the random variables constitute N sets of inputs, xi = (xi1, xi2, ..., xin), i = 1, 2, ..., N, to the model Y = g(X). Solving the problem N times yields N sample points of the output Y:

yi = g(xi), i = 1, 2, ..., N (45)

I Step 3: Extraction of probabilistic information of output


variables
After N samples of output Y have been obtained, statistical
analysis can be carried out to estimate the characteristics of
the output Y, such as the mean, variance, reliability, the
probability of failure, pdf and cdf.
38 / 41
Monte Carlo Method
- The mean:

µ = Ȳ = (1/N) Σ_{i=1}^{N} yi (46)

- The variance:

σY² = 1/(N − 1) Σ_{i=1}^{N} (yi − Ȳ)² (47)

- The probability of failure:

Pf = (1/N) Σ_{i=1}^{N} I[g(xi)] = Nf/N (48)

where Nf is the number of samples for which the performance function is less than or equal to zero.
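A crude Monte Carlo sketch of these steps for the same illustrative limit state used in the FORM example (g = X1 X2 − 1500 with independent normal variables); the sample size is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000                          # number of samples (arbitrary choice)

# Step 1: sample the input random variables from their distributions
x1 = rng.normal(33.2, 2.1, size=N)
x2 = rng.normal(50.0, 2.6, size=N)

# Step 2: numerical experimentation, evaluate the model for every sample
g = x1 * x2 - 1500.0

# Step 3: extract probabilistic information from the output samples
mean_g = g.mean()                      # Eq. (46)
var_g = g.var(ddof=1)                  # Eq. (47)
pf = np.mean(g <= 0.0)                 # Eq. (48): Nf / N
print(mean_g, var_g, pf)
```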
39 / 41
Monte Carlo Method

I Step 4: Error analysis

The commonly used confidence level is 95%, under which the error is approximately given by:

ε% ≈ 200 √((1 − Pf) / (N Pf)) (49)

where Pf is the true value of the probability of failure and N is the number of samples required to obtain Pf within the predefined error.
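Rearranging this error formula gives the sample size needed for a target accuracy; a quick sketch with assumed numbers for Pf and the target error:

```python
from math import ceil

pf_true = 1e-3        # assumed true probability of failure
eps_target = 10.0     # target error in percent at the 95% confidence level

# error(%) ~ 200 * sqrt((1 - pf) / (N * pf))  ->  solve for N
N_required = ceil((200.0 / eps_target) ** 2 * (1.0 - pf_true) / pf_true)
print(N_required)     # about 4e5 samples for pf = 1e-3 and a 10% error
```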

40 / 41
Figure: Crack depth (mm) versus year
41 / 41
