OpenQuake Manual 1.2
The OpenQuake-engine
User Manual
Authors:
Helen Crowley 1, Damiano Monelli 2, Marco Pagani 1, Vitor Silva 3, Graeme Weatherill 3

1 GEM Foundation, via Ferrata 1, 20133 Pavia, Italy
2 GEM Model Facility, SED - ETHZ, Sonneggstrasse 5, CH-8092 Zurich, Switzerland
3 GEM Model Facility, via Ferrata 1, 20133 Pavia, Italy
© 2014 GEM Foundation
Citation
Please cite this document as:
Crowley, H., Monelli, D., Pagani, M., Silva, V., and Weatherill, G. (2014). The OpenQuake-engine User
Manual. Global Earthquake Model (GEM) Technical Report 2014-12.
doi: 10.13117/GEM.OPENQUAKE.MAN.ENGINE.1.2/01, 125 pages.
Disclaimer
The OpenQuake-engine User Manual is distributed in the hope that it will be useful, but without any
warranty: without even the implied warranty of merchantability or fitness for a particular purpose. While
every precaution has been taken in the preparation of this document, in no event shall the authors of
the Manual and the GEM Foundation be liable to any party for direct, indirect, special, incidental, or
consequential damages, including lost profits, arising out of the use of information contained in this
document or from the use of programs and source code that may accompany it, even if the authors and
GEM Foundation have been advised of the possibility of such damage. The Manual provided hereunder is
on an “as is” basis, and the authors and GEM Foundation have no obligations to provide maintenance,
support, updates, enhancements, or modifications.
License
This Manual is distributed under the Creative Commons License Attribution-NonCommercial-ShareAlike
4.0 International (CC BY-NC-SA 4.0). You can download this Manual and share it with others as long as
you provide proper credit, but you cannot change it in any way or use it commercially.
http://creativecommons.org/licenses/by-nc-sa/4.0/
Contents

Preface 9

I Introduction 11
1 OpenQuake-engine Background 13

II Hazard 17
3.1.4.1 Calculation of a hazard map and hazard curves using classical PSHA 39
3.1.4.2 Seismic hazard disaggregation 42
3.1.4.3 Event based PSHA 43
4.3.1 Example of files containing a stochastic event set and a ground motion field 49

III Risk 69
6.1 Introduction 71

Bibliography 117
Glossary 123
Preface
The goal of this manual is to provide a comprehensive and transparent description of the features of the
OpenQuake-engine (v1.2). This manual is designed to be readable by someone with basic understanding
of Probabilistic Seismic Hazard and Risk Analysis, but no previous knowledge of the OpenQuake-
engine is assumed. The OpenQuake-engine is an effort promoted and actively developed by the Global
Earthquake Model, a public-private partnership initiated by the Global Science Forum of the Organisation
for Economic Co-operation and Development (OECD)1 .
The OpenQuake-engine is the result of an effort carried out jointly by the Information Technology
and Scientific teams working at the GEM Secretariat. It is freely distributed under an Affero GPL license
(more information is available at http://www.gnu.org/licenses/agpl-3.0.html).
Part I
Introduction

OpenQuake-engine introduction
Running the OpenQuake-engine
1. OpenQuake-engine Background
optional arguments:
-h, --help show this help message and exit
General:
--version Display version information
--log-file LOG_FILE, -L LOG_FILE
Location to store log messages; if not specified, log
messages will be printed to the console (to stderr)
--log-level debug,info,progress,warn,error,critical, -l debug,info,progress,warn,error,critical
Defaults to "info"
--no-distribute, --nd
Disable calculation task distribution and run the
computation in a single process. This is intended for
use in debugging and profiling.
--list-inputs INPUT_TYPE, --li INPUT_TYPE
List inputs of a specific input type
--yes, -y Automatically answer "yes" when asked to confirm an
action
--config-file CONFIG_FILE
Custom openquake.cfg file, to override default
configurations
--make-html-report YYYY-MM-DD|today, -r YYYY-MM-DD|today
Build an HTML report of the computation at the given
date
Database:
--upgrade-db Upgrade the openquake database
--version-db Show the current version of the openquake database
--what-if-I-upgrade Show what will happen to the openquake database if you
upgrade
Hazard:
--run-hazard CONFIG_FILE, --rh CONFIG_FILE
Run a hazard job with the specified config file
--list-hazard-calculations, --lhc
List hazard calculation information
--delete-hazard-calculation HAZARD_CALCULATION_ID, --dhc HAZARD_CALCULATION_ID
Delete a hazard calculation and all associated outputs
--delete-uncompleted-calculations, --duc
Delete all the uncompleted calculations
Risk:
--run-risk CONFIG_FILE, --rr CONFIG_FILE
Run a risk job with the specified config file
--hazard-output-id HAZARD_OUTPUT, --ho HAZARD_OUTPUT
Use the desired hazard output as input for the risk
job
--hazard-calculation-id HAZARD_CALCULATION_ID, --hc HAZARD_CALCULATION_ID
Use the desired hazard job as input for the risk job
--list-risk-calculations, --lrc
List risk calculation information
--delete-risk-calculation RISK_CALCULATION_ID, --drc RISK_CALCULATION_ID
Delete a risk calculation and all associated outputs
Export:
--list-outputs CALCULATION_ID, --lo CALCULATION_ID
List outputs for the specified calculation
--list-hazard-outputs CALCULATION_ID, --lho CALCULATION_ID
List outputs for the specified calculation
[deprecated]
--list-risk-outputs CALCULATION_ID, --lro CALCULATION_ID
List outputs for the specified calculation
[deprecated]
--exports EXPORT_FORMATS
Comma-separated string specifying the export formats,
in order of priority
--export-output OUTPUT_ID TARGET_DIR, --eo OUTPUT_ID TARGET_DIR
Export the desired output to the specified directory
--export-hazard-output OUTPUT_ID TARGET_DIR, --eh OUTPUT_ID TARGET_DIR
Export the output to the specified directory
[deprecated]
--export-risk-output OUTPUT_ID TARGET_DIR, --er OUTPUT_ID TARGET_DIR
Export the output to the specified directory
[deprecated]
--export-outputs HAZARD_CALCULATION_ID TARGET_DIR, --eos HAZARD_CALCULATION_ID TARGET_DIR
Export all the calculation outputs to the specified
directory
--export-stats HAZARD_CALCULATION_ID TARGET_DIR OUTPUT_TYPE, --es HAZARD_CALCULATION_ID TARGET_DIR OUTPUT_TYPE
Export the statistical outputs to the specified
directory
--export-hazard-outputs HAZARD_CALCULATION_ID TARGET_DIR, --eho HAZARD_CALCULATION_ID TARGET_DIR
Export all the outputs to the specified directory
[deprecated]
--export-risk-outputs HAZARD_CALCULATION_ID TARGET_DIR, --ero HAZARD_CALCULATION_ID TARGET_DIR
Export all the outputs to the specified directory
[deprecated]
Save/Load:
--save-hazard-calculation HAZARD_CALCULATION_ID DUMP_DIR, --shc HAZARD_CALCULATION_ID DUMP_DIR
Save a hazard calculation to a newly created directory.
--load-hazard-calculation DUMP_DIR
Load a hazard calculation from a saved import. Only
SES outputs currently supported
Import:
--load-gmf GMF_FILE Load gmf from a file. Only single-source gmf are
supported currently. The file can be xml or tab-
separated.
--load-curve CURVE_FILE
Load hazard curves from an XML file.
--list-imported-outputs
List outputs which were imported from a file, not
calculated from a job
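As a hedged illustration of a typical session built from the flags listed above (the configuration file name, calculation id and output id are placeholders):

user@ubuntu:~$ oq-engine --run-hazard job_hazard.ini
user@ubuntu:~$ oq-engine --list-hazard-calculations
user@ubuntu:~$ oq-engine --list-outputs <calc_id>
user@ubuntu:~$ oq-engine --export-output <output_id> /tmp/outputs

The same pattern applies to risk calculations through --run-risk and --list-risk-calculations.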
Part II
Hazard
Source typologies
Source typologies for modelling distributed seismicity
Fault source with floating ruptures
Fault source types without floating ruptures
Supported magnitude-frequency distributions
Calculation workflows
Classical Probabilistic Seismic Hazard Analysis
Event-Based Probabilistic Seismic Hazard Analysis
Scenario based Seismic Hazard Analysis

2. Introduction to the Hazard Module
The hazard component of the OpenQuake-engine builds on top of the OpenQuake hazard library (oq-
hazardlib), a Python-based library containing tools for PSHA calculation. The web repository of this
library is available at the following address: http://github.com/gem/oq-hazardlib.
In this section we briefly illustrate the main properties of the hazard component of the engine. In
particular, we will describe the main typologies of sources supported and the main calculation workflows
available.
The point source is the elemental source type adopted in the OpenQuake-engine to model distributed seismicity. Even in the case of point sources, the OpenQuake-engine always performs calculations considering finite ruptures.
These are the basic assumptions used to generate ruptures with point sources:
• Ruptures have a rectangular shape
• Rupture’s hypocenter is located in the middle of the rupture
• Ruptures are limited at the top and at the bottom by two planes parallel to the topographic surface
and placed at two characteristic depths named upper and lower seismogenic depths, respectively
(see Figure 2.1)
Source data
For the definition of a point source the following parameters are required (Figure 2.1 shows some of the parameters described below, together with an example of the surface of a generated rupture):
• The coordinates of the point (i.e. Longitude and Latitude) [decimal degrees];
• The upper and lower seismogenic depths [km];
• One magnitude-frequency distribution;
• One magnitude-scaling relationship;
• The rupture aspect ratio;
• A distribution of nodal planes i.e. one (or several) instances of the following set of parameters:
– strike [degrees]
– dip [degrees]
– rake [degrees]
• A magnitude-independent depth distribution of hypocenters [km].
Figure 2.2 shows ruptures generated by a point source for a range of magnitudes. Each rupture is centered on the single hypocentral position admitted by this point source. Ruptures are created by conserving the area computed using the specified magnitude-area scaling relationship and the corresponding value of magnitude.
Below we provide an excerpt of a .xml file used to describe the properties of a point source.
Figure 2.2 – Point source with multiple ruptures. Note the change in the aspect ratio once the
rupture width fills the entire seismogenic layer.
5 </gml:Point>
6 <upperSeismoDepth>0.0</upperSeismoDepth>
7 <lowerSeismoDepth>10.0</lowerSeismoDepth>
8 </pointGeometry>
9 <magScaleRel>WC1994</magScaleRel>
10 <ruptAspectRatio>0.5</ruptAspectRatio>
11 <truncGutenbergRichterMFD aValue="-3.5" bValue="1.0" minMag="5.0"
12 maxMag="6.5" />
13 <nodalPlaneDist>
14 <nodalPlane probability="0.3" strike="0.0" dip="90.0" rake="0.0" />
15 <nodalPlane probability="0.7" strike="90.0" dip="45.0" rake="90.0" />
16 </nodalPlaneDist>
17 <hypoDepthDist>
18 <hypoDepth probability="0.5" depth="4.0" />
19 <hypoDepth probability="0.5" depth="8.0" />
20 </hypoDepthDist>
21 </pointSource>
The red part shows the parameters used to describe the geometry of the point source, the blue part is the description of the magnitude-frequency distribution, the green text shows the nodal plane distribution and the text in magenta illustrates the hypocentral depth distribution. The text in black describes the parameters needed to generate the ruptures, such as the magnitude-scaling relationship and the aspect ratio. Note that in this example, ruptures occur on two possible nodal planes and at two hypocentral depths.
Figure 2.3 shows the ruptures generated by the point source specified above.
Figure 2.3 – Ruptures produced by the source created using the information in the example .xml file
described at page 20.
geometries of the area sources and (2) it produces a spatial pattern of seismicity that is usually closer to what is observed in reality. Nevertheless, in many cases smoothing algorithms require an a-priori definition of some setup parameters that expose the calculation to a certain degree of partiality.
Grid sources are modeled in oq-engine simply as a set of point sources; in other words, a grid source
is just a long list of point sources specified as described in the previous section (see page 21).
Source data
• A polygon defining the external border of the area (i.e. a list of Longitude-Latitude tuples) [degrees]. The current version of the OpenQuake-engine does not support the definition of internal borders;
• The upper and lower seismogenic depths [km];
• One magnitude-frequency distribution;
• One magnitude-scaling relationship;
• The rupture aspect ratio;
• A distribution of nodal planes i.e. one (or several) instances of the following set of parameters:
– strike [degrees]
– dip [degrees]
– rake [degrees]
• A magnitude-independent depth distribution of hypocenters [km].
Below we provide an excerpt of a .xml file used to describe the properties of an area source.
6 <gml:posList>
7 -122.5 37.5
8 -121.5 37.5
9 -121.5 38.5
10 -122.5 38.5
11 </gml:posList>
12 </gml:LinearRing>
13 </gml:exterior>
14 </gml:Polygon>
15 <upperSeismoDepth>0.0</upperSeismoDepth>
16 <lowerSeismoDepth>10.0</lowerSeismoDepth>
17 </areaGeometry>
18 <magScaleRel>PeerMSR</magScaleRel>
19 <ruptAspectRatio>1.5</ruptAspectRatio>
20 <incrementalMFD minMag="6.55" binWidth="0.1">
21 <occurRates>0.0010614989 8.8291627E-4 7.3437777E-4 6.108288E-4
22 5.080653E-4</occurRates>
23 </incrementalMFD>
24 <nodalPlaneDist>
25 <nodalPlane probability="0.3" strike="0.0" dip="90.0" rake="0.0"/>
26 <nodalPlane probability="0.7" strike="90.0" dip="45.0" rake="90.0"/>
27 </nodalPlaneDist>
28 <hypoDepthDist>
29 <hypoDepth probability="0.5" depth="4.0" />
30 <hypoDepth probability="0.5" depth="8.0" />
31 </hypoDepthDist>
32 </areaSource>
The red text shows the parameters used to describe the geometry of the area source, the blue part is the description of the magnitude-frequency distribution, the green text displays the nodal plane distribution and the part in magenta illustrates the hypocentral depth distribution. The text in gray describes the parameters required to generate the ruptures, such as the magnitude-scaling relationship and the aspect ratio.
The ruptures generated by the area source described in the example above are controlled by two nodal planes and have hypocenters located at two distinct depths.
Source data
• A fault trace (usually a polyline). It’s a list of longitude-latitude tuples [degrees];
• A magnitude-frequency distribution;
• A magnitude-scaling relationship;
• A representative value of the dip angle (specified following the Aki-Richards convention; see Aki
and Richards (2002)) [degrees];
• Rake angle (specified following the Aki-Richards convention; see Aki and Richards (2002))
[degrees];
• Upper and lower depth values limiting the seismogenic interval [km];
Below we provide an excerpt of a .xml file used to describe the properties of a simple fault source. As with the previous examples, the red text highlights the parameters used to specify the source geometry, in green the parameters describing the rupture mechanism, in blue the magnitude-frequency distribution and in gray the parameters describing rupture properties.
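A minimal sketch of a simple fault source, assuming the NRML 0.4 simpleFaultGeometry layout (all identifiers and numeric values below are placeholders), could look as follows:

<simpleFaultSource id="sf1" name="Example simple fault"
                   tectonicRegion="Active Shallow Crust">
    <simpleFaultGeometry>
        <gml:LineString>
            <gml:posList>
                -121.8 37.7
                -122.0 38.0
            </gml:posList>
        </gml:LineString>
        <dip>45.0</dip>
        <upperSeismoDepth>0.0</upperSeismoDepth>
        <lowerSeismoDepth>15.0</lowerSeismoDepth>
    </simpleFaultGeometry>
    <magScaleRel>WC1994</magScaleRel>
    <ruptAspectRatio>1.5</ruptAspectRatio>
    <truncGutenbergRichterMFD aValue="-3.5" bValue="1.0" minMag="5.0"
                              maxMag="6.5" />
    <rake>90.0</rake>
</simpleFaultSource>

The excerpt that follows describes instead a complex fault source, whose geometry is defined through a fault top edge, optional intermediate edges and a fault bottom edge.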
2 tectonicRegion="Subduction Interface">
3 <complexFaultGeometry>
4 <faultTopEdge>
5 <gml:LineString>
6 <gml:posList>
7 -124.704 40.363 0.5493260E+01
8 -124.977 41.214 0.4988560E+01
9 -125.140 42.096 0.4897340E+01
10 </gml:posList>
11 </gml:LineString>
12 </faultTopEdge>
13 <intermediateEdge>
14 <gml:LineString>
15 <gml:posList>
16 -124.704 40.363 0.5593260E+01
17 -124.977 41.214 0.5088560E+01
18 -125.140 42.096 0.4997340E+01
19 </gml:posList>
20 </gml:LineString>
21 </intermediateEdge>
22 <intermediateEdge>
23 <gml:LineString>
24 <gml:posList>
25 -124.704 40.363 0.5693260E+01
26 -124.977 41.214 0.5188560E+01
27 -125.140 42.096 0.5097340E+01
28 </gml:posList>
29 </gml:LineString>
30 </intermediateEdge>
31 <faultBottomEdge>
32 <gml:LineString>
33 <gml:posList>
34 -123.829 40.347 0.2038490E+02
35 -124.137 41.218 0.1741390E+02
36 -124.252 42.115 0.1752740E+02
37 </gml:posList>
38 </gml:LineString>
39 </faultBottomEdge>
40 </complexFaultGeometry>
41 <magScaleRel>WC1994</magScaleRel>
42 <ruptAspectRatio>1.5</ruptAspectRatio>
43 <truncGutenbergRichterMFD aValue="-3.5" bValue="1.0" minMag="5.0"
44 maxMag="6.5" />
45 <rake>30.0</rake>
46 </complexFaultSource>
As with the previous examples, the text in red highlights the parameters used to specify the source geometry, in green the parameters describing the rupture mechanism, in blue the magnitude-frequency distribution and in gray the parameters describing rupture properties.
Source data
• The characteristic rupture surface is defined through one of the following options:
– A list of rectangular ruptures
– A simple fault source geometry
– A complex fault source geometry
• A magnitude-frequency distribution;
• Rake angle (specified following the Aki-Richards convention; see Aki and Richards (2002))
• Upper and lower depth values limiting the seismogenic interval.
The magnitude-frequency distribution obtained with the above settings is represented in Figure 2.5.
A double truncated Gutenberg-Richter distribution
This distribution is described by means of a minimum magnitude minMag, a maximum magnitude maxMag and the a and b values of the Gutenberg-Richter relationship.
The syntax of the xml used to describe this magnitude-frequency distribution is rather compact, as demonstrated in the following example:
<truncGutenbergRichterMFD aValue="5.0" bValue="1.0" minMag="5.0"
maxMag="6.0"/>
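For reference (a standard result, not reproduced from this manual), the annual rate of earthquakes with magnitude in the bin [m_1, m_2) implied by these parameters can be written as

\lambda(m_1 \le M < m_2) = 10^{a - b\,m_1} - 10^{a - b\,m_2},
\qquad \mathrm{minMag} \le m_1 < m_2 \le \mathrm{maxMag}

so that the a value controls the overall rate of seismicity and the b value the relative proportion of small and large magnitudes.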
Figure 2.6 shows the magnitude-frequency distribution obtained using the parameters of the
considered example.
• Classical Probabilistic Seismic Hazard Analysis (PSHA), allowing calculation of hazard curves and hazard maps following the classical integration procedure (Cornell, 1968; McGuire, 1976) as formulated by Field et al. (2003).
• Event-Based Probabilistic Seismic Hazard Analysis, allowing calculation of ground-motion fields
from stochastic event sets. Traditional results - such as hazard curves - can be obtained by
post-processing the set of computed ground-motion fields.
• Scenario Based Seismic Hazard Analysis (SSHA), allowing the calculation of ground motion fields
from a single earthquake rupture scenario taking into account ground-motion aleatory variability.
Each workflow has a modular structure, so that intermediate results can be exported and analyzed.
Each calculator can be extended independently of the others so that additional calculation options and
methodologies can be easily introduced, without affecting the overall calculation workflow.
3. Using the Hazard Module

This Chapter summarises the structure of the information necessary to define a PSHA input model to be used with the OpenQuake-engine.
branching level: it's the largest container. It's not used in modelling uncertainty, but it's useful in maintaining a logic and an order in the structure of the tree.
Below we provide a simple schema illustrating the skeleton of an xml file containing the description of a logic tree:
<logicTreeBranchingLevel branchingLevelID=ID>
<logicTreeBranchSet branchSetID=ID
uncertaintyType=TYPE>
<logicTreeBranch>
<uncertaintyModel>VALUE</uncertaintyModel>
<uncertaintyWeight>WEIGHT</uncertaintyWeight>
</logicTreeBranch>
</logicTreeBranchSet>
</logicTreeBranchingLevel>
As it appears from this example, the structure of a logic tree is a set of nested elements.
A schematic representation of the elemental components of a logic tree structure is provided in Figure
3.2. A branching level identifies the position where branching occurs while a branch set identifies a
collection of branches (i.e. individual branches) whose weights sum to 1.
<logicTree logicTreeID="ID">
...
</logicTree>
Figure 3.2 – Generic Logic Tree structure as described in terms of branching levels, branch sets,
and individual branches.
The first logicTreeBranchingLevel element in the sequence represents the first level in the tree, the second element the second level in the tree, and so on.
<logicTree logicTreeID="ID">
<logicTreeBranchingLevel branchingLevelID="ID_1">
...
</logicTreeBranchingLevel>
<logicTreeBranchingLevel branchingLevelID="ID_2">
...
</logicTreeBranchingLevel>
....
<logicTreeBranchingLevel branchingLevelID="ID_N">
...
</logicTreeBranchingLevel>
</logicTree>
There are no restrictions on the number of tree levels that can be defined.
A logicTreeBranchingLevel is defined as a sequence of logicTreeBranchSet elements where
each logicTreeBranchSet defines a particular epistemic uncertainty inside a branching level.
A branch set has two required attributes: branchSetID and uncertaintyType. The latter defines
the type of epistemic uncertainty this branch set is describing.
<logicTree logicTreeID="ID">
...
<logicTreeBranchingLevel branchingLevelID="ID_#">
<logicTreeBranchSet branchSetID="ID_1"
uncertaintyType="UNCERTAINTY_TYPE">
...
</logicTreeBranchSet>
<logicTreeBranchSet branchSetID="ID_2"
uncertaintyType="UNCERTAINTY_TYPE">
...
</logicTreeBranchSet>
...
<logicTreeBranchSet branchSetID="ID_N"
uncertaintyType="UNCERTAINTY_TYPE">
...
</logicTreeBranchSet>
</logicTreeBranchingLevel>
...
</logicTree>
<logicTree logicTreeID="ID">
...
<logicTreeBranchingLevel branchingLevelID="ID_#">
...
<logicTreeBranchSet branchSetID="ID_#"
uncertaintyType="UNCERTAINTY_TYPE">
<logicTreeBranch branchID="ID_1">
<uncertaintyModel>
UNCERTAINTY_MODEL
</uncertaintyModel>
<uncertaintyWeight>
UNCERTAINTY_WEIGHT
</uncertaintyWeight>
</logicTreeBranch>
...
<logicTreeBranch branchID="ID_N">
<uncertaintyModel>
UNCERTAINTY_MODEL
</uncertaintyModel>
<uncertaintyWeight>
UNCERTAINTY_WEIGHT
</uncertaintyWeight>
</logicTreeBranch>
</logicTreeBranchSet>
...
</logicTreeBranchingLevel>
...
</logicTree>
<uncertaintyModel>GMPE_NAME</uncertaintyModel>
<uncertaintyModel>SOURCE_MODEL_FILE_PATH</uncertaintyModel>
<uncertaintyModel>MAX_MAGNITUDE_INCREMENT</uncertaintyModel>
<uncertaintyModel>B_VALUE_INCREMENT</uncertaintyModel>
<uncertaintyModel>
A_VALUE B_VALUE
</uncertaintyModel>
<uncertaintyModel>
MAX_MAGNITUDE
</uncertaintyModel>
There are no restrictions on the number of logicTreeBranch elements that can be defined in a
logicTreeBranchSet, as long as the uncertainty weights sum to 1.0.
The logicTreeBranchSet element offers also a number of optional attributes allowing for complex
tree definitions:
• applyToBranches: specifies to which branches, in the previous branching level, the branch set is linked. The default is the keyword ALL, which means that a branch set is by default linked to all branches in the previous branching level. By specifying one or more branches to which the branch set links, non-symmetric logic trees can be defined.
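For instance (the branch IDs below are placeholders):

applyToBranches="branchID_1 branchID_2"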
• applyToSources: specifies to which source in a source model the uncertainty applies. Sources are specified in terms of their IDs:
applyToSources="srcID1 srcID2 .... srcIDN"
• applyToSourceType: specifies to which source type the uncertainty applies. Only one source typology can be defined (area, point, simpleFault, complexFault), e.g.:
applyToSourceType="area"
• applyToTectonicRegionType: specifies to which tectonic region type the uncertainty applies. Only one tectonic region type can be defined (Active Shallow Crust, Stable Shallow Crust, Subduction Interface, Subduction IntraSlab, Volcanic), e.g.:
applyToTectonicRegionType="Active Shallow Crust"
14 </logicTreeBranchingLevel>
15 </logicTree>
16 </nrml>
The optional branching levels will contain rules that modify parameters of the sources in the initial
seismic source model.
For example, if the epistemic uncertainties to be considered are source geometry and maximum
magnitude, the modeller can create a logic tree structure with three initial seismic source models (each
one exploring a different definition of the geometry of sources) and one branching level accounting for
the epistemic uncertainty on the maximum magnitude.
Below we provide an example of such a logic tree structure.
6 <logicTreeBranchingLevel branchingLevelID="bl1">
7 <logicTreeBranchSet uncertaintyType="sourceModel"
8 branchSetID="bs1">
9 <logicTreeBranch branchID="b1">
10 <uncertaintyModel>seismic_source_model_A.xml
11 </uncertaintyModel>
12 <uncertaintyWeight>0.2</uncertaintyWeight>
13 </logicTreeBranch>
14 <logicTreeBranch branchID="b2">
15 <uncertaintyModel>seismic_source_model_B.xml
16 </uncertaintyModel>
17 <uncertaintyWeight>0.3</uncertaintyWeight>
18 </logicTreeBranch>
19 <logicTreeBranch branchID="b3">
20 <uncertaintyModel>seismic_source_model_C.xml
21 </uncertaintyModel>
22 <uncertaintyWeight>0.5</uncertaintyWeight>
23 </logicTreeBranch>
24 </logicTreeBranchSet>
25 </logicTreeBranchingLevel>
26
27 <logicTreeBranchingLevel branchingLevelID="bl2">
28 <logicTreeBranchSet branchSetID="bs21"
29 uncertaintyType="maxMagGRRelative">
30 <logicTreeBranch branchID="b211">
31 <uncertaintyModel>+0.0</uncertaintyModel>
32 <uncertaintyWeight>0.6</uncertaintyWeight>
33 </logicTreeBranch>
34 <logicTreeBranch branchID="b212">
35 <uncertaintyModel>+0.5</uncertaintyModel>
36 <uncertaintyWeight>0.4</uncertaintyWeight>
37 </logicTreeBranch>
38 </logicTreeBranchSet>
39 </logicTreeBranchingLevel>
40
41 </logicTree>
42 </nrml>
Note that the uncertainty on the maximum magnitude is specified in terms of relative increments with respect to the initial maximum magnitude defined for each source in the initial seismic source models. For instance, under branch b212 a source with an initial maximum magnitude of 7.0 would be assigned a maximum magnitude of 7.5.
<sourceModel gml:id="ID">
...
<areaSource gml:id="SOURCE_ID">
<gml:name>SOURCE_NAME</gml:name>
<tectonicRegion>TECT_REGION_TYPE</tectonicRegion>
...
</areaSource>
...
<pointSource gml:id="SOURCE_ID">
<gml:name>SOURCE_NAME</gml:name>
<tectonicRegion>TECT_REGION_TYPE</tectonicRegion>
...
</pointSource>
...
<simpleFaultSource gml:id="SOURCE_ID">
<gml:name>SOURCE_NAME</gml:name>
<tectonicRegion>TECT_REGION_TYPE</tectonicRegion>
...
</simpleFaultSource>
...
<complexFaultSource gml:id="SOURCE_ID">
<gml:name>SOURCE_NAME</gml:name>
<tectonicRegion>TECT_REGION_TYPE</tectonicRegion>
...
</complexFaultSource>
...
</sourceModel>
The example below shows a simple ground-motion logic tree. This logic tree assumes that all the sources in the PSHA input model belong to “Active Shallow Crust” and uses the Chiou and Youngs (2008) GMPE for the calculation.
10 <logicTreeBranch branchID="b1">
11 <uncertaintyModel>
12 ChiouYoungs2008
13 </uncertaintyModel>
14 <uncertaintyWeight>1.0</uncertaintyWeight>
15 </logicTreeBranch>
16
17 </logicTreeBranchSet>
18 </logicTreeBranchingLevel>
19 </logicTree>
20 </nrml>
3.1.4.1 Calculation of a hazard map and hazard curves using the classical PSHA methodology
In the following we describe the overall structure and the most typical parameters of a configuration file
to be used for the computation of a seismic hazard map using a classical PSHA methodology.
• Calculation type and model info
1 [general]
2 description = A demo OpenQuake-engine .ini file for classical PSHA
3 calculation_mode = classical
4 random_seed = 1024
The first option consists of defining a polygon (usually a rectangle) and a distance (in km) used to discretize the polygon area. The polygon is defined by a list of longitude-latitude tuples. An example is provided below.
5 [geometry]
6 region = 10.0 43.0, 12.0 43.0, 12.0 46.0, 10.0 46.0
7 # km
8 region_grid_spacing = 10.0
The second option allows the definition of a number of sites where the hazard will be computed.
An example is provided below.
5 [geometry]
6 sites = 10.0 43.0, 12.0 43.0, 12.0 46.0, 10.0 46.0
If the list of sites is too long, the user can specify the name of a .csv file, as shown below:
5 [geometry]
6 sites_csv = <name_of_the_csv_file>
The format of the csv file containing the list of sites is a sequence of points (one per row), each specified as a longitude, latitude tuple. An example is provided below:
1 179.0,90.0
2 178.0,89.0
3 177.0,88.0
If the seismic source logic tree and the ground motion logic tree do not contain epistemic uncer-
tainties the engine will create a single PSHA input.
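A minimal sketch of the corresponding logic tree section of the configuration file, consistent with the demo files shown in Chapter 5, is:

[logic_tree]
# 0 enables full logic tree path enumeration (see Chapter 5)
number_of_logic_tree_samples = 0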
• Parameters controlling the construction of the earthquake rupture forecast
11 [erf]
12 # km
13 rupture_mesh_spacing = 5
14 width_of_mfd_bin = 0.1
15 # km
16 area_source_discretization = 10
This section of the configuration file is used to specify the level of discretization of the mesh representing faults, the grid used to delineate the area sources and the bin width of the magnitude-frequency distribution. Note that the smaller the mesh spacing (or the bin width), the greater both (1) the precision of the calculation and (2) the computational demand.
17 [site_params]
18 reference_vs30_type = measured
19 reference_vs30_value = 760.0
20 reference_depth_to_2pt5km_per_sec = 5.0
21 reference_depth_to_1pt0km_per_sec = 100.0
In this section the user specifies local soil conditions. The simplest solution is to define uniform
site conditions (i.e. all the sites have the same characteristics). Alternatively it’s possible to define
spatially variable soil properties in a separate file; the engine will then assign to each investigation
location the values of the closest point used to specify site conditions.
17 [site_params]
18 site_model_file = ../_site_model/site_model.xml
The file containing the site model has the following structure:
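A minimal sketch of such a file, assuming the NRML 0.4 siteModel element with one site record per location (all coordinate and soil values below are placeholders):

<?xml version="1.0" encoding="utf-8"?>
<nrml xmlns:gml="http://www.opengis.net/gml"
      xmlns="http://openquake.org/xmlns/nrml/0.4">
    <siteModel>
        <site lon="10.0" lat="43.0" vs30="760.0" vs30Type="measured"
              z1pt0="100.0" z2pt5="5.0"/>
        <site lon="10.1" lat="43.0" vs30="560.0" vs30Type="inferred"
              z1pt0="120.0" z2pt5="5.5"/>
    </siteModel>
</nrml>

Each site record lists the same quantities that appear as reference values in the uniform case above (vs30, vs30 type, and the depths to the 1.0 km/s and 2.5 km/s velocity horizons).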
• Calculation configuration
22 [calculation]
23 source_model_logic_tree_file = source_model_logic_tree.xml
24 gsim_logic_tree_file = gmpe_logic_tree.xml
25 # years
26 investigation_time = 50.0
27 intensity_measure_types_and_levels = {"PGA": [0.005, ..., 2.13]}
28 truncation_level = 3
29 # km
30 maximum_distance = 200.0
This section of the oq-engine configuration file specifies the parameters that are relevant for the calculation of hazard. These include the names of the two files containing the Seismic Source System and the Ground Motion System, the duration of the time window used to compute the hazard, the ground motion intensity measure types and levels for which the probability of exceedance will be computed, the level of truncation of the Gaussian distribution of the logarithm of ground motion used in the calculation of hazard, and the maximum integration distance (i.e. the distance within which sources will contribute to the computation of the hazard).
• Output
32 [output]
33 export_dir = out/
34 # given the specified 'intensity_measure_types_and_levels'
35 quantile_hazard_curves =
36 poes_hazard_maps = 0.1
The final section of the configuration file is the one that contains the parameters controlling the
typology of output to be produced.
In this section it will be necessary to specify the geographic coordinates of the site (or the sites) where the disaggregation will be performed.
• Disaggregation parameters
[disaggregation]
poes_disagg = 0.02, 0.1
mag_bin_width = 1.0
distance_bin_width = 25.0
# decimal degrees
coordinate_bin_width = 1.5
num_epsilon_bins = 3
With the disaggregation settings shown above we’ll disaggregate the intensity measure levels with
10% and 2% probability of exceedance using the investigation_time and the intensity measure
types defined in the “Calculation configuration” section of the OpenQuake configuration file (see
page 41).
The parameters mag_bin_width, distance_bin_width, coordinate_bin_width control the
level of discretization of the disaggregation matrix computed. num_epsilon_bins indicates the
number of bins used to represent the contributions provided by different values of epsilon.
If the user is interested in a specific type of disaggregation, we suggest using a very coarse gridding for the parameters that are not of interest. For example, if the user is interested in a magnitude-distance disaggregation, we suggest using a very large value for coordinate_bin_width and setting num_epsilon_bins equal to 1.
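As a sketch of that suggestion (the values below are illustrative placeholders, not taken from this manual), a magnitude-distance disaggregation could therefore use:

[disaggregation]
poes_disagg = 0.1
mag_bin_width = 0.5
distance_bin_width = 10.0
# decimal degrees; deliberately very coarse so the spatial dimension collapses
coordinate_bin_width = 10.0
num_epsilon_bins = 1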
2. Event based
This section is used to specify the number of stochastic event sets to be generated for each logic tree realisation (each stochastic event set represents a potential realisation of seismicity during the investigation_time specified in the “Calculation configuration” part). Additionally, in this section the user can specify the spatial correlation model to be used for the generation of ground motion fields.
[event_based_params]
ses_per_logic_tree_path = 5
ground_motion_correlation_model = JB2009
ground_motion_correlation_params = "vs30_clustering": True
The acceptable flags for the parameter vs30_clustering are False and True, with a capital F
and T respectively. 0 and 1 are also acceptable flags.
3. Output
This part replaces the Output section of the configuration file example described in section 3.1.4.1 at page 39.
[output]
export_dir = /tmp/xxx
ground_motion_fields = true
# post-process ground motion fields into hazard curves,
# given the specified 'intensity_measure_types_and_levels'
hazard_curves_from_gmfs = true
mean_hazard_curves = true
quantile_hazard_curves = 0.15, 0.5, 0.85
poes_hazard_maps = 0.1, 0.2
Running OpenQuake-engine for hazard calculations
Getting results
Description of outputs
Output from Classical PSHA
Output from Event Based PSHA
Example of files containing a stochastic event set and a ground motion field
Output from Disaggregation

4. Hazard Calculation and Results Provided
In this Chapter we provide a description of the main commands available for running hazard calculations with the oq-engine and the file formats used to represent the results of the analyses.
A general introduction to the use of OpenQuake-engine is provided in Section 1.2 at page 13. The
reader is invited to consult this part before diving into this section.
The amount of information displayed during the execution of the analysis can be controlled through the --log-level flag, as shown in the example below:
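A hedged reconstruction of such a command (the configuration file name job.ini is a placeholder):

user@ubuntu:~$ oq-engine --run-hazard job.ini --log-level debug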
In this example we ask the engine to provide an extensive amount of information (usually not justified for
a standard analysis). Alternative options are: debug, info, progress, warn, error, critical.
The second option allows the user to export the computed results, or just a subset of them, whenever they want. In order to obtain the list of results of the hazard calculations stored in the oq-engine database, the user can utilize the following command:
The execution of this command will produce a list similar to the one provided below (the numbers in red are the calculation IDs).
user@ubuntu:~$ oq-engine --lhc
calc_id | num_jobs | latest_job_status | last_update | description
1 | 1 | failed | 2013-03-01 09:49:34 | Classical PSHA
2 | 1 | successful | 2013-03-01 09:49:56 | Classical PSHA
3 | 1 | failed | 2013-03-01 10:24:04 | Classical PSHA
4 | 1 | failed | 2013-03-01 10:28:16 | Classical PSHA
5 | 1 | failed | 2013-03-01 10:30:04 | Classical PSHA
6 | 1 | successful | 2013-03-01 10:31:53 | Classical PSHA
7 | 1 | failed | 2013-03-09 08:15:14 | Classical PSHA
8 | 1 | successful | 2013-03-09 08:18:04 | Classical PSHA
Subsequently the user can get the list of results stored for a specific hazard analysis, as in the example below (note that the number in blue emphasizes the result ID):
user@ubuntu:~$ oq-engine --lho <calc_id>
id | output_type | name
3 | hazard_curve | hc-rlz-6
In this case the oq-engine computed a group of hazard curves with result ID equal to 3.
On the contrary, if the parameter number_of_logic_tree_samples in the configuration file is different from zero, then N hazard curve files are generated. The example below shows this case:
If we export from the database the hazard curves contained in one of the items above, using the export command shown below, we obtain a nrml formatted file as represented in the example in the inset that follows.
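A hedged form of that export command, using the --export-output flag listed earlier (the output id and target directory are placeholders):

user@ubuntu:~$ oq-engine --export-output 3 /tmp/hazard_outputs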
Notwithstanding the intuitiveness of this file, let’s have a brief overview of the information included.
The overall content of this file is a list of hazard curves, one for each investigated site, computed
using a PSHA input model representing one possible realisation obtained using the complete logic tree
structure.
The attributes of the hazardCurves element (see text in red) specify the path of the logic tree used
to create the seismic source model (sourceModelTreePath) and the ground motion model (gsimTreePath) plus the intensity measure type and the investigation time used to compute the probability of
exceedance.
The IMLs element (in green in the example) contains the values of shaking used by the engine
to compute the probability of exceedance in the investigation time. For each site this file contains a
hazardCurve element which has the coordinates (longitude and latitude in decimal degrees) of the site
and the values of the probability of exceedance for all the intensity measure levels specified in the IMLs
element.
If in the configuration file the calculation of mean hazard curves and of hazard curves corresponding to one or several percentiles has been specified, the list of outputs that we should expect from the OpenQuake-engine corresponds to:
In this example the oq-engine produced hazard curves and hazard maps for six logic tree realisations plus
median hazard curves and the median hazard map (both highlighted in red).
The following inset shows a sample of the nrml file used to describe a hazard map.
The list in the inset above contains two sets of stochastic events (in red) and two sets of ground motion fields (in blue).
The whole group of stochastic event sets and ground motion fields can be exported immediately using the results with ids 35 and 25, respectively.
4.3.1 Example of files containing a stochastic event set and a ground motion field
This is an example showing a nrml file containing a collection of stochastic event sets (2 ruptures):
<planarSurface>
<topLeft lon="11.45858812" lat="42.7429056814"
depth="11.3208667302"/>
<topRight lon="11.4822820715" lat="42.7256333907"
depth="11.3208667302"/>
<bottomLeft lon="11.45858812" lat="42.7429056814"
depth="12.6791332698"/>
<bottomRight lon="11.4822820715" lat="42.7256333907"
depth="12.6791332698"/>
</planarSurface>
</rupture>
</stochasticEventSet>
</stochasticEventSetCollection>
</nrml>
The text in red shows the part which describes the id of the generated stochastic event set and the investigation time covered. The text in green emphasises the portion of the text used to describe a rupture. The information provided fully describes the geometry of the rupture as well as its rupture properties (e.g. rake, magnitude).
This is an example of a nrml file containing one ground motion field:
The example below shows the list of disaggregation results obtained for four logic tree realisations. For each realisation, disaggregation has been completed for two intensity measure levels corresponding to different probabilities of exceedance in the specified investigation time.
In the following inset we show an example of the nrml file used to represent the different disaggregation matrices (highlighted in red) produced by the OpenQuake-engine.
5. Demonstrative Examples
<nrml xmlns:gml="http://www.opengis.net/gml"
xmlns="http://openquake.org/xmlns/nrml/0.4">
<logicTree logicTreeID='lt1'>
<logicTreeBranchingLevel branchingLevelID="bl1">
<logicTreeBranchSet
uncertaintyType="gmpeModel"
applyToTectonicRegionType="Active Shallow Crust"
branchSetID="bs1">
<logicTreeBranch branchID="b1">
<uncertaintyModel>
BooreAtkinson2008
</uncertaintyModel>
<uncertaintyWeight>1.0</uncertaintyWeight>
</logicTreeBranch>
</logicTreeBranchSet>
</logicTreeBranchingLevel>
</logicTree>
</nrml>
[general]
description = ...
calculation_mode = classical
random_seed = 23
[geometry]
region = ...
region_grid_spacing = 5.0
[logic_tree]
number_of_logic_tree_samples = 0
[erf]
rupture_mesh_spacing = 2
width_of_mfd_bin = 0.1
area_source_discretization = 5.0
[site_params]
reference_vs30_type = measured
reference_vs30_value = 600.0
reference_depth_to_2pt5km_per_sec = 5.0
reference_depth_to_1pt0km_per_sec = 100.0
[calculation]
source_model_logic_tree_file = source_model_logic_tree.xml
gsim_logic_tree_file = gmpe_logic_tree.xml
investigation_time = 50.0
intensity_measure_types_and_levels ={
"PGV": [2, 4, 6 ,8, 10, ...],
"PGA": [0.005, 0.007, ...],
"SA(0.025)": [...],
"SA(0.05)": [...],
"SA(0.1)": [...],
"SA(0.2)": [...],
"SA(0.5)": [...],
"SA(1.0)": [...],
"SA(2.0)": [...]}
truncation_level = 3
maximum_distance = 200.0
[output]
export_dir = ...
mean_hazard_curves = false
quantile_hazard_curves =
hazard_maps = true
uniform_hazard_spectra = true
Figure 5.1 – Hazard maps (for PGA, 10% in 50 years) as obtained from the different oq-engine source typologies. (a) Point Source. (b) Area source. The solid black line represents the area boundary. (c) Simple Fault Source. The dashed line represents the fault trace, while the solid line the fault surface projection. (d) Complex Fault Source. The solid line represents the fault surface projection.
Hazard maps for the different demos are shown in figures 5.1 and 5.2.
Figure 5.2 – Hazard maps (for PGA, 10% in 50 years) as obtained from characteristic fault sources with simple fault geometry (e), complex fault geometry (f), and a collection of planar surfaces (g).
<logicTreeBranchingLevel branchingLevelID="bl1">
<logicTreeBranchSet uncertaintyType="sourceModel"
branchSetID="bs1">
<logicTreeBranch branchID="b1">
<uncertaintyModel>
source_model_1.xml
</uncertaintyModel>
<uncertaintyWeight>0.5</uncertaintyWeight>
</logicTreeBranch>
<logicTreeBranch branchID="b2">
<uncertaintyModel>
source_model_2.xml
</uncertaintyModel>
<uncertaintyWeight>0.5</uncertaintyWeight>
</logicTreeBranch>
</logicTreeBranchSet>
</logicTreeBranchingLevel>
</logicTree>
</nrml>
The two source models are defined in two separate files, source_model_1.xml and source_model_2.xml, each associated with a corresponding weight (0.5 for both).
The GSIM logic tree file contains the following structure:
<nrml xmlns:gml="http://www.opengis.net/gml"
xmlns="http://openquake.org/xmlns/nrml/0.4">
<logicTree logicTreeID='lt1'>
<logicTreeBranchingLevel branchingLevelID="bl1">
<logicTreeBranchSet uncertaintyType="gmpeModel"
applyToTectonicRegionType="Active Shallow Crust"
branchSetID="bs1">
<logicTreeBranch branchID="b11">
<uncertaintyModel>
BooreAtkinson2008
</uncertaintyModel>
<uncertaintyWeight>0.5</uncertaintyWeight>
</logicTreeBranch>
<logicTreeBranch branchID="b12">
<uncertaintyModel>
ChiouYoungs2008
</uncertaintyModel>
<uncertaintyWeight>0.5</uncertaintyWeight>
</logicTreeBranch>
</logicTreeBranchSet>
</logicTreeBranchingLevel>
<logicTreeBranchingLevel branchingLevelID="bl2">
<logicTreeBranchSet uncertaintyType="gmpeModel"
applyToTectonicRegionType="Stable Continental Crust"
branchSetID="bs2">
<logicTreeBranch branchID="b21">
<uncertaintyModel>
ToroEtAl2002</uncertaintyModel>
<uncertaintyWeight>0.5</uncertaintyWeight>
</logicTreeBranch>
<logicTreeBranch branchID="b22">
<uncertaintyModel>
Campbell2003</uncertaintyModel>
<uncertaintyWeight>0.5</uncertaintyWeight>
</logicTreeBranch>
</logicTreeBranchSet>
</logicTreeBranchingLevel>
</logicTree>
</nrml>
The source model contains sources belonging to Active Shallow Crust and Stable Continental Crust,
therefore the GSIM logic tree defines two branching levels, one for each considered tectonic region type.
Moreover for each tectonic region a branch set with two GMPEs is defined: Boore and Atkinson 2008 and Chiou and Youngs 2008 for Active Shallow Crust, and Toro et al. 2002 and Campbell 2003 for Stable
Continental Crust. By processing the above logic tree files using the logic tree path enumeration mode
(enabled by setting in the configuration file number_of_logic_tree_samples = 0) hazard results are
computed for 8 logic tree paths (2 source models x 2 GMPEs for Active x 2 GMPEs for Stable).
LogicTreeCase2ClassicalPSHA defines a single source model consisting of only two sources (area
and simple fault) belonging to different tectonic region types (Active Shallow Crust and Stable Continental
Region) and both characterized by a truncated Gutenberg-Richter distribution. The logic tree defines
uncertainties for G-R a and b values (three possible pairs for each source), maximum magnitude (three
values for each source) and uncertainties on the GMPEs for each tectonic region type (two GMPE per
region type).
To accommodate such a structure the source model logic tree is defined in the following way:
<logicTreeBranchingLevel branchingLevelID="bl1">
<logicTreeBranchSet uncertaintyType="sourceModel"
branchSetID="bs1">
<logicTreeBranch branchID="b11">
<uncertaintyModel>
source_model.xml
</uncertaintyModel>
<uncertaintyWeight>1.0</uncertaintyWeight>
</logicTreeBranch>
</logicTreeBranchSet>
</logicTreeBranchingLevel>
<logicTreeBranchingLevel branchingLevelID="bl2">
<logicTreeBranchSet uncertaintyType="abGRAbsolute"
applyToSources="1"
branchSetID="bs21">
<logicTreeBranch branchID="b21">
<uncertaintyModel>4.6 1.1</uncertaintyModel>
<uncertaintyWeight>0.333</uncertaintyWeight>
</logicTreeBranch>
<logicTreeBranch branchID="b22">
<uncertaintyModel>4.5 1.0</uncertaintyModel>
<uncertaintyWeight>0.333</uncertaintyWeight>
</logicTreeBranch>
<logicTreeBranch branchID="b23">
<uncertaintyModel>4.4 0.9</uncertaintyModel>
<uncertaintyWeight>0.334</uncertaintyWeight>
</logicTreeBranch>
</logicTreeBranchSet>
</logicTreeBranchingLevel>
<logicTreeBranchingLevel branchingLevelID="bl3">
<logicTreeBranchSet uncertaintyType="abGRAbsolute"
applyToSources="2"
branchSetID="bs31">
<logicTreeBranch branchID="b31">
<uncertaintyModel>3.3 1.0</uncertaintyModel>
<uncertaintyWeight>0.333</uncertaintyWeight>
</logicTreeBranch>
<logicTreeBranch branchID="b32">
<uncertaintyModel>3.2 0.9</uncertaintyModel>
<uncertaintyWeight>0.333</uncertaintyWeight>
</logicTreeBranch>
<logicTreeBranch branchID="b33">
<uncertaintyModel>3.1 0.8</uncertaintyModel>
<uncertaintyWeight>0.334</uncertaintyWeight>
</logicTreeBranch>
</logicTreeBranchSet>
</logicTreeBranchingLevel>
<logicTreeBranchingLevel branchingLevelID="bl4">
<logicTreeBranchSet uncertaintyType="maxMagGRAbsolute"
applyToSources="1"
branchSetID="bs41">
<logicTreeBranch branchID="b41">
<uncertaintyModel>7.0</uncertaintyModel>
<uncertaintyWeight>0.333</uncertaintyWeight>
</logicTreeBranch>
<logicTreeBranch branchID="b42">
<uncertaintyModel>7.3</uncertaintyModel>
<uncertaintyWeight>0.333</uncertaintyWeight>
</logicTreeBranch>
<logicTreeBranch branchID="b43">
<uncertaintyModel>7.6</uncertaintyModel>
<uncertaintyWeight>0.334</uncertaintyWeight>
</logicTreeBranch>
</logicTreeBranchSet>
</logicTreeBranchingLevel>
<logicTreeBranchingLevel branchingLevelID="bl5">
<logicTreeBranchSet uncertaintyType="maxMagGRAbsolute"
applyToSources="2"
branchSetID="bs51">
<logicTreeBranch branchID="b51">
<uncertaintyModel>7.5</uncertaintyModel>
<uncertaintyWeight>0.333</uncertaintyWeight>
</logicTreeBranch>
<logicTreeBranch branchID="b52">
<uncertaintyModel>7.8</uncertaintyModel>
<uncertaintyWeight>0.333</uncertaintyWeight>
</logicTreeBranch>
<logicTreeBranch branchID="b53">
<uncertaintyModel>8.0</uncertaintyModel>
<uncertaintyWeight>0.334</uncertaintyWeight>
</logicTreeBranch>
</logicTreeBranchSet>
</logicTreeBranchingLevel>
</logicTree>
</nrml>
The first branching level defines the source model. For each source, two branching levels are created, one defining uncertainties on the G-R a and b values (defined by setting uncertaintyType="abGRAbsolute") and one defining uncertainties on the G-R maximum magnitude (uncertaintyType="maxMagGRAbsolute"). It is important to notice
that each branch set is applied to a specific source by defining the attribute applyToSources, followed
by the source ID. The GSIM logic tree file is the same as used for LogicTreeCase1ClassicalPSHA. By
setting in the configuration file number_of_logic_tree_samples = 0, hazard results are obtained for
324 paths (1 source model x 3 (a, b) pairs for source 1 x 3 (a, b) pairs for source 2 x 3 max magnitude
values for source 1 x 3 max magnitude values for source 2 x 2 GMPEs for Active Shallow Crust x 2
GMPEs for Stable Continental Crust), see figure 5.3.
<logicTreeBranchingLevel branchingLevelID="bl1">
<logicTreeBranchSet uncertaintyType="sourceModel"
branchSetID="bs1">
<logicTreeBranch branchID="b11">
<uncertaintyModel>
Figure 5.3 – Hazard curves as obtained from the LogicTreeCase2 demo. Solid gray lines represent individual hazard curves from the different logic tree paths (a total of 324 curves). The red dashed line represents the mean hazard curve, while the red dotted lines depict the quantile levels (0.15, 0.5, 0.95).
source_model.xml
</uncertaintyModel>
<uncertaintyWeight>1.0</uncertaintyWeight>
</logicTreeBranch>
</logicTreeBranchSet>
</logicTreeBranchingLevel>
<logicTreeBranchingLevel branchingLevelID="bl2">
<logicTreeBranchSet uncertaintyType="bGRRelative"
branchSetID="bs21">
<logicTreeBranch branchID="b21">
<uncertaintyModel>+0.1</uncertaintyModel>
<uncertaintyWeight>0.333</uncertaintyWeight>
</logicTreeBranch>
<logicTreeBranch branchID="b22">
<uncertaintyModel>0.0</uncertaintyModel>
<uncertaintyWeight>0.333</uncertaintyWeight>
</logicTreeBranch>
<logicTreeBranch branchID="b23">
<uncertaintyModel>-0.1</uncertaintyModel>
<uncertaintyWeight>0.334</uncertaintyWeight>
</logicTreeBranch>
</logicTreeBranchSet>
</logicTreeBranchingLevel>
<logicTreeBranchingLevel branchingLevelID="bl3">
<logicTreeBranchSet uncertaintyType="maxMagGRRelative"
branchSetID="bs31">
<logicTreeBranch branchID="b31">
<uncertaintyModel>0.0</uncertaintyModel>
<uncertaintyWeight>0.333</uncertaintyWeight>
</logicTreeBranch>
<logicTreeBranch branchID="b32">
<uncertaintyModel>+0.5</uncertaintyModel>
<uncertaintyWeight>0.333</uncertaintyWeight>
</logicTreeBranch>
<logicTreeBranch branchID="b33">
<uncertaintyModel>+1.0</uncertaintyModel>
<uncertaintyWeight>0.334</uncertaintyWeight>
</logicTreeBranch>
</logicTreeBranchSet>
</logicTreeBranchingLevel>
</logicTree>
</nrml>
After the first branching level defining the source model, two additional branching levels are defined, one defining relative uncertainties on the b value (bGRRelative, applied consistently to all sources in the source model) and the second defining uncertainties on the maximum magnitude (maxMagGRRelative). Similarly to the other cases, two GMPEs are considered for each tectonic region type and therefore the total number of logic tree paths is 36 (1 source model x 3 b value increments x 3 maximum magnitude increments x 2 GMPEs for Active x 2 GMPEs for Stable).
[general]
[geometry]
[logic_tree]
number_of_logic_tree_samples = 0
[erf]
rupture_mesh_spacing = 2
width_of_mfd_bin = 0.1
area_source_discretization = 5.0
[site_params]
reference_vs30_type = measured
reference_vs30_value = 600.0
reference_depth_to_2pt5km_per_sec = 5.0
reference_depth_to_1pt0km_per_sec = 100.0
[calculation]
source_model_logic_tree_file = source_model_logic_tree.xml
gsim_logic_tree_file = gmpe_logic_tree.xml
investigation_time = 50.0
intensity_measure_types_and_levels = "PGA": [...]
truncation_level = 3
maximum_distance = 200.0
[event_based_params]
ses_per_logic_tree_path = 100
ground_motion_correlation_model =
ground_motion_correlation_params =
[output]
export_dir = ...
ground_motion_fields = true
hazard_curves_from_gmfs = true
mean_hazard_curves = false
quantile_hazard_curves =
hazard_maps = true
poes = 0.1
The source model consists of one source (area). 100 stochastic event sets are generated (ses_per_logic_tree_path = 100); an example can be seen in figure 5.4. Ground motion fields are computed (ground_motion_fields = true, figure 5.5) and hazard curves are also extracted from the ground motion fields (hazard_curves_from_gmfs = true). The corresponding hazard maps for 0.1 probability are also calculated (hazard_maps = true).
Figure 5.4 – A stochastic event set generated with the event based PSHA demo. The area source
defines a nodal plane distribution which distributes events among vertical and dipping (50 degrees)
faults with equal weights. Vertical ruptures are then distributed equally in the range 0-180 degrees
while the dipping ones in the range 0-360, both with a step of 45 degrees.
Figure 5.5 – Ground motion fields (PGA) with no spatial correlation (a) and with spatial correlation (b).
5.1.1.4 Disaggregation
An example of disaggregation calculation is given considering a source model consisting of two sources
(area and simple fault) belonging to two different tectonic region types.
The calculation is defined with the following configuration file:
[general]
description = ...
calculation_mode = disaggregation
random_seed = 23
[geometry]
[logic_tree]
number_of_logic_tree_samples = 0
[erf]
rupture_mesh_spacing = 2
width_of_mfd_bin = 0.1
area_source_discretization = 5.0
[site_params]
reference_vs30_type = measured
reference_vs30_value = 600.0
reference_depth_to_2pt5km_per_sec = 5.0
reference_depth_to_1pt0km_per_sec = 100.0
[calculation]
source_model_logic_tree_file = source_model_logic_tree.xml
gsim_logic_tree_file = gmpe_logic_tree.xml
investigation_time = 50.0
intensity_measure_types_and_levels = "PGA": [...]
truncation_level = 3
maximum_distance = 200.0
[disaggregation]
poes_disagg = 0.1
mag_bin_width = 1.0
distance_bin_width = 10.0
coordinate_bin_width = 0.2
num_epsilon_bins = 3
[output]
export_dir = ...
Disaggregation matrices are computed for a single site (located between the two sources) for a ground motion value corresponding to a probability value equal to 0.1 (poes_disagg = 0.1). Magnitude values are classified in bins of one magnitude unit (mag_bin_width = 1.0), distances in bins of 10 km (distance_bin_width = 10.0) and coordinates in bins of 0.2 degrees (coordinate_bin_width = 0.2). 3 epsilon bins are considered (num_epsilon_bins = 3).
Part III
Risk
Introduction
Calculation workflows
Scenario Risk Calculator
Scenario Damage Calculator
Probabilistic Event-based Risk Calculator
Classical PSHA-based Risk Calculator
Retrofitting Benefit/Cost Ratio Calculator
6.1 Introduction
The seismic risk results are calculated using the OpenQuake risk library (oq-risklib), an open-source suite of tools for seismic risk assessment and loss estimation. This library is written in the Python programming language and available in the form of a “developers” release that can be executed through a command line interface. The code of the library can be found in a public repository on GitHub at the following address: http://github.com/gem/oq-risklib.
This section provides a brief description of the calculators currently implemented in oq-risklib, together with an initial presentation of the input and output files. In the following sections, the contents and structure of these files are discussed in detail. For further information regarding the methodologies behind each calculator, users are referred to the OpenQuake-engine Book (Risk).
The Scenario Risk calculator computes loss maps and loss statistics due to a single seismic event, for a
collection of assets. The hazard input can be a single ground motion field (e.g. the median distribution of
ground motion in the region of interest) or a set of ground motion fields allowing the characterisation of
the inter- and intra-event variability from the GMPE. It is noted that the hazard input can either be calculated
using the hazard component of the OpenQuake-engine (oq-hazardlib), or provided to the risk component in
an external file following the respective Natural hazards' Risk Markup Language (NRML) schema (see
oq-nrmllib). A vulnerability model is combined with the distribution of the ground motions at each asset
location to calculate the loss distribution for each asset, as well as the statistics of the total loss throughout
the region of interest. The required input files and resulting output files are depicted in Figure 6.1.
Figure 6.1 – Scenario Risk Calculator input/output structure.
The Scenario Damage calculator is capable of assessing the damage distribution due to a single scenario
earthquake, for a collection of assets. Similarly to the previous calculator, one ground motion field or a set
of ground motion fields is required in order to perform the necessary risk calculations; these can be derived
using oq-hazardlib, or introduced into the OpenQuake-engine using the appropriate NRML schema. In this
calculator, a fragility model is combined with the distribution of ground motion at the location of each
asset, to estimate the number or area of buildings in each damage state. The damage distribution can be
extracted per asset, per building typology (taxonomy) or considering all of the assets simultaneously (total
damage distribution). In addition, this calculator also provides collapse maps, which contain the spatial
distribution of the number or area of collapsed buildings throughout the region of interest. The input/output
structure for this calculator is presented in Figure 6.2.
Figure 6.2 – Scenario Damage Calculator input/output structure.
The Probabilistic Event-based Risk calculator and the Classical PSHA-based Risk calculator compute loss
exceedance curves and probabilistic loss maps for a collection of assets, using as hazard input stochastic
event sets with the associated ground motion fields and hazard curves, respectively.
The Retrofitting Benefit/Cost Ratio calculator compares the loss exceedance curves obtained for the
original and the retrofitted configuration of the assets, in order to assess whether a retrofitting intervention
is economically viable. In Figure 6.5, the input/output structure for this calculator is depicted.
Figure 6.5 – Retrofitting Benefit/Cost Ratio Calculator input/output structure.
The NRML schema for the exposure model allows the definition of various types of costs (structural
cost, nonstructural cost, contents cost, business interruption cost). Further explanations regarding the
quantities that are currently being used to define the exposure elements can be found in the OpenQuake-
engine Book (Risk).
The way the information about the characteristics of the assets in an exposure model is stored can vary
strongly, depending on how and why the data were compiled. As an example, if national census information
is used to estimate the distribution of assets in a given region, it is likely that the number of buildings
within a given geographical area will be used to define the dataset, and will be used for estimating the
number of collapsed buildings for a scenario earthquake. On the other hand, if simplified methodologies
based on proxy data, such as the population distribution, are used to develop the exposure model, then it is
likely that the built up area or the economic cost of each building typology will be directly derived, and will
be used for the estimation of economic losses. Thus, the following set of attributes exists within the schema
for the exposure model:
• number: number of units of a given asset at a given location;
• area: area of the asset, at a given location;
• cost: structural replacement cost of the asset at a given location;
The set of required attributes depends on what information a user wants to store about the assets in the
exposure model, and how. While the attribute number is a rather simple parameter, the other two (area and
cost) can be ambiguous, as different ways to define them might be used. With regard to the attribute area,
one can either choose to provide the aggregated built up area of the assets per location or the average built
up area of a single building unit (noting that an asset might be made up of a number of individual buildings).
Similarly, the cost can be defined as the aggregated structural replacement cost, the cost of replacing a
single unit, or even the structural replacement cost per unit of area. For the purposes of performing a
retrofitting benefit/cost analysis, it is also necessary to define the retrofitting cost (reco). The combination
of the possible options in which these three attributes can be defined leads to four ways of storing the
information about the assets. For each of these cases a brief explanation and an example are provided in
this section.
Example 1
This example is comprised of an exposure model in which the aggregated cost (structural, nonstructural,
contents and business interruption) of the buildings of each taxonomy for a set of locations is directly
provided. Thus, in order to indicate how the various costs will be defined, the following information needs
to be stored in the exposure model file:
...
<conversions>
<costTypes>
<costType name="structural" type="aggregated" unit="EUR"/>
<costType name="non_structural" type="aggregated" unit="EUR" />
<costType name="business_interruption" type="aggregated"
unit="EUR"/>
<costType name="contents" type="aggregated" unit="EUR"/>
</costTypes>
</conversions>
...
In this case, the cost type of each component has been defined as aggregated. Once the way in
which each cost is going to be defined has been established, the values for each asset can be stored
according to the following format.
...
<assets>
<asset id="asset_01" taxonomy="RC/DMRF-D/LR">
<location lon="9.15" lat="45.17" />
<costs>
<cost type="structural" value="1500"/>
<cost type="non_structural" value="2500"/>
<cost type="contents" value="1200"/>
<cost type="business_interruption" value="400"/>
</costs>
</asset>
...
<asset id="asset_99" taxonomy="RC/DMRF-D/HR">
<location lon="9.15" lat="45.12" />
<costs>
<cost type="structural" value="2500"/>
<cost type="non_structural" value="2100"/>
<cost type="contents" value="1900"/>
<cost type="business_interruption" value="40"/>
</costs>
</asset>
</assets>
</exposureModel>
</nrml>
Each asset is uniquely identified by its id, which is used by the OpenQuake-engine to relate each
asset with the associated results (e.g. loss exceedance curves). Then, a pair of coordinates (latitude
and longitude) for a location where the asset is assumed to exist is defined. Each asset must be
classified according to a taxonomy, so that the OpenQuake-engine is capable of employing the appropriate
vulnerability function or fragility function in the risk calculations. Finally, the cost values of each type
are stored within the costs attribute. In this example, the aggregated value for all units (within a given
asset) at each location is provided directly, so there is no need to define other attributes such as number or
area. This mode of representing an exposure model is probably the simplest one.
Example 2
In this example an exposure model containing the number of units (buildings) and the associated costs per
unit of each building typology is presented.
...
<conversions>
<costTypes>
<costType name="structural" type="per_unit" unit="EUR"/>
<costType name="non_structural" type="per_unit" unit="EUR" />
<costType name="business_interruption" type="per_unit"
unit="EUR"/>
<costType name="contents" type="per_unit" unit="EUR"/>
</costTypes>
</conversions>
...
For this case, the cost type has been set to per_unit. Then, the information from each asset can be
stored following the format below:
...
<assets>
<asset id="asset_01" number="10" taxonomy="RC/DMRF-D/LR">
<location lon="9.15" lat="45.17" />
<costs>
...
In this example, the various costs for each asset are not provided directly, as was done in the previous
example. In order to carry out risk calculations in which the economic cost of each asset is required,
the OpenQuake-engine multiplies, for each asset, the number of units (buildings) by the "per unit"
replacement cost. Note that in this case, there is no need to specify the attribute area.
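As an illustrative sketch of how an asset can be stored in this format (the element and attribute names are those used in the previous examples, while the id, taxonomy, location and the per-unit cost values shown here are purely hypothetical):
...
<assets>
<asset id="asset_01" number="10" taxonomy="RC/DMRF-D/LR">
<location lon="9.15" lat="45.17" />
<costs>
<cost type="structural" value="150"/>
<cost type="non_structural" value="250"/>
<cost type="contents" value="120"/>
<cost type="business_interruption" value="40"/>
</costs>
</asset>
</assets>
...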
Example 3
This example is comprised of an exposure model containing the built up area of each building typology
for a set of locations, and the associated costs per area.
...
<conversions>
<area type="aggregated" unit="square meters"/>
<costTypes>
<costType name="structural" type="per_area" unit="EUR"/>
<costType name="non_structural" type="per_area" unit="EUR" />
<costType name="business_interruption" type="per_area"
unit="EUR"/>
<costType name="contents" type="per_area" unit="EUR"/>
</costTypes>
</conversions>
...
In order to compile an exposure model with this structure, it is required to set the cost type to
per_area. In addition, it is also necessary to specify whether the area that is being stored represents the
aggregated area of the number of units within an asset, or the average area of a single unit. In this particular
case, the area that is being stored is the aggregated built up area per asset, and thus this attribute was set
to aggregated.
...
<assets>
...
Once again, the OpenQuake-engine needs to carry out some calculations in order to compute the
different costs per asset. In this case, this value is computed by multiplying the aggregated built up area
of each building typology by the associated cost per unit of area. Notice that in this case, there is no need
to specify the attribute number.
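As an illustrative sketch (the asset id, taxonomy, location, area and cost values are hypothetical), an asset in this format carries the aggregated built up area through the area attribute, together with the costs per unit of area:
...
<assets>
<asset id="asset_01" area="2000" taxonomy="RC/DMRF-D/LR">
<location lon="9.15" lat="45.17" />
<costs>
<cost type="structural" value="500"/>
<cost type="non_structural" value="300"/>
<cost type="contents" value="200"/>
<cost type="business_interruption" value="100"/>
</costs>
</asset>
</assets>
...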
Example 4
This example is comprised of an exposure model containing the number of buildings for each location,
the average built up area per building unit and the associated costs per area.
...
<conversions>
<area type="per_asset" unit="square meters"/>
<costTypes>
<costType name="structural" type="per_area" unit="EUR"/>
<costType name="non_structural" type="per_area" unit="EUR" />
<costType name="business_interruption" type="per_area"
unit="EUR"/>
<costType name="contents" type="per_area" unit="EUR"/>
</costTypes>
</conversions>
...
Similarly to what was described in the previous example, the various cost types also need to be
established as per_area, but the type of area is now defined as per_asset.
...
<assets>
...
In this example, the OpenQuake-engine will make use of all of these parameters to estimate the various
costs of each asset, by multiplying the number of buildings by their average built up area, and then by the
respective cost per unit of area.
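As an illustrative sketch (all values are hypothetical), an asset in this format carries both the number of building units and the average built up area of a single unit, together with the costs per unit of area:
...
<assets>
<asset id="asset_01" number="10" area="200" taxonomy="RC/DMRF-D/LR">
<location lon="9.15" lat="45.17" />
<costs>
<cost type="structural" value="500"/>
<cost type="non_structural" value="300"/>
<cost type="contents" value="200"/>
<cost type="business_interruption" value="100"/>
</costs>
</asset>
</assets>
...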
Example 5
In this example, additional information will be included, which is required for other risk analysis besides
loss estimation, such as the calculation of insured losses or benefit/cost analysis. For the former assessment,
it is necessary to establish how the insured limit and deductible is going to be define, according to the
format below.
...
<conversions>
<costTypes>
<costType name="structural" type="aggregated" unit="EUR"/>
<costType name="non_structural" type="aggregated" unit="EUR" />
<costType name="business_interruption" type="aggregated"
unit="EUR"/>
<costType name="contents" type="aggregated" unit="EUR"/>
</costTypes>
<deductible isAbsolute="false"/>
<insuranceLimit isAbsolute="false"/>
</conversions>
...
In this example, both the insurance limit and the deductible were defined as a fraction of the costs, by
setting the attribute isAbsolute to false. Alternatively, a user could define one or both of these
parameters as absolute thresholds, by setting the aforementioned attribute to true. Then, for each
type of cost, the limit and deductible values can be stored for each asset. Moreover, in order to perform a
benefit/cost assessment, it is also necessary to indicate the retrofitting cost. This parameter is handled
in the same manner as the structural cost, and it should be stored according to the following structure.
...
<assets>
<asset id="asset_01" taxonomy="RC/DMRF-D/LR">
<location lon="9.15" lat="45.17" />
<costs>
<cost type="structural" value="1500" deductible=".05"
insuranceLimit="0.9" retrofitted="200"/>
<cost type="non_structural" value="2500" deductible=".1"
insuranceLimit="0.8"/>
<cost type="contents" value="1200" deductible=".2"
insuranceLimit="0.6"/>
<cost type="business_interruption" value="400" deductible=".1"
insuranceLimit="0.5"/>
</costs>
</asset>
...
<asset id="asset_99" taxonomy="RC/DMRF-D/HR">
<location lon="9.15" lat="45.12" />
<costs>
<cost type="structural" value="2500" deductible=".1"
insuranceLimit="0.9" retrofitted="300"/>
<cost type="non_structural" value="2100" deductible=".05"
insuranceLimit="0.7"/>
<cost type="contents" value="1900" deductible=".2"
insuranceLimit="0.7"/>
<cost type="business_interruption" value="40" deductible=".05"
insuranceLimit="0.9"/>
</costs>
</asset>
</assets>
</exposureModel>
</nrml>
Although the aggregated cost type (the structure described in Example 1) was used to demonstrate how
the insurance parameters and the retrofitting cost can be stored, it is important to mention that any of the
other storing approaches (Examples 2-4) can also be employed.
Example 6
The OpenQuake-engine is also capable of estimating human losses, based on a number of occupants
within an asset, at a certain time of the day. In this example, it is demonstrated how this parameter is
defined for each asset. In addition, this example also serves the purpose of presenting an exposure model
in which three cost types have been defined following different structures.
...
<conversions>
<area type="aggregated" unit="square meters"/>
<costTypes>
<costType name="structural" type="per_unit" unit="EUR"/>
<costType name="non_structural" type="per_area" unit="EUR" />
<costType name="contents" type="aggregated" unit="EUR"/>
</costTypes>
</conversions>
...
As previously mentioned, in this example only three costs are being stored, and each one follows
a different approach. The structural cost is defined as the replacement cost per unit (Example 2),
the non_structural cost is defined as the cost per unit of area (Example 3), and the contents cost
is provided directly as the aggregated value per asset (Example 1). The information about each asset is
presented below, along with the number of occupants at different times of the day.
...
<assets>
<asset id="asset_01" number="5" area ="500" taxonomy="RC/DMRF-D/LR">
<location lon="9.15" lat="45.17" />
<costs>
<cost type="structural" value="1000"/>
<cost type="non_structural" value="250"/>
<cost type="contents" value="5000"/>
</costs>
<occupancies>
<occupancy occupants="10" period="day"/>
<occupancy occupants="50" period="night"/>
</occupancies>
</asset>
...
<asset id="asset_99" number="8" area ="800" taxonomy="RC/DMRF-D/HR">
<location lon="9.15" lat="45.12" />
<costs>
<cost type="structural" value="2000"/>
<cost type="non_structural" value="400"/>
<cost type="contents" value="4000"/>
</costs>
<occupancies>
<occupancy occupants="20" period="day"/>
<occupancy occupants="30" period="night"/>
</occupancies>
</asset>
</assets>
</exposureModel>
</nrml>
The number of occupants for each asset is stored under the occupancies field, as part of the
occupancy instance. The number and type of the periods of the day are not fixed, and a user can
provide as many as needed (e.g. morning, afternoon, night, transit, 9am-17pm). The descriptions used
to define each period are used to specify the time of the day for which the human losses should be
estimated in the Scenario Risk calculator (see section INCLUDE LATER).
The way this information is stored is constantly being modified, as further feedback from users
and experts is received. Hence, it is important to understand which version of NRML the engine is using,
in order to avoid incompatibility issues. NRML is currently at version 0.4 and documentation about each
release can be found on GitHub (see oq-nrmllib). Several examples of exposure models containing different
types of information have been presented above.
At the top of the NRML schema for the vulnerability model, the following metadata are stored:
• vulnerabilitySetID: A unique key used to identify the vulnerability model instance within the
OpenQuake-engine;
• assetCategory: An attribute that describes the asset typology (e.g.: population, buildings,
contents);
• lossCategory: An attribute that describes the type of loss being modelled for the assetCategory
(e.g. fatalities, structural replacement cost, contents replacement cost).
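A minimal sketch of this header portion is given below; the set identifier and the category values are purely illustrative, while the element and attribute names are those of the NRML 0.4 vulnerability schema used in the following snippets:
...
<vulnerabilityModel>
<discreteVulnerabilitySet vulnerabilitySetID="EXAMPLE-SET"
assetCategory="buildings" lossCategory="economic_loss">
...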
...
<IML IMT = "SA(0.3)"> 0.061 0.129 0.188 0.273 0.398 0.579
0.843 1.227 1.856 2.485 </IML>
...
Within this component, an attribute specifying the intensity measure type (e.g.: Sa, PGA, MMI)
is defined, followed by the list of intensity measure levels. This set of values is common to all of the
vulnerability functions in the model.
...
<discreteVulnerability vulnerabilityFunctionID="typeA"
probabilisticDistribution="LN">
<lossRatio> 0.002 0.007 0.014 0.028 0.058 0.118
0.223 0.370 0.446 0.523 </lossRatio>
<coefficientsVariation> 0.012 0.058 0.079 0.159 0.265
0.244 0.211 0.152 0.088 0.082 </coefficientsVariation>
</discreteVulnerability>
<discreteVulnerability vulnerabilityFunctionID="typeB"
probabilisticDistribution="LN">
<lossRatio> 0.006 0.025 0.052 0.108 0.215 0.391
0.613 0.820 0.894 0.967 </lossRatio>
<coefficientsVariation> 0.010 0.054 0.082 0.167 0.285
0.278 0.261 0.132 0.084 0.021 </coefficientsVariation>
</discreteVulnerability>
</discreteVulnerabilitySet>
</vulnerabilityModel>
</nrml>
Finally, for each discrete vulnerability function the following parameters are required:
• vulnerabilityFunctionID : A unique key that is used to relate each vulnerability function
with the assets in the exposure model;
• probabilisticDistribution : An attribute that establishes the type of probabilistic distribu-
tion used to model the uncertainty in loss ratio. At the moment, the OpenQuake-engine supports
lognormal (LN) and beta (BT) distributions;
• lossRatio : A set of mean loss ratios (one for each intensity measure level defined previously).
These values can represent different losses such as fatality rates (ratio between the number of
fatalities and total population exposed) or so-called damage ratio (ratio between the repair cost and
the replacement cost of a given structure).
• coefficientsVariation : A set of coefficients of variation (one per loss ratio) that describes
the uncertainty in the loss ratio. If users do not want to consider the uncertainty, this set of
parameters can be set to zero, and the OpenQuake-engine assumes each loss ratio as a deterministic
value.
In the previously described vulnerability model all of the vulnerability functions were defined in
terms of a single intensity measure type (Sa for 0.3 seconds). However, the current version of the engine
also allows the employment of a vulnerability model that is comprised of vulnerability functions that each
use distinct intensity measure types. In the following example, the schema of a vulnerability model in
which three intensity measure types were used (PGA, PGV and Sa for 0.3 seconds) is presented.
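The corresponding schema is sketched below; the set identifiers, the categories and the elided numerical values are illustrative, while each vulnerability function set declares its own intensity measure type through the IMT attribute of its IML element:
...
<vulnerabilityModel>
<discreteVulnerabilitySet vulnerabilitySetID="set_PGA"
assetCategory="buildings" lossCategory="economic_loss">
<IML IMT="PGA"> 0.05 0.10 0.20 0.40 0.80 </IML>
<discreteVulnerability vulnerabilityFunctionID="typeA"
probabilisticDistribution="LN">
<lossRatio> ... </lossRatio>
<coefficientsVariation> ... </coefficientsVariation>
</discreteVulnerability>
</discreteVulnerabilitySet>
<discreteVulnerabilitySet vulnerabilitySetID="set_PGV"
assetCategory="buildings" lossCategory="economic_loss">
<IML IMT="PGV"> ... </IML>
...
</discreteVulnerabilitySet>
<discreteVulnerabilitySet vulnerabilitySetID="set_SA03"
assetCategory="buildings" lossCategory="economic_loss">
<IML IMT="SA(0.3)"> ... </IML>
...
</discreteVulnerabilitySet>
</vulnerabilityModel>
...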
Several methodologies to derive vulnerability functions are currently being evaluated by the Global
Earthquake Model (GEM) and will be part of a set of modelling tools. Scripts to convert vulnerability
functions stored in Excel or ASCII files into NRML have already been developed, and can be found in
the GEM Science Tools repository on GitHub (http://github.com/GEMScienceTools).
Similarly to what has been described for the vulnerability models, the NRML schema for the fragility
model also has some attributes that are common to all of the fragility functions comprising the model. This
initial portion of the schema comprises the following attributes (a sketch is given after this list):
• description: represents an attribute that can be used to include some information about the
fragility model, for example, what building typologies are being covered or the source of the
fragility model;
• limitStates: this field is used to define the number and nomenclature of each limit state. Despite
the fact that three limit states are being employed in this example, it is possible to use any number
of states, as long as a fragility curve is always defined for each limit state.
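A minimal sketch of this initial portion is given below, assuming the NRML 0.4 fragility schema (in which the model also declares a format attribute, discrete or continuous); the description text and the limit state names are illustrative:
...
<fragilityModel format="discrete">
<description>Fragility model for RC buildings (illustrative)</description>
<limitStates>slight damage moderate damage collapse</limitStates>
...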
...
<ffs noDamageLimit="0.05">
<taxonomy>RC</taxonomy>
<IML IMT="PGA" imlUnit="g"> 0.0 0.25 0.50 0.75 1.00 </IML>
...
For each building typology, a set of limit state curves need to be stored within the field ffs (fragility
function set). The following attributes are currently being employed to define this input:
• noDamageLimit: this attribute defines the intensity measure level below which the probability of
exceedance for all curves is zero;
• taxonomy: a unique key that is used to relate each fragility function with the relevant assets in the
exposure model;
• IML: this attribute serves the purposes of defining the list of intensity measure levels for which the
limit state curves are defined. In addition, it is also necessary to define the intensity measure type
(IMT) being used and the respective units (imlUnit);
• ffd: this field (fragility function discrete) is used to define the probabilities of exceedance (poes)
of each limit state curve. It is also necessary to include which limit state is being defined in the
attribute ls.
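A sketch of the remainder of the block, assuming that the exceedance probabilities of each limit state curve are listed inside a poEs element within the corresponding ffd field; the probability values below are purely illustrative:
...
<ffd ls="slight damage">
<poEs> 0.00 0.30 0.70 0.90 1.00 </poEs>
</ffd>
<ffd ls="moderate damage">
<poEs> 0.00 0.10 0.40 0.70 0.90 </poEs>
</ffd>
<ffd ls="collapse">
<poEs> 0.00 0.02 0.15 0.40 0.70 </poEs>
</ffd>
</ffs>
</fragilityModel>
</nrml>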
As previously mentioned, the user may choose to define the fragility functions in a continuous
manner, through the use of cumulative lognormal functions. In Figure 7.3, a continuous fragility model is
presented.
The NRML schema to store these functions has an initial structure similar to that described for the
discrete fragility models. Then, the continuous limit state curves are stored as illustrated below:
Figure 7.3 – Example of a continuous fragility model (probability of exceedance versus intensity measure level).
...
<ffs noDamageLimit="0.05">
<taxonomy>RC</taxonomy>
<IML IMT="PGA" minIML="0.0" maxIML="1.0" imlUnit="g"/>
<ffc ls="slight damage">
<params mean="0.16" stddev="0.11" />
</ffc>
<ffc ls="moderate damage">
<params mean="0.40" stddev="0.26" />
</ffc>
<ffc ls="collapse">
<params mean="0.73" stddev="0.48" />
</ffc>
</ffs>
</fragilityModel>
</nrml>
Again, the set of limit state curves for each building typology needs to be stored within the field ffs
(fragility function set), through the definition of the following attributes:
• noDamageLimit: this attribute defines the intensity measure level below which the probability of
exceedance for all curves is zero;
• type: this parameter defines the type of probabilistic distribution being used to define the limit
state curves. Currently the engine only supports lognormal distributions, however, the capability of
considering other types of distributions (e.g. normal, exponential) will be developed in the future;
• taxonomy: a unique key that is used to relate each fragility function with the relevant assets in the
exposure model;
• IML: in this field, the intensity measure type (IMT) and associated units (imlUnit) for the limit
state curves is defined, along with the minimum (minIML) and maximum (maxIML) intensity
measure levels enclosing the range of applicability of the set of fragility functions;
• ffc: this field (fragility function continuous) is used to define the mean (mean) and standard
deviation (stddev) of the cumulative lognormal function. In addition, the limit state for the curve
being defined needs to be specified in the attribute ls.
The risk calculations are defined through a configuration file; its initial, general portion is common to the
various calculators and is illustrated below.
[general]
description = Scenario Risk Nepal
calculation_mode = scenario_risk
exposure_file = exposure_model.xml
region_constraint = 78.0 31.5,89.5 31.5,89.5 25.5,78 25.5
maximum_distance = 10
...
• description: a parameter that can be used to include some information about the type of
calculations that are going to be performed;
• calculation_mode: this parameter sets the type of calculations. The key word for each risk
calculator is described in the following sections;
• exposure_file: this parameter is used to specify the path to the exposure model file;
• region_constraint: this field is used to define the polygon enclosing the region of interest.
Assets outside of this region will not be considered in the risk calculations. This region is defined
using pairs of coordinates (longitude and latitude in decimal degrees) that indicate the vertexes of
the polygon;
• maximum_distance: this parameter indicates the maximum allowable distance between an asset
and the closest hazard input. If no hazard input is found within this distance, the asset is skipped
and a message is provided mentioning the id of the asset that is affected by this issue. If this
parameter is not provided, the OpenQuake-engine assumes a maximum allowable distance of 5 km.
Depending on the type of calculations, other parameters need to be provided in addition to the ones
described above, as will be discussed in the following sections. For the Scenario Risk calculator, the
calculator-specific portion of the configuration file comprises the vulnerability model files, the parameters
controlling the Monte Carlo sampling of the loss ratios (asset_correlation and master_seed) and the
insured_losses flag, as illustrated below.
...
structural_vulnerability_file = struct_vul_model.xml
nonstructural_vulnerability_file = nonstruct_vul_model.xml
contents_vulnerability_file = cont_vul_model.xml
business_interruption_vulnerability_file = bus_int_vul_model.xml
occupants_vulnerability_file = occ_vul_model.xml
asset_correlation = 0.7
master_seed = 3
insured_losses = true
For the Scenario Damage calculator, the parameter calculation_mode needs to be defined as scenario_damage.
There is only one parameter specific to this calculator, which is the fragility model file path, as presented below.
...
fragility_file = fragility_model.xml
• fragility_file: a parameter used to define the path to the fragility model file.
The parameter calculation_mode needs to be set to event_based_risk in order to use the Probabilistic
Event-based Risk calculator. Similarly to what was described for the Scenario Risk calculator, a Monte
Carlo sampling process is employed within this module to take into account the loss ratio uncertainty;
hence, the parameters asset_correlation and master_seed need to be defined as previously described.
This calculator is also capable of estimating insured losses and therefore the insured_losses attribute
should be set as well. Regarding loss disaggregation, the Probabilistic Event-based Risk calculator can
disaggregate the losses based on the magnitude/distance and the location (longitude/latitude) of the events;
in order to do so, it is necessary to define the bin width of each of these parameters. The calculator-specific
portion of the configuration file is illustrated in the following example.
...
structural_vulnerability_file = struct_vul_model.xml
nonstructural_vulnerability_file = nonstruct_vul_model.xml
contents_vulnerability_file = cont_vul_model.xml
business_interruption_vulnerability_file = bus_int_vul_model.xml
occupants_vulnerability_file = occ_vul_model.xml
asset_correlation = 0.7
master_seed = 3
insured_losses = true
loss_curve_resolution = 20
conditional_loss_poes = 0.01, 0.05, 0.1
The definition of the parameters for the loss disaggregation follows the same rules established for the
seismic hazard disaggregation described in section (TO BE INCLUDED).
In order to run the Classical PSHA-based Risk calculator, the parameter calculation_mode needs to be set
to classical_risk. With this calculator it is also possible to extract loss maps, so the parameter
conditional_loss_poes (the list of probabilities of exceedance for which loss maps are computed) needs
to be defined as explained in the previous sub-section. The remaining parameters are illustrated below.
...
structural_vulnerability_file = struct_vul_model.xml
nonstructural_vulnerability_file = nonstruct_vul_model.xml
contents_vulnerability_file = cont_vul_model.xml
business_interruption_vulnerability_file = bus_int_vul_model.xml
occupants_vulnerability_file = occ_vul_model.xml
lrem_steps_per_interval = 2
conditional_loss_poes = 0.01, 0.05, 0.1
As previously explained, the Retrofitting Benefit/Cost Ratio calculator uses loss exceedance curves, which
can be calculated using the Classical PSHA-based Risk or the Probabilistic Event-based Risk calculators.
Therefore, depending on which calculator a user chooses to employ, the configuration file will be different.
If the Classical PSHA-based Risk calculator is employed, then the calculation_mode should be set to
classical_bcr and the calculator-specific part of the configuration file should be defined as presented below.
...
structural_vulnerability_file = struct_vul_model.xml
vulnerability_retrofitted_file = retrof_vul_model.xml
lrem_steps_per_interval = 2
interest_rate= 0.005
asset_life_expectancy = 50
Alternatively, if a user decides to employ the Probabilistic Event-based Risk calculator for the
calculation of the loss curves, then the calculation_mode should be set to event_based_bcr and the
remaining portion of the configuration file should be defined as follows.
...
vulnerability_file = vulnerability_model.xml
vulnerability_retrofitted_file = vulnerability_model_retrof.xml
asset_correlation = 0.7
master_seed = 3
loss_curve_resolution = 20
interest_rate= 0.005
asset_life_expectancy = 50
Running OpenQuake-engine for risk calculations
Description of the outputs
Loss statistics
Loss maps
Damage distribution
Collapse maps
Loss exceedance curves
Retrofitting Benefit/cost ratio maps
Loss disaggregation
Event loss tables
8. Risk Calculations and Results
Whether a user chooses to load pre-computed ground motion fields or to calculate this input using the
hazard component of the OpenQuake-engine, a unique id is associated with the set of ground motion fields,
as depicted below.
Calculation 3 results:
id | output_type | name
12 | gmf_scenario | gmf_scenario
This id is the parameter that will be used when launching the risk calculations to indicate which hazard
input should be employed. In the case of the scenario-based calculators, there is only a single hazard input
(one or a set of ground motion fields). For the remaining calculators, where probabilistic seismic hazard
is used, it is possible to have multiple hazard inputs due to the employment of logic trees, as described
in section 3.1. In the following illustration, a set of hazard results produced using the Classical PSHA
calculator is presented.
Calculation 4 results:
id | output_type | name
32 | hazard_curve | hc-rlz-32-PGA
33 | hazard_curve | hc-rlz-33-PGA
34 | hazard_curve | hc-rlz-34-PGA
35 | hazard_curve | hc-rlz-35-PGA
36 | hazard_curve | mean curve for PGA
37 | hazard_curve | quantile curve (poe>= 0.15) for imt PGA
38 | hazard_curve | quantile curve (poe>= 0.85) for imt PGA
In this case, since the logic tree had four branches, four sets of hazard curves were produced, each
one with its own id. In addition, mean and quantile hazard curves were also produced. A user may choose
to run risk calculations using the results from one of the branches or from the mean/quantile curves. To do
so, the id of the respective hazard result should be employed when launching the risk calculations, as
depicted below.
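As a sketch, assuming the oq-engine 1.2 command-line options --run-risk (short form --rr) and --hazard-output-id (short form --ho), and a risk configuration file named job_risk.ini (both the option names and the file name are assumptions), the hazard curves with id 32 from the listing above would be selected as follows:
oq-engine --run-risk job_risk.ini --hazard-output-id 32
or simply:
oq-engine --rr job_risk.ini --ho 32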
On the other hand, a user might also want to run the risk calculations considering all the hazard results
from a certain calculation run. In this case, rather than providing the hazard-output-id, users need to
provide the id of the hazard calculation, as follows.
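Under the same assumptions, but using the --hazard-calculation-id option (short form --hc) together with the id of the hazard calculation (4 in the listing above):
oq-engine --run-risk job_risk.ini --hazard-calculation-id 4
or simply:
oq-engine --rr job_risk.ini --hc 4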
For further information about consulting the id of hazard results or calculations, users are referred to
section 6.2. To obtain a list of all the risk calculations, the following command can be employed.
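A sketch of the command, assuming the oq-engine 1.2 option --list-risk-calculations (short form --lrc):
oq-engine --list-risk-calculations
or simply:
oq-engine --lrc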
Then, in order to display a list of the risk outputs from a given job, the following command can be used.
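A sketch, assuming the oq-engine 1.2 option --list-outputs (short form --lo), which takes the id of the calculation as its argument:
oq-engine --list-outputs <risk_calculation_id>
or simply:
oq-engine --lo <risk_calculation_id>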
Calculation 4 results:
id | output_type | name
29 | loss_curve | loss curves. type=structural, hazard=32
30 | loss_map | loss maps. type=structural poe=0.1, hazard=32
Then, in order to export the risk calculation outputs in the appropriate xml format, the following command
can be used.
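A sketch, assuming the oq-engine 1.2 option --export-output (short form --eo), which takes the id of the output and the target directory as arguments:
oq-engine --export-output <output_id> <target_directory>
or simply:
oq-engine --eo <output_id> <target_directory>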
For the aggregate losses, the exported file contains the mean and the standard deviation of the total loss,
as shown below.
Mean,Standard Deviation
8717775315.66,2047771108.36
• lossCategory: the type of losses that are being stored. This parameter is taken from the
vulnerability model that was used in the loss calculations (e.g. fatalities, economic loss);
• unit: this attribute is used to define the units in which the losses are being measured (e.g. EUR);
• node: each loss map comprises various nodes, each node possibly containing a number of
assets. The location of the node is defined by a latitude and a longitude in decimal degrees within
the field gml:Point. The mean loss (mean) and associated standard deviation (stdDev) for each
asset (identified by the parameter assetRef) are stored in the loss field.
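As an illustrative sketch of such a node for a scenario loss map (the asset references and the numerical values are hypothetical):
...
<node>
<gml:Point>
<gml:pos>83.33 28.71</gml:pos>
</gml:Point>
<loss assetRef="a1" mean="1500.0" stdDev="450.0"/>
<loss assetRef="a2" mean="800.5" stdDev="270.3"/>
</node>
...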
For the probabilistic loss maps (expected losses for a given return period), a set of additional
parameters need to be considered as depicted in the following example.
</node>
...
<node>
<gml:Point>
<gml:pos>83.33 28.71</gml:pos>
</gml:Point>
<loss assetRef="a997" value="4077.3"/>
<loss assetRef="a998" value="2466.4"/>
<loss assetRef="a999" value="4434.5"/>
</node>
</lossMap>
</nrml>
• damageStates: this field serves the purposes of storing the set of damage states, as defined in the
fragility model employed in the calculations;
• DDNode: this attribute is used to store the damage distribution of a number of assets, at a given
location (defined within the attribute gml:Point). For each asset, the mean number of buildings
(mean) and associated standard deviation (stddev) in each damage state is defined.
The Scenario Damage calculator can also estimate the total number of buildings with a certain
taxonomy, in each damage state. This distribution of damage per building taxonomy is depicted in the
following example.
In the damage distribution per taxonomy, each DDNode contains the statistics of the number of
buildings in each damage state, belonging to a given building class as specified in the taxonomy attribute.
Finally, a total damage distribution can also be calculated, which contains the mean and standard deviation
of the total number of buildings in each damage state, as illustrated below.
<gml:Point>
<gml:pos>83.33 28.71</gml:pos>
</gml:Point>
<cf assetRef="a997" mean="239.4" stdDev="102.0"/>
<cf assetRef="a998" mean="733.0" stdDev="253.2"/>
<cf assetRef="a999" mean="207.4" stdDev="66.5"/>
</CMNode>
</collapseMap>
</nrml>
This schema follows the same structure as the loss maps presented previously. Thus, the results for a
number of assets at a given location are stored within the field CMNode. This field is associated with a
location (defined within the gml:Point attribute) and it contains the mean number of collapses (mean) and
the respective standard deviation (stdDev) for each asset (identified by the parameter assetRef).
Each lossCurve is associated with a location (defined within the gml:Point attribute) and a
reference to the asset (assetRef) whose loss is being represented. Then, three lists of values are
stored: the probabilities of exceedance (poE), levels of absolute loss (losses) and percentages of loss
(lossRatios).
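As an illustrative sketch of such a record (the values are hypothetical, and it is assumed that the three lists are stored in poEs, losses and lossRatios elements):
...
<lossCurve assetRef="a1">
<gml:Point>
<gml:pos>83.33 28.71</gml:pos>
</gml:Point>
<poEs>0.10 0.05 0.01</poEs>
<losses>2500 4500 7800</losses>
<lossRatios>0.05 0.09 0.16</lossRatios>
</lossCurve>
...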
• interestRate: this parameter represents the inflation rate of the economic value of the assets;
• assetLifeExpectancy: a parameter specifying the life expectancy (or design life) of the assets
considered for the calculations;
• node: this schema follows the same node structure already presented for the loss maps; however,
instead of the losses for each asset, the benefit/cost ratio (ratio), the average annual loss considering
the original vulnerability (aalOrig) and the average annual loss for the retrofitted configuration
(aalRetr) are provided.
• variable: The type of disaggregation is defined by this attribute, and it can assume the value
magnitude_distance or coordinate;
• bin: Each bin is identified by the edges of the corresponding pair of parameters that it represents (e.g.
the lower and upper bounds of a given combination of magnitude and distance, as illustrated in the
previous example). Then, the aggregated losses associated with this pair of parameters are stored
in the field absoluteLoss, and their percentage with respect to the overall loss is defined in the
field fraction.
Finally, the Probabilistic Event-based Risk calculator can also export an event loss table, which lists, for
each rupture, the corresponding magnitude and the aggregate loss over the whole collection of assets, as
illustrated below.
Rupture,Magnitude,Aggregate Loss
1,8.25,79197
2,8.25,74478
3,7.75,46458
4,7.75,45153
5,7.75,42569
6,8.25,40649
7,7.75,38868
8,7.75,37707
9,7.75,37141
...
Scenario Risk demo
Scenario Damage demo
Classical PSHA-based Risk demo
Probabilistic Event-based demo
Retrofitting Benefit/cost ratio demo
9. Demonstrative Examples
This section describes the set of demos that have been compiled to exercise the OpenQuake-engine. These
demos can be found in a public repository on GitHub at the following link: http://github.com/gem/oq-
engine/tree/master/demos. Furthermore, a folder containing all of these demonstrative examples is
provided when an OATS (OpenQuake Alpha Testing Service) account is requested, and it is also part
of the OpenQuake-engine virtual image package. These examples are purely demonstrative and are not
intended to represent accurately the seismicity, vulnerability or exposure characteristics of the region of
interest; they simply provide example input files that can be used as a benchmark for users planning
to employ the OpenQuake-engine in seismic risk and loss estimation studies. It is also noted that, in the
demonstrative examples presented in this section, illustrations of the various messages that the engine
displays in the command line interface are provided. These messages often contain information about
the calculation id and output id, which will certainly be different for each user.
The five demos use Nepal as the region of interest. An example exposure model has been developed
for this region, comprising 9144 assets distributed amongst 2221 locations (due to the existence of more
than one asset at the same location). A map with the distribution of the number of buildings throughout
Nepal is presented in Figure 9.1.
The building portfolio was organised into four classes for the rural areas (adobe, dressed stone,
unreinforced fired brick, wooden frames), and five classes for the urban areas (the aforementioned
typologies, in addition to reinforced concrete buildings). For each one of these building typologies,
vulnerability functions and fragility functions were collected from the literature. These input models
are only for demonstrative purposes and for further information about the building characteristics of
Nepal, users are advised to contact the National Society for Earthquake Technology of Nepal (NSET -
http://www.nset.org.np/).
This section includes instructions not only on how to run the risk calculations, but also on how to
produce the necessary hazard input. Thus, each demo comprises the configuration file, exposure model
and fragility/vulnerability models fundamental for the risk calculations, but also a configuration file and
associated input models to produce the hazard input.
Figure 9.1 – Distribution of the number of buildings throughout Nepal.
The rupture of the scenario event used in the Scenario Risk demo has been defined in the rupture.xml file,
whilst the hazard calculation settings have been established in the job_haz.ini file. In order to calculate the
set of ground motion fields due to this rupture, users should navigate to the folder where the demo files are
located and use the following command:
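As a sketch, assuming the oq-engine 1.2 option --run-hazard (short form --rh) and the configuration file name mentioned above:
oq-engine --run-hazard job_haz.ini
or simply:
oq-engine --rh job_haz.ini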
Calculation 10 results:
id | output_type | name
20 | gmf_scenario | gmf_scenario
Then, this hazard input can be used for the risk calculations using the following command:
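As a sketch, assuming the risk configuration file of the demo is named job_risk.ini and using the --run-risk/--rr and --hazard-output-id/--ho options with the ground motion fields (id 20) listed above:
oq-engine --rr job_risk.ini --ho 20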
Calculation 11 results:
id | output_type | name
21 | aggregate_loss | Aggregate Loss type=structural
22 | loss_map | loss maps. type=structural
Running the Scenario Damage demo produces the following outputs:
Calculation 12 results:
id | output_type | name
23 | collapse_map | Collapse Map per Asset
24 | dmg_dist_per_asset | Damage Distribution per Asset
25 | dmg_dist_per_taxonomy | Damage Distribution per Taxonomy
26 | dmg_dist_total | Damage Distribution Total
For the Classical PSHA-based Risk demo, a set of hazard curves is first computed:
Calculation 13 results:
id | output_type | name
27 | hazard_curve | hc-rlz-70
In this demo, loss exceedance curves for each asset and two probabilistic loss maps (for probabilities
of exceedance of 1% and 10%) are produced. The following command launches these risk calculations:
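As a sketch, under the same assumptions about the option names and the configuration file name, and using the hazard curves with id 27 from the listing above:
oq-engine --rr job_risk.ini --ho 27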
Calculation 14 results:
id | output_type | name
28 | loss_curve | loss curves. type=structural, hazard=27
29 | loss_map | loss maps. type=structural poe=0.1, hazard=27
30 | loss_map | loss maps. type=structural poe=0.01, hazard=27
For the Probabilistic Event-based demo, a stochastic event set and the corresponding ground motion fields
are first computed:
Calculation 15 results:
id | output_type | name
31 | gmf | gmf-rlz-72
32 | ses | ses-coll-rlz-72
Again, since there is only one branch in the logic tree, only one set of ground motion fields will be
used in the risk calculations, which can be launched through the following command:
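As a sketch, under the same assumptions, using the ground motion fields with id 31 from the listing above:
oq-engine --rr job_risk.ini --ho 31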
Calculation 16 results:
id | output_type | name
28 | loss_curve | loss curves. type=structural, hazard=31
29 | loss_map | loss maps. type=structural poe=0.1, hazard=31
30 | loss_map | loss maps. type=structural poe=0.01, hazard=31
36 | agg_loss_curve | Aggregated curve type=structural, hazard=31
Finally, running the Retrofitting Benefit/Cost Ratio demo produces the following output:
Calculation 17 results:
id | output_type | name
37 | bcr_distribution | BCR Distribution for hazard 27
Appendices
GMPEs for shallow earthquakes in active
tectonic regions
GMPEs for subduction sources
GMPEs for stable continental regions
A. Supported Ground Motion Prediction Equations
We provide below a list of the ground motion prediction equations implemented in oq-hazardlib.
All the implemented GMPEs use moment magnitude as the reference magnitude. For each GMPE, the
oq-engine name, a short description, and the corresponding reference are given.
1 http://peer.berkeley.edu/ngawest
in the context of the NGA West project2. The model is supposed to be applicable for magnitudes in the
range 4-8.5 (if strike-slip), 4-8 (if normal or reverse) and for distances of 0-200 km.
• FaccioliEtAl2010
Based on the same functional form as Cauzzi and Faccioli, 2008, but using the closest distance to the
rupture instead of the hypocentral distance (Faccioli et al., 2010).
• SadighEtAl1997
A ground motion prediction equation based primarily on strong motion data from California and
applicable for magnitudes in the range 4-8 and distances < 100 km (Sadigh et al., 1997).
• ZhaoEtAl2006Asc
A ground motion prediction equation for active shallow crust events, developed using mostly
Japanese strong ground motion recordings (Zhao et al., 2006).
2 http://peer.berkeley.edu/ngawest
Relationships for shallow earthquakes in
active tectonic regions
C. Bibliography
C.1 Books
Aki, K. and P. G. Richards (2002). Quantitative Seismology. Sausalito, California: University
Science Books (cited on pages 24, 26).
C.2 Articles
Abrahamson, N. A. and W. Silva (2008). “Summary of the Abrahamson & Silva NGA Ground-
Motion Relations”. In: Earthquake Spectra 24.1, pages 67–97 (cited on page 113).
Akkar, S. and J. J. Bommer (2010). “Empirical equations for the prediction of PGA, PGV, and
spectral accelerations in Europe, the Mediterranean Region, and the Middle East”. In: Seism.
Res. Lett. 81.2, pages 195–206. DOI: 10.1785/gssrl.81.2.195 (cited on page 113).
Akkar, S. and Z. Çağnan (2010). “A Local Ground-Motion Predictive Model for Turkey, and Its
Comparison with Other Regional and Global Ground-Motion Models”. In: Bull. Seism. Soc.
Am. 100.6, pages 2978–2995 (cited on page 113).
Atkinson, G. M. and D. M. Boore (2003). “Empirical Ground-Motion Relations for Subduction-
Zone Earthquakes and Their Application to Cascadia and Other Regions”. In: Bull. Seism. Soc. Am. 93.4,
pages 1703–1729 (cited on page 114).
— (2006). “Earthquake Ground-Motion Prediction Equations for Eastern North America”. In:
Bulletin of the Seismological Society of America 96.6, pages 2181–2205 (cited on page 114).
Boore, D. M. and G. M. Atkinson (2008). “Ground-Motion Prediction Equations for the Average
Horizontal Component of PGA, PGV, and 5%-Damped PSA at Spectral Periods between
0.01 s and 10.0 s”. In: Earthquake Spectra 24.1, pages 99–138 (cited on page 113).
Campbell, K. W. and Y. Bozorgnia (2003). “Updated Near-Source Ground-Motion (Attenuation)
Relations for the Horizontal and Vertical Components of Peak Ground Acceleration and
Acceleration Response Spectra”. In: Bulletin of the Seismological Society of America 93,
pages 314–331 (cited on page 114).
McGuire, R. K. (1976). FORTRAN computer program for seismic risk analysis. Open-File report
76-67. 102 pages. United States Department of the Interior, Geological Survey (cited on
page 27).
Petersen, M. D., A. D. Frankel, S. C. Harmsen, C. S. Mueller, K. M. Haller, R. L. Wheeler,
R. L. Wesson, Y. Zeng, O. S. Boyd, D. M. Perkins, N. Luco, E. H. Field, C. J. Wills, and
K. S. Rukstales (2008). Documentation for the 2008 Update of the United States National
Seismic Hazard Maps. Open File Report 2008-1128. U.S. Department of the Interior, U.S.
Geological Survey (cited on page 21).
Toro, G. R. (2002). “Modification of the Toro Et Al. (1997) Attenuation Equations for Large
Magnitudes and Short Distances”. URL: http://riskeng.com/PDF/atten_toro_extended.pdf
(cited on page 114).
Index
Area definition . . . . . . . . . . . . . . . . see Source type
Characteristic fault . . . . . . . . . . . . . see Source type
Complex fault . . . . . . . . . . . . . . . . . see Source type
Point source . . . . . . . . . . . . . . . . . . see Source type
Hazard calculation workflows, 24
Classical PSHA, 24
Event-based PSHA, 25
Scenario-based SHA, 25
OpenQuake-engine
hazard, 15
Running OpenQuake
hazard, 41
introduction, 9
Glossary
area source
A source type usually adopted to model distributed seismicity. In an area source the seismicity
occurrence rate is assumed uniform over the source area; this produces a hazard pattern with a
plateau of constant hazard inside the polygon delimiting the area source and values of hazard that
tend to decrease as we move away from the border of the source.
asset
An asset is an element with a certain value, which can include buildings or population. For example,
an asset can include an individual building at a given location, or a number of buildings that are
grouped, co-located at a single location and classified with the same taxonomy.
branch set
The structure describing the epistemic uncertainty on a specific parameter or model included in a
logic tree structure. It assembles a number of branches, each one representing a discrete alternative.
branching level
It indicates the position where a branch set or a branch is located in a logic tree structure. For
example, the first branching level of the seismic source logic tree always contains one or several
initial seismic source input models.
dip
The dip is the steepest angle of descent of the fault plane relative to a horizontal plane; it is
measured in degrees [0,90].
exposure model
A set of assets grouped according to their geographical location, taxonomy and value.
fault trace
A curve representing the intersection between the surface containing the fault surface (or its
prolongation) and the topographic surface.
fragility function
A function that provides the probability of exceeding a set of limit states, given an intensity measure level. These functions
can be discrete or continuous.
fragility model
A set of fragility functions used to model the fragility of all the assets in the exposure model.
grid source
A source typology usually adopted to model distributed seismicity. It is routinely produced by a
seismicity smoothing algorithm (one of the best known being the one proposed by Frankel, 1995).
ground-motion logic tree
A method used to systematically describe the epistemic uncertainties related to the ground motion
models used in the computation of hazard using a specific PSHA input model.
magnitude-frequency distribution
A distribution describing the frequency of earthquakes with a specific magnitude. It can be
continuous or discrete. One frequency-magnitude distribution frequently adopted in PSHA is the
double truncated Gutenberg-Richter distribution.
magnitude-scaling relationship
An empirical relationship linking the magnitude with a parameter describing the size of the
corresponding rupture (e.g. the area of the rupture or the rupture length).
point source
The elemental source typology used in OpenQuake-engine to model distributed seismicity.
probabilistic seismic hazard analysis
A methodology to compute seismic hazard by taking into account the potential contributions
coming from all the sources of engineering importance for a specified site.
PSHA input model
An object containing the information necessary to describe the seismic source and the ground
motion models - plus the related epistemic uncertainties.
rake
The angle, measured on the fault plane, between the slip direction and the strike of the fault; it describes
the movement of the hanging wall relative to the footwall during rupture.
taxonomy
Scheme used to classify the assets. For buildings, a classification scheme has been proposed by
GEM which considers a number of attributes including lateral load resisting system and its material,
height, year of construction. The taxonomy is currently used to link the assets in the exposure
model to the relevant vulnerability function or fragility function.
vulnerability function
A function that describes the probability distribution of loss ratio, conditioned on an intensity
measure level. Currently only discrete vulnerability functions are supported.
vulnerability model
A set of vulnerability functions used to model the physical vulnerability of all the assets in the
exposure model.