Frequency Analysis
Bivariate drought frequency analysis is a technique of copula-based
probabilistic analysis that connects multivariate probability
distributions to their one-dimensional (1D) marginal probability
distributions while still capturing the essential features of dependence
and correlation among the random variables (Sadri and Burn, 2012).
From: Extreme Hydrology and Climate Variability, 2019
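As a minimal illustration of the copula idea described above (not part of the cited study), the sketch below joins two hypothetical marginal non-exceedance probabilities, say for drought duration and severity, through a Gumbel copula; the dependence parameter and the probability values are assumptions chosen purely for demonstration.

```python
import numpy as np

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v): joint probability built from marginal probabilities u and v (theta >= 1)."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

# Hypothetical marginal non-exceedance probabilities from two fitted 1D distributions,
# e.g., F_D(d) for drought duration and F_S(s) for drought severity
u, v = 0.9, 0.8
theta = 2.0   # dependence parameter, assumed here for illustration

joint = gumbel_copula(u, v, theta)   # P(D <= d, S <= s)
print(joint)
```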

Related terms:

Paleoflood, Wavelet, United States of America, Low Flow, Flood Frequency, Quantile

Spring discharge hydrograph


Neven Kresic, Ognjen Bonacci, in Groundwater Hydrology of Springs, 2010

4.4.3 Frequency analysis of extreme flows


Frequency analysis of extremes is one of the most common and earliest
applications of statistics within hydrology. In frequency analysis, it is important to
distinguish between the population and the sample. According to Tallaksen,
Madsen, and Hisdal (2004), the analysis involves (1) definition of the hydrological
event and extreme characteristics to be studied, (2) selection of the extreme events
and probability distribution to describe the data, (3) estimation of the parameters
of the distribution, and (4) estimation of extreme events or design values for a
given problem.
It is possible to perform a frequency analysis by plotting the data without making
any distributional assumption. Because hydrological design for extreme flows
generally requires extrapolation beyond the range of observation and the available
record length is often insufficient to accurately define the probability distribution
for the sample from an unknown distribution, the main problem is to estimate
accurately the tails of the distribution, which contain the extreme events (Tallaksen
et al., 2004).
The uncertainty of the estimated extreme values depends strongly on the sample
size and its stationarity. In general, it is not recommended to perform frequency
analysis for a nonstationary time series. It is recommended that maximum and
minimum annual discharges be selected from hydrologic years rather than from
calendar years.
Many probability distributions have been found to be useful for frequency analysis
of extreme spring discharges. Theoretical explanations as well as detailed
discussion of numerous distributions can be found in many standard textbooks on
statistics. Here, we give a short explanation of three of them: (1) normal, (2)
lognormal, and (3) Gumbel or extreme value distribution.
In the United States and quite a few other countries, the analytical frequency
procedure recommended for annual maximum and minimum stream flows is the
logarithmic Pearson type III distribution. This distribution requires three
parameters for complete mathematical specification. The parameters are the mean,
or first moment (estimated by the sample mean); the variance, or second moment
(estimated by the sample variance); and the skew, or third moment (estimated by
the sample skew). Since the distribution is a logarithmic distribution, all
parameters are estimated from logarithms of the observations rather than from the

observations themselves. The Pearson type III distribution is particularly useful for
hydrologic investigations because the third parameter, the skew, permits fitting
nonnormal samples to the distribution. When the skew is 0, the log-Pearson type
III distribution becomes a two-parameter distribution that is identical to the
lognormal distribution (U.S. Army Corps of Engineers, 1993).
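As a rough sketch of the frequency-factor form of this procedure (not the full guidance of the cited manual), the snippet below fits a log-Pearson type III distribution to a hypothetical series of annual maxima with SciPy, estimating the three parameters from the logarithms of the observations.

```python
import numpy as np
from scipy import stats

# Hypothetical annual maximum discharges (m3/s); replace with an observed series
q = np.array([120., 95., 160., 140., 180., 110., 150., 135., 170., 125.])
logq = np.log10(q)

# First three moments of the logarithms: mean, standard deviation, skew
mean, std = logq.mean(), logq.std(ddof=1)
skew = stats.skew(logq, bias=False)

# 100-year quantile (annual exceedance probability 0.01) via the Pearson III frequency factor
k = stats.pearson3.ppf(1 - 0.01, skew)   # standardised Pearson III variate for the given skew
q100 = 10 ** (mean + k * std)
print(q100)
```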
The normal distribution is a symmetrical, bell-shaped, continuous distribution,
theoretically representing the distribution of accidental errors about their mean,
the Gaussian law of errors. The lognormal or Galton distribution is a transformed
normal distribution in which the variable is replaced by its logarithmic value. It is a
nonsymmetrical distribution, very convenient for the frequency analysis of many
hydrological parameters. The theory of extreme values considers the distribution of
the largest or smallest observations occurring in each group of repeated samples.
Gumbel (1941) was the first to employ extreme value theory for analysis of flood
frequency. Chow (1954) shows that Gumbel distribution is essentially a lognormal
distribution with constant skewness.
There are a few commonly used methods and approaches for testing the goodness of
fit of time series data to postulated theoretical probability distributions; for
example, the (1) χ2 (chi-squared) test, (2) Kolmogorov-Smirnov test, (3) probability
plot correlation coefficient test (Adeloye and Montaseri, 2002), and (4) L-moment
goodness of fit test (Tallaksen et al., 2004).
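A rough illustration of one of these checks, the Kolmogorov-Smirnov test applied to the three candidate distributions, is sketched below; the discharges are hypothetical, and because the parameters are fitted from the same sample the reported p-values are only indicative.

```python
import numpy as np
from scipy import stats

# Hypothetical maximum annual discharges (m3/s)
q = np.array([140., 155., 132., 168., 149., 175., 160., 128., 152., 163.])

# Fit each candidate distribution and apply the Kolmogorov-Smirnov test
for name, dist in [("normal", stats.norm),
                   ("lognormal", stats.lognorm),
                   ("Gumbel", stats.gumbel_r)]:
    params = dist.fit(q)
    ks_stat, p_value = stats.kstest(q, dist.cdf, args=params)
    print(f"{name:>9}: KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```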
Figure 4-29 is a graphical presentation of three probability distribution
curves (normal, lognormal, and Gumbel), defined for a measured maximum
annual discharge time series of the Kupari gauging station over the period 1951–
2007. Table 4-4 gives the values of maximum annual discharges for different
recurrence periods using the three previously mentioned distributions.

Figure 4-29. Graphical presentation of three probability distribution curves (normal, lognormal, and Gumbel), defined for the measured maximum annual discharge time series of the Kupari gauging station over the period 1951–2007.

Table 4-4. Normal, Lognormal, and Gumbel Values, Defined for the Measured
Maximum Annual Discharge Time Series of the Kupari Gauging Station over the
Period 1951–2007

Frequency   Recurrence Period (year)   Normal (m3/s)   Lognormal (m3/s)   Gumbel (m3/s)
0.5         2                          140             138                136
0.2         5                          159             158                156
0.1         10                         168             169                169
0.05        20                         176             179                181
0.04        25                         178             182                185
0.02        50                         185             191                197
0.01        100                        192             199                209
0.005       200                        197             207                221
0.002       500                        204             217                237
0.001       1000                       208             225                249

The zero flow values (dry spring) that occurred at some intermittent springs pose a
special problem for the frequency analysis of a time series of minimum annual
discharges. The problem can be solved using the theorem of total probability. The
following equation serves for calculating the probability of a discharge less than or
equal to the discharge Qi:

$P(Q_{\min} \le Q_i) = F(Q_i) = 1 - k + k \, F^*(Q_i)$  (4.35)

where $F^*(Q_i)$ is the probability distribution function of the nonzero discharges, given by

$F^*(Q_i) = \dfrac{\frac{1}{T} - 1 + k}{k}$  (4.36)

$k = N^* / N$  (4.37)

where T is the return period in years, N is the number of data (years) in the time
series, and N* is the number of data (years) with nonzero values.
The applicability of equation (4.36) requires positive values of the
probability F*(Qi), which means that

𝑘 ≥ (𝑇 − 1) / 𝑇 (4.38)

The equation to estimate the return period of zero discharge is

𝑇(0) = 𝑁 / (𝑁 − 𝑁∗ ) (4.39)
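The chain of equations (4.35)-(4.39) can be applied in a few lines; the sketch below assumes a hypothetical 40-year record with 36 nonzero annual minima and is only meant to show how the total-probability correction is used.

```python
def min_flow_statistics(T, N, N_star):
    """Mixed-distribution quantities for annual minima with zero-flow years.

    T      : return period in years
    N      : total number of years in the series
    N_star : number of years with a nonzero minimum discharge
    """
    k = N_star / N                               # Eq. (4.37)
    if k < (T - 1) / T:                          # Eq. (4.38): F*(Qi) must remain positive
        raise ValueError("return period too large for the available nonzero record")
    F_star = (1.0 / T - 1.0 + k) / k             # Eq. (4.36)
    F_total = 1.0 - k + k * F_star               # Eq. (4.35), equals 1/T
    T_zero = N / (N - N_star) if N_star < N else float("inf")   # Eq. (4.39)
    return k, F_star, F_total, T_zero

# Example: 40-year record, 36 years with nonzero minima, 5-year return period
print(min_flow_statistics(T=5, N=40, N_star=36))
```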

If a karst spring has a limited maximum or minimum outflow capacity, the classical
probability concept of frequency analysis of extreme discharges should be applied
carefully. In this case, discharge estimates for higher recurrence intervals (for
example, 100 years and more) have no physical meaning.



URL: https://www.sciencedirect.com/science/article/pii/B9781856175029000049

Introduction
Fateh Chebana, in
Multivariate Frequency Analysis of Hydro-Meteorological Variables, 2023

1.2 Purpose and aims


Multivariate HFA is a very active research topic in statistical hydro-meteorology. A
relatively large body of literature, dealing with multivariate HFA, is available mostly
as journal papers (theoretical developments, case studies, etc.). In general, these
papers treat specific aspects such as a hydrological event (e.g., flood, drought), a
particular step of the analysis (e.g., modeling, testing, exploratory analysis), or a
given statistical approach or technique (e.g., copulas, L-moments). In addition,
most of the literature focuses on the modeling step mainly based on the copula

function. Fig. 1.1 illustrates some of the aspects of multivariate HFA highlighted in
the literature, together with their importance and the volume of studies devoted to
each. It shows that the modeling step, especially based on copulas, is the most
prevalent in the literature. Although copulas and modeling are important and
essential, many other aspects and steps also need to be considered to perform a
complete and appropriate analysis. The multivariate HFA literature, published as
papers and reports, is not easily accessible to practitioners and students, which has
led to an increasing gap between research and practice in this field. There is
therefore a pressing need for a reference book in which the reader can find, in a
simplified and accessible presentation, all the relevant material covering the
different steps and situations of a multivariate HFA, the connections between
them, and a complete overview of all steps of the analysis.

Figure 1.1. Illustration of the importance/volume of studies in the literature on each step/topic in multivariate hydrological frequency analysis.
This book attempts to reduce or eliminate some of the challenges and difficulties
faced by practitioners in multivariate HFA. This book compiles all the relevant
background material and new developments in one place and also presents this
material in a homogeneous and pedagogical way in order to allow students,
engineers, practitioners, and researchers to access and use efficiently all the
information about this topic. In addition, the approaches in multivariate HFA and
the ongoing developments, even though useful and necessary, are advanced and
therefore complex for a majority of practitioners and students, especially readers
without a statistical background. Therefore, this book tries to simplify the
presentation of these concepts and hence aims to fill the gap between theory and
practice. Also, a major part of the literature neglects some of the steps of the
analysis (Fig. 1.1), potentially leading to incomplete analyses or even wrong
conclusions. Consequently, this book highlights the importance of those steps and
provides recent and advanced approaches to deal with them, along with examples
from real-life situations.
To the best of the author’s knowledge, there is no such existing book that deals
specifically and directly with the topic of multivariate HFA as a whole and in an
integrated manner. Indeed, the existing books mainly cover copula functions either
in hydrology or statistics, such as Salvadori et al. (2007), Zhang and Singh (2019),
and Chen and Guo (2019) in water sciences and Joe (2014) and Hofert et al. (2018)
in statistics. This book provides a solid platform bringing together multivariate
HFA tools in hydro-meteorological practice and contributes to filling the gap
between theory and practice and the advancement of the field of statistical hydro-
meteorology. This book enables the reader to perform a well-justified multivariate
HFA covering all relevant steps and aspects of the analysis, including the
preliminary important steps (e.g., testing the assumptions) and useful extensions
(nonstationary, regional). This book provides detailed and comprehensive
descriptions of the techniques and all steps involved in performing a complete
multivariate HFA.
In this book, the copula-based approach is given due importance and a large
chapter (Chapter 5) is dedicated to this topic, along with covering other important
topics, including hypothesis testing of the basic assumptions, the return period
and quantile, and preliminary analysis such as outliers and descriptive statistics.
Where appropriate, some examples based on the same datasets are presented
across several chapters to show how to perform the analysis and the steps involved.



URL: https://www.sciencedirect.com/science/article/pii/B9780323959087000098

Palaeoflood Hydrology
Gerardo Benito, Andrés Díez-Herrero, in
Hydro-Meteorological Hazards, Risks and Disasters, 2015

3.6 Flood Frequency Analysis Using Palaeoflood Data


FFA with systematic data assumes that the distribution of the unknown
magnitudes of the largest floods is well represented by the gauged record or that it
can be obtained by statistical extrapolation from recorded floods (usually modest
floods). However, the limit of credible statistical extrapolation relative to the typical
length of gauged discharges (40–50 years) corresponds, at best, to a period of
200 years (England et al., 2006). The value of palaeoflood records is their potential
for incorporating physical evidence of rare floods and limits to their largest
magnitude (Figure 3.5(e)). The use of historical and palaeoflood information gives
rise to two specific problems: (1) nonsystematic data (only the major floods remain
known); and (2) nonhomogeneous data (hydroclimatically induced nonstationarity
due to natural climatic variability within the past 1,000–10,000 years). These
problems are discussed in detail by Redmond et al. (2002), Benito et al. (2004), and
Francés (2004).
In hydrology, flood observations reported as having occurred above some threshold
are known as censored data sets (Leese, 1973). Palaeoflood information is
considered data censored above a threshold (Figure 3.5(e)) and it is assumed that
the number of k observations exceeding an arbitrary discharge threshold (XT) in M
years is known, similar to the partial duration series (Stedinger and Cohn, 1986;
Francés et al., 1994). The value of the peak discharge for palaeofloods above XT may
be known or unknown. Palaeoflood data are organized according to different fixed
threshold levels exceeded by flood waters over particular periods of time. Estimated
flood discharges obtained from the minimum high-water palaeoflood indicators
and maximum bounds (nonexceeded threshold sense; Levish et al., 1997) can be
introduced as minimum and maximum discharge values (Figure 3.5(e)). Estimates
of statistical parameters of flood distribution functions (e.g., Gumbel, LP3 or
upper-bounded statistical models; Botero and Francés, 2010) are calculated using
maximum likelihood estimators (Leese, 1973; Stedinger and Cohn, 1986; Figure
3.5(f )), the expected moment algorithm (Cohn et al., 1997), and a fully Bayesian
approach (O'Connell et al., 2002; Reis and Stedinger, 2005), providing a practical
framework for incorporating imprecise and categorical data as an alternative to the
weighted moment method (U.S. Water Resources Council, 1982).
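A simplified sketch of such a censored maximum likelihood fit is given below: the gauged annual maxima enter through the Gumbel density, while the palaeoflood record contributes a binomial term for k exceedances of the threshold X_T in M years. The Gumbel model, the discharges, and the censoring numbers are assumptions made only to illustrate the structure of the likelihood.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical gauged annual maxima (m3/s) and a censored palaeoflood record
gauged = np.array([310., 260., 420., 380., 290., 350., 500., 330., 270., 450.])
X_T = 800.0      # discharge threshold documented by palaeoflood evidence
M, k = 1000, 3   # in M years before the gauged record, k floods exceeded X_T

def neg_log_lik(params):
    loc, scale = params
    if scale <= 0:
        return np.inf
    # exact (systematic) observations enter through the density
    ll = stats.gumbel_r.logpdf(gauged, loc, scale).sum()
    # censored palaeoflood information enters through a binomial exceedance term
    p_exceed = stats.gumbel_r.sf(X_T, loc, scale)
    ll += k * np.log(p_exceed) + (M - k) * np.log1p(-p_exceed)
    return -ll

res = optimize.minimize(neg_log_lik, x0=[gauged.mean(), gauged.std()], method="Nelder-Mead")
loc_hat, scale_hat = res.x
print(stats.gumbel_r.ppf(1 - 0.01, loc_hat, scale_hat))   # 100-year flood estimate
```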
Before the statistical analysis is carried out, the general characteristics and
stationarity of the flood series must be considered. The temporal changes in the
trajectory and statistics of a variable may correspond to natural, low-frequency
variations of the climate's hydrological system or to nonstationary dynamics related
to anthropogenic changes in key parameters such as land use and atmospheric
composition. Flood record stationarity from censored samples (systematic and/or
nonsystematic) can be checked using Lang's test (Lang et al., 1999). This test
assumes that the flood series can be described by a homogeneous Poisson process.
The 95 percent tolerance interval of the cumulative number of floods above a
threshold, or censored, level is computed. Stationary flood series are those
remaining within the 95 percent tolerance interval (Naulet et al., 2005). Recent
advances in FFA have focused on modelling time series under nonstationary
conditions, such as Generalized Additive Models for Location, Scale and Shape
parameters (GAMLSS; Rigby and Stasinopoulos, 2005). The GAMLSS approach is

able to describe the temporal variation of statistical parameters (mean, variance) in
probability distribution functions (Gumbel, Lognormal, Weibull, Gamma). The
statistical parameters may show increasing/decreasing trends that can be modelled
using time as covariate (characterizing the trend or as a smooth function via cubic
splines; Villarini et al., 2009), or they can be related to hydroclimatic covariates such
as climatic indices that reflect low-frequency climatic variability (e.g., Pacific
Decadal Oscillation, North Atlantic Oscillation, Arctic Oscillation; López and
Francés, 2013). The application of these nonstationary models to palaeoflood
hydrology requires a characterization of the occurrence rate (covariate) during the
recorded period.



URL: https://www.sciencedirect.com/science/article/pii/B9780123948465000035

The Science of Hydrology


S. Grimaldi, ... A. Gedikli, in Treatise on Water Science, 2011

2.18.6.3 Open Problems and New Advances


RFFA has been a research sector for more than five decades now, yet the scientific
community is still very active on this topic. This results from the existence of open
problems, as discussed later, but it can also be ascribed to the potential of
statistical regionalisation for solving a very common problem in hydrology, that is,
prediction in ungauged basins (see, e.g., Sivapalan et al., 2003). For instance,
probabilistic interpretation and regionalization of classical deterministic
hydrological tools (e.g., flow duration curves, FDC, regional envelope curves of
flood flows, REC, etc.) renewed the scientific appeal of these simple methods,
further promoting RFFA among hydrological research topics (see, e.g., Castellarin
et al. (2004, 2007) for FDC and Castellarin et al. (2005) and Castellarin (2007) for
REC).
Some issues and aspects associated with RFFA may perhaps be considered to be
well studied and the margin of improvement in the accuracy of regional estimates
associated with them is probably rather limited. Examples are the choice and
estimation of the regional parent distribution or the statistical homogeneity testing
(Castellarin and Laio, 2006). Some other issues are still critical, instead, and further
analyses may significantly improve the accuracy of regional predictions in
ungauged sites.
One of these issues is certainly the estimation of the index flood in ungauged
basins. Figure 15 eloquently shows, for a given case study, that the largest amount
of uncertainty is associated with this step of the regionalization procedure, and this
is a widespread condition (see, e.g., Kjeldsen and Jones, 2007).
Investigators are still dedicating a great deal of effort to the improvement of
existing methodologies (see, e.g., Shu and Burn, 2004; Kjeldsen and Jones, 2007)
and to the definition of guidelines for the identification of the most reliable and
suitable ones depending on the problem at hand (see, e.g., Bocchiola et al., 2003).
Also, classical studies document that intersite correlation among flood flows
observed at different sites is typically not negligible (see, e.g., Matalas and
Langbein, 1962; Stedinger, 1983) and leads to increases in the variance of regional
flood statistics (see, for instance, Hosking and Wallis, 1988). Nevertheless, the
impact of cross-correlation on regional estimates is still poorly
understood. Recent studies have pointed out that cross-correlation may
significantly reduce the regional information content in practical applications,
quantified in terms of equivalent number of independent observation (see, e.g.,
Troutman and Karlinger, 2003, Castellarin et al., 2005; Castellarin, 2007). This
reduction has an impact on the reliability of regional quantiles, as it increases the
variance of regional estimators, and can also severely affect the power of statistical
tests for assessing the regional homogeneity degree (Castellarin et al., 2008).
The delineation of homogeneous pooling groups of sites, or catchment
classification, is still an open and highly debated problem, on which the scientific
community is very active (see, e.g., McDonnell and Woods, 2004). Concerning this
issue, the main research activities focus on: (1) the identification of the most

descriptive and informative physiographic and climatic catchment descriptors to be
used as proxies for the flood frequency regime (see, e.g., Castellarin et al., 2001;
Merz and Blöschl, 2005) and (2) the development of pooling procedures as
objective and nonsupervised as possible. Several objective approaches have been
proposed by the scientific literature, such as cluster analysis (Burn, 1989) or
unsupervised artificial neural networks (ANNs) (see, e.g., Hall and Minns, 1999;
Toth, 2009). Furthermore, the scientific community is dedicating increasing
attention to the possibilities offered by the application of geostatistical techniques
to the problem of statistical regionalisation. These techniques have been shown to
have a significant potential for regionalization and, for this reason, will be briefly
discussed and presented in this section.
Geostatistical procedures were originally developed for the spatial interpolation of
point data (see, e.g., kriging: Kitanidis, 1997). The literature proposes two different
ways to apply geostatistics to the problem of regionalisation of hydrological
information. The first technique is called physiographic space-based interpolation
(PSBI) and performs the spatial interpolation of the desired hydrometric variable
(e.g., T-year flood, but also annual streamflow, peak flow with a certain return
period, low flows, etc.) in the bidimensional space of geomorphoclimatic
descriptors (Chokmani and Ouarda, 2004; Castiglioni et al., 2009). The x and y
orthogonal coordinates of the bidimensional space are derived from an adequate
set of n>1 geomorphologic and climatic descriptors of the river basin (such as
drainage area, main channel length, mean annual precipitation, and indicators of
seasonality; see Castellarin et al., 2001) through the application of multivariate
techniques, such as the principal components or canonical correlation analysis (Shu
and Ouarda, 2007). The second technique, named Topological kriging or
Topkriging, is a spatial estimation method for streamflow-related variables. It
interpolates the streamflow value of interest (i.e., T-year flood, low-flow indices,
etc.) along the stream network by taking the area and the nested nature of
catchments into account (Skøien et al., 2006; Skøien and Blöschl, 2007).
The philosophy behind these innovative approaches to regionalization is rather
interesting because they enable one to regionalize hydrometric variables
dispensing with the definition or identification of homogeneous regions or
pooling groups of sites (see Figure 13). The approaches are particularly appealing
for predictions in ungauged basins as they provide a continuous representation of
the quantity of interest (e.g., T-year flood) along the stream network (Topkriging) or
in the physiographic space (PSBI), providing the user with an estimate of the
uncertainty associated with the interpolated value. In particular, the final output for
Topkriging is the estimation of the measure of interest (with uncertainty) along the
stream network (see Figure 16). A little less intuitive is the output of PSBI. With this
technique any given basin (gauged or ungauged) can be represented as a point in
the x–y space described above; in the same way the set of gauged basins of the
study area can be represented by a cloud of points in this space. The empirical
values of the quantity of interest (e.g., T-year flood) can be represented along the
third dimension z for each gauged catchment, and can then be spatially
interpolated (with uncertainty) by applying a standard interpolation algorithm (e.g.,
ordinary or universal kriging). The spatial interpolation enables one to represent
the quantity of interest over the entire portion of the x–y space containing
empirical data, and therefore to estimate it at ungauged sites lying within the same
portion of the space (see Figure 16).
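A toy sketch of the PSBI idea is given below, with Gaussian-process regression standing in for ordinary kriging (it likewise returns an uncertainty for the interpolated value). The catchment descriptors, the flood quantiles, and the use of PCA to build the x-y physiographic space are assumptions made for illustration only, not the procedure of the cited studies.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical descriptors of gauged basins:
# columns = drainage area (km2), main channel length (km), mean annual precipitation (mm)
descriptors = np.array([
    [120., 25., 900.], [450., 60., 1100.], [80., 18., 850.],
    [300., 45., 1000.], [200., 35., 950.], [600., 75., 1200.],
])
q100 = np.array([210., 640., 150., 460., 330., 820.])   # empirical 100-year floods (m3/s)

# Project the (log-transformed) descriptors onto a 2D "physiographic space"
pca = PCA(n_components=2).fit(np.log(descriptors))
xy = pca.transform(np.log(descriptors))

# Gaussian-process regression plays the role of kriging over this space
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True).fit(xy, q100)

# Ungauged basin: map its descriptors into the same space and interpolate (with uncertainty)
new_xy = pca.transform(np.log([[250., 40., 980.]]))
mean, std = gp.predict(new_xy, return_std=True)
print(mean, std)
```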


Figure 16. (a) Topkriging: 100-year flood per unit area (color codes in m3 s−1 km−2)
for a portion of the Mur region (Austria): Topkriging estimates along the stream
network and empirical values as circles. From Skøien JO, Merz R and Bloschl G
(2006) Top-kriging - geostatistics on stream networks. Hydrology and Earth System
Sciences 10(2): 277–287, Fig. 7. (b) PSBI: 3D representation of standardised value of
100-year flood over the physiographic space identified for a set of basins in
northern central Italy (gauged basins are represented as dots).



URL: https://www.sciencedirect.com/science/article/pii/B9780444531995000464

Regional flood frequency curves for remote rural areas of the Nile River Basin: The case of Baro-Akobo drainage basin, Ethiopia
Semu Ayalew Moges, Meron Teferi Taye, in Extreme Hydrology and Climate Variability, 2019

30.1 Introduction
Flood frequency analysis is a technique commonly used to relate the magnitude of
extreme runoff or river flow events to their frequency of occurrence through the
use of probability distribution functions. Historical records provide essential
information to predict the recurrence interval of hydrological extremes. Such
information on high flow events is crucial in many applications: dike design for flood
protection, reservoir design and management, river pollution control, and ecological
management (Engeland et al., 2005). Data observed over an extended period of time
at river gaging stations are the main input in flood frequency analysis. The
assumption is made that flood data are independent in space and time, identically
distributed, and uninfluenced by natural or man-made changes in the hydrological
system (Rao and Hamed, 2000). This analysis estimates the recurrence interval of
extreme events by extrapolating beyond the length of the record. Obviously, the
longer the available record, the more accurate the estimates are.
It is likely that sufficient information can be extracted from the “at-site” analysis for
sites with available data. Nevertheless, it is a challenge to estimate floods for
ungaged sites or sites with short time series, which influences the reliability of
estimation. In such cases, regionalization techniques assist in estimating flows of
required return periods at ungaged locations of interest. The primary goal of
regional frequency analysis is to investigate typical observations of the same
variable at numerous measuring sites within a suitably defined “region” (Hosking
and Wallis, 1997). This leads to more accurate conclusions when analyzing all data
samples rather than using only a single sample. The final result of the regional
analysis is a regional curve, which permits flood frequency and quantile
estimations at any location along the basin’s networks. This principle is applied to
the Baro-Akobo River basin to support the basin’s water management decisions.
The regionalization concept applicable to all drainage basins within a
homogeneous region was first introduced by Dalrymple (1960), which “trades
space for time,” by using data from nearby or similar sites to estimate quantiles of

the underlying variable at each site in the homogenous region of consideration
(Stedinger et al., 1993). The concept has been continuously developed since, and new
approaches have been regularly proposed by researchers (Benson, 1962; Matalas and
Gilroy, 1968; Vicens et al., 1975; NERC, 1975; Greiss and Wood, 1981; Rossi et al.,
1984; Hosking et al., 1985; Lettenmaier et al., 1987; Burn, 1990; Stedinger and Lu,
1995; Hosking and Wallis, 1997; Reed et al., 1999; Sveinsson et al., 2001).
One of the most important and challenging steps in regional flood frequency
analysis is the delineation of homogeneous regions. There are no universally
accepted objective methods for such delineation. This is due to the complexity of
factors that affect the generation of floods. To mention some examples, in the UK
flood studies report the British Isles were divided into 11 regions on the basis of
general knowledge of hydrological regime (NERC, 1975). Matalas et al. (1975)
divided the United States into 14 geographical regions, based on the variability of
skewness of instantaneous flood peak data. In New Zealand, Mosley (1981) used
cluster analysis. In Pennsylvania, White (1975) used factor analysis of a collection of
physically similar basins. Acreman and Sinclair (1986) used cluster analysis on a
matrix of basin characteristics to identify similar basins in Scotland. Wiltshire
(1986) used an iterative search procedure to optimally divide the basin
characteristic data space for flood regionalization and also used multivariate
discriminant analysis to form groups for British catchments. A number of other
regionalization techniques were also developed for objective determination of
homogeneous regions. In this study, regionalization of the basin is based on the Q-Q
plot method to identify groups of stations with similar distribution tail behavior,
and regression equations are developed between the mean annual maximum
flow (MAF) of the catchments and their corresponding physical characteristics.
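A minimal sketch of the second ingredient, a regression between MAF and a physical catchment characteristic, is shown below; the power-law form, the catchment data, and the regional growth factor are hypothetical and serve only to illustrate an index-flood style calculation at an ungauged site.

```python
import numpy as np

# Hypothetical mean annual maximum flows (MAF, m3/s) and catchment areas (km2)
area = np.array([150., 320., 540., 890., 1200., 2500.])
maf = np.array([45., 80., 120., 170., 210., 360.])

# Fit a power-law relation MAF = a * area^b by linear regression in log space
b, log_a = np.polyfit(np.log(area), np.log(maf), 1)
a = np.exp(log_a)

# Quantile at an ungauged site: estimated MAF times a dimensionless regional growth factor
maf_ungauged = a * 700.0 ** b        # hypothetical 700 km2 catchment
growth_factor_100yr = 2.4            # hypothetical value read from the regional curve for T = 100 years
print(maf_ungauged * growth_factor_100yr)
```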



URL: https://www.sciencedirect.com/science/article/pii/B9780128159989000300

Hydrological-Hydraulic Modeling of floodplain inundation: A case study in Bou Saâda Wadi—Subbasin_Algeria
Zohra Abdelkrim, ... Saeid Eslamian, in Handbook of Hydroinformatics, 2023

4 Results and discussion


4.1 Peak discharge estimation
Flood frequency analysis (using the HYFRAN-Plus software) was conducted to
estimate peak flows for different return periods for the hydraulic simulation. It
showed that the Gumbel distribution (Fig. 8) provides the best statistical fit.
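For illustration only (this is not the HYFRAN-Plus procedure), a Gumbel fit and the corresponding return-period quantiles can be computed as sketched below; the peak discharges are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical annual peak discharges (m3/s) at the study gauging site
peaks = np.array([52., 75., 40., 98., 63., 110., 85., 47., 70., 130., 58., 92.])

loc, scale = stats.gumbel_r.fit(peaks)

# Peak discharge for the return periods used in the hydraulic simulation
for T in (5, 10, 50, 100):
    q_T = stats.gumbel_r.ppf(1 - 1.0 / T, loc, scale)
    print(f"T = {T:>3} years: Q = {q_T:.1f} m3/s")
```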


Fig. 8. Flood frequency analysis using the Gumbel distribution.


4.2 Delineation of Bou Saâda Wadi—Subbasin
RAS geometric data were created and delineated using a TIN as base data in the RAS
Geometry module of HEC-GeoRAS, and were then exported as RAS data to be used in
HEC-RAS, as shown in Fig. 9.


Fig. 9. Stream geometry created using TIN.


GIS data were exported after the steady flow analysis was performed in HEC-RAS and
imported into ArcGIS for inundation analysis using RAS Mapping.
The delineated floodplain area at different return-period peak discharges is shown
in Fig. 10.


Fig. 10. 3D multiple cross-section plot.


4.3 Floodplain mapping for return periods
The areas along the Bou Saâda Wadi—Subbasin were simulated to be inundated for
the 5, 10, 50, and 100 year return periods.
The floodplain inundation analysis results in Table 1 show that the 5, 10, and 50 year
return floods inundated 230.76, 239.31, and 254.52 km2, respectively, of built-up
and vegetation (forest, grass) areas. The 100 year return period flood inundated
265.77 km2; such a high flood volume can cause extreme damage over a large area.

Table 1. Classification of flooded area according to land use for 5, 10, 50, and
100 years return period.

Return period: 5 Years
Land use                     Area (km2)   Minimum water level (m)   Maximum water level (m)
Built Up                     156.06       0.000061035               5.222117706
Vegetation (forest, grass)   74.7
Total land inundation        230.76

Return period: 10 Years
Land use                     Area (km2)   Minimum water level (m)   Maximum water level (m)
Built Up                     161.19       0.000061035               5.3085753
Vegetation (forest, grass)   78.12
Total land inundation        239.31

Vegetated areas have a low potential for flooding because vegetation reduces the
size of the flood peak (forest and grass absorb the impact of falling water).

Figs. 11–14 depict the generated river flood hazard maps based on water depth and
the intersection of land use with the flood area boundaries for each simulated flood
event.

Fig. 11. Floodplain coverage extent on the land use for 5-year return period.


Fig. 12. Floodplain coverage extent on the land use for 10-year return period.

Fig. 13. Floodplain coverage extent on the land use for 50-year return period.


Fig. 14. Floodplain coverage extent on the land use for 100-year return period.
The flood inundation maps indicate a high risk to land use along the Wadi, with
considerable water depths.
For wadi flood hazard mapping and management, the water depth distribution is
essential in order to determine flooded areas. Floodplain inundation maps for
various return periods (5, 10, 50, and 100 years) of the study area can be mapped
and overlaid on the land use of the Bou Saâda Wadi—Subbasin to analyze the
damage caused to built-up and vegetated areas.
The approximation of a flood-prone area on a map is shown in Figs. 11–14:
As we have seen in the results, gradually varied steady flow is characterized by
minor changes in water depth from one cross-section to another.
The flooded areas along the Bou Saâda Wadi—Subbasin are 230.76 km2,
239.31 km2, 254.52 km2, and 265.77 km2 for 5, 10, 50, and 100 year return
periods, respectively. The roads affected by floods have considerably disrupted
trade exchanges.
The inundation depth for the 100 year return period ranged from 0.000061035 to
5.40 m; the built-up areas and the vegetation (forest, grass) were exposed to
overflows, with the impacts on built-up areas being the highest: the flood
inundation area for the 100 year return period covers 66.54% of the built-up area
and 33.45% of the vegetation (forest, grass). Similarly, the land use affected by the
5, 10, 25, and 50 year floods has different inundation depths.
People living near the river banks affected by catastrophic flooding are directly
exposed to inundation depths of over 5.40 m and need further consideration for
flood protection.
Flooding is usually a result of natural causes, but it may also be caused by man-made
factors. Urbanization and increasing density can raise the risk, with unplanned
construction on flood plains, especially in high-risk areas, without proper
infrastructure.
In Bou Saâda Wadi—Subbasin, flood causes can be attributed to:

1. The characteristics of the Bou Saâda Wadi: it has a gradual slope and a relatively
low-lying floodplain, so the river flood hazard is higher here than at other locations.
2. The increase in land-use development in the Bou Saâda Subbasin causes an
increase in imperviousness, which leads to an increase in runoff volume.
3. An improper drainage system throughout the Bou Saâda Subbasin.
Based on the findings of this study, a variety of mitigation measures can be
identified which will minimize the impact of flooding in Bou Saâda Wadi—
Subbasin:
• Structural measures.
• Nonstructural measures: these include land use regulations, e.g., relocating the
population residing along the wadi banks and preventing any new residential
structures in areas with a high risk of flooding.
• Early warning networks should be constructed in the city so that people can take
the necessary precautions.
• Field data on the channel roughness coefficient need to be collected, which can
help to obtain better results from the hydraulic model.



URL: https://www.sciencedirect.com/science/article/pii/B9780128219621000118

Mass-Movement Geomorphology
Simon Loew, ... Werner Gerber, in Treatise on Geomorphology (Second Edition),
2022

5.09.2.1.2 Rockfall release frequency and retreat rate


Rockfall release frequency analysis is useful for hazard assessment but also for
studying the predisposition and causal factors of rockfalls. The comparison of
frequencies at different sites allows one to study the influence of site-specific factors
(predisposition), and the comparison of frequencies in a particular cliff for different
periods allows one to analyze the influence of time-varying factors.
Frequency analysis requires a rockfall inventory, which may be obtained from a
historical record of rockfall events (often along roads or railway tracks), from
periodic topographical measurements of a cliff (terrestrial laser scanner or
photogrammetry) or from seismic monitoring. Historical inventories can cover
large areas and long periods (several decades), and consequently they usually
include bigger rockfalls than the other inventories. Terrestrial laser scanning or
photogrammetry allows the detection of rockfalls as small as 10−3 m3.
Frequency analyses aim to estimate the temporal or the spatio-temporal frequency
of rockfalls as a function of their volume. The spatio-temporal frequency allows
comparison of the rockfall activities of different cliffs. The frequency analyses can display
the cumulative or the non-cumulative distribution of the rockfall volumes. These
distributions are usually fitted by a power law for the volume range where the
inventory is exhaustive (e.g. Gardner, 1970; Hungr et al., 1999). Then the spatio-
temporal frequency F of rockfalls bigger than a volume V can be expressed as:

$F = A \left( \dfrac{V}{V_0} \right)^{-B}$  (1)

where A is the frequency of rockfalls with a volume bigger than $V_0$ (an activity
parameter) and B is a uniformity coefficient, which reflects the decrease of the
parameter) and B is a uniformity coefficient, which reflects the decrease of the
frequency when the volume increases. V0 is the minimal value of the considered
volume range or a minimal volume of interest, which depends on the context of
the analysis. Examples of spatio-temporal frequency distributions are given in
Fig. 9. The retreat rate of the cliff can be estimated by integration of the volume-
frequency relation, which can be extrapolated in order to account for rare large
events (Hantz et al., 2003a, b).
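A rough sketch of fitting Eq. (1) to an inventory is given below; the rockfall volumes, the survey duration, and the cliff area are hypothetical, and the simple log-log regression used here is only one of several ways to estimate A and B.

```python
import numpy as np

# Hypothetical rockfall inventory: volumes (m3) observed on one cliff during the survey
volumes = np.array([0.002, 0.005, 0.01, 0.02, 0.03, 0.08, 0.15, 0.4, 1.2, 5.0])
years, cliff_area = 10.0, 5.0e4      # survey duration (yr) and cliff area (m2)
V0 = 0.001                           # minimal volume of interest (m3)

# Cumulative spatio-temporal frequency of rockfalls larger than each observed volume
v_sorted = np.sort(volumes)
freq = (len(volumes) - np.arange(len(volumes))) / (years * cliff_area)

# Fit F = A * (V / V0)**(-B) by linear regression in log-log space
slope, log_A = np.polyfit(np.log(v_sorted / V0), np.log(freq), 1)
A, B = np.exp(log_A), -slope
print(A, B)   # activity parameter A and uniformity coefficient B
```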


Fig. 9. Spatio-temporal rockfall frequencies in massive gneiss and bedded limestone cliffs.
Modified from Guerin A, D’Amato J, Hantz D, Rossetti J-P, Jaboyedoff M (2014) Investigating rockfall
frequency using terrestrial laser scanner. Vertical Geology Conference, Lausanne, Switzerland, 251–
254.
Besides rockfall frequency analysis, rockfall activity can be studied by periodic
measurement of the mass of rockfall material deposited in collectors located at the
foot of cliffs (e.g. Krautblatter and Moser, 2009) and by deriving the retreat rate of
the cliff. This method usually concerns a limited cliff area, but it allows small
fragments to be considered. For example, Sass (2005) and Matsuoka (2019) used wire
nets with 10 mm mesh to collect rock fragments as small as 10−3 m3.



URL: https://www.sciencedirect.com/science/article/pii/B9780128182345000663

Rainfall regionalization techniques


Pierluigi Claps, ... Paola Mazzoglio, in Rainfall, 2022

12.2 Variables to be regionalized, data preparation, and data scarcity


12.2.1 Regionalized variables
In a RFA, the selection of the variable to be regionalized requires some discussion,
as it depends on the availability of raw data versus pre-processed data, and can
have an impact on the dataset preparation. If a complete sequence of observations
at the gauging maximum resolution (e.g., 1–2 minutes for electronic devices, or a
continuous line for analogic devices) is available, it is possible to compute the
“complete duration series” (CDS) by applying a moving average of a given width (d)
to the whole record; the CDS will then include all the measured rainfall depth in a
year aggregated at the duration d, and will allow the widest possible range of
analysis. From the complete sequence one can indeed extract the peak values but
also evaluate other occurrence measures, like the inter-event waiting time or the
seasonality of events. If the time series contains only those events that exceed a
fixed threshold, for whatever duration, it is named partial duration series or peaks-
over-threshold (POT) series. The most popular selection of data for the analysis of
rainfall extremes is, however, the block-maxima selection, which relies on series that
contain only the highest values occurring in a fixed period; if the period is one
year, the sample represents the annual maximum series (AMS).
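The CDS-to-AMS extraction outlined above can be sketched as follows; the synthetic one-minute record and the 60-minute aggregation duration are assumptions made only to show the mechanics.

```python
import numpy as np
import pandas as pd

# Synthetic 1-minute rainfall record (mm per minute) spanning several years
index = pd.date_range("2015-01-01", "2018-12-31 23:59", freq="min")
rain = pd.Series(np.random.default_rng(1).gamma(0.02, 0.5, len(index)), index=index)

d = 60  # aggregation duration in minutes (e.g., hourly rainfall depths)

# Complete duration series (CDS): depth accumulated over every d-minute window
cds = rain.rolling(window=d).sum()

# Annual maximum series (AMS): largest d-minute depth in each year
ams = cds.groupby(cds.index.year).max()
print(ams)
```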

As initially suggested by Gumbel (1954), AMS is the data set most commonly used
in probabilistic analyses. POT is sometimes preferred when the objective is to
increase the time series length, especially while dealing with heavy-tailed
distributions (Madsen et al., 1997). However, it must be considered that the POT
method introduces discretional terms in the choice of the threshold value, as
clarified by Claps and Laio (2003). CDS advantages are described by Marani and
Zanetti (2015) and can be taken into account when critical high-return period
estimation is the objective. However, when using CDS, more evidence is still
required on the selection of the minimum set of necessary data, given a target
return period (Requena et al., 2019).
To apply analyses on long records, however, one must consider that for older
records only AMS may be available, and this issue may limit the operational
options.
Whatever the available observations are, the definition of the variables to use for
the derivation of the regional dimensionless frequency curve K(T) is not a trivial
task, similarly to what happens in the case of flood peaks. The main alternatives in
literature are to proceed towards regionalization of rainfall quantiles (Svensson and
Jones, 2010) for one duration within the simple scaling assumption (Soltani et al.,
2017) or for different durations under the multiscaling approach (i.e., when Kd(T)
changes with the duration d, Burlando and Rosso, 1996). However, many authors
assume that it is preferable to regionalize the probability distribution parameters
(Svensson and Jones, 2010 and reference therein) or the sample ordinary moments,
or the L-moments (Modarres and Sarhadi, 2012; Ngongondo et al., 2012; Smithers
and Schulze, 2001). In all cases, a thorough understanding of the dependence of
the variance of the regionalized parameters on the record length is necessary,
especially for the estimation of high-return period quantiles.
Our evaluation is that the average practitioner would not easily distinguish the
differences in the outcomes when different variables have been chosen for the
regionalization. Differences can emerge in “critical applications,” where the aim is
the estimation of high-return period design rainfall, or in the evaluation of rainfall
spatial variability in regions with few data and high hydro-climatic variability (e.g.,
mountainous regions). In both cases, however, marked differences can arise from
the intervention of individual (significant) records, even if included in short time
series (Libertino et al., 2018).
12.2.2 Data preparation and data scarcity
As mentioned by CSAGroup (2019), practitioners are now more and more
concerned with data and methods that can emphasize the evolution of rainfall
hazard due to climate change. For this reason, the management of large databases
of rainfall measurements requires considering and managing the presence of gaps in
the time series, the use of recent but short records, and the possible role of rain
gauge relocation.
In the literature of rainfall extremes two main approaches are applied to deal with
the data fragmentation issue. The first one is draconian and is based on the
definition of a minimum acceptable threshold of record length: in this approach, to
achieve the data homogeneity required to proceed with the RFA, only the time
series longer than a given threshold will be considered in the analysis. Although
this approach can be considered as precautionary, one can end up discarding
important information included in some short records, and this can affect the
reliability of the RFA (Ouali et al., 2016; Libertino et al., 2018).
The second approach tries to preserve the available information in relation to its
influence on each parameter to be estimated (Laio et al., 2011). In essence, even if
short time series cannot provide great influence on higher-order moments, their
data can be confidently used to improve the estimation of the first order
parameters. Considering that this second approach may lead to errors if based on
non-robust assumptions (Teegavarapu and Nayak, 2017), significant research efforts
have recently been directed at data augmentation techniques, also using ancillary
information. A brief review is reported below.
Most of the methodologies proposed to retain information from short records are
based on interpolation or data reconstruction. For instance, Pappas et al. (2014) use
time series autocorrelation to deal with sporadic time gaps. However, low
autocorrelation or frequent and systematic missing data can make the method not
applicable. Ouali et al. (2016) propose a conditional quantile regression model that,
even if it does not address data reconstruction, can be ascribed to the “data
augmentation” category. The “patched kriging” (PK) approach, proposed by
Libertino et al. (2018), reconstructs missing values using contemporary
observations at nearby sites through a sequential application of the ordinary
kriging. Homogeneous annual maxima series at each location can also be obtained
with a bootstrap procedure, that accounts for all the measurements in nearby
stations (Uboldi et al. 2014). In this case, data relevance decreases as the distance
between the rain gauge and the estimation point increases. However, the results of
this application end up being highly sensitive to the presence of outliers in the
observed series, and can produce quantiles that can deviate substantially from
those computed from local samples even of consistent length.
Data augmentation in rainfall frequency analysis from ancillary data is the last
frontier, both for applications to areas affected by data scarcity and for increasing
the spatial detail of the statistical analyses. In the first case, some studies
investigate the possibility of evaluating statistics using hydrometeorology attributes
rather than inadequately observed precipitation measurements (Satyanarayana and
Srinivas, 2008). This approach, preliminarily tested in India over the summer
monsoon affected regions, suggests using both large-scale atmospheric variables
that affect the precipitation in the study region, and location attributes as
additional features in the regionalization. Over large areas, satellite data can
certainly provide additional insights in RFA (Qamar et al., 2017) since global-scale
rainfall datasets like those acquired by TRMM and GPM constellations have
homogeneous and reasonably long time series. However, additional work is
required to take enough advantage of this information, considering its low
resolution in space and some inherent inaccuracies (Zorzetto and Marani, 2020).
Data scarcity is, however, a relative concept, as it can depend on the required
details of the spatial rainfall analysis. Urban hydrological applications, for instance,
tend to be data-hungry because of the need to reproduce rainfall estimates at high
resolution. In this context, weather radar data can provide interesting support to
the regional frequency analyses, as witnessed by recent literature (Goudenhoofdt
et al., 2017; Ochoa-Rodriguez et al., 2019; Kašpar et al., 2021). As seen from the
above applications, new and interesting developments in rainfall frequency analysis
can therefore come from the inclusion of additional information from other
sources, providing advances already seen with the use of data assimilation in
rainfall nowcasting applications (Li et al., 2018).



URL: https://www.sciencedirect.com/science/article/pii/B9780128225448000135

Multivariate nonstationary frequency analysis


Fateh Chebana, in
Multivariate Frequency Analysis of Hydro-Meteorological Variables, 2023

7.3.3 Covariate-varying margins


Since in HFA the focus is on extreme events, the joint distribution (including
marginal distributions and copulas) should be selected accordingly (see Chapter 5).
In univariate nonstationary HFA, among the most employed distributions, we have
the GEV and log-Normal distributions with two or three parameters (denoted LN2
and LN3, respectively). For GEV, recall that its cumulative distribution function is
given by
$F_{\mathrm{GEV}}(x; \mu, \sigma, k) = \exp\left\{ -\left[ 1 + k \, \dfrac{x - \mu}{\sigma} \right]^{-1/k} \right\}$ if $k \neq 0$, and $\exp\left\{ -\exp\left( -\dfrac{x - \mu}{\sigma} \right) \right\}$ if $k = 0$  (7.5)

for $x \ge \mu - \sigma/k$ if $k > 0$ (Fréchet), $x \in \mathbb{R}$ if $k = 0$ (Gumbel), and $x \le \mu - \sigma/k$ if $k < 0$ (Weibull), where $\mu \in \mathbb{R}$, $\sigma > 0$, and $k \in \mathbb{R}$ are the location, scale, and shape parameters, respectively. However, for the LN2 and LN3 distributions, it is preferable to present their density functions, respectively, as


$f_{LN2}(x; \mu, \sigma) = \dfrac{1}{x\,\sigma\sqrt{2\pi}}\, e^{-\frac{(\log x - \mu)^2}{2\sigma^2}}$ for $x > 0$  (7.6)

$f_{LN3}(x; \mu, \sigma, m) = f_{LN2}(x - m; \mu, \sigma)$ for $x > m$  (7.7)

where $\mu \in \mathbb{R}$ and $\sigma > 0$ are, respectively, the mean and standard deviation of $\log x$, whereas $m \in \mathbb{R}$ is a threshold parameter (lower bound).
The considered nonstationary GEV version $GEV(\mu(\upsilon), \sigma(\upsilon), k)$ incorporates nonstationarity in its location and scale parameters by linking them to the covariate $\upsilon$. As mentioned above, for simplicity and for practical reasons, the shape parameter k is considered constant. According to the shape of the trend, we consider the notations GEV00: no trend; GEV10: linear trend in the location parameter, $\mu(\upsilon) = \mu_0 + \mu_1 \upsilon$; GEV20: quadratic trend in the location parameter, $\mu(\upsilon) = \mu_0 + \mu_1 \upsilon + \mu_2 \upsilon^2$; and GEV11: linear trend in both the location and scale parameters, $\mu(\upsilon) = \mu_0 + \mu_1 \upsilon$, $\sigma(\upsilon) = \exp(\sigma_0 + \sigma_1 \upsilon)$. As mentioned earlier, to keep the scale parameter $\sigma(\upsilon)$ positive, we use the transformation $\varphi_\sigma(\upsilon) = \log(\sigma(\upsilon))$. Using similar notations, we obtain the models LN200, LN210, and LN220 in analogy with the location parameter $\mu$, respectively, as in GEV00, GEV10, and GEV20. For the LN3 case, the models LN300, LN310, LN320, and LN311 can be considered with a possible trend in the threshold parameter m, that is, $LN3(\mu(\upsilon), \sigma, m(\upsilon))$.
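A minimal sketch of fitting the GEV10 model (linear trend in the location parameter) by maximum likelihood is given below; the synthetic data, the use of SciPy's genextreme (whose shape argument c corresponds to −k in the convention of Eq. (7.5)), and the Nelder-Mead optimiser are illustrative choices rather than the book's implementation.

```python
import numpy as np
from scipy import stats, optimize

# Synthetic annual maxima whose location parameter drifts with the covariate v (here a time index)
v = np.arange(60)
x = stats.genextreme.rvs(c=-0.1, loc=100 + 0.5 * v, scale=15,
                         random_state=np.random.default_rng(0))

def neg_log_lik(params):
    mu0, mu1, log_sigma, k = params
    mu = mu0 + mu1 * v                 # GEV10: mu(v) = mu0 + mu1 * v
    sigma = np.exp(log_sigma)          # log link keeps the scale parameter positive
    ll = stats.genextreme.logpdf(x, c=-k, loc=mu, scale=sigma)
    return -ll.sum() if np.all(np.isfinite(ll)) else np.inf

res = optimize.minimize(neg_log_lik, x0=[x.mean(), 0.0, np.log(x.std()), 0.1],
                        method="Nelder-Mead")
print(res.x)   # estimated mu0, mu1, log(sigma), k
```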



URL: https://www.sciencedirect.com/science/article/pii/B9780323959087000074

Precambrian geomagnetic field—an overview


Toni Veikkolainen, Lauri J. Pesonen, in
Ancient Supercontinents and the Paleogeography of Earth, 2021

3.3 Inclination frequency analysis


The inclination frequency analysis to test the GAD hypothesis was introduced by
Evans (1976). The method applies the spherical harmonic decomposition to
describe the average geomagnetic field, yet using zonal field components only.
Each combination of these zonal components generates an inclination frequency
distribution, where inclination is shown as a function of its proportion, usually in
discrete intervals. In the Precambrian, the method typically applies 10-degree
intervals and ignores the sign of inclination, thus applying absolute values of
geomagnetic inclination (|I|) (Fig. 3.3B). This quantity is independent of
geomagnetic polarity timescale, which is unavailable for data beyond 200 Myr. The
limitation of using zonal terms of the field only is that the inclination distribution
resulting from GAD and that resulting from a tilted dipolar field look similar.
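For intuition, the inclination frequency distribution expected from a pure GAD field under uniform sampling of the globe can be simulated as sketched below (the sample size and the 10-degree binning are arbitrary choices); the mean |I| of roughly 49 degrees quoted later in the text emerges from this construction.

```python
import numpy as np

rng = np.random.default_rng(42)

# Sample site latitudes uniformly over the sphere (uniform in sin(latitude))
lat = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 100_000)))

# For a geocentric axial dipole (GAD): tan(I) = 2 tan(latitude)
inc = np.degrees(np.arctan(2.0 * np.tan(np.radians(lat))))

# Frequency distribution of |I| in 10-degree bins
counts, edges = np.histogram(np.abs(inc), bins=np.arange(0, 100, 10))
for lo, share in zip(edges[:-1], counts / counts.sum()):
    print(f"{lo:>2.0f}-{lo + 10:.0f} deg: {share:.3f}")
print("mean |I| =", np.abs(inc).mean())
```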
Due to spatiotemporal limitations of Precambrian data, $g_1^0$, $g_2^0$, and $g_3^0$ are the only meaningful zonal terms in this analysis in the Precambrian. Typically, $g_2^0$ and $g_3^0$ are normalized with respect to GAD, and notations such as G2 = $g_2^0/g_1^0$ or G3 = $g_3^0/g_1^0$ are used in the analysis. Fig. 3.3B illustrates inclination distributions for $g_1^0$, $g_2^0$, and $g_3^0$ as well as the dependence of |λ| on |I| (λ is paleolatitude). The case of GAD supplemented by a small $g_3^0$ is also illustrated, because previous studies (e.g., Pesonen et al., 2012) have pointed out a possibility of an octupole field as an
explanation of relatively large proportion of low inclinations. Sedimentary data
often are artificially distorted toward low values of I due to compaction-induced
flattening (King, 1955) of an unknown amount, and also, sedimentary basins are
often preferentially located at low latitudes due to climate (Evans, 2006). Therefore
Veikkolainen et al. (2014a) only applied inclinations from igneous and
metamorphic rocks in their optimal zonal multipole configuration ($g_2^0$ = 0.02 and
$g_3^0$ = 0.05). This chapter features an update of the previous analysis, using the
PALEOMAGIA database (Veikkolainen et al., 2017b), which incorporates site-level
data from as many as 3799 Precambrian paleomagnetic entries. The wildly varying
quality of Precambrian paleomagnetic results necessitates the use of quality
grading such as the seven-step grading (Q1–7) of Van der Voo (1990). However, the

lack of proper Precambrian apparent polar wander paths (APWPs) hampers the use
of seventh criterion, which states that poles must not have a resemblance to
younger paleomagnetic poles, and therefore only the first six criteria (Q1–6) are
employed here as done in Veikkolainen et al. (2014a).
The methodological proof of inclination frequency analysis rests on the
assumption of random sampling of continents over the globe. Rolf and Pesonen (2018)
applied 3D spherical mantle convection models to investigate whether latitudes
have been sampled uniformly by the Earth’s continents in the far past, finding out
that this requirement is not met for a period shorter than 1.3 Gyr. This strengthens
the conclusion of Meert et al. (2003) who used random walk models to prove that
in sampling periods over 600 Myr, inclination distribution resembles that of GAD
only for ~30% of time even if GAD is assumed valid for the entire geologic past.
Originally, Evans (1976) argued that 500–600 Myr is adequate for random
sampling, an obviously outdated hypothesis. A major weakness of the inclination
frequency analysis is also the fact that distributions similar in appearance may
result from different combinations of multipolar fields, that is, various solutions
are not unique. For example, the distribution of GAD is similar to the distribution of a
multipolar field with GAD, G2=0.17 and G3=0.03 (Veikkolainen et al., 2014a). This
emphasizes the need for using other independent methods to test the GAD
hypothesis.
Our analysis of PALEOMAGIA was restricted to igneous rocks due to the proven
shallowing of sedimentary magnetization and the cooling-related problems
associated with metamorphic rocks. This does not rule out using sedimentary data
in other analyses, such as studying variations between N and R polarity in a certain
period by using magnetostratigraphy and analyzing transitional geomagnetic field
data (see Fig. 3.9A). Our data handling followed a few steps that included removal
of combined polarity (N+R) entries in cases of dual-polarity paleomagnetic data,
removal of certain subentries if they were part of a combined entry with similar or
higher Q1–6 value, and removal of all data with Q1–6 < 3. We also removed entries
that were obviously superseded by new entries. However, to ensure that an
adequate number of data are available for statistical tests, we did not carry out
spatiotemporal binning which, if applied, must be done within each terrane
separately without regard to present-day geography (Veikkolainen et al., 2014c).
Apart from the actual polarity timescale, the significantly larger proportion of
positive inclinations (62%; N=695) compared to negative ones (38%; N=443) may
be a sign of insufficient spatiotemporal sampling. The distinction is partly a result
of the large number (c.200 entries) of data with low positive inclinations from
Baltica and Laurentia in the Nuna supercontinent. The potential removal of steep-
inclination data due to their resemblance to the present-day field (PEF) in Europe
and North America (Lapointe et al., 1978) does not explain the difference, because
PEF in those regions is represented by positive inclination data. The PALEOMAGIA
subset of 635 data entries (Q1–6 ≥ 3) had a mean |I|=42.3 degrees, slightly smaller
than the value of |I|=49.1 degrees expected from GAD, although variations in
different stages of Precambrian are substantial (Fig. 3.10).


Figure 3.10. Distribution of absolute value of inclination from the PALEOMAGIA dataset (N=1138, Q1–6 ≥ 3) with respect to age in our three time periods. Mean
values of |I| for datasets are shown by solid horizontal lines, and values are 47.0
degrees for “early” (Kenorland; pre-1880 Ma), 34.5 degrees for “middle” (Nuna;
1190–1880 Ma), and 45.2 degrees for “late” (Rodinia; post-1190 Ma) data as
opposed to theoretical mean |I| of 49.1 degrees expected for a GAD-derived dataset
using synthetic data. The solid black line shows the mean |I| for GAD. GAD,
Geocentric Axial Dipole.
For finding the best-fit zonal geomagnetic field model to explain paleomagnetic
data in the three Precambrian timeslots (Fig. 3.11A–C) and for the entire
Precambrian (Fig. 3.11D), we applied Pearson's chi-square testing following the
methodology of Veikkolainen et al. (2014a). For the Precambrian dataset filtered
with moderately strong quality criteria (Q1–6 ≥ 3), the best fit had G2=0.00 and
G3=0.08, yet all fits failed the chi-square test, due to the jagged shape of the
distribution (Fig. 3.11D). For the same data filtered with fairly strict quality criteria
(Q1–6 ≥ 4), two fits were equally good in statistical terms; one with G2=0.00 and
G3=0.07 and another with G2=0.03 and G3=0.09. While the shape of distribution
resulted in no fit passing the chi-square test, fits clearly become poorer when
either G2 or G3 exceeds 0.10. This means that while GAD is not the best fit,
optimal fits for the entire Precambrian have $g_2^0$ and $g_3^0$ with strengths less than
10% that of GAD. In the timeslots, Kenorland (pre-1880 Ma) data (Fig. 3.11A)
features a nearly GAD-like field (G2=0.0, G3=0.02), and GAD actually remains valid
in the chi-square testing. For Nuna (Fig. 3.11B), the result is entirely different; a
very prominent octupole is required and the best fit has G2=0.0 and G3=0.23.
Rodinia (Fig. 3.11C) stays between these cases and the best fit to its data has
G2=0.00 and G3=0.04, yet this is only barely better than GAD, and neither of these
passes the chi-square test. The low-inclination bias in Nuna data may also result
from two causes. First, as many as 255 out of 364 data entries (70.0%) of data in
the subset originate from Baltica and Laurentia, which may bias the global view.
Second, there is a high possibility that continents, while occupying Nuna assembly,
stayed at low latitudes for almost the entire lifetime of the supercontinent. In the
Rodinia data, the inclination distribution is fairly strongly dominated by North
American data in the timespan of 1000–1190 Ma, with particular emphasis on the
Mid-Continent Rift in the Great Lakes area.


Figure 3.11. Observed inclination distribution of the PALEOMAGIA dataset (Q1–6 ≥ 3), and corresponding distributions of the best-fitting zonal model and GAD for (A) pre-1880 Ma data, (B) 1190–1880 Ma data, (C) post-1190 Ma data, and (D) the entire Precambrian.
Apart from bias caused by continents staying at narrow latitude ranges throughout
a lifetime of a certain supercontinent, another major weakness in inclination
frequency analysis results from the fact that igneous activity occurs in pulses,
typically in the Large Igneous Provinces (LIPs) (Ernst, 2014). This leads to uneven
spatiotemporal record that cannot be remedied by spatiotemporal binning,
because binning is unable to fill temporally consecutive gaps in the data of a
certain craton (Veikkolainen et al., 2014c). Despite these uncertainties, it is highly
unlikely that the Precambrian geomagnetic field has any longstanding non-GAD
components exceeding 10% of the strength of GAD. However, the functionality of
GAD-based concepts such as paleomagnetic poles remains an open question until
other independent techniques of testing GAD are applied.



URL: https://www.sciencedirect.com/science/article/pii/B9780128185339000084
