[MR-1] Estimating Market Risk Measures: An Introduction and Overview

1. According to the text, which form of return calculation implicitly assumes that interim payments are continuously reinvested?
A) Profit/Loss (P/L)
B) Arithmetic return
C) Geometric return
D) Loss/Profit (L/P)

2. If you are using the Historical Simulation (HS) approach with 1000 loss
observations and want to find the VaR at the 99% confidence level, which
observation would you select?
A) The 1st highest loss observation
B) The 10th highest loss observation
C) The 11th highest loss observation
D) The 990th highest loss observation
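As a quick numerical sketch of the selection rule (the loss data and seed are hypothetical; the convention of taking the observation just outside the 10 tail losses is assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
losses = rng.normal(0, 1, 1000)     # hypothetical daily loss (L/P) observations

ordered = np.sort(losses)[::-1]     # highest loss first

# At 99% confidence, 1% of 1000 = 10 observations lie in the tail.
# Taking the VaR as the observation that separates those 10 tail losses
# from the rest means selecting the 11th highest loss (index 10).
var_99 = ordered[10]
print(var_99)
```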

3. What is a primary advantage of using a model based on geometric returns (lognormal model) over one based on arithmetic returns?
A) It is simpler to calculate.
B) It guarantees that the asset price can never become negative.
C) It is always more accurate for short time horizons.
D) It does not require a mean and standard deviation.

4. The parametric approach to estimating VaR requires the user to explicitly specify what?
A) The number of observations in the tail
B) The risk-free interest rate
C) The statistical distribution from which data observations are drawn
D) The exact value of the worst possible loss

5. How is Loss/Profit (L/P) data derived from Profit/Loss (P/L) data?
A) L/P = P/L + 1
B) L/P = P/L / P(t-1)
C) L/P = ln(P/L)
D) L/P = -P/L

6. What is the definition of Expected Shortfall (ES)?
A) The single most likely loss to occur in the tail of the distribution.
B) The probability-weighted average of tail losses that exceed the VaR.
C) The maximum possible loss for a given portfolio.
D) A VaR calculated at the 99.9% confidence level.

7. In a Quantile-Quantile (QQ) plot, what does a perfectly straight, linear plot indicate?
A) The data is highly skewed and has fat tails.
B) The specified reference distribution is a good fit for the empirical data.
C) The data contains significant outliers.
D) The sample size is too small for analysis.

8. Based on the formula provided, the standard error of a quantile estimator
(VaR) rises when:
A) The sample size (n) increases.
B) The probability density (f(q)) at the quantile is very high.
C) The analysis moves further into the tail (probabilities become more
extreme).
D) The data follows a perfect normal distribution.

9. The text describes a practical method for estimating Expected Shortfall (ES)
which involves:
A) Averaging the 10 largest losses in the dataset.
B) Using a complex "closed-form" solution applicable to all distributions.
C) Slicing the tail into many segments and taking the average of the VaRs of
those segments.
D) Taking the VaR and multiplying it by a constant factor of 1.5.
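The "slice the tail" method in question 9 can be illustrated for a standard normal loss distribution; the slice count is an arbitrary choice, and the value quoted in the comment is the standard closed-form normal ES:

```python
from statistics import NormalDist

# Estimate 95% ES by slicing the tail [0.95, 1) into many segments and
# averaging the VaRs (quantiles) at the midpoints of those slices.
alpha, n_slices = 0.95, 1000
nd = NormalDist()
slice_width = (1 - alpha) / n_slices
tail_vars = [nd.inv_cdf(alpha + (i + 0.5) * slice_width) for i in range(n_slices)]
es = sum(tail_vars) / n_slices
print(round(es, 2))   # close to the closed-form normal ES of about 2.06
```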

10. What does the "halving error" help a risk analyst determine?
A) The standard deviation of the portfolio.
B) Whether an estimate for a coherent risk measure has sufficiently
converged.
C) The 50% confidence level VaR.
D) An error in the source data.

11. The document provides the formula for VaR with normally distributed
Profit/Loss as αVaR = −μ_P/L + σ_P/L * z_α. What does z_α represent?
A) The mean of the P/L distribution.
B) The standard deviation of the P/L.
C) The standard normal variate corresponding to confidence level α.
D) The initial portfolio value.
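A minimal numerical sketch of this formula, with hypothetical P/L parameters (z_α taken from the standard library's NormalDist):

```python
from statistics import NormalDist

mu_pl, sigma_pl = 10.0, 100.0           # hypothetical mean and std dev of daily P/L
alpha = 0.99

z_alpha = NormalDist().inv_cdf(alpha)   # standard normal variate, ≈ 2.33 at 99%
var = -mu_pl + sigma_pl * z_alpha       # αVaR = −μ_P/L + σ_P/L * z_α
print(round(var, 2))                    # → 222.63
```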

12. According to the appendix on preliminary data analysis, what is the first
and most important step when confronted with a new data set?
A) Immediately run a regression analysis.
B) "Eyeball" the data to see if it 'looks right' and to spot potential anomalies.
C) Calculate the lognormal VaR.
D) Fit the data to a Student-t distribution.

13. In the context of estimating coherent risk measures, what is the relationship between Expected Shortfall (ES) and a more general spectral risk measure (M_φ)?
A) ES is always larger than any spectral risk measure.
B) They are completely unrelated concepts.
C) ES is a special case of a spectral risk measure where all tail-loss quantiles
are given equal weight.
D) A spectral risk measure is a simplified version of ES.

14. What does a QQ plot that is linear in the middle but has steeper slopes at
both ends suggest about the data compared to the reference distribution?
A) The data has thinner tails.
B) The data has heavier (fatter) tails.
C) The data's mean is zero.
D) The data is from a uniform distribution.

15. If geometric returns R_t are 0.05, what is the corresponding arithmetic
return r_t? (Hint: R = ln(1+r))
A) 0.0488
B) 0.0500
C) 0.0513
D) 0.0250
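The hint can be checked directly by inverting the relation R = ln(1 + r):

```python
import math

R = 0.05                 # geometric return
r = math.exp(R) - 1      # invert R = ln(1 + r)
print(round(r, 4))       # → 0.0513
```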

16. The document mentions that the relative accuracy of VaR and ES
estimators can be affected by the characteristics of the distribution. For
particularly heavy-tailed distributions, what did initial studies suggest?
A) VaR and ES estimators had identical standard errors.
B) VaR estimators had much bigger standard errors than ES estimators.
C) ES estimators had much bigger standard errors than VaR estimators.
D) Neither measure could be estimated.

17. Which of the following is listed as one of the three "core issues" to address
when measuring market risk?
A) Which software to use?
B) Which data provider to choose?
C) Which level of analysis (portfolio or position)?
D) Which programming language to implement?

18. In the formula for the variance of a quantile estimator, var(q) ≈ p(1 − p) / (n·[f(q)]²), what does n represent?
A) The number of parameters in the model.
B) The sample size.
C) A normalization constant.
D) The number of tail slices.
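Reading the formula as var(q) ≈ p(1 − p)/(n·[f(q)]²), the tail effect asked about in question 8 and the role of n can both be verified under a standard normal parent (an illustrative assumption):

```python
import math
from statistics import NormalDist

def quantile_se(p, n):
    """Approximate standard error of an empirical p-quantile when the data
    come from a standard normal: sqrt(p(1-p) / (n * f(q)**2))."""
    q = NormalDist().inv_cdf(p)
    f_q = NormalDist().pdf(q)
    return math.sqrt(p * (1 - p) / (n * f_q ** 2))

# Moving deeper into the tail inflates the standard error...
print(quantile_se(0.99, 1000) > quantile_se(0.95, 1000))    # → True
# ...while a larger sample size shrinks it.
print(quantile_se(0.99, 10_000) < quantile_se(0.99, 1000))  # → True
```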

19. When calculating the lognormal VaR, the final measure is given by P(t−1) * (1 − exp[μ_R − σ_R * z_α]). This calculation ensures the estimated VaR cannot be greater than what value?
A) The mean of the returns.
B) Zero.
C) The initial portfolio value, P(t-1).
D) The standard deviation of returns.
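A short sketch with hypothetical return parameters shows why the lognormal VaR is capped: exp[·] is always positive, so 1 − exp[·] is strictly below 1 and the VaR can never exceed P(t−1).

```python
import math
from statistics import NormalDist

p_prev = 100.0                       # hypothetical initial portfolio value P(t-1)
mu_r, sigma_r = 0.0, 0.02            # hypothetical mean/std dev of geometric returns
z = NormalDist().inv_cdf(0.95)

var_lognormal = p_prev * (1 - math.exp(mu_r - sigma_r * z))
print(0 < var_lognormal < p_prev)    # → True: the VaR is below P(t-1)
```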

20. For what reason should arithmetic returns generally not be used when
dealing with long time horizons?
A) They are too volatile.
B) They implicitly assume that interim income is not reinvested.
C) They require the use of logarithms, which is computationally expensive.
D) They always result in negative asset values.

=> SUMMARY: 20/20
[MR-2] Non-parametric Approaches

1. What is the core underlying assumption of all non-parametric approaches to risk estimation?
A) That profit/loss distributions are always normal.
B) That the near future will be sufficiently like the recent past.
C) That all historical observations are independent and identically distributed.
D) That market volatility is constant over time.

2. In basic Historical Simulation (HS) with a sample of 1000 daily P/L observations, how is the 95% Expected Shortfall (ES) estimated?
A) As the 51st highest loss value.
B) As the average of the 50 highest loss values.
C) As the average of all 1000 loss values.
D) As the 50th highest loss value.
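A sketch of the 95% HS estimates with hypothetical fat-tailed data (Student-t draws; the "51st highest loss" VaR convention is one of those discussed in the text):

```python
import numpy as np

rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=1000)   # hypothetical daily losses

ordered = np.sort(losses)[::-1]            # highest loss first
es_95 = ordered[:50].mean()                # average of the 50 highest losses
var_95 = ordered[50]                       # 51st highest loss
print(es_95 > var_95)                      # → True: ES exceeds VaR
```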

3. The Bootstrapped Historical Simulation method involves which key procedure?
A) Fitting a GARCH model to the data.
B) Resampling from the original data set with replacement.
C) Assigning greater weight to the most recent data.
D) Removing the top 1% of losses to avoid outliers.

4. What is a primary advantage of using non-parametric density estimation (e.g., kernel methods) over basic HS?
A) It is computationally simpler.
B) It eliminates the need for a large data set.
C) It allows estimation of VaR at any confidence level, not just discrete ones.
D) It guarantees the VaR estimate will be lower.

5. According to the text, which method for estimating confidence intervals for
VaR uses the theory of quantiles to derive a complete distribution function for
the VaR estimate itself?
A) The Bootstrap method
B) The Order-Statistics (OS) approach
C) The Delta-Normal approach
D) The Filtered Historical Simulation (FHS) approach

6. The age-weighted historical simulation approach, proposed by Boudoukh, Richardson, and Whitelaw (BRW), assigns weights to observations that:
A) Increase linearly with age.
B) Are equal for all observations within the sample window.
C) Decay exponentially as observations get older.
D) Are based on the daily trading volume.

7. How does the Hull and White (HW) volatility-weighted approach adjust
historical returns?
A) By multiplying them by the age of the observation.
B) By dividing them by the historical risk-free rate.
C) By scaling them using the ratio of current volatility to historical volatility.
D) By replacing them with random draws from a normal distribution.
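The Hull–White scaling can be sketched as follows; the volatility series here is a hypothetical stand-in for what would normally come from a GARCH or EWMA model:

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 2.0, 500)   # hypothetical historical returns

hist_vol = np.full(500, 2.0)          # σ_t estimated for each historical day
current_vol = 1.0                     # current volatility forecast σ_T

# HW adjustment: r*_t = (σ_T / σ_t) · r_t rescales history to today's volatility
adjusted = (current_vol / hist_vol) * returns
print(np.isclose(adjusted.std(), 0.5 * returns.std()))   # → True
```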

8. Filtered Historical Simulation (FHS) is described as a semi-parametric method because it combines:
A) A non-parametric bootstrap with a parametric conditional volatility model
(e.g., GARCH).
B) Order-statistics with a moving-average model.
C) Kernel density estimation with principal components analysis.
D) Historical data with subjective expert opinions.

9. A significant advantage of the volatility-weighted and filtered-weighted approaches over basic HS is that they:
A) Do not require historical data.
B) Are much simpler to implement on a spreadsheet.
C) Can produce VaR and ES estimates that exceed the maximum loss in the
historical data set.
D) Are completely free from "ghost effects".

10. What is a major practical problem when trying to estimate HS VaR for
longer holding periods (e.g., monthly) using daily data?
A) The number of effective observations falls rapidly, reducing precision.
B) It violates the assumption of normality.
C) The computation time increases exponentially.
D) It requires a subscription to a specialist data service.

11. The comparison in Table 4.1 shows that for estimating a 90% confidence
interval for VaR and ES, the Order-Statistics (OS) and Bootstrap approaches
yield:
A) Identical results down to the last decimal.
B) Very different results, suggesting one is superior.
C) Very similar results, suggesting either is reasonable in practice.
D) Results that are always wider than parametric methods.

12. "Ghost effects" in the context of traditional HS refer to:
A) The tendency for VaR estimates to be haunted by data input errors.
B) The undue influence of a past extreme event that remains in the sample
window for n days before abruptly dropping out.
C) The smoothing effect created by using kernel estimators.
D) The impact of using simulated, rather than real, data.

13. In the formula for age-weighted HS, w(i) = λ^(i-1)(1-λ) / (1-λ^n), what does a
λ value close to 1 imply?
A) A very high rate of decay, where only the newest data matters.
B) A slow rate of decay, where older observations retain significant weight.
C) The model is equivalent to a volatility-weighted model.
D) The model collapses to basic HS where all weights are equal.
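The weighting formula in question 13 is easy to inspect numerically (the sample size and λ below are illustrative):

```python
def age_weights(n, lam):
    """BRW age weights w(i) = λ**(i-1) · (1-λ) / (1-λ**n), i = 1 (newest) … n."""
    return [lam ** (i - 1) * (1 - lam) / (1 - lam ** n) for i in range(1, n + 1)]

w = age_weights(250, 0.99)
print(abs(sum(w) - 1.0) < 1e-9)   # → True: weights sum to one
# λ close to 1 means slow decay: the oldest observation keeps material weight
print(round(w[-1] / w[0], 3))     # λ**249 ≈ 0.082
```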

14. What is the first step in the Filtered Historical Simulation (FHS) process for
a single asset?
A) Bootstrap the raw return data.
B) Fit a conditional volatility model (e.g., GARCH) to the portfolio-return data.
C) Calculate the average of all historical returns.
D) Remove all returns greater than three standard deviations.

15. What is the primary purpose of using Principal Components Analysis (PCA) in risk management, according to the appendix?
A) To forecast the exact direction of market movements.
B) To reduce the dimensionality of highly correlated data sets.
C) To calculate the precise confidence interval for ES.
D) To ensure all asset returns follow a normal distribution.

16. Which of these is listed as a major disadvantage of non-parametric methods?
A) They are too complex and difficult to understand.
B) They are incapable of accommodating fat tails and skewness.
C) They cannot make use of readily available market data.
D) They are completely dependent on the historical data set and may not
handle regime shifts well.

17. When constructing a histogram, the text emphasizes that the choice of
which parameter can significantly alter the resulting impression of the data's
distribution?
A) The sample mean
B) The bin width (or bandwidth)
C) The sample kurtosis
D) The number of assets in the portfolio

18. The "naïve estimator" in non-parametric density estimation is an improvement over the histogram because it:
A) Is a smooth, continuous function.
B) Does not depend on a choice of origin x₀.
C) Is always more accurate than a kernel estimator.
D) Works better with small sample sizes.

19. What is the optimal kernel function to minimize the Mean Integrated Square
Error (MISE), according to the text?
A) The Gaussian kernel
B) The Triangular kernel
C) The Epanechnikov kernel
D) The Box kernel

20. The theory of order statistics provides a distribution function, G_r(x), for a
given order statistic. What does this enable a risk analyst to do?
A) Calculate the exact future value of a portfolio.
B) Determine a confidence interval for a VaR estimate.
C) Prove that the underlying data is normally distributed.
D) Eliminate all sources of model risk.

21. When using a bootstrap to estimate a confidence interval, the BCa (bias-
corrected and accelerated) method is described as an improvement over the
basic percentile interval because it:
A) Is much faster to compute.
B) Corrects for skewness and bias in the parameter estimates.
C) Does not require resampling from the data.
D) Always produces a narrower, more precise interval.

22. How is the historically simulated P/L series for a portfolio constructed?
A) By taking the actual P/L earned by the portfolio over the historical period.
B) By calculating the P/L that would have been earned on the current portfolio
if it were held throughout the historical sample period.
C) By simulating returns from a Monte Carlo model based on historical parameters.
D) By averaging the returns of all assets and multiplying by the portfolio value.

23. The ES curve is typically smoother than the VaR curve when plotted
against the confidence level (as in Figure 4.3) because:
A) ES is a theoretical concept, while VaR is an actual observation.
B) The ES curve uses a logarithmic scale.
C) Each ES point is an average of tail losses, while each VaR point reflects a
single random observation.
D) The ES calculation removes outliers from the data set.

24. The correlation-weighted HS approach is described as a major generalization of which other method?
A) The age-weighted (BRW) approach.
B) The volatility-weighted (HW) approach.
C) The basic Historical Simulation approach.
D) The order-statistics approach.

25. In the context of the bootstrap appendix, what is the main purpose of the
Andrews and Buchinsky three-step method?
A) To estimate the bias of a bootstrap estimator.
B) To choose the optimal number of bootstrap resamples (B) to achieve a
target level of precision.
C) To modify the bootstrap for data that is not independent.
D) To calculate the BCa confidence interval.

26. Which of the following is not listed as an advantage of non-parametric methods?
A) They are intuitive and conceptually simple.
B) They are free of operational problems like the "curse of dimensionality".
C) They make no allowance for plausible events that did not occur in the
sample period.
D) They provide results that are easy to report and communicate.

27. In the first stage of Principal Components Analysis (PCA), what does the
first principal component represent?
A) The linear combination of variables that explains the least amount of variance.
B) The average correlation across all variables.
C) The linear combination of variables that explains the maximum possible
variance.
D) A random factor that is uncorrelated with the data.

28. If an analyst wishes to adjust historical data for seasonal patterns in
volatility (e.g., natural gas prices being more volatile in winter), which
approach would be most suitable?
A) Basic Historical Simulation
B) Bootstrapped Historical Simulation
C) Weighted Historical Simulation
D) Order-Statistics Approach

29. The main limitation of the standard bootstrap procedure is that it presupposes:
A) The data is normally distributed.
B) The observations are independent over time.
C) The sample size is less than 100.
D) The user has access to a supercomputer.

30. The text concludes that while non-parametric methods are attractive, one
should never rely on them alone. What should they be complemented with?
A) More historical data
B) More advanced parametric models
C) Stress testing to gauge vulnerability to "what if" events
D) The opinions of senior management

=> SUMMARY: 26/30


[MR-3] Parametric Approaches (II): Extreme Value Theory
1. What is the main weakness of fitting a single parametric distribution to an
entire data set when the goal is to estimate extreme risks?
A) It cannot capture skewness
B) It ignores dependence across time
C) It sacrifices tail fit in favor of central observations
D) It over-weights extreme observations
2. Extreme Value Theory (EVT) was first developed to address engineering
questions in which discipline?
A) Aeronautics
B) Hydrology and flood control
C) Seismology
D) Structural engineering
3. EVT primarily studies events that are:
A) High-probability, low-impact
B) Medium-probability, medium-impact
C) Low-probability, high-impact
D) Independent of their distributions
4. Central-limit theorems are inappropriate for extremes because extremes are
governed instead by:
A) Large-numbers laws
B) Martingale convergence theorems
C) Extreme-value theorems
D) Slutsky’s theorem
5. The fundamental practical challenge in extreme-value estimation is:
A) Computational complexity
B) Non-stationarity of variances
C) Heteroskedasticity in means
D) Scarcity of extreme observations
6. The Fisher–Tippett theorem gives the limiting distribution of:
A) Sample means
B) Sample maxima
C) Sample medians
D) Sample variances
7. How many parameters define the Generalized Extreme Value (GEV) family?
A) 2
B) 3
C) 4
D) 5
8. For ξ > 0, the GEV reduces to which distribution?
A) Gumbel
B) Weibull
C) Fréchet
D) Normal
9. A tail-index value ξ = 0 corresponds to which special GEV case?
A) Fréchet
B) Gumbel
C) Weibull
D) Burr
10. Which GEV case is generally most appropriate for heavy-tailed financial return
data?
A) Weibull
B) Gumbel
C) Fréchet
D) Logistic
11. In the GEV, the location parameter μ primarily captures:
A) Tail heaviness
B) Dispersion of extremes
C) Central tendency of extremes
D) Skewness of the parent data
12. For a standardized Gumbel distribution, the 5% quantile is approximately:
A) –2.97
B) –1.10
C) 0.00
D) 1.10
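The −1.10 figure in question 12 follows from inverting the standardized Gumbel CDF F(x) = exp(−exp(−x)):

```python
import math

def gumbel_quantile(p):
    """p-quantile of the standardized Gumbel (for maxima): x_p = -ln(-ln p)."""
    return -math.log(-math.log(p))

print(round(gumbel_quantile(0.05), 2))   # → -1.1
```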
13. As the Fréchet tail index ξ increases, high quantiles:
A) Decrease linearly
B) Stay constant
C) Increase (become more extreme)
D) Approach zero
14. The “short-cut” EV method assumes the tail follows a:
A) Log-normal form
B) Linear decay
C) Power-law relation
D) Beta distribution
15. When uncertain whether to choose a Gumbel or a Fréchet fit, selecting
Fréchet is often considered safer because:
A) It needs fewer observations
B) It guarantees unbiasedness
C) It yields higher (more conservative) risk estimates
D) It avoids numerical optimization
16. A major drawback of Maximum Likelihood estimation for EV parameters is
that it:
A) Produces biased estimators
B) Requires numerical optimization without closed-form solutions
C) Works only for heavy-tailed data
D) Cannot handle large samples
17. The Hill estimator is designed to estimate which parameter?
A) μ (location)
B) σ (scale)
C) ξ (tail index)
D) All three simultaneously
18. The principal practical hurdle when applying the Hill estimator is:
A) High computational cost
B) Need for ξ < 0
C) Selecting an appropriate threshold k
D) Sensitivity to skewness
19. A “Hill horror plot” is characterized by:
A) Perfect convergence of estimates
B) No stable plateau across k values
C) Horizontal lines at k = 10 and k = 20
D) Symmetry around zero
20. The Hill estimator formula is essentially the average of:
A) Ordered statistics
B) Residual squares
C) Log differences of extreme observations
D) Quantile regressions
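Question 20's description translates directly into code; the Pareto sample below is synthetic, with true tail index ξ = 0.5, and the choice k = 500 is arbitrary:

```python
import numpy as np

def hill_estimator(losses, k):
    """Hill estimate of ξ: the average of the log differences between the
    k largest observations and the (k+1)-th largest."""
    x = np.sort(losses)[::-1]                     # descending order statistics
    return float(np.mean(np.log(x[:k]) - np.log(x[k])))

rng = np.random.default_rng(7)
sample = rng.pareto(2.0, 100_000) + 1.0           # Pareto tail with ξ = 1/2 = 0.5
print(round(hill_estimator(sample, 500), 2))      # ≈ 0.5
```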
21. Maximum-likelihood estimators of GEV parameters are asymptotically normal
provided the tail index satisfies:
A) ξ > 0
B) ξ > –½
C) ξ < 1
D) ξ = 0
22. Gumbel’s regression method estimates parameters by regressing
log[–log(i/(1+m))] on functions of:
A) Time indices
B) Ordered maxima
C) Sample means
D) Autocorrelation lags
23. Moment-based estimators for EV parameters are often unreliable because
they involve:
A) Covariance shrinkage
B) Higher-order sample moments with poor sampling properties
C) Inversion of singular matrices
D) Non-parametric bootstrapping
24. The Danielsson–de Vries rule chooses k in Hill estimation by:
A) Visual inspection alone
B) Minimizing mean-squared error via bias-variance trade-off
C) Maximum-likelihood search
D) Cross-validation on hold-out samples
25. The Danielsson–de Vries procedure requires a minimum sample size of
approximately:
A) 500
B) 1,000
C) 1,500
D) 3,000
26. The Peaks-Over-Threshold (POT) method is grounded in which theorem?
A) Fisher–Tippett
B) Central Limit
C) Gnedenko–Pickands–Balkema–de Haan
D) Law of Large Numbers
27. How many parameters define the Generalized Pareto Distribution (GPD)?
A) 1
B) 2
C) 3
D) 4
28. In POT analysis, the key trade-off when selecting the threshold u is between:
A) Computational time and memory
B) Sufficient excess observations and validity of the asymptotic
approximation
C) Model parsimony and interpretability
D) Bias and multicollinearity
29. In multivariate EVT, dependence between extreme variables is modeled
primarily via:
A) Pearson correlation
B) Covariance matrices
C) Copulas
D) Principal components
30. The “curse of dimensionality” in multivariate EVT refers to the fact that:
A) Numerical optimization becomes unstable
B) Parameter estimates lose consistency
C) Joint extreme events become exponentially rarer as dimension rises
D) Asymptotic theory no longer holds

=> SUMMARY: 25/30


[MR-4] Backtesting VaR

1. Which of the following best describes backtesting in the context of VaR models?
A. Testing a model’s predictive power using future market data
B. Comparing actual portfolio losses with their predicted VaR over a historical
period
C. Simulating hypothetical market scenarios for a portfolio
D. Adjusting VaR parameters to align with market consensus
2. An "exception" (or "exceedance") in VaR backtesting occurs when:
A. Projected returns are higher than forecasted
B. Actual loss is less than the predicted VaR
C. The actual portfolio loss exceeds the VaR estimate
D. Portfolio returns are not normally distributed
3. Why is backtesting considered essential for VaR model validation?
A. It provides an independent audit trail
B. It aligns capital allocation with peer institutions
C. It verifies whether model predictions match observed losses
D. It simplifies regulatory reporting
4. What does a high number of exceptions typically indicate about a VaR model?
A. The model is overly conservative
B. The model underestimates risk
C. The market is less volatile than expected
D. The model perfectly fits the data
5. Which of the following is a significant practical difficulty in backtesting VaR?
A. Insufficient regulatory guidance
B. Portfolio composition changes over time
C. VaR models always use lognormal distributions
D. Exceptions are always deterministic
6. The key limitation of using a high VaR confidence level (e.g., 99%) for backtesting
is:
A. Too many exceptions for meaningful tests
B. Too few exceptions for robust statistical inference
C. Bias in returns toward outliers
D. Increased cost of capital
7. "Actual return" vs. "Hypothetical return" means:
A. The former excludes transaction costs; the latter includes them
B. Actual includes trading, fees, and income; hypothetical assumes static
positions
C. Both represent identical series in backtesting
D. Hypothetical returns are always larger
8. Why does a small number of exceptions in backtesting pose a challenge?
A. It reduces statistical power to reject inaccurate models
B. It overstates the volatility of the portfolio
C. It shows the model is always correct
D. It makes Type II errors impossible
9. Which event is a Type I error in VaR backtesting?
A. Retaining a model that underestimates risk
B. Failing to detect a faulty model
C. Incorrectly rejecting a valid VaR model
D. Overstating the number of exceptions due to clustering
10. Type II error in VaR backtesting means:
A. Accepting an accurate model
B. Incorrectly rejecting a correct model
C. Failing to reject a flawed model
D. Having a high confidence interval
11. The failure rate in the context of VaR backtesting is:
A. Number of zero returns / total observations
B. Average VaR predicted per trading day
C. Number of exceptions / total number of periods
D. VaR threshold divided by asset value
12. Which statistical test is typically applied for unconditional coverage in
backtesting?
A. Kolmogorov-Smirnov test
B. Kupiec's Proportion of Failures (PoF/LRuc) test
C. Anderson-Darling test
D. Sharpe ratio analysis
13. Suppose you use 250 trading days and a 99% VaR. Baseline expected
exceptions are:
A. 1
B. 2.5
C. 25
D. 5
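Questions 12 and 13 can be combined in a short sketch: the baseline exception count is T·p, and Kupiec's LR_uc statistic (standard form, compared to the 5% chi-squared(1) critical value of 3.84) tests whether an observed count is consistent with it. The 7-exception scenario is hypothetical.

```python
import math

T, p = 250, 0.01            # 250 trading days, 99% VaR
print(T * p)                # → 2.5 expected exceptions

def kupiec_lr(x, T, p):
    """Kupiec PoF likelihood ratio for 0 < x < T exceptions in T days
    against coverage p; asymptotically chi-squared with 1 d.o.f."""
    phat = x / T
    log_l0 = x * math.log(p) + (T - x) * math.log(1 - p)
    log_l1 = x * math.log(phat) + (T - x) * math.log(1 - phat)
    return -2 * (log_l0 - log_l1)

# 7 exceptions where 2.5 are expected: LR_uc ≈ 5.5 > 3.84, so the
# model's unconditional coverage is rejected at the 5% level.
print(kupiec_lr(7, 250, 0.01) > 3.84)   # → True
```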
14. What would unconditional coverage fail to detect?
A. Clusters of exceptions in short periods
B. Accurate overall exception rate
C. Incorrect documentation
D. Losses below VaR
15. Why is conditional coverage needed in backtesting?
A. To adjust capital multipliers annually
B. To ensure exceptions happen randomly and do not cluster
C. To reduce capital requirements
D. To justify higher confidence intervals
16. Basel's traffic-light approach sets the green zone for exceptions in a typical year
at:
A. 0–2
B. 0–5
C. 0–4
D. 0–9
17. What is the immediate regulatory implication for entering the "yellow zone" (e.g.,
5–9 exceptions)?
A. No consequences
B. Discretionary review and possible higher capital charge
C. Mandatory model shutdown
D. Recalculation of all previous VaR estimates
18. The red zone under Basel backtesting means:
A. The bank receives a bonus for model performance
B. The VaR model is rejected; capital multiplier increases significantly
C. All exceptions are ignored
D. The model can continue with no penalty
19. Which aspect does the Christoffersen test add to exception counting?
A. Evaluation of confidence interval width only
B. Serial independence of exceptions (conditional coverage)
C. Estimation of loss given default
D. Calculation of risk appetite
20. For a VaR at 99% confidence with 250 days, if you observe 7 exceptions, you
should:
A. Always reject the model
B. Investigate cause; may be in yellow zone
C. Assume model is conservative
D. Reduce VaR threshold
21. Which is NOT a valid cause for exceptions under Basel’s categories?
A. Model coding error
B. Intraday trading
C. Bad luck/extreme events
D. Increased portfolio diversification
22. Which Basel action follows 10 or more exceptions in a year?
A. Reduce capital requirements
B. Enforce automatic penalty and require capital multiplier k=4
C. Disregard the backtesting results
D. Switch to historical simulation method
23. What does a violation of conditional coverage indicate?
A. Market returns are always normal
B. Losses and VaR predictions are independent
C. Exceptions occur in patterns, suggesting overlooked risk factors
D. Portfolio is well-diversified
24. A key trade-off in designing backtesting tests is:
A. Balancing Type I and Type II errors
B. Fitting multiple models simultaneously
C. Minimizing transaction costs
D. Matching liquidity requirements
25. Increasing the length of the backtesting period (more observations) generally:
A. Decreases test power
B. Makes it harder to detect model flaws
C. Increases power and reduces error rates
D. Has no statistical impact
26. What is the main statistical distribution applied in exception counting for VaR?
A. Poisson
B. Normal
C. Binomial
D. Uniform
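Exception counts under a correct model are binomial, which is what the Basel zone boundaries are calibrated against; a quick stdlib check using the 0–4 green-zone boundary from question 16:

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability of exactly k exceptions in n days at coverage p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Probability of at most 4 exceptions in 250 days if 99% coverage is correct:
p_green = sum(binom_pmf(k, 250, 0.01) for k in range(5))
print(round(p_green, 2))   # ≈ 0.89
```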
27. Under Basel, exceptions due to extreme political events or natural disasters:
A. Always result in penalty
B. Are generally excluded as “bad luck”
C. Must be explained but never penalized
D. Invalidate the entire VaR framework
28. In backtesting, what is “clustering”?
A. All exceptions occur at the start of the year
B. Exceptions are distributed evenly
C. Multiple exceptions occur close together in time
D. VaR is recalibrated daily
29. Which parameter is typically adjusted if exceptions repeatedly enter the yellow or
red zone?
A. The model’s stochastic differential equation
B. The VaR confidence level or capital multiplier (k)
C. The trade settlement cycle
D. The asset’s notional value
30. In the context of backtesting, which of the following most increases the likelihood
of a Type II error?
A. Using a broader test interval for exceptions
B. Raising the significance level of the hypothesis test
C. Small sample size and few exceptions, especially with high VaR confidence
D. Matching actual and hypothetical returns
=> SUMMARY: 30/30
[MR-5] VaR Mapping
1. What is the primary objective of the mapping process in Value-at-Risk (VaR)
measurement?

a) To reduce portfolio value
b) To simplify a portfolio by expressing positions as exposures to a small number of risk factors
c) To maximize computational complexity
d) To ignore the correlation between assets

2. Why is mapping necessary in large-scale portfolio risk measurement?

a) Because historical data is always inaccurate
b) To allow each position to be modeled completely independently
c) To avoid excessive computational burden by aggregating exposures
d) Because mapping improves pricing accuracy

3. Which type of risk is the result of issuer-specific movements, after accounting for market
factors?

a) Specific risk
b) Systematic risk
c) General risk
d) Interest rate risk

4. How is the total portfolio exposure to a primitive risk factor calculated after mapping?

a) Dividing each exposure by the portfolio value
b) Taking the sum of position exposures across all instruments on that factor
c) Adding exposures only for the largest positions
d) Counting only instruments with positive market value

5. Which mapping method for fixed-income portfolios groups all cash flows into maturity
buckets corresponding to provided volatilities?

a) Principal mapping
b) Duration mapping
c) Cash-flow mapping
d) Volatility mapping

6. What assumption underlies the duration approximation method for risk mapping?

a) All zero-coupon bonds are risk-free
b) The volatility of each maturity vertex is proportional to its duration, and correlations between maturities are unity
c) Coupon payments can be ignored
d) Only short-term bonds matter for risk

7. In a mapping process, what is an example of a primitive risk factor for a corporate bond
portfolio?

a) Currency exchange rates
b) Credit spread at a particular rating and maturity
c) Commodity spot price
d) None of the above

8. In cash-flow mapping of a fixed-income portfolio, what does each cash flow represent?

a) Exposure to the portfolio’s duration
b) The present value of the cash payment, discounted at the appropriate zero-coupon rate
c) Exposure to only the average maturity
d) The risk of the asset’s issuer only
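Cash-flow mapping as in question 8 can be sketched for a hypothetical 3-year 5% coupon bond on a hypothetical zero curve: each payment becomes an exposure at its maturity vertex, equal to its present value.

```python
# year -> payment for a hypothetical 3-year 5% annual coupon bond (face 100)
cash_flows = {1: 5.0, 2: 5.0, 3: 105.0}
zero_rates = {1: 0.03, 2: 0.035, 3: 0.04}   # hypothetical zero-coupon curve

# Each cash flow is mapped to its vertex at present value.
exposures = {t: cf / (1 + zero_rates[t]) ** t for t, cf in cash_flows.items()}
total_pv = sum(exposures.values())
print(round(total_pv, 2))   # → 102.87
```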

9. What is the effect on portfolio VaR when mapping uses more primitive risk factors?

a) VaR always increases
b) Specific risk is reduced for a fixed amount of total risk
c) Risk becomes less quantifiable
d) VaR becomes independent of correlations

10. Which mapping technique for fixed-income portfolios can overstate risk by ignoring
coupon payments?

a) Cash-flow mapping
b) Duration mapping
c) Principal mapping
d) Regression mapping

11. When is it necessary to estimate exposures rather than compute them analytically during
mapping?

a) When instrument prices are static
b) When an instrument’s price is not a direct function of the selected risk factors
c) When instruments have only one maturity
d) When historical data is perfect

12. Which risk factor is most likely to dominate the risk of a forward currency contract?

a) Short-term interest rate volatility
b) Foreign exchange spot volatility
c) Credit risk
d) Correlation with other currencies

13. When might mapping exposures present a challenge due to lack of data?

a) For government bonds
b) For stocks with extensive trading history
c) For newly issued IPOs
d) For well-diversified indices

14. Which of the following describes the mapping process for options in the delta-normal
VaR approach?

a) Mapping options as positions in the underlying and in a risk-free asset based on their delta
b) Mapping options only as exposures to cash
c) Ignoring options in risk measurement
d) Mapping only gamma risk

15. In the mapping of interest rate swaps, the fixed leg is typically mapped as:

a) A floating rate note
b) A series of zero-coupon bonds
c) A position in equities
d) A commodity exposure

16. Which statement best describes stress testing using mapped exposures?

a) It is irrelevant for risk management
b) It allows simulation of extreme movements in key risk factors
c) It eliminates all risk from the portfolio
d) It ensures perfect correlation between all risk factors

17. Mapping a portfolio to a benchmark for relative VaR allows a risk manager to:

a) Ignore tracking error
b) Measure the risk of deviation from the benchmark
c) Guarantee higher returns than the benchmark
d) Eliminate all portfolio risk

18. Which best describes specific risk in a mapped portfolio?

a) Risk unique to one issuer not explained by aggregate risk factors
b) Total portfolio risk
c) Systematic risk of the market
d) Risk that can be eliminated by cash positions

19. If risk factors are chosen too broadly in mapping, what is the likely result?

a) Overfitting to historical data
b) Too much specific risk remains and large blind spots in risk measurement
c) Faster computations with higher precision
d) Complete elimination of tracking error

20. What is one drawback of principal mapping for bonds?

a) It is computationally difficult
b) It completely ignores coupon payments and overstates risk
c) It gives exactly the same risk as duration mapping
d) It requires complex nonlinear modeling

21. Which mapping system for fixed income is most precise if granular data on cash flows
and yield volatilities is available?

a) Principal mapping
b) Duration mapping
c) Cash-flow mapping
d) Correlation-only mapping

22. What is the major risk factor in a forward rate agreement (FRA)?

a) Change in the spot price of the underlying
b) Change in interest rates at contract and settlement maturities
c) Issuer-specific risk
d) Counterparty credit rating

23. In risk mapping, how is a floating leg of an interest rate swap (at reset date) mapped?

a) As a fixed-duration bond
b) As cash (no risk)
c) As a portfolio of zero-coupon bonds
d) As a forward contract

24. In variance matching for mapping, what does the correlation coefficient between vertices
reflect?

a) Relationship between portfolio and benchmark
b) Degree of co-movement between risk factor volatilities
c) Accrual of coupon payments
d) Whether mapping preserves specific risk

25. After mapping, how can general risk and specific risk components be separated in
portfolio variance?

a) By decomposing exposure using regression on primitive factors
b) By ignoring issuer identity
c) By normalizing risk exposures
d) By lumping all positions into one

26. Which condition makes duration matching exact for risk mapping?

a) Bond cash flows are heavily skewed
b) Correlation between maturity vertices is unity and vertex volatilities are proportional to duration
c) Only principal is paid at maturity
d) Shortest maturity dominates risk

27. For a portfolio with only fixed-income securities, mapping positions on term-structure
vertices is most analogous to:
a) Assigning positions to equities
b) Allocating to currency buckets
c) Allocating present values by maturity to points along the yield curve
d) Assigning all positions to a single cash factor

28. What is the typical first principle suggested before mapping, for portfolio risk
measurement?

a) To consider every position individually with no aggregation
b) To aggregate portfolio positions onto chosen risk factors
c) To use only historical simulation
d) To perform nonlinear optimization

29. In mapping for market risk, selecting more risk factors generally:

a) Increases processing time but improves risk approximation quality
b) Decreases the model’s accuracy
c) Reduces model complexity
d) Removes all specific risk automatically

30. Which component is not usually mapped directly in risk measurement?

a) Stocks
b) Cash
c) Commodities
d) Bonds
[MR-6] Validating Bank Holding Companies’ Value-at-Risk Models
for Market Risk
Q1. Which of the following best captures the primary purpose of conceptual-
soundness testing in VaR model validation?
A. Ensuring the bank’s VaR matches peer institutions’ models
B. Verifying that model assumptions, data and methodology are appropriate for the
bank’s risk-management objectives
C. Detecting data‐entry errors in trade capture systems
D. Calibrating VaR multipliers for regulatory capital

Q2. A VaR model that cannot reflect how risk changes when positions change would
fail conceptual-soundness review because it is not:
A. computationally efficient
B. backtestable under Kupiec’s test
C. fit for purpose in risk management
D. based on filtered historical simulation

Q3. During conceptual-soundness review, regulators expect banks most directly to justify:
A. the normality assumption in GARCH residuals
B. the chosen distribution for zt in filtered historical simulation
C. that the model is “fit for purpose” relative to how the firm manages positions and
capital
D. the exact confidence level (99%) mandated by Basel

Q4. Which of the following data issues most commonly challenges the conceptual-
soundness of large-scale trading VaR models?
A. Multiplication overflow in Monte Carlo engines
B. Construction of an accurate pseudo-history of one-day P&L based on current
positions
C. Absence of a variance–covariance matrix for equities
D. Time-varying risk-free rates in duration calculations

Q5. Which historical example is often cited to illustrate why VaR calculated on actual
P&L can under-state risk for dynamic trading strategies?
A. LTCM collapse
B. Capital Decimation Partners case discussed by Lo (2001)
C. Flash-crash of 2010
D. Barings collapse

Q6. In sensitivity analysis, “Euler allocation” is mainly used to:
A. distribute backtesting exceptions across trading desks
B. apportion portfolio VaR into marginal contributions by position
C. calibrate GPD tails in POT-EVT models
D. validate filtered historical simulation shocks

Q7. The marginal VaR of a position is defined as:
A. the regression β in ΔVi = α + βΔVP + ε
B. the standalone VaR of that position
C. the notional value of the position divided by portfolio VaR
D. the expected shortfall contribution at 97.5%
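
Euler allocation (Q6) and marginal VaR (Q7) can be illustrated under a multivariate-normal assumption: marginal VaR of position i is z·(Σw)ᵢ/σ_P, component VaR is the exposure times the marginal, and the components sum exactly to portfolio VaR. The two-asset exposures and covariance matrix below are hypothetical:

```python
import math

z = 2.326                      # approx. 99% one-tailed normal quantile
w = [10.0, 5.0]                # hypothetical dollar exposures ($m)
cov = [[0.0004, 0.0001],       # hypothetical daily covariance matrix of returns
       [0.0001, 0.0009]]

# Portfolio variance w' Σ w and volatility
sigma_w = [sum(cov[i][j] * w[j] for j in range(2)) for i in range(2)]  # Σw
var_p = sum(w[i] * sigma_w[i] for i in range(2))
sigma_p = math.sqrt(var_p)

portfolio_var = z * sigma_p
marginal = [z * sigma_w[i] / sigma_p for i in range(2)]   # dVaR/dw_i
component = [w[i] * marginal[i] for i in range(2)]        # Euler allocation

# Full-allocation property: component VaRs sum to the portfolio VaR
assert abs(sum(component) - portfolio_var) < 1e-9
```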

Q8. When the regression approach for component VaR (ΔVi versus ΔVP) is
infeasible due to sparse data, Tasche and Hallerbach recommend estimating the
component VaR by:
A. loading proxies into a multivariate GARCH
B. bootstrapping shocks from filtered historical simulation
C. inspecting the position’s loss on the day that determines historical-simulation
VaR
D. replacing missing returns with zeros

Q9. A key practical benefit of systematic sensitivity analysis is that it helps supervisors:
A. choose among different Kupiec critical values
B. prioritize model enhancements to capture omitted risk factors
C. derive analytic confidence-interval formulas
D. calibrate trader limits to expected shortfall

Q10. Which of the following is NOT an explicit challenge when constructing confidence intervals for VaR?
A. Estimating the pdf value f(VaR) when the underlying distribution is unknown
B. Non-normality of financial returns
C. Availability of closed-form DQ-test statistics
D. Dependence (serial correlation) in portfolio P&L

Q11. The Jorion (1996) asymptotic standard-error formula for a VaR quantile
requires knowledge of:
A. the tail index of a Pareto distribution
B. the pdf evaluated at the VaR estimate
C. Kupiec unconditional coverage statistic
D. the filtered shock ranking Q(1−c)
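
The Jorion (1996) formula (answer B) requires the density at the quantile: se(q̂) ≈ sqrt(p(1−p)/T) / f(q̂). A sketch, using a standard-normal density purely for illustration:

```python
import math

def normal_pdf(x: float) -> float:
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def quantile_se(p: float, T: int, pdf_at_q: float) -> float:
    """Asymptotic standard error of an empirical p-quantile estimate."""
    return math.sqrt(p * (1 - p) / T) / pdf_at_q

# Illustration: 1% quantile of a standard normal (about -2.326), T = 250 days.
q = -2.326
se = quantile_se(0.01, 250, normal_pdf(q))  # roughly 0.24 in return units
```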

Q12. Order-statistics confidence intervals widen markedly in stress periods mainly because:
A. the kernel bandwidth is too small
B. sample tails become thicker, raising sampling variability
C. filtered historical simulation under-scales volatility
D. GARCH(1,1) innovations become iid

Q13. When bootstrapping VaR for a GARCH(1,1) model, Christoffersen & Gonçalves
(2005) insist on re-estimating the variance equation within each resample to:
A. keep the independence assumption valid
B. incorporate parameter-estimation risk into the interval
C. avoid overfitting the historical shocks
D. enforce normal innovations

Q14. Empirical results in the chapter show that, for S&P 500 data, filtered historical-
simulation VaR produced the narrowest confidence intervals because FHS:
A. ignores volatility clustering
B. assumes a Gaussian shock distribution
C. scales the tail shocks by time-varying σT+1, yielding more efficient quantile
estimates
D. uses EVT to fit peaks-over-threshold tails

Q15. One persistent obstacle to benchmarking VaR models across banks is that:
A. the FS-128 template forces identical data windows
B. banks rarely run two parallel VaR engines long enough for statistical comparison
C. regulatory multipliers change daily
D. actual P&L cannot be observed at the desk level

Q16. The Lopez (1996) regulatory loss function penalizes models only when:
A. the bank’s VaR is exceeded
B. VaR is too conservative
C. expected shortfall is under-estimated
D. independence of exceptions is violated

Q17. Under that loss function, a model that systematically over-estimates VaR will:
A. have zero loss
B. be heavily penalized
C. show a higher dynamic-quantile (DQ) statistic
D. fail the Christoffersen conditional-coverage test

Q18. In the sign-test benchmarking approach, the null hypothesis is that:
A. two VaR models have equal median loss differential
B. exceedances are iid Bernoulli(0.01)
C. PITs are U(0,1)
D. component VaRs sum to total VaR

Q19. In Berkowitz & O’Brien (2002), a simple GARCH(1,1) VaR based on actual
trading P&L often outperformed banks’ internal VaRs on accuracy because internal
models were:
A. too aggressive in benign periods
B. conservative due to regulatory incentives
C. lacking any volatility updating
D. fitted with t-copulas

Q20. When comparing positional VaR to P&L-based GARCH VaR using the Lopez
check-loss, the chapter finds that positional VaR underperforms at most banks
because:
A. age-weighted volatility exaggerates recent moves
B. positional VaR is intentionally conservative and therefore less accurate in point
prediction
C. missing fee income inflates tail losses
D. exception clustering invalidates logistic DQ

Q21. The dynamic quantile (DQ) test of Engle & Manganelli (2004) improves upon
basic exception counting by:
A. allowing for regression of PITs on lagged information variables
B. replacing VaR with expected shortfall
C. estimating GPD tails above a threshold
D. transforming exceedances into durations
Q22. One limitation of duration-based tests (Christoffersen & Pelletier 2004) is that
they are:
A. computationally infeasible for daily data
B. rarely implemented in practice despite power advantages
C. applicable only to ES, not VaR
D. valid only under normality

Q23. Pajhede (2017) generalizes Christoffersen’s conditional-coverage test by:
A. using spectral backtests with PIT weighting
B. counting exceptions in a K-day window to capture higher-order dependence
C. bootstrapping GARCH residuals
D. reversing the Mincer-Zarnowitz regression

Q24. A chief advantage of the VaR-quantile regression (VQR) test of Gaglianone et al. (2011) is that it can detect:
A. both bias (α0 ≠ 0) and slope mis-calibration (α1 ≠ 1) in VaR forecasts
B. clusters of large negative PITs
C. false independence due to seasonal effects
D. spectral mis-weighting in expectile models

Q25. According to the sample backtests (2013-2016), most U.S. BHC trading VaR
models were:
A. aggressive during benign markets and conservative in stress
B. conservative overall, with average exceedance 0.4% versus 1% expected
C. perfectly calibrated at desk level
D. failing unconditional coverage at the 90% level

Q26. The finding that logistic DQ, conditional-coverage and unconditional-coverage tests give similar failure counts suggests that:
A. exception independence is the main driver of test rejection
B. banks game tests by smoothing VaR day-to-day
C. adding information variables to DQ yields little incremental detection power in
practice
D. quantile-regression tests are redundant

Q27. One reason the VQR test failed nineteen of twenty firms while exception-based
tests flagged only a few is that VQR:
A. ignores PIT uniformity
B. evaluates the full conditional quantile function, not just the 1% tail
C. assumes heavy-tailed ν=5 t-errors
D. uses out-of-sample forecasts only

Q28. When benchmarking VaR against a GARCH VaR on actual P&L, the sign test
showed positional VaR dominated only 1 out of 19 desks. This indicates that:
A. conservative bias can reduce predictive accuracy
B. filtered historical simulation always outperforms GARCH
C. regulatory multipliers were too low
D. PITs showed severe left-tail clustering

Q29. For expected-shortfall models under the Fundamental Review of the Trading
Book (FRTB), which backtesting element translates most directly from VaR
validation practice?
A. Order-statistics confidence-interval estimation
B. Kupiec’s unconditional-coverage test
C. Lopez regulatory loss function
D. Sensitivity analysis for omitted risk factors

Q30. The chapter recommends that future benchmarking of ES models should exploit spectral backtests (Gordy & McNeil 2020) because they:
A. entirely replace PITs with exceedances
B. allow the user to weight parts of the loss distribution deemed most important
C. eliminate need for simulated P&L
D. are specified in Basel II Annex 4
[MR-7] Beyond Exceedance-Based Backtesting of Value-at-Risk
Models: Methods for Backtesting the Entire Forecasting
Distribution Using Probability Integral Transform
1. Which of the following properties must hold for a correctly specified VaR
model at confidence level α when using Probability Integral Transforms
(PITs)?
A. PITs follow a t-distribution with ν degrees of freedom
B. PITs are independent and identically distributed U(0,1)
C. PITs are normally distributed with mean α and variance α(1–α)
D. PITs follow a chi-square distribution with one degree of freedom

2. The Kupiec test for unconditional coverage evaluates:
A. Whether PITs are uncorrelated
B. Whether the observed exception rate equals the nominal rate
C. Whether exceptions cluster in time
D. Whether the PIT series follows a uniform distribution
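
The Kupiec statistic behind question 2 compares the observed exception rate x/T to the nominal rate p via a likelihood ratio, LR_uc = −2[ln((1−p)^(T−x) p^x) − ln((1−x/T)^(T−x) (x/T)^x)], which is chi-square(1) under the null. A minimal sketch (valid for 0 < x < T):

```python
import math

def kupiec_lr_uc(x: int, T: int, p: float) -> float:
    """Kupiec likelihood-ratio statistic for unconditional coverage (0 < x < T)."""
    phat = x / T
    log_null = (T - x) * math.log(1 - p) + x * math.log(p)
    log_alt = (T - x) * math.log(1 - phat) + x * math.log(phat)
    return -2.0 * (log_null - log_alt)

# 6 exceptions in 250 days at 99% VaR: LR ~ 3.56, below the 5% chi-square(1)
# critical value of 3.84, so correct coverage is not rejected.
lr = kupiec_lr_uc(6, 250, 0.01)
```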

3. Christoffersen’s independence test for VaR exceptions is based on:
A. A Poisson process for exception counts
B. A first-order Markov chain for exception indicators
C. A linear probability model of PITs
D. The Ljung–Box test on PITs

4. In the joint conditional coverage test (Christoffersen), the test statistic is:
A. LRuc – LRind
B. LRuc + LRind
C. LRuc × LRind
D. max(LRuc, LRind)

5. Probability Integral Transforms (PITs) are defined as:
A. PITt = 1 if loss > VaRt, 0 otherwise
B. PITt = Ft(P&L t) where Ft is the model’s forecast CDF
C. PITt = −VaRt
D. PITt = (P&L t − VaRt)/σ
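
Definition B can be checked numerically: if the P&L really is drawn from the forecast distribution F, then PIT = F(P&L) is U(0,1), with mean 1/2 and variance 1/12 (the properties tested in questions 13, 21 and 28). A sketch using a normal forecast model that is correct by construction:

```python
import math
import random

def norm_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(42)
# Simulate P&L from the same N(0, 1) model used as the forecast CDF,
# so the model is correctly specified by construction.
pnl = [random.gauss(0.0, 1.0) for _ in range(20000)]
pits = [norm_cdf(x) for x in pnl]

mean_pit = sum(pits) / len(pits)
var_pit = sum((u - mean_pit) ** 2 for u in pits) / len(pits)
# mean_pit is close to 0.5 and var_pit close to 1/12, as uniformity requires
```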

6. Which histogram feature indicates that a VaR model’s PITs are too
conservative in the tails?
A. A uniform flat shape
B. Spikes at both ends of the distribution
C. A hump in the center of the distribution
D. A left-skewed distribution

7. The PIT backtesting approach provides information on model accuracy at:
A. Only the 99% quantile
B. Only the mean and variance
C. Any percentile of the forecast distribution
D. Only the tails beyond the 99% VaR
8. Which test among the following assesses deviations from uniformity of PITs
using a regression on lagged PITs and lagged P&L?
A. Kupiec test
B. PCA test
C. Dynamic Quantile (DQ) test
D. Ljung–Box test

9. In the context of PIT backtesting, a significant positive skewness of PITs implies:
A. The model distribution is too wide (conservative)
B. The model distribution is too narrow (aggressive)
C. PITs are uniformly distributed
D. PITs perfectly match realized P&L

10. Which statistical test places extra weight on tails when testing uniformity of
PITs?
A. Kolmogorov–Smirnov
B. Anderson–Darling
C. Christoffersen independence
D. Ljung–Box

11. A duration-based backtesting test evaluates:
A. The number of exceptions only
B. The time between consecutive exceptions
C. The average PIT value
D. The conditional quantile at each time step

12. The probability integral transform of a correct model’s P&L should be:
A. Exponentially distributed
B. Normally distributed
C. Uniformly distributed
D. Bernoulli

13. Which of these moment statistics of PITs should equal that of a Uniform(0,1)?
A. Kurtosis = 0
B. Mean = 0.5
C. Skewness = 1
D. Median = 0

14. The Cramér–von Mises test statistic for uniformity of PITs compares:
A. Empirical CDF to a Gaussian CDF
B. Empirical CDF to the uniform CDF using squared deviations
C. Two successive PITs for autocorrelation
D. PIT histogram height against expected frequency

15. When applying the Ljung–Box test to the series of VaR exceptions, what null
hypothesis is tested?
A. No excess kurtosis in exceptions
B. Exceedances occur with correct frequency
C. No autocorrelation in the exception indicator series
D. PITs are uniformly distributed

16. In PIT backtesting, “clustered exceptions” indicate:
A. Independence property holds
B. Conservative model specification
C. Aggressive model specification
D. Violation of the independence property

17. A Q–Q plot of PITs that bows above the 45° line in the tails suggests:
A. Too few extreme losses (tails are understated)
B. Too many extreme losses (tails are overstated)
C. Perfect model fit
D. Constant coverage

18. The series of 1-day 99% VaR exceptions should form a Bernoulli(0.01)
process if the model is:
A. Unbiased only
B. Independently and correctly specified
C. Conditionally autoregressive
D. Filtered historical simulation

19. Which test uses the empirical CDF of PITs and compares it to the theoretical
uniform CDF via supremum distance?
A. Anderson–Darling
B. Cramér–von Mises
C. Kolmogorov–Smirnov
D. Ljung–Box

20. The term “exceedance” in VaR backtesting refers to:
A. P&L above the mean
B. P&L greater than zero
C. Realized loss exceeding the VaR estimate
D. PIT value above 0.5

21. Which of the following is NOT a property that correctly specified PITs must
satisfy?
A. i.i.d. U(0,1)
B. Mean = 0.5
C. Maximum = 1
D. Variance = 1/12

22. In a 99% VaR backtest over 250 days, about how many exceedances are
expected if the model is accurate?
A. 1
B. 2.5
C. 10
D. 25

23. The independence property of PITs implies that each PIT:
A. Depends on the previous PIT
B. Is uncorrelated with all other PITs
C. Follows a Gaussian distribution
D. Equals the VaR hit indicator

24. A backtesting test that evaluates conditional coverage jointly tests:
A. Only independence
B. Only unconditional coverage
C. Both independence and correct exception frequency
D. Tail thickness of PITs

25. Which empirical evidence suggests that firm-level PITs deviate less from
uniformity than portfolio-level PITs?
A. KDE of exceptions
B. Histogram of P&L
C. PIT distribution and Q–Q plots
D. Autocorrelation function

26. Applying a regression of the PIT on its lagged values tests:
A. Uniformity of PITs
B. Independence of PITs
C. Mean PIT = 0.5
D. Correct number of exceedances

27. Which of these is an alternative to exception counts for model backtesting?
A. Backward Euler test
B. Profit-and-loss attribution
C. Probability integral transform
D. Bootstrap VaR

28. Significant deviations in the variance of PITs from 1/12 indicate:
A. Unconditional coverage violation
B. Independence violation
C. Aggressive or conservative misspecification of variance
D. Perfect model calibration

29. The conditional tail histograms of PITs focus on:
A. Central 50% of PITs
B. Upper 95% tail of PITs
C. Lower 5% tail of PITs
D. Entire PIT range only

30. The primary advantage of PIT-based backtesting over exceedance-based backtesting is:
A. Simpler calculation
B. Focus on a single percentile
C. Assessment of forecast distribution over all percentiles
D. Independence from model assumptions
[MR-8] Correlation Basics: Definitions, Applications, and
Terminology
1. An investor has a $10 million position in Spanish bonds and purchases a CDS
from Deutsche Bank to hedge default risk. If the default correlation between Spain
and Deutsche Bank increases from 0.3 to 0.7, what is the most likely impact on the
investor's position?

A) The CDS spread decreases and the investor experiences a paper gain
B) The CDS spread increases and the investor experiences a paper loss
C) The CDS spread decreases and the investor experiences a paper loss
D) The CDS value increases due to higher counterparty protection

2. A correlation swap has a notional amount of $5 million, fixed correlation rate of 0.25, and realized correlations for a 3-asset portfolio of ρ₂₁ = 0.6, ρ₃₁ = 0.4, and
ρ₃₂ = 0.2. What is the payoff for the correlation buyer?

A) $416,667
B) $500,000
C) $583,333
D) $750,000
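
Under the usual convention, the correlation-swap buyer receives notional × (ρ_realized − ρ_fixed), where ρ_realized is the equally weighted average of the pairwise correlations, 2/(n(n−1)) × Σ ρᵢⱼ. A sketch with the figures from question 2 (the same averaging applies to the four-asset case in question 23):

```python
notional = 5_000_000
rho_fixed = 0.25
pairwise = [0.6, 0.4, 0.2]  # rho_21, rho_31, rho_32

# Equally weighted realized correlation over n(n-1)/2 = 3 pairs
rho_realized = sum(pairwise) / len(pairwise)          # 0.40
payoff_buyer = notional * (rho_realized - rho_fixed)  # $750,000
```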

3. For a quanto option on the Nikkei index with USD/JPY currency exposure, if the
correlation between the Nikkei returns and USD/JPY exchange rate is strongly
negative, the quanto option price will be:

A) Lower due to favorable currency hedging effects
B) Higher due to increased currency conversion costs
C) Unaffected as correlation doesn't impact quanto pricing
D) Lower due to reduced implied volatility

4. A two-asset portfolio has $12 million in Asset A (daily volatility 2.5%) and $8
million in Asset B (daily volatility 1.8%). With correlation of 0.4, what is the 10-day
VaR at 95% confidence level (α = 1.645)?

A) $1.89 million
B) $2.34 million
C) $2.67 million
D) $3.12 million

5. During the May 2005 correlation crisis, hedge funds experienced losses on both
equity and mezzanine CDO tranches when GM and Ford were downgraded. This
occurred because:

A) Equity tranche spreads decreased while mezzanine spreads increased
B) Both tranche spreads increased due to higher correlation
C) Equity tranche spreads increased while mezzanine spreads decreased
D) Both tranche spreads decreased due to lower correlation
6. An exchange option with payoff max(0, S₂ - S₁) has underlying assets with
correlation of -0.8. If correlation increases to +0.2, the option value will:

A) Increase significantly due to higher spread probability
B) Decrease significantly due to reduced spread variance
C) Remain unchanged as correlation doesn't affect exchange options
D) Increase slightly due to improved hedging efficiency

7. A commercial bank has made equal $4 million loans to three companies, each
with 6% default probability. Using the binomial correlation model with correlation
coefficient 0.5, what is the joint default probability?

A) 0.216%
B) 1.080%
C) 1.836%
D) 2.592%

8. According to Basel III requirements, if a bank's 10-day VaR is $2.5 million, the
minimum regulatory capital charge for trading book assets is:

A) $2.5 million
B) $5.0 million
C) $7.5 million
D) $10.0 million

9. In the global financial crisis of 2007-2009, correlations between Dow Jones stocks
increased from pre-crisis levels of 27% to over 50%. This phenomenon is primarily
an example of:

A) Concentration risk manifestation


B) Credit risk migration
C) Systemic risk amplification
D) Operational risk contagion

10. For a portfolio with correlation coefficient ρ = 0.6, asset weights w₁ = 0.4, w₂ =
0.6, and asset volatilities σ₁ = 15%, σ₂ = 20%, the portfolio volatility is closest to:

A) 14.2%
B) 15.8%
C) 17.1%
D) 18.6%

11. A variance swap strategy involves buying variance swaps on individual index
components while selling variance swaps on the index. This strategy profits when:

A) Index correlation increases above the correlation implied in swap prices
B) Index correlation decreases below the correlation implied in swap prices
C) Individual component volatilities increase relative to index volatility
D) Index volatility increases relative to component volatilities

12. Wrong-way risk (WWR) in a CDS transaction occurs when:

A) The reference entity and counterparty have negative correlation
B) The reference entity and counterparty have positive correlation
C) The CDS spread moves inversely to credit quality
D) The recovery rate assumptions are incorrect

13. The concentration ratio for a lender with loans of $5M, $3M, $2M, $7M, and $3M
to five different borrowers is:

A) 0.20
B) 0.25
C) 0.35
D) 0.40
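
Taking the concentration ratio as the largest single exposure divided by total exposure (one common definition), question 13 works out as:

```python
loans = [5, 3, 2, 7, 3]  # $ millions to five borrowers

# Largest single exposure relative to total exposure
concentration_ratio = max(loans) / sum(loans)  # 7 / 20 = 0.35
```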

14. A correlation option strategy that benefits from HIGHER correlation between
underlying assets is:

A) Option on the better of two assets
B) Exchange option
C) Spread call option
D) Option on the worse of two assets

15. For investment-grade bonds, the default term structure typically:

A) Decreases with time to maturity due to improving credit quality
B) Increases with time to maturity due to longer exposure periods
C) Remains constant across all maturities
D) Shows an inverted pattern with highest risk in near-term

16. Expected Shortfall (ES) differs from VaR primarily because ES:

A) Measures average losses beyond the VaR threshold
B) Uses different confidence levels than VaR
C) Incorporates correlation effects while VaR does not
D) Is always higher than VaR for the same confidence level

17. In the one-factor Gaussian copula model used in Basel frameworks, if the asset
correlation parameter increases, the capital requirement for a credit portfolio will:

A) Decrease due to diversification benefits
B) Increase due to higher systematic risk
C) Remain unchanged as correlation doesn't affect capital
D) Decrease initially then increase at higher correlation levels

18. The non-monotonic relationship between CDS spreads and correlation occurs
because:

A) Higher correlation always reduces CDS value
B) At very negative correlations, either party may default but not both
C) CDS pricing models are inherently flawed
D) Market liquidity affects pricing more than correlation
19. During the 2007-2009 financial crisis, super-senior CDO tranches lost up to 20%
of their value primarily due to:

A) Increased default probabilities in underlying mortgages only
B) Increased correlation between CDO tranches destroying protection
C) Rating agency downgrades affecting market perception
D) Liquidity issues in secondary markets

20. A pairs trading strategy profits when:

A) The correlation between two historically correlated assets temporarily decreases
B) The correlation between two historically correlated assets temporarily increases
C) The spread between two historically correlated assets mean-reverts
D) The volatility of both assets in the pair increases simultaneously

21. The leverage effect in CDO structures during the crisis was exemplified by
Leveraged Super-Senior (LSS) tranches with leverage ratios of:

A) 2-5 times
B) 5-10 times
C) 10-20 times
D) 20-50 times

22. Migration risk in credit portfolios is most directly related to correlation through:

A) The tendency for credit downgrades to cluster during stress periods
B) The mathematical relationship between probability of default and correlation
C) The impact of correlation on recovery rates
D) The regulatory requirements for correlated exposures

23. A correlation swap with 4 assets where realized correlations are ρ₂₁=0.7,
ρ₃₁=0.5, ρ₄₁=0.3, ρ₃₂=0.6, ρ₄₂=0.4, ρ₄₃=0.2. The realized correlation is:

A) 0.45
B) 0.48
C) 0.52
D) 0.57

24. In the London Whale case (JPMorgan 2012), the correlation strategy involved:

A) Buying CDS on index, selling CDS on individual components
B) Selling CDS on index, buying CDS on individual components
C) Buying correlation swaps on credit indices
D) Selling variance swaps on individual credits

25. The copula correlation model's weakness during the CDO crisis was primarily:

A) Inability to handle more than 100 assets in a portfolio
B) Assumption of constant correlation across market conditions
C) Computational complexity leading to approximation errors
D) Failure to account for recovery rate variations
26. For a binomial correlation model with PX = 8%, PY = 12%, and
correlation ρ = 0.3, the joint default probability P(X ∩ Y) is closest to:

A) 0.96%
B) 1.44%
C) 1.89%
D) 2.31%

27. The energy sector's correlation characteristics with other sectors make it
valuable for portfolio diversification because:

A) Energy has negative correlation with most other sectors
B) Energy has zero or low correlation with most other sectors
C) Energy correlation is time-varying and unpredictable
D) Energy stocks have lower individual volatilities

28. A multi-asset call option on the maximum of two assets with strike K has payoff
max[0, max(S₁, S₂) - K]. This option's value increases when correlation:

A) Increases, as both assets move together upward
B) Decreases, as higher probability one asset will be significantly above K
C) Remains constant, as correlation doesn't affect maximum payoffs
D) Approaches perfect positive correlation

29. In stress testing scenarios, correlation matrices typically exhibit:

A) Decreased correlations as markets become more efficient
B) Increased correlations approaching unity during severe stress
C) Random correlation changes with no discernible pattern
D) Stable correlations regardless of market conditions

30. The primary difference between static and dynamic financial correlations is:

A) Static correlations use historical data while dynamic use forward-looking models
B) Static correlations measure association within fixed periods while dynamic
measure time-evolution
C) Static correlations apply to bonds while dynamic apply to equities
D) Static correlations are more accurate for risk management purposes
[MR-9] Empirical Properties of Correlation: How Do Correlations
Behave in the Real World?
1. A study of Dow Jones stocks from 1972 to 2017 revealed correlation levels of
37.0% during recessions, 33.0% during normal periods, and 27.5% during
expansionary periods. What is the primary reason for the lowest correlations during
expansionary periods?

A) Increased market volatility reduces statistical correlation measurements
B) Stock valuations are driven more by idiosyncratic rather than macroeconomic factors
C) Higher trading volumes create noise in correlation calculations
D) Monetary policy interventions distort natural correlation patterns

2. Using the regression equation Y = 0.256 - 0.7903X for mean reversion analysis,
where Y = St - St-1 and X = St-1, what is the expected correlation next month if the
current month's correlation is 25% and the long-run mean is 35%?

A) 32.90%
B) 33.45%
C) 34.21%
D) 35.00%

3. A risk manager observes that correlation volatility during different economic states
shows: recession (80.5%), normal period (83.0%), and expansion (71.2%). The
higher volatility during normal periods compared to recessions is most likely
because:

A) Statistical measurement errors are highest during normal periods
B) Investors have more uncertainty about market direction during normal times
C) Central bank policy is most active during normal economic periods
D) Trading algorithms perform poorly during stable economic conditions

4. In the empirical study of Dow correlations from 1972-2017, the mean reversion
rate was found to be 79.03%. If this relationship holds and the current correlation is
40% while the long-run mean is 32%, what is the expected change in correlation for
the next period?

A) -6.32%
B) -8.00%
C) +6.32%
D) +8.00%

5. The autocorrelation for a one-period lag in the Dow correlation study was 20.97%.
This autocorrelation rate combined with the mean reversion rate demonstrates which
fundamental relationship?

A) Autocorrelation × Mean reversion rate = 1
B) Autocorrelation + Mean reversion rate = 1
C) Autocorrelation - Mean reversion rate = 0
D) Autocorrelation² + Mean reversion rate² = 1

6. Distribution fitting tests using Kolmogorov-Smirnov, Anderson-Darling, and chi-squared methods found that equity correlations are best fitted by which distribution?

A) Normal distribution
B) Lognormal distribution
C) Beta distribution
D) Johnson SB distribution

7. A correlation analyst runs an autocorrelation test for various lag periods and finds
the highest autocorrelation of 26% occurs at a 2-month lag rather than a 1-month
lag. This pattern suggests:

A) The correlation measurement methodology is flawed
B) There are seasonal effects in correlation patterns
C) Correlation persistence has complex temporal dynamics
D) Market microstructure noise affects short-term measurements

8. In the empirical analysis of 426,300 correlation values between Dow stocks, what
percentage were positive correlations?

A) 69.4%
B) 73.8%
C) 77.2%
D) 81.6%

9. Bond correlations were found to have a mean reversion rate of 26% compared to
equity correlations' 79%. This difference primarily indicates that:

A) Bond correlations are more persistent than equity correlations
B) Bond markets are less efficient than equity markets
C) Statistical measurement errors are higher for bonds
D) Bond correlations exhibit stronger autocorrelation than equity correlations

10. The correlation volatility preceding recessions showed negative changes in 5 out
of 6 cases studied. The exception was the 1990-1991 recession with a +0.06%
change. This anomaly most likely reflects:

A) Data measurement errors during that specific period
B) Different underlying economic causes for that particular recession
C) The mild nature of that recession compared to others
D) Monetary policy interventions that distorted normal patterns

11. Using the mean reversion formula St - St-1 = a(μ - St-1), if μ = 30%, St-1 = 45%,
and the mean reversion rate a = 0.6, what is the expected value of St?

A) 36%
B) 39%
C) 42%
D) 45%
12. The study found that default probability correlations had an average of 30% with
correlation volatility of 88%. Compared to equity correlations (34.83% average,
79.73% volatility), this suggests default correlations are:

A) More stable but at lower levels than equity correlations
B) More volatile but at lower levels than equity correlations
C) Less volatile but at higher levels than equity correlations
D) Both less stable and at higher levels than equity correlations

13. The relationship between correlation level and correlation volatility was found to
be positive. In a portfolio risk management context, this relationship implies:

A) Higher correlations provide more predictable risk estimates
B) Risk models should incorporate correlation uncertainty that increases with correlation level
C) Low correlation periods are optimal for portfolio rebalancing
D) Correlation forecasting becomes easier during high correlation periods

14. The generalized extreme value (GEV) distribution was found to best fit which
type of correlation data?

A) Equity correlations
B) Bond correlations
C) Default probability correlations
D) Currency correlations

15. A risk manager needs to forecast correlation for the next month given current
correlation of 28%, long-run mean of 34%, and estimated mean reversion rate of
75%. The forecasted correlation should be:

A) 30.5%
B) 31.0%
C) 32.5%
D) 33.0%

16. The severe recessions of 1973-1974 and 1981-1982 were both caused by oil
price shocks and showed GDP declines of -11.93% and -12.00% respectively. The
correlation volatility changes preceding these recessions were -7.22% and -4.65%.
This pattern suggests:

A) Oil price shocks create unique correlation patterns
B) The magnitude of recession severity is inversely related to correlation volatility changes
C) Correlation volatility is a reliable leading indicator of recession severity
D) External supply shocks affect correlation patterns differently than demand shocks

17. The study period from 1972-2017 included 534 months resulting in 480,600
monthly correlations (900 × 534). The removal of diagonal unity values left 426,300
correlations for analysis. This methodology ensures:

A) Equal weighting of all stock pairs in the analysis
B) Elimination of survivorship bias in the sample
C) Removal of spurious correlations from the dataset
D) Prevention of autocorrelation in the correlation measurements

18. Mean reversion in correlations exhibits the mathematical relationship ∂(St - St-1)/∂St-1 < 0. This partial derivative condition specifically means:

A) Correlations always decrease over time
B) High correlations in one period lead to lower correlations in the next period
C) The change in correlation is negatively related to the previous period's correlation
level
D) Correlation volatility decreases as correlation levels increase

19. The autocorrelation decay pattern from 26% at 2-month lag to approximately
10% at 10-month lag indicates:

A) Correlation shocks have temporary rather than permanent effects
B) Seasonal patterns dominate long-term correlation dynamics
C) Market efficiency improves at longer time horizons
D) Statistical measurement errors increase with longer lags

20. In the regression Y = 0.273 - 0.78X for mean reversion, if current correlation is
30% and long-run mean is 35%, the expected correlation change is:

A) +2.7%
B) +3.9%
C) +5.2%
D) +6.8%

21. The Johnson SB distribution's superiority in fitting equity correlation data over
normal, lognormal, and beta distributions suggests:

A) Equity correlations have fat tails and skewness not captured by simpler
distributions
B) The bounded nature of correlations requires specialized distribution forms
C) Traditional financial assumptions about normality fail for correlation data
D) All of the above

22. Bond correlation levels (41.67%) being higher than equity correlations (34.83%)
while having lower volatility (63.74% vs 79.73%) suggests:

A) Bond markets are more integrated than equity markets
B) Fixed income instruments have more systematic risk factors
C) Interest rate environments create more stable correlation patterns
D) Bond correlations are subject to different macroeconomic drivers

23. The observation that correlation volatility typically decreases before recessions
(except 1990-1991) provides insight for:

A) Early warning indicators of economic downturns
B) Portfolio rebalancing timing strategies
C) Monetary policy effectiveness measures
D) Market crash prediction models
24. Using the discrete Vasicek process St - St-1 = a(μS - St-1)Δt + σSε√Δt,
if we ignore the stochastic term and set Δt = 1, a mean reversion
parameter of a = 1 means:

A) No mean reversion occurs
B) Partial mean reversion at 50% rate
C) Complete mean reversion to long-term mean in one period
D) Exponential mean reversion at accelerating rate

25. The finding that 77.23% of Dow stock correlations were positive over the 1972-
2012 period most likely reflects:

A) Systematic measurement bias in correlation calculations
B) Common exposure to macroeconomic factors across stocks
C) Industry concentration within the Dow index selection
D) Survivorship bias from including only successful companies

26. The finding that default probability correlations show distribution properties similar to equity correlations (both best fitted by Johnson SB) but different from bond correlations (GEV) suggests:

A) Credit risk and equity risk share similar underlying factor structures
B) Default probabilities and stock returns are driven by identical processes
C) Bond correlations are fundamentally different from other asset class correlations
D) Distribution choice is primarily determined by sample size rather than underlying
economics

27. The polynomial trend line of order 4 applied to the correlation time series data
serves to:

A) Remove seasonal effects from correlation measurements
B) Identify long-term structural changes in correlation patterns
C) Smooth short-term noise while preserving major trends
D) Normalize correlations across different economic periods

28. A mean reversion rate of 77.51% for equity correlations combined with long-run
mean of 34.83% implies that extreme correlation events:

A) Have permanent effects on future correlation levels
B) Revert quickly to normal levels within 1-2 periods
C) Create structural breaks in correlation patterns
D) Are primarily driven by measurement errors

29. The relationship between the state of the economy and correlation volatility
showing highest volatility during normal periods rather than recessions suggests:

A) Recession periods have more predictable correlation patterns
B) Economic uncertainty is highest during transitional periods
C) Market participants have clearer expectations during extreme economic states
D) Correlation measurement becomes more precise during crisis periods
30. The empirical finding that correlation levels and volatility exhibit a positive
relationship has implications for risk management because:

A) Risk models can assume constant correlation volatility across different correlation
levels
B) Higher correlation periods require additional uncertainty adjustments in VaR
calculations
C) Portfolio diversification benefits are most reliable during high correlation periods
D) Correlation forecasting accuracy improves during low volatility periods
[MR-10] Financial Correlation Modeling — Bottom-Up Approaches
1. In the Heston (1993) correlation model, the instantaneous correlation
between Brownian motions dz₁(t) and dz₂(t) is defined as Corr[dz₁(t),
dz₂(t)] = ρdt. To allow for negative correlation, the model uses the identity
dz₁(t) = α dz₂(t) + √(1-α²) dz₃(t). What value of α corresponds to perfect
negative correlation?

A) α = 0
B) α = -1
C) α = 1
D) α = -0.5
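The construction in question 1 can be checked by simulation: if dz₂ and dz₃ are independent standard normals, then α·dz₂ + √(1-α²)·dz₃ is again standard normal and has correlation α with dz₂, so α = -1 yields perfect negative correlation. A quick Monte Carlo sketch (an illustration of the identity, not of the full Heston model; the choice α = -0.8 and the sample size are arbitrary):

```python
import math
import random

random.seed(42)
alpha = -0.8      # any value in [-1, 1]; alpha = -1 gives perfect negative correlation
n = 100_000

# Independent standard normal draws for dz2 and dz3
z2 = [random.gauss(0, 1) for _ in range(n)]
z3 = [random.gauss(0, 1) for _ in range(n)]
# The identity: z1 = alpha * z2 + sqrt(1 - alpha^2) * z3
z1 = [alpha * a + math.sqrt(1 - alpha**2) * b for a, b in zip(z2, z3)]

# Sample correlation between z1 and z2 should be close to alpha
m1, m2 = sum(z1) / n, sum(z2) / n
cov = sum((a - m1) * (b - m2) for a, b in zip(z1, z2)) / n
var1 = sum((a - m1) ** 2 for a in z1) / n
var2 = sum((b - m2) ** 2 for b in z2) / n
print(cov / math.sqrt(var1 * var2))  # approximately -0.8
```

Note that var1 comes out close to 1, which is the point of question 25 later in this section: the weights α and √(1-α²) preserve unit variance.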

2. The original Heston model correlates two stochastic differential equations. Which
pair of financial variables does it primarily correlate?

A) Stock returns and interest rates
B) Stock returns and stochastic volatility
C) Interest rates and credit spreads
D) Default intensities and recovery rates

3. In the binomial correlation model of Lucas (1995), two entities X and Y have
default probabilities P(X) = 8% and P(Y) = 12%. If the joint default probability P(XY)
= 2%, what is the binomial correlation coefficient?

A) 0.234
B) 0.456
C) 0.612
D) 0.789

4. The binomial correlation approach is considered a limiting case of which broader correlation model?

A) Heston correlation model
B) Pearson correlation model
C) Gaussian copula model
D) Contagion correlation model

5. A copula function C transforms an n-dimensional function on the interval [0,1]ⁿ into which type of function?

A) An n-dimensional function on [0,∞)
B) A one-dimensional function on the unit interval [0,1]
C) A bivariate function on [-1,1]
D) A multivariate function on (-∞,∞)

6. In the Gaussian copula equation C^G[G₁(u₁),...,Gₙ(uₙ)] = Mₙ[N⁻¹(G₁(u₁)),...,N⁻¹(Gₙ(uₙ)); ρₘ], what does the term N⁻¹ represent?
A) The inverse of the multivariate normal distribution
B) The inverse of a univariate standard normal distribution
C) The negative of the normal distribution
D) The natural logarithm of the normal distribution

7. When applying a Gaussian default time copula to two companies with 1-year
cumulative default probabilities of 6.51% and 23.83% respectively, the mapped
standard normal percentiles are approximately -1.51 and -0.71. What is the primary
purpose of this mapping process?

A) To increase the default probabilities to more realistic levels
B) To convert probabilities to percentile-to-percentile correspondence with the standard normal distribution
C) To eliminate correlation effects between the companies
D) To adjust for time value of money effects

8. In copula modeling, if the mapped values Fᵢ⁻¹(Gᵢ(uᵢ)) are continuous, what property does the copula function C possess?

A) C is symmetric
B) C is unique
C) C is bounded
D) C is differentiable

9. For a survival probability calculation using continuous default intensity λᵢ(t), the probability that entity i survives until time t is expressed as Pr[τᵢ > t] = exp{-∫₀ᵗ λᵢ(s)ds}. In the case of constant default intensity, the correlated default time becomes:

A) τᵢ = -ln[Mₙ(·)]/λᵢ
B) τᵢ = ln[Mₙ(·)]/λᵢ
C) τᵢ = -Mₙ(·)/ln(λᵢ)
D) τᵢ = Mₙ(·) × λᵢ
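Under a constant intensity λᵢ, survival satisfies Pr[τᵢ > t] = e^(-λᵢt), so inverting a sampled survival percentile u ∈ (0,1) gives τᵢ = -ln(u)/λᵢ, with the copula-mapped sample playing the role of u. A minimal sketch under that assumption (the percentile 0.60 and intensity 5% below are hypothetical illustration values):

```python
import math

def default_time(u, lam):
    """Invert Pr[tau > t] = exp(-lam * t): a sampled percentile u gives tau = -ln(u)/lam."""
    return -math.log(u) / lam

# Hypothetical example: copula-sampled survival percentile 0.60, lambda = 5% per year
tau = default_time(0.60, 0.05)
print(round(tau, 2))  # 10.22 (years)
```

Plugging tau back in, exp(-0.05 * tau) recovers 0.60, confirming the inversion.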

10. The Gaussian copula's lack of tail dependence means that for any
correlation parameter ρ ∈ (-1, 1), the limit as y₁,y₂ → 0 of P(τ₁ < N₁⁻¹(y₁)|
τ₂ < N₂⁻¹(y₂)) equals:

A) ρ
B) 1
C) 0
D) -1

11. Which copula type exhibits high tail dependence, especially for negative
comovements, making it potentially suitable for financial crisis modeling?

A) Gaussian copula
B) Student's t copula
C) Gumbel copula
D) Frank copula
12. In the contagion correlation model of Davis and Lo, the latent variable
Z_i for entity i includes the term (1-X_i)[1-∏(1-X_j K_ij)]. What does the
parameter K_ij represent?

A) The probability that entity i defaults independently
B) The correlation coefficient between entities i and j
C) The contagion variable measuring how entity j's default impacts entity i's default
intensity
D) The recovery rate for entity i given entity j defaults

13. The Jarrow and Yu (2001) contagion model uses default intensity
equations λ_A(t) = a₁ + a₂1{τ_B ≤ t} and λ_B(t) = b₁ + b₂1{τ_A ≤ t}. The
indicator variable 1{τ_B ≤ t} takes value 1 when:

A) Entity B's default time exceeds time t
B) Entity B defaults before or at time t
C) Entity A defaults before entity B
D) The correlation between A and B exceeds the threshold

14. In Cholesky decomposition for a 3×3 correlation matrix S = MM^T, if c₁₁ = 1, c₂₁
= 0.3, and c₂₂ = 1, what is the value of m₂₂?

A) √(1 - 0.3²) = √0.91
B) √(1 + 0.3²) = √1.09
C) 0.3
D) 1 - 0.3² = 0.91

15. For generating correlated random samples using Cholesky decomposition, if M is the lower triangular matrix and e is a vector of independent standard normal variables, the correlated samples x are obtained as:

A) x = M^T e
B) x = M e
C) x = e M
D) x = M⁻¹ e
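Questions 14 and 15 can be verified directly: for a 2×2 correlation matrix with off-diagonal ρ, the lower-triangular Cholesky factor has m₁₁ = 1, m₂₁ = ρ, and m₂₂ = √(1 - ρ²), and correlated samples then follow as x = Me. A small sketch without external libraries:

```python
import math
import random

rho = 0.3
# Lower-triangular Cholesky factor M of S = [[1, rho], [rho, 1]], so that S = M M^T
M = [[1.0, 0.0],
     [rho, math.sqrt(1 - rho**2)]]
print(M[1][1])  # sqrt(0.91), approximately 0.9539

# x = M e turns independent standard normals e into samples with correlation rho
random.seed(0)
e = [random.gauss(0, 1), random.gauss(0, 1)]
x = [M[0][0] * e[0],
     M[1][0] * e[0] + M[1][1] * e[1]]
```

Multiplying M by its transpose reproduces the original correlation matrix, which is the defining check for the decomposition.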

16. A forward default probability for year 7, given 6-year cumulative default
probability Q₆ = 36.73% and 7-year cumulative default probability Q₇ = 40.97%,
equals 4.24%. The corresponding default intensity in year 7 is:

A) 4.24%
B) 6.70%
C) 2.46%
D) 8.15%
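The arithmetic behind question 16: the forward default probability for year 7 is Q₇ - Q₆, and one standard discretization of the year-7 default intensity conditions that probability on survival through year 6, i.e. λ₇ ≈ (Q₇ - Q₆)/(1 - Q₆). A quick check:

```python
Q6, Q7 = 0.3673, 0.4097  # 6- and 7-year cumulative default probabilities

forward_pd = Q7 - Q6               # unconditional forward default probability for year 7
intensity = forward_pd / (1 - Q6)  # conditional on surviving the first 6 years
print(round(forward_pd, 4), round(intensity, 4))  # 0.0424 0.067
```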

17. The "correlation smile" criticism of Gaussian copula calibration refers to traders:

A) Using the same correlation parameter across all CDO tranches
B) Randomly altering correlation parameters for different tranches to achieve desired spreads
C) Smiling when correlation models work perfectly
D) Using only positive correlation coefficients

18. In the Gaussian copula framework, when finding the joint default probability of
two companies with correlation ρ = 0.4, mapped standard normal values of -1.5133
and -0.7118, the joint probability involves:

A) A univariate normal distribution calculation
B) A bivariate normal distribution M₂ calculation
C) Simple multiplication of individual probabilities
D) Integration of the correlation coefficient

19. The persistence of contagion variable K_ij(t) in a dynamic setting may be modeled as an exponentially decreasing function K_ij(t) = e^(-g(t)t) where:

A) g(t) > 0 and ∂g/∂t > 0
B) g(t) < 0 and ∂g/∂t > 0
C) g(t) > 0 and ∂g/∂t < 0
D) g(t) < 0 and ∂g/∂t < 0

20. The "looping defaults" problem in symmetric contagion models occurs because:

A) Default times are not properly simulated
B) Correlation coefficients exceed unity
C) Circular dependence makes joint distribution construction complex
D) Recovery rates are assumed to be zero

21. For a company in distress, default probabilities typically show what pattern over
time?

A) Continuously increasing with maturity
B) Remaining constant across all time horizons
C) Higher in immediate future, then decreasing if company survives
D) Following a U-shaped pattern over time

22. In the SABR model extension of Heston's approach, the correlation is applied
between:

A) Stock prices and dividend yields
B) Stochastic interest rates and stochastic volatility
C) Credit spreads and recovery rates
D) Forward rates and spot rates

23. The one-factor Gaussian copula model assumes that correlations between any
two entities in a CDO portfolio:

A) Vary randomly across different asset pairs
B) Are determined by a single common factor
C) Must be estimated separately for each pair
D) Equal zero for diversification purposes
24. When deriving samples from an n-variate copula Mₙ(·) ∈ [0,1] using Cholesky decomposition, the sample includes default correlation via:

A) The individual default probabilities Q_i(t)
B) The default correlation matrix ρ_M of the n-variate standard normal distribution
C) The marginal distributions G_i(u_i)
D) The inverse transformation functions F_i^(-1)

25. The mathematical convenience of the identity dz₁(t) = α dz₂(t) + √(1-α²) dz₃(t) is that:

A) It automatically generates positive correlations
B) It ensures dz₁ remains standard normal for any α ∈ [-1,1] if dz₂ and dz₃
are standard normal
C) It eliminates the need for correlation matrices
D) It simplifies the calculation of joint probabilities

26. The primary limitation of static copula models in risk management is:

A) Computational complexity
B) Inability to handle more than two assets
C) Lack of stochastic processes for underlying variables like default intensity
D) Requirement for normally distributed marginal data

27. A Bernoulli random variable X_j in the Davis and Lo contagion model can take
values of 0 and 1. If Pr(X_j = 1) = q, then X_j represents:

A) The correlation coefficient between entities
B) The default event occurrence for entity j
C) The survival probability for entity j
D) The contagion intensity parameter

28. For investment grade bonds, default intensity functions typically:

A) Decrease monotonically with time


B) Remain constant across all maturities
C) Increase with time due to growing uncertainty
D) Follow a random walk pattern

29. The asymmetric dependence solution to the looping defaults problem in contagion models means:

A) Primary entities affect secondary entities, but not vice versa
B) All entities have equal impact on each other
C) Correlation coefficients must be negative
D) Default times are independently distributed

30. In the context of CDO valuation, the criticism that "traders randomly alter
correlation parameters for different tranches" most directly relates to:
A) The mathematical incorrectness of the copula approach
B) The difficulty in calibrating copula models to market prices
C) The assumption of normally distributed defaults
D) The computational burden of Monte Carlo simulations
