FRM2 - Ai
2. If you are using the Historical Simulation (HS) approach with 1000 loss
observations and want to find the VaR at the 99% confidence level, which
observation would you select?
A) The 1st highest loss observation
B) The 10th highest loss observation
C) The 11th highest loss observation
D) The 990th highest loss observation
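The quantile selection this question tests can be sketched directly. The loss sample below is hypothetical; note that texts differ on whether the 10th or the 11th highest loss is the 99% HS VaR with 1000 observations (one common convention takes the 11th, so that exactly 10 losses exceed the VaR), which is exactly the distinction the options probe.

```python
# Historical Simulation VaR quantile selection (hypothetical loss data).
# With n = 1000 losses at 99% confidence, 1% of the sample (10 points)
# lies beyond the VaR. Taking the 11th highest loss leaves exactly 10
# observations exceeding it; some texts instead take the 10th.
import numpy as np

rng = np.random.default_rng(42)
losses = rng.standard_t(df=5, size=1000)   # illustrative fat-tailed losses

sorted_losses = np.sort(losses)[::-1]      # largest loss first
var_11th = sorted_losses[10]               # 11th highest (index 10)
var_10th = sorted_losses[9]                # 10th highest (index 9)
```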
9. The text describes a practical method for estimating Expected Shortfall (ES)
which involves:
A) Averaging the 10 largest losses in the dataset.
B) Using a complex "closed-form" solution applicable to all distributions.
C) Slicing the tail into many segments and taking the average of the VaRs of
those segments.
D) Taking the VaR and multiplying it by a constant factor of 1.5.
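The "slice the tail" method in option C can be illustrated numerically. The sketch below assumes a standard normal P/L purely so the slice average can be checked against the known closed form; the confidence level and slice count are illustrative.

```python
# ES as the average of VaRs taken at many confidence levels sliced from
# the tail (the method option C describes). Standard-normal sketch.
import math
from statistics import NormalDist

nd = NormalDist()
alpha = 0.975
n_slices = 1000
# confidence levels at the midpoints of equal slices between alpha and 1
levels = [alpha + (1 - alpha) * (i + 0.5) / n_slices for i in range(n_slices)]
es = sum(nd.inv_cdf(p) for p in levels) / n_slices

# closed-form check for the standard normal: ES = phi(z_alpha) / (1 - alpha)
z = nd.inv_cdf(alpha)
es_exact = math.exp(-z * z / 2) / math.sqrt(2 * math.pi) / (1 - alpha)
```

As the number of slices grows, the slice average converges to the true tail expectation, which is why the method works for distributions with no closed-form ES.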
10. What does the "halving error" help a risk analyst determine?
A) The standard deviation of the portfolio.
B) Whether an estimate for a coherent risk measure has sufficiently
converged.
C) The 50% confidence level VaR.
D) An error in the source data.
11. The document provides the formula for VaR with normally distributed
Profit/Loss as VaR_α = -μ_P/L + σ_P/L · z_α. What does z_α represent?
A) The mean of the P/L distribution.
B) The standard deviation of the P/L.
C) The standard normal variate corresponding to confidence level α.
D) The initial portfolio value.
12. According to the appendix on preliminary data analysis, what is the first
and most important step when confronted with a new data set?
A) Immediately run a regression analysis.
B) "Eyeball" the data to see if it 'looks right' and to spot potential anomalies.
C) Calculate the lognormal VaR.
D) Fit the data to a Student-t distribution.
14. What does a QQ plot that is linear in the middle but has steeper slopes at
both ends suggest about the data compared to the reference distribution?
A) The data has thinner tails.
B) The data has heavier (fatter) tails.
C) The data's mean is zero.
D) The data is from a uniform distribution.
15. If the geometric return R_t is 0.05, what is the corresponding arithmetic
return r_t? (Hint: R = ln(1+r))
A) 0.0488
B) 0.0500
C) 0.0513
D) 0.0250
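The conversion in this question is a one-line inversion of the hint:

```python
# Invert R = ln(1 + r) to recover the arithmetic return from the
# geometric (log) return given in the question.
import math

R = 0.05
r = math.exp(R) - 1   # ≈ 0.0513, matching option C
```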
16. The document mentions that the relative accuracy of VaR and ES
estimators can be affected by the characteristics of the distribution. For
particularly heavy-tailed distributions, what did initial studies suggest?
A) VaR and ES estimators had identical standard errors.
B) VaR estimators had much bigger standard errors than ES estimators.
C) ES estimators had much bigger standard errors than VaR estimators.
D) Neither measure could be estimated.
17. Which of the following is listed as one of the three "core issues" to address
when measuring market risk?
A) Which software to use?
B) Which data provider to choose?
C) Which level of analysis (portfolio or position)?
D) Which programming language to implement?
20. For what reason should arithmetic returns generally not be used when
dealing with long time horizons?
A) They are too volatile.
B) They implicitly assume that interim income is not reinvested.
C) They require the use of logarithms, which is computationally expensive.
D) They always result in negative asset values.
SUMMARY: 20/20
[MR-2] Non-parametric Approaches
5. According to the text, which method for estimating confidence intervals for
VaR uses the theory of quantiles to derive a complete distribution function for
the VaR estimate itself?
A) The Bootstrap method
B) The Order-Statistics (OS) approach
C) The Delta-Normal approach
D) The Filtered Historical Simulation (FHS) approach
7. How does the Hull and White (HW) volatility-weighted approach adjust
historical returns?
A) By multiplying them by the age of the observation.
B) By dividing them by the historical risk-free rate.
C) By scaling them using the ratio of current volatility to historical volatility.
D) By replacing them with random draws from a normal distribution.
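The Hull–White scaling in option C can be sketched as follows. The text leaves the volatility estimator open; an EWMA (RiskMetrics-style) filter with decay 0.94 is used here purely as an illustrative assumption, and the return series is simulated.

```python
# Hull-White volatility weighting: each historical return is rescaled by
# the ratio of current volatility to the volatility prevailing at the
# time of the observation. EWMA volatility is an assumed choice here.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, 500)       # hypothetical daily returns

lam = 0.94                               # assumed EWMA decay factor
var = np.empty_like(returns)
var[0] = returns[0] ** 2
for t in range(1, len(returns)):
    var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2
sigma = np.sqrt(var)

sigma_now = sigma[-1]                    # current volatility estimate
weighted = returns * sigma_now / sigma   # HW-adjusted return series
```

HS VaR is then computed on `weighted` rather than on the raw returns, so quiet-period observations are inflated (and turbulent-period ones deflated) to current conditions.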
10. What is a major practical problem when trying to estimate HS VaR for
longer holding periods (e.g., monthly) using daily data?
A) The number of effective observations falls rapidly, reducing precision.
B) It violates the assumption of normality.
C) The computation time increases exponentially.
D) It requires a subscription to a specialist data service.
11. The comparison in Table 4.1 shows that for estimating a 90% confidence
interval for VaR and ES, the Order-Statistics (OS) and Bootstrap approaches
yield:
A) Identical results down to the last decimal.
B) Very different results, suggesting one is superior.
C) Very similar results, suggesting either is reasonable in practice.
D) Results that are always wider than parametric methods.
13. In the formula for age-weighted HS, w(i) = λ^(i-1)(1-λ) / (1-λ^n), what does a
λ value close to 1 imply?
A) A very high rate of decay, where only the newest data matters.
B) A slow rate of decay, where older observations retain significant weight.
C) The model is equivalent to a volatility-weighted model.
D) The model collapses to basic HS where all weights are equal.
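The decay behaviour the options describe follows directly from the weight formula; n and the two λ values below are illustrative.

```python
# Age-weighted (BRW) HS weights: w(i) = λ^(i-1) (1-λ) / (1-λ^n),
# where i = 1 is the most recent observation.
def brw_weights(lam: float, n: int) -> list[float]:
    return [lam ** (i - 1) * (1 - lam) / (1 - lam ** n)
            for i in range(1, n + 1)]

n = 250
slow = brw_weights(0.99, n)   # λ near 1: slow decay, old data still matter
fast = brw_weights(0.90, n)   # smaller λ: fast decay, newest data dominate
# In both cases the weights sum to one; under λ = 0.99 the oldest
# observation retains far more weight than under λ = 0.90.
```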
14. What is the first step in the Filtered Historical Simulation (FHS) process for
a single asset?
A) Bootstrap the raw return data.
B) Fit a conditional volatility model (e.g., GARCH) to the portfolio-return data.
C) Calculate the average of all historical returns.
D) Remove all returns greater than three standard deviations.
17. When constructing a histogram, the text emphasizes that the choice of
which parameter can significantly alter the resulting impression of the data's
distribution?
A) The sample mean
B) The bin width (or bandwidth)
C) The sample kurtosis
D) The number of assets in the portfolio
19. What is the optimal kernel function to minimize the Mean Integrated Square
Error (MISE), according to the text?
A) The Gaussian kernel
B) The Triangular kernel
C) The Epanechnikov kernel
D) The Box kernel
20. The theory of order statistics provides a distribution function, G_r(x), for a
given order statistic. What does this enable a risk analyst to do?
A) Calculate the exact future value of a portfolio.
B) Determine a confidence interval for a VaR estimate.
C) Prove that the underlying data is normally distributed.
D) Eliminate all sources of model risk.
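The confidence-interval use in option B can be sketched with the standard binomial argument: the number of observations below the true p-quantile is Binomial(n, p), so order-statistic ranks bracketing that count give a CI for VaR. The sample size, quantile and coverage below are illustrative, and only the standard library is used.

```python
# Order-statistics confidence interval for a quantile (sketch).
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

n, p, coverage = 500, 0.95, 0.90     # sample size, VaR quantile, target CI
tail = (1 - coverage) / 2

# largest lower rank r with P(true quantile < X_(r)) <= 5%
lower = max(r for r in range(n + 1) if binom_cdf(r - 1, n, p) <= tail)
# smallest upper rank s with P(X_(s) < true quantile) <= 5%
upper = min(s for s in range(1, n + 1)
            if 1 - binom_cdf(s - 1, n, p) <= tail)
# The order statistics at ranks `lower` and `upper` then bracket the
# true 95% VaR with roughly 90% confidence.
```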
21. When using a bootstrap to estimate a confidence interval, the BCa (bias-
corrected and accelerated) method is described as an improvement over the
basic percentile interval because it:
A) Is much faster to compute.
B) Corrects for skewness and bias in the parameter estimates.
C) Does not require resampling from the data.
D) Always produces a narrower, more precise interval.
22. How is the historically simulated P/L series for a portfolio constructed?
A) By taking the actual P/L earned by the portfolio over the historical period.
B) By calculating the P/L that would have been earned on the current portfolio
if it were held throughout the historical sample period.
C) By simulating returns from a Monte Carlo model based on historical parameters.
D) By averaging the returns of all assets and multiplying by the portfolio value.
23. The ES curve is typically smoother than the VaR curve when plotted
against the confidence level (as in Figure 4.3) because:
A) ES is a theoretical concept, while VaR is an actual observation.
B) The ES curve uses a logarithmic scale.
C) Each ES point is an average of tail losses, while each VaR point reflects a
single random observation.
D) The ES calculation removes outliers from the data set.
25. In the context of the bootstrap appendix, what is the main purpose of the
Andrews and Buchinsky three-step method?
A) To estimate the bias of a bootstrap estimator.
B) To choose the optimal number of bootstrap resamples (B) to achieve a
target level of precision.
C) To modify the bootstrap for data that is not independent.
D) To calculate the BCa confidence interval.
27. In the first stage of Principal Components Analysis (PCA), what does the
first principal component represent?
A) The linear combination of variables that explains the least amount of variance.
B) The average correlation across all variables.
C) The linear combination of variables that explains the maximum possible
variance.
D) A random factor that is uncorrelated with the data.
28. If an analyst wishes to adjust historical data for seasonal patterns in
volatility (e.g., natural gas prices being more volatile in winter), which
approach would be most suitable?
A) Basic Historical Simulation
B) Bootstrapped Historical Simulation
C) Weighted Historical Simulation
D) Order-Statistics Approach
30. The text concludes that while non-parametric methods are attractive, one
should never rely on them alone. What should they be complemented with?
A) More historical data
B) More advanced parametric models
C) Stress testing to gauge vulnerability to "what if" events
D) The opinions of senior management
1. Which of the following best describes backtesting in the context of VaR models?
A. Testing a model’s predictive power using future market data
B. Comparing actual portfolio losses with their predicted VaR over a historical
period
C. Simulating hypothetical market scenarios for a portfolio
D. Adjusting VaR parameters to align with market consensus
2. An "exception" (or "exceedance") in VaR backtesting occurs when:
A. Projected returns are higher than forecasted
B. Actual loss is less than the predicted VaR
C. The actual portfolio loss exceeds the VaR estimate
D. Portfolio returns are not normally distributed
3. Why is backtesting considered essential for VaR model validation?
A. It provides an independent audit trail
B. It aligns capital allocation with peer institutions
C. It verifies whether model predictions match observed losses
D. It simplifies regulatory reporting
4. What does a high number of exceptions typically indicate about a VaR model?
A. The model is overly conservative
B. The model underestimates risk
C. The market is less volatile than expected
D. The model perfectly fits the data
5. Which of the following is a significant practical difficulty in backtesting VaR?
A. Insufficient regulatory guidance
B. Portfolio composition changes over time
C. VaR models always use lognormal distributions
D. Exceptions are always deterministic
6. The key limitation of using a high VaR confidence level (e.g., 99%) for backtesting
is:
A. Too many exceptions for meaningful tests
B. Too few exceptions for robust statistical inference
C. Bias in returns toward outliers
D. Increased cost of capital
7. "Actual return" vs. "Hypothetical return" means:
A. The former excludes transaction costs; the latter includes them
B. Actual includes trading, fees, and income; hypothetical assumes static
positions
C. Both represent identical series in backtesting
D. Hypothetical returns are always larger
8. Why does a small number of exceptions in backtesting pose a challenge?
A. It reduces statistical power to reject inaccurate models
B. It overstates the volatility of the portfolio
C. It shows the model is always correct
D. It makes Type II errors impossible
9. Which event is a Type I error in VaR backtesting?
A. Retaining a model that underestimates risk
B. Failing to detect a faulty model
C. Incorrectly rejecting a valid VaR model
D. Overstating the number of exceptions due to clustering
10. Type II error in VaR backtesting means:
A. Accepting an accurate model
B. Incorrectly rejecting a correct model
C. Failing to reject a flawed model
D. Having a high confidence interval
11. The failure rate in the context of VaR backtesting is:
A. Number of zero returns / total observations
B. Average VaR predicted per trading day
C. Number of exceptions / total number of periods
D. VaR threshold divided by asset value
12. Which statistical test is typically applied for unconditional coverage in
backtesting?
A. Kolmogorov-Smirnov test
B. Kupiec's Proportion of Failures (PoF/LRuc) test
C. Anderson-Darling test
D. Sharpe ratio analysis
13. Suppose you use 250 trading days and a 99% VaR. Baseline expected
exceptions are:
A. 1
B. 2.5
C. 25
D. 5
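The expected count and the Kupiec PoF statistic referenced in the questions above can be computed directly; the 7-exception case echoes question 20 below.

```python
# Baseline expected exceptions and Kupiec's proportion-of-failures test.
# With T = 250 days and 99% VaR the baseline is T * 0.01 = 2.5.
# The LR statistic is chi-squared with 1 df under the null.
import math

T, p = 250, 0.01
expected = T * p                       # 2.5 exceptions

def kupiec_lr(x: int, T: int, p: float) -> float:
    """Kupiec LR_uc statistic for x observed exceptions (0 < x < T)."""
    phat = x / T
    log_h0 = (T - x) * math.log(1 - p) + x * math.log(p)
    log_h1 = (T - x) * math.log(1 - phat) + x * math.log(phat)
    return -2 * (log_h0 - log_h1)

lr7 = kupiec_lr(7, T, p)   # ~5.5, above the 3.84 chi2(1) 95% cutoff
```

So 7 exceptions in 250 days is rejected by the PoF test at the 5% level even though it lands only in Basel's yellow zone.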
14. What would unconditional coverage fail to detect?
A. Clusters of exceptions in short periods
B. Accurate overall exception rate
C. Incorrect documentation
D. Losses below VaR
15. Why is conditional coverage needed in backtesting?
A. To adjust capital multipliers annually
B. To ensure exceptions happen randomly and do not cluster
C. To reduce capital requirements
D. To justify higher confidence intervals
16. Basel's traffic-light approach sets the green zone for exceptions in a typical year
at:
A. 0–2
B. 0–5
C. 0–4
D. 0–9
17. What is the immediate regulatory implication for entering the "yellow zone" (e.g.,
5–9 exceptions)?
A. No consequences
B. Discretionary review and possible higher capital charge
C. Mandatory model shutdown
D. Recalculation of all previous VaR estimates
18. The red zone under Basel backtesting means:
A. The bank receives a bonus for model performance
B. The VaR model is rejected; capital multiplier increases significantly
C. All exceptions are ignored
D. The model can continue with no penalty
19. Which aspect does the Christoffersen test add to exception counting?
A. Evaluation of confidence interval width only
B. Serial independence of exceptions (conditional coverage)
C. Estimation of loss given default
D. Calculation of risk appetite
20. For a VaR at 99% confidence with 250 days, if you observe 7 exceptions, you
should:
A. Always reject the model
B. Investigate cause; may be in yellow zone
C. Assume model is conservative
D. Reduce VaR threshold
21. Which is NOT a valid cause for exceptions under Basel’s categories?
A. Model coding error
B. Intraday trading
C. Bad luck/extreme events
D. Increased portfolio diversification
22. Which Basel action follows 10 or more exceptions in a year?
A. Reduce capital requirements
B. Enforce automatic penalty and require capital multiplier k=4
C. Disregard the backtesting results
D. Switch to historical simulation method
23. What does a violation of conditional coverage indicate?
A. Market returns are always normal
B. Losses and VaR predictions are independent
C. Exceptions occur in patterns, suggesting overlooked risk factors
D. Portfolio is well-diversified
24. A key trade-off in designing backtesting tests is:
A. Balancing Type I and Type II errors
B. Fitting multiple models simultaneously
C. Minimizing transaction costs
D. Matching liquidity requirements
25. Increasing the length of the backtesting period (more observations) generally:
A. Decreases test power
B. Makes it harder to detect model flaws
C. Increases power and reduces error rates
D. Has no statistical impact
26. What is the main statistical distribution applied in exception counting for VaR?
A. Poisson
B. Normal
C. Binomial
D. Uniform
27. Under Basel, exceptions due to extreme political events or natural disasters:
A. Always result in penalty
B. Are generally excluded as “bad luck”
C. Must be explained but never penalized
D. Invalidate the entire VaR framework
28. In backtesting, what is “clustering”?
A. All exceptions occur at the start of the year
B. Exceptions are distributed evenly
C. Multiple exceptions occur close together in time
D. VaR is recalibrated daily
29. Which parameter is typically adjusted if exceptions repeatedly enter the yellow or
red zone?
A. The model’s stochastic differential equation
B. The VaR confidence level or capital multiplier (k)
C. The trade settlement cycle
D. The asset’s notional value
30. In the context of backtesting, which of the following most increases the likelihood
of a Type II error?
A. Using a broader test interval for exceptions
B. Raising the significance level of the hypothesis test
C. Small sample size and few exceptions, especially with high VaR confidence
D. Matching actual and hypothetical returns
SUMMARY: 30/30
[MR-5] VaR Mapping
1. What is the primary objective of the mapping process in Value-at-Risk (VaR)
measurement?
3. Which type of risk is the result of issuer-specific movements, after accounting for market
factors?
a) Specific risk
b) Systematic risk
c) General risk
d) Interest rate risk
4. How is the total portfolio exposure to a primitive risk factor calculated after mapping?
5. Which mapping method for fixed-income portfolios groups all cash flows into maturity
buckets corresponding to provided volatilities?
a) Principal mapping
b) Duration mapping
c) Cash-flow mapping
d) Volatility mapping
6. What assumption underlies the duration approximation method for risk mapping?
8. In cash-flow mapping of a fixed-income portfolio, what does each cash flow represent?
9. What is the effect on portfolio VaR when mapping uses more primitive risk factors?
10. Which mapping technique for fixed-income portfolios can overstate risk by ignoring
coupon payments?
a) Cash-flow mapping
b) Duration mapping
c) Principal mapping
d) Regression mapping
11. When is it necessary to estimate exposures rather than compute them analytically during
mapping?
12. Which risk factor is most likely to dominate the risk of a forward currency contract?
13. When might mapping exposures present a challenge due to lack of data?
14. Which of the following describes the mapping process for options in the delta-normal
VaR approach?
15. In the mapping of interest rate swaps, the fixed leg is typically mapped as:
16. Which statement best describes stress testing using mapped exposures?
17. Mapping a portfolio to a benchmark for relative VaR allows a risk manager to:
19. If risk factors are chosen too broadly in mapping, what is the likely result?
a) It is computationally difficult
b) It completely ignores coupon payments and overstates risk
c) It gives exactly the same risk as duration mapping
d) It requires complex nonlinear modeling
21. Which mapping system for fixed income is most precise if granular data on cash flows
and yield volatilities is available?
a) Principal mapping
b) Duration mapping
c) Cash-flow mapping
d) Correlation-only mapping
22. What is the major risk factor in a forward rate agreement (FRA)?
23. In risk mapping, how is a floating leg of an interest rate swap (at reset date) mapped?
a) As a fixed-duration bond
b) As cash (no risk)
c) As a portfolio of zero-coupon bonds
d) As a forward contract
24. In variance matching for mapping, what does the correlation coefficient between vertices
reflect?
25. After mapping, how can general risk and specific risk components be separated in
portfolio variance?
26. Which condition makes duration matching exact for risk mapping?
27. For a portfolio with only fixed-income securities, mapping positions on term-structure
vertices is most analogous to:
a) Assigning positions to equities
b) Allocating to currency buckets
c) Allocating present values by maturity to points along the yield curve
d) Assigning all positions to a single cash factor
28. What is the typical first principle suggested before mapping, for portfolio risk
measurement?
29. In mapping for market risk, selecting more risk factors generally:
a) Stocks
b) Cash
c) Commodities
d) Bonds
[MR-6] Validating Bank Holding Companies’ Value-at-Risk Models
for Market Risk
Q1. Which of the following best captures the primary purpose of conceptual-
soundness testing in VaR model validation?
A. Ensuring the bank’s VaR matches peer institutions’ models
B. Verifying that model assumptions, data and methodology are appropriate for the
bank’s risk-management objectives
C. Detecting data‐entry errors in trade capture systems
D. Calibrating VaR multipliers for regulatory capital
Q2. A VaR model that cannot reflect how risk changes when positions change would
fail conceptual-soundness review because it is not:
A. computationally efficient
B. backtestable under Kupiec’s test
C. fit for purpose in risk management
D. based on filtered historical simulation
Q4. Which of the following data issues most commonly challenges the conceptual-
soundness of large-scale trading VaR models?
A. Multiplication overflow in Monte Carlo engines
B. Construction of an accurate pseudo-history of one-day P&L based on current
positions
C. Absence of a variance–covariance matrix for equities
D. Time-varying risk-free rates in duration calculations
Q5. Which historical example is often cited to illustrate why VaR calculated on actual
P&L can under-state risk for dynamic trading strategies?
A. LTCM collapse
B. Capital Decimation Partners case discussed by Lo (2001)
C. Flash-crash of 2010
D. Barings collapse
Q8. When the regression approach for component VaR (ΔV_i versus ΔV_P) is
infeasible due to sparse data, Tasche and Hallerbach recommend estimating the
component VaR by:
A. loading proxies into a multivariate GARCH
B. bootstrapping shocks from filtered historical simulation
C. inspecting the position’s loss on the day that determines historical-simulation
VaR
D. replacing missing returns with zeros
Q11. The Jorion (1996) asymptotic standard-error formula for a VaR quantile
requires knowledge of:
A. the tail index of a Pareto distribution
B. the pdf evaluated at the VaR estimate
C. Kupiec unconditional coverage statistic
D. the filtered shock ranking Q_{1-c}
Q13. When bootstrapping VaR for a GARCH(1,1) model, Christoffersen & Gonçalves
(2005) insist on re-estimating the variance equation within each resample to:
A. keep the independence assumption valid
B. incorporate parameter-estimation risk into the interval
C. avoid overfitting the historical shocks
D. enforce normal innovations
Q14. Empirical results in the chapter show that, for S&P 500 data, filtered historical-
simulation VaR produced the narrowest confidence intervals because FHS:
A. ignores volatility clustering
B. assumes a Gaussian shock distribution
C. scales the tail shocks by time-varying σT+1, yielding more efficient quantile
estimates
D. uses EVT to fit peaks-over-threshold tails
Q15. One persistent obstacle to benchmarking VaR models across banks is that:
A. the FS-128 template forces identical data windows
B. banks rarely run two parallel VaR engines long enough for statistical comparison
C. regulatory multipliers change daily
D. actual P&L cannot be observed at the desk level
Q16. The Lopez (1996) regulatory loss function penalizes models only when:
A. the bank’s VaR is exceeded
B. VaR is too conservative
C. expected shortfall is under-estimated
D. independence of exceptions is violated
Q17. Under that loss function, a model that systematically over-estimates VaR will:
A. have zero loss
B. be heavily penalized
C. show a higher dynamic-quantile (DQ) statistic
D. fail the Christoffersen conditional-coverage test
Q19. In Berkowitz & O’Brien (2002), a simple GARCH(1,1) VaR based on actual
trading P&L often outperformed banks’ internal VaRs on accuracy because internal
models were:
A. too aggressive in benign periods
B. conservative due to regulatory incentives
C. lacking any volatility updating
D. fitted with t-copulas
Q20. When comparing positional VaR to P&L-based GARCH VaR using the Lopez
check-loss, the chapter finds that positional VaR underperforms at most banks
because:
A. age-weighted volatility exaggerates recent moves
B. positional VaR is intentionally conservative and therefore less accurate in point
prediction
C. missing fee income inflates tail losses
D. exception clustering invalidates logistic DQ
Q21. The dynamic quantile (DQ) test of Engle & Manganelli (2004) improves upon
basic exception counting by:
A. allowing for regression of PITs on lagged information variables
B. replacing VaR with expected shortfall
C. estimating GPD tails above a threshold
D. transforming exceedances into durations
Q22. One limitation of duration-based tests (Christoffersen & Pelletier 2004) is that
they are:
A. computationally infeasible for daily data
B. rarely implemented in practice despite power advantages
C. applicable only to ES, not VaR
D. valid only under normality
Q25. According to the sample backtests (2013-2016), most U.S. BHC trading VaR
models were:
A. aggressive during benign markets and conservative in stress
B. conservative overall, with average exceedance 0.4% versus 1% expected
C. perfectly calibrated at desk level
D. failing unconditional coverage at the 90% level
Q27. One reason the VQR test failed nineteen of twenty firms while exception-based
tests flagged only a few is that VQR:
A. ignores PIT uniformity
B. evaluates the full conditional quantile function, not just the 1% tail
C. assumes heavy-tailed ν=5 t-errors
D. uses out-of-sample forecasts only
Q28. When benchmarking VaR against a GARCH VaR on actual P&L, the sign test
showed positional VaR dominated only 1 out of 19 desks. This indicates that:
A. conservative bias can reduce predictive accuracy
B. filtered historical simulation always outperforms GARCH
C. regulatory multipliers were too low
D. PITs showed severe left-tail clustering
Q29. For expected-shortfall models under the Fundamental Review of the Trading
Book (FRTB), which backtesting element translates most directly from VaR
validation practice?
A. Order-statistics confidence-interval estimation
B. Kupiec’s unconditional-coverage test
C. Lopez regulatory loss function
D. Sensitivity analysis for omitted risk factors
4. In the joint conditional coverage test (Christoffersen), the test statistic is:
A. LRuc – LRind
B. LRuc + LRind
C. LRuc × LRind
D. max(LRuc, LRind)
6. Which histogram feature indicates that a VaR model’s PITs are too
conservative in the tails?
A. A uniform flat shape
B. Spikes at both ends of the distribution
C. A hump in the center of the distribution
D. A left-skewed distribution
10. Which statistical test places extra weight on tails when testing uniformity of
PITs?
A. Kolmogorov–Smirnov
B. Anderson–Darling
C. Christoffersen independence
D. Ljung–Box
12. The probability integral transform of a correct model’s P&L should be:
A. Exponentially distributed
B. Normally distributed
C. Uniformly distributed
D. Bernoulli
13. Which of these moment statistics of PITs should equal that of a Uniform(0,1)?
A. Kurtosis = 0
B. Mean = 0.5
C. Skewness = 1
D. Median = 0
14. The Cramér–von Mises test statistic for uniformity of PITs compares:
A. Empirical CDF to a Gaussian CDF
B. Empirical CDF to the uniform CDF using squared deviations
C. Two successive PITs for autocorrelation
D. PIT histogram height against expected frequency
15. When applying the Ljung–Box test to the series of VaR exceptions, what null
hypothesis is tested?
A. No excess kurtosis in exceptions
B. Exceedances occur with correct frequency
C. No autocorrelation in the exception indicator series
D. PITs are uniformly distributed
17. A Q–Q plot of PITs that bows above the 45° line in the tails suggests:
A. Too few extreme losses (tails are understated)
B. Too many extreme losses (tails are overstated)
C. Perfect model fit
D. Constant coverage
18. The series of 1-day 99% VaR exceptions should form a Bernoulli(0.01)
process if the model is:
A. Unbiased only
B. Independently and correctly specified
C. Conditionally autoregressive
D. Filtered historical simulation
19. Which test uses the empirical CDF of PITs and compares it to the theoretical
uniform CDF via supremum distance?
A. Anderson–Darling
B. Cramér–von Mises
C. Kolmogorov–Smirnov
D. Ljung–Box
21. Which of the following is NOT a property that correctly specified PITs must
satisfy?
A. i.i.d. U(0,1)
B. Mean = 0.5
C. Maximum = 1
D. Variance = 1/12
22. In a 99% VaR backtest over 250 days, about how many exceedances are
expected if the model is accurate?
A. 1
B. 2.5
C. 10
D. 25
25. Which empirical evidence suggests that firm-level PITs deviate less from
uniformity than portfolio-level PITs?
A. KDE of exceptions
B. Histogram of P&L
C. PIT distribution and Q–Q plots
D. Autocorrelation function
A) The CDS spread decreases and the investor experiences a paper gain
B) The CDS spread increases and the investor experiences a paper loss
C) The CDS spread decreases and the investor experiences a paper loss
D) The CDS value increases due to higher counterparty protection
A) $416,667
B) $500,000
C) $583,333
D) $750,000
3. For a quanto option on the Nikkei index with USD/JPY currency exposure, if the
correlation between the Nikkei returns and USD/JPY exchange rate is strongly
negative, the quanto option price will be:
4. A two-asset portfolio has $12 million in Asset A (daily volatility 2.5%) and $8
million in Asset B (daily volatility 1.8%). With correlation of 0.4, what is the 10-day
VaR at 95% confidence level (α = 1.645)?
A) $1.89 million
B) $2.34 million
C) $2.67 million
D) $3.12 million
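The arithmetic for this question can be checked with the standard two-asset variance formula and square-root-of-time scaling; the computation below gives roughly $1.98 million, which sits closest to option A among the listed choices.

```python
# Two-asset dollar VaR: daily portfolio sigma from the variance formula,
# scaled to 10 days and multiplied by the 95% normal quantile.
import math

a, b = 12e6, 8e6                  # dollar positions in A and B
sa, sb, rho = 0.025, 0.018, 0.4   # daily volatilities and correlation
alpha = 1.645                     # 95% one-sided normal quantile

daily_sigma = math.sqrt((a * sa) ** 2 + (b * sb) ** 2
                        + 2 * rho * (a * sa) * (b * sb))
var_10d = alpha * daily_sigma * math.sqrt(10)   # ≈ $1.98 million
```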
5. During the May 2005 correlation crisis, hedge funds experienced losses on both
equity and mezzanine CDO tranches when GM and Ford were downgraded. This
occurred because:
7. A commercial bank has made equal $4 million loans to three companies, each
with 6% default probability. Using the binomial correlation model with correlation
coefficient 0.5, what is the joint default probability?
A) 0.216%
B) 1.080%
C) 1.836%
D) 2.592%
8. According to Basel III requirements, if a bank's 10-day VaR is $2.5 million, the
minimum regulatory capital charge for trading book assets is:
A) $2.5 million
B) $5.0 million
C) $7.5 million
D) $10.0 million
9. In the global financial crisis of 2007-2009, correlations between Dow Jones stocks
increased from pre-crisis levels of 27% to over 50%. This phenomenon is primarily
an example of:
10. For a portfolio with correlation coefficient ρ = 0.6, asset weights w₁ = 0.4, w₂ =
0.6, and asset volatilities σ₁ = 15%, σ₂ = 20%, the portfolio volatility is closest to:
A) 14.2%
B) 15.8%
C) 17.1%
D) 18.6%
11. A variance swap strategy involves buying variance swaps on individual index
components while selling variance swaps on the index. This strategy profits when:
13. The concentration ratio for a lender with loans of $5M, $3M, $2M, $7M, and $3M
to five different borrowers is:
A) 0.20
B) 0.25
C) 0.35
D) 0.40
14. A correlation option strategy that benefits from HIGHER correlation between
underlying assets is:
16. Expected Shortfall (ES) differs from VaR primarily because ES:
17. In the one-factor Gaussian copula model used in Basel frameworks, if the asset
correlation parameter increases, the capital requirement for a credit portfolio will:
18. The nonmonotonous relationship between CDS spreads and correlation occurs
because:
21. The leverage effect in CDO structures during the crisis was exemplified by
Leveraged Super-Senior (LSS) tranches with leverage ratios of:
A) 2-5 times
B) 5-10 times
C) 10-20 times
D) 20-50 times
22. Migration risk in credit portfolios is most directly related to correlation through:
23. A correlation swap with 4 assets where realized correlations are ρ₂₁=0.7,
ρ₃₁=0.5, ρ₄₁=0.3, ρ₃₂=0.6, ρ₄₂=0.4, ρ₄₃=0.2. The realized correlation is:
A) 0.45
B) 0.48
C) 0.52
D) 0.57
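A correlation swap's realized correlation is the equally weighted average of the n(n-1)/2 distinct pairwise correlations, so the six values given settle the question directly:

```python
# Realized correlation for the 4-asset swap: average of the six
# distinct pairwise correlations listed in the question.
pairs = [0.7, 0.5, 0.3, 0.6, 0.4, 0.2]
realized = sum(pairs) / len(pairs)    # = 0.45, option A
```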
24. In the London Whale case (JPMorgan 2012), the correlation strategy involved:
25. The copula correlation model's weakness during the CDO crisis was primarily:
A) 0.96%
B) 1.44%
C) 1.89%
D) 2.31%
27. The energy sector's correlation characteristics with other sectors make it
valuable for portfolio diversification because:
28. A multi-asset call option on the maximum of two assets with strike K has payoff
max[0, max(S₁, S₂) - K]. This option's value increases when correlation:
30. The primary difference between static and dynamic financial correlations is:
A) Static correlations use historical data while dynamic use forward-looking models
B) Static correlations measure association within fixed periods while dynamic
measure time-evolution
C) Static correlations apply to bonds while dynamic apply to equities
D) Static correlations are more accurate for risk management purposes
[MR-9] Empirical Properties of Correlation: How Do Correlations
Behave in the Real World?
1. A study of Dow Jones stocks from 1972 to 2017 revealed correlation levels of
37.0% during recessions, 33.0% during normal periods, and 27.5% during
expansionary periods. What is the primary reason for the lowest correlations during
expansionary periods?
2. Using the regression equation Y = 0.256 - 0.7903X for mean reversion
analysis, where Y = S_t - S_{t-1} and X = S_{t-1}, what is the expected
correlation next month if the current month's correlation is 25% and the
long-run mean is 35%?
A) 32.90%
B) 33.45%
C) 34.21%
D) 35.00%
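Questions 2, 4, 11, 15, and 20 all apply the same one-step mean-reversion forecast St = St−1 + a(μ − St−1); a minimal sketch, using question 2's inputs:

```python
def forecast(s_prev, mu, a):
    """One-step mean reversion: S_t = S_{t-1} + a * (mu - S_{t-1})."""
    return s_prev + a * (mu - s_prev)

# Question 2: current correlation 25%, long-run mean 35%, reversion rate 0.7903
print(round(100 * forecast(0.25, 0.35, 0.7903), 2))  # 32.9 -> answer A (32.90%)
```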
3. A risk manager observes that correlation volatility during different economic states
shows: recession (80.5%), normal period (83.0%), and expansion (71.2%). The
higher volatility during normal periods compared to recessions is most likely
because:
4. In the empirical study of Dow correlations from 1972-2017, the mean reversion
rate was found to be 79.03%. If this relationship holds and the current correlation is
40% while the long-run mean is 32%, what is the expected change in correlation for
the next period?
A) -6.32%
B) -8.00%
C) +6.32%
D) +8.00%
5. The autocorrelation for a one-period lag in the Dow correlation study was 20.97%.
This autocorrelation rate combined with the mean reversion rate demonstrates which
fundamental relationship?
A) Normal distribution
B) Lognormal distribution
C) Beta distribution
D) Johnson SB distribution
7. A correlation analyst runs an autocorrelation test for various lag periods and finds
the highest autocorrelation of 26% occurs at a 2-month lag rather than a 1-month
lag. This pattern suggests:
8. In the empirical analysis of 426,300 correlation values between Dow stocks, what
percentage were positive correlations?
A) 69.4%
B) 73.8%
C) 77.2%
D) 81.6%
9. Bond correlations were found to have a mean reversion rate of 26% compared to
equity correlations' 79%. This difference primarily indicates that:
10. The correlation volatility preceding recessions showed negative changes in 5 out
of 6 cases studied. The exception was the 1990-1991 recession with a +0.06%
change. This anomaly most likely reflects:
11. Using the mean reversion formula St - St-1 = a(μ - St-1), if μ = 30%, St-1 = 45%,
and the mean reversion rate a = 0.6, what is the expected value of St?
A) 36%
B) 39%
C) 42%
D) 45%
12. The study found that default probability correlations had an average of 30% with
correlation volatility of 88%. Compared to equity correlations (34.83% average,
79.73% volatility), this suggests default correlations are:
13. The relationship between correlation level and correlation volatility was found to
be positive. In a portfolio risk management context, this relationship implies:
14. The generalized extreme value (GEV) distribution was found to best fit which
type of correlation data?
A) Equity correlations
B) Bond correlations
C) Default probability correlations
D) Currency correlations
15. A risk manager needs to forecast correlation for the next month given current
correlation of 28%, long-run mean of 34%, and estimated mean reversion rate of
75%. The forecasted correlation should be:
A) 30.5%
B) 31.0%
C) 32.5%
D) 33.0%
16. The severe recessions of 1973-1974 and 1981-1982 were both caused by oil
price shocks and showed GDP declines of -11.93% and -12.00% respectively. The
correlation volatility changes preceding these recessions were -7.22% and -4.65%.
This pattern suggests:
17. The study period from 1972-2017 included 534 months resulting in 480,600
monthly correlations (900 × 534). The removal of diagonal unity values left 426,300
correlations for analysis. This methodology ensures:
19. The autocorrelation decay pattern from 26% at 2-month lag to approximately
10% at 10-month lag indicates:
20. In the regression Y = 0.273 - 0.78X for mean reversion, if current correlation is
30% and long-run mean is 35%, the expected correlation change is:
A) +2.7%
B) +3.9%
C) +5.2%
D) +6.8%
21. The Johnson SB distribution's superiority in fitting equity correlation data over
normal, lognormal, and beta distributions suggests:
A) Equity correlations have fat tails and skewness not captured by simpler
distributions
B) The bounded nature of correlations requires specialized distribution forms
C) Traditional financial assumptions about normality fail for correlation data
D) All of the above
22. Bond correlation levels (41.67%) being higher than equity correlations (34.83%)
while having lower volatility (63.74% vs 79.73%) suggests:
23. The observation that correlation volatility typically decreases before recessions
(except 1990-1991) provides insight for:
25. The finding that 77.23% of Dow stock correlations were positive over the 1972-
2017 period most likely reflects:
A) Credit risk and equity risk share similar underlying factor structures
B) Default probabilities and stock returns are driven by identical processes
C) Bond correlations are fundamentally different from other asset class correlations
D) Distribution choice is primarily determined by sample size rather than underlying
economics
27. The polynomial trend line of order 4 applied to the correlation time series data
serves to:
28. A mean reversion rate of 77.51% for equity correlations combined with long-run
mean of 34.83% implies that extreme correlation events:
29. The relationship between the state of the economy and correlation volatility
showing highest volatility during normal periods rather than recessions suggests:
A) Risk models can assume constant correlation volatility across different correlation
levels
B) Higher correlation periods require additional uncertainty adjustments in VaR
calculations
C) Portfolio diversification benefits are most reliable during high correlation periods
D) Correlation forecasting accuracy improves during low volatility periods
[MR-10] Financial Correlation Modeling — Bottom-Up Approaches
1. In the Heston (1993) correlation model, the instantaneous correlation
between Brownian motions dz₁(t) and dz₂(t) is defined as Corr[dz₁(t),
dz₂(t)] = ρdt. To allow for negative correlation, the model uses the identity
dz₁(t) = α dz₂(t) + √(1-α²) dz₃(t). What value of α corresponds to perfect
negative correlation?
A) α = 0
B) α = -1
C) α = 1
D) α = -0.5
2. The original Heston model correlates two stochastic differential equations. Which
pair of financial variables does it primarily correlate?
3. In the binomial correlation model of Lucas (1995), two entities X and Y have
default probabilities P(X) = 8% and P(Y) = 12%. If the joint default probability P(XY)
= 2%, what is the binomial correlation coefficient?
A) 0.234
B) 0.456
C) 0.612
D) 0.789
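Question 3 uses Lucas's binomial correlation coefficient ρ = (P(XY) − P(X)P(Y)) / √(P(X)(1 − P(X)) · P(Y)(1 − P(Y))); a minimal sketch of the computation:

```python
import math

def binomial_corr(px, py, pxy):
    """Lucas (1995) binomial default correlation coefficient."""
    return (pxy - px * py) / math.sqrt(px * (1 - px) * py * (1 - py))

print(round(binomial_corr(0.08, 0.12, 0.02), 3))  # 0.118
```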
7. When applying a Gaussian default time copula to two companies with 1-year
cumulative default probabilities of 6.51% and 23.83% respectively, the mapped
standard normal percentiles are approximately -1.51 and -0.71. What is the primary
purpose of this mapping process?
A) C is symmetric
B) C is unique
C) C is bounded
D) C is differentiable
9. For a survival probability calculation using continuous default intensity λᵢ(t), the
probability that entity i survives until time t is expressed as Pr[τᵢ > t] = exp{−∫₀ᵗ λᵢ(u) du}.
In the case of constant default intensity, the correlated default time becomes:
A) τᵢ = −ln[Mₙ(·)]/λᵢ
B) τᵢ = ln[Mₙ(·)]/λᵢ
C) τᵢ = −Mₙ(·)/ln(λᵢ)
D) τᵢ = Mₙ(·) × λᵢ
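With constant intensity the survival curve is exp(−λt), so inverting it at a percentile u (in the copula setting, u comes from the mapped multivariate sample) gives τ = −ln(u)/λ. A minimal sketch with a hypothetical λ = 5% and independent uniforms standing in for the copula draws:

```python
import math
import random
import statistics

def default_time(u, lam):
    """Invert the constant-intensity survival curve exp(-lam * t) at level u."""
    return -math.log(u) / lam

lam = 0.05
rng = random.Random(7)
# 1 - random() lies in (0, 1], avoiding log(0)
times = [default_time(1.0 - rng.random(), lam) for _ in range(20000)]
print(round(statistics.mean(times), 1))  # close to 1/lam = 20 years
```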
10. The Gaussian copula's lack of tail dependence means that for any
correlation parameter ρ ∈ (−1, 1), the limit as y₁, y₂ → 0 of P(τ₁ < N₁⁻¹(y₁) |
τ₂ < N₂⁻¹(y₂)) equals:
A) ρ
B) 1
C) 0
D) -1
11. Which copula type exhibits high tail dependence, especially for negative
comovements, making it potentially suitable for financial crisis modeling?
A) Gaussian copula
B) Student's t copula
C) Gumbel copula
D) Frank copula
12. In the contagion correlation model of Davis and Lo, the latent variable
Z_i for entity i includes the term (1-X_i)[1-∏(1-X_j K_ij)]. What does the
parameter K_ij represent?
13. The Jarrow and Yu (2001) contagion model uses default intensity
equations λ_A(t) = a₁ + a₂1{τ_B ≤ t} and λ_B(t) = b₁ + b₂1{τ_A ≤ t}. The
indicator variable 1{τ_B ≤ t} takes value 1 when:
14. In Cholesky decomposition for a 3×3 correlation matrix S = MM^T, if c₁₁ = 1, c₂₁
= 0.3, and c₂₂ = 1, what is the value of m₂₂?
A) x = M^T e
B) x = M e
C) x = e M
D) x = M⁻¹ e
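Question 14's m₂₂ follows from the standard Cholesky recursion for S = MMᵀ: m₂₁ = c₂₁/m₁₁ and m₂₂ = √(c₂₂ − m₂₁²). A quick check:

```python
import math

# Upper-left 2x2 block of the correlation matrix from question 14
c11, c21, c22 = 1.0, 0.3, 1.0
m11 = math.sqrt(c11)
m21 = c21 / m11
m22 = math.sqrt(c22 - m21**2)   # sqrt(1 - 0.09) = sqrt(0.91)
print(round(m22, 4))  # 0.9539
```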
16. A forward default probability for year 7, given 6-year cumulative default
probability Q₆ = 36.73% and 7-year cumulative default probability Q₇ = 40.97%,
equals 4.24%. The corresponding default intensity in year 7 is:
A) 4.24%
B) 6.70%
C) 2.46%
D) 8.15%
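Question 16's default intensity conditions the forward default probability on survival through year 6:

```python
q6, q7 = 0.3673, 0.4097               # cumulative default probabilities
forward_pd = q7 - q6                  # 4.24% unconditional forward default prob
intensity = forward_pd / (1.0 - q6)   # condition on surviving the first 6 years
print(round(100 * intensity, 2))  # 6.7 -> answer B (6.70%)
```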
17. The "correlation smile" criticism of Gaussian copula calibration refers to traders:
18. In the Gaussian copula framework, when finding the joint default probability of
two companies with correlation ρ = 0.4, mapped standard normal values of -1.5133
and -0.7118, the joint probability involves:
20. The "looping defaults" problem in symmetric contagion models occurs because:
21. For a company in distress, default probabilities typically show what pattern over
time?
22. In the SABR model extension of Heston's approach, the correlation is applied
between:
23. The one-factor Gaussian copula model assumes that correlations between any
two entities in a CDO portfolio:
26. The primary limitation of static copula models in risk management is:
A) Computational complexity
B) Inability to handle more than two assets
C) Lack of stochastic processes for underlying variables like default intensity
D) Requirement for normally distributed marginal data
27. A Bernoulli random variable X_j in the Davis and Lo contagion model can take
values of 0 and 1. If Pr(X_j = 1) = q, then X_j represents:
30. In the context of CDO valuation, the criticism that "traders randomly alter
correlation parameters for different tranches" most directly relates to:
A) The mathematical incorrectness of the copula approach
B) The difficulty in calibrating copula models to market prices
C) The assumption of normally distributed defaults
D) The computational burden of Monte Carlo simulations