Numerical Methods in Quantitative Finance
Abstract
Numerical methods constitute the fundamental framework of quantitative fi-
nance, enabling practitioners to solve complex financial problems where analytical
solutions are intractable. This paper provides a comprehensive analysis of advanced
numerical techniques employed in finance, including Monte Carlo simulations, fi-
nite difference methods, lattice models, numerical optimization, the Fourier-Cosine
method, spectral methods, multilevel Monte Carlo, and stochastic grid methods.
Through detailed theoretical foundations and practical examples, we explore their
applications, implementations, and limitations across various financial domains in-
cluding derivative pricing, risk management, portfolio optimization, and optimal
trading. We examine emerging challenges such as computational complexity, model
risk, numerical stability, and data quality issues, while providing an extensive dis-
cussion of software implementation considerations and regulatory frameworks. The
paper concludes by highlighting future trends including quantum computing ap-
plications, machine learning integration, neural stochastic differential equations,
and hybrid approaches that promise to revolutionize computational finance in the
coming decade.
1 Introduction
Quantitative finance confronts problems of profound complexity—exotic derivatives, stochas-
tic volatility, high-dimensional risk models, and dynamic trading strategies—where ana-
lytical solutions falter due to non-linearities, path dependencies, and market intricacies
[Black and Scholes, 1973, Heston, 1993]. Numerical methods, employing discretization,
iteration, and optimization, bridge theoretical models and practical applications, enabling
quants to price sophisticated securities, quantify multifaceted risks, and optimize strate-
gies.
This paper provides a rigorous treatment of numerical methods in quantitative fi-
nance, emphasizing theoretical depth, practical utility, and inherent challenges. Section
2 outlines core application domains, Section 3 analyzes advanced techniques, Section 4
presents illustrative examples, Section 5 examines limitations, Section 6 surveys com-
putational tools, Section 7 addresses regulatory considerations, and Section 8 explores
future directions.
2.3 Portfolio Optimization
Portfolio optimization involves selecting the mix of assets that maximizes return for a given level of risk. Markowitz's mean-variance framework is the foundation of modern portfolio theory [Markowitz, 1952]. Extensions incorporating transaction costs, cardinality constraints (limits on the number of assets held), or alternative risk measures such as CVaR demand advanced solvers, while dynamic strategies rely on stochastic control [Wilmott, 2006].
\[
S_{t+\Delta t} = S_t \exp\!\left(\left(r - \frac{\sigma^2}{2}\right)\Delta t + \sigma\sqrt{\Delta t}\, Z_t\right), \qquad Z_t \sim \mathcal{N}(0,1). \tag{2}
\]
The price is:
\[
V(0) \approx e^{-rT} \frac{1}{N} \sum_{i=1}^{N} h\!\left(S_T^{(i)}\right), \tag{3}
\]
with error $O(1/\sqrt{N})$ [Glasserman, 2004]. Multilevel Monte Carlo (MLMC) reduces variance via:
\[
E[P] \approx \sum_{l=0}^{L} E\!\left[P_l - P_{l-1}\right], \qquad P_{-1} \equiv 0, \tag{4}
\]
achieving $O(\epsilon^{-2}(\log \epsilon)^{2})$ complexity for accuracy $\epsilon$ [Giles, 2008].
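A minimal Python sketch of the single-level estimator in equations (2)-(3) for a European call under geometric Brownian motion follows; the parameter values and function name are illustrative, and the reported standard error reflects the $O(1/\sqrt{N})$ convergence noted above.

```python
import numpy as np

def mc_european_call(S0, K, r, sigma, T, n_paths, seed=42):
    """Plain Monte Carlo estimate of a European call under GBM, eq. (2)-(3)."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    # One-step exact GBM simulation of S_T
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    payoff = np.maximum(ST - K, 0.0)
    disc = np.exp(-r * T)
    price = disc * payoff.mean()
    # Standard error reflects the O(1/sqrt(N)) convergence of the estimator
    stderr = disc * payoff.std(ddof=1) / np.sqrt(n_paths)
    return price, stderr

price, se = mc_european_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, n_paths=100_000)
print(f"MC price: {price:.4f} +/- {1.96 * se:.4f} (95% CI)")
```

Variance-reduction techniques (antithetic variates, control variates) or the MLMC telescoping sum in (4) would tighten this estimate at the same computational budget.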
Limitations
• High memory and computational costs limit scalability, particularly for high-dimensional
problems.
3.2 Differential Equation Solvers
3.2.1 Finite Difference Methods
Theory FDM solves PDEs like the Black-Scholes PDE:
\[
\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0, \tag{6}
\]
discretizing $(S, t)$ into $(j\Delta S, n\Delta t)$. Central differences approximate:
\[
\frac{\partial V}{\partial S} \approx \frac{V_{j+1}^{n} - V_{j-1}^{n}}{2\Delta S}, \qquad
\frac{\partial^2 V}{\partial S^2} \approx \frac{V_{j+1}^{n} - 2V_{j}^{n} + V_{j-1}^{n}}{(\Delta S)^2}.
\]
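A minimal sketch of the explicit variant of this scheme for a European put is given below; the grid sizes, boundary conditions, and market parameters are illustrative, and the number of time steps is chosen to respect the stability bound discussed under Limitations.

```python
import numpy as np

def explicit_fd_european_put(S0, K, r, sigma, T, S_max=None, M=200):
    """Explicit finite-difference solver for the Black-Scholes PDE (6), European put."""
    S_max = S_max or 4.0 * K
    # Explicit schemes need dt small enough for stability (CFL-type bound dt <= 1/(sigma^2 M^2))
    N = int(np.ceil(T * sigma**2 * M**2)) + 1
    dt = T / N
    j = np.arange(1, M)                       # interior nodes
    a = 0.5 * dt * (sigma**2 * j**2 - r * j)
    b = 1.0 - dt * (sigma**2 * j**2 + r)
    c = 0.5 * dt * (sigma**2 * j**2 + r * j)
    S = np.linspace(0.0, S_max, M + 1)
    V = np.maximum(K - S, 0.0)                # terminal payoff at t = T
    for n in range(N):                        # march backwards in time
        tau = (n + 1) * dt                    # time to maturity after this step
        V_new = V.copy()
        V_new[1:M] = a * V[0:M-1] + b * V[1:M] + c * V[2:M+1]
        V_new[0] = K * np.exp(-r * tau)       # S = 0 boundary
        V_new[M] = 0.0                        # far-field boundary
        V = V_new
    return np.interp(S0, S, V)

print(explicit_fd_european_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0))
```

Implicit or Crank-Nicolson time stepping [Crank and Nicolson, 1947] removes the restrictive time-step bound at the cost of solving a tridiagonal system per step.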
Limitations
• Stability (e.g., CFL condition for explicit schemes) and boundary conditions require
careful consideration.
Limitations
• Euler’s method has only first-order accuracy (O(h)), requiring very small step sizes
for reasonable accuracy.
• It can exhibit instability and significant errors for stiff equations, making higher-
order methods like Runge-Kutta preferable for many applications.
Runge-Kutta Method
\begin{align}
k_1 &= f(t_n, y_n), \tag{10}\\
k_2 &= f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2} k_1\right), \tag{11}\\
k_3 &= f\!\left(t_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2} k_2\right), \tag{12}\\
k_4 &= f(t_n + h,\; y_n + h k_3), \tag{13}\\
y_{n+1} &= y_n + \tfrac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right). \tag{14}
\end{align}
The local truncation error is $O(h^5)$ and the global error is $O(h^4)$.
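A compact sketch of one classical Runge-Kutta step, applied here to a simple deterministic mean-reverting short-rate ODE with illustrative parameters and compared against its exact solution:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step, equations (10)-(14)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Illustrative deterministic short-rate ODE dr/dt = kappa * (theta - r)
kappa, theta = 2.0, 0.04
f = lambda t, r: kappa * (theta - r)
r, t, h = 0.01, 0.0, 0.01
for _ in range(100):                    # integrate to t = 1
    r = rk4_step(f, t, r, h)
    t += h
exact = theta + (0.01 - theta) * np.exp(-kappa)   # closed-form solution at t = 1
print(r, exact)
```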
Limitations
• While more accurate than Euler’s method, Runge-Kutta methods require more
function evaluations per step, increasing computational cost.
• They may still struggle with stiff equations, where implicit methods might be more
appropriate.
Theory The Finite Element Method discretizes a continuous domain into smaller sub-
domains (elements) and approximates solutions using piecewise polynomial functions.
The method minimizes a residual weighted by test functions, leading to a system of
algebraic equations. The weak form of the PDE is:
\[
\int_{\Omega} v\left(\frac{\partial u}{\partial t} + \mathcal{L}u\right)\, d\Omega = 0, \tag{15}
\]
where $v$ is a test function, $u$ is the solution, and $\mathcal{L}$ is a differential operator.
Implementation In quantitative finance, FEM is applied to complex option pricing
problems, particularly those involving irregular domains or complicated boundary condi-
tions that are challenging for traditional finite difference methods.
Limitations
• FEM implementation is more complex than finite differences and can be computa-
tionally intensive, especially for three-dimensional or higher problems.
• The method requires careful mesh generation and selection of appropriate basis
functions.
where $p_u + p_m + p_d = 1$.
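Since the trinomial transition parameters are not reproduced here, the sketch below uses the closely related Cox-Ross-Rubinstein binomial lattice [Cox et al., 1979] to price an American put by backward induction; all inputs are illustrative.

```python
import numpy as np

def crr_american_put(S0, K, r, sigma, T, N=500):
    """American put on a Cox-Ross-Rubinstein binomial lattice (backward induction)."""
    dt = T / N
    u = np.exp(sigma * np.sqrt(dt))          # up factor
    d = 1.0 / u                              # down factor
    p = (np.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = np.exp(-r * dt)
    # Terminal asset prices (most up-moves first) and payoffs
    S = S0 * u ** np.arange(N, -1, -1) * d ** np.arange(0, N + 1)
    V = np.maximum(K - S, 0.0)
    for n in range(N - 1, -1, -1):
        S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
        V = disc * (p * V[:-1] + (1 - p) * V[1:])   # discounted continuation value
        V = np.maximum(V, K - S)                    # early-exercise check
    return V[0]

print(crr_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0))
```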
Limitations
• Path-dependent options and high-dimensional problems are challenging for lattice
methods.
• Complex model calibration is limited by the tree structure.
where $\phi(u) = \mathbb{E}^{\mathbb{Q}}[e^{iux}]$ is the characteristic function. The option price is:
\[
V(0) = e^{-rT} \sum_{k=0}^{N-1} A_k V_k, \tag{19}
\]
with $V_k$ as payoff coefficients. The characteristic function $\phi(u)$ of the Heston model is also available in closed form, which is what makes the method attractive for stochastic-volatility pricing.
Implementation COS requires the characteristic function ϕ(u) (e.g., for Black-Scholes,
Heston models). Truncation interval [a, b] uses cumulants, and FFT accelerates compu-
tation.
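A minimal sketch of the COS pricing formula (19) for a European put under Black-Scholes, following the structure of Fang and Oosterlee [2008]; the truncation constant $L$, the number of terms $N$, and the market inputs are illustrative, and the result is cross-checked against the closed-form price.

```python
import numpy as np
from scipy.stats import norm

def cos_put_black_scholes(S0, K, r, sigma, T, N=256, L=10.0):
    """European put via the COS expansion of Fang and Oosterlee (2008) under GBM."""
    x = np.log(S0 / K)
    # Cumulant-based truncation range [a, b]
    c1 = x + (r - 0.5 * sigma**2) * T
    c2 = sigma**2 * T
    a, b = c1 - L * np.sqrt(c2), c1 + L * np.sqrt(c2)
    k = np.arange(N)
    u = k * np.pi / (b - a)

    # Characteristic function of log(S_T / S_0) under Black-Scholes
    cf = np.exp(1j * u * (r - 0.5 * sigma**2) * T - 0.5 * sigma**2 * u**2 * T)

    # Payoff cosine coefficients V_k for a put, integrating the payoff over [a, 0]
    c, d = a, 0.0
    chi = (np.cos(u * (d - a)) * np.exp(d) - np.cos(u * (c - a)) * np.exp(c)
           + u * (np.sin(u * (d - a)) * np.exp(d) - np.sin(u * (c - a)) * np.exp(c))) / (1.0 + u**2)
    psi = np.empty(N)
    psi[0] = d - c
    psi[1:] = (np.sin(u[1:] * (d - a)) - np.sin(u[1:] * (c - a))) / u[1:]
    Vk = 2.0 / (b - a) * K * (-chi + psi)

    # COS pricing formula; the k = 0 term carries weight 1/2
    terms = np.real(cf * np.exp(1j * u * (x - a))) * Vk
    terms[0] *= 0.5
    return np.exp(-r * T) * terms.sum()

# Sanity check against the Black-Scholes closed form
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_put = K * np.exp(-r * T) * norm.cdf(-d2) - S0 * norm.cdf(-d1)
print(cos_put_black_scholes(S0, K, r, sigma, T), bs_put)
```

Swapping in the Heston characteristic function, with its own cumulant-based interval, extends the same code to stochastic volatility.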
Limitations
• Path-dependent or American options are not well-suited for the COS method.
The FFT algorithm exploits the symmetry and periodicity of the DFT to reduce the
number of computations.
Limitations
• FFT requires sequence lengths to be powers of two for maximum efficiency and may
introduce numerical artifacts if not properly implemented.
• It also assumes periodicity of the input data, which may require additional prepro-
cessing steps in financial applications.
3.4.3 Spectral Methods
Theory Spectral methods expand solutions in global bases, e.g., Chebyshev polynomials:
\[
V(S, t) \approx \sum_{k=0}^{N} a_k(t) T_k(S). \tag{22}
\]
Coefficients $a_k(t)$ solve ODEs via collocation, offering exponential convergence for smooth functions. The Chebyshev polynomials $T_k(x)$ are defined as $T_k(x) = \cos(k \arccos x)$ on $[-1, 1]$.
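A short numerical illustration of this exponential convergence, fitting a smooth stand-in function with NumPy's Chebyshev utilities; the function and the degrees tested are arbitrary choices.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Smooth stand-in for a value function V(S, t) at a fixed time, mapped to [-1, 1]
f = lambda x: np.exp(0.3 * x) * np.sin(2.0 * x)

for deg in (4, 8, 16):
    nodes = C.chebpts1(deg + 1)                  # Chebyshev points of the first kind
    coeffs = C.chebfit(nodes, f(nodes), deg)     # coefficients a_k in sum a_k T_k(x)
    xx = np.linspace(-1.0, 1.0, 2001)
    err = np.max(np.abs(C.chebval(xx, coeffs) - f(xx)))
    print(f"degree {deg:2d}: max error {err:.2e}")   # error drops roughly exponentially
```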
Limitations
• The global nature of basis functions can lead to oscillations near discontinuities
(Gibbs phenomena).
Theory Gradient descent is an iterative optimization algorithm that finds a local min-
imum of a differentiable function by taking steps proportional to the negative gradient:
\[
\theta_{t+1} = \theta_t - \alpha \frac{\partial J(\theta_t)}{\partial \theta_t}. \tag{25}
\]
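A minimal sketch of the update rule (25) on a least-squares objective, here fitting a single-factor beta to synthetic, standardized factor returns; the data, learning rate, and iteration count are illustrative.

```python
import numpy as np

# Gradient descent on J(theta) = mean squared residual of a one-factor model.
rng = np.random.default_rng(0)
factor = rng.standard_normal(1000)                        # standardized factor returns
asset = 1.3 * factor + 0.2 * rng.standard_normal(1000)    # synthetic asset, "true" beta = 1.3

theta, alpha = 0.0, 0.1                                   # initial guess and learning rate
for _ in range(500):
    residual = asset - theta * factor
    grad = -2.0 * np.mean(residual * factor)              # dJ/dtheta
    theta -= alpha * grad                                 # update rule (25)
print(theta)                                              # approaches the least-squares beta (~1.3)
```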
Limitations
• The method may converge slowly for ill-conditioned problems, get trapped in local
minima, or oscillate around the optimum if the learning rate is poorly chosen.
Newton-Raphson Method
\[
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}. \tag{26}
\]
Starting with an initial guess x0 , the method uses the function and its derivative to
generate increasingly accurate approximations. The convergence is quadratic if the initial
guess is close to the root.
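A common financial application is implied volatility. The sketch below applies iteration (26) to $f(\sigma) = \mathrm{BS}(\sigma) - P_{\text{market}}$, using vega as the derivative; the market quote is synthetic and the helper names are ours.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, T, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol_newton(price, S, K, r, T, sigma0=0.2, tol=1e-10, max_iter=50):
    """Newton-Raphson (26) on f(sigma) = BS(sigma) - market price; f' is vega."""
    sigma = sigma0
    for _ in range(max_iter):
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        vega = S * norm.pdf(d1) * np.sqrt(T)     # derivative of the price w.r.t. sigma
        diff = bs_call(S, K, r, T, sigma) - price
        if abs(diff) < tol:
            break
        sigma -= diff / vega                     # x_{n+1} = x_n - f(x_n) / f'(x_n)
    return sigma

market_price = bs_call(100.0, 110.0, 0.03, 0.5, 0.25)   # synthetic quote with sigma = 0.25
print(implied_vol_newton(market_price, 100.0, 110.0, 0.03, 0.5))
```

Convergence is quadratic near the root, but far out-of-the-money options have tiny vega, which is exactly the near-zero-derivative failure mode noted below.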
Limitations
• The method may fail to converge if the derivative is close to zero or if the initial
guess is far from the actual root.
Levenberg-Marquardt algorithm
Limitations
Theory The simplex method solves linear programming problems by systematically
moving from one feasible solution to another along the edges of the feasible region, im-
proving the objective function value until reaching an optimal solution. The standard
form of a linear programming problem is:
\[
\min_{x} \; c^{\top} x \quad \text{subject to} \quad Ax = b, \quad x \ge 0.
\]
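A minimal sketch of a linear program solved with SciPy's linprog (backed by the HiGHS simplex and interior-point solvers): maximizing expected portfolio return under a budget constraint, a group cap, and per-asset bounds, with illustrative numbers.

```python
import numpy as np
from scipy.optimize import linprog

mu = np.array([0.08, 0.12, 0.10, 0.07])        # illustrative expected returns
c = -mu                                        # linprog minimizes, so negate the objective
A_ub = np.array([[1.0, 1.0, 0.0, 0.0]])        # w1 + w2 <= 0.6 (group cap)
b_ub = np.array([0.6])
A_eq = np.array([[1.0, 1.0, 1.0, 1.0]])        # weights sum to one
b_eq = np.array([1.0])
bounds = [(0.0, 0.4)] * 4                      # long-only, at most 40% per asset

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x, -res.fun)                         # optimal weights and expected return
```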
Limitations
• While efficient in practice, the simplex method has exponential worst-case complex-
ity.
• It only applies to linear programming problems and cannot directly handle non-
linear objectives or constraints.
or calibration error:
\[
\min_{\theta} \sum_{i} \left(P_{\text{model},i}(\theta) - P_{\text{market},i}\right)^2. \tag{30}
\]
Limitations
• Define the function $f(x)$ and the initial bracket $[a, b]$.
• Initialize $a$ and $b$ such that $f(a) \cdot f(b) < 0$.
The convergence of the bisection method is linear, halving the error in each iteration. The error after $n$ iterations is given by:
\[
|x^{*} - x_n| \le \frac{b - a}{2^{n}}, \tag{31}
\]
where $x^{*}$ is the true root and $x_n$ is the midpoint at the $n$-th iteration.
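A minimal sketch applying the algorithm to a standard fixed-income task, solving for a bond's yield to maturity; the bond data and the initial bracket are illustrative.

```python
def bond_price(ytm, face, coupon_rate, n_periods):
    """Price of a coupon bond paying coupon_rate * face each period, redeemed at par."""
    coupon = coupon_rate * face
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, n_periods + 1))
    return pv_coupons + face / (1 + ytm) ** n_periods

def bisection_ytm(target_price, face, coupon_rate, n_periods, a=1e-6, b=1.0, tol=1e-8):
    """Bisection on f(y) = bond_price(y) - target_price; requires f(a) * f(b) < 0."""
    f = lambda y: bond_price(y, face, coupon_rate, n_periods) - target_price
    if f(a) * f(b) > 0:
        raise ValueError("Initial bracket does not contain a sign change.")
    while b - a > tol:
        m = 0.5 * (a + b)                 # midpoint; the error is halved each iteration
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# A 5% annual-coupon bond priced at 95 per 100 face over 10 years
print(bisection_ytm(95.0, 100.0, 0.05, 10))
```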
Limitations
• The method converges linearly, making it slower than higher-order methods like
Newton-Raphson.
• It requires an initial bracket $[a, b]$ over which $f$ changes sign, which may not always be easy to determine.
• The bisection method may not be efficient for functions with multiple roots or
discontinuities.
\[
(A \mid b) \tag{32}
\]
Implementation In quantitative finance, Gaussian elimination is used to solve port-
folio optimization problems, risk models, and calibration of multi-factor models where
systems of linear equations naturally arise.
Limitations
• The method has $O(n^3)$ computational complexity, making it inefficient for large
systems.
• It can also suffer from numerical instability due to round-off errors, particularly
when pivoting strategies are not employed.
3.7.2 LU Decomposition
Theory LU decomposition factors a matrix A into the product of a lower triangular
matrix L and an upper triangular matrix U : A = LU . This decomposition allows efficient
solving of multiple systems with the same coefficient matrix but different right-hand sides.
The decomposition is given by:
A = LU, (33)
where L and U are lower and upper triangular matrices, respectively.
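A minimal sketch using SciPy's LU routines to factor a covariance-style matrix once and reuse the factorization for several right-hand sides; the matrix and vectors are illustrative.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Solve Sigma x = b for several target exposure vectors b with one factorization.
Sigma = np.array([[0.040, 0.006, 0.012],
                  [0.006, 0.090, 0.018],
                  [0.012, 0.018, 0.160]])
lu, piv = lu_factor(Sigma)                 # O(n^3) factorization, done once

for b in (np.array([1.0, 0.0, 0.0]), np.array([0.2, 0.5, 0.3])):
    x = lu_solve((lu, piv), b)             # O(n^2) per right-hand side
    print(x, np.allclose(Sigma @ x, b))    # verify the solution
```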
Limitations
• Like Gaussian elimination, LU decomposition has $O(n^3)$ complexity and may en-
counter numerical stability issues with ill-conditioned matrices.
Implementation In finance, the power method is used to find dominant risk factors
in large covariance matrices, perform principal component analysis (PCA) for dimension-
ality reduction, and identify key variables in factor models. The algorithm involves the
following steps: start from an arbitrary unit vector $v_0$; repeatedly set $v_{k+1} = A v_k / \|A v_k\|$ until the direction of $v_k$ stabilizes; then estimate the dominant eigenvalue via the Rayleigh quotient $\lambda \approx v_k^{\top} A v_k$.
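A minimal sketch of power iteration extracting the dominant eigenpair of a toy covariance matrix, cross-checked against a dense eigen-solver; the matrix is illustrative.

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=1000, seed=0):
    """Power iteration for the dominant eigenpair of a symmetric matrix A."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(max_iter):
        w = A @ v
        v_new = w / np.linalg.norm(w)
        if np.linalg.norm(v_new - v) < tol:
            break
        v = v_new
    lam = v_new @ A @ v_new                 # Rayleigh quotient estimate of the eigenvalue
    return lam, v_new

# Dominant "market factor" of a toy covariance matrix
Sigma = np.array([[0.040, 0.018, 0.016],
                  [0.018, 0.090, 0.030],
                  [0.016, 0.030, 0.060]])
lam, v = power_method(Sigma)
print(lam, v)
print(np.linalg.eigvalsh(Sigma)[-1])        # cross-check with a dense eigen-solver
```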
Limitations
• The power method only finds the dominant eigenvalue/eigenvector pair. It does
not provide information about other eigenvalues or eigenvectors.
• The method converges slowly if the largest and second-largest eigenvalues are close
in magnitude.
• If the initial vector v0 is orthogonal to the dominant eigenvector, the method may
fail entirely.
• The power method is sensitive to the scaling of the matrix A. If the matrix is poorly
scaled, the method may converge to a non-dominant eigenvector.
Limitations
• The method has error $O(h^2)$, which may be insufficient for highly oscillatory or
discontinuous functions.
The Lagrange basis polynomials $L_i(x)$ satisfy $L_i(x_j) = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta.
Limitations
• In practice, piecewise interpolation methods like cubic splines are often preferred for financial data; a minimal yield-curve sketch follows below.
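A minimal sketch of such a spline fit on an illustrative zero-coupon yield curve, using SciPy's CubicSpline; the quoted maturities and yields are made up.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Interpolate a zero-coupon yield curve at quoted maturities (illustrative data).
maturities = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 30.0])     # years
yields = np.array([0.031, 0.033, 0.035, 0.037, 0.040, 0.042, 0.044])

curve = CubicSpline(maturities, yields)        # C2 piecewise-cubic interpolant
for t in (0.75, 3.0, 7.0, 20.0):
    y = float(curve(t))
    print(f"{t:5.2f}y zero yield: {y:.4%}")
# Discount factor from the interpolated continuously compounded yield
print(np.exp(-float(curve(7.0)) * 7.0))
```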
4 Illustrative Examples
4.1 Asian Option with Monte Carlo
Asian call payoff: $\max(\bar{S} - K, 0)$, where $\bar{S}$ is the average price. MCS simulates GBM paths:
\[
S_{t+\Delta t}^{(i)} = S_{t}^{(i)} \exp\!\left(\left(r - \frac{\sigma^2}{2}\right)\Delta t + \sigma\sqrt{\Delta t}\, Z_{t}^{(i)}\right), \tag{39}
\]
Price: $V(0) \approx e^{-rT} \frac{1}{N} \sum_{i=1}^{N} \max\!\left(\frac{1}{M}\sum_{k=1}^{M} S_{t_k}^{(i)} - K,\; 0\right)$ [Chang et al., 2007].
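A minimal sketch of this path-dependent estimator with weekly monitoring dates; the contract and market inputs are illustrative.

```python
import numpy as np

def mc_asian_call(S0, K, r, sigma, T, n_steps, n_paths, seed=1):
    """Arithmetic-average Asian call via Monte Carlo over discretized GBM paths (39)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    Z = rng.standard_normal((n_paths, n_steps))
    log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
    S = S0 * np.exp(np.cumsum(log_increments, axis=1))   # paths S_{t_1}, ..., S_{t_M}
    avg = S.mean(axis=1)                                  # arithmetic average per path
    payoff = np.maximum(avg - K, 0.0)
    disc = np.exp(-r * T)
    return disc * payoff.mean(), disc * payoff.std(ddof=1) / np.sqrt(n_paths)

price, se = mc_asian_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                          n_steps=52, n_paths=100_000)
print(f"Asian call: {price:.4f} +/- {1.96 * se:.4f}")
```

The geometric-average Asian option, which has a closed-form price, is a natural control variate here.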
4.3 Portfolio Optimization
Markowitz optimization:
\[
\min_{w} \; \frac{1}{2} w^{\top} \Sigma w \quad \text{s.t.} \quad \mu^{\top} w \ge R_{\text{target}}, \quad e^{\top} w = 1, \quad w \ge 0, \tag{40}
\]
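A minimal sketch solving (40) with SciPy's general-purpose SLSQP solver rather than a dedicated quadratic-programming package; the expected returns, covariance matrix, and return target are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10, 0.07])
Sigma = np.array([[0.040, 0.006, 0.012, 0.004],
                  [0.006, 0.090, 0.018, 0.010],
                  [0.012, 0.018, 0.160, 0.020],
                  [0.004, 0.010, 0.020, 0.030]])
R_target = 0.09

objective = lambda w: 0.5 * w @ Sigma @ w
constraints = (
    {"type": "ineq", "fun": lambda w: mu @ w - R_target},   # mu' w >= R_target
    {"type": "eq",   "fun": lambda w: np.sum(w) - 1.0},     # e' w = 1
)
bounds = [(0.0, 1.0)] * len(mu)                             # w >= 0
w0 = np.full(len(mu), 1.0 / len(mu))                        # equal-weight starting point

res = minimize(objective, w0, method="SLSQP", bounds=bounds, constraints=constraints)
print(res.x, mu @ res.x, np.sqrt(res.x @ Sigma @ res.x))    # weights, return, volatility
```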
rates, necessitating enormous sample sizes for high-precision results. This complexity be-
comes particularly problematic in high-dimensional settings—common in portfolio anal-
ysis with hundreds of assets or in models with multiple stochastic factors. Multilevel
Monte Carlo methods partially address this challenge by achieving improved asymptotic
complexity of $O(\epsilon^{-2}(\log \epsilon)^{2})$ for accuracy $\epsilon$, but implementation complexities often limit
practical adoption.
Finite difference methods face the curse of dimensionality, with computational re-
quirements growing exponentially with problem dimensions. For example, a three-factor
model discretized with 100 points per dimension requires $100^3$ grid points, straining even
modern computing resources. The COS method demands increasingly large series ex-
pansions for high-accuracy results, particularly when applied to models with complex
characteristic functions like the rough Heston model.
Parallel computing and GPU acceleration offer partial solutions, with documented
speedups of 10-100× for Monte Carlo simulations. However, algorithm-specific optimiza-
tions are often required, and not all methods parallelize efficiently. Sparse grid techniques
and adaptive mesh refinement have shown promise for finite difference methods, reducing
complexity from $O(N^d)$ to $O(N(\log N)^{d-1})$ for $d$ dimensions, but introduce implementa-
tion challenges and approximation errors.
Memory constraints further limit practicality, particularly for path-dependent deriva-
tives where the entire price evolution must be stored. Tree-based methods become un-
wieldy beyond three factors, with node counts growing as $O(b^{nd})$ for branching factor b,
time steps n, and dimensions d.
Backtesting requires sufficient historical data and assumes some stability in market dy-
namics. Benchmark comparisons across methods can identify implementation issues but
may mask common modeling errors. Sensitivity analysis and stress testing provide insight
into model robustness but struggle to identify unknown unknowns.
For risk management applications, historical simulation approaches face data limi-
tations when modeling extreme events. Stress scenarios based on historical data may
underestimate tail risks if the historical sample doesn’t include sufficiently severe market
dislocations. Synthetic data generation techniques address this limitation but introduce
modeling assumptions that may not reflect realistic crisis dynamics.
for portfolio construction with various objective functions and constraints. The perfor-
mance package standardizes investment performance reporting, and the xts (eXtensible
Time Series) package provides robust time series manipulation capabilities essential for
financial data.
C++ continues to dominate performance-critical applications, particularly in high-
frequency trading and real-time risk management systems. QuantLib represents the most
comprehensive open-source library for quantitative finance in C++, implementing a wide
range of models, instruments, and numerical methods with hundreds of contributors over
two decades of development. The Boost libraries provide additional mathematical tools,
particularly the Boost.Math and Boost.Random components for statistical functions and
random number generation. Modern C++ (C++11 and beyond) offers improved produc-
tivity through features like auto typing, smart pointers, and lambda functions, reducing
the historical productivity gap compared to interpreted languages.
MATLAB maintains significant presence in research settings and quantitative develop-
ment teams. The Financial Toolbox provides specialized functions for derivative pricing,
interest rate modeling, and portfolio optimization with tight integration to MATLAB’s
broader numerical ecosystem. The Econometrics Toolbox supports time series analysis
and forecasting, while the Optimization Toolbox offers various solvers for constrained
and unconstrained problems common in calibration and portfolio construction. MAT-
LAB’s Parallel Computing Toolbox simplifies distribution of computationally intensive
simulations across multiple cores or clusters.
Julia represents an emerging contender, designed to address the "two-language prob-
lem” by combining Python-like syntax with C-like performance. The JuliaFinance ecosys-
tem includes DifferentialEquations.jl for solving stochastic differential equations, Tur-
ing.jl for Bayesian inference in financial models, and FinancialDerivatives.jl for option
pricing. Julia’s multiple dispatch paradigm enables elegant implementation of financial
algorithms, while its native support for automatic differentiation benefits calibration and
sensitivity analysis.
performance attribution, the pyfolio package provides standardized evaluation of invest-
ment strategies with risk-adjusted metrics and factor analysis.
High-performance computing in finance is supported by libraries like CUDA-enabled
MonteCarloFin, which implements parallel Monte Carlo simulations on GPUs with re-
ported speedups of 10-200× compared to CPU implementations. The Intel Math Kernel
Library provides highly optimized implementations of linear algebra operations critical
for matrix-based methods like finite differences and principal component analysis, with
specific optimizations for Intel processors.
expected results as discretization parameters are refined, while benchmark comparison
against analytical solutions validates implementation correctness where closed-form solu-
tions exist.
Performance profiling tools help identify bottlenecks in numerical implementations.
Python’s cProfile and line_profiler expose timing metrics at function and line levels, re-
spectively. Valgrind and Intel VTune Profiler provide deeper insights into memory usage
and CPU utilization for compiled languages. Specialized profiling for GPU implemen-
tations uses NVIDIA NSight and CUDA profiling tools to optimize computation and
memory transfer patterns.
Documentation practices ensure knowledge transfer and model governance. Liter-
ate programming approaches using Jupyter Notebooks combine code, explanations, and
visualizations in single documents, improving transparency. Model cards, inspired by
practices in machine learning, document model assumptions, implementation details,
validation results, and limitations in standardized formats to support governance and
regulatory compliance.
7.2 Model Governance and Validation
Model governance frameworks have evolved to address the risks associated with complex
numerical implementations. Effective governance requires clear separation of responsi-
bilities between model development, validation, and approval functions. The three lines
of defense model—with model developers as the first line, independent validation as the
second, and audit as the third—has become standard practice across major financial
institutions.
Model validation methodologies have grown increasingly sophisticated, employing
techniques like benchmark comparison across different numerical methods, sensitivity
analysis across parameter ranges, extreme scenario testing, and historical backtesting.
Benchmark databases of test cases with known solutions support validation efforts, though
standardization remains challenging for exotic instruments.
Documentation standards have expanded to include implementation details that may
affect numerical results. Model documentation typically includes mathematical specifica-
tion, numerical approximation methods, discretization approaches, parameter calibration
procedures, and convergence criteria. Documentation of code verification techniques, in-
cluding unit tests and test coverage metrics, supports governance objectives.
Model inventories track implementations across institutions, with larger banks main-
taining thousands of models subject to governance processes. Tiering approaches prior-
itize validation resources based on model materiality and risk, with critical pricing and
risk models receiving the most intensive scrutiny. Regular model review cycles ensure nu-
merical methods remain appropriate as market conditions and computational capabilities
evolve.
Environmental impacts of compute-intensive financial modeling have received increas-
ing attention, with estimates suggesting that a single complex Monte Carlo simulation
using cloud computing can generate carbon emissions equivalent to a car journey of
several miles. Balancing computational accuracy against environmental considerations
represents an emerging ethical dimension in numerical finance.
(DORA) establish explicit expectations for technology system resilience, with implica-
tions for the deployment architecture of computationally intensive methods like Monte
Carlo simulations.
8 Future Trends
Emerging trends include:
• Neural SDEs: Combine stochastic differential equations with neural networks for
flexible modeling.
9 Conclusion
Numerical methods serve as the essential foundation of modern quantitative finance,
enabling practitioners to navigate complex financial landscapes where closed-form solu-
tions remain elusive. This comprehensive review has examined the theoretical underpin-
nings, practical implementations, and inherent limitations of key approaches including
Monte Carlo simulations, finite difference methods, lattice models, and Fourier tech-
niques. Through detailed analysis of application domains spanning derivative pricing, risk
management, portfolio optimization, and dynamic trading, we have demonstrated how
these methods translate abstract financial models into actionable insights and strategic
decisions.
The field faces significant challenges, from the computational complexity of high-
dimensional problems to model risk stemming from simplifying assumptions and im-
plementation constraints. Numerical stability concerns and data quality issues further
complicate practical applications, while systemic risks arise from homogeneous modeling
approaches across the financial system. Our expanded examination of software implemen-
tation considerations has highlighted the ecosystem of programming languages, special-
ized libraries, and data platforms supporting these methods, along with the development
practices that ensure reliable results. Regulatory frameworks and ethical considerations
provide essential guardrails for the application of numerical techniques, balancing inno-
vation against stability and fairness objectives.
Despite these challenges, the trajectory of numerical finance points toward transfor-
mative innovations. Machine learning approaches promise to enhance traditional methods
through hybrid models that balance interpretability with flexibility. Quantum comput-
ing offers the potential to revolutionize computationally intensive techniques like Monte
Carlo simulation, with early algorithms demonstrating quadratic speedups for specific
problems. Neural stochastic differential equations create opportunities for more flexi-
ble modeling of complex dynamics, while federated approaches may enable collaborative
model development while preserving data privacy.
The continued evolution of numerical methods in finance will require interdisciplinary
collaboration across mathematics, computer science, finance, and economics. Open-
source initiatives have democratized access to sophisticated tools, while cloud computing
has reduced infrastructure barriers to implementation. As financial systems grow increas-
ingly interconnected and algorithm-driven, the robustness of numerical methods takes on
systemic importance beyond individual institutions.
In conclusion, while analytical solutions provide elegant insights where available, nu-
merical methods will remain indispensable for addressing the complexity, non-linearity,
and high dimensionality inherent in financial systems. The future of quantitative fi-
nance lies in the thoughtful integration of traditional numerical approaches with emerg-
ing computational paradigms, balancing theoretical rigor with practical implementation
constraints to enable more effective risk management and decision-making in increasingly
complex and interconnected global markets.
References
Fischer Black and Myron Scholes. The pricing of options and corporate liabilities. Journal
of Political Economy, 81(3):637–654, 1973.
Mark Broadie and Paul Glasserman. Pricing American-style securities using simulation. Journal of Economic Dynamics and Control, 21(8-9):1323–1352, 1997.
Charles G. Broyden, Roger Fletcher, Donald Goldfarb, and David F. Shanno. A class of
methods for solving nonlinear simultaneous equations. Mathematics of Computation,
24(109):577–593, 1970.
Kai-Chun Chang, Chun-Yu Liao, and Chung-Shin Lin. Pricing Asian options using Monte Carlo methods. Journal of Computational and Graphical Statistics, 16(2):357–373, 2007.
John C. Cox, Stephen A. Ross, and Mark Rubinstein. Option pricing: A simplified
approach. Journal of Financial Economics, 7(3):229–263, 1979.
John Crank and Phyllis Nicolson. A practical method for numerical evaluation of solutions
of partial differential equations of the heat-conduction type. Mathematical Proceedings
of the Cambridge Philosophical Society, 43(1):50–67, 1947.
Fang Fang and Cornelis W. Oosterlee. A novel pricing method for European options based on Fourier-cosine series expansions. SIAM Journal on Scientific Computing, 31(2):826–848, 2008.
Michael B. Giles. Multilevel Monte Carlo path simulation. Operations Research, 56(3):607–617, 2008.
Paul Glasserman. Monte Carlo Methods in Financial Engineering. Springer, New York,
2004. ISBN 0-387-00451-3.
Patrick S. Hagan, Deep Kumar, Andrew S. Lesniewski, and Diana E. Woodward. Man-
aging smile risk. Wilmott Magazine, pages 84–108, January 2002.
Steven L. Heston. A closed-form solution for options with stochastic volatility with
applications to bond and currency options. The Review of Financial Studies, 6(2):
327–343, 1993.
Kenneth Levenberg. A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics, 2(2):164–168, 1944.

Harry Markowitz. Portfolio selection. The Journal of Finance, 7(1):77–91, 1952.
Paul Wilmott. Paul Wilmott on Quantitative Finance. John Wiley & Sons, Chichester,
UK, 2nd edition, 2006.