Medical image analysis: Refresher

Just pointers and reminders

Ender Konukoglu
ETH Zürich
February 18, 2025



Outline

Basic notation
Probabilistic modeling
Optimization, cost function and regularization
Linear basis models and function parameterizations
Spatial transformations
Derivative approximations
Basic notation
Continuous version
    A volumetric grayscale image is a function
    I : Ω → R, where Ω ⊂ R^3 is the image domain
    I(x) is the intensity at point x ∈ Ω, x = [x1, x2, x3]
Discrete version
    Ω ⊂ Z^3 is the discrete image domain, i.e. a Cartesian grid
    I(x) is the intensity at x ∈ Ω, x = [i, j, k]
For multiparametric images, such as diffusion-weighted MRI, spectroscopy or
multiple modalities stacked together
    I : Ω → R^N
    I(x) is the vector of intensities at x
For dynamic images with temporal information
    I : Ω × R+ → R^N
    I(x, t) is the intensity at point x and time t
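In code, these definitions map directly onto arrays. A minimal NumPy sketch (the shapes, channel counts and variable names are illustrative, not from the lecture):

```python
import numpy as np

# Volumetric grayscale image: I : Ω → R with Ω ⊂ Z^3 (a Cartesian grid).
I = np.random.rand(64, 64, 32)              # I(x) at x = [i, j, k]

# Multiparametric image: I : Ω → R^N, with N channels along the last axis.
I_multi = np.random.rand(64, 64, 32, 4)     # I(x) is a length-4 intensity vector

# Dynamic image: I : Ω × R+ → R^N, adding a (discretized) time axis.
I_dyn = np.random.rand(64, 64, 32, 10, 4)   # I(x, t) with 10 time points

intensity = I[10, 20, 5]        # scalar intensity at one grid point
vector = I_multi[10, 20, 5]     # intensity vector at one grid point
```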
Probabilistic modeling

Outline

Basic notation
Probabilistic modeling
PDF, CDF, PMF
Histogram of an image
Conditionals and the Bayes’ Rule
Posterior distribution, MAP and MLE
Optimization, cost function and regularization
Linear basis models and function parameterizations
Spatial transformations
Derivative approximations
PDF, CDF, PMF
Intensity at each point I (x) ∈ R is seen as a random variable


(Other functions can also be seen as random variables, e.g. transformation T (x)
or discrete labels L(x))
For continuous random variables we define (dropping (x) for simplicity)
    p(i) as its probability density function (PDF)
    P(i) = Pr[I ≤ i] = ∫_{−∞}^{i} p(j) dj as its cumulative distribution function (CDF)
[Figure: example PDF (left) and CDF (right) of a continuous random variable I]

For discrete random variables, L, we define
    p(l) = Pr[L = l] as its probability mass function (PMF)
    The PMF is the discrete analogue of the PDF and is often simply called a PDF in scientific articles
Histogram of an image
[Figure: image histogram (left) and cumulative histogram (right), intensity I on the horizontal axis]

If we consider each pixel intensity as an independent realization of the random variable
I, then the histogram is an approximation to the PDF and the cumulative histogram is an
approximation to the CDF
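This approximation is easy to see in code. A minimal sketch (the synthetic "image" below is invented for the example): the normalized histogram approximates the PDF, and its running sum approximates the CDF.

```python
import numpy as np

rng = np.random.default_rng(0)
# Treat each pixel intensity as an independent realization of the random variable I.
image = rng.normal(loc=100.0, scale=20.0, size=(128, 128))

# Normalized histogram: an approximation to the PDF.
counts, edges = np.histogram(image.ravel(), bins=50, density=True)
bin_width = edges[1] - edges[0]

# Running sum of the histogram: an approximation to the CDF.
cdf = np.cumsum(counts) * bin_width
# Like any CDF, it is non-decreasing and ends at 1.
```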
Conditionals and the Bayes' Rule
When we have two variables such as I and L
    p(i, l): joint distribution
    p(i) = Σ_{l=0}^{N} p(i, l) and p(l) = ∫_{−∞}^{∞} p(i, l) di: marginal distributions
    p(i|l) and p(l|i): conditional distributions
    p(i, l) = p(i|l)p(l) = p(l|i)p(i)
The conditionals are linked with the Bayes' rule

    p(i|l) = p(l|i)p(i) / p(l)    and    p(l|i) = p(i|l)p(l) / p(i)

In a large variety of problems one of the variables is observed and the other is not.
Assume i is observed; in that case
    p(i|l): likelihood
    p(l): prior distribution
    p(l|i): posterior distribution
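A small numerical illustration of the Bayes' rule in this setting (the two labels, their prior probabilities and the Gaussian intensity models below are invented for the example, not taken from the lecture):

```python
import math

# Two tissue labels l ∈ {0, 1} with prior p(l) and Gaussian likelihoods p(i|l).
prior = {0: 0.7, 1: 0.3}
mean = {0: 50.0, 1: 120.0}
std = {0: 15.0, 1: 20.0}

def likelihood(i, l):
    """p(i|l): Gaussian PDF with class-specific mean and standard deviation."""
    z = (i - mean[l]) / std[l]
    return math.exp(-0.5 * z * z) / (std[l] * math.sqrt(2.0 * math.pi))

def posterior(i):
    """p(l|i) = p(i|l) p(l) / p(i), with the marginal p(i) = Σ_l p(i|l) p(l)."""
    joint = {l: likelihood(i, l) * prior[l] for l in prior}
    evidence = sum(joint.values())
    return {l: joint[l] / evidence for l in prior}

post = posterior(110.0)   # a bright intensity should favor label 1
```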
Posterior distribution, MAP and MLE
A large range of problems can be formulated as: given an observation i, estimate l.
Example problems: image enhancement, segmentation and even registration.
A generic solution to such problems is to determine the posterior distribution of l,
i.e. Bayesian inference

    p(l|i) = p(i|l)p(l) / p(i)

p(i) requires summing over all l (or integrating over all l if continuous), which
can be infeasible. The alternative is to determine the l that maximizes the
posterior, i.e. the Maximum-A-Posteriori (MAP) estimate

    arg max_l p(l|i) = arg max_l p(i|l)p(l) = arg max_l [log p(i|l) + log p(l)]

The posterior distribution and the MAP estimate require the prior. When such a prior
does not exist, the alternative is to determine the Maximum Likelihood Estimate
(MLE)

    arg max_l p(i|l)

MLE is the same as MAP when the prior is uniform, i.e. p(l) = c, ∀l
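The MAP/MLE relationship can be checked on a toy problem. A sketch under invented assumptions: we estimate a scalar from noisy Gaussian observations with a Gaussian prior, where both estimates have closed forms, and verify that a (nearly) flat prior makes MAP collapse to MLE.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observations i ~ N(l, σ²) of an unknown scalar l, with prior l ~ N(μ0, τ²).
sigma, mu0, tau = 2.0, 0.0, 1.0
true_l = 3.0
obs = rng.normal(true_l, sigma, size=20)
n = len(obs)

# MLE: arg max_l p(i|l) — for a Gaussian likelihood this is the sample mean.
mle = obs.mean()

# MAP: arg max_l [log p(i|l) + log p(l)] — closed form for Gaussian prior/likelihood;
# it shrinks the MLE toward the prior mean μ0.
map_est = (n * mle / sigma**2 + mu0 / tau**2) / (n / sigma**2 + 1.0 / tau**2)

# With an (almost) uniform prior, τ² → ∞, the MAP estimate reduces to the MLE.
tau_flat = 1e6
map_flat = (n * mle / sigma**2 + mu0 / tau_flat**2) / (n / sigma**2 + 1.0 / tau_flat**2)
```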
Cost function, regularization and optimization

Outline

Basic notation
Probabilistic modeling
Optimization, cost function and regularization
Data term and regularization
Optimization
Calculus of variation
Linear basis models and function parameterizations
Spatial transformations
Derivative approximations
Data term and regularization
Most problems in medical image analysis are formulated as optimization problems

    arg min_θ L(θ) = D(I; θ) + λ R(θ)
    (L: cost function, D: data term, R: regularization)

Or the related form with explicit constraints

    arg min_θ D(I; θ), such that R(θ) = 0

Examples:
    Image denoising: assume I is a noisy image and we want to retrieve a denoised J

        arg min_J ∥I − J∥₂² + λ∥∇J∥₁ = arg min_J ∫ (I(x) − J(x))² dx + λ ∫ |∇J(x)| dx

    Image registration: determine the transformation T between images I and J

        arg min_T ∫ (I(x) − J(T(x)))² dx + ∫ ∥α∆T(x) + β∇(∇ · T(x)) − γT(x)∥₂² dx

The MAP estimate is also of the same form

    arg max_θ [log p(i|θ) + log p(θ)] = arg min_θ [− log p(i|θ) − log p(θ)]

Regularizers can be thought of as priors with − log p(θ) ∝ R(θ)
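The denoising cost above can be written down directly. A minimal 1-D sketch (the piecewise-constant signal, noise level and λ are invented): it evaluates D + λR for candidate solutions J and shows why the regularizer matters — keeping the noisy observation itself pays no data term but a large total variation.

```python
import numpy as np

def denoising_cost(I, J, lam):
    """Cost L(J) = D(I; J) + λ R(J): squared data term plus total-variation regularizer."""
    data_term = np.sum((I - J) ** 2)           # discretized ∫ (I(x) − J(x))² dx
    regularizer = np.sum(np.abs(np.diff(J)))   # discretized ∫ |∇J(x)| dx
    return data_term + lam * regularizer

rng = np.random.default_rng(2)
clean = np.repeat([0.0, 1.0], 50)                       # piecewise-constant signal
noisy = clean + 0.1 * rng.standard_normal(clean.size)   # noisy observation I

# The clean signal pays a small data term and a tiny regularizer (one jump);
# the noisy signal pays no data term but a large total variation.
cost_clean = denoising_cost(noisy, clean, lam=0.5)
cost_noisy = denoising_cost(noisy, noisy, lam=0.5)
```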
Optimization
Basic quantities for discrete θ:

Gradient

    ∇L(θ) = [∂L(θ)/∂θ₁, …, ∂L(θ)/∂θ_d]ᵀ

Hessian

    H = ⎡ ∂²L(θ)/∂θ₁²      …   ∂²L(θ)/∂θ₁∂θ_d ⎤
        ⎢       ⋮          ⋱         ⋮        ⎥
        ⎣ ∂²L(θ)/∂θ_d∂θ₁   …   ∂²L(θ)/∂θ_d²   ⎦

Gradient-based algorithms
    Gradient descent / ascent
    Newton's method
    Limited-memory BFGS, ...
Gradient-free algorithms - because sometimes the gradient is difficult to evaluate
    Nelder-Mead Simplex
    Simulated Annealing
    Powell's method, BOBYQA, ...
Some references
    Convex Optimization, Boyd and Vandenberghe, Cambridge University Press, 2004.
    Nonlinear Programming, Bertsekas, Athena Scientific, 2016.
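The simplest gradient-based algorithm on the list can be sketched in a few lines. An illustrative example (the quadratic cost, step size and step count are invented): plain gradient descent on L(θ) = ½θᵀAθ − bᵀθ, whose minimizer solves Aθ = b.

```python
import numpy as np

def gradient_descent(grad, theta0, lr=0.1, n_steps=200):
    """Plain gradient descent: θ ← θ − lr · ∇L(θ)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_steps):
        theta = theta - lr * grad(theta)
    return theta

# Quadratic cost L(θ) = ½ θᵀAθ − bᵀθ with gradient ∇L(θ) = Aθ − b.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -2.0])

theta_star = gradient_descent(lambda th: A @ th - b, theta0=[0.0, 0.0])
# At convergence the gradient vanishes, i.e. Aθ* = b.
```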
Calculus of Variation
When θ is continuous, e.g. a deformation field or the denoised version of an image, ...

The cost function is called a functional (this matches some of the examples above)

    L(θ) = ∫_{x_a}^{x_b} L(x, θ, ∇θ) dx

The quantity analogous to the gradient is the first variation

    δL(θ) ≜ lim_{ϵ→0} [L(θ + ϵη) − L(θ)] / ϵ

for arbitrary η whose value vanishes at the boundaries.
Setting the first variation to 0 for all η gives the Euler-Lagrange equation

    ∂L/∂θ − Σ_{k=1}^{d} (d/dx_k)(∂L/∂θ^(k)) = 0,    θ^(k) = ∂θ/∂x_k

The left-hand side is called the functional derivative δL/δθ. It is often used in
gradient descent/ascent schemes.
Reference: The Calculus of Variations, Bruce van Brunt, Springer, 2004.
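The Euler-Lagrange result can be verified numerically on a simple functional. A sketch under invented assumptions: for L(θ) = ∫ (θ′)² dx the functional derivative is δL/δθ = −2θ″, so the gradient of the discretized functional at interior grid points should equal −2θ″·∆x (the extra ∆x comes from the discretized integral).

```python
import numpy as np

n, dx = 200, 0.01
x = np.arange(n) * dx
theta = np.sin(2.0 * np.pi * x)

def functional(theta):
    """Discretized L(θ) = ∫ (θ')² dx ≈ Σ ((θ_{k+1} − θ_k)/∆x)² ∆x."""
    return np.sum((np.diff(theta) / dx) ** 2) * dx

# Gradient of the discretized functional via central differences in the parameters.
eps = 1e-6
grad = np.zeros(n)
for j in range(1, n - 1):            # interior points only (η vanishes at the boundary)
    e = np.zeros(n)
    e[j] = eps
    grad[j] = (functional(theta + e) - functional(theta - e)) / (2.0 * eps)

# Discretized functional derivative −2 θ'' (times ∆x from the discretized integral).
theta_xx = (np.roll(theta, -1) - 2.0 * theta + np.roll(theta, 1)) / dx**2
expected = -2.0 * theta_xx * dx
```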
Linear basis models and function parameterizations

Outline

Basic notation
Probabilistic modeling
Optimization, cost function and regularization
Linear basis models and function parameterizations
Basics
Function parameterizations with linear basis models
Kernel-based parameterization
Spatial transformations
Derivative approximations
Basics for parameterization
Many problems need optimizing over a function, e.g. non-linear registration
(transformations) or bias correction (bias field)
Discretizing the Euler-Lagrange equation at every point is an option, but something
with fewer parameters would be very useful
Function parameterization for optimization: a few parameters to describe the function
In finite vector spaces

    v = a₁b₁ + a₂b₂ + · · · + a_d b_d = Ba

    bᵢ are the basis vectors and aᵢ the coefficients.
    If the bᵢ are orthogonal, i.e. bᵢᵀbⱼ = 0, ∀i ≠ j, then aᵢ = vᵀbᵢ / ∥bᵢ∥².
    If not, then Ordinary Least Squares (OLS) regression must be performed

        arg min_a ∥v − Ba∥₂²  ⇒  a = (BᵀB)⁻¹Bᵀv

For known bᵢ, a can serve as a parameterization of v
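The OLS recovery of coefficients can be sketched directly in NumPy (the basis size and coefficients below are invented; `lstsq` is used as a numerically stable alternative to forming (BᵀB)⁻¹Bᵀv explicitly):

```python
import numpy as np

rng = np.random.default_rng(3)

# A non-orthogonal basis B (columns b_i) and a vector v built from known coefficients.
B = rng.standard_normal((10, 3))
a_true = np.array([2.0, -1.0, 0.5])
v = B @ a_true

# Since the b_i are not orthogonal, recover the coefficients by OLS.
a_lstsq, *_ = np.linalg.lstsq(B, v, rcond=None)

# The closed form a = (BᵀB)⁻¹ Bᵀ v gives the same answer.
a_normal = np.linalg.solve(B.T @ B, B.T @ v)
```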

Function parameterizations with linear basis models

Global parameterizations

    f(x) = Σ_{i=1}^{d} aᵢ bᵢ(x)

where bᵢ(x) are smooth basis functions.

    The parameterization is a
    To determine a, the space is discretized and a B matrix is formed.
    Note that this parameterization cannot represent all functions.
    Examples:
        polynomials: bias-field correction in MRI
        splines: non-linear registration

Kernel-based parameterization

    f(x) = Σ_{i=1}^{d} aᵢ K(x, xᵢ)

where xᵢ are the control points and K(x, xᵢ) is a kernel function

    Radial basis functions are often used: K(x, xᵢ) = K(∥x − xᵢ∥₂)
    Popular radial basis functions:
        Gaussian
            K(∥x − xᵢ∥₂) = exp(−∥x − xᵢ∥₂² / σ²)
        Thin-plate spline
            K(∥x − xᵢ∥₂) = ∥x − xᵢ∥₂² ln(∥x − xᵢ∥₂)
    Examples: landmark-based registration, linear/non-linear registration, kernel
    density estimation
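A kernel-based parameterization with Gaussian radial basis functions can be sketched in 1-D (the control points, target values and σ are invented for the example): solving a small linear system at the control points yields coefficients aᵢ, and the resulting f(x) reproduces the data exactly at those points.

```python
import numpy as np

def gaussian_kernel(r, sigma):
    """Gaussian radial basis function K(r) = exp(−r² / σ²)."""
    return np.exp(-(r ** 2) / sigma ** 2)

# Control points x_i and values to interpolate.
xi = np.linspace(0.0, 4.0, 9)
fi = np.sin(xi)
sigma = 0.5

# Solve K a = f so that f(x_j) = Σ_i a_i K(x_j, x_i) holds at the control points.
K = gaussian_kernel(np.abs(xi[:, None] - xi[None, :]), sigma)
a = np.linalg.solve(K, fi)

def f(x):
    """Kernel-based parameterization f(x) = Σ_i a_i K(x, x_i)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return gaussian_kernel(np.abs(x[:, None] - xi[None, :]), sigma) @ a

fitted = f(xi)   # reproduces the data at the control points
```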
Spatial transformations

Outline

Basic notation
Probabilistic modeling
Optimization, cost function and regularization
Linear basis models and function parameterizations
Spatial transformations
Linear transformations
Non-linear transformations
Transformation related identities
Derivative approximations
Linear transformations
    T(x) = Tx

Used in linear image registration

    Rigid
        3 dof in 2D - 2 translation and 1 rotation
        6 dof in 3D - 3 translation and 3 rotation
        Intra-subject registration, multi-modal intra-subject registration
    Similarity transformation
        4 dof in 2D - rigid + scale
        7 dof in 3D - rigid + scale
        Used for coarse alignment in inter-subject registration. Initialization for non-linear registration
    Affine transformation
        6 dof in 2D
        12 dof in 3D
        Often not used with full dof
        Instead, a 9 dof transformation (3 translation + 3 rotation + 3 scale) may be preferred:

            T(x) = S Rx(θ) Ry(ϕ) Rz(γ) x + t

        with scale S = diag(sx, sy, sz), translation t = [tx, ty, tz]ᵀ, and rotations

            Rx(θ) = ⎡1    0       0   ⎤    Ry(ϕ) = ⎡ cos ϕ  0  sin ϕ⎤    Rz(γ) = ⎡ cos γ  sin γ  0⎤
                    ⎢0  cos θ  −sin θ ⎥            ⎢   0    1    0  ⎥            ⎢−sin γ  cos γ  0⎥
                    ⎣0  sin θ   cos θ ⎦            ⎣−sin ϕ  0  cos ϕ⎦            ⎣   0      0    1⎦
                    (around x-axis)               (around y-axis)               (around z-axis)
Spatial transformations

Non-linear transformations

    T(x) = x + u(x)

where u(x) is the displacement field

Used in non-linear registration.

    Different models are used.
        Linear basis function models - additive
        More advanced techniques based on composition of transformations

            T(x) = Tₙ ∘ Tₙ₋₁ ∘ · · · ∘ T₁(x)

        where ∘ is basic function composition
            Allows modeling diffeomorphisms
            see Computational Anatomy

Transformation related identities

    T(x) = [T₁(x), T₂(x), T₃(x)]ᵀ

Jacobian

    J = ⎡ ∂T₁/∂x₁  ∂T₁/∂x₂  ∂T₁/∂x₃ ⎤
        ⎢ ∂T₂/∂x₁  ∂T₂/∂x₂  ∂T₂/∂x₃ ⎥
        ⎣ ∂T₃/∂x₁  ∂T₃/∂x₂  ∂T₃/∂x₃ ⎦

    The Jacobian determinant quantifies local volumetric change.
    det(J) = 1: no change, det(J) < 1: compression, det(J) > 1: expansion
    Divergence: ∇ · T - the trace of the Jacobian.
    Gives information about the amount of compression and expansion as well.
    Curl: ∇ × T - information on the amount of infinitesimal rotation
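These identities can be computed for any smooth transformation via finite differences. A sketch with an invented displacement field (the specific u(x) and evaluation point are for illustration only): the helper estimates the 3×3 Jacobian column by column, from which the determinant and divergence follow.

```python
import numpy as np

def jacobian(T, x, h=1e-5):
    """3×3 Jacobian J_ij = ∂T_i/∂x_j estimated by central finite differences."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        J[:, j] = (T(x + e) - T(x - e)) / (2.0 * h)
    return J

# A smooth non-linear transformation: identity plus a small displacement field u(x).
def T(x):
    u = 0.1 * np.array([np.sin(x[1]), np.cos(x[2]), x[0] ** 2])
    return x + u

J = jacobian(T, [0.3, 0.2, 0.1])
det_J = np.linalg.det(J)   # < 1 here: slight local compression
div_T = np.trace(J)        # divergence of T = trace of the Jacobian
```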
Derivative approximations

Finite difference approximations are used most often

First order derivative at a grid point x₀ with grid spacing ∆x

    df/dx |_{x₀} ≈ (f(x₀ + ∆x) − f(x₀)) / ∆x           (forward difference)
                 ≈ (f(x₀) − f(x₀ − ∆x)) / ∆x           (backward difference)
                 ≈ (f(x₀ + ∆x) − f(x₀ − ∆x)) / (2∆x)   (central difference)

Second order derivative at a grid point x₀ with grid spacing ∆x

    d²f/dx² |_{x₀} ≈ (f(x₀ + ∆x) − 2f(x₀) + f(x₀ − ∆x)) / ∆x²

Many different approximations exist
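The approximations above are easy to compare against analytic derivatives. A sketch using f = sin (the test point and spacing are invented): the central difference is second-order accurate, so its error is much smaller than the one-sided versions at the same ∆x.

```python
import numpy as np

f = np.sin          # test function with known derivatives cos and −sin
x0, dx = 1.0, 1e-3

forward = (f(x0 + dx) - f(x0)) / dx
backward = (f(x0) - f(x0 - dx)) / dx
central = (f(x0 + dx) - f(x0 - dx)) / (2.0 * dx)
second = (f(x0 + dx) - 2.0 * f(x0) + f(x0 - dx)) / dx**2

err_forward = abs(forward - np.cos(x0))    # O(∆x)
err_central = abs(central - np.cos(x0))    # O(∆x²)
err_second = abs(second - (-np.sin(x0)))   # O(∆x²)
```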
