2018 JB CompactVersion
Jean-Baptiste Thomas
HABILITATION À DIRIGER
DES RECHERCHES
to obtain the title of
Defended by
Jean-Baptiste Thomas
Maître de Conférences, Section CNU 61
Jury :
Forewords
This manuscript is intended to put my research on spectral filter array imaging
in perspective. To the reader who wishes to learn more about me than about my
research, I recommend starting with Chapters 6 and 7, where an overview of
my research and my curriculum vitae are presented. Publications and funding schemes
are detailed there, as well as the names of my collaborators. Taken in order, the
Chapters will provide a scientific introduction to my contributions and to the field,
which should be interesting to most readers. I tried to keep the core simple
in order to make this document accessible to a wide audience, e.g. students first
reading on this topic.
Contents
1 Introduction 1
1.1 Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Multispectral imaging . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Scientific publications on multispectral imaging . . . . . . . . . . 5
1.4 Spectral filter arrays . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 Overview on related research on this technology . . . . . . . . . . . . 8
1.6 Overview on scientic contributions related to this manuscript . . . . 9
1.7 Structure of this manuscript . . . . . . . . . . . . . . . . . . . . . . . 10
2 Imaging pipeline 11
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.1 Definition . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.2 Pipeline components . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.3 Pipeline outputs . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3 Article . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3 Sensor prototyping 17
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2.1 Historical background . . . . . . . . . . . . . . . . . . . . . . 17
3.2.2 Analysis on sensitivities . . . . . . . . . . . . . . . . . . . . . 18
3.2.3 How to design a sensor? . . . . . . . . . . . . . . . . . . . . . 19
3.3 Article . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
4 Illumination 21
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.2 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.3 Article . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
7 Curriculum Vitae 41
7.1 Situation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
7.2 Synopsis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
7.3 Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
7.4 Scientific History . . . . . . . . . . . . . . . . . . . . . . . . . 43
7.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
7.6 Teachings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
7.6.1 Courses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
7.6.2 Hours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.6.3 Responsibilities . . . . . . . . . . . . . . . . . . . . . . . . 45
7.7 Supervisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.7.1 Post Docs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.7.2 PhD students . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.7.3 Master students . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.7.4 Other . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.8 Projects and funding . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.8.1 MUVApp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.8.2 EXIST and CISTERN . . . . . . . . . . . . . . . . . . . . . . 47
7.8.3 OFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.8.4 CNRS-INS2I-JCJC-2017 MOSAIC . . . . . . . . . . . . . . . 47
7.8.5 AURORA 2015 . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.8.6 PARI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.8.7 BQR PRES 2014 . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.8.8 BQR 2012 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.8.9 Hypercept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.8.10 COSCH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Bibliography 51
Chapter 1
Introduction
Contents
1.1 Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Multispectral imaging . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Scientific publications on multispectral imaging . . . . . . . . . . 5
1.4 Spectral filter arrays . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 Overview on related research on this technology . . . . . . . . . . 8
1.6 Overview on scientific contributions related to this manuscript . . 9
1.7 Structure of this manuscript . . . . . . . . . . . . . . . . . . . . 10
1.1 Context
The core of the research presented in this manuscript was developed at Le2i,
Laboratoire d'Electronique, Informatique et Image (UMR CNRS 6306, then FRE
2005), at the UFR Sciences et Techniques of Université de Bourgogne, then Université de
Bourgogne Franche-Comté. The period considered spans the seven years between 2010
and 2017. What is presented is a reduced set amongst several research activities;
see Chapter 6 for a more comprehensive overview.
research staff, and I co-directed 4 PhDs and had 2 post-docs working with me,
who are also listed in Chapter 7.
The work at Le2i has been strengthened by specific interactions with two notewor-
thy academic partners, namely the Norwegian Colour and Visual Computing Labo-
ratory (CVCL) at NTNU in Norway, and the Images and Visual Representation Lab
(IVRL) at EPFL in Switzerland. The latter collaboration was made possible by a
Délégation CNRS in 2015-16. The former is a long-standing collaboration developed
through several research stays, including a four-month stay in fall 2012 (made possible
by capitalisation of teaching hours) and a Mise en détachement since fall 2016,
which continues until fall 2019.
Please see the next Chapters, and in particular Chapter 7, for more details on the con-
text. Table 1.1 is a timeline description of the research presented in this manuscript.
Imaging technology in the visible range is related to several research fields and,
in my opinion, is essentially trans-disciplinary. An image is a representation of the
world, and so its capture, processing and essence can be considered from different
perspectives: acquisition itself depends on physics and electronics; image pro-
cessing is related to signal theory; image understanding may depend on cognitive
psychology; tools to handle images are usually linked to computer science; models
may be based on applied mathematics, physics, or computer graphics and deep learn-
ing. Extensions can also be found in design and graphics, and many applications
could be considered from the point of view of sociology, or at least the Humanities in the
large sense. Although the colour and spectral imaging field is a research niche, it
is very rich in diversity of backgrounds and perspectives. Nevertheless, this has an
impact on the size of the community and thus on its visibility. I will develop this
aspect in relation to publications later, in Section 1.3.
Table 1.1: Gantt chart putting into perspective projects, supervisions and publications directly related to SFA in a timeline. Numbers of publications refer
to Chapter 6 labels. A similar table with comprehensive research content is available in Chapter 7, where the reader will find a description of the projects
and more information. This table was made in early June 2018.
[Table 1.1, "7 years on SFA", covers 2011-2018 by quarter. Its rows are: the projects EXIST, CISTERN, OFS, BQR-PRES, PARI 1, PARI 2, Hypercept, MOSAIC and MUVApp; the post-docs of K. Ansari and P.-J. Lapray (both 100% complete); the stay at EPFL (100% complete); the PhDs of X. Wang (100% complete) and H. Ahmad (90% complete); the stays at NTNU (100% and 66% complete); and the related journal and conference publications, labelled as in Chapter 6.]
$f(x) = \int s(x, \lambda)\, c(\lambda)\, d\lambda$    (1.1)
If we consider the hypothesis of diffuse materials and light source, then the radiance
is the contribution of the global spectral power distribution of the illumination, e(λ), and
the spectral reflectance, r(x, λ), of the surface, as in Eq. 1.2. This equation is the basis
for many spectral reconstruction methods, which aim at the reconstruction of r(x, λ)
from f(x). This is an ill-posed problem, but several assumptions, such as smoothness of
the spectral reflectance and sometimes uni-modality of the sensitivities, permit computing a
good approximation. This also requires knowing e(λ) and c(λ).

$f(x) = \int r(x, \lambda)\, e(\lambda)\, c(\lambda)\, d\lambda$    (1.2)
If the diffuse material hypothesis does not hold, as in most cases, then one may use
a dichromatic reflectance model, which considers a specular contribution, σ, from the
illumination in addition to the diffuse component, δ, as in Eq. 1.3. The dichromatic
model was defined by Shafer [Shafer 1985] for colour images and generalised to the
spectral case by Tominaga and Wandell [Tominaga 1989]. This equation is the basis for
computer vision methods that aim at separating object surface properties from illumination
and shadows in the image. In this document it may be used in Chapter 4 to estimate
the illumination, from highlights for instance.

$f(x) = \delta(x) \int r(x, \lambda)\, e(\lambda)\, c(\lambda)\, d\lambda + \sigma(x) \int e(\lambda)\, c(\lambda)\, d\lambda$    (1.3)
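The separation idea can be illustrated numerically: under the dichromatic model, the chromaticity of a pixel with a strong specular component moves toward the chromaticity of the illuminant term, which is what highlight-based illuminant estimation exploits. All spectra and sensitivities below are invented for illustration.

```python
import numpy as np

# Toy discretisation of the dichromatic model of Eq. 1.3.
wl = np.linspace(400, 700, 31)
centers = np.array([450.0, 550.0, 650.0])      # 3 hypothetical channels
C = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 30.0) ** 2)

e = 1.0 + 0.5 * (wl - 400) / 300               # tilted illuminant SPD
r = np.exp(-0.5 * ((wl - 480) / 40.0) ** 2)    # bluish surface reflectance

body = C @ (r * e)                             # diffuse integral, per channel
spec = C @ e                                   # specular integral, per channel

def dichromatic(delta, sigma):
    """Sensor response f(x) = delta * body + sigma * spec (Eq. 1.3)."""
    return delta * body + sigma * spec

matte = dichromatic(1.0, 0.0)
highlight = dichromatic(0.6, 0.8)

# Chromaticity = response normalised by its sum. A highlight's
# chromaticity lies between the body and illuminant chromaticities,
# hence closer to the illuminant than a matte pixel is.
chrom = lambda v: v / v.sum()
d_matte = np.abs(chrom(matte) - chrom(spec)).sum()
d_high = np.abs(chrom(highlight) - chrom(spec)).sum()
print(d_high < d_matte)   # highlight is closer to illuminant chromaticity
```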
Those equations are usually enough to understand most of the literature and sim-
ulations about multispectral imaging. In some cases, e(λ) would be e(x, λ) to account
for spatial variation of illumination and shadows. I have not specifically addressed this
aspect yet. We note that there is no fluorescence involved in the model, and I do not
¹ In colour images, the three sensitivities are often defined for colour estimation and can vary while
they still stand for Red, Green and Blue.
consider it in the following. We also note that there is no subsurface scattering involved in the
model, and I do not consider it in the following. Those aspects are tremendously im-
portant, and I do not discard them lightly in this manuscript. Some applications should
consider and incorporate a more complex model.
The spectral nature of illumination and camera sensitivities is very important, and
a system calibration is often required to use multispectral imaging, as well as control
of the illumination. A vast body of literature addresses those topics; readers may start
their review with the recent book from M. Kriss et al. [Kriss 2015], and then relate to
the subsequent literature. Note also two of the major handbooks of the colour imaging
field [Sharma 2002, Lee 2005], which will provide useful insights.
• time resolution, while performing a sequential acquisition that will reduce the num-
ber of frames per second and generate potential needs for pixel registration,
It is noteworthy that with the acceptance of colour images for computer vision
applications, many experts in colour imaging turned to image processing or computer
vision journals. Computational imaging has been a growing field, and we have seen
the creation of new journals in recent years, such as the IEEE Transactions on Computational
Imaging and the MDPI Journal of Imaging, both created in 2015, and the IS&T Journal of Perceptual
Imaging, created in 2017.
It is also to be noted that, not unlike colour imaging 20 years ago, multispectral
imaging is becoming increasingly accepted and used by the computer vision and robotics
communities; in particular, articles are published around the specific case of RGB-NIR
imaging or case studies based on existing commercial multispectral sensors.
In support of this, we can observe in Figure 1.1 the fields related to the keyword
multispectral imaging in the Web of Science interface. We can note the diversity of
applications and the traditional scientific disciplines that are concerned with the field. I
should mention that many of these publications are more concerned with remote sensing.
Figure 1.1: Graph representing publications by field that are concerned by the keyword
multispectral imaging, from Web of Science, accessed on May 30, 2018.
Figure 1.2: Graph representing the number of publications with the keyword multispectral
imaging, from Web of Science, accessed on May 30, 2018.
In Figure 1.2, we can observe the increased number of publications related to this
field, in particular after 2011. This is partly due to the progress of the acquisition
technology, which enables this imaging modality for diverse applications
(so the communications are not only based on its development). This is also due to the
increasing number of scientific publications worldwide, linked to the trend toward quantity
and to electronic publishing, but also to the emergence of new, very active countries
(e.g. China) in our research fields.
It is interesting to compare the different fields that show up when we refer to hy-
perspectral imaging (Figure 1.4), where remote sensing and spectroscopy become
prominent. It is also informative to refer to colour imaging (Figure 1.3), where computer
science becomes prominent.
Conferences in this field are traditionally focused on colour imaging (e.g. Colour and
Imaging Conference - CIC, Electronic Imaging - EI) or image processing (e.g. ICIP).
Figure 1.3: Graph representing publications by field that are concerned by the keyword
colour imaging, from Web of Science, accessed on May 30, 2018.
Figure 1.4: Graph representing publications by field that are concerned by the keyword
hyperspectral imaging, from Web of Science, accessed on May 30, 2018.
Some permeability is observed with the computer graphics community (e.g. SIG-
GRAPH, Eurographics), with computer vision (e.g. CVPR), remote sensing (e.g.
ICASS), and so on. Smaller focused workshops are related to diverse aspects of the
field: the Computational Colour Imaging Workshop (CCIW), Multispectral Colour Science
(MCS), Colour and Multispectral Imaging (CoMI) and the Colour and Visual Computing
Symposium (CVCS).
Figure 1.5: Graph representing a typical instance of the spectral filter array technology.
An example is given in Figure 1.5, where we can observe the principal components of an SFA:
the filters, a monochrome sensor and a process to reconstruct the full-resolution image.
Advantages and disadvantages lie along this trade-off: we can capture multispectral
images at video rates without mechanical or optical registration problems, but we are
limited in the number of bands in order to preserve a useful spatial resolution. We are
also constrained in sensitivities to ensure that enough photons are integrated by the
sensor. The main advantage is the easy encapsulation into a classical imaging pipeline
with a rather simple optical set-up and a single solid-state sensor. A direct competing
technology would be light-field cameras, which would have a simpler spectral filtering
process due to the size of the filters, but a more complex optical set-up and paths [Lam 2015].
² https://www.ximea.com/
Different instances assumed different starting hypotheses. Monno et al. [Monno 2015]
targeted colorimetric imaging and relighting with their five-band instantiation of
SFA. Hirakawa targeted spectral reconstruction with his Fourier sensor [Jia 2016,
Hirakawa 2017]. We targeted general computer vision with our visible and NIR sen-
sor [Thomas 2016b]. IMEC developed an approximated hyperspectral imager by using a
number of narrow bands either in the visible or in the NIR [IMEC 2018].
I should mention, in addition, SFA cameras for short-wave infrared (SWIR) imag-
ing [Kutteruf 2014, Kanaev 2015]. Kanaev and his team also defined the imaging
pipeline, including demosaicing and super-resolution, for still images and for videos within
the SWIR. Sensors based on arrays of polarimeters, and the associated processing [Andreou 2002,
LeMaster 2014], have been developed too.
If I should summarise and structure our research contributions during that period, I
would cluster them into three aspects: contributions to colour imaging and processing,
contributions to prototypes and experimental data generation, and contributions to compu-
tational spectral imaging (demosaicing, white balance, etc.). In fact, exploratory works
and experimental data acquisition are a very important part of this research. This also
explains the diversity of methods and models that were used: I focused on the type of
data, not on a specific model.
The resulting object is a group of techniques and data that make it possible to propose the
use of multispectral imaging in general computer vision tasks. Indeed, I provide proto-
type instances and experimental data that enable real-time acquisition simulations and
applications. I also provide tools to handle those data until they can be understood
and handled by the computer vision community, although there is still a strong need for
standardisation. I provide in parallel a link to the visualisation of those data, although
I remain so far in a colorimetric image research paradigm on this aspect. An analysis of
the weaknesses and strengths of the different research items is provided along the three
next Chapters. An analysis of the limits of the overall research, and perspectives, are
discussed in Chapter 5.
Chapter 2
Imaging pipeline
Contents
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.2 Pipeline components . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.3 Pipeline outputs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3 Article . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1 Introduction
This article was written after we gained experience with the technology, and aims at giving
an overview of the imaging pipeline. It also aims at solving the problem of unbalanced
spectral sensitivities identified in [Lapray 2017c] by means of high dynamic range
(HDR) imaging. The quality estimation is original, and we identify the need for new
research on HDR spectral image quality.
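As a rough illustration of the HDR idea, the sketch below fuses several exposures of the same radiances by discarding clipped samples and weighting the rest; this is a generic multi-exposure fusion scheme, not the algorithm of the article, and all numbers are invented.

```python
import numpy as np

radiance = np.array([0.02, 0.2, 1.5])          # three scene radiances (a.u.)
exposures = np.array([1.0, 4.0, 16.0])         # exposure times (a.u.)

def capture(radiance, t, full_well=1.0):
    """Linear sensor with clipping at full well."""
    return np.clip(radiance * t / exposures.max(), 0.0, full_well)

# Fuse: distrust clipped pixels, divide out the exposure, then average.
est = np.zeros_like(radiance)
wsum = np.zeros_like(radiance)
for t in exposures:
    z = capture(radiance, t)
    w = np.where(z < 0.99, z + 1e-3, 0.0)      # zero weight when clipped
    est += w * z * exposures.max() / t
    wsum += w
est /= np.maximum(wsum, 1e-12)
print(np.max(np.abs(est - radiance)))          # near zero in this noiseless toy
```

In an SFA context the same mechanism lets a weakly sensitive channel borrow a longer exposure, which is one way to rebalance very unequal spectral sensitivities.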
It is published in Sensors, MDPI, which is a fast-track, online open access
journal. The journal has an Impact Factor of 2.677 (2016) and a 5-Year Impact Factor of
2.964 (2016). The H-index of this journal by Scimago is 104, and it is in the Q2 quartile
for electrical and electronic engineering. The JCR category by Web of Science is Q1 for
instrumentation.
This article has been well read according to the statistics shown by the journal in
Figure 2.1. It has been self-cited twice but is very recent.
2.2 Discussion
2.2.1 Denition
We define the imaging pipeline for SFA cameras. As shown in Figure 2.2, the pipeline is
essentially similar to the CFA pipeline. Thus, it could be, in principle, studied simi-
larly with respect to signal and image processing theories. However, good care should be
taken with several hypotheses related to the nature of the images: they are not, in general,
colour images, nor large-band intensity images related to luminance, i.e. panchromatic.
An SFA camera, similarly to a CFA, samples the image domain $\Omega \subset \mathbb{R}^2$ by the use of
one of the different sensitivities $c(x)$ at each pixel; then $M^c \subset \Omega$ is the subset of the
image domain that is covered by the sensor mosaic for channel c. We use a similar notation as in
[Thomas 2018d]. In each pixel, only one of the channels exists, so $\bigcap_c M^c = \emptyset$. In
general, all pixels are represented in one channel, such that $\bigcup_c M^c = \Omega$. The pixel values
of the mosaiced image are denoted $f^c_M(x)$ for $x \in M^c$.
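The notation can be made concrete with a small sketch: a periodic pattern assigns exactly one channel to each pixel, so the subsets M^c partition the image domain. The 2x4 pattern of 8 bands below is invented for illustration.

```python
import numpy as np

H, W = 6, 8
pattern = np.array([[0, 1, 2, 3],
                    [4, 5, 6, 7]])
ph, pw = pattern.shape
# Tile the pattern over the image domain Omega.
mosaic = pattern[np.arange(H)[:, None] % ph, np.arange(W)[None, :] % pw]

masks = {c: (mosaic == c) for c in range(8)}   # M^c as boolean masks

# Pairwise intersections are empty, and the union covers Omega.
inter_empty = all(not (masks[a] & masks[b]).any()
                  for a in masks for b in masks if a < b)
union_full = np.logical_or.reduce(list(masks.values())).all()
print(inter_empty, union_full)                 # True True
```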
Figure 2.1: Graph representing access to this article on the publisher's website on May 30,
2018.
Figure 2.2: Graph representing a typical instance of the spectral filter array imaging
pipeline.
The task of demosaicing is to solve the problem of the spatial shift between the
spectral samples. In other words, we want to find the values for all the pixels in
the image, $f^c(x)$ for $x \in \Omega$, such that $f^c(x) = f^c_M(x)$ for $x \in M^c$, and so recover
f(x) on the image domain. Although demosaicing is very important for colour imaging,
because of visualisation on displays and some standard storage formats, it is not
obvious that it is always needed for multispectral data. Indeed, it seems that, for in-
stance, texture parameters may be well retrieved from raw data rather than demosaiced
data [Losson 2015]. Nevertheless, if we consider that demosaicing is a registration cor-
rection in SFA, then it is important; and if we consider the measurement of a spectrum at
position x, then we need to agree on the best estimate of the set of multispectral values
f(x) for this specific location.
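A deliberately naive baseline makes the problem statement concrete: fill each channel by nearest-neighbour interpolation from its own samples of M^c. This is only a sketch to fix ideas; the methods discussed next are far better. The 2x2 pattern and test image are invented.

```python
import numpy as np

def demosaic_nearest(raw, mosaic, n_channels):
    """Each missing pixel of channel c copies its nearest sample in M^c."""
    H, W = raw.shape
    yy, xx = np.mgrid[0:H, 0:W]
    out = np.empty((n_channels, H, W), dtype=float)
    for c in range(n_channels):
        ys, xs = np.nonzero(mosaic == c)
        vals = raw[ys, xs]
        # Squared distance from every pixel to every sample of M^c.
        d2 = (yy[..., None] - ys) ** 2 + (xx[..., None] - xs) ** 2
        out[c] = vals[np.argmin(d2, axis=-1)]
    return out

# Toy scene: each channel is constant, so the constraint
# f^c(x) = f^c_M(x) on M^c extends exactly to the whole domain.
pattern = np.array([[0, 1], [2, 3]])
mosaic = np.tile(pattern, (3, 3))
truth = np.arange(4, dtype=float) * 10         # channel c has value 10c
raw = truth[mosaic]
cube = demosaic_nearest(raw, mosaic, 4)
print(np.abs(cube - truth[:, None, None]).max())   # 0.0
```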
We investigated median filtering for demosaicing raw SFA images [Wang 2013b]. Median filtering
provides a safe demosaicing in the sense that it can be designed not to introduce new
values in the image. However, it is quite slow to compute, and the visual quality and accuracy
of the resulting images were not very convincing in our simulations.
Discrete wavelet decomposition demosaicing was extended to SFA in [Wang 2013a]. The
underlying concept of demosaicing based on wavelet decomposition is that high-frequency
components are similar for all the bands at the native scale of decomposition. So, the
low frequency can be estimated from a more densely sampled band (or a panchromatic image
version), and the high-frequency component estimated at a downsampled scale level can
simply be transferred from one band to another. The results in our paper were not very
good because we used a band that was sampled as sparsely as the other ones for the low
frequencies. On the contrary, if used with a moxel arrangement such as the one from Monno
et al., or on the panchromatic image, as Mihoubi et al. did [Mihoubi 2017a], then
the hypothesis may be as reasonable as for the Bayer instance. One issue that would
remain is how good an SFA is at capturing high frequencies when a band occurs several
pixels away from its previous occurrence. This should be related to the image content
and perhaps to natural image statistics.
Xingbo Wang also developed a linear minimum mean square error (LMMSE) for-
mulation to demosaic SFA images in [Wang 2014a]. His PhD thesis contains extended
results [Wang 2016]. We improved LMMSE to N-LMMSE with Prakhar Amba and David
Alleysson in [Amba 2017a]. The recent PhD of Prakhar Amba also considers more learn-
ing methods to demosaic SFA [REF PRAKHAR PHD]. Those methods perform very
well and have the advantage of generalising to any arrangement of moxels. They are, how-
ever, constrained by the learning database, and it is not easy to predict their efficiency
on images that do not exhibit the same statistics.
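The spirit of such linear, learning-based demosaicing can be sketched as a per-phase least-squares regression from a raw neighbourhood to the full multispectral pixel. This is a generic sketch, not the exact LMMSE or N-LMMSE formulation of the cited works; the 2x2 pattern, patch size and training data are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
pattern = np.array([[0, 1], [2, 3]])
H = W = 32
K = 4
P = 5                                          # raw patch size (P x P)
r = P // 2

# Smooth training cube: 3x3 box-blurred white noise per channel.
x = rng.normal(size=(K, H, W))
cube = sum(np.roll(np.roll(x, dy, 1), dx, 2)
           for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

m = np.tile(pattern, (H // 2, W // 2))
raw = np.take_along_axis(cube, m[None], axis=0)[0]   # mosaiced image

# Collect (raw patch -> true multispectral pixel) pairs per mosaic phase.
samples = {}
for yy in range(r, H - r):
    for xx in range(r, W - r):
        phase = (yy % 2, xx % 2)
        X, Y = samples.setdefault(phase, ([], []))
        X.append(raw[yy - r:yy + r + 1, xx - r:xx + r + 1].ravel())
        Y.append(cube[:, yy, xx])

# One least-squares linear map per phase: the LMMSE spirit.
err = []
for phase, (X, Y) in samples.items():
    X, Y = np.array(X), np.array(Y)
    Wls, *_ = np.linalg.lstsq(X, Y, rcond=None)
    err.append(np.abs(X @ Wls - Y).mean())
print(max(err))
```

The learned maps are only as good as the statistics of the training images, which is exactly the limitation noted above.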
Several other teams have also addressed this problem. Monno et al. [Monno 2015]
developed several demosaicing methods optimised for their specific sensor. We presented a more
recent state of the art in [Amba 2017a], which contains references to the most recent works.
That being said, the best attempt at generalisation of demosaicing for SFA, in my opinion,
is developed by Sofiane Mihoubi et al. in their article [Mihoubi 2017a]. Indeed, the use
of a panchromatic image permits the development, extension and unification of most
frameworks and algorithms proposed in the literature for CFA to SFA. This work contains
the methods that should be used as benchmarks when possible.
An optimal pipeline should consider the joint optimisation of all the elements, as
stated by Li et al. [Li 2008] in their conclusion. These are probably the research papers
that we will observe in the next years, especially with the combined use of hyperspectral
database image acquisition and the use of deep learning methods. I develop this aspect
in Chapter 5.
The storage of multispectral images is a problem due to the large size of those
data. It usually calls for compression. However, in the case of SFA, the raw image is
not larger than a greylevel version, so storage of the raw image, augmented with information on
the camera, e.g. moxel arrangement and spectral sensitivities, may provide an efficient way to do
that. This implies that the decoder would be a little more complex: further research and
standardisation should discuss those aspects.
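A back-of-the-envelope computation illustrates the storage argument: a raw SFA frame has one value per pixel regardless of the number of bands, while the demosaiced cube scales with the band count. The resolution, band count and bit depth below are illustrative assumptions.

```python
width, height = 2048, 1088          # hypothetical sensor resolution
bands = 8
bytes_per_sample = 2                # 10-16 bit samples stored on 2 bytes

raw_bytes = width * height * bytes_per_sample
cube_bytes = raw_bytes * bands      # demosaiced cube: one plane per band
print(raw_bytes // 2**20, cube_bytes // 2**20)   # MiB: 4 vs 34
```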
It is to be noted that, at least in the visible range, spectral images were hardly
competitive versus colour or greylevel images in computer vision. This may be explained
by the fact that spatial resolution was not as good in spectral images as in greylevel
images, while algorithms were mostly based on gradient computation. This created an
implicit dominant role of spatial resolution, to the advantage of greylevel images. We
can also add that dimension reduction was mostly applied before processing, so some
of the advantages of spectral images were cancelled out versus a good-resolution image
in greylevel or in colour. This will probably change with the development of adequate
processing, in particular learning protocols. However, we already observe that the use
of an additional NIR channel has been accepted as valuable by the computer vision
community, e.g. RGB-N or VNIR imaging.
2.3 Article
The paper is here - as published.
Chapter 3
Sensor prototyping
Contents
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2.1 Historical background . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2.2 Analysis on sensitivities . . . . . . . . . . . . . . . . . . . . . . . . 18
3.2.3 How to design a sensor? . . . . . . . . . . . . . . . . . . . . . . . . 19
3.3 Article . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.1 Introduction
This article was written after we had realised the filters of our SFA prototypes, but before
full integration into a camera that could capture images. It contains the state of the art
for snapshot spectral imaging and defines practical instances of SFA. It is referred to in
the literature for the general definition of this technology and for showing one of the first
published implementations, but also for the state of the art.
It is published in Sensors, MDPI, which is a fast-track, online open access
journal. The journal has an Impact Factor of 2.677 (2016) and a 5-Year Impact Factor of
2.964 (2016). The H-index of this journal by Scimago is 104, and it is in the Q2 quartile
for electrical and electronic engineering. The JCR category by Web of Science is Q1 for
instrumentation.
This article has been very well read according to the statistics shown by the journal
in Figure 3.1. According to Google Scholar, it had been cited 65 times on May 30, 2018,
which is rather good for our field. Amongst the references, I cited it 11 times. It has also
been cited by the competing teams introduced in Chapter 1. It is to be read together
with a subsequent article published in the same journal [Thomas 2016b], which presents
the spectral characterisation of the final camera.
3.2 Discussion
3.2.1 Historical background
This Chapter considers the realisation of our prototypes. We must state that we were
pushed by the community to get real data to work on for SFA demosaicing when Xingbo
Wang started his PhD in 2011. Back then, there were no commercial solutions available
besides the first 4-band prototypes, e.g. the RGB-NIR one realised by Ocean Optics. Most
of the works were performed in simulations based on hyperspectral reflectance images.
This was a very reduced set of available data for us, because we needed images in both
the visible and the NIR. A time constraint also came from the project Open Food System,
in which a prototype sensor was a deliverable (see 7.8.3).
We considered several partners to help us with filter manufacturing; only SILIOS
Technologies could really master filter realisation at the level of a few pixels. However,
Figure 3.1: Graph representing access to this article on the publisher's website on May 30,
2018.
they could not realise visible and NIR filters on the same plate with the same technology.
We had to opt for a hybrid process: it was important for us that this prototype sensed
the NIR part of the spectrum, for both literature considerations and the project application.
Due to the difficulty of realisation and the uncertainty of the results, we decided to directly use the
state-of-the-art binary-tree method to design the arrangement of moxels as a general-
purpose instance. In fact, we realised three different sensor layouts, but only deeply characterised,
and published about, one of these instances. One of the three layouts is under a
patent process.
One criticism that we can raise here is that we should have spent more time in simula-
tion to find an optimal design before realisation; we come back to that in Chapter 5.
Nevertheless, due to this urgency, we could publish results quite rapidly, at the same time
as competing research teams. Thanks to that, we were invited to join the EU
projects CISTERN and EXIST (see 7.8.2). We could also demonstrate image
acquisition very early and show the proof of concept.
The number and shapes of the filters provide very different sensor features and
Figure 3.2: Graph representing the spectral sensitivities of the prototype sensor.
performance, often depending on the application. In general, we can say that very narrow
bands do not exhibit the best general performance, and that an amount of correlation or
overlap between the bands benefits most later computations. This statement may
be wrong in the case of very specific applications for which sensing a set of narrow parts
of the spectrum is useful, e.g. some biological measurements. However, notch filters are
not better either, especially because they would increase the contribution of noise to the
signal.
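The noise argument can be illustrated with a toy simulation: for the same scene and the same additive sensor noise, a narrow band integrates less energy than a broad one and therefore reaches a lower signal-to-noise ratio. The bandwidths and the noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 301)
radiance = np.full_like(wl, 1.0)               # flat scene radiance

def channel_snr(sigma_nm, noise_std=0.5, trials=10000):
    """SNR of one Gaussian band under additive read noise."""
    c = np.exp(-0.5 * ((wl - 550) / sigma_nm) ** 2)
    signal = (radiance * c).sum() * (wl[1] - wl[0])   # discrete Eq. 1.1
    reads = signal + rng.normal(0.0, noise_std, trials)
    return signal / reads.std()

narrow = channel_snr(sigma_nm=5.0)
broad = channel_snr(sigma_nm=40.0)
print(narrow < broad)                          # True: broader band, higher SNR
```

The same reasoning explains why notch filters, which remove energy from an otherwise broad band, do not improve matters either.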
We discussed the difficulty of the choice of filters, their layout and their number, for the very
general case, in the following perspectives:
• Filter shapes and number for spectral reconstruction [Ansari 2017, Wang 2014c,
Wang 2013c].
In fact, the difficulty of filter realisation makes the practical instances quite far from
the ideal case or from the simulations. As an example, measurements of the sensitivities of
our prototype sensor are shown in Figure 3.2, while simulations and filter measurements
are shown in Figure 10 of the following article.
We can observe that the filters are not of any ideal shape. In addition, we can
observe that there might be curves that are not uni-modal. This is true for most RGB-
NIR instances, for most commercial instances such as the IMEC sensors, and so on. Although
the general shape is not critical for later computation, the mix of very different spectral
components into a single channel may be a problem for applications, and specific
solutions have been considered. We considered the special case of our prototype and
proposed to demultiplex the visible and NIR components [Sadeghipoor 2016]. This problem
is generally addressed in the literature as visible-NIR separation and applied to RGB-NIR
sensors.
Due to the many different possibilities, an exhaustive search would be very difficult, and
it is hard to find a model that gives full freedom over all the pipeline elements.
However, we recall with Li et al. [Li 2008] that the pipeline must be optimised in
its entirety despite the difficulty; otherwise, local optimisations would only relate
to academic niches. A recent article in this direction proposed a solution for natural
scenes [Li 2018]. They observed an optimal spatio-spectral behaviour with 5-6 bands to
describe the visible part of the spectrum. I will return to this point in Chapter 5.
One question remains: should we try to achieve a universal design that respects general
natural image statistics, or define specific sensors for particular uses? In fact, it would
be easier to define an optimal SFA camera for a particular application where the cost
function is very well understood, or at least computable. However, the market mass of
each of those instances would probably not be sufficient for many of them to appear
on the market. A ubiquitous instance that permits several types of applications would
lead to cheap and robust options. It must, however, outperform other existing solutions,
and it would require tuning of the pipeline for each specific use. This is also further
addressed in Chapter 5.
3.3 Article
The paper is reproduced here, as published.
Chapter 4
Illumination
Contents
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.2 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.3 Article . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.1 Introduction
This article is the theoretical core of the PhD of Haris Ahmad Khan, and the results are
based on simulation. Although it addresses general spectral imaging, the concept is very
useful in the case of SFA, where the camera may be used in uncontrolled illumination
environments. The thesis is still ongoing, so this is a work in progress, and maturity is
still required on this topic. In particular, we are currently investigating the concept in
practice for material identification. Further research must address spatial variation of
illuminations and shadows, or object interaction.
It is published in the Journal of Imaging Science and Technology from IS&T, which is a
focused journal for our community. The journal has an Impact Factor of 0.35 (2016) and
a 5-Year Impact Factor of 0.46 (2016). The H-index of this journal according to Scimago
is 37, and it is in the Q4 quartile for the topic of the article. The JCR category by Web of
Science is Q4 for Imaging Science & Photographic Technology.
This article is very recent, so any analysis of how it has been read is not yet relevant.
4.2 Discussion
According to Eq. 1.2, for each specific e(λ), f(x) has a good chance of being very different.
It is intuitive that, for most tasks, it is useful to get f(x) related to r(x, λ), with no
influence of e(λ). We propose that for any e_i(λ), there is a transform G_i^s that
maps the f_i(x) into a stable representation f_s(x), which can then be related to
r(x, λ) thanks to a calibration process. We write this in the very general case in Equations
4.1 and 4.2. In our research, the transform is a linear transform in order to investigate the
concept, but other formulations may be investigated in the future.

f_s(x) = G_i^s ∫ r(x, λ) e_i(λ) c(λ) dλ    (4.2)
We define this as spectral constancy [Khan 2018c]. This is very similar to computational
colour constancy, and similarly, a class of solutions assumes that the illumination
e_i(λ) is known. In practice, an illumination estimate in the sensor domain is enough
[Khan 2017c], and we show that a linear transform is accurate enough to greatly improve
the representation of spectral data [Khan 2018c]. We also show that a linear diagonal
transform, equivalent to a von Kries transform, is not enough for most practical choices
of c(λ), but benefits from a spectral adaptation transform. This latter concept
is similar to a chromatic adaptation transform [G. 2004], or to spectral sharpening
[Finlayson 1994].
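A minimal numerical sketch of this comparison (hypothetical Gaussian sensitivities, a synthetic illuminant and random reflectances; the actual filters and data of [Khan 2018c] differ): a full linear transform estimated by least squares, against a diagonal, von Kries-like scaling.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 31)                    # sampled wavelengths (nm)

def gauss(mu, sigma=40.0):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Hypothetical 8-band sensitivities c(lambda), with overlapping bands
C = np.stack([gauss(mu) for mu in np.linspace(420, 680, 8)])   # (8, 31)

e_canon = np.ones_like(wl)                 # equal-energy reference light
e_i = (wl / 550.0) ** 2                    # a reddish test illuminant

R = rng.uniform(0.0, 1.0, size=(200, 31))  # synthetic reflectances r(x, lambda)

F_canon = R @ (C * e_canon).T              # sensor responses under the reference
F_i = R @ (C * e_i).T                      # sensor responses under the test light

# Full spectral adaptation transform: least-squares solution of F_i G = F_canon
G_full = np.linalg.lstsq(F_i, F_canon, rcond=None)[0]
# Diagonal, von Kries-like transform: one scaling factor per channel
G_diag = np.diag(F_canon.mean(axis=0) / F_i.mean(axis=0))

err_full = np.mean((F_i @ G_full - F_canon) ** 2)
err_diag = np.mean((F_i @ G_diag - F_canon) ** 2)
```

In this synthetic setup, with overlapping bands and a coloured illuminant, the full transform yields a lower residual than the diagonal one, which mirrors the conclusion above.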
The article in the next section addresses this problem by providing experimental results
in simulation. We used an evaluation based on spectral reconstruction, which permits
comparison between different sets of c(λ). We also show that the results depend strongly
on a good illuminant estimation, but we have not yet been able to quantify how well it
should be estimated to become useful. Validation of the simulation results in practice is
ongoing work. Upcoming communications will demonstrate how robust the proposal
is, and how good the illumination estimate must be for material identification based on
spectral reflectance.
Development of this concept through different techniques, from investigations based on
the dichromatic reflectance model to deep learning optimisation of the imaging pipeline,
will be addressed in the future.
4.3 Article
The paper is reproduced here - as published.
Haris Ahmad Khan, Jean-Baptiste Thomas, Jon Yngve Hardeberg and Olivier
Laligant. Spectral Adaptation Transform for Multispectral Constancy. Journal of
Imaging Science and Technology, vol. 62, no. 2, pages 20504-1–20504-12, March
2018.
Chapter 5
Perspectives and conclusion
Contents
5.1 Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.1.1 Use of improved imaging model and diverse modalities . . . . . . . 23
5.1.2 Unication of pipeline . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.1.3 Visualisation and image quality . . . . . . . . . . . . . . . . . . . . 25
5.1.4 Spectral video . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
5.1.5 Standardisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.2 Technology transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.2.1 General applications . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.2.2 Technical commercial products . . . . . . . . . . . . . . . . . . . . 28
5.3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.1 Research
This Section provides analysis and guidelines for further research in academia.
In another direction, similarities and differences with approaches used in remote sensing
would be very interesting to investigate. There are many possible extensions by revisiting
spectral unmixing under different priors for close-range multispectral imaging, e.g. the
separation between visible and NIR components.
Other additional modalities, such as polarisation, may be incorporated into the SFA
paradigm, which may then become filter array imaging. Databases of joint spectral
and polarimetric images have started to be developed [Lapray 2018]. 3D, or at least
depth, could also be considered.
Models to optimise are often based on the simulation of part of the pipeline or of
pipeline items. We observe several dominant approaches in our community. A good
part of the literature considers linear formulations to optimise the resulting image based
on simulations, in particular denoising, demosaicing and spectral sensitivities (e.g.
[Zahra Sadeghipoor 2012]). Some authors propose variational models to process
images between the raw data and the final colour image, in particular image enhancement,
tone-mapping, white balance, etc. (e.g. [Provenzi 2008]). Recent literature using deep
learning to optimise the colour imaging pipeline [Zhou 2018, Henz 2018] opens the door
to further studies and new optimisation frameworks.
It is hard to recommend the use of one model over another for further studies.
To me, the linear formulations permit good tracking of errors and make it easy to
decompose each step; however, they do not provide the best possible peak accuracy
in most cases, though they are robust and relatively fast. The variational methods
are very interesting in the sense that they provide a good model for images that takes
local edges into account, and they are embedded in a robust mathematical formulation
[Provenzi 2017]. The problem is that the inversion of the model is usually iterative and
thus may not be feasible in real time, though it generally converges fast; this may
or may not be a very important feature for pipeline optimisation. Deep learning
formulations may find unexpected optimal solutions or validate traditional hypotheses
empirically. They are promising in the sense that we may find interesting hypothetical
explanations by understanding how the solutions were identified. It is probable that all
those approaches should be used and compared.
It is not yet clear how the need for data will be handled. Deep learning techniques
require large amounts of data, and the community has been looking for such data for
decades. On the other hand, several databases have been published and made available
in recent years thanks to progress in optics and the commercialisation of hyperspectral
cameras, such as [HSI 2018].
Any optimal proposal will be limited by the choice of a cost function, and there is a
very important and difficult scientific challenge in this direction.
• If the images are converted to colour images, then we can use colour image quality
indexes.
• If the SFA individual bands are considered as greylevel images, there is no evidence
that those images follow the same assumptions or statistics as large-band or
photometric greylevel images. This needs further investigation.
• If we are to reconstruct spectra, then there are also problems in evaluating the
quality of the measure. Today's measures, from PSNR and GoFC to angular error,
are highly correlated and usually only provide relative indicators and no spatial
information. One very interesting direction for this problem is the recent work
of Noel Richard and his team to provide the mathematical formulations that can
be used to describe spectral images. However, those tools are not yet adapted to
multispectral images.
• Another indicator of quality for a pipeline is its results on a specific application.
However, good care has to be taken in this direction: final application evaluation
may also introduce bias. The classical way to annotate databases may not be the most
adequate: we annotated videos manually [Benezeth 2014], but could only label
what was visible; it is possible that we missed some information that the
machine may actually see.
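As a simple illustration of such scalar measures (synthetic spectra; a sketch of the usual definitions, not the exact formulations used in the cited works):

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio (dB) between two spectra scaled to [0, peak]."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def spectral_angle(ref, est):
    """Angular error (radians) between two spectra seen as vectors."""
    cos = np.dot(ref, est) / (np.linalg.norm(ref) * np.linalg.norm(est))
    return np.arccos(np.clip(cos, -1.0, 1.0))

ref = np.linspace(0.2, 0.8, 31)    # synthetic reference reflectance spectrum
est = ref + 0.01                   # estimate with a small uniform offset
```

A uniform offset degrades PSNR while hardly changing the angle, yet neither value says where in the image or along the spectrum the error occurs, which is the lack of spatial information mentioned above.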
f(x, t) = ∫ s(x, t, λ) c(λ) dλ    (5.1)
This equation can be considered from different perspectives. One way to understand
it is to consider spectral video, where time is another dimension that opens up real-time
applications and new classes of processing and algorithms. Another way to consider
this equation is that the frames at t − 1 and t + 1 can be used to better process the frame
at t and produce a better image. This is what we have done for HDR spectral imaging,
and what is done in recent smartphones and some cameras. There is, however, only
little academic literature that considers this aspect explicitly for SFA, so there is an open
space for research there.
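In discrete form, Eq. 5.1 is simply a weighted sum over sampled wavelengths, evaluated once per frame; a minimal sketch with hypothetical sensitivities and a synthetic time-varying radiance:

```python
import numpy as np

wl = np.linspace(400, 1000, 61)        # sampled wavelengths (nm)
dwl = wl[1] - wl[0]

# Hypothetical band sensitivity c(lambda): a Gaussian centred at 700 nm
c = np.exp(-0.5 * ((wl - 700.0) / 50.0) ** 2)

# Hypothetical flat scene radiance s(x, t, lambda) at one pixel,
# brightening over frames t-1, t, t+1 (one row per frame)
s = np.stack([np.full_like(wl, v) for v in (0.4, 0.5, 0.6)])

# f(x, t) = integral of s(x, t, lambda) c(lambda) dlambda, per frame
f = s @ c * dwl

# Neighbouring frames can then support the processing of frame t,
# e.g. a plain temporal average as the crudest possible exploitation
f_t_smoothed = f.mean()
```

Any temporal processing of the kind discussed above (HDR fusion, denoising, motion-compensated demosaicing) starts from such per-frame projections.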
The former proposal opens the problem of handling spectral video, and could be
developed in two directions: either a human is the final user, or the machine is. In
the latter case, the machine will need to extract features that help it to interact with its
environment and realise the task it has been made for. This will surely call for feature
extraction and classification, and should, not surprisingly, be treated from a machine
learning perspective in the next years.
If a human observer is the final destination, then the rendering of the scene should be
meaningful according to the human cognitive system. I would then suggest considering
it as an augmented reality problem, where the scene is rendered in a natural, pleasant
way, so that the person is not disturbed by the content, while at the same time good use
is made of the extra, pertinent information. This extra information should be added to
the natural content in a way that is not disturbing, but full of sense. It is a similar case
as above, but more complex due to the change of scene over time.
5.1.5 Standardisation
Standardisation of sensors aims "to define a unified method to measure, compute and
present specification parameters for cameras and image sensors used for machine vision
applications". This is also a form of quality assessment. There is an attempt from the
European Machine Vision Association [EMV 2018a] toward sensor standardisation.
EMVA 1288 in particular [EMV 2018b] is meant to be extended to SFA cameras and
sensors. Reaching an industrial level of quality, and communicating about it, is indeed
very important for the wide spread of these types of sensors.
Another standardisation attempt is to define multispectral image formats, which
is difficult due to the diversity of bands, the number of bands and the various
acquisition techniques. It is very important that the file formats also contain
additional information and not only pixel values. Division 8, TC 8-07, of
the CIE addresses this issue in the Technical Report Multispectral Image Formats
[CIE (International Commission on Illumination) 2017].
These initiatives are very important, although in general such collective works are
very time-consuming and sometimes not very well acknowledged in academic researcher
evaluations. They are of specific importance in this case because industry and users need
those guarantees and tools to accept further technology transfer.
Adequate algorithms that compete with or outperform the state of the art are needed,
but they must be embedded into demonstrators and not limited to simulation or
laboratory prototypes.
In addition, the available commercial solutions are not yet excellent, and users need
experts to help them develop solutions. So there is room for expertise transfer and
consultancy. This should improve in the next years.
Other applications to material appearance measurement are also very appealing.
5.3 Conclusion
This research demonstrates in practice a technology that previously existed only in
simulation. It comprises a set of camera prototypes, experimental data, algorithms and
methods that permit shaping or using the captured data for machine vision. Based on
this proof of concept, there is room to develop optimal or more adequate algorithms and
sensors. There are also interesting research directions and perspectives that could
generate activities for the coming years. Interestingly, we note that several aspects could
be shaped into a commercial product if the sensor industry manages to develop standards
and commercial offers, which it seems to be doing.
Chapter 6
Research summary and communications
Contents
6.1 Material appearance . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
6.2 Color reproduction - Displays . . . . . . . . . . . . . . . . . . . . . 31
6.3 Image acquisition - Camera . . . . . . . . . . . . . . . . . . . . . . 31
6.4 Visual aspects and quality . . . . . . . . . . . . . . . . . . . . . . . 32
6.5 Publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
6.5.1 Journals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
6.5.2 Conferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6.5.3 Book chapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
6.5.4 Noticeable talks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
I do believe that the simplicity of the SFA concept, coupled with illuminant
understanding, is the key to getting multispectral cameras out of the labs.
6.5 Publications
Publications are listed below. I refer to my Google Scholar profile
(https://scholar.google.fr/citations?user=MkzII3cAAAAJ&hl=fr) for indexes and
citations. There are several submitted works that do not appear at those links. You may
refer to my personal webpage (http://jbthomas.org/publications-2.html) for access to my
publications.
In Figure 6.2, we can observe the number of articles I published per year between 2007
and 2017. We can observe the different consequences of my actions: fewer publications
after I took my position at UB, an increasing number after I obtained the funding for
the OFS project, and increasing numbers after I was relieved of teaching duties. Given
the numbers involved and the revision time, this graph must be smoothed; the most
interesting information is that I demonstrate a scientific production that is increasing
thanks to experience and funding, coupled with my délégation CNRS and my
détachement.
Figure 6.2: Graph representing publications between 2007 and 2017. In blue conference
proceedings, in yellow journal articles.
6.5.1 Journals
Impact factors for 2016 are provided when applicable, together with the number of
citations from Google Scholar as of early June 2018.
5. Haris Ahmad Khan, Jean-Baptiste Thomas, Jon Yngve Hardeberg and Olivier Laligant. Illuminant estimation in multispectral imaging. J. Opt. Soc. Am. A, vol. 34, no. 7, pages 1085–1098, Jul 2017 [IF=1.621], cited 6 times.
7. Prakhar Amba, Jean Baptiste Thomas and David Alleysson. N-LMMSE Demosaicing for Spectral Filter Arrays. Journal of Imaging Science and Technology, vol. 61, no. 4, pages 40407-1–40407-11, 2017 [IF=0.35], cited 1 time.
10. Pierre-Jean Lapray, Jean-Baptiste Thomas, Pierre Gouton and Yassine Ruichek. Energy balance in Spectral Filter Array camera design. Journal of the European Optical Society-Rapid Publications, vol. 13, no. 1, jan 2017 [IF=0.975], cited 10 times.
11. Jean-Baptiste Thomas, Pierre-Jean Lapray, Pierre Gouton and Cédric Clerc. Spectral Characterization of a Prototype SFA Camera for Joint Visible and NIR Acquisition. Sensors, vol. 16, no. 7, page 993, 2016 [IF=2.667], cited 17 times.
12. Philippe Colantoni, Jean-Baptiste Thomas and Alain Trémeau. Sampling CIELAB color space with perceptual metrics. International Journal of Imaging and Robotics, vol. 16, no. 3, pages xxxx, 2016.
13. Marius Pedersen, Daniel Suazo and Jean-Baptiste Thomas. Seam-Based Edge Blending for Multi-Projection Systems. International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 9, no. 4, pages 11–26, 2016, cited 1 time.
14. Ping Zhao, Marius Pedersen, Jon Yngve Hardeberg and Jean-Baptiste Thomas. Measuring the Relative Image Contrast of Projection Displays. Journal of Imaging Science and Technology, vol. 59, no. 3, pages 30404-1–30404-13, 2015 [IF=0.35], cited 5 times.
15. Pierre-Jean Lapray, Xingbo Wang, Jean-Baptiste Thomas and Pierre Gouton. Multispectral Filter Arrays: Recent Advances and Practical Implementation. Sensors, vol. 14, no. 11, page 21626, 2014 [IF=2.667], cited 66 times.
16. Xingbo Wang, Jean-Baptiste Thomas, Jon Yngve Hardeberg and Pierre Gouton. Multispectral imaging: narrow or wide band filters? Journal of the International Colour Association, vol. 12, pages 44–51, 2014, cited 15 times.
17. Philippe Colantoni, Jean-Baptiste Thomas and Jon Y. Hardeberg. High-end colorimetric display characterization using an adaptive training set. Journal of the Society for Information Display, vol. 19, no. 8, pages 520–530, 2011 [IF=0.877], cited 10 times.
18. Jean-Baptiste Thomas, Arne Bakke and Jérémie Gerhardt. Spatial Nonuniformity of Color Features in Projection Displays: A Quantitative Analysis. Journal of Imaging Science and Technology, vol. 54, no. 3, pages 30403-1–30403-13, 2010 [IF=0.35], cited 4 times.
19. Jean-Baptiste Thomas, Philippe Colantoni, Jon Y. Hardeberg, Irene Foucherot and Pierre Gouton. A geometrical approach for inverting display color-characterization models. Journal of the Society for Information Display, vol. 16, no. 10, pages 1021–1031, 2008 [IF=0.877], cited 5 times.
20. Jean-Baptiste Thomas, Jon Y. Hardeberg, Irene Foucherot and Pierre Gouton. The PLVC display color characterization model revisited. Color Research & Application, vol. 33, no. 6, pages 449–460, 2008 [IF=0.798], cited 43 times.
6.5.2 Conferences
1. Davit Gigilashvili, Jean-Baptiste Thomas, Marius Pedersen and Jon Yngve Hardeberg. Behavioral investigation of visual appearance assessment. To appear in Color and Imaging Conference, vol. 2018, no. 2018, 2018.
4. Jean-Baptiste Thomas, Aurore Deniel and Jon Yngve Hardeberg. The Plastique collection: A set of resin objects for material appearance research. To appear in the XIV Conferenza del colore, Firenze, Italy, September 2018.
5. Haris Ahmad Khan, Jean-Baptiste Thomas and Jon Yngve Hardeberg. Towards
highlight based illuminant estimation in multispectral images. In Image and Signal
Processing: 8th International Conference, ICISP 2018, Lecture Notes in Computer
Science. Cham, June, 2018. Vol. 10884, pp. 517-525. Springer International
Publishing, 2018.
8. Prakhar Amba, Jean Baptiste Thomas and David Alleysson. N-LMMSE Demosaicing for Spectral Filter Arrays. Color and Imaging Conference, vol. 61, no. 4, pages 40407-1–40407-11, 2017.
9. Keivan Ansari, Jean-Baptiste Thomas and Pierre Gouton. Spectral band Selection Using a Genetic Algorithm Based Wiener Filter Estimation Method for Reconstruction of Munsell Spectral Data. Electronic Imaging, vol. 2017, no. 18, pages 190–193, 2017.
10. Vincent Whannou de Dravo, Jessica El Khoury, Jean Baptiste Thomas, Alamin Mansouri and Jon Yngve Hardeberg. An Adaptive Combination of Dark and Bright Channel Priors for Single Image Dehazing. Color and Imaging Conference, vol. 2017, no. 25, pages 226–234, 2017.
11. Haris Ahmad Khan, Jean-Baptiste Thomas and Jon Yngve Hardeberg. Analytical Survey of Highlight Detection in Color and Spectral Images. In Simone Bianco, Raimondo Schettini, Alain Trémeau and Shoji Tominaga, editors, Computational Color Imaging: 6th International Workshop, CCIW 2017, Milan, Italy, March 29-31, 2017, Proceedings, pages 197–208, Cham, 2017. Springer International Publishing.
12. Haris Ahmad Khan, Jean Baptiste Thomas and Jon Yngve Hardeberg. Multispectral Constancy Based on Spectral Adaptation Transform. In Puneet Sharma and Filippo Maria Bianchi, editors, Image Analysis: 20th Scandinavian Conference, SCIA 2017, Tromsø, Norway, June 12–14, 2017, Proceedings, Part II, pages 459–470, Cham, 2017. Springer International Publishing.
14. Sofiane Mihoubi, Benjamin Mathon, Jean-Baptiste Thomas, Olivier Losson and Ludovic Macaire. Illumination-robust multispectral demosaicing. In the sixth IEEE International Conference on Image Processing Theory, Tools and Applications (IPTA), Montreal, Canada, November 2017.
15. Jean-Baptiste Thomas, Jon Yngve Hardeberg and Gabriele Simone. Image Contrast Measure as a Gloss Material Descriptor. In Simone Bianco, Raimondo Schettini, Alain Trémeau and Shoji Tominaga, editors, Computational Color Imaging: 6th International Workshop, CCIW 2017, Milan, Italy, March 29-31, 2017, Proceedings, pages 233–245, Cham, 2017. Springer International Publishing.
16. Jean-Baptiste Thomas, Pierre-Jean Lapray and Pierre Gouton. HDR Imaging Pipeline for Spectral Filter Array Cameras. In Puneet Sharma and Filippo Maria Bianchi, editors, Image Analysis: 20th Scandinavian Conference, SCIA 2017, Tromsø, Norway, June 12–14, 2017, Proceedings, Part II, pages 401–412, Cham, 2017. Springer International Publishing.
17. Jessica El Khoury, Jean-Baptiste Thomas and Alamin Mansouri. A Color Image Database for Haze Model and Dehazing Methods Evaluation. In Alamin Mansouri, Fathallah Nouboud, Alain Chalifour, Driss Mammass, Jean Meunier and Abderrahim Elmoataz, editors, Image and Signal Processing: 7th International Conference, ICISP 2016, Trois-Rivières, QC, Canada, May 30 - June 1, 2016, Proceedings, pages 109–117, Cham, 2016. Springer International Publishing.
19. Jessica El Khoury, Jean-Baptiste Thomas and Alamin Mansouri. Haze and convergence models: Experimental comparison. In AIC 2015, Tokyo, Japan, May 2015.
21. Xingbo Wang, Philip J. Green, Jean-Baptiste Thomas, Jon Y. Hardeberg and Pierre Gouton. Evaluation of the Colorimetric Performance of Single-Sensor Image Acquisition Systems Employing Colour and Multispectral Filter Array. In Alain Trémeau, Raimondo Schettini and Shoji Tominaga, editors, Computational Color Imaging: 5th International Workshop, CCIW 2015, Saint Etienne, France, March 24-26, 2015, Proceedings, pages 181–191, Cham, 2015. Springer International Publishing.
22. Ping Zhao, Marius Pedersen, Jon Yngve Hardeberg and Jean-Baptiste Thomas. Measuring the Relative Image Contrast of Projection Displays. Color and Imaging Conference, vol. 2015, no. 1, pages 79–91, 2015.
23. Yannick Benezeth, Désiré Sidibé and Jean-Baptiste Thomas. Background subtraction with multispectral video sequences. In IEEE International Conference on Robotics and Automation workshop on Non-classical Cameras, Camera Networks and Omnidirectional Vision (OMNIVIS), pages 6p, 2014.
24. Jessica El Khoury, Jean-Baptiste Thomas and Mansouri Alamin. Does Dehazing Model Preserve Color Information? In Signal-Image Technology and Internet-Based Systems (SITIS), 2014 Tenth International Conference on, pages 606–613, Nov 2014.
25. Pierre-Jean Lapray, Jean-Baptiste Thomas and Pierre Gouton. A Multispectral Acquisition System using MSFAs. Color and Imaging Conference, vol. 2014, no. 2014, pages 97–102, 2014.
26. Xingbo Wang, Marius Pedersen and Jean-Baptiste Thomas. The influence of chromatic aberration on demosaicking. In Visual Information Processing (EUVIP), 2014 5th European Workshop on, pages 1–6, Dec 2014.
27. Ping Zhao, Marius Pedersen, Jon Yngve Hardeberg and Jean-Baptiste Thomas. Image registration for quality assessment of projection displays. In 2014 IEEE International Conference on Image Processing (ICIP), pages 3488–3492, Oct 2014.
28. Ping Zhao, Marius Pedersen, Jean-Baptiste Thomas and Jon Yngve Hardeberg. Perceptual Spatial Uniformity Assessment of Projection Displays with a Calibrated Camera. Color and Imaging Conference, vol. 2014, no. 2014, pages 159–164, 2014.
29. Jean-Baptiste Thomas, Philippe Colantoni and Alain Trémeau. On the Uniform Sampling of CIELAB Color Space and the Number of Discernible Colors. In Shoji Tominaga, Raimondo Schettini and Alain Trémeau, editors, Computational Color Imaging: 4th International Workshop, CCIW 2013, Chiba, Japan, March 3-5, 2013. Proceedings, pages 53–67, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg.
30. Hugues Peguillet, Jean-Baptiste Thomas, Pierre Gouton and Yassine Ruichek. Energy balance in single exposure multispectral sensors. In Colour and Visual Computing Symposium (CVCS), 2013, pages 1–6, Sept 2013.
31. Xingbo Wang, J.-B. Thomas, J.Y. Hardeberg and P. Gouton. Discrete wavelet transform based multispectral filter array demosaicking. In Colour and Visual Computing Symposium (CVCS), 2013, pages 1–6, Sept 2013.
32. Xingbo Wang, Jean-Baptiste Thomas, Jon Y. Hardeberg and Pierre Gouton. Median filtering in multispectral filter array demosaicking. In Proc. SPIE, volume 8660, pages 86600E–86600E-10, 2013.
33. Xingbo Wang, Jean-Baptiste Thomas, Jon Yngve Hardeberg and Pierre Gouton. A Study on the Impact of Spectral Characteristics of Filters on Multispectral Image Acquisition. In Sophie Wuerger, Lindsay MacDonald and Stephen Westland, editors, Proceedings of AIC Colour 2013, volume 4, pages 1765–1768, Gateshead, United Kingdom, July 2013.
34. Ping Zhao, Marius Pedersen, Jon Yngve Hardeberg and Jean-Baptiste Thomas. Camera-based measurement of relative image contrast in projection displays. In Visual Information Processing (EUVIP), 2013 4th European Workshop on, pages 112–117, June 2013.
35. Jean-Baptiste Thomas and Jérémie Gerhardt. Webcam based display calibration. Color and Imaging Conference, vol. 2012, no. 1, pages 82–87, 2012.
38. Jérémie Gerhardt and Jean-Baptiste Thomas. Toward an automatic color calibration for 3D displays. Color and Imaging Conference, vol. 2010, no. 1, pages 5–10, 2010.
40. Arne Magnus Bakke, Jean-Baptiste Thomas and Jeremie Gerhardt. Common assumptions in color characterization of projectors. In GCIS'09, volume 3, pages 50–55, 2009.
41. Philippe Colantoni and Jean-Baptiste Thomas. A Color Management Process for Real Time Color Reconstruction of Multispectral Images. In Arnt-Børre Salberg, Jon Yngve Hardeberg and Robert Jenssen, editors, Image Analysis: 16th Scandinavian Conference, SCIA 2009, Oslo, Norway, June 15-18, 2009. Proceedings, pages 128–137, Berlin, Heidelberg, 2009. Springer Berlin Heidelberg.
42. Jean-Baptiste Thomas and Arne Magnus Bakke. A Colorimetric Study of Spatial Uniformity in Projection Displays. In Alain Trémeau, Raimondo Schettini and Shoji Tominaga, editors, Computational Color Imaging: Second International Workshop, CCIW 2009, Saint-Etienne, France, March 26-27, 2009. Revised Selected Papers, pages 160–169, Berlin, Heidelberg, 2009. Springer Berlin Heidelberg.
43. Espen Bårdsnes Mikalsen, Jon Y. Hardeberg and Jean-Baptiste Thomas. Verification and extension of a camera-based end-user calibration method for projection displays. Conference on Colour in Graphics, Imaging, and Vision, vol. 2008, no. 1, pages 575–579, 2008.
45. Jean-Baptiste Thomas, Gael Chareyron and Alain Trémeau. Image watermarking based on a color quantization process. In Proc. SPIE, volume 6506, pages 650603–650603-12, 2007.
46. Jean-Baptiste Thomas, Jon Hardeberg, Irene Foucherot and Pierre Gouton. Additivity Based LC Display Color Characterization. In GCIS'07, volume 2, pages 50–55, 2007.
47. Jean-Baptiste Thomas and Alain Tremeau. A Gamut Preserving Color Image Quantization. In Image Analysis and Processing Workshops, 2007. ICIAPW 2007. 14th International Conference on, pages 221–226, Sept 2007.
Chapter 7
Curriculum Vitae
Contents
7.1 Situation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
7.2 Synopsis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
7.3 Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
7.4 Scientic History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
7.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
7.6 Teachings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
7.6.1 Courses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
7.6.2 Hours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.6.3 Responsabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.7 Supervisions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.7.1 Post Docs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.7.2 PhD students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.7.3 Master students . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.7.4 Other . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
7.8 Projects and funding . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.8.1 MUVApp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.8.2 EXIST and CISTERN . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.8.3 OFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
7.8.4 CNRS-INS2I-JCJC-2017 MOSAIC . . . . . . . . . . . . . . . . . . 47
7.8.5 AURORA 2015 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.8.6 PARI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.8.7 BQR PRES 2014 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.8.8 BQR 2012 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.8.9 Hypercept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.8.10 COSCH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
7.1 Situation
Jean-Baptiste THOMAS, PhD
Maître de Conférences / Associate Professor at Université de Bourgogne,
Franche-Comté,
Section CNU 61 (National University Council label),
Faculty of science and technology, dpt IEM
Laboratory of electronics, computer science and image (Le2i), FRE CNRS 2005
Sex : Male
Birthdate: 26/10/1981
Phone: +47 47 74 74 17
email: [email protected]
7.2 Synopsis
• I am currently a postdoctoral research fellow at NTNU-Gjøvik, working on the
MUVApp project, which is dedicated to understanding and measuring the visual
appearance of objects (see 7.8.1).
• I review for different scientific journals (IS&T Journal of Imaging Science and Tech-
nology, SID Journal of the Society for Information Display, OSA Chinese Optics
Letters, Scientific Research and Essays, IEEE Transactions on Image Processing,
IEEE Transactions on Circuits and Systems for Video Technology, IEEE Transactions
on Industrial Electronics, T&F Journal of Modern Optics, MDPI Sensors, MDPI
Remote Sensing, SPIE Optical Engineering).
• I was principal researcher and coordinator of the PSPC project Open Food
System2 for my lab.
• I was co-head of the team MOTI (Methods and tools for image processing) of my
lab in 2015, and I was elected to sit on the lab council between 2012 and 2016.
2. http://www.openfoodsystem.fr
1. http://www-iem.u-bourgogne.fr/MASTER/MSCAESE/homepage_128.htm
3. http://cordis.europa.eu/project/rcn/198017_en.html
4. http://www.cistern.nl/index.php/consortium
7.3 Education
• PhD, Color imaging science (2009), from Université de Bourgogne, France, in col-
laboration with Gjøvik University College, Norway.
• MSc, Optics, Image and Vision, with major in Image, Vision and Signal processing
(2006), from Université Jean Monnet, Saint-Etienne, France.
Projects: MUVApp.
Supervisors: Professors Pierre Gouton and Jon Y. Hardeberg, and Dr. Irène
Foucherot.
Laboratory: LIGIV.
7.5 References
• Academic references.
7.6 Teachings
7.6.1 Courses
My teaching duties are attached to the IEM department of the UFR Sciences et
Techniques at UBFC-Dijon.
• I was responsible for the colorimetry module in the first year of the Master
EVA, for which I built a new course format. I handled lectures, tutorials (TDs)
and practicals (TPs). I initiated a project-based process with the students, in
which they were responsible, with my help, for designing a topic to work on and
presenting it to the rest of the class. I also invited several academics5, from
Norway in particular through our ERASMUS agreement and other research
projects, who gave a few hours of teaching or participated in the recommendations
for the students' projects. I also hosted colleagues from France as participants.
This specific format made the students more responsible for their overall study
project, and according to their evaluations and feedback, they enjoyed it.
5. Pr. Ivar Farup, Ass. Pr. Marius Pedersen, Edoardo Provenzi, Philippe Colantoni.
7.6.2 Hours
I did give teachings every year, except when I was in délégation CNRS, for which UBFC
received a nancial compensation. Since I am in détachement/sabbatical now, I do not
have any teaching duty at UBFC. Note that I had to refuse my PEDR prime, obtained
in 2016, because of this situation.
7.6.3 Responsibilities
• I took responsibility for the Master Advanced Electronic Systems Engineering6,
taught in English, in Spring 2015. My main task was to launch this Master program.
In Fall 2016, the Master opened with 14 students. Along with this Master, we
signed an MoU with Hainan University in China, in which we agreed on the
exchange of Master students. We also collaborated with the French Embassy
in Nigeria, and our students from there were awarded 3 grants from the oil industry
to come and study with us. I had to leave for my sabbatical, so I handed the
Master program over to Professor Jean-Marie Bilbault in September 2016.
7.7 Supervisions
7.7.1 Post Docs
I worked with two postdoctoral fellows, whom we recruited on the projects OFS and
EXIST; they are summarized in Table 7.2.
6. http://www-iem.u-bourgogne.fr/MASTER/MSCAESE/homepage_128.htm
• Dr Xingbo Wang now works for AAC Technologies, a company in China that
manufactures mobile phone components, where he is in charge of the Chinese
part of the company's imaging solution team, focusing on image quality
(IQ lab construction, IQ assessment, IQ tuning, as well as algorithm development,
instrument purchasing, recruitment, routine management, etc.).
• Haris Ahmad Khan is in the process of writing his thesis. He has already been
offered several postdoctoral positions, so I have no doubt that he will do very well
afterwards.
7.7.4 Other
• I occasionally serve as an external examiner for Master theses at HIG/NTNU-Gjøvik
and at EPFL.
• I was invited to the jury of a PhD thesis defense (Hasan Sheikh Faridul, Uni-
versité Jean Monnet, 06/01/2014).
I wrote the proposal on multispectral imaging for the Le2i and would have man-
aged the projects had I not left on détachement. Pierre Gouton followed up
after my departure.
7.8.3 OFS
Open Food System aims at defining the kitchen of tomorrow by means of connected
and instrumented cooking devices. I wrote the proposal and managed this project,
funded by the Ministry of Industry, for the Le2i. The project started in January 2013
and ended in July 2016.
We wrote the proposal together with Benjamin, who is the project manager. At first
the partnership was with the Le2i, but the project followed me to Norway.
7.8.6 PARI
This program from the Conseil Régional de Bourgogne made it possible to finance
2 PhDs, those of Xingbo Wang and Haris Ahmad. Both co-fundings came from
NTNU-Gjøvik in Norway.
Within the PARI there were several calls for projects. The project that contains
the thesis of Xingbo Wang was co-written by Pierre Gouton and me; the project that
contains the thesis of Haris Ahmad was co-written by Olivier Laligant and me.
7.8.9 Hypercept
I was invited to participate in the Hypercept7 project, funded by the Norwegian
research council. This project gave me the opportunity to continue my long-standing
collaboration with HIG/NTNU-Gjøvik.
I participated in this project only as an external member. It helped finance many
travels and stays at NTNU.
7.8.10 COSCH
I am a member of the COST network project COSCH8, dedicated to cultural heritage.
I participated only lightly in this project, as an external member. Pr Alamin Mansouri
was in charge of this thematic at the Le2i.
7.9 Summary
A timeline summary is shown in Table 7.5, where the reader will observe the links between
the projects and research fellows. Jessica El Khoury was funded by the OFS project, as
was Pierre-Jean Lapray. Xingbo Wang and Haris Ahmad were funded by the regional
council through the PARI; the co-funding came from NTNU for both of them. Ping Zhao
was funded on the Hypercept project at NTNU, and we could develop his mobility via
the AURORA project. Keivan Ansari was funded by the EXIST project. My various
research stays were funded by the related projects.
7. http://colourlab.no/research_and_development/research_projects/hypercept
8. http://www.cost.eu/domains_actions/mpns/Actions/TD1201
Table 7.5: Gantt chart putting projects and supervisions into perspective on a timeline, covering seven years on SFA (2011–2018, by quarter). This table was made in early June 2018. The quarterly bars reduce to the following summary:
• EXIST and CISTERN: K. Ansari, post-doc (100% complete)
• OFS: P.-J. Lapray, post-doc (100% complete); J. El Khoury, PhD (100% complete)
• BQR-PRES and BQR: stay at EPFL (100% complete)
• PARI 1: X. Wang, PhD (100% complete)
• PARI 2: H. Ahmad, PhD (90% complete)
• Hypercept: stay at NTNU (100% complete)
• AURORA: P. Zhao, PhD (100% complete)
• COSCH
• MOSAIC
• MUVApp: stay at NTNU (66% complete)
Bibliography
[Amba 2017a] Prakhar Amba, Jean-Baptiste Thomas and David Alleysson. N-LMMSE
Demosaicing for Spectral Filter Arrays. Journal of Imaging Science and Technol-
ogy, vol. 61, no. 4, pages 40407-1–40407-11, 2017. (Cited on pages 13 and 31.)
[Amba 2017b] Prakhar Amba, Jean-Baptiste Thomas and David Alleysson. N-LMMSE
Demosaicing for Spectral Filter Arrays. Color and Imaging Conference, vol. 61,
no. 4, pages 40407-1–40407-11, 2017. (Cited on page 31.)
[Ansari 2017] Keivan Ansari, Jean-Baptiste Thomas and Pierre Gouton. Spectral band
Selection Using a Genetic Algorithm Based Wiener Filter Estimation Method for
Reconstruction of Munsell Spectral Data. Electronic Imaging, vol. 2017, no. 18,
pages 190–193, 2017. (Cited on pages 19 and 31.)
[Bakke 2009] Arne Magnus Bakke, Jean-Baptiste Thomas and Jeremie Gerhardt. Com-
mon assumptions in color characterization of projectors. In GCIS'09, volume 3,
pages 50–55, 2009. (Cited on page 31.)
[Bayer 1976] Bryce Edward Bayer. Color imaging array, July 20 1976. US Patent
3,971,065. (Cited on page 7.)
[Benezeth 2014] Yannick Benezeth, Désiré Sidibé and Jean-Baptiste Thomas. Back-
ground subtraction with multispectral video sequences. In IEEE International Con-
ference on Robotics and Automation workshop on Non-classical Cameras, Camera
Networks and Omnidirectional Vision (OMNIVIS), 6 pages, 2014. (Cited on
pages 14, 24, 25 and 32.)
[Colantoni 2010] Philippe Colantoni, Jean-Baptiste Thomas and Ruven Pillay. Graph-
based 3D Visualization of Color Content in Paintings. In Alessandro Artusi,
Morwena Joly, Genevieve Lucet, Denis Pitzalis and Alejandro Ribes, editors,
VAST: International Symposium on Virtual Reality, Archaeology and Intelligent
Cultural Heritage - Short and Project Papers. The Eurographics Association,
2010. (Cited on page 32.)
[Colantoni 2016] Philippe Colantoni, Jean-Baptiste Thomas and Alain Trémeau. Sam-
pling CIELAB color space with perceptual metrics. International Journal of Imag-
ing and Robotics, vol. 16, no. 3, pages xxxx, 2016. (Cited on page 32.)
[Cuevas Valeriano 2018a] Leonel Cuevas Valeriano, Jean-Baptiste Thomas and Alexan-
dre Benoit. Deep learning for dehazing: Benchmark and analysis. In NOBIM,
Hafjell, Norway, March 2018. Slides: http://jbthomas.org/Conferences/
2018NOBIMSlides.pdf. (Cited on page 32.)
[Cuevas Valeriano 2018b] Leonel Cuevas Valeriano, Jean-Baptiste Thomas and Alexan-
dre Benoit. Deep learning for dehazing: limits of the model. In To appear in
CVCS 2018, 2018. (Cited on page 32.)
[de Dravo 2017a] Vincent Whannou de Dravo, Jessica El Khoury, Jean-Baptiste Thomas,
Alamin Mansouri and Jon Yngve Hardeberg. An Adaptive Combination of Dark
and Bright Channel Priors for Single Image Dehazing. Journal of Imaging Science
and Technology, vol. 2017, no. 25, pages 226–234, 2017. (Cited on page 32.)
[de Dravo 2017b] Vincent Whannou de Dravo, Jessica El Khoury, Jean-Baptiste Thomas,
Alamin Mansouri and Jon Yngve Hardeberg. An Adaptive Combination of Dark
and Bright Channel Priors for Single Image Dehazing. Color and Imaging Con-
ference, vol. 2017, no. 25, pages 226–234, 2017. (Cited on page 32.)
[El Khoury 2014] Jessica El Khoury, Jean-Baptiste Thomas and Alamin Mansouri. Does
Dehazing Model Preserve Color Information? In Signal-Image Technology and
Internet-Based Systems (SITIS), 2014 Tenth International Conference on, pages
606–613, Nov 2014. (Cited on page 32.)
[El Khoury 2015] Jessica El Khoury, Jean-Baptiste Thomas and Alamin Mansouri. Haze
and convergence models: Experimental comparison. In AIC 2015, Tokyo, Japan,
May 2015. (Cited on page 32.)
[El Khoury 2016] Jessica El Khoury, Jean-Baptiste Thomas and Alamin Mansouri. A
Color Image Database for Haze Model and Dehazing Methods Evaluation. In
Alamin Mansouri, Fathallah Nouboud, Alain Chalifour, Driss Mammass, Jean
Meunier and Abderrahim Elmoataz, editors, Image and Signal Processing: 7th
International Conference, ICISP 2016, Trois-Rivières, QC, Canada, May 30 -
June 1, 2016, Proceedings, pages 109–117, Cham, 2016. Springer International
Publishing. (Cited on page 32.)
[El Khoury 2017] Jessica El Khoury, Steven Le Moan, Jean-Baptiste Thomas and
Alamin Mansouri. Color and sharpness assessment of single image dehazing.
Multimedia Tools and Applications, Sep 2017. (Cited on page 32.)
[El Khoury 2018a] Jessica El Khoury, Jean-Baptiste Thomas and Alamin Mansouri. Col-
orimetric screening of the haze model limits. In Alamin Mansouri, Abderrahim
Elmoataz, Fathallah Nouboud and Driss Mammass, editors, Image and Signal
Processing: 8th International Conference, ICISP 2018, Lecture Notes in Com-
puter Science, volume 10884, pages 481–489, Cham, June 2018. Springer Interna-
tional Publishing. (Cited on page 32.)
[El Khoury 2018b] Jessica El Khoury, Jean-Baptiste Thomas and Alamin Mansouri. A
Database with Reference for Image Dehazing Evaluation. Journal of Imaging
Science and Technology, vol. 62, no. 1, pages 10503-1–10503-13, 2018. (Cited on
page 32.)
[Finlayson 1994] Graham D. Finlayson, Mark S. Drew and Brian V. Funt. Spectral
sharpening: sensor transformations for improved color constancy. J. Opt. Soc.
Am. A, vol. 11, no. 5, pages 1553–1563, May 1994. (Cited on page 22.)
[G. 2004] Robert W. G. Hunt, Changjun Li and M. Ronnier Luo. Chromatic adaptation
transforms. Color Research & Application, vol. 30, no. 1, pages 69–71, 2004.
(Cited on page 22.)
[Gigilashvili 2018a] Davit Gigilashvili, Jean-Baptiste Thomas and Jon Yngve Hardeberg.
Comparison of Mosaic Patterns for Spectral Filter Arrays. In To appear in CVCS
2018, 2018. (Cited on page 31.)
[Henz 2018] Bernardo Henz, Eduardo S. L. Gastal and Manuel M. Oliveira. Deep Joint
Design of Color Filter Arrays and Demosaicing. Computer Graphics Forum, 2018.
(Cited on page 24.)
[Hirakawa 2008] K. Hirakawa and P. J. Wolfe. Spatio-Spectral Color Filter Array Design
for Optimal Image Recovery. IEEE Transactions on Image Processing, vol. 17,
no. 10, pages 1876–1890, Oct 2008. (Cited on page 8.)
[Jia 2016] J. Jia, K. J. Barnard and K. Hirakawa. Fourier Spectral Filter Array for
Optimal Multispectral Imaging. IEEE Transactions on Image Processing, vol. 25,
no. 4, pages 1530–1543, April 2016. (Cited on page 9.)
[Khan 2017a] Haris Ahmad Khan, Jean-Baptiste Thomas and Jon Yngve Hardeberg.
Analytical Survey of Highlight Detection in Color and Spectral Images. In Simone
Bianco, Raimondo Schettini, Alain Trémeau and Shoji Tominaga, editors, Com-
putational Color Imaging: 6th International Workshop, CCIW 2017, Milan, Italy,
March 29-31, 2017, Proceedings, pages 197–208, Cham, 2017. Springer Interna-
tional Publishing. (Cited on page 32.)
[Khan 2017b] Haris Ahmad Khan, Jean-Baptiste Thomas and Jon Yngve Hardeberg.
Multispectral Constancy Based on Spectral Adaptation Transform. In Puneet
Sharma and Filippo Maria Bianchi, editors, Image Analysis: 20th Scandinavian
Conference, SCIA 2017, Tromsø, Norway, June 12–14, 2017, Proceedings, Part
II, pages 459–470, Cham, 2017. Springer International Publishing. (Cited on
page 32.)
[Khan 2017c] Haris Ahmad Khan, Jean-Baptiste Thomas, Jon Yngve Hardeberg and
Olivier Laligant. Illuminant estimation in multispectral imaging. J. Opt. Soc.
Am. A, vol. 34, no. 7, pages 1085–1098, Jul 2017. (Cited on pages 19, 21 and 32.)
[Khan 2018a] Haris Ahmad Khan, Sofiane Mihoubi, Benjamin Mathon, Jean-Baptiste
Thomas and Jon Yngve Hardeberg. HyTexiLa: High Resolution Visible and Near
Infrared Hyperspectral Texture Images. Sensors, vol. 18, no. 7, 2018. (Cited on
page 31.)
[Khan 2018b] Haris Ahmad Khan, Jean-Baptiste Thomas and Jon Hardeberg. Towards
highlight based illuminant estimation in multispectral images. In Alamin Mansouri,
Abderrahim Elmoataz, Fathallah Nouboud and Driss Mammass, editors, Image
and Signal Processing: 8th International Conference, ICISP 2018, Lecture Notes
in Computer Science, volume 10884, pages 517–525, Cham, June 2018. Springer
International Publishing. (Cited on page 32.)
[Khan 2018c] Haris Ahmad Khan, Jean-Baptiste Thomas, Jon Yngve Hardeberg and
Olivier Laligant. Spectral Adaptation Transform for Multispectral Constancy.
Journal of Imaging Science and Technology, vol. 62, no. 2, pages 20504-1–20504-
12, 2018. (Cited on pages 13, 21 and 32.)
[Kriss 2015] Michael Kriss. Handbook of digital imaging. Wiley, John Wiley & Sons,
Ltd., 2015. (Cited on page 5.)
[Lam 2015] Edmund Y. Lam. Computational photography with plenoptic camera and
light field capture: tutorial. J. Opt. Soc. Am. A, vol. 32, no. 11, pages 2021–2032,
Nov 2015. (Cited on page 8.)
[Lapray 2014a] Pierre-Jean Lapray, Jean-Baptiste Thomas and Pierre Gouton. A Mul-
tispectral Acquisition System using MSFAs. Color and Imaging Conference,
vol. 2014, no. 2014, pages 97–102, 2014. (Cited on page 31.)
[Lapray 2014b] Pierre-Jean Lapray, Xingbo Wang, Jean-Baptiste Thomas and Pierre
Gouton. Multispectral Filter Arrays: Recent Advances and Practical Implementa-
tion. Sensors, vol. 14, no. 11, page 21626, 2014. (Cited on page 31.)
[Lapray 2017b] Pierre-Jean Lapray, Jean-Baptiste Thomas and Pierre Gouton. High
Dynamic Range Spectral Imaging Pipeline For Multispectral Filter Array Cameras.
Sensors, vol. 17, no. 6, page 1281, 2017. (Cited on pages 13 and 31.)
[Lapray 2017c] Pierre-Jean Lapray, Jean-Baptiste Thomas, Pierre Gouton and Yassine
Ruichek. Energy balance in Spectral Filter Array camera design. Journal of the
European Optical Society-Rapid Publications, vol. 13, no. 1, jan 2017. (Cited on
pages 11, 13, 18 and 31.)
[Lapray 2018] Pierre-Jean Lapray, Luc Gendre, Alban Foulonneau and Laurent Bigué.
Database of polarimetric and multispectral images in the visible and NIR regions.
In Proc. SPIE, volume 10677, 14 pages, 2018. (Cited on page 23.)
[Lee 2005] Hsien-Che Lee. Introduction to color imaging science. Cambridge University
Press, 2005. (Cited on page 5.)
[LeMaster 2014] Daniel A. LeMaster and Keigo Hirakawa. Improved microgrid arrange-
ment for integrated imaging polarimeters. Opt. Lett., vol. 39, no. 7, pages 1811–
1814, Apr 2014. (Cited on page 9.)
[Li 2008] Xin Li, Bahadir Gunturk and Lei Zhang. Image demosaicing: a systematic
survey. In William A. Pearlman, John W. Woods and Ligang Lu, editors, Visual
Communications and Image Processing 2008, volume 6822. SPIE, January 2008.
(Cited on pages 14, 20 and 24.)
[Li 2018] Yuqi Li, Aditi Majumder, Hao Zhang and M. Gopi. Optimized Multi-Spectral
Filter Array Based Imaging of Natural Scenes. Sensors, vol. 18, no. 4, 2018. (Cited
on page 20.)
[Losson 2015] Olivier Losson and Ludovic Macaire. CFA Local Binary Patterns for Fast
Illuminant-invariant Color Texture Classification. J. Real-Time Image Process.,
vol. 10, no. 2, pages 387–401, June 2015. (Cited on page 13.)
[Mansouri 2005] Alamin Mansouri, Franck Marzani, Jon Yngve Hardeberg and Pierre
Gouton. Optical calibration of a multispectral imaging system based on interference
filters. Optical Engineering, vol. 44, 12 pages, 2005. (Cited on page 1.)
[Miao 2006a] Lidan Miao and Hairong Qi. The design and evaluation of a generic method
for generating mosaicked multispectral filter arrays. IEEE Transactions on Image
Processing, vol. 15, no. 9, pages 2780–2791, Sept 2006. (Cited on page 8.)
[Miao 2006b] Lidan Miao, Hairong Qi, Rajeev Ramanath and Wesley E. Snyder. Binary
Tree-based Generic Demosaicking Algorithm for Multispectral Filter Arrays. IEEE
Transactions on Image Processing, vol. 15, no. 11, pages 3550–3558, Nov 2006.
(Cited on page 8.)
(Cited on page 8.)
[Mikalsen 2008] Espen Bårdsnes Mikalsen, Jon Y. Hardeberg and Jean-Baptiste Thomas.
Verification and extension of a camera-based end-user calibration method for pro-
jection displays. Conference on Colour in Graphics, Imaging, and Vision, vol. 2008,
no. 1, pages 575–579, 2008. (Cited on page 31.)
[Neuhaus 2018] Frank Neuhaus, Christian Fuchs and Dietrich Paulus. High-resolution
hyperspectral ground mapping for robotic vision. Proc. SPIE, vol. 10696, 9 pages,
2018. (Cited on page 24.)
[Nozick 2013b] Vincent Nozick and Jean-Baptiste Thomas. Camera Calibration: Geo-
metric and Colorimetric Correction. In 3D Video, pages 91–112. John Wiley &
Sons, Inc., 2013. (Cited on page 31.)
[Pedersen 2016] Marius Pedersen, Daniel Suazo and Jean-Baptiste Thomas. Seam-Based
Edge Blending for Multi-Projection Systems. International Journal of Signal Pro-
cessing, Image Processing and Pattern Recognition, vol. 9, no. 4, pages 11–26,
2016. (Cited on page 31.)
[Peguillet 2013] Hugues Peguillet, Jean-Baptiste Thomas, Pierre Gouton and Yassine
Ruichek. Energy balance in single exposure multispectral sensors. In Colour and
Visual Computing Symposium (CVCS), 2013, pages 1–6, Sept 2013. (Cited on
page 31.)
[Provenzi 2017] Edoardo Provenzi. Computational color science. John Wiley & Sons,
Inc., apr 2017. (Cited on page 24.)
[Ramanath 2001] Rajeev Ramanath, Wesley E. Snyder, Griff L. Bilbro and William A.
Sander. Robust Multispectral Imaging Sensors for Autonomous Robots. Technical
report, North Carolina State University, 2001. Retrieved 16 June 2014. (Cited
on page 8.)
[Ramanath 2004] Rajeev Ramanath, Wesley E. Snyder and Hairong Qi. Mosaic multi-
spectral focal plane array cameras. In Infrared Technology and Applications XXX,
volume 5406 of Proc. SPIE, pages 701–712, 2004. (Cited on page 8.)
[Shafer 1985] Steven A. Shafer. Using color to separate reflection components. Color
Research & Application, vol. 10, no. 4, pages 210–218, 1985. (Cited on page 4.)
[Sharma 2002] Gaurav Sharma. Digital color imaging handbook. CRC Press, Inc., Boca
Raton, FL, USA, 2002. (Cited on page 5.)
[Thomas 2007a] Jean-Baptiste Thomas, Gael Chareyron and Alain Trémeau. Image
watermarking based on a color quantization process. In Proc. SPIE, volume 6506,
pages 650603-1–650603-12, 2007. (Cited on page 32.)
[Thomas 2007b] Jean-Baptiste Thomas, Jon Hardeberg, Irene Foucherot and Pierre Gou-
ton. Additivity Based LC Display Color Characterization. In GCIS'07, volume 2,
pages 50–55, 2007. (Cited on page 31.)
[Thomas 2007c] Jean-Baptiste Thomas and Alain Trémeau. A Gamut Preserving Color
Image Quantization. In Image Analysis and Processing Workshops, 2007. ICI-
APW 2007. 14th International Conference on, pages 221–226, Sept 2007. (Cited
on page 32.)
[Thomas 2008c] Jean-Baptiste Thomas, Jon Y. Hardeberg, Irene Foucherot and Pierre
Gouton. The PLVC display color characterization model revisited. Color Research
& Application, vol. 33, no. 6, pages 449–460, 2008. (Cited on page 31.)
[Thomas 2009c] Jean-Baptiste Thomas and Arne Magnus Bakke. A Colorimetric Study
of Spatial Uniformity in Projection Displays. In Alain Trémeau, Raimondo Schet-
tini and Shoji Tominaga, editors, Computational Color Imaging: Second Interna-
tional Workshop, CCIW 2009, Saint-Etienne, France, March 26-27, 2009. Revised
Selected Papers, pages 160–169, Berlin, Heidelberg, 2009. Springer Berlin Heidel-
berg. (Cited on page 31.)
[Thomas 2010b] Jean-Baptiste Thomas, Arne Bakke and Jeremie Gerhardt. Spatial
Nonuniformity of Color Features in Projection Displays: A Quantitative Analysis.
Journal of Imaging Science and Technology, vol. 54, no. 3, pages 30403-1–30403-
13, 2010. (Cited on page 31.)
[Thomas 2012] Jean-Baptiste Thomas and Jérémie Gerhardt. Webcam based display cal-
ibration. Color and Imaging Conference, vol. 2012, no. 1, pages 82–87, 2012.
(Cited on page 31.)
[Thomas 2013a] Jean-Baptiste Thomas, Philippe Colantoni and Alain Trémeau. On the
Uniform Sampling of CIELAB Color Space and the Number of Discernible Colors.
In Shoji Tominaga, Raimondo Schettini and Alain Trémeau, editors, Computa-
tional Color Imaging: 4th International Workshop, CCIW 2013, Chiba, Japan,
March 3-5, 2013. Proceedings, pages 53–67, Berlin, Heidelberg, 2013. Springer
Berlin Heidelberg. (Cited on page 32.)
[Thomas 2013b] Jean-Baptiste Thomas, Jon Y. Hardeberg and Alain Trémeau. Cross-
Media Color Reproduction and Display Characterization. In Christine Fernandez-
Maloigne, editor, Advanced Color Image Processing and Analysis, pages 81–118.
Springer New York, 2013. (Cited on page 31.)
[Thomas 2014b] Jean-Baptiste Thomas. MultiSpectral Filter Arrays: Design and demo-
saicing, November - December 2014. Guest lecture, LPNC, Grenoble and LISTIC,
Annecy. (Cited on page 31.)
[Thomas 2016a] Jean-Baptiste Thomas. MultiSpectral Filter Arrays: Tutorial and proto-
type definition, November - December 2016. EPFL, Lausanne and NTNU-Gjovik.
(Cited on page 31.)
[Thomas 2016b] Jean-Baptiste Thomas, Pierre-Jean Lapray, Pierre Gouton and Cédric
Clerc. Spectral Characterization of a Prototype SFA Camera for Joint Visible and
NIR Acquisition. Sensors, vol. 16, no. 7, page 993, 2016. (Cited on pages 9, 17
and 31.)
[Thomas 2017a] Jean-Baptiste Thomas, Jon Yngve Hardeberg and Gabriele Simone. Im-
age Contrast Measure as a Gloss Material Descriptor. In Simone Bianco, Rai-
mondo Schettini, Alain Trémeau and Shoji Tominaga, editors, Computational
Color Imaging: 6th International Workshop, CCIW 2017, Milan, Italy, March
29-31, 2017, Proceedings, pages 233–245, Cham, 2017. Springer International
Publishing. (Cited on page 31.)
[Thomas 2017b] Jean-Baptiste Thomas, Pierre-Jean Lapray and Pierre Gouton. HDR
Imaging Pipeline for Spectral Filter Array Cameras. In Puneet Sharma and Fil-
ippo Maria Bianchi, editors, Image Analysis: 20th Scandinavian Conference,
SCIA 2017, Tromsø, Norway, June 12–14, 2017, Proceedings, Part II, pages 401–
412, Cham, 2017. Springer International Publishing. (Cited on page 31.)
[Thomas 2017c] Jean-Baptiste Thomas, Yusuke Monno and Pierre-Jean Lapray. Spectral
Filter Arrays Technology, September 2017. T2C short course at the 25th Color and
Imaging Conference, Society for Imaging Science and Technology, September
11-15, 2017, Lillehammer, Norway. (Cited on page 31.)
[Thomas 2018c] Jean-Baptiste Thomas, Aurore Deniel and Jon Yngve Hardeberg. The
Plastique collection: A set of resin objects for material appearance research. In To
appear in the XIV Conferenza del colore, Firenze, Italy, September 2018. (Cited
on page 31.)
[Thomas 2018d] Jean-Baptiste Thomas and Ivar Farup. Demosaicing of Periodic and
Random Colour Filter Arrays by Linear Anisotropic Diffusion. To appear in Jour-
nal of Imaging Science and Technology, 2018. (Cited on page 11.)
[Wang 2013a] Xingbo Wang, J.-B. Thomas, J.Y. Hardeberg and P. Gouton. Discrete
wavelet transform based multispectral filter array demosaicking. In Colour and
Visual Computing Symposium (CVCS), 2013, pages 1–6, Sept 2013. (Cited on
pages 13 and 31.)
[Wang 2013b] Xingbo Wang, Jean-Baptiste Thomas, Jon Y. Hardeberg and Pierre Gou-
ton. Median filtering in multispectral filter array demosaicking. In Proc. SPIE,
volume 8660, pages 86600E-1–86600E-10, 2013. (Cited on pages 13 and 31.)
[Wang 2013c] Xingbo Wang, Jean-Baptiste Thomas, Jon Yngve Hardeberg and Pierre
Gouton. A Study on the Impact of Spectral Characteristics of Filters on Multispec-
tral Image Acquisition. In Lindsay MacDonald, Stephen Westland and Sophie
Wuerger, editors, Proceedings of AIC Colour 2013, volume 4, pages 1765–1768,
Gateshead, United Kingdom, July 2013. (Cited on pages 19 and 31.)
[Wang 2014a] Congcong Wang, Xingbo Wang and Jon Yngve Hardeberg. A Linear In-
terpolation Algorithm for Spectral Filter Array Demosaicking. In Abderrahim
Elmoataz, Olivier Lezoray, Fathallah Nouboud and Driss Mammass, editors, Im-
age and Signal Processing, pages 151–160, Cham, 2014. Springer International
Publishing. (Cited on page 13.)
[Wang 2014b] Xingbo Wang, Marius Pedersen and Jean-Baptiste Thomas. The influence
of chromatic aberration on demosaicking. In Visual Information Processing (EU-
VIP), 2014 5th European Workshop on, pages 1–6, Dec 2014. (Cited on pages 14
and 31.)
[Wang 2014c] Xingbo Wang, Jean-Baptiste Thomas, Jon Yngve Hardeberg and Pierre
Gouton. Multispectral imaging: narrow or wide band filters? Journal of the
International Colour Association, vol. 12, pages 44–51, 2014. (Cited on
pages 19 and 31.)
[Wang 2015] Xingbo Wang, Philip J. Green, Jean-Baptiste Thomas, Jon Y. Hardeberg
and Pierre Gouton. Evaluation of the Colorimetric Performance of Single-Sensor
Image Acquisition Systems Employing Colour and Multispectral Filter Array. In
Alain Trémeau, Raimondo Schettini and Shoji Tominaga, editors, Computational
Color Imaging: 5th International Workshop, CCIW 2015, Saint Etienne, France,
March 24-26, 2015, Proceedings, pages 181–191, Cham, 2015. Springer Interna-
tional Publishing. (Cited on pages 19 and 31.)
[Wang 2016] Xingbo Wang. Filter array based spectral imaging: Demosaicking and
design considerations. PhD thesis, NTNU, Doctoral theses at NTNU;2016:251,
2016. (Cited on page 13.)
[Winkens 2017] Christian Winkens, Volkmar Kobelt and Dietrich Paulus. Robust Fea-
tures for Snapshot Hyperspectral Terrain-Classification. In Michael Felsberg, An-
ders Heyden and Norbert Krüger, editors, Computer Analysis of Images and
Patterns, pages 16–27, Cham, 2017. Springer International Publishing. (Cited on
page 24.)
[Zahra Sadeghipoor 2012] Zahra Sadeghipoor, Yue M. Lu and Sabine Süsstrunk. Optimum
spectral sensitivity functions for single sensor color imaging. In Proc. SPIE, volume
8299, 14 pages, 2012. (Cited on page 24.)
[Zhao 2013] Ping Zhao, Marius Pedersen, Jon Yngve Hardeberg and Jean-Baptiste
Thomas. Camera-based measurement of relative image contrast in projection dis-
plays. In Visual Information Processing (EUVIP), 2013 4th European Workshop
on, pages 112–117, June 2013. (Cited on page 32.)
[Zhao 2014b] Ping Zhao, Marius Pedersen, Jean-Baptiste Thomas and Jon Yngve Hard-
eberg. Perceptual Spatial Uniformity Assessment of Projection Displays with a
Calibrated Camera. Color and Imaging Conference, vol. 2014, no. 2014, pages
159–164, 2014. (Cited on page 32.)
[Zhao 2015a] Ping Zhao, Marius Pedersen, Jon Yngve Hardeberg and Jean-Baptiste
Thomas. Measuring the Relative Image Contrast of Projection Displays. Journal
of Imaging Science and Technology, vol. 59, no. 3, pages 30404-1–30404-13, 2015.
(Cited on page 32.)
[Zhao 2015b] Ping Zhao, Marius Pedersen, Jon Yngve Hardeberg and Jean-Baptiste
Thomas. Measuring the Relative Image Contrast of Projection Displays. Color
and Imaging Conference, vol. 2015, no. 1, pages 79–91, 2015. (Cited on page 32.)
[Zhou 2018] Ruofan Zhou, Radhakrishna Achanta and Sabine Süsstrunk. Deep
Residual Network for Joint Demosaicing and Super-Resolution. CoRR,
vol. abs/1802.06573, 2018. (Cited on page 24.)
Multispectral imaging for computer vision
Abstract: The main objective of this report is to provide an overview of my research
activities on multispectral imaging based on spectral filter arrays. Based on this experi-
ence, we formulate future directions and challenges.
Keywords: Multispectral imaging, spectral filter arrays, imaging pipeline