Introducing Photonics
The essential guide for anyone wanting a quick introduction to the fundamental ideas
underlying photonics, the science of modern optical systems. The author uses his
40 years of experience in photonics research and teaching to provide intuitive explan-
ations of key concepts and demonstrates how these relate to the operation of photonic
devices and systems. The reader can gain insight into the nature of light and the ways in
which it interacts with materials and structures and learn how these basic ideas are
applied in areas such as optical systems, 3D imaging and astronomy. Carefully designed
end-of-chapter problems enable students to check their understanding. Hints are given
at the end of the book and full solutions are available online. Mathematical treatments
are kept as simple as possible, allowing the reader to grasp even the most complex of
concepts. Clear, concise and accessible, this is the perfect guide for undergraduate
students taking a first course in photonics and anyone in academia or industry wanting
to review the fundamentals.
BRIAN CULSHAW
University of Strathclyde
University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre,
New Delhi – 110025, India
79 Anson Road, #06–04/06, Singapore 079906
www.cambridge.org
Information on this title: www.cambridge.org/9781107155732
DOI: 10.1017/9781316659182
© Cambridge University Press 2020
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2020
Printed in the United Kingdom by TJ International Ltd, Padstow, Cornwall
A catalogue record for this publication is available from the British Library.
Library of Congress Cataloging-in-Publication Data
Names: Culshaw, B., author.
Title: Introducing photonics / Brian Culshaw.
Description: Cambridge ; New York, NY : Cambridge University Press, 2020. |
Includes bibliographical references and index.
Identifiers: LCCN 2019036970 (print) | LCCN 2019036971 (ebook) | ISBN 9781107155732
(hardback) | ISBN 9781316609415 (paperback) | ISBN 9781316659182 (epub)
Subjects: LCSH: Photonics.
Classification: LCC TA1520 .C85 2020 (print) | LCC TA1520 (ebook) |
DDC 621.36/5–dc23
LC record available at https://lccn.loc.gov/2019036970
LC ebook record available at https://lccn.loc.gov/2019036971
Preface page xi
1 Photonics – An Introduction 1
1.1 What is Photonics? 1
1.2 Exploring Some Concepts 3
1.3 Applications 5
1.4 About This Book 5
1.5 Some Related Background Ideas 8
1.6 Problems 9
2 The Nature of Light 11
2.1 Light in a Vacuum 11
2.2 Light in Isotropic and Anisotropic Materials 12
2.3 Light Interacting with Structures 19
2.4 Summary 23
2.5 Problems 23
3 Light Interacting with Materials 25
3.1 Linear Optical Materials 25
3.2 Non-Linear Optical Materials 29
3.3 The Diversity of Optical Absorption 33
3.4 Light Emission 43
3.5 Materials with Controllable Optical Properties 51
3.6 Summary 54
3.7 Problems 55
4 Light Interacting with Structures 58
4.1 Structures with Dimensions Much Greater than the Optical Wavelength 58

Preface
Light has immediate and obvious appeal – the vast majority of us see light and
communicate through it daily. We process images – facial expressions and
traffic signs. We know light has an impact on people and places, emotionally
and physically. Different colours affect our moods, sunshine lifts the spirit but
with side effects for the excessively indulgent.
The word ‘photonics’ emerged into the scientific and technical vocabulary
towards the end of the twentieth century and has come to mean much more than
simply visible light extending into the redder than red and the bluer than blue.
Photonics is the science of light. It is the technology of generating, controlling,
and detecting lightwaves and photons, which are particles of light. The characteris-
tics of the waves and photons can be used to explore the universe, cure diseases and
even to solve crimes. Scientists have been studying light for hundreds of years. The
colours of the rainbow are only a small part of the entire lightwave range called the
electromagnetic spectrum. Photonics explores a wide variety of wavelengths, from
gamma rays to radio including x-rays, ultraviolet, and infrared light.
Photonics is also gaining much respect within the technical community;
indeed 2015 was the International Year of Light and from 2018 onwards the
International Day of Light will be marked on 21 May. But what makes photon-
ics special? Indeed, what is it and how does it contribute to our emerging world?
These questions are easy to answer. Fibre optic communications are critical to
the success of the internet, the biomedical community uses ever more versatile
visible imaging systems exploiting subtle photonic concepts and phototherapy is
now routine. Mobile displays and digital cameras are ubiquitous and there is
much, much more to come.
The aim of this book is primarily to provide answers to the above two
questions in a concise and accessible manner. Yes, there are one-thousand-page
tomes full of mathematics which expound every detail of the scientific
principle. However, here we seek to highlight basic concepts and ideas and,
most of all, attempt to develop some intuitive insight into what is going on.
Such insights, whilst not necessarily always totally accurate, are essential to
understanding and formulating ideas and making that essential first guess
(rough estimate) as to whether an idea is indeed feasible.
Some readers will be intrigued by photonics for its fascination alone as a
technical discipline. For many the thought that this fascination may evolve into
a diversity of application sectors over the coming decades will stimulate
further curiosity and encourage broader participation. The final couple of
chapters look very briefly into the diversity of prospects, both in applications
and in scientific advances, to trigger yet more application potential. One
aspiration here is to motivate yet further interest in this intriguing domain.
The book also includes problems at the end of each chapter, and the principal
aim of these is to encourage readers to dig further and discuss the possible
solutions among themselves. Indeed, many may respond best to teamwork
approaches where different readers approach different pre-agreed tasks and then
come together to evaluate a complete solution. Many of the problems are some-
what like ‘real engineering’ in that there is no perfect answer but the optimum
approach needs to be fully understood and critically assessed by the readers. There
will be discussion spaces on the web available as the book progresses.
For those wishing for more detail there are also many suggestions for further
reading; there is a great deal of good material available on the web. Indeed,
perhaps the real hope is that this book will stimulate the reader’s curiosity
towards discovering more about this fascinating topic.
Approaching and hopefully consolidating these insights has, from the
author’s perspective, emerged over many years with enormous thanks to
countless encounters with groups ranging from students to internationally
recognised experts; hopefully this book will consolidate at least the gist of
these many experiences. Any list of these encounters is too long to mention
and any attempt will inevitably have omissions.
There are also other acknowledgments to make, notably to the ever tolerant
Sarah Strange at CUP whose constructively critical and wise comments on the
manuscript have been remarkably helpful and thought provoking. All errors and
omission, though, are purely the author’s fault and comments from readers will
always be welcome. Thanks too to Elizabeth Horne and Julie Lancashire for
constructive support and understanding when yet another delay in the delivery of
the manuscript appeared. Then there was Susan Parkinson, a splendidly under-
standing copy editor. Last, but by no means least, a very big thank you to the
long suffering Aileen, who not only put up with the author in the household but
also turned the manuscript from unintelligible ramblings into (we hope) a
comprehensible text. We shall certainly celebrate when the book emerges.
Brian Culshaw
Glasgow
1
Photonics – An Introduction
[Figure 1.1: the electromagnetic spectrum from 1 MHz (1 km wavelength) to beyond 1 EHz (1 nm and below), with electronics increasingly important at low frequencies and photonics at high; everyday uses from medium-wave radio, TV, radar, mobile phones and satellite links through to medical imaging, astronomy and X-ray spectroscopy are marked, with the narrow visible band in between.]
Figure 1.1 The principal regions of the electromagnetic spectrum. The boundaries between the various regions are fuzzy (except for the tiny
visible part), though most regions have a presence in everyday life.
The focus in what follows is on those concepts which are most important in
and around the visible spectrum, from the far infrared to the far ultraviolet.
This has emerged as the more conventional domain for the term ‘photonics’.
The aspiration in this book is to convey an intuitive understanding of the
essential photonics concepts and to develop a critical insight into applying
them in order to appreciate the potential and, indeed, the limitations of new and
emerging photonic technologies.
the particles in the material with which the electromagnetic wave is interacting.
The energy of the photon, namely Planck’s constant multiplied by the fre-
quency of the electromagnetic wave, and the energy of a freely moving
particle, which is of the order of Boltzmann’s constant times the absolute
temperature, are compared in Figure 1.2 at a temperature of 300 K. (The
photon energy here is presented in electron volts – the energy change as an
electron changes its electrical potential by 1 V – see Appendix 5 for the values
of these constants.) When the photon energy comfortably exceeds the inherent
thermal background energy of particles (at a wavelength of about 30 microns
at 300 K – the far infrared), then electromagnetic radiation is captured or,
indeed, generated as a particle. At lower energies (wavelengths above about
300 microns, 1 THz) we can regard the capture or generation of electromag-
netic radiation as being due to a flow of electrons produced by the electric field
in an electromagnetic wave. In other words, at lower frequencies a wave model
for the interaction becomes appropriate; the ideas of ‘electronics’ rather than
‘photonics’ apply. This will depend on temperature: at 3 K a photonic descrip-
tion of an interaction will begin to be applicable at frequencies two decades
less than at 300 K.
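As a quick check of these boundary claims, the crossover wavelength at which the photon energy hc/λ equals the thermal energy k_BT can be computed directly. A minimal sketch in Python (using scipy's physical constants; the text's "comfortably exceeds" criterion places the practical boundary somewhat below this simple equality point):

```python
from scipy.constants import h, c, k  # Planck constant, speed of light, Boltzmann constant

def crossover_wavelength(T):
    """Wavelength at which the photon energy h*c/lambda equals k_B*T."""
    return h * c / (k * T)

for T in (300.0, 3.0):
    print(f"T = {T:5.1f} K -> lambda = {crossover_wavelength(T) * 1e6:9.1f} microns")
# ~48 microns at 300 K (the same order as the ~30 micron figure quoted in the
# text); at 3 K the wavelength is 100x longer, i.e. the boundary frequency drops
# by two decades, as stated above.
```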
Similar observations apply to structural dimensions, though the boundaries
between the photonic and electronic descriptions are based on different criteria.
[Figure 1.2 (plot): photon energy in eV against wavelength from 1 pm to 1 m, with the visible spectrum and the 300 K thermal energy line marked; photonics concepts dominate at short wavelengths, electronics concepts at long.]
Figure 1.2 The plot shows photon energy as a function of wavelength. The
broken line shows the energy of thermal random motion at 300 K. At wavelengths
below about 30 microns (10 THz), the photon energy begins to dominate. At about
300 microns (~1 THz) the ideas of electronics become more applicable. The THz
region falls in the transition zone.
Materials behave ‘electronically’ when their dimensions are significantly less
than a wavelength. A contrasting aspect of photonics has been that very small
scale structures with dimensions in the photonics region have only recently
become potentially achievable; that is, to dimensional precision measured in
nanometres. It is remarkable that gold nanospheres have been used to provide a
spectrum of permanent colour in glassware since ancient times. Finally, there is
the dimensional range of structures which is comparable with the wavelength
and it is here that diffraction effects, in particular, become critically important
and waveguiding – as in fibre optics – comes to the fore.
These principles and their context in potential applications will be explored
in more detail later in the book. They have, however, prompted considerable
debate as to whether light is ‘really’ a wave or a particle. The answer to this
question is that it depends on the context. Our aspiration here is to help
develop an understanding of which approach is relevant and when. An attempt
to portray these differing situations is presented in Figure 1.3, which highlights
the underlying phenomena that constitute photonics.
1.3 Applications
Applications, achieved and potentially feasible, are becoming a major stimulus
for the interest in and development of photonics. We shall explore briefly a
number of case studies later in the book, but among the everyday presence of
photonics sits the internet, which could not function at all without fibre optic
communications. Your camera, your smart phone and the increasing efficiency
and versatility of lighting technologies all rely on recent advances in photon-
ics. Many aspects of medical diagnosis, microscopy and medical therapeutics
rely on photonics. Optical absorption is the initiator for climate change; the
monitoring processes measuring changes in our atmosphere, as well as the
long-term solutions aimed at driving our energy sources directly from light, all
need an understanding of photonics and its exploitation.
Figure 1.3 A picture of ‘what is photonics’: the generation, manipulation, transmission and detection of electromagnetic radiation in that region of
the spectrum where the photon picture dominates the behaviour at generation and detection, but where transmission can be described in terms of
waves, in large-scale dielectrics or in medium-scale waveguides. At the smallest scale, over very short distances, currents in conductors or in nano-
scale synthetic dielectric structures, become important.
[Figure 1.4: the four chapter themes – Chapter 1 Introduction, Chapter 2 The Nature of Light, Chapter 3 Light and Materials, Chapter 4 Light and Structures – with light emission (LEDs, lasers) bridging them.]
Figure 1.4 The essential concepts of photonics and their organisation in this book.
[Figure 1.5: an applications map spanning Chapters 5 and 6 – fibre communications, entanglement, plasmonics, nanophotonics, optical circuits, microscopy, telescopes, phototherapies, solar energy, environmental monitoring etc.]
Figure 1.5 The present and future role of photonics in our developing society.
through further exploitation of what we already know or through the use of the
new tools, as encapsulated in Figure 1.5.
1.6 Problems
Some topics to investigate:
1. How might the boundary lines for photonics, as indicated in Figure 1.2,
change in deep space? Could there be circumstances where, for example,
the photon concept is useful for describing an electromagnetic wave of 3 cm
wavelength?
2
The Nature of Light
The question ‘is light a ray, a wave or a particle?’ has long fascinated
philosophers and has intensified since the concept of the photon first
emerged at the beginning of the twentieth century. The pragmatic answer
to the question is ‘it depends what you are looking for’ and is really about
how best to conceptualise light in particular circumstances. We have
already mentioned such distinctions briefly in Chapter 1. This chapter will
extend these basic concepts to introduce the material which follows in later
chapters.
Figure 2.1 Propagation of rays and waves from a point source in a vacuum. The
wave amplitude representation indicates a reduction in power density with the
inverse square of the distance, but the total power remains constant as the wave
progresses.
[Figure: the mass–spring–damper model of an atom – an incoming optical wave drives an electron bound to the nucleus through an equivalent spring constant k (electron attracted to nucleus), with a damper constant C representing material losses; the atoms or molecules sit in an anisotropic crystalline structure.]
Another important observation here is that the refractive index for polarisa-
tion states other than the two states aligned with the crystalline structure of the
material will be a mix of two values – the input state will be split into these two
components along the so-called principal axes of the material. Each compon-
ent will consequently exit with a different phase delay, so that the output
polarisation state will in general be different from the input state. Appendix 1
explores this in more detail.
There are many other insights which are enabled by the mass–spring–
damper model. Any mass–spring–damper arrangement involves a frequency
response to a given applied force. At low frequencies the deflection introduced
by the applied force has a particular characteristic value and is in phase with
the applied force. At the resonant frequency, the deflection is a maximum but
is now in quadrature with the applied force. After passing through the reson-
ance the displacement becomes small and in antiphase (Figure 2.5).
By the same token, the refractive index will have exactly the same general
resonance characteristics as the mass–spring–damper model, though in molecular
materials there are many resonances and therefore the refractive index behaviour
follows the trends indicated in Figure 2.6. Consequently, since all these reson-
ance effects are taking place, the refractive index does vary with frequency and so
the velocity of light within the material is also a function of frequency – a
phenomenon known as dispersion. Looking at an everyday example, at low
frequencies water has a dielectric constant of 80, corresponding to a refractive
index of about 9, but the refractive index in the visible region is about 1.3.
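For a non-magnetic material the link between these two numbers is simply that the refractive index is the square root of the dielectric constant; as a quick worked check of the figures just quoted,
\[ n = \sqrt{\varepsilon} = \sqrt{80} \approx 8.9 . \]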
[Figure 2.5 (plot): displacement (arbitrary units) against frequency for the driven mass–spring–damper system, showing the large quadrature response at resonance.]
[Figure 2.6 (plot): the real and imaginary parts of the refractive index against frequency, from RF and microwaves through the infrared to the UV and beyond, with downward steps at the molecular, atomic and nuclear vibration resonances; the real part can fall below 1.]
Figure 2.6 also implies a number of other important photonic features,
common to all materials. The refractive index generally decreases with
frequency except around the resonances, after which there is another step
downwards. Also, the refractive index can drop below unity. But does the
velocity of light ever exceed its velocity in a vacuum? The implications of this
apparent dilemma are examined briefly in Appendix 4, which considers the
phase and group velocity concepts.
Thus far we have ignored another important aspect of our mass–spring–
damper model – namely, the damper. This represents the sources of loss, and
this lost energy is converted typically into heat. So, strictly speaking, the
refractive index, which describes the transfer of energy through the material,
should include this loss term. This is expressed through the representation
n′ = n + jk, where j is the square root of −1 and k is known as the imaginary
component of the refractive index. Additionally, as indicated in Figure 2.6, the
loss terms increase significantly at resonance since this is where the radiation is
more readily absorbed, typically by a large factor. This absorbance feature is
one source of the colour in the world around us.
The propagation of light within a homogeneous material is then in general
described through the propagating wave equation
\[ E(z,t) = E_0 \exp\!\left[\,j\!\left(\omega t - \frac{2\pi n z}{\lambda}\right)\right]\exp\!\left(-\frac{2\pi k z}{\lambda}\right) \tag{2.2} \]
Here E is the electric field amplitude of the propagating wave, z the axis along
which the wave propagates, ω the optical angular frequency in radians/second
and λ the wavelength in vacuum; and the final term represents the attenuation
of the electromagnetic wave. The first term gives the time dependence, and the
second term the distance dependence, of the optical phase. Note that the
imaginary part k of the refractive index determines the level of attenuation.
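To make equation (2.2) and its attenuation term concrete, here is a minimal Python sketch; the wavelength and imaginary-index values are assumed purely for illustration and are not taken from the book:

```python
import numpy as np

def field(z, t, n, k, lam, E0=1.0):
    """Eq. (2.2): E(z,t) = E0 exp[j(w t - 2 pi n z/lam)] exp(-2 pi k z/lam)."""
    omega = 2 * np.pi * 3e8 / lam          # optical angular frequency in vacuum
    phase = omega * t - 2 * np.pi * n * z / lam
    return E0 * np.exp(1j * phase) * np.exp(-2 * np.pi * k * z / lam)

# Power goes as |E|^2, i.e. exp(-4 pi k z / lam), so the half-power depth is:
lam, k_im = 600e-9, 1e-6                   # assumed illustrative values
z_half = lam * np.log(2) / (4 * np.pi * k_im)
print(f"half-power depth ~ {z_half * 1e3:.0f} mm")                     # ~33 mm
print(f"|E|^2 at that depth: {abs(field(z_half, 0.0, 1.5, k_im, lam))**2:.2f}")  # ~0.50
```

Even a tiny imaginary index of 10^-6 halves the transmitted power within a few centimetres, which is why k matters so much in long-distance fibre optics.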
There is yet another aspect to this – suppose the electrons are no longer
bound to individual molecules but free to move. This happens, for example, in
conducting metals and also in the ionosphere around the earth. In this case the
restoring force is due to the electrostatic forces exerted by the positive ion
lattice, as indicated schematically in Figure 2.7.
Figure 2.7 The plasma case where free electrons experience a restoring force from
stationary positive ions. Note there is minimal damping for frequencies above the
inverse of the electron lifetime in the material.
Remember, too, in the case of
conductors, the losses are due to the collisions between the moving electrons
and the stationary ions, a phenomenon encapsulated in the electron relaxation
time for the material concerned.
There is again a moving mass, namely that of the electrons, and so again we
have a resonance dependent upon the density of the free electron cloud and
given by (see Appendix 3):
\[ \omega_p^2 = \frac{q_e^2 N_0}{\varepsilon_0 m_e} \tag{2.3} \]
where qe and me are the electronic charge and mass, N0 is the free electron
density per cubic metre, ε0 is the permittivity of free space and ωp is known
as the plasma resonance (angular) frequency.
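Plugging numbers into equation (2.3) shows why metals reflect visible light but transmit X-rays. A sketch assuming a conduction-electron density for gold of about 5.9 × 10^28 per cubic metre (a standard handbook value; the true optical response of gold also involves interband transitions, which is why the text later quotes a plasma resonance nearer 200 nm than this simple free-electron figure):

```python
from scipy.constants import e, m_e, epsilon_0, c, pi

def plasma_wavelength(N0):
    """Free-space wavelength corresponding to the plasma resonance of eq. (2.3)."""
    omega_p = (N0 * e**2 / (epsilon_0 * m_e)) ** 0.5
    return 2 * pi * c / omega_p

N0_gold = 5.9e28  # assumed conduction-electron density for gold, per m^3
print(f"lambda_p ~ {plasma_wavelength(N0_gold) * 1e9:.0f} nm")  # ~138 nm, in the UV
```

Radiation at wavelengths shorter than this (ultraviolet and X-rays) sees the electrons as too sluggish to respond and passes through; longer wavelengths (visible light) are strongly reflected.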
This plasma resonance has many interesting ramifications. Everyone is
aware that X-rays travel through metal but light does not. Satellite broadcast-
ing in the GHz region comes down from outer space through the ionosphere.
However, much lower-frequency radio waves in the short wave band bounce
off the ionosphere in the same way that light bounces off metals.
The impact of the relaxation time in plasmas (see below) is also important.
As mentioned above, the conduction losses in a metal are basically determined
by the frequency of collisions between the freely flowing conduction electrons
and the stationary ions in the metal. However, if we apply an electric field at a
frequency which exceeds this collision frequency, the inverse of which is the
relaxation time, then there are many fewer collisions between the conduction
electrons and the stationary particles in the metal, so the losses decrease
dramatically – in other words, the metal becomes transparent.
The refractive index of gold as a function of wavelength is indicated in
Figure 2.8. Among other things the rapid variation of the complex index with
wavelength in the visible region (roughly 0.4–0.8 µm) accounts for the differ-
ence in the colour of gold as compared, for example, to the colour of silver.
Most metals in and around the optical and ultraviolet region exhibit an intri-
guing transition between relatively low-frequency behaviour as a conductor
and the higher-frequency behaviour as a relatively transparent dielectric. In
other words, the electrons in the metal constantly turn around so that there is no
current flow and likewise no collisions to cause absorption losses. This
fascinating transition has, within the past couple of decades, evolved into so-
called plasmonics, which seeks to understand and exploit this somewhat
complex evolution.
This discussion has been about light as a wave. There has been no mention
of the photon. These classical wave approaches do give a very reliable guide
and an indispensable insight into the interaction between light and materials.
The photon model also has its place, however. This is most apparent with
[Figure 2.8 (plot): the real and imaginary parts of the refractive index of gold against wavelength, 0–3000 nm, with the visible range marked.]
Figure 2.8 The refractive index of gold indicating its change in behaviour
emerging at frequencies above the plasma resonance, corresponding to about
200 nm wavelength.
Figure 2.9 An energy level representation for a single atom – which approximates
to the behaviour of a gas at low pressure.
regard to the energy levels evident in a material (Figure 2.9), a concept which
has evolved as the ‘conventional’ approach to explaining the wavelength-
selective absorption of light in materials.
The differences between these energy levels and the ground state in a
molecule equal the photon energy required to excite the molecule from the
stable ground state. This absorbed energy then typically (but not always) re-
emerges as heat when the molecule relaxes to its ground state. The same
behaviour can be ascribed to the resonant absorption of mass–spring–damper
systems if we are looking purely to match the frequency of the input light with
the frequency of the mechanical resonance. This merging of the wave (and
mechanical resonance) domain and the photon domain (via energy level
differences) can provide useful insights into material behaviour.
There are lower-frequency transitions in molecules as well, corresponding to, for
example, exciting an entire molecule into rotational rather than electron-orbit
resonances (see Figure 2.6 for the general trends). A microwave cooker which
operates at around 2.5 GHz exploits exactly this effect, tuning to resonances in
water, fats and sugars. Here, however, the photon energy is only of the order of
ten micro-electronvolts (µeV); this is well below the photonics region
discussed in Chapter 1. The scientific community also talks of these frequencies in
terms of molecular resonances, implying an analogous electromagnetic concept.
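The microwave-oven figure is easy to verify: at 2.5 GHz the photon energy hf is around ten micro-electronvolts, orders of magnitude below both visible photon energies (~2 eV) and the room-temperature thermal energy (~25 meV). A one-line check:

```python
from scipy.constants import h, e

f = 2.5e9  # microwave oven operating frequency, Hz
print(f"photon energy ~ {h * f / e * 1e6:.0f} micro-eV")  # ~10 micro-eV
```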
Much of the wave and photon dilemma is encapsulated in the above
discussion. Whilst numerous weighty texts have been published on the subject
there is really no definitive answer. For us it amounts to finding the approach
which gives the most useful model to analyse and understand the photonics
scenario of interest.
[Figure 2.10 (plot): reflectances of the P and S polarisations against incident angle, 0–90 degrees; the P reflectance falls to zero at the Brewster angle.]
Figure 2.10 Reflection and refraction at a planar interface between two materials
differing in refractive index. The diagram at top left shows a ray striking the
interface at angle θi and being partly transmitted at angle θr and partly reflected
back into the higher-index material. The ray shown with a broken line is incident
at the critical angle, and so would travel along the interface. At incident angles
greater than this, total internal reflection occurs. The Brewster angle plot applies
to an index of 1.5 in the higher-index material for light incident from the
low-index side of the interface.
is also shown in Figure 2.10. The partial removal of the S polarisation state
(with electric field vibrations perpendicular to the P state vibrations and to the
direction of the reflected ray) manifests itself in daily life, amongst other things
in the polarising sunglasses designed to reject the (partially) polarised reflected
light from our surroundings.
The discussion thus far concerns a light beam travelling from a low-index
medium to a higher-index medium. When the situation is reversed, there will
be a case when the angle of incidence is high enough for refraction to occur
into the interface plane (i.e. the angle of refraction is 90°), and then the incident
angle is referred to as the critical angle. If the incident angle goes beyond that,
total internal reflection occurs, as implied in Figure 2.10, which also indicates
some reflection for incident angles below the critical angle.
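Both special angles follow directly from Snell's law. A short sketch (the n = 1.5 case matches the Brewster plot of Figure 2.10; the n ≈ 3.5 case corresponds to a typical semiconductor, for which the critical angle is only about 17 degrees, a point that returns with LEDs in Chapter 3):

```python
import math

def critical_angle_deg(n_high, n_low=1.0):
    """Incidence angle, inside the high-index medium, giving 90-degree refraction."""
    return math.degrees(math.asin(n_low / n_high))

def brewster_angle_deg(n_high, n_low=1.0):
    """Incidence angle, from the low-index side, at which the P reflection vanishes."""
    return math.degrees(math.atan(n_high / n_low))

print(f"glass, n = 1.5: critical {critical_angle_deg(1.5):.1f} deg, "
      f"Brewster {brewster_angle_deg(1.5):.1f} deg")            # 41.8 and 56.3 deg
print(f"semiconductor, n = 3.5: critical {critical_angle_deg(3.5):.1f} deg")  # ~16.6 deg
```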
Incidentally, the derivation of Snell’s law (see the Chapter 2 problems) exem-
plifies a very important principle in understanding the coupling of light from one
material or structure into another. This principle is that the projected components
of the wavelengths along the interface must be the same – a concept known as
phase matching. This idea applies to any wave propagating through an interface,
whether the wave is acoustic, electromagnetic or even a water wave.
There are many other possible interfaces. In the above we have assumed a
perfectly planar interface. However, slight roughness will introduce corres-
ponding slight changes in the angles of incidence and refraction until, in the
limit, a perfectly scattering surface would send the light in all directions. Again
we encounter this daily – sunlight entering your room is scattered throughout
the entire volume of the room, by the walls and furniture.
Moving on to a medium-scale structure reaches into diffraction and interfer-
ence: we move from the ray optic approach, which works at large scales, into the
wave optics approach. Much is encapsulated in the classic Young’s slits experi-
ment, shown in Figure 2.11. Light from a single-parallel-beam single-wavelength
source impinges on two slits, separated by a distance d and each of a width
conveniently assumed to be much less than the wavelength. The input light then
reradiates in all directions and produces an interference pattern as indicated. The
interference pattern is determined by the vector sum of the fields directed through
the two paths. Full constructive interference occurs when the path difference dA –
dB is an integer number n of wavelengths, which gives, for y » d,
\[ \frac{d\,x}{y} = n\lambda \tag{2.5} \]
where x is the distance on the screen from the axis of the system and y is the
distance along the axis between the slits and the screen.
[Figure 2.11: the Young's slits geometry – an incident plane wave of wavelength λ meets slits A and B, separation d; paths of lengths dA and dB lead to a point x on the observation screen.]
There is a time delay
between arrivals at a particular point on the screen from the two paths and there
is an implicit assumption in equation 2.5 that the phase delay on the optical
signal is completely predictable for this time difference. Sometimes it is not,
which leads us into the concept of coherence; ultimately this coherence
concept applies to both the spatial and temporal behaviour within an optical
source, a subject to which we shall return later.
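Equation (2.5) says the bright fringes sit at x = nλy/d, so adjacent fringes are separated by λy/d. A quick sketch with assumed, purely illustrative numbers (a helium–neon wavelength, 0.1 mm slit separation, 1 m to the screen):

```python
lam, d, y = 633e-9, 0.1e-3, 1.0  # wavelength (m), slit separation (m), screen distance (m)
spacing = lam * y / d            # fringe spacing from eq. (2.5): x_n = n * lam * y / d
print(f"fringe spacing ~ {spacing * 1e3:.2f} mm")   # ~6.33 mm, easily visible by eye
```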
Much of photonics concerns this medium-scale structure, ranging from
camera lenses and image sensing arrays to DVD players and the display on
your telephone handset. Furthermore, almost all of this discussion is concerned
with wave optics.
At small scales we move into the domain of subwavelength structure in a
material. This domain has only recently become available, with the notable
exception of materials such as colloidal gold, which has been used in pottery for
centuries and for which the colour can range through reds to blue depending
upon the diameter of the tiny gold spheres within the colloid. The resonance
wavelength for the electron plasma circulating the spheres varies as indicated in
Figure 2.12, but gold spheres throughout this dimensional range also absorb
more in the blue than the red, owing to material absorption, the property which
makes ‘normal’ gold appear golden. In current terminology these gold spheres
would be regarded as an example of quantum dots. As colloidal gold they were
simply a way of introducing permanent colour into artefacts. The tiny structural
dimensions take the ‘normal’ colour far from its golden origins.
[Figure 2.12 (plot): peak absorption wavelength, from about 520 nm to 580 nm, against gold nanosphere diameter from 10 to 100 nm.]
Figure 2.12 The variation in the peak absorption wavelength of the electron
plasma as a function of the gold nanosphere diameter. A suspension of 10 nm
nanospheres in water, or in glassware, looks ‘reddish’ whilst 100 nm spheres
appear more towards the blue since now the peak absorption is in the red.
Much of the story here is concerned with the ability of structures to produce
what are, in effect, wavelength filters. Examples of such phenomena occur
extensively in nature – butterfly wings are amongst the most common and
striking. This comes initially as something of a surprise: let’s make something
a different shape and it will appear as something different. However, a few
thoughts on electronic circuits show how we take this for granted. As far as a
radio frequency wave is concerned, a piece of straight wire appears as an
object very different from a coil of wire yet both have dimensions significantly
less than a wavelength. What has become known as ‘nanophotonics’ has been
described as a manifestation of this familiar concept – here light remains an
electromagnetic wave but interacts in the manner of an electric current.
2.4 Summary
Is light a wave, a particle, a ray or an electric current? The answer is yes to all
four possibilities, but it all depends on the context. The preceding discussion
has given a glimpse of all these aspects and exemplified some of the circum-
stances in which light is more readily perceived using one or the other model.
There is some interchangeability but usually an appropriate combination of
points of view is the simplest way to arrive at a usable understanding. It is quite
a fight, say, to model the refractive index through quantum mechanics and
particles, but in general bringing one model into the domain of another, as Niels
Bohr did with his hydrogen atom model, can give useful, workable and, with
care, accurate results. Of course, those of a philosophical nature should consult
examples of the many extensive texts on the nature of light.
2.5 Problems
These problems could involve collaborative searching on the internet.
1. What features determine the colour of the objects around you? Consider, for
example, butterflies, plants, the bloom on a camera lens or some spectacle
lenses, the iris of your eye.
2. What are the light sources around you – for lighting in your room, for your
DVD player, in your fluorescent watch, in a ‘black lighting’ tee shirt, in the
sun, in the TV screen and in those old cathode ray tube TV sets? How do
these sources vary in structure, in their physiological effect on the observer,
in beam quality etc.?
3
Light Interacting with Materials
Here we shall explore the principal phenomena which occur when light passes
through, and therefore interacts with, a homogeneous material.
1 In terms of the dielectric constant ε = n², the Kramers–Kronig relation is
\[ \operatorname{Re}\varepsilon(\omega) = 1 + \frac{2}{\pi}\, P\!\int_0^{\infty} \frac{\Omega\, \operatorname{Im}\varepsilon(\Omega)}{\Omega^2 - \omega^2}\, d\Omega . \]
Figure 3.1 Some of the potential resonant modes in a triatomic molecule, indicat-
ing the range of rotational, vibrational, orbital and nuclear potential resonances, all
of which contribute to the wavelength dependence of light energy, which pro-
duces the characteristic colours of materials, their unique spectroscopic signatures.
(a) A molecule, e.g. water or carbon dioxide, (b) rotational modes, the entire
molecule and bonds rotate, (c) ‘longitudinal stretch’ vibrational modes, (d) ‘lateral
stretch’ vibrations, (e) vibrations of ‘bond orbits’, (f) vibrations of atomic orbits,
(g) nuclear vibrations in individual nuclei.
Figure 3.2 A schematic of the splitting of energy levels as atoms get closer to
each other. For most solids and liquids these levels begin to overlap into energy
bands. Molecules also have similar energy level diagrams – but with significantly
more detail therein.
In groups of molecules there are also other forces at work, most noticeably
in gases, associated with changes in pressure and temperature. Increases in
pressure obviously involve increases in molecular density and consequently
also increases in the value of the refractive index. Additionally, bringing mol-
ecules closer together increases the level splitting, and thereby broadens any
particular atomic or molecular energy level. Bringing molecules together also
increases collision probabilities, thereby broadening the perceived linewidth.
Increasing the temperature speeds up the molecular motion, thereby increas-
ing the mean velocity as the square root of the absolute temperature. Hence,
because of the Doppler shift, the perceived range of energy levels participating
in a particular transition also increases – that is, the absorption line broadens
again. However, with gases there is another factor at work. The preceding
argument applies to a constant-volume rise in temperature. If the volume of the
gas is unconstrained, the answer is somewhat different. The number density of
molecules decreases, with the opposite effect.
These pressure- and temperature-related phenomena are present in the
spectra of liquid and solid dielectric materials but are far less evident than
for gases, except during phase transitions from gas to liquid to solid or, in some
cases, when solids or liquids move from one particular molecular state of
organisation to another.
The linear propagation of electromagnetic waves in conductors is also
important, even in the range of the spectrum where photon interactions become
significant. We have previously mentioned the extrapolation of the Clausius–
Mossotti equation into the free electron case appropriate to conductors. Here
the restoring force is generated by the separation of fixed and mobile charges,
giving rise to the concept of collective oscillations at a plasma frequency,
[Figure 3.3 (plot): skin depth against wavelength, 0.1–1000 microns, for aluminium, gold, silver and platinum, with the region of plasma resonances marked at short wavelengths.]
Figure 3.3 Skin depth in silver, gold, aluminium and platinum as a function of
wavelength. The last of these has significantly higher resistivity, so the skin
depths are substantially higher. The line with negative gradient gives the
resistance in ohms of a gold wire 0.1 wavelength long
and one skin depth in diameter.
Figure 3.4 The concept of ‘virtual’ energy levels, where an incoming photon can
temporarily excite an atom or molecule to a level not within the energy level
spectrum. The photon will escape a short time later. This is a quantum mechanical
equivalent to the Clausius–Mossotti oscillator.
Figure 3.5 (a) The scattering process: energy level diagrams for Raman and
Brillouin spectroscopy, where the virtual state releases a photon into a range of
different ground states. These ground states are characteristic of the constituent
material. (b) A resulting spectrum. The optical frequency scale is much larger for a
Raman spectrum, with its greater energy differentials. Because of this, Raman
spectra are more easily filtered from the incoming excitation spectrum.
different from the incoming photon, namely Brillouin and Raman scattering.
Brillouin scattering is initiated at a lower power density threshold, with
corresponding smaller wavelength shifts; for Raman scattering the wavelength
shifts are much larger.
This explanation indicates one important feature of these processes: the
range of different ground states which can be occupied gives a spectrum of
energy differences between the outgoing photons and the incoming photons.
This range of differences is uniquely determined by the material through which
the light is passing; the effect is present over a broad range of input optical
wavelengths and, in particular, at wavelengths well away from the absorption
spectrum of the sample. Here, then, we have another optical means for
characterising materials, namely examining the Brillouin or Raman spectra
and, thanks to the greater photon energy separation between input and output,
Raman spectra are more useful (Figure 3.5).
This gives rise to an important question – what happens to the energy
differential between the incoming and outgoing photons? The answer is that
it is released as, or sourced from, phonons, that is, thermal vibrations. In the
case of Brillouin scattering these phonons create an acoustic spectrum
typically in the tens of GHz region. These phonons – very high frequency
Figure 3.6 Fluorescence spectroscopy. Here the metastable excited state is well
defined but has a short lifetime, in contrast with the Raman and Brillouin cases.
Decay from it to the material excited state produces heat. Decay from the material
excited state produces luminescence.
acoustic waves – are eventually dissipated as heat. For Raman scattering these
phonons are in the terahertz region and again finally manifest themselves as
heat energy.
Another important and surprisingly commonplace non-linear optical
phenomenon is fluorescence, as illustrated diagrammatically in Figure 3.6. Here
a high-energy photon excites an electron to a well-defined transient ‘metastable’
state and from this state the molecule relaxes into an intermediate level corres-
ponding to an actual excited energy level within the atomic structure. Thereafter,
the molecule collapses to its ground state, emitting a photon with a characteristic
wavelength. This usually occurs in liquids and solids (only rarely in gases), so
there will be a range of energy levels available, giving the fluorescent radiation a
characteristic output spectrum. Fluorescence is similar to the Raman effect but
had established its identity long before the Raman effect was discovered, in 1928.
Also, there is one important difference. In fluorescence the incoming photon
excites the molecule into a short-lived but well-defined energy level. The molecule
then relaxes via the two-stage process mentioned above into states related to the
complex refractive index of the fluorescent material. For the Raman effect the
initial transient state can be at any level and is not related to the complex
refractive index of the material concerned. Additionally, Raman scattering
occurs for all materials, but fluorescence for relatively few.
The overall results are very similar, though fluorescence is the more famil-
iar, from black lighting, where ultraviolet lights trigger visible fluorescence in
clothing and décor in nightclubs, to glow worms, which excite their radiating
molecules through biological rather than a directly optical interaction, in a
process usually referred to as bioluminescence. The basic process is the same –
an input stimulus, in the glow-worm context, biochemically excites a molecule
to an intermediate state prior to its dropping a little energy before returning to
its ground state and emitting a corresponding photon. There are many other
variations on the basic theme. Electro-luminescence is perhaps the most
familiar, where an electric current excites atoms which thereafter relax and
emit photons; this happens in, for example, a light emitting diode. All these
variations, however, have one common feature in that the light emitted is
characteristic of the material from which it originates and consequently can,
among other possibilities, be used as an optical signature in materials analysis.
Figure 3.7 Photoacoustic conversion. The light is absorbed in the optical skin
depth, producing a thermal wave whose characteristic skin depth is dependent
upon the frequency at which the light is modulated.
optical skin depth, which is usually (Figure 3.3) less than 100 nm. All the heat is
initially generated in this tiny volume but rapidly diffuses away into the bulk
material. The thermal conduction processes are exactly analogous to the elec-
trical conduction process so that, again, if the dynamic heat source (our incident
light beam) is switched on and off at a particular angular frequency ω, at this
frequency the thermal diffusion depth will be given by
\[ \delta_{\mathrm{th}} = \sqrt{\frac{2\,\sigma_{\mathrm{th}}}{\Delta\, S_{\mathrm{th}}\, \omega}} \]
where σth is the thermal conductivity, Δ is the density and Sth is the specific
heat. Values of the thermal diffusion depth for brass, which is typical of many
metallic materials, are shown in Figure 3.8 for a range of frequencies ω up to
around 100 MHz. Acoustic wavelengths for compressional waves in a typical
metallic material are also plotted for comparison. We can then see that if the
light beam is switched on and off sequentially then the local temperature will
rise to a specific depth, which, in this range of acoustic frequencies, is much
less than the acoustic wavelength. This will induce local thermal expansion,
resulting in pressure differentials and hence ultrasonic waves. Such laser-
generated ultrasound is an extensive topic in its own right, with important
applications in acoustic imaging and non-destructive testing, and numerous
full texts describe its principles and applications.
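As an illustration of the diffusion-depth relation above, here is a minimal sketch using assumed handbook values for brass (thermal conductivity ~110 W/(m K), density ~8500 kg/m³, specific heat ~380 J/(kg K)):

```python
import math

def thermal_diffusion_depth(sigma_th, rho, S_th, omega):
    """delta = sqrt(2 * sigma_th / (rho * S_th * omega)), per the relation in the text."""
    return math.sqrt(2 * sigma_th / (rho * S_th * omega))

sigma_th, rho, S_th = 110.0, 8500.0, 380.0   # assumed values for brass, SI units
for f_mod in (1e3, 1e6):                     # light-modulation frequencies, Hz
    depth = thermal_diffusion_depth(sigma_th, rho, S_th, 2 * math.pi * f_mod)
    print(f"f = {f_mod:9.0f} Hz -> thermal diffusion depth ~ {depth * 1e6:6.1f} microns")
# ~104 microns at 1 kHz and ~3.3 microns at 1 MHz: far smaller than the acoustic
# wavelength at those frequencies, which is what enables laser-generated ultrasound.
```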
Thus optically induced localised heating, typically from a short-pulse laser
beam, produces a highly localised temperature rise, initially within the optical
skin depth and rapidly thereafter spreading further into the target material.
This leads to another observation, that the smaller the laser spot and the greater
[Figure 3.8 (plot): thermal diffusion depth and acoustic wavelength in brass against frequency, spanning roughly 1 µm up to 1 m, with the structural dimensions of interest marked.]
Figure 3.8 Relative dimensions for thermal diffusion depths and acoustic wave-
lengths, here given for brass.
the energy density applied within a given time period (i.e. the higher the peak
power density), the higher the localised temperature rise within the optical skin
depth. This highly localised short-term temperature rise can be high enough to
melt or even evaporate the metal.
An important application is laser machining, now extensively used in many
production processes. For some materials, notably polymers, this machining
process may not necessarily involve melting the target but, rather, introducing
light at a high enough photon energy to break the chemical bonds which hold
the polymeric molecules together.
Closely related to this is laser-induced breakdown, typically but not exclu-
sively in gases, which can give a highly intense source of light stimulated by
tightly focusing a laser beam in the sample material (Figure 3.9). In this case
breakdown is induced by the large electric field within the light beam at the
focus, similar to the arcing sometimes seen around high-voltage power lines.
This laser-induced breakdown produces a spectrum which is characteristic of
the material in which the breakdown occurs. Laser-induced breakdown spec-
troscopy (LIBS) is a spectroscopic tool with an immense range of application
spanning atmospheric analysis to the characterising of biological cell structures.
Turning light into heat on a less dramatic scale also has its applications.
Indeed, thermally driven solar power can provide hot water in warm countries.
Large-scale arrays of mirrors focussing the sun onto a black boiler structure, a
Figure 3.9 Laser induced breakdown spectroscopy, where a tightly focussed laser
beam produces very high, very localised electric fields which ionise the sample
into a gas, producing a characteristic, usually line, spectrum.
solar furnace, can even provide steam for electrical power generation. In a very
different context, imagine that you could fabricate the perfect ‘black body’,
which would absorb all photonic wavelengths. The temperature rise in such a
body would then be proportional to the optical power incident upon it. If this
thermal change were coupled into an electrical thermistor or a thermocouple
then measuring the temperature change would give a direct reading propor-
tional to the optical power; this is realised in practice in a device called a
bolometer, which has applications across the electromagnetic spectrum.
In other forms of photodetection, incident light creates an electric signal,
usually a current, proportional to the optical power absorbed. The origins of
this direct conversion lie in the photoelectric effect, which led to the concept
of the photon via the idea of a characteristic ‘work function’ necessary to
eject an electron from a cathode material into a vacuum.
The photomultiplier (Figure 3.10) was thereafter for many years the princi-
pal means through which optical signals could be turned into proportional
electric currents. However, it is important to note that in general one photon
creates only a single electron, so the resulting current is proportional to the
photon arrival rate rather than the optical power. It is also important to note
that the idea of the photomultiplier is that each photon-generated electron
produces many electrons via the electrode cascade. This process is statistical
and introduces noise into the detected current. There have been many vari-
ations on the photomultiplier theme. For example, early television cameras
were based upon the vidicon, a device which operated through a directed
electron beam and a photocathode on which an image was projected.
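The point that the photocurrent tracks the photon arrival rate rather than the power directly is worth a number. A sketch with assumed illustrative values (1 µW at 850 nm, 80% quantum efficiency; the one-electron-per-photon rule applies before any multiplication stages):

```python
from scipy.constants import h, c, e

def photocurrent(P_opt, lam, eta=1.0):
    """One electron per absorbed photon: I = eta * e * (optical power / photon energy)."""
    arrival_rate = P_opt / (h * c / lam)   # photons per second
    return eta * e * arrival_rate

I = photocurrent(1e-6, 850e-9, eta=0.8)    # assumed: 1 microwatt at 850 nm, eta = 0.8
print(f"photocurrent ~ {I * 1e6:.2f} microamps")   # ~0.55 uA
```

A microwatt of light thus yields only about half a microamp even at perfect efficiency, which is why the multiplication stages of photomultipliers and avalanche diodes matter.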
Figure 3.10 The principle of the photomultiplier tube. The input light causes
electrons to escape from the cathode and these are multiplied through successive
intermediate electrode stages.
[Figure: (a) a photodetector schematic; and a plot of responsivity (A/W) against wavelength for Si, Ge and InGaAs photodiodes, with quantum efficiency contours at 10%, 50% and 90%.]
[Figure 3.13: energy–momentum band diagrams – (a) an indirect band gap, e.g. silicon, where photon absorption creates an electron–hole pair in two steps; (b) a direct band gap, e.g. gallium arsenide, where the pair is created in a single step.]
Figure 3.13 Band structure diagrams for (a) indirect-gap and (b) direct-gap
materials.
raised to a relatively uniform and high enough level to ensure the fastest
possible removal of any free carriers which have been generated. However,
there is only one electron–hole pair for each absorbed photon.
Also shown in Figure 3.11 is the principle of the avalanche photodiode.
Here we again have a short p+ region through which, as with the p–i–n
photodiode, the input photons arrive, followed by a relatively low-field intrinsic
region. However, immediately before the n+ substrate contact region there is a
short, carefully controlled, p+ type region. An appropriate voltage applied to
this structure creates a high electric field zone in the region just before the free
carriers (here electrons) arrive at the n+ (positive) contact. The design of these
diodes is quite tricky, since the idea is that in the high-field region the electric
field is just sufficient to cause controlled multiplication (but avoiding total
[Figure 3.14(a): a thin p+ top layer over a lightly doped n region and n+ substrate; hole diffusion leaves negative charge behind, electron diffusion leaves positive charge.]
Figure 3.14 (a) An unbiased p–n junction, showing the origins of the built-in
voltage. (b) This is changed by providing an external load and injecting photons of
energy exceeding the bandgap Eg to produce a flow of carriers for a solar cell. In
both diagrams the lightly shaded arrows indicate natural diffusion into the
depletion layer.
breakdown) of the input electrons arriving from the intrinsic region, therefore
increasing the detected current by a controlled factor. There is inevitably some
statistical variation within this process, as with a photomultiplier, so the
avalanche diode is subject to additional current fluctuations. These are appar-
ent as additional noise. However, the extra sensitivity garnered from the use of
this process often far outweighs the noise penalty. The light sources them-
selves also carry their own inherent and inevitable noise component, as will be
discussed in more detail in Chapter 5.
Thus far, we have briefly explored the action of the p–n junction as a
photodetector when operating under reverse bias. However (Figure 3.14), even
when unbiased from external sources, a p–n junction sets up its own depletion
layer and its own in-built, albeit small, reverse bias: the free electrons in the
n-type material diffuse into the p-type material and the holes move vice versa.
\[ B_\lambda(\lambda, T) = \frac{2hc^2}{\lambda^5\left[\exp\!\left(hc/\lambda k_B T\right) - 1\right]} \quad \text{in W/(sr m}^3\text{)} \tag{3.5} \]
[Figure 3.15 (plot): spectral density (kW/(sr m² nm)) against wavelength, 0–3 microns, for 3000 K, 4000 K and 5000 K sources, with the visible spectrum marked.]
Figure 3.15 Planck’s formula for black body radiation from sources at various
temperatures with the visible spectrum superimposed.
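Equation (3.5) is straightforward to evaluate numerically; the sketch below reproduces the trend of Figure 3.15, with the emission peak moving into the visible as the source temperature rises:

```python
import numpy as np
from scipy.constants import h, c, k

def planck_B(lam, T):
    """Spectral radiance B_lambda(lam, T) of eq. (3.5), in W / (sr m^3)."""
    return 2 * h * c**2 / (lam**5 * (np.exp(h * c / (lam * k * T)) - 1))

lam = np.linspace(0.1e-6, 3e-6, 3000)      # 0.1 to 3 microns
for T in (3000, 4000, 5000):
    peak = lam[np.argmax(planck_B(lam, T))]
    print(f"T = {T} K -> peak emission at {peak * 1e6:.2f} microns")
# 0.97, 0.72 and 0.58 microns respectively: only the hottest source peaks
# inside the visible band.
```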
carbon dioxide, the more the absorption; hence global warming. Furthermore,
carbon dioxide is not the only gas which absorbs within this spectral range –
atmospheric methane is another contributor, with a far higher (by a factor >20)
absorption cross section, but with a somewhat shorter stable lifetime and lower
concentration than that of the CO2 in the atmosphere.
As electricity became more available and understood, the concept of passing
electric current through a filament, thereby heating it to produce light, grad-
ually emerged, with early demonstrations from Alessandro Volta and Humphry
Davy consolidating into Edison’s carbon filament lamp, enclosed in a
weak vacuum to prevent burning of the filament. This well-known invention
remained in everyday use for over a century. It was the best readily available
light source despite its inevitably inefficient (Figure 3.15) conversion of heat
into visible light: most of the radiated energy appeared in the infrared, heating
the immediate surroundings. The colour temperature of an incandescent elec-
tric light bulb is typically around 2000 K, which is less than half that of natural
daylight, and corresponds to a peak emission wavelength of about 1.5 microns.
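That peak-wavelength figure follows from Wien's displacement law, λ_max = b/T with b ≈ 2.898 × 10⁻³ m K (obtainable by differentiating equation (3.5)):

```python
b = 2.898e-3                     # Wien displacement constant, m K
for T in (2000, 5800):           # incandescent filament vs. the solar surface
    print(f"T = {T} K -> peak ~ {b / T * 1e6:.2f} microns")
# ~1.45 microns for the filament (deep in the infrared), ~0.50 microns for sunlight.
```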
Technically, in spite of their huge contribution to human well being over
many years, the vast majority of incandescent (i.e. hot-filament) light bulbs do
not directly exploit the spectroscopic properties of the materials involved in
their fabrication. However, the fluorescent light bulb is deeply rooted in
spectroscopic understanding. Figure 3.16 illustrates the principle of operation
Figure 3.16 The basics of the fluorescent lamp, where a UV discharge introduced
by electrical breakdown of the gas (neon) in the tube excites the phosphor to
produce visible radiation.
of a fluorescent lamp, indicating the origins of the relatively dull warm-up time
before the lamp becomes fully operational. The effective colour temperature,
which is determined by the properties of the phosphor, is typically around
4000 K, but the spectrum also includes some sharply defined lines. These lines
depend on the combined properties of the gas discharge lamp used to excite the
phosphor, the phosphor itself and the local temperature distributions. A typical
spectrum is also shown in Figure 3.16. This combination of a source spectrum
and a phosphor spectrum, originally conceived over a century ago, continues
to find application in today’s light emitting diode illuminators.
The light emitting diode, based upon a semiconductor p–n junction, has
evolved over the past half century or so into the preferred source of artificial
light, thanks primarily to the greater electrical-power-to-optical-power conver-
sion efficiency and the significantly enhanced overall reliability. The principle
is relatively straightforward (Figure 3.17). First and foremost, the semicon-
ductor must be a direct bandgap system, to minimise ‘non-productive’
electron–hole recombination resulting in phonons and hence heat. The direct-
bandgap system significantly improves the probability of electron–hole recom-
bination resulting in photon emission. The second principal consideration is that
the recombination region must be situated as close as possible to the surface
Figure 3.17 The basics of a light emitting diode, in which free carriers combine in
the lightly doped n region to produce light.
[Figure 3.18: with a refractive index of ~3.5 the critical angle is only ~17 degrees, so most generated light is lost to total internal reflection; one solution is an intermediate-index coating or lens.]
Figure 3.18 (left) Light lost through total internal reflection and (right) a typical
solution comprising a surface mounted lens.
[Figure 3.19 (plot): relative sensitivity against wavelength, 400–700 nm, for dark-adapted and daylight-adapted vision.]
Figure 3.19 The visual sensitivity of the eye showing the shift to the blue for
dark-adapted rod vision. Also, the rods and cones have different spatial distribu-
tions on the retina, giving different resolutions in bright sunshine compared with
those in the gathering dusk.
watt then corresponds to a conversion efficiency of about 15%, far higher than
the conversion efficiency of an incandescent bulb.
There is much more to understanding human vision than these comments
can provide. However, the different roles of rods for low light vision and cones
for higher light levels, and their different spectral sensitivities, provide some
clues to the roles of real and artificial illumination in human lifestyles. In
particular, the relatively low-level ‘warm glow’ of incandescent lamps is
arguably conducive to a much more restful frame of mind than the much
brighter blue-emphasised light from LED illumination. Indeed, as LED ‘white
lights’ become more widespread, there is an increasing awareness that such
illumination not only tends to disturb sleep patterns but also produces visual
confusion when one is moving between dark areas and bright areas illuminated
by LED streetlamps. There is much to learn on optimising the balance between
spectral distribution, brightness and human physiological and psychological
reactions in bringing this application into broader acceptance.
The LED has, however, found wide acceptance after a modest start in
calculator displays of just a few digits. Not only is it evolving into a preferred
form of illumination, it has also found its place in display technology, from the
huge, on the football field, to the tiny, in hand-held devices. The recent
emergence of the OLED (organic LED) has added flexible screen technology
to the list of possibilities.
There is an important variation on the LED. Thus far we have looked at
surface emitters and acknowledged the impact of total internal reflection. If we
can confine the radiation within a waveguiding region and arrange for it to
come from just one face then, in principle, a much more efficient device is
realised. Additionally, if we make the system sufficiently small in cross
section, it becomes compatible with optical fibres; the edge-emitting LED is
an important light source in fibre communications.
Suppose that we make a relatively minor modification to the basic LED of
Figure 3.18 and coat both ends with a wavelength-selective mirror with a very
high reflection coefficient (say 99%). This will send nearly all light which is
endeavouring to escape from that end surface back into the cavity where the
light is generated. We have to ensure that the mirrors are carefully aligned with
each other – preferably they should be perfectly parallel. In this case any light
which bounces back into the cavity cannot escape (Figure 3.20), and in transit
some of it will be reabsorbed in the semiconductor between the reflective
surfaces to create yet more electron–hole pairs. Whilst some electrons and
holes in such pairs may go their separate ways and be reabsorbed in the
contacts, the majority will recombine to form another photon. This creates
yet more light, in addition to the light created through the ongoing current
injection. But this mechanism only works effectively if the wavelength of
this new photon corresponds to the reflection wavelength of the mirrors.
Within a very short time the light bouncing to and fro will select the specific
resonant frequency dictated by the mirror spacing, and the outcome is the
semiconductor laser. The semiconductor laser takes two basic forms, the
edge emitter and the surface emitter, and is now essential in optical fibre
communication systems, in CD and DVD players and in the ubiquitous laser
pointer.
Laser light has special properties, namely spatial and temporal coherence,
which express the correlation of the optical phase factor across the beam
emitted by the laser and in time. Temporal coherence is often expressed as
a coherence length along the direction of propagation of the beam; the
corresponding coherence time describes how long the optical phase remains
predictable. The coherence time is very short for an LED and much longer
for a laser. It is closely tied to the frequency linewidth of the laser oscillator,
which quantifies the output frequency jitter over an observation period long
compared with the coherence time. It is a reasonable assumption that the
coherence time is the inverse of the
linewidth in frequency units, though there will be slight variations on this
depending on the detailed definitions applied.
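Taking the coherence time as simply the inverse of the linewidth, as suggested
above, gives a quick feel for the numbers; a minimal sketch, with the two
linewidths chosen as illustrative assumptions rather than values from the text:

c = 3.0e8  # speed of light, m/s

# Illustrative linewidths: an LED spans tens of nanometres (~1e13 Hz);
# a single-frequency semiconductor laser can reach ~1 MHz.
for name, linewidth in (("LED", 1.0e13), ("laser", 1.0e6)):
    tau_c = 1.0 / linewidth   # coherence time, s
    L_c = c * tau_c           # coherence length, m
    print(f"{name}: coherence time ~ {tau_c:.1e} s, length ~ {L_c:.1e} m")

# The LED comes out at ~30 microns of coherence length; the laser at
# ~300 m - the enormous contrast exploited throughout interferometry.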
Spatial coherence is very low in a light emitting diode, in which individual
photons are generated from randomly selected energy level differentials in an
emitter and emerge in random directions. However, in a laser the light emitted
from the cavity has been selected in both frequency and direction by the
Figure 3.20 (left) Turning an LED into a laser using highly reflective surfaces
and (middle and right) the concepts of temporal coherence (is the optical phase
distribution along the beam predictable?) and spatial coherence (is the optical
phase distribution across the beam predictable?).
Figure 3.21 The generic schematic of a laser system using external excitation
into an optically active gain medium, with feedback from high-reflectivity
wavelength-selective mirrors.
Figure 3.22 How a mechanical force can modify the refractive index of a solid –
here shown with a regular molecular structure – by forcing atoms closer together
or further apart. Generally, any anisotropic force distribution will, as shown,
produce birefringence. External electric fields can produce similar effects in some
types of crystalline solid.
in the refractive index, and as we have seen, the greater the molecular density,
the greater the refractive index at a particular wavelength. There is, then, one
obvious means for changing the index of a material and that is to simply
compress it. We have already seen this stress effect in the behaviour of gases
under pressure. In solids it is sometimes called photoelasticity and character-
ised through a stress-optic coefficient.
The stress effect in solids depends upon the way in which the forces are
applied. For example, an isotropic pressure brings the molecular structure
closer together in all directions whilst a force applied from a single direction
(Figure 3.22) compresses the structure in the direction of the force but, as
expressed in Poisson’s ratio, causes expansion in the other directions. Conse-
quently, light polarised with its electric vector in the direction of the compress-
ing force will see an increase in the index, whilst light polarised orthogonally
will see a decrease. Thus birefringence is induced.
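A minimal sketch of the stress-optic law characterised above, Δn = Cσ; the
coefficient is an illustrative, glass-like order-of-magnitude value, not a figure
from the text:

C = 3.0e-12     # stress-optic coefficient, 1/Pa (illustrative)
sigma = 10.0e6  # applied uniaxial stress, Pa (10 MPa, assumed)

# Induced birefringence between light polarised along and across
# the direction of the applied force:
delta_n = C * sigma
print(f"induced birefringence ~ {delta_n:.1e}")  # ~3e-5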
This effect could, for example, form the basis of an acousto-optic phase
modulator. Here, an optical beam passing perpendicularly to the direction of travel
of an acoustic wave in a transparent liquid or solid, and with dimensions in all
directions significantly shorter than the acoustic wavelength, will see the index
oscillate as the acoustic wave passes through the optical beam. Consequently, the
optical beam will experience a variable delay at the acoustic frequency.
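The size of the resulting phase swing follows directly; a minimal sketch, with the
acoustically induced index change and interaction length as assumed illustrative
values:

import math

wavelength = 1.55e-6  # optical wavelength, m
delta_n = 1.0e-5      # peak acoustically induced index change (assumed)
length = 5.0e-3       # optical path through the acoustic beam, m (assumed)

# Peak optical phase excursion, oscillating at the acoustic frequency:
delta_phi = 2.0 * math.pi * delta_n * length / wavelength
print(f"peak phase excursion ~ {delta_phi:.2f} rad")  # ~0.2 rad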
By the same token, if we pass an acoustic wave through a relatively large
transparent material extending over many acoustic wavelengths we have a
travelling periodic variation in refractive index. If we then launch a wide
optical beam (compared to the acoustic wavelength) across this structure then,
via the acoustic wave, we have the basis of an electronically tunable diffraction
grating system, of which more in the next chapter.
spatial light modulators (as in a digital projector), tunable filters and a host of
other useful devices. There are indeed many possibilities!
3.6 Summary
The most obvious optical property of any material is its refractive index. The
real part indicates the speed with which an optical phase front propagates (at
the phase velocity) through the material and the imaginary part indicates the
inherent absorption losses within the material. The real and imaginary parts
are, in common with any other linear oscillatory structure, linked through
the Kramers–Kronig relationships. These optical phenomena can also be
viewed in terms of quantum mechanical energy levels and photon inter-
actions. Both approaches give useful insights, and the choice of approach
is, in the final reckoning, a matter of suiting the concept to a particular
situation.
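For reference, one common form of the Kramers–Kronig relation connects the
real index to the absorption coefficient α (P denotes the Cauchy principal value);
this particular form is a standard result, quoted here rather than derived in the
text:

n(ω) – 1 = (c/π) P ∫₀^∞ α(ω′)/(ω′² – ω²) dω′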
These material properties all vary significantly with optical wavelength, as
they also do at wavelengths well outside the optical region. This variation
provides a unique signature and is an indispensable tool in the characterisation
of materials through spectroscopy. There are also non-linear effects, notably
in Raman and Brillouin spectroscopy, which provide useful signatures that are
independent of the probing wavelength. However, these do require much
higher optical power densities.
The absorption of light in materials also has numerous manifestations:
absorbed light becoming heat via a phonon excitation process; re-emergence
of the light at a lower photon energy in fluorescence and phosphorescence;
electron–hole pair generation and photo-detection; laser driven photo-
acoustic effects. These processes all operate, to some extent, in reverse and
the optical properties of materials can themselves be modified in response to
numerous external stimuli. We have mentioned mechanical forces and elec-
tric fields but anything which can modify the local molecular structure and/or
the binding forces between nucleus and electrons within that structure will
also change the apparent refractive index; consequently, intense heat and
nuclear radiation are two other possible candidates for changing optical
material properties.
Much of photonics is concerned with understanding the optical properties of
materials and applying this understanding to a particular situation. Here we
have only mentioned briefly the underlying theoretical analysis necessary for
detailed design. However, a basic conceptual understanding is essential in
identifying and utilising appropriate optical materials.
3.7 Problems
1. (a) The sunlight that reaches the outer atmosphere of the earth has a spectrum
very similar to that of a black body at about 5775 K (see Figure 3.15).
Discuss how the spectrum might appear when it reaches the earth’s
surface and explain any differences, both in spectral structure and in
power density per unit wavelength.
(b) Bright sunshine reaching the earth has a power density of ~1 kW per
square metre. Consider the sunny-day ‘magnifying glass onto paper’
experiment. What would be a reasonable estimate for the power
density in the focussed spot?
2. (a) The value of the Kerr coefficient for silica can be obtained from data
given just after equation 3.3. Using this value, estimate the Kerr-
induced phase change in a total optical path of 50 km of optical fibre
carrying on average 1 mW of power in a circular cross sectional area
8 microns in diameter. The input power is attenuated in transmission –
hence we use the average power transmitted. If we were to model this
more accurately and take the variation around the average into
account, would the total perceived phase delay increase, decrease or
stay the same – and why? (You may need to look up an appropriate
value for the fibre attenuation.) Additionally, the power density will not
be uniform across the fibre cross sectional area; the typical fibre mode
shape is roughly Gaussian across the diameter. Using the same logic, will
this increase, decrease or not significantly change the impact of the Kerr
effect from the original ‘uniform in all directions’ estimate?
(b) Suppose two optical signals at different wavelengths are introduced
into a fibre at power levels high enough to give significant Kerr
effect phase changes. What changes might you expect to find in the
spectrum of the output signals, and why? State your criteria for a
significant change.
3. (a) There are two basic approaches to detecting light – as photons and when
absorbed as heat. Light can also be regarded as an electromagnetic field.
For radio waves we use this field to excite movement in electrons in a
wire – why don’t we do this with light? Or could we also do it with
light?
(b) Compare the basic properties of (i) the spectral response of thermal
detectors (bolometers) and bandgap detectors and (ii) discuss the
factors influencing the intrinsic sensitivity of the two detection pro-
cesses, endeavouring to derive some comparative performance esti-
mates (the reason why photodetection is dominated by photon
detection processes might be a starting point).
(c) Suppose you have a bandgap-based detector material with a bandgap of
1 eV and a quantum efficiency equal to zero above the threshold
wavelength and unity immediately below the threshold wavelength.
Calculate the responsivity in amperes per watt for such a detector over
the wavelength range 0.1 to 2 microns. Explain why this curve looks
nothing like the practically observed values in Figure 3.12 and why
there is a peak in the responsivity in the curve which you have derived.
(d) You are given the task of optimising the sensitivity of a detector system
to visible light (say to 300–800 nm wavelengths, which goes a little
beyond the visible at both ends) that is focussed down to a spot. You
have no limitations on technological design for the detector. How
would you approach this?
4. (a) Estimate the percentage of generated light that actually escapes from a
simple planar-surface light emitting diode which produces light through
the full 4π steradians solid angle.
(b) There are lensed versions of light emitting diodes mentioned in the
chapter – design what you feel would be an optimum lens using ray
optics. (Assume that glasses of refractive index up to 2 are available.)
Using the formulae for reflections at interfaces and similar effects given
by the Fresnel equation (see equation 4.4 in Chapter 4), estimate the
efficiency of your modified design. How might you improve on this
design? Does your design include a consideration of the glasses that are
actually available for the lens? If not, what would the implications be of
replacing your ‘optimum’ glass with the best available?
(c) We have also mentioned lasers in the context of light generation from
semiconductors and gases. The operating mechanism was described in
the text. It involves a combination of a photon bouncing back into the
active region and being absorbed and the consequent excited state in the
material producing another, coherent, photon before it hits the electrodes
in the generation region to become ‘electric current’. Think through this
process carefully and see if you can (perhaps collectively) arrive at a
relationship between regeneration times, lifetimes and feedback percent-
ages in the process: the photon lifetime needs to be matched by the
regeneration time for the steady state to be reached. Look up on the
internet the laser ‘rate equations’ to rationalise your thoughts with the
algebra! It can all be found in the description in the chapter, but careful
thinking through the process is essential!
5. (a) Polarising sunglasses are readily available. Try twisting the frame of a pair
of polarising sunglasses through 90° on a sunny day and explain the
differences you see whilst changing the angle. Then take two pairs and
cross the polarisers so that no light comes through. (You may also have
access to polarising sheets – readily available from many internet sup-
pliers.) Place a transparent plastic sheet between the two crossed polarisers
and stretch it in various directions. Explain the patterns and also the colours
you see.
(b) The above approach is sometimes used as a means for examining
stresses in plastic samples. Usually the results are taken qualita-
tively – how would you make them quantitative?
6. (a) You have been given the task of designing a silicon photodetector. In
silicon the saturation velocity for electrons (roughly speaking, this is the
‘terminal’ velocity at which any increase in field causes no further
increase in velocity) can be taken as 10⁷ cm/s and saturation is reached
at a field level of 20 kV/cm. The ionisation field can be taken as
~300 kV/cm. Assume a 10-micron-long slightly p-doped depletion
region and 1 micron of p+ (see Figure 3.11(b)). In your estimates, use
Gauss’s law to arrive at the doping levels needed to achieve the field
levels for saturated velocity operation from the carriers, and from this
estimate obtain the bias voltage.
(b) What would be the potential maximum detection bandwidth for
modulation applied to the optical signal incident on the structure?
(c) Look up the optical absorption coefficients in silicon and comment, with
reasons, on the wavelength range over which this 10-micron / 1-micron
design is practical.
4
Light Interacting with Structures
4.1 Dimensions Much Greater than the Wavelength
Figure 4.1 Dispersion from a triangular prism with a parallel white beam incident
at an angle on one face, showing the effect of the different refractive indices for
red and blue.
Figure 4.3 The paraxial approximation image relationships for a thin convex lens
of focal length f: an object at distance u from the lens produces an image at
distance v, where 1/u + 1/v = 1/f.
make, for example, compact large-area reflectors and even to form the basis of
the ‘pentaprism’ viewfinder systems in some types of single-lens reflex cameras.
These features of the right-angled prism give this direct reflection effect even
for slight changes in the angle of incidence of the input beam. This back
reflection effect also occurs for two mirrors with their surfaces at right angles
to each other, as indicated in Figure 4.2 – a configuration often referred to as a
retro-reflector. This retro-reflector is somewhat limited, because a more careful
examination shows that it only works for an input beam incident at right angles
to the out-of-plane surface of the mirror. However, this can be made into a much
more versatile system: a corner cube retro-reflector, with reflecting surfaces on
three mutually perpendicular faces of an opened cube. Taken a step further, a
spherical retro-reflector can effectively cope with a wide range of input angles,
as also illustrated in the figure. These retro-reflectors and several variants
thereon feature prolifically in everyday life, in the ubiquitous cat’s eyes on
roads, in reflective jackets, traffic signs, LED optics and a host of other places.
The lens – typically spherical and convex, as sketched in Figure 4.3 – is
probably the most familiar imaging structure. The figure also indicates the
basic principle of operation through refraction at the input and output faces.
For the so-called thin-lens paraxial approximation, where all the angles of
incidence and refraction obey the sinθ ~ θ approximation, the lens formula
applies:
1/u + 1/v = 1/f        (4.1)
where the object distance u is related to the image distance v through the focal
length f. This shows that, for example, if the object is at infinity (that is, a
parallel beam arrives) then the focal point is also the image point.
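Equation (4.1) rearranges conveniently for computation; a minimal sketch:

def image_distance(u, f):
    # Thin-lens image distance v from 1/u + 1/v = 1/f (equation 4.1).
    return 1.0 / (1.0 / f - 1.0 / u)

f = 0.10                    # focal length, m (assumed)
for u in (1e9, 0.5, 0.15):  # object at 'infinity', then closer in
    print(f"u = {u:g} m -> v = {image_distance(u, f):.3f} m")

# The object at (effectively) infinity images at v ~ 0.100 m, the focal
# point, as noted above; at u = 0.15 m the image recedes to 0.300 m.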
For a practical lens to be used in the visible range, an important caution must
be noted. We have already mentioned that the refractive index for the blue part
of the spectrum is typically higher in glass than that for the red part of the
spectrum, so the focal length for blue light will be shorter than that for red
light – giving rise to chromatic aberration. There is also another important
observation – the lens is an aperture and so not only will there be ray
propagation through the lens but there will be diffraction (discussed later) at
the edge points; this diffraction will in turn create a ‘blur’ on the idealised focal
point. The angular spread of this blur will be of the order of the wavelength
divided by the lens aperture D and consequently the blur reduces as the
aperture gets larger. This also leads to criteria for lens resolution, since the
minimum feature size that this lens will be able to discriminate when imaging
will also be of the order of the spot size. Again, this is wavelength dependent,
with better resolution in the blue than the red.
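A minimal sketch of this order-of-magnitude blur argument, taking the blur angle
as simply λ/D and ignoring the usual numerical prefactor; the aperture and focal
length are assumed illustrative values:

wavelength = 500e-9  # m
D = 25e-3            # lens aperture, m (assumed)
f = 50e-3            # focal length, m (assumed)

blur_angle = wavelength / D  # radians, order of magnitude
spot = blur_angle * f        # blur size at the focal plane, m
print(f"blur angle ~ {blur_angle:.0e} rad, spot ~ {spot * 1e6:.0f} micron")

# Doubling the aperture halves the blur, and blue light (shorter
# wavelength) resolves finer detail than red, as stated above.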
Increasing the resolution – an incessant demand whether for cameras,
telescopes or microscopes – automatically implies increasing the aperture of
the lens, in other words, the ratio of the diameter to the focal length. This
takes us away from the paraxial approximation, with implications shown in
Figure 4.4. For a simple spherical surface, the focal length becomes a function
of the distance away from the lens axis in the manner indicated. However, it is
Figure 4.4 The off-axis impact for a thick convex lens, indicating changes in the
focal length with wavelength and the range of foci for rays incident at varying
distances from the lens axis.
Figure 4.5 A parabolic mirror reflector, for which all rays arrive at the same focal
point, regardless of their colour and distance from the mirror's principal (optical)
axis. The image receptor is at the focal point.
Mirrors
The whole process becomes much simpler using mirrors (Figure 4.5). Mirrors
have been particularly useful in astronomical telescopes since almost any
object of interest is, to a very good approximation, an infinite distance from the
mirror so all images will cluster around the focal point. The mirror can be made
large, so that a useful imaging area is available without too much masking of
the input light (also, a large mirror both maximises the light collected from
distant stars and galaxies and enhances stellar-image resolution). The accept-
ance angle of such a system is also limited to the order of Di/f, where Di is the
dimension of the image receptor at the focal point, typically in centimetres, and
f is the focal length of the mirror, typically in metres.
If the mirror is to provide a perfect focus, first of all its front surface must be
coated with the reflecting medium, rather than its rear surface. Also, the surface
must be very accurately machined and kept totally clean thereafter. Second,
its shape has to be exactly parabolic to an ‘optically flat’ degree, implying
better-than-wavelength precision! The details of this shape are given
Figure 4.6 Diffraction at angle θ from a single out-of-plane slit AB, of width a, in
a screen, with O the centre of the slit. The first minimum occurs when
OP – AP ≈ BP – OP ≈ λ/2.
Diffraction Gratings
We now consider what happens when there is more than one slit. The case of
two slits was mentioned in Section 2.3, see Figure 2.11, and the expression for
the maxima was given by sin θ = nλ/d. Now let us think about a device which
has not two but many evenly spaced slits. Two simple diffraction gratings are
shown schematically in Figure 4.7. Here we see, in an opaque screen, rulings
serving as slits, with very sharp edges which scatter the light in all directions.
As for two slits, complete constructive interference occurs every time the path
difference between light diffracted from adjacent slits is an integral number of
wavelengths. For wavelength λ and slit separation d, this again occurs at angles
θn given by
sin θn = nλ/d        (4.3)
Since we have assumed that the diffracted light is equal in intensity in all
directions, then all these image lines, at angles θn, would be of equal intensity.
In the region where sin θ ~ θ the angle θ is related linearly to the inverse
spacing, 1/d, of the grating (i.e., the spatial frequency per unit length of the
grating) and the images are all of equal intensity. For clarity, only the first-
order angle of diffraction for each structure is shown in the diagram. The
central observation here is that the angular dispersion of a grating near the axis,
Figure 4.7 The finer the detail in an object, the wider the angle of diffraction and
therefore the wider the aperture of the lens required to collect that detail. Only the
first-order beams diffracted by the object are shown; the through beam is omitted
for clarity.
Figure 4.8 The diffraction pattern from a square wave diffraction grating and its
modification after passing through a finite but wide aperture (see text): each line
of the pattern is convolved with the sinc² diffraction pattern of the aperture. Also
indicated, on the right, is a spatial filter, a narrow aperture allowing through only
the central three diffraction orders – what would be the image following this
spatial filter?
These observations also point toward techniques for image processing and
image enhancement, much of which takes place by manipulation of the spatial
frequency content of the image using optical elements (or even software tools
on digital images) in the so-called Fourier transform plane (Figure 4.9). This is
in the focal plane of the imaging objective; here the diffracted beams from the
object are focussed to produce an image of the spatial frequency spectrum
(strictly speaking, a squared image, since what we see is intensity rather than
amplitude). A basic example was given in Figure 4.8. Here we started with a
square wave grating and produced the Fourier transform distribution as indi-
cated. Each of the image lines has an amplitude sinc function around it arising
from the finite aperture of the lens and/or grating. Suppose we then insert a
spatial filter (narrow aperture), shown in Figure 4.8, which allows through only
three beams, corresponding to the zeroth-order and first-order diffracted beams.
The image no longer fully represents the original. And what would the resulting
image be if this spatial filter is placed in the front focal plane of a second lens?
It can be shown that if on the other hand we attenuate the central (low spatial
Figure 4.9 Collecting the diffraction pattern from an object in the front focal
plane of a lens produces a focussed representation of the periodic components as
the image in the back focal plane. Also, note the impact of the lens aperture, which
cuts off the higher angles of the diffraction pattern.
frequency) beam with respect to the higher-order beams then the result is edge
enhancement – a sharpening of the image definition. Spatial filtering, by means
of aperture, attenuating screens, or even spatially varying phase delays (see
below), is a frequently used tool in image enhancement.
The Fourier transform relationship between an object and its far-field diffrac-
tion pattern (or between the input plane and the transform plane in a simple
convex lens system) also applies to transparent objects, many of which have
structure within them which is invisible in a conventional image. However, this
structure will correspond to variations in optical thickness – that is, optical
phase – and phase functions also have Fourier transform properties. For example,
a sine wave variation in the optical thickness of an object (which corresponds to
a spatially varying phase delay) gives rise to a set of sideband frequencies
which recombine to produce a zero-contrast image, with no intensity variations.
In the simplest case we can remove the central background component – the
average transmission through the object – to obtain the frequency-doubled
intensity distribution indicated in Figure 4.10(a). Whilst this is distorted, it
can be interpreted by an experienced observer. This technique is known as
dark-field imaging. You may like to consider why in Figure 4.10(a) the image is
spatial-frequency doubled, as indicated, with a zero every half-period of the
original sinusoidal phase grating.
There is also a related technique, known as the phase contrast method,
through which the phase of the zero-frequency component is shifted by 90°
(this will only apply exactly for monochromatic illumination). Useful results
Figure 4.10 Making phase changes in a transparent object visible: (a) using an
on-axis stop at the rear focal point to produce a frequency-doubled intensity image
or (b) using an on-axis quarter-wave delay plate at the rear focal point to produce
an intensity image of the original phase distribution.
Three-Dimensional Imaging
The above brief discussion contains much conceptual food for thought. It also
points towards the principles of holography – in effect three-dimensional
imaging without lenses. Here light scattered from the object (Figure 4.11) again
forms a diffraction pattern but in this case the diffraction pattern is combined
with a larger sinusoidal phase-reference beam extracted from the same illumin-
ating laser. The interference between the diffraction pattern and the laser beam
produces an intensity distribution containing sinusoidal components exactly
related to the diffraction angles, and therefore containing all the three-
dimensional information about the illuminated areas of the original object.
Consequently, recording this pattern to make a permanent hologram and
subsequently shining a laser onto this modified diffraction pattern retraces the
original paths and the viewer sees a three-dimensional image. Need this laser be
the same wavelength, and why do we need the high-intensity reference?
Imaging is among the most important manifestations of photonics. This
section has highlighted most of the essential underlying principles of imaging
systems. Some demonstration experiments with everyday objects – a magni-
fying glass, a laser pointer and possibly some simple samples and apertures
Figure 4.11 Making a hologram on a photographic plate, shown at top left of the
diagram: a parallel input laser beam is split into a reference beam and a beam
providing the illumination for the object.
Figure 4.12 Bragg diffraction from a thick grating: for an incident wave at the
correct angle, constructive interference from all contributory waves builds the
reflected wave.
which can illustrate these principles – are outlined in the problems at the end of
the chapter.
In the preceding discussion we implicitly assumed that all the diffracting objects
were thin compared with the wavelength of the light which is being diffracted.
Thick gratings – phase objects many optical wavelengths in thickness – also
play their part; this is exemplified in Bragg diffraction (Figure 4.12). In optics
the Bragg grating is typically a phase object structure which can be created by,
for example, the pressure variations induced by a travelling ultrasonic wave.
Figure 4.13 (left) The oil-on-water effect for white input light, embodying
multiple reflections from the oil surface and the oil–water interface, and (right)
the incident and reflected beams for normal incidence.
For an incident beam at the appropriate angle and an appropriate total phase
delay within an ultrasonic-wave grating, the entirety of the incident optical
beam can be diffracted as indicated in the diagram. Since the ‘grating’ is
moving, the diffracted beam is Doppler shifted in frequency by an amount
identical to the input frequency of the ultrasonic driving signal, a feature which
is often used in optical frequency shifters.
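A minimal sketch of the Bragg geometry and the associated frequency shift; the
acoustic velocity and drive frequency are assumed illustrative values:

import math

v_acoustic = 4000.0   # acoustic velocity in the medium, m/s (assumed)
f_acoustic = 100.0e6  # ultrasonic drive frequency, Hz (assumed)
wavelength = 633e-9   # optical wavelength, m

acoustic_wavelength = v_acoustic / f_acoustic  # m
theta_b = math.asin(wavelength / (2.0 * acoustic_wavelength))
print(f"acoustic wavelength = {acoustic_wavelength * 1e6:.0f} microns, "
      f"Bragg angle ~ {math.degrees(theta_b):.2f} degrees")

# The diffracted beam is shifted in frequency by exactly f_acoustic
# (100 MHz here) - the basis of the optical frequency shifter.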
Finally, another observation on diffraction gratings. Suppose a parallel beam
of white light, rather than monochromatic light, shines upon a diffraction grating
of the type shown in Figure 4.10. In this case the blue component will be
diffracted rather less (by a factor of 2 or so) than the red component; therefore,
by appropriate spatial filtering, the apparent colour of this grating can be made to
change – the first hint that perceived colours depend not only on spectroscopic
properties, as discussed in Chapter 3, but also on the structural properties of both
a material itself and the system through which it is observed.
Thin Films
A commonly observed example of this is the coloured structure of a film of oil
on water. Figure 4.13 illustrates how this might happen. Oil and water have
different refractive indices and therefore both the interface between the oil and
the water and the interface between the oil and the air will introduce partial
reflection, with amplitude reflection coefficient R. At each interface, at normal
incidence R is given by the Fresnel equation, namely
R = (n1 – n2)/(n1 + n2)        (4.4)
where n1 is the refractive index of the medium from which the light is incident
on the interface and n2 is the refractive index of the medium into which it passes
(this equation is a special case of the Fresnel reflection relationships mentioned
in Appendix 6.2). Note that equation 4.4 implies a phase change of 180° for
light incident from a low-index material to a higher-index material. The value of
the reflection coefficient R (remember that it is an amplitude), for light travelling
from air to glass, where the index of the latter is 1.5, is around 0.2, correspond-
ing to 4% power reflection. The situation for the case of oil and water is slightly
different, in that a typical oil has an index of approximately 1.4 whereas water
sits at 1.33. Consequently, there is no phase inversion at the oil-to-water
interface and, typically, the amplitude reflection coefficient for normal incidence
on the oil-to-air interface is about 0.14 compared with –0.1 (the negative sign
indicates 180° phase reversal) at the water-to-oil interface. However, reflections
from the two interfaces will interfere, with a range of possible results varying
from constructive interference, producing a total net reflected amplitude of 0.24,
to destructive interference, producing 0.04. This is a 30 to 1 ratio in perceived
optical intensity (we have ignored multiple reflections since these are relatively
small though they will have some impact). The visible spectrum covers around a
factor of 2 in wavelengths so the above discussion indicates that, depending
upon the oil thickness and the angle of observation, the reflected colour will vary
throughout the spectrum – another example of where the spectroscopically
defined colour of a material can be radically changed by its structure.
Thin films and the design and realisation thereof constitute a vast topic.
Antireflection coatings are a common application. If we arrange for the two
reflected beams to be out of phase (this happens if the film thickness is one
quarter of a wavelength and the reflection amplitudes are identical for each
interface) then the resulting destructive interference at the surface implies zero
reflectance. This concept finds widespread applications. Camera lenses, spec-
tacle lenses and a host of other optical surfaces typically exhibit a bluish tinge
when viewed from an appropriate angle. This is a manifestation of anti-reflection
coatings. These are incorporated to minimise the impact of reflections on image
quality typified by the glowing, often yellowish, circles seen in photographs
taken when the camera is pointed towards, but not directly at, the sun.
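The quarter-wave condition described above fixes both the coating thickness and,
for the two interface amplitudes to be identical, the coating index; a minimal
sketch for a glass surface in the middle of the visible:

import math

n_glass = 1.5
wavelength = 550e-9  # design wavelength, m (assumed)

# Equal reflection amplitudes at the two interfaces require the coating
# index to be the geometric mean of air (1.0) and the substrate index.
n_coating = math.sqrt(n_glass)
# Quarter-wave optical thickness within the coating:
thickness = wavelength / (4.0 * n_coating)
print(f"coating index ~ {n_coating:.2f}, thickness ~ {thickness*1e9:.0f} nm")

# ~1.22 and ~112 nm: real coatings approximate this with materials such
# as magnesium fluoride, and the residual off-design reflection gives
# the familiar bluish tinge.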
Waveguides
Figure 4.14 shows the simplest possible implementation of an optical wave-
guide, a step-index fibre. Here, provided the light is incident on the interface
between the core and cladding at an angle exceeding the critical angle, the light
will be guided through the higher-index core region. Typically, light would
be launched from some external source – often air. There will be a maximum
Figure 4.14 A step-index optical waveguide – a higher-index core between
lower-index cladding regions – showing, in the main part of the figure, ray paths
for two different modes of propagation and, on the right, the light amplitude
distributions for (a) the lowest-order mode and (b) a slightly higher mode,
corresponding to interference between the beams at steeper angles of incidence.
See Appendix 4, Figure A4.2.
angle that the incident launched light beam can make with the axis of the
waveguide, beyond which guiding cannot take place. This angle corresponds
to the critical angle for light travelling from the core to the cladding. The
acceptance angle θna is given by sin θna = √(ncore² – nclad²), the numerical
aperture of the guide.
Figure 4.15 Losses (solid line, left axis, dB/km) and material dispersion (broken
line, right axis, ps/(nm km)) in silica, plotted from 0.5 to 2 microns wavelength.
The zero dispersion point at 1.3 µm is a small, but significant, distance away from
the lowest attenuation at 1.5 µm wavelength. See Section 5.7.
optical fibre. These fibres have been extensively used for over three decades
in long-distance optical fibre communications and among other things are essen-
tial in fuelling the ever-increasing demand for domestic and industrial band-
width. The internet could not function without optical fibres, as one of many vital
contributing technologies. Even the single-mode fibre, though, has its inherent
limitations. There is dispersion of different wavelengths in the glass which forms
the fibre as well as the dispersion introduced by the optical structure; longer
wavelengths penetrate further into the cladding via the evanescent wave (see
below). Furthermore, there are spectroscopic wavelength-dependent losses
within the materials themselves. The losses and the dispersion characteristics of
silica (the principal raw material for optical fibres) are shown in Figure 4.15.
The basic operating mechanism of the single-mode fibre points towards a
wave rather than a ray interpretation of transmission along a high-index region
surrounded by lower-index regions. In this wave interpretation, only a limited
number of ray directions are permitted even in the large-scale case of
Figure 4.14. The reason is that across the interface between the high-index
region and the lower-index outer region there must be continuity in the electric
field. In the case of a perfect metallic interface this would imply that the electric
field is zero at the interface but at a dielectric interface the continuity is a little
more complex and requires some limited penetration of the electric field into the
low-index region; this is often referred to as the evanescent tail. The net result is
a field distribution of the type shown in Figure 4.14(b). For the two-dimensional
case (in other words, a film enclosed between two other films), this can be
considered as interference between two beams at a specific angle (Figure 4.14
(a)). In principle there are other options which would fulfil the same basic
criteria, namely that at the boundary between the dielectric interfaces the
appropriate field relationships hold. An example of such a higher-order mode
is also indicated in Figure 4.14(b). This mode has more than the minimum half
wavelength of an interference pattern across the core region. However, in order
to achieve this second interference pattern, the imaginary beams which are
interfering need to be incident on the interface at a higher angle. For a single-
mode fibre, this angle would be chosen to exceed the critical angle and therefore
the higher-order mode would not be guided. Hence a combination of index
difference and core dimensions gives rise to the modal structure of the fibre. The
circular cross sectional structure characterising an actual fibre involves slightly
more complex beam structures but the underlying concepts are identical. The
basic idea of viewing guided waves as interfering beams for which the interfer-
ence pattern satisfies the necessary boundary conditions along the wave-guiding
interface gives a very useful insight. Essential features such as the ideas of
spatial waveguide modes, the number of modes possible in a given structure and
important propagation features like intra-mode and inter-mode dispersion can all
be approached in this manner.
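A minimal sketch of how index difference and core size set the modal structure,
using the standard step-index V-parameter (a result quoted here rather than
derived in the text: the guide is single-mode when V is below about 2.405); the
fibre parameters are assumed illustrative values:

import math

wavelength = 1.55e-6          # m
n_core, n_clad = 1.465, 1.45  # ~1% index difference (assumed)
core_radius = 2.5e-6          # m (assumed)

na = math.sqrt(n_core**2 - n_clad**2)  # numerical aperture
V = 2.0 * math.pi * core_radius * na / wavelength
print(f"NA = {na:.3f}, V = {V:.2f}")
print("single-mode" if V < 2.405 else "over-moded")

# Here V ~ 2.1: enlarge the core or the index difference (or shorten
# the wavelength) and the guide eventually supports a second mode.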
The waveguide structure has other interesting implications. There is obvi-
ously a range of wavelengths for which the basic single-mode boundary
conditions can be fulfilled and indeed, in principle, this range extends well
beyond the optical region to the microwave and radio spectrum, though the
strength of the waveguided beam becomes vanishingly small at longer wave-
lengths (less and less of the energy is held in the core region and eventually, at
wavelengths significantly longer than the structural cross sectional dimensions,
most spills out into the surrounding air). At shorter wavelengths there comes a
point where at least two angular options can give rise to an appropriate
interference pattern and therefore satisfy the dielectric boundary conditions,
a point at which the waveguide becomes over-moded.
In the single-mode-transmission case there are dispersion mechanisms at
work similar to those on which we commented for multimode guides – namely
that the variation in reflecting angle for the imaginary interfering beams will
inherently give a variation in propagation time, with shorter wavelengths
experiencing longer delays due to geometric dispersion than the longer wave-
lengths – a similar trend to that experienced due to material dispersion in the
‘normal’ dispersion region, of a dielectric.
These dispersion effects are significant and have stimulated much ingenuity
in refining the basic concept in order to cram more and more terabytes along a
single fibre. There are other factors which limit fibre transmission distances, of
which the most important are the inherent losses in the silica which forms the
basis of the core of the fibre. These become very low (<1 dB/km) in the 1.5 μm
Figure 4.16 Two examples of simple ‘photonic crystal’ structures, showing how
structures can provide innovative waveguide geometries: (left) a solid (or less
‘holey’) higher-index core surrounded by lower-index ‘holey’ material and (right)
diffracting layers of ‘holey’ material providing Bragg guiding around a
lowest-index gas core. The structure dominates the optical transmission properties
of the material from which it is composed.
region (Figure 4.15). The material itself has another interesting property: its
inherent dispersion goes through zero in the 1.3 μm region – a case in which
nature conspires to be helpful to the technologist. However, the dispersion in
the silica at 1.5 microns, the lowest-loss region, remains significant, even at
~20 ps/(nm km), for very-high-capacity communication systems.
Consequently, much has been achieved to minimise dispersion in this 1.5
micron wavelength region through the ingenious use of structural and material
properties, so that the transmission of hundreds of gigabits per second, and
more, over hundreds of kilometres through a single channel is now available.
The losses in silica exemplify different basic phenomena. The loss at long
wavelengths in the infrared arises from fundamental absorption bands within
silica itself, and consequently there are many practical difficulties in extending
optical fibre transmission much beyond the 2 μm wavelength barrier. The
attenuation glitch at about 1.3 μm is due to water absorption but, thanks to a
great deal of processing effort, this can be virtually eliminated. At shorter
wavelengths the impact is seen of tiny inhomogeneities within the structure
itself and those that inevitably arise in the processing and preparation of the
fibre owing to the slightly random process of cooling down. These tiny
inhomogeneities cause Rayleigh scattering which technically applies to struc-
tures much less than the wavelength; we shall return to this in the next section.
The increasing precision of fabrication technologies has continually
expanded the level of control available for these photonic materials.
A conceptual example is shown in Figure 4.16. Here – on the left – a number
of voids (holes) have been introduced into the structure of an optical wave-
guide and these voids will in turn reduce the local effective refractive index, in
the planar case with different changes for vertically and horizontally polarised
inputs. The key point here is that the holes are a small fraction of a wavelength
in diameter, so that, as seen by the propagating wave they are effectively
4.3 Dimensions Much Smaller than the Wavelength
Figure 4.17 A slab of metamaterial with refractive index –1 forming, in principle,
a perfect image of an object.
giving rise to a negative sign in the refractive index. The behaviour of such a
metamaterial structure is indicated in Figure 4.17. In principle such a structure
can produce a totally undistorted image of an object. It can beat the diffraction
limit, the ‘blur’ from the edges of the lens or mirror, which characterises
traditional imaging.
The detailed principles are quite complex but some insight into the basic
ideas can be derived from the observation that negative-index metamaterials
can be synthesised using periodically spaced arrays of resonant circuits.
A feature of such a circuit is that at low frequencies, well below resonance,
the current (i.e. the magnetic field) and voltage are in phase whereas at
higher frequencies, well above resonance, they are in antiphase. Within an
electromagnetic wave this will attempt to pull the electric and magnetic fields
out of phase with each other, a feature which can be compensated by the
wave’s moving to the opposite side of the normal. Metamaterials, in particular
those with a negative index, have found application in acoustics (in inverting
the phase of the pressure and velocity components of the acoustic wave) and in
electromagnetics. Early demonstrations were at relatively low (microwave)
frequencies but there have been ventures into the optical, constrained to some
extent by the properties of conductors at optical frequencies.
One of the principal prospects for nanostructured materials lies in the use of
metallic conductors at frequencies well below the plasma resonance and at
thicknesses sufficiently low for optical signals to cope with the inherent losses.
One manifestation of such configurations is that the presence of a metal on an
interface between two dielectric materials can significantly enhance the mag-
nitude of the electric field on the face of the dielectric interface opposite to the
direction of incidence (Figure 4.18). There is another important aspect to this.
The losses are low if the electric field is perpendicular to the plane of the
Figure 4.18 Surface plasmon waves – launched as a TM (P) wave along a thin
metal film on a substrate – giving high field-penetration into the overlay but
requiring careful phase matching whether in multiple layers (a) or waveguide
format (b). The latter arrangement finds application in, for example, liquid
characterisation, hence the reference to ‘analyte’ (a substance under chemical
analysis).
interface. However, if the electric field is in-plane, then it sees a long length of
metallic material – the in-plane polarisation is very lossy. So here we have a
polariser, much used over the past few decades! This basic phenomenon,
which is an essential aspect of plasmonics, raises prospects for a range of
photonic devices emulating their electromagnetic forerunners, from wire-based
antenna arrays capable of refocussing a diverging beam to optically excited
electro-chemical sensing.
Quantum Dots
Nanostructures other than butterfly wings have also been with us since ancient
times. Colloidal gold nanoparticles have given permanent colour to glassware
and pottery for centuries. These tiny gold spheres are perceived in colours ranging
from red to blue across the entire visible spectrum – completely at odds with the visual
appearance of bulk gold. These particles, which are typically a few tens of
nanometres in diameter, have now extended their presence beyond ancient
pottery into medicinal potions and (bio)chemical assay. How does this struc-
ture so efficiently modify the bulk reflected colour? There is a clue in that the
larger particles appear to be more blue than the smaller particles, which veer
towards the red. It would seem, then, that the large particles absorb the red end
of the spectrum and vice versa for the smaller particles. Clearly, then, the key
lies in the dimensions of the spheres but also in the observations on surface
currents discussed above. A large sphere, which is typically about 100 nm in
diameter, resonates in the red and therefore absorbs the red whereas a small
sphere, around 30 nm in diameter, resonates in the blue.
4.4 Summary
Perhaps the principal message from this chapter is that structures are an
extremely flexible, even fundamental, tool for the manipulation of light,
whether regarded as rays, waves or photons. The operational functionality of
this tool changes in a very versatile manner with the dimensions of the
structure in comparison with the wavelength of light.
For a very large structure – essentially a fraction of a millimetre in dimension
and above – we have seen how the shape of interfaces and of optical compon-
ents, typically glasses, modifies optical functions, prisms and lenses being the
dominant systems. We have also seen that using a mixture of glasses and shape
functions can enhance functionality; this is most apparent in the design of
compound lens systems. The basic phenomenon of refraction, together with
the associated polarisation-dependent reflection effects at interfaces, is at the
heart of the familiar optical components within this dimensional range.
Even such large structures are, however, affected by the impact of phenom-
ena most conveniently thought of as relevant to the medium-dimensional
range; that is, around the optical wavelength. The resolving power of lenses
is the most obvious feature, as it can be explained in terms of the essential
medium-scale concepts of diffraction and interference. These medium-scale
features underlie holograms, fibre optics, the appearance of oil on water and
many other familiar optical phenomena.
4.5 Problems
1. (a) Think about the basic spherical biconvex lens of the type indicated in
Figure 4.4. Now use your knowledge of the slopes of the spherical
surfaces (just consider two dimensions) to gain some insight into how
the focal point might vary as a function of lens aperture (compared with
the focal length) at a single wavelength. At what aperture would this
variation in the position of the focal point begin to compromise the
resolution of the lens for illumination at 500 nm wavelength (think of
the implications of Figure 4.6)?
(b) It is possible to achieve a perfect (monochromatic) focus with an
aspherical surface. How would you approach determining the shape
of this surface?
2. This problem concerns retro-reflectors.
(a) Consider a prism with a right-angled isosceles-triangle cross section. It is
simple to demonstrate that a beam normally incident on the hypotenuse
will be retro-reflected. Over what angular range (for angles lying within
the plane along the hypotenuse and perpendicular to the input beam)
will a parallel incident beam be directly reflected (take the refractive
index as 1.5)?
(b) How will the reflected beam angle change with incident angle? You
will probably have instinctively approached the problem in two dimen-
sions – varying the angle about an axis which is perpendicular to the
hypotenuse and in the plane of the paper on which you’ve sketched the
system. How might this change if the beam angle were varied around an
axis in a plane perpendicular to the hypotenuse?
3. Red sky at sunrise and sunset, blue skies in day time. Why are clouds
essential for an impressive sunset? On the same theme – why are clouds
white (except at sunrise and sunset) when relatively thin, but go through
shades of grey into a threatening black as they thicken?
4. (a) Convince yourselves that a parabolic mirror gives a perfect focus at all
light input wavelengths (Figure 4.5).
(b) Assume that this applies to a 5 m diameter reflecting telescope. Give a
reasonable estimate for the angular resolution and practical angular
coverage of such an instrument. Atmospheric turbulence can distort
images for even a perfect reflector. How would you approach setting
up a system to overcome these disturbances?
5. (a) How would you measure the number of threads per centimetre in a piece
of cloth with just a laser pointer and a piece of white paper? Do the
experiment, comment on how the answers change when the cloth is
stretched in one direction and try shining the pointer on a piece of
scratched or smeared glass and looking at the spots that are transmitted.
(b) What might be some practical implications of these observations in
optical-system design and in the processing of digital images?
6. (a) Create an expression to describe a simple phase object with a one-
dimensional sinusoidal distribution. From this derive the features of the
diffraction pattern from such an object, and demonstrate how the
diffraction may be modified (i.e. spatially filtered) to create an intensity
object that is related to the original phase object, and sketch a suitable
system.
(b) There are at least two options on the approach to the above procedure –
both mentioned in the text. Compare and contrast the features of these
two approaches and also find examples of phase objects, some of which
can be perceived with the naked eye. How is that possible?
7. (a) Convince yourself that the Bragg conditions (angle of incidence equals
angle of reflection, and path difference between adjacent reflected rays
equals one optical wavelength; see Figure 4.12) can give 100% con-
structive interference for an input parallel beam meeting a suitable phase
object. What are the conditions for the object to be ‘suitable’?
(b) A Bragg diffraction process uses ultrasonic waves in transparent media.
The reflections involved are now from a moving object, so that the
output beam is shifted in frequency according to the Doppler effect.
Explain how this frequency shift is related to the ultrasonic frequency.
8. (a) Figure 4.14 shows the cross section of a dielectric waveguide compris-
ing two low-refractive-index regions surrounding a slightly higher-
index core region. Typical refractive indices are in the region of 1.5
and typical index differences are around 1% of this. By considering the
interference of intersecting beams at symmetrical angles with respect to
the core axis in the waveguide, estimate the cross section of the guiding
region at which the guide will become overmoded for a wavelength of
1 micron.
(b) The intersecting-beams approach works well for metallic interfaces in
waveguides at much longer (for example, microwave) wavelengths but
for dielectric guides it gives only an approximate, though useful, insight,
especially at these much higher frequencies. What are the differences between dielectrics
and metals and how might these differences modify the original esti-
mate for the depth of the waveguide?
5
Photonic Tools
In this chapter we look briefly at some of the many applications which are
enabled through photonics, ranging from photo-therapies to intercontinental
communications. The majority of these applications rely upon extracting an
optical signal from various forms of background noise, which is where we start.
Figure 5.1 (a) A basic photodetector circuit, with load resistor RL and ~1 mW of
input light, demonstrating the trade-off between load resistor thermal noise and
shot noise on the incoming optical signal. The load resistance and detector
capacitance define the detection bandwidth. (b) A vector representation of noise
components – phase noise and intensity noise about the carrier vector – with
equal distribution in phase and amplitude components.
approaches. This squeezing can in principle squeeze the phase, or the ampli-
tude, noise away completely to zero, leaving noise only in the complementary
domain. In practice, this has yet to happen completely. However, at present
there is one outstanding example: squeezing is used in earnest on just one
instrument, the gravitational-wave telescope LIGO. Here, without the 10 dB
or more of additional signal-to-noise ratio that optical squeezing provides, the
detection of gravitational waves would have remained elusive. This gain is achieved
through an impressively complex system, but to date there are no simpler
approaches. Squeezed light exemplifies the many recent conceptual develop-
ments in photonics that combine the wave and photon approaches.
5.2 Spectrometers
Spectroscopy is a crucially important tool in photonics for characterising both
materials and material structures. In practice, most spectrometers follow the
format shown in Figure 5.2, where parallel white light is passed through the
sample of interest to interact with the dispersive element (here a prism) after
which the emerging light is focussed onto a detector array. The basic procedure
is that the white light source is calibrated in the absence of the sample. Hence,
any differences between the calibration readings on the array and those after
the source interacts with the sample relate uniquely to the spectral properties of
the sample. The dispersive element can in fact be either a prism or a diffraction
grating; the former deflects the blue more than the red whilst the grating does
the reverse.

Figure 5.2 The basic dispersion diagram for a prism spectrometer. Grating spectrometers operate similarly but give the greater deflection to the red rather than the blue.

A grating also offers the benefit that the angular dispersion is
directly related to the grating period, whilst for a glass-based prism the
relationships are more involved.
The intrinsic wavelength resolution of these instruments is dictated by several factors. The number of photodiodes in the detector array obviously limits the ultimate achievable resolution (though with too many detectors the achievable, shot-noise-limited, noise level on each detector also contributes). Additionally, the angle of separation between one colour component and another comes into play: for a grating this is determined by the grating periodicity, whilst for a prism the dispersive power of the glass is the determining factor. Finally, the beam width of the interrogating optics (set by a combination of the beam width itself, the depth of the sample container and the effective incident width) provides an ultimate limit, dictated by diffraction from the edges of the optical aperture. In an optimal design all three of these limitations would come into play together.
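To put these competing limits in perspective, a short numerical sketch follows. It is an illustration added here, not from the original text, and every parameter value in it is simply an assumption.

```python
# A rough sketch comparing two of the resolution limits discussed above for
# a notional grating spectrometer. All parameter values are illustrative.

wavelength = 500e-9      # mid-band wavelength (m)
span = 600e-9            # covered spectral range (m), e.g. 200-800 nm
n_pixels = 2048          # photodiodes in the detector array
beam_width = 10e-3       # illuminated width at the grating (m)
line_spacing = 1e-6      # grating period (m), i.e. 1000 lines/mm
order = 1                # diffraction order used

# (1) Detector-array limit: one pixel per resolvable spectral element.
dl_pixels = span / n_pixels

# (2) Diffraction limit of the aperture: resolving power R = m * N, where
# N is the number of illuminated grating lines.
n_lines = beam_width / line_spacing
dl_diffraction = wavelength / (order * n_lines)

print(f"pixel-limited resolution      : {dl_pixels*1e9:.3f} nm")
print(f"diffraction-limited resolution: {dl_diffraction*1e9:.3f} nm")
# A balanced design brings such limits together, as the text suggests.
```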
There is also an important variation on the basic theme of the photodetector array. The same functionality can be achieved if a single photodiode, with a slit in front of it, is scanned across the resulting spectrum, either by moving the photodiode or by rotating the dispersive element so that the spectrum sweeps past the slit. The overall result is that, for a typical prism or diffraction grating spectrometer, a spectral resolution of the order of 0.1 nm over a spectral range of the order of 1 μm is achievable.
There are occasions when better resolution is desirable and also occasions
where the photodetection processes become more involved; typically, both
factors apply in the near and mid infrared, where an alternative approach based
on Fourier transform spectroscopy (Figure 5.3) is frequently used. In this
system a single detector records the total output as the path difference between
the two arms of the interferometer formed by the two mirrors is scanned from
around zero to a maximum value. At each setting the interferometer acts as a
filter with co-sinusoidal response, as indicated in the figure. Consequently, in
effect each optical frequency in the input spectrum is multiplied by this co-
sinusoidal function in the frequency domain for each setting of the interfer-
ometer path difference – in other words, the system is performing a Fourier
transform at each setting with spectral components determined by the path
difference. Assuming that the path difference is incremented in equal steps (whose size sets the range of frequencies that can be resolved by the interferometer) up to a maximum value (which determines the resolution of the resulting spectrum), a direct Fourier transform of the output gives the
spectral response of the sample. The detector needs to be calibrated appropri-
ately in the range over which it is to be used, and some precautions are needed to
ensure that the Fourier transforms are appropriately performed, taking into
account any spectral variations, i.e. variations in the light output from the interrogating optics as a function of wavelength. Overall, then, the system is functionally identical to the dispersive element in a prism or grating spectrometer.

Figure 5.3 Fourier transform spectroscopy, showing how varying the position of the moving mirror by L multiplies the input spectrum by a defined co-sinusoidal function. The beam splitter divides the input light equally between the two paths through the interferometer. The top line on the graph indicates that at zero path difference the output power is fixed for all wavelengths.
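The essence of the method is easily captured numerically. The sketch below is an illustration added here (the two line frequencies, their powers and the scan length are all assumptions): the recorded interferogram is Fourier transformed against path difference, and the maximum path difference sets the resolution.

```python
import numpy as np

# A numerical sketch of Fourier transform spectroscopy: build the
# interferogram of a two-line source, then recover the spectrum by a
# Fourier transform. All values are illustrative.

c = 3.0e8
N, dl = 4096, 0.75e-6            # samples and path-difference step (m)
path = np.arange(N) * dl         # scan from zero to ~3.07 mm
bin_hz = c / (N * dl)            # frequency width of one FFT bin, ~97.7 GHz

# Two assumed spectral lines, placed exactly on FFT bins for clarity.
freqs = np.array([1980, 2006]) * bin_hz
powers = np.array([1.0, 0.5])

# Each line contributes p*0.5*(1 + cos(2*pi*f*L/c)) at path difference L:
# the co-sinusoidal filter response described in the text.
interferogram = sum(p * 0.5 * (1 + np.cos(2*np.pi*f*path/c))
                    for f, p in zip(freqs, powers))

spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
f_axis = np.fft.rfftfreq(N, d=dl) * c

print(f"resolution ~ c/(N*dl) = {bin_hz/1e9:.1f} GHz")
print("recovered lines (THz):",
      np.sort(f_axis[np.argsort(spectrum)[-2:]]) / 1e12)
```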
response typified in Figure 3.19. This synthesis process from the three primary
colours was quantified nearly a century ago in the CIE (International Commis-
sion on Illumination) chromaticity chart, still a very useful tool in the detailed
design of visual systems.
These factors are among the most important considerations in the design and implementation of both display technologies and electronic image-acquisition technologies, such as the image capture array in a
digital camera. Hopefully this will give the reader a little insight into the
considerable efforts required in balancing colour-filtering systems, relative
brightness, spatial resolution and a host of other factors in the design of these
very commonplace but extremely subtle devices.
Figure 5.4 (labels only): optical coherence tomography, using a low-temporal-coherence source (e.g. a super-radiant LED); a single-mode optical fibre for spatial coherence; a collimating lens; a moving reference mirror; a multi-layered sample; a CCD camera objective and detection array; and output to computer storage and image-presentation software.
Figure 5.5 Using a quadrature offset system (i.e. the beams are 90° out of phase)
in a Michelson interferometer to extract directional information and therefore
measure absolute changes in position.
Provided that the coherence length of the source greatly exceeds any possible value of the path difference
between the two arms, one cannot distinguish whether the path difference is
increasing or decreasing as it monotonically changes. However, a change in
the direction of movement can be detected from a single output provided that
this change does not occur at a peak or trough of the interference pattern (see
Figure 5.5). Consequently, interferometric precision-measurement systems
need to incorporate some method of determining the direction of changes in
path difference. One approach, shown in the figure, is to utilise a quadrature
offset between, in effect, two reference beams. This arrangement enables a
combination of fringe counting and directional assessment to produce highly
accurate measurements. This approach, based upon quadrature detection, can
routinely yield measurement precision better than 1% of an optical wave-
length; measurement accuracies can even be of the order of several parts per
million, depending upon the stability of the source illumination and the
variations in local environmental conditions.
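A minimal numerical sketch of this quadrature processing follows (an illustration added here; the source wavelength and the target motion are assumed). The two outputs, 90° apart, give a signed phase through an arctangent, and unwrapping that phase counts fringes whilst preserving the direction of travel.

```python
import numpy as np

# Quadrature fringe processing: recover a signed displacement, including
# reversals of direction, from two detector outputs 90 degrees apart.

wavelength = 633e-9                          # assumed HeNe source (m)
t = np.linspace(0, 1, 2000)
displacement = 2e-6 * np.sin(2*np.pi*1.3*t)  # a to-and-fro target motion (m)

# In a Michelson interferometer the round trip doubles the path change.
phase = 4*np.pi*displacement/wavelength
d1 = np.cos(phase)                           # detector 1
d2 = np.sin(phase)                           # detector 2, quadrature offset

# arctan2 gives the wrapped signed phase; unwrap removes the 2*pi jumps
# accumulated as fringes are counted.
recovered = np.unwrap(np.arctan2(d2, d1))
x = recovered * wavelength / (4*np.pi)

print(f"worst-case error: {np.abs(x - displacement).max()*1e9:.3f} nm")
```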
Interferometry has found enormous application in the precision measurement
of mechanical objects, especially in systems based upon coordinate-measuring
machines. In these machines, a very precise x–y–z translation system is coupled
to interferometric displacement-measurement systems. The translation system
carries a very sensitive tactile probe which responds precisely as it touches the object under study. There are several such probes, at least three in a typical instrument, to facilitate three-dimensional measurements. The displacement of these tactile probes can be monitored interferometrically, and from this the surface profile of the object can be determined precisely. Submicrometre resolution can be obtained routinely.

Figure 5.6 (left) Measuring a fixed external object using a sensitive probe and quadrature interferometer. (right) Using interferometry to measure objects with respect to an optically flat surface (here interference produces Newton's rings, which are observed from a spherical partially reflecting surface as indicated). The same basic approach is used in holographic inspection systems.
Figure 5.6 illustrates this basic tactile system and also shows how a preci-
sion optically flat glass can be used to characterise the shape of a lens. Now
referred to as Newton's rings, this pattern was first observed over 300 years
ago. When viewed with white light the rings are coloured, but when viewed
with monochromatic light, for example from a gas discharge lamp or even
through the rudimentary colour filtering of sunlight, the characteristic interfer-
ence pattern shown in the figure provides an accurate measure of the lens
curvature, and variations thereof, with respect to the reference optical flat. The
spacing of the fringes depends upon the local tangential angle between the lens
surface and the optical flat.
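As a worked illustration, added here with an assumed source wavelength and lens curvature, the dark-ring radii follow directly from the air-gap thickness, which is approximately r²/2R at radius r for a lens surface of curvature radius R.

```python
import numpy as np

# Newton's rings: dark reflection fringes occur where the round-trip gap
# equals a whole number of wavelengths, i.e. at r = sqrt(m*wavelength*R).
# The wavelength and radius of curvature below are assumptions.

wavelength = 546e-9       # green mercury discharge line (m)
R = 0.5                   # radius of curvature of the lens surface (m)

m = np.arange(1, 6)
r = np.sqrt(m * wavelength * R)
print("dark-ring radii (mm):", np.round(r*1e3, 3))
# The rings crowd together as m grows: the fringe spacing falls as the
# local angle between the lens and the flat increases, as noted above.
```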
Yet another variation on the interferometric measurement theme is shown in
Figure 5.7. The Sagnac interferometer sends light in counterpropagating direc-
tions around a single-spatial-mode loop, which can be a fibre or the active part
of a laser. As the loop rotates, the light travelling in the same direction as the
rotation needs to travel a small distance further than the light travelling against
the rotation in order to return to the entry point. Consequently, if the system rotates, a phase difference proportional to the rotation rate appears between the two counterpropagating beams.
Figure 5.7 The basic principles of the Sagnac interferometer, as used as an optical
fibre gyroscope, and an example of its precision realisation for space navigation.
1 The term 'scatter', abbreviating 'the scattering of electromagnetic radiation', is often used when we are considering a phenomenon rather than a designed process.
5.7 Optical Fibres and Communication Systems
their limit at around this attenuation level for carrier rates in the 250 Mbits/s
region. However, the skin effect losses (see Figure 3.3) in copper increase with
frequency, yet even higher frequencies were needed to carry more and more
data. As we saw in Chapter 4, numerous paths in the simplest fibre will
continue to guide until cut off at the critical angle between the core and the
cladding. This range of paths implies a travel time difference which is typically
of the order of a few tens of nanoseconds per kilometre. This dispersion in turn
implies the need for maximum modulation frequencies into the fibre in the
region of many tens of megahertz on a one-kilometre length. Thus, initially
multimode fibres offered little obvious benefit for long-distance communica-
tion when compared with copper.
However, there is a modification which changes the picture somewhat.
Suppose the refractive index profile were to change more gradually than the
abrupt high-in-the-core, low-in-the-cladding profile found in a
step-index fibre. If the profile were instead parabolic (along the same principle
as the parabolic mirror in an astronomical telescope) then, at least for one
specific wavelength, the dispersion effect could be removed. There are prac-
tical issues regarding how closely we can make the refractive index profile into
a parabola, and there is also material dispersion in the core itself, but the
graded-index multimode fibre remains a useful tool. You may wish to explore
further how this might work and why it could be useful.
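The scale of the improvement can be estimated from the standard textbook expressions, as in the sketch below (added here for illustration; the index values mirror those quoted above, and the bandwidth figure is the usual reciprocal-delay estimate).

```python
# Intermodal-delay estimates for step-index and ideal graded-index fibre.
# The usual results are dt/L = n1*delta/c (step) and n1*delta**2/(8c)
# (ideal parabolic profile); values below are illustrative.

c = 3.0e8
n1 = 1.5                  # core refractive index
delta = 0.01              # fractional index difference, ~1%

dt_step = n1 * delta / c            # extreme ray versus axial ray
dt_graded = n1 * delta**2 / (8*c)   # first-order spread cancelled

for name, dt in [("step-index", dt_step), ("graded-index", dt_graded)]:
    print(f"{name:12s}: {dt*1e15:8.1f} ps/km, "
          f"~{1/(2*dt*1e3)/1e6:.0f} MHz over 1 km")
```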
Single-mode fibres offer improvements in dispersion over even a graded-
index fibre. However, even here the minimum-attenuation wavelength (1.5
microns) and the zero-material-dispersion point at 1.3 microns do not coincide.
Making the zero-dispersion point coincide with the very low attenuation region
then promises enormous benefits!
The 1.5-micron lowest-attenuation region has emerged as the optimum
wavelength for optical communications, and so minimising dispersion is crit-
ical. Considerable ingenuity has been directed at this problem, based on the
observation that the evanescent tail (see Section 4.2) of the optical field distri-
bution in the fibre cladding extends further into the lower-index cladding as the
wavelength increases. Consequently, that mode will have a faster velocity as it
penetrates further into the lower-index region. A carefully designed ‘W’ refract-
ive index profile (Figure 5.8) controls this wavelength-dependent evanescent
field by means of a high-index core followed by a lower-index cladding
surrounded by a slightly higher-index cladding. This design can in fact compen-
sate for the material dispersion; it controls the relative amounts of optical power
in the core and depressed cladding regions and the process is optimised to realise
this objective. The reader can consult Figure 4.16 to verify the logic behind this
approach. The result is extremely low-loss transmission (a small fraction of a decibel per kilometre).
Figure 5.8 The basic concept of dispersion compensation, combining the prospect
of low attenuation at 1.5 μm whilst correcting for finite material dispersion (see
also Figures 4.14 and 4.15). In the central part of the figure are shown a shorter-
wavelength pulse and a longer-wavelength pulse, which penetrates further into the
depressed cladding region.
Figure 5.9 The essentials of laser machining and laser surgery, utilising a tightly
focussed spot from a high-peak-power pulsed laser, very similar to laser-induced-
breakdown spectroscopy (LIBS) but with a different aim.
cutter wear and minimal machine vibration. The process also offers smooth
edges and the ability to cut in otherwise inaccessible places, particularly with
the flexibility of fibre-coupled cutting systems.
Laser surgery has had a particular impact in the treatment of ophthalmic
problems such as macular degeneration and cataracts where highly localised
cutting and preparing without the use of the surgeon’s knife is particularly
beneficial. The laser scalpel has also found its place, especially in soft-tissue
surgery where its precision and the minimal consequential bleeding that it
produces, owing to self-cauterisation, offer great benefits. Also, a laser beam
clearly does not require careful sterilisation between procedures, though there is
the need to pay due attention to the delivery system, which could even be a
disposable fibre. Laser surgery has been incorporated into endoscopically
observed surgery and a host of other medical procedures.
Lasers have also found their way into dentistry. There is a clear optical
distinction between decay in a tooth and the intact regions around it, with the
former much more optically absorptive. There is then the potential for thermal
removal rather than removal by drilling. Lasers are also useful in oral surgery, again especially when fibre-coupled to facilitate the precise location of a cutting tool; this, together with the above-mentioned automatic self-cauterisation, is an attractive feature. Indeed, possibly the only disadvantage of lasers is that
the capital outlay compared with a scalpel or a mechanical drill remains high.
Figure 5.10 (labels only): scatter-measurement geometries – nephelometry (input at λ, sample, transmitted light, scatter detectors D1 and D2 at angle θ); spectral nephelometry (white light inputs); and spectroscopy for a white light input (sample and spectrometer).
[Panels (a), (b): normalised transmittance against wavelength (500–600 nm) for typical extra-virgin and typical non-extra-virgin oils; panel (c): scores on principal components PC1, PC2 and PC3 separating the extra-virgin and non-extra-virgin classes.]
Figure 5.11 Extracting information from nephelometry data using principal com-
ponents analysis to separate extra virgin and non-extra-virgin olive oil. There are
many applications for the technique, and numerous signature identification
algorithms.
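A minimal sketch of the principal-components idea, run on synthetic spectra rather than on real olive-oil data, is given below (added here for illustration; the band centres, noise level and channel count are all assumptions).

```python
import numpy as np

# Principal-components analysis: project many-channel spectra onto the few
# directions of greatest variance, where classes often separate cleanly.

rng = np.random.default_rng(0)
wavelengths = np.linspace(500, 600, 50)       # nm, 50 spectral channels

def spectra(centre, n):
    """Noisy Gaussian absorption bands around a class-specific centre."""
    band = np.exp(-((wavelengths - centre)/20.0)**2)
    return band + 0.05*rng.standard_normal((n, wavelengths.size))

X = np.vstack([spectra(530.0, 20),            # class A, e.g. extra virgin
               spectra(550.0, 20)])           # class B, non-extra-virgin

# PCA via the singular value decomposition of the mean-centred data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                        # PC1 and PC2 scores

print("class means on PC1:",
      scores[:20, 0].mean().round(2), scores[20:, 0].mean().round(2))
```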
[Figure 5.12 labels: front-surface parabolic reflector, 5 metres or more across; rays from distant galaxies; guide star; rigid mirror mount; focal point; image capture, e.g. by camera.]
Figure 5.12 Adaptive optics precisely adjusting the parabolic reflector in, for
example, a very large space-borne optical telescope.
Figure 5.13 The basic principles of optical tweezers. Just below the waist of the
beam the longitudinal radiation pressure on a particle is balanced by the longitu-
dinal opposing component of the radial forces due to refraction through the
particle; the momentum transfer is against the photon flow and can balance it.
Optical tweezers rely upon the utilisation of radiation pressure, and the gradients thereof, in a focussed laser beam, as indicated in Figure 5.13. Radiation pressure occurs when a photon, of energy hν and momentum hν/c, is reflected from the surface of a particle; the particle then recoils with a momentum change 2hν/c per photon, corresponding to a pressure 2I/c, where I is the incident light intensity (the optical power per unit beam area).
The forces on the particle from the radiation pressure will increase towards
the focal point of the lens and there will be a cross sectional gradient tending to
keep the particle in the centre of the beam. As the light strikes the particle,
some will be refracted through it and will emerge at a different angle,
imparting a momentum change to the particle equal and opposite to the
momentum change of the light. When the particle is just below the focus (or
waist) of the beam, its momentum change has a component towards the laser
source and under appropriate conditions the radiation pressure and refraction
forces balance to suspend the particle in the beam (for further information see
the Wikipedia article on optical tweezers).
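An order-of-magnitude check, added here with a 1 mW beam assumed (matching problem 6 at the end of the chapter), shows the scale of the forces involved.

```python
# Radiation-pressure recoil on a perfectly reflecting particle that
# intercepts the whole beam: F = 2P/c, the photon momentum flux reversed.

c = 3.0e8
P = 1e-3                                  # assumed laser power, 1 mW

F = 2 * P / c
print(f"maximum radiation-pressure force: {F*1e12:.1f} pN")
# ~6.7 pN: tiny, yet ample against the ~0.005 pN weight of a
# one-micron water droplet.
```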
The forces involved are very small, generally in the piconewton range, and the
focussed spot is typically produced by a high-aperture microscope objective
lens with input laser power in the mW range. Particles with sizes from a few
microns down to 5 nm can be trapped, facilitating the examination of objects
ranging from biological cells to DNA molecules.
Optical tweezers have evolved into many formats since their inception over
30 years ago. They are used in microfluidic experimentation to trap and manipulate individual particles.
5.13 Summary
This applications chapter has indicated the diversity and versatility of photon-
ics as a tool, as an enabler, as a corrective treatment. Of necessity, much has
been omitted and all the discussions had to be brief. Nevertheless, hints have
been given of the roots of the large expansion in photonics since the mid
twentieth century. The discipline is merging into everyday life, with DVD
players, fibre optics to the home, powerful photographic tools and displays, the
laser pointer, laser surgery and machining, photonic therapies and photonic
systems at the local optician's.
There is also a great deal more emerging for the future. The functionalities
offered by photonics will expand enormously. The fabrication tool set, espe-
cially for extremely high-precision machining at the nanometre level, and also
the conceptual framework for future prospects, will broaden significantly. We
shall briefly explore some of these emerging possibilities in the next chapter.
5.14 Problems
2. Grating spectrometers are regularly used for many types of spectral analy-
sis. You are seeking a spectrometer with an operating wavelength range
from 1 micron down to 200 nm with a resolution of 0.5 nm.
(a) What would be needed in terms of the detection array, i.e. the number
of sensors in the charge-coupled device (CCD) detector, the aperture of
the input lens and the power spectral density of the illuminating white
light source? What would be an acceptable signal to noise ratio for
each detection point? Remember that you will need to quantify the
attenuation in an absorption band with an accuracy of, say, 1% of the
input source in order to achieve this performance level.
(b) How might the signal to noise ratio be enhanced without changing any
system elements (except in the detection process – assuming that time
is not too pressing), and also what might be the implications of taking
the desired resolution down to 1 pm?
3. Fourier transform infrared spectroscopy (Figure 5.3) is noted for its poten-
tially high resolution. Consider a spectrometer that is required to operate in
the range 0.5 to 1.5 micrometres with a resolution of 10 pm over that range.
(a) What would be the travel distance necessary for the moving
mirror? From your answer make some estimates – with reasons – of the
ultimate resolution of such a system.
(b) Could it go to GHz resolution, or better, for example?
4. Optical coherence tomography (OCT) (Figure 5.4) is a widely used tool.
(a) For an optical source operating at 1 micron what would be the minimum
spectral width in nanometers needed to achieve a depth resolution of
5 microns? Would such a source be practically available?
(b) From your observations comment on the practical limits in depth resolution
which may be achievable. Assuming that the necessary precise (how
precise?) mirror drives are available, what could restrict the possible depth
range through which an image may be constructed?
(c) The major applications of OCT lie in medicine, predominantly
ophthalmology, but OCT has been used in, for example, measuring
the thickness of a silicon wafer. Discuss – with reasons – whether
the same system which works for silicon could also be effective for
measurements on the eye.
5. The fibre optic gyroscope (Figure 5.7) relies on the fact that light emerging
from the fibre travelling in the direction of rotation will stay in the fibre for
slightly longer than the light travelling in the other direction. It is based on
the Sagnac effect, originally described over a century ago.
(a) Using this basic observation, arrive at a simple expression for the phase
change in a fibre gyroscope as a function of the source wavelength, the
108 Photonic Tools
fibre length, the coil radius and the rotation rate. Then check to see
whether your answer is appropriate (there are good web sources
for this).
(b) From this expression make some observations on what is needed to
obtain a sensitivity of 0.0005 degrees per hour in a loop the size of
that indicated in the figure (you can work out the rough magnitude
of the dimensions from the cables and connectors).
6. This problem concerns the underlying principles of optical tweezers. Sup-
pose we have a laser beam of total power 1 mW focussed through a
microscope objective to a spot 2 microns in diameter.
(a) Make an estimate of the radiation pressure force exerted on a disc one
micron in diameter placed in the centre of the beam.
(b) From the above estimate, indicate how these forces might distribute
themselves around a spherical object of diameter 1 micron.
These estimates will give an order-of-magnitude insight into tweezer
operation.
6
The Future
Figure 6.1 (labels only): a confocal arrangement with a light source (typically a laser, for spatial coherence); a beam splitter; an objective lens; pinhole apertures, the detector pinhole matched to the source; a high-precision sample drive; and a detector.
Figure 6.2 The scanning near-field optical microscope, which has a tapered
single-mode optical fibre with metal coating as the illuminating source to ensure
near-field operation. The tip maintains a constant subwavelength separation from
the sample, using a drive similar to that in an atomic-force microscope (see the
main text).
The positioning systems for the fibre probe need to be very precise mechanically, either to maintain consistent contact with the sample or, far more commonly, to maintain a stable (to nanometres) distance from it.
It is here that another precision microscope technology – the atomic force
microscope (AFM) system – makes its contribution. The probe from the
AFM is modified to hold the optical probe and maintain a highly precise, tens
of nanometres, spacing between the probe and the surface. This facilitates a
local pick up for the optical field and thus generates what has become
referred to as a super-resolution image. The system demands precision
fabrication of the probe itself, especially if it departs from cylindrical sym-
metry. Achieving the necessary precision in the measurement system that determines the location of the drive network, and the precision needed within the scanning x, y and z stages themselves, presents challenges. At present, the
use of these instruments is confined to specialist applications though there are
prospects for applying the basic ideas using simpler and more adaptive
probes. One example at the time of writing is the ezAFM+ from NanoMag-
netics Instruments.
An alternative approach to achieving effective super-resolution imaging is
to treat the sample in such a way that the areas of interest have a highly non-
linear reaction to the illuminating light. Possibly the most reported example of
this concerns fluorescent imaging in biological systems using a cell-selective
label. In this case a somewhat modified version of confocal spectroscopy, in
which a pinhole is used to block out-of-focus light and so improve the image,
can achieve a resolution exceeding the diffraction limit by looking at the
fluorescence induced through the scanning illumination and by knowing, from
the diffraction pattern, the shape of the confocal spot. In principle, then, a
biological cell with dimensions in microns, which is visible using a normal
microscope, can be treated with a fluorescent marker; then, under suitable
illumination and precision scanning, the exact location of this marker can be
determined to accuracies limited by the non-linearity of the transfer function
between the illumination and the consequent fluorescence. Typically, this
corresponds to 20% or thereabouts of an optical wavelength.
This discussion has merely touched on the topic of high-precision imaging
using light. There are numerous other techniques, some requiring well-defined
but complex illumination functions and some requiring advanced deconvolu-
tion mathematics (the received image is a convolution of the structure with the
resolution profile of the system), and there are numerous other labelling
techniques. Currently, all these methods are complex and they all rely upon
precision scanning drives to provide location information. The basic principles
are, however, well rehearsed, so speculation for the future of high-resolution
imaging is centred around the prospects for making the technology more readily accessible to a wider audience. There are obvious uses for such technologies, especially in biomedicine, so there is good reason to be optimistic that they will evolve. For example, three-dimensional printing approaches to fabricating the probes are already emerging.

6.2 Photonic Integrated Circuits

Figure 6.3 (labels only): an integrated optical waveguide on a substrate, with light in, light out and the end cross section indicated.
Figure 6.4 Some simple silicon micro-optics; on the left a V-groove alignment
system for fibres and on the right a rotatable mirror approximately 100 µm in
diameter.
Figure 6.5 Silicon integrated optics exemplified in (left) a rib waveguide and
(right) the tight bends which a high-index difference permits.
[Figure 6.6 labels, left: metal contacts, n Si, n++ Si, p Si, p++ Si, SiO2. Right: overlay source, contacts, matching optics for waveguide launch.]
Figure 6.6 Some sample silicon circuits including (left) a modulator and (right) an
integrated source.
component set on a chip. Also, this chip can be shared with all the necessary electronic drives to enable a versatile, integrated optoelectronics platform which is compatible with the essential elements of CMOS (complementary metal-oxide semiconductor) processing. At the time of writing it has yet to
become a mass production item, but the signs are definitely there for photonic
integrated circuits to emerge as a ‘technology of the present’ relatively soon.
Custom-built systems for specialised applications are already emerging.
Figure 6.7 (labels only): (a) a structure on the ~100 nm scale; (b) a structure on the ~1 micron scale.
metallic structures with these dimensions. For metallic structures the concept
of ‘conductivity’ is well understood, and plasmonics is essentially this concept
applied to optical frequencies but recognising that the free electrons in a metal
can change their behaviour as the plasma resonance is approached. Nanopho-
tonics and plasmonics are closely interlinked and have been described, by at
least one of the pioneers in the field, as transferring electrical circuits into the
photonic frequency range, though with due recognition of the differences in
electrical properties at these very much higher frequencies.
There have been many demonstrations of nanophotonic circuits, a couple of
which are shown in Figure 6.7: a ring gap resonator, which operates very similarly to an inductor–capacitor circuit, and a Yagi antenna array, which
is very close conceptually to the ultra high-frequency TV receiver antenna and
indeed performs in a similar way.
There are other features of nanophotonic circuits which are broadly analo-
gous to their much lower-frequency counterparts. Perhaps the most useful of
these is the exploitation of metals as electric field enhancers at a metal –
dielectric interface, particularly for pointed metallic structures. An optical
'lightning conductor'! This concept has been used for many years, particu-
larly in the Kretschmann configuration as a sensitive probe for the dielectric
constant of liquids (Figure 6.8(a)). This basic concept has also been
coupled into photonic integrated circuits illuminated by a fibre waveguide
(Figure 6.8(b)). The generic idea has also been used to enhance the electric
field in solar cells and significantly increase the efficiency of some biophotonic
therapies.
Nanophotonics and plasmonics currently attract widespread and enthusiastic
research endeavours, since even this brief discussion should have indicated
their remarkable potential. In common with all the topics mentioned in this
chapter there are textbooks and journals dedicated to these topics.
6.4 Metamaterials 117
Thin
TM (P) wave noverlay
metal
launched. film
nsubstrate
6.4 Metamaterials
A metamaterial (mentioned briefly in Chapter 4) is conceptually a material
ensemble with structural dimensions far below the wavelength of the electro-
magnetic (or indeed acoustic) fields with which it interacts. Through this
interaction, structural variations in the metamaterial can produce a response
that would otherwise be unobtainable. Whilst structures such as gratings
could be argued as falling within this classification, a moment’s reflection
demonstrates that the optical properties of these structures are dictated by
the properties of their component materials and the way in which these
component materials are organised. The component materials behave in such
structures exactly as their bulk equivalents do. Metamaterials, in con-
trast, are made-to-measure three-dimensional materials with structural vari-
ations in dimensions considerably less than a wavelength, resulting in an
overall material which behaves as a bulk system with entirely different prop-
erties. The incoming optical wavelength is too long to ‘see’ the structural
artefacts!
Typically, metamaterials can comprise arrays of nanophotonic circuits
(Figure 6.7) with resulting properties dictated by this structure (resonance,
for example). Alternatively, arrays of other material structures can be arranged
in such a way that the incident light beam ‘sees’ the average properties of the
structure (i.e. any structural variations must be well controlled and extremely
small compared to a wavelength). Another possibility is arrays of quantum
dots, where the optical properties of the materials themselves are modified by quantum confinement within the dots.
Figure 6.9 How cloaking might work – but broadband cloaking, even for simple
shapes, and full cloaking for complex shapes have yet to be realised.
[Figure 6.10 labels: material properties; interfaces, metal and dielectric; dimensional variations >> wavelength.]
Figure 6.10 The building blocks for designing optical materials and the essential
options dependent upon critical scale.
These three conceptual building blocks (Figure 6.10) are common to any
situation in which waves interact with materials, whether the waves be elec-
tromagnetic, mechanical, acoustic or quantum mechanical. They are at the
heart of much of our scientific and technological evolution.
Figure 6.11 Illustrating graphene: a single layer of carbon atoms linked together
with free electrons flowing within the layer. The atoms are about 0.14 nm apart.
6.7 Graphene and Other Exotic Materials
[Panels (labels only): transmittance (%) against wavelength (200–800 nm), and against film thickness (0.1–1000 nm), for graphene compared with ITO.]
Figure 6.12 The properties of graphene in comparison with the currently pre-
ferred transparent contact material, indium tin oxide (ITO).
Figure 6.13 (labels only): a 10.6-micron launch; a 5 nm top layer; a graphene layer transmitting surface plasmons; a 27 nm h-boron nitride layer; and control electrodes, the central electrode 150 nm long with 100 nm gaps.
[Figure 6.14 additional labels: cryostat environment; light input; overall scale ~ 20 mm.]
Figure 6.14 An exploratory mechanical structure for squeezing light in the phase
domain, based upon nanoscale micromechanical resonators. The light grey area is
a vacuum enclosure. The resonator, shown in black, is freely mounted over a
cavity in the substrate structure. The striped element coming from the right-hand
side is an optically excited resonator. The long striped rectangles facilitate
coupling between the light and the mechanical resonator.
Figure 6.15 The basics of the frequency comb utilising a 2.5 femtosecond pulse in
the green and, in this example, based upon a 50 terahertz pulse repetition rate. More
typically, a 1 GHz repetition rate would give 400,000 'teeth' over the same band-
width, in principle enabling accurately controlled spectroscopic measurements.
the 1980s but the thought of beating the shot noise limit in a compact and cost-efficient format remains a significant challenge and a source of great interest.
A frequency comb is another example of an intriguing optical source. The
concept originated as a means towards high-precision spectroscopy, where the
source is locked through some form of multiplier to a radio frequency master
oscillator (Figure 6.15). The output from such a system when viewed in the
time rather than the frequency domain is a sequence of very short pulses
separated by the master frequency. The exact techniques through which such
‘combs’ can be generated involve ultra-precision non-linear optics and opto-
electronic design. The special feature of the frequency comb is that each
component is accurately phase locked with respect to the others, resulting, in
126 The Future
the time domain, in unrivalled phase noise performance among the various
pulses within the pulse train. The applications of this continue to be explored:
optical clocks using frequency combs have been demonstrated and highly
precise spectroscopy, particularly in the time domain, is another area of
exploration. Just as in squeezing, a few tentative demonstrations of combs
realised in photonic integrated circuits have begun to emerge. In common with
squeezing, frequency combs continue to attract much attention.
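The bookkeeping behind a comb is simple enough to sketch numerically (added here for illustration; the repetition rate, offset frequency and optical band are all assumptions). Each tooth sits at f_n = f_ceo + n × f_rep, so two radio frequencies pin down hundreds of thousands of optical frequencies at once.

```python
# Frequency-comb bookkeeping: tooth n lies at f_ceo + n*f_rep, so an
# entire optical band of teeth is fixed by two RF numbers.

f_rep = 1e9                # assumed 1 GHz repetition rate (tooth spacing)
f_ceo = 0.35e9             # assumed carrier-envelope offset frequency

band_lo, band_hi = 375e12, 750e12     # roughly 800 nm down to 400 nm
n_lo = int((band_lo - f_ceo)//f_rep) + 1
n_hi = int((band_hi - f_ceo)//f_rep)

print(f"teeth in band: {n_hi - n_lo + 1:,}")
print(f"first tooth  : {(f_ceo + n_lo*f_rep)/1e12:.6f} THz")
# Every tooth is phase locked to the RF master oscillator, which is what
# makes the comb such a precise spectroscopic ruler.
```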
6.9 Summary
The aim of this final chapter has been to give some insight, albeit superficial
and far from complete, into the immense potential which photonics offers for
the future. There are continually evolving new photonic tools. In parallel,
continuing new societal challenges emerge for which the tools offered through
photonics promise innovative and effective solutions. Figure 6.16 attempts to
encapsulate at least some of this potential on the basis of our discourse
throughout the book and the remarkably consistent recognition among numer-
ous international organisations of the challenges which society currently needs
to address. There are other indicators too, including, for example, the fact that,
at the time of writing (2018) the European production output in photonics
exceeded that in electronics and the research and development endeavour
within this sector continually gains more momentum. There is indeed some
confidence in asserting that ‘photonics is the electronics of the twenty-first
century’.
Figure 6.16 A view of the technological challenges currently facing society and
the areas to which photonics promises to contribute (based on a US National
Academy of Engineering 2009 list, still current, and similar to the listings for
many professional organisations, e.g. Photonics 21 in the EU, the National
Photonics Initiative in the US etc.). (The prevention of nuclear terror is separately
boxed because photonics is unlikely to make a direct contribution to this large-
scale technological challenge.)
6.10 Problems
Problems involving predicting the future are by their very nature encroaching
on the totally inaccessible! Answers will be educated guesswork and will vary
with time and among communities. The whole ‘future scenario’ discourse is
one to speculate about with fellow enthusiasts. So the problems here are in two
categories: in section A, the reader is invited to look in a little more depth at a
selection of current ideas within the community and in section B there is a
discursive section inviting speculation on what might happen and why.
Section A
1. (a) How could a Mach–Zehnder interferometer in integrated optic format
become an intensity modulator (Figure 6.3)? Remember the electro-
optic effect and note that the early devices, of which this interfer-
ometer is an example, used lithium niobate as the substrate.
(b) What parameters in this device could limit its modulation bandwidth
and its possible modulation depth? How might this change if your final
system application were the design of a phase or frequency modulator?
2. (a) Figure 6.6 shows some silicon integrated optics. How does the modulator
(on the left-hand side of the figure) function in this particular case?
(b) When compared to the device in Figure 6.3 what might its limitations
be in terms of operational optical wavelength range and in achievable
phase and intensity modulation depths?
3. (a) Figure 6.8(a) shows the field distribution for a wave propagating along a
dielectric–metal–dielectric interface. Suppose that the lower layer (sub-
strate) is one face of an equilateral prism and light is incident on one of
the other faces. How does the incident angle on this face required to
produce propagation along the metallic surface depend on the propaga-
tion constant of the surface plasmon wave? How might this change if
the overlay index and thickness were varied? The clue lies in phase
matching at the interface. This prism arrangement is the basis of the
Kretschmann configuration used in overlay index measurements.
(b) Why does this arrangement need a TM wave to be launched and what
would happen to a TE wave? (In a transverse magnetic (TM) wave the
electric field oscillations are perpendicular to the interface under
consideration, and in a transverse electric (TE) wave they are parallel
to the plane of the interface.) The system shown in Figure 6.8(b) has
no facility for phase-matched launching. Is this important and if so
(or, indeed, if not), why?
128 The Future
4. Figure 6.13 shows a phase modulator which has been demonstrated for the
10 micron wavelength region. Given the dimensions in the figure, and
given the observation that the signal travels from the launch electrode input
to the receive probe as a surface plasmon (i.e. an electric current), comment
on how the modulator modifies the phase delay and on any implications this
might have for the effective electron velocities in the graphene layer.
Figure 6.12 also contains some pertinent background.
5. Figure 6.14 shows a mechanical structure fabricated from silicon and
designed to perform optical squeezing, in order to reduce the phase noise
below the shot noise limit, albeit in practice to a modest extent. Consider how
this precisely micro-machined structure works. Why does it need to operate in
a cryostat? How would you view the future prospects for such a concept?
Section B
6. There are many respected sources of information overviewing interesting
technical developments in photonics (Photonics 21, US National Photonics
Initiative and several others, including the professional societies in optics
such as SPIE and OSA). Look for the latest freely available reports on trends
from these organisations and compare and contrast these predicted trends for
agreements and differences. How might these trends map into the societal
needs identified by numerous other professional organisations such as the
National Academies of Engineering, the latest EU research programmes and
socio-technological organisations such as the Royal Society of Arts, UK?
7. Use collective imagination and insight (remembering that fresh views are very
valuable!) to consider other feasible photonics initiatives, such as combining
parallels from the bio-sciences, materials science, very-high-precision machin-
ing, quantum systems, electronics and information availability, for example.
8. Speculate on the detail of a few future technologically possible applications
suggested in Figure 6.16. Technological feasibility is, however, but one
factor influencing the eventual realisation of a new technology. Community
acceptance is perhaps the most important of these. Will people use it and be
willing to pay? What factors may come into this in the specific cases you
have chosen? Considerations of safety, and standards for demonstrating safety, repeatability of performance, rivalry with established practices aimed at performing a similar function, and many similar factors come into play.
Look up Porter’s five forces in this context.
Appendix 1
More About the Polarisation of Waves
Figure A1.1 The electric field vibrations in various polarisation states: (a) a simple
linear state; (b) a composite linear state with horizontal and vertical components;
and (c) a circularly polarised state with vertical components shifted by 45°.
Figure A1.2 (labels only): light entering from air at the input plane (A) and propagating to (B) through a birefringent medium, with its fast and slow axes indicated.
At point (B) in the diagram the two components are in quadrature, so the output from this thick-
ness of the medium would be circularly polarised. If we were to proceed
further, the two components would become 180° out of phase, and the output
would be linearly polarised but orthogonal to the input at (A), and so on.
Birefringent materials used in this manner – often referred to as phase plates –
are extremely useful components in optical systems.
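The behaviour just described can be checked with a short Jones-calculus sketch (added here; the input is assumed to be linearly polarised at 45° to the fast axis).

```python
import numpy as np

# Jones-calculus view of a phase plate: light linearly polarised at 45
# degrees to the fast axis emerges circular after 90 degrees of relative
# delay, and linear-but-rotated after 180 degrees.

def waveplate(delay):
    """Jones matrix of a retarder with its fast axis horizontal."""
    return np.array([[1, 0], [0, np.exp(1j*delay)]])

inp = np.array([1, 1]) / np.sqrt(2)          # linear at 45 degrees

for delay, label in [(np.pi/2, "quarter wave"), (np.pi, "half wave")]:
    print(f"{label}: {np.round(waveplate(delay) @ inp, 3)}")
# quarter wave -> equal components in quadrature (circular polarisation)
# half wave    -> linear polarisation, orthogonal to the input
```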
This moves us into questions concerning the influences of the input light on
the output fields. As an example suppose that the distance between the point
(A) and the point (B) in the diagram was kept constant but that the input
frequency was doubled. Assuming no net dispersion in the behaviour of light
travelling along the fast or slow axis then the output would be back to linearly
polarised but at 90° to the input.
This gives a hint on how a broadband source might behave when passed
through a phase plate of this nature in this particular format. The broadband
source would give slightly different polarisation states at the output depending
upon the wavelength of the input, and these differences will become greater
with a longer path within the birefringent medium, which can be, in principle,
extended as far as we wish. Thus the light ends up unpolarised. This assumes
that the input state is polarised, and for sources such as super-luminescent light
Appendices 131
emitting diodes this is the case. A depolariser which operates in this manner is
often useful in conjunction with such sources.
The situation gets more complex if we remove the assumption of spatial
coherence – in other words if the input beam has components travelling in
many directions rather than simply in one direction as in Figures A1.1 and
A1.2. This is an interesting problem to ponder but in practice many situations
in optics can be visualised using the approaches that we have adopted here.
Further Reading
E. Collett, Field Guide to Polarization, SPIE Field Guides, Vol. FG05, SPIE, 2005.
Appendix 2
The Fourier Transform Properties of Lenses
Figure A2.1 The diffraction pattern from a grating with amplitude transmission
varying sinusoidally in one dimension.
Figure A2.2 A square wave grating – only the odd harmonics appear in the
diffraction pattern with amplitudes varying as the inverse of the harmonic number
(diffraction order).
Figure A2.3 A convex lens set-up for obtaining the Fourier transform of an input
object.
Note that at twice the angle of diffraction of the fundamental component there is no diffracted wave – square waves contain only odd harmonics. The third harmonic has one-third of the amplitude and three times the deflection (in the sin θ ~ θ approximation) of the first harmonic, and so on. Further-
more, the amplitudes of the odd harmonics vary inversely as the harmonic
number (diffraction order).
Now imagine a convex lens collecting all this diffracted light (Figure A2.3).
The back focal plane of the convex lens (neglecting distortions, aberrations
and so on) will exhibit a series of dots spaced by a distance proportional to the
diffraction angle (strictly speaking, the tangent of the diffraction angle is the distance of each spot from the point where the optical axis meets the back focal plane, divided by the focal length, but we are assuming the same sin θd ~ θd approxi-
mation as before). Furthermore the amplitude of these spots in the focal plane
will be proportional to the amplitudes of these spatial frequency components. If
the lens collects all the diffracted light, an accurate image will be produced
wherever the image plane is given by the traditional lens formula. However, if
the lens misses any of this light then the corresponding amplitude components
will be reduced or even eliminated, so the image will have a different spatial
harmonic amplitude distribution from that of the original grating. For example, if
we miss the entirety of the third-order beam and above, we will be back to a sinusoidal amplitude variation in the image plane at the fundamental spatial frequency of the grating, since we now have just the average illumination and the two first-order components to make up the final image.
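The argument is easily verified numerically, as in the sketch below (added here; the grating period and sample count are arbitrary choices).

```python
import numpy as np

# The spectrum of a square-wave grating contains only odd harmonics, with
# amplitudes falling as the inverse of the harmonic number; blocking all
# orders above the first leaves the mean plus a single sinusoid.

N, period = 1024, 64
x = np.arange(N)
grating = ((x % period) < period // 2).astype(float)   # 0/1 square wave

F = np.fft.fft(grating)
k0 = N // period                                       # fundamental bin

for n in range(1, 6):
    print(f"harmonic {n}: {np.abs(F[n*k0])/np.abs(F[k0]):.3f}")
# -> 1.000, 0.000, ~1/3, 0.000, ~1/5

# Keep only the average and the two first orders: a sinusoidal 'image'.
mask = np.zeros(N)
mask[[0, k0, N - k0]] = 1
image = np.fft.ifft(F * mask).real     # mean plus one spatial frequency
```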
The final nuance comes if the input object (our input grating) is placed
exactly in the front focal plane of the lens. In this case the focussed light in the
Appendices 135
back focal plane represents both the amplitude and phase components of the
input amplitude transmittance. The concept of phase components in the amp-
litude transmittance may appear somewhat arbitrary – where is the phase
reference? The simplest approach to this is to take the mean transmittance
(zero deflection) as zero phase and define all the phase components with
respect to that. This is also effectively the standard approach in normal Fourier
transform operations and gives an insight into how an object which has only a
phase transmittance and no change in amplitude can be visualised by changing
the zero-frequency component of the Fourier transform in the back focal plane
of the lens. The reader should examine the Fourier transform expressions for a
phase-modulated carrier and the striking way in which changing the phase or
even removing the fundamental components turns the input phase distribution
into a plane amplitude output distribution.
Figure A2.3 shows a set-up that can be used to perform Fourier transforms
on any arbitrary input object and also indicates how, by manipulating the
transmission properties at the Fourier transform plane, the properties of the
resulting image, formed by the lens after the Fourier transform plane, can also
be modified. This concept has found enormous application in feature enhance-
ment (e.g. in highlighting the edges in an object and thus producing an image
with the lower spatial frequency components attenuated or sometimes
removed) and in various forms of digital image manipulation software. For
the latter, of course, the hardware between the input object’s image and the
final output is not there. However, in imaging systems from microscopes to
telescopes these phenomena are always present and this approach to hardware-
based imaging is often used. Perhaps the most familiar is ‘dark ground
imaging’ in microscopes, which enables the visualisation of low-contrast
objects, including those comprising only phase-delay detail. These phenomena
determine the quality of the final image, how this image quality can vary with
the illuminating wavelength (e.g. through chromatic aberration in lenses or the
fact that blue light is less diffracted than red) and many other features of
imaging systems.
Further Reading

Appendix 3
The key here lies in the concept of polarisability – in other words the
modifying effect of an electric field on the charge distribution in an atom or
molecule. Figure A3.1 shows that an applied field will induce a net dipole
moment on the molecule and that this in turn distorts the field distribution
around the polarised molecule. Consequently, in a dielectric medium the
effective field on each individual molecule is the sum of the applied field
and that due to the induced dipole moments within the medium – here assumed
homogeneous and isotropic.
The dielectric constant or relative permittivity, ε, gives a measure of the
polarisation effect. Consider the capacitor shown in Figure A3.1. By
definition,
ε = applied field / nett field = C1/C0        (A3.1)
where C1 and C0 are respectively the capacitance with and without the dielec-
tric present. The ratio ε can be quite large; for silicon, for example, it is nearly
12 owing to the near-cancellation of the applied field within the dielectric.
The Clausius–Mossotti equation relates the dielectric constant of a medium
to the polarisability of its molecules. For a particular frequency within the
absorption spectrum of the molecule, it can be shown that
(εr − 1)/(εr + 2) = (1/(3ε0)) Σj Nj αj        (A3.2)
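A small numerical illustration, added here with assumed values of the summed polarisability term, shows how strongly εr responds as the right-hand side approaches unity.

```python
# Invert the Clausius-Mossotti relation: writing the right-hand side as s,
# (eps_r - 1)/(eps_r + 2) = s  gives  eps_r = (1 + 2s)/(1 - s).
# The values of s below are assumptions for illustration.

def eps_r(s):
    return (1 + 2*s) / (1 - s)

for s in [0.1, 0.5, 0.7857]:
    print(f"s = {s:6.4f}  ->  eps_r = {eps_r(s):6.2f}")
# As s approaches 1 the permittivity climbs steeply; s ~ 0.786 reproduces
# the eps_r ~ 12 quoted for silicon above.
```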
Figure A3.1 Illustrating how an applied electric field displaces the charges in a
dielectric and how this produces a depolarising field that partly cancels the applied
field within the dielectric.
Figure A3.2 A simplified line structure for absorption in gases around the near
infrared (around 1 to 3 microns wavelength). There is a central peak with
numerous rotational sidebands, typically about 2 to 3 nm apart.
Further Reading

Appendix 4
The ideas of phase and group velocity for a travelling wave have an important role
to play in photonics, and also in any other wave related transmission phenomena.
A whole chapter in Pierce’s splendid text (see the Further Reading section) is
dedicated to the topic. The phase velocity, vp, is the speed with which a wavefront
at a particular frequency travels through a medium. The group velocity, vg, is the
speed with which a wave packet containing a spread of frequencies travels through
the same medium. Implicitly this spread of frequencies is an essential feature if the
wave is to carry energy or information. Another essential feature is that these ideas
are most apparent in material structures rather than in a uniform material space
without dispersion: in such a space, the phase and group velocities are the same.
Remember though that here the term ‘structure’ can include variations in material
properties over the spread of frequencies in the wave packet, so for example when
there are material resonances the differences between phase and group velocity
will manifest themselves. Another important implication is that, whilst the speed
with which energy travels cannot exceed the speed of light in the medium, the
speed with which the wavefronts move can exceed this velocity. Recall that the
refractive index refers to a particular frequency, hence to the phase velocity; all
that’s travelling here is an important abstract concept!
Here, we will briefly explore the meaning of phase and group velocity and
their representation in what has become known as the omega–beta (ω–β)
diagram, an example of which is shown in Figure A4.1. This figure indicates
a structure which does not transmit below the cut-off frequency shown and for
which the phase velocity consistently exceeds the velocity of light c in the transmitting material, tending to infinity as the cut-off frequency is approached. As a
comparison the dispersion diagram for a very large section of this unstructured
material is also shown. For this baseline material, there is zero dispersion (no
velocity variation with frequency) and so both phase and group velocities are c.
In practice, this dispersion curve will only strictly apply to a vacuum.
[Axes for Figure A4.1: ω (rad/s) against β = 2π/λ (cm−1).]
Figure A4.1 A typical ω–β dispersion diagram for a structured system, with that
of a corresponding non-dispersive uniform material shown as a dotted line.
Figure A4.2 (labels only): the paths taken in the guide by the 'interfering waves' between perfect conductors, at angle θ, showing the travelling phase front and the energy flow in the component beams.
The electric field must fall to zero at the interface with the metallic guide surfaces, with no additional zeroes in the
interference pattern between the walls.
The energy in these component wave fronts travels in the directions indi-
cated and at a velocity c dictated by the medium, which we take as a vacuum.
However, the energy propagating along the waveguide will flow at the com-
ponent of this velocity along the waveguide – in other words at c cos θ. This is
the group velocity along the guide. Note also that the precise angle required in
the interfering beams to produce the necessary patterns will vary with fre-
quency. Hence we shall see group velocity dispersion.
What about the phase fronts travelling along the guide? The points at which
the indicated interfering components cross at the waveguide-to-air interface
are the travelling wavefronts along the guide. If we take the wavelength in
vacuum along the direction of travel of the component waves as λ then the
distance between these crossing points along the direction of the guide – the
phase wavelength – is λ/cos θ. The phase velocity is then the phase wavelength
times the optical frequency – namely c/cos θ. You might have noticed that the
product of the phase and group velocities in this case is c2. This is a general
rule, with only a very few, rarely encountered, exceptions.
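The rule can be checked numerically from the dispersion relation of such a guide, ω² = ωc² + c²β², which is equivalent to the c/cos θ and c cos θ geometry above (the sketch below is added here; the cut-off frequency is an arbitrary assumption).

```python
import numpy as np

# Verify v_phase * v_group = c**2 for a guided mode with
# omega**2 = omega_c**2 + (c*beta)**2.

c = 3.0e8
omega_c = 2*np.pi*15e9            # assumed cut-off frequency (rad/s)

beta = np.linspace(1, 500, 5)     # propagation constants (rad/m)
omega = np.sqrt(omega_c**2 + (c*beta)**2)

v_phase = omega / beta            # always greater than c
v_group = c**2 * beta / omega     # d(omega)/d(beta), always less than c

print("v_phase*v_group / c^2 :", np.round(v_phase*v_group/c**2, 12))
```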
Further Reading
J. R. Pierce, Almost All About Waves, Dover Books, 2006. Reprinted from the original
1974 text.
Appendix 5
Some Fundamental Constants
Appendix 6
Comments and Hints on Chapter Problems
To start with, we give a quick recap on the comments in the Preface. To reach
solutions, there will often be a need to look up sources and to appreciate trade-
offs. Central to solving problems is the idea that you discuss any approach with
others: we can certainly learn from our fellows in addition to our formal
teachers. Often, there is no such thing as a totally right answer, especially to
any real engineering problem. So, collectively, work out the best approach and
go for the best solution you can find. Then, very importantly, critically analyse
this solution. Make sure it stands up to scrutiny! Do not forget that there is
much source material on the web but, even here, critically assess what you
find. Very occasionally it can be misleading.
Also, exchange your views through the community; there will be web-based
forums from which you can also learn from the experiences of others.
A6.1 Chapter 1
The principal theme of all four of these Chapter 1 problems is the following
question. ‘When do photons really matter in our endeavours to usefully
understand the world around us?’ Associated with this question is the matter
of the sometimes erroneous or ambiguous use of scientific language.
Problem 1
This concerns the ideas of creating and detecting electromagnetic radiation. It
is reasonable to work with the model according to which transmission always
takes place as electromagnetic waves. However, the way to view the creation
and detection processes varies depending on circumstances. The temperature
in deep space can be rapidly determined by a question in your web browser.
Problem 2
Microwave photonics is an ambiguous term: it is used occasionally in situ-
ations when microwaves can be viewed as photons (and you may dig around
on the web for suitable examples). More often, though, it refers to the
applications and technologies around modulating an optical signal at micro-
wave frequencies (typically in the 1 to 100 GHz band). Optical communication
applications in deep space are beginning to look promising – so is one photon
modulating another? Is this a combination of electromagnetics and electronics?
This is a fruitful topic for discussion!
Problem 3
This is a straightforward extension of the thought processes above . . .
Problem 4
This highlights the much vaunted wave–particle duality discourse. Look up
X-ray diffraction, then think of the implications – and how hot would the
world need to be for X-rays to be normally viewed as being propagated as
electromagnetic waves in a medium? This estimate also illustrates the prag-
matic approach to waves, particles and electric currents explored in Chapter 2.
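To put a very rough number on this, here is a sketch using Wien’s displacement
law; the choice of 1 nm as a ‘typical’ X-ray wavelength is our assumption:

```python
# How hot must a black body be for its emission to peak in the X-ray region?
# Wien's displacement law: lambda_peak * T = b.
b = 2.898e-3            # Wien's displacement constant, m*K
wavelength_peak = 1e-9  # assumed 'typical' X-ray wavelength, 1 nm
T = b / wavelength_peak
print(f"T ~ {T:.1e} K")  # ~3e6 K -- nothing like everyday temperatures
```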
A6.2 Chapter 2
For the remaining chapters, we will discuss some of the problems, maybe a
sample or maybe all of them, and then encourage you to explore answers
among yourselves.
In this chapter, perhaps problem 4 is the most pertinent. In problem 4(a) –
deriving Snell’s law – the key here is that the projections of the wavefronts
onto the interface must match on each side and also that the phase velocity of
the incident and refracted rays on each side is determined by the relevant
refractive index. Another way of saying this is that the optical phase velocities
along the interface are the same on each side of the interface between the two
different materials;
as mentioned before this is an important generic observation. Also, when you
sketch the situation, notice that the wavelength components along the interface
exceed those in the input beam itself. If the input is from air or vacuum, then
the phase velocity at the interface exceeds c! This is a useful conceptual tool to
visualise situations when v_phase > c.
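A minimal sketch of this wavefront-projection argument follows; the indices
and the angle of incidence are illustrative assumptions:

```python
import numpy as np

c = 3.0e8
n1, n2 = 1.0, 1.5            # assumed indices: air into glass
theta1 = np.deg2rad(40.0)    # assumed angle of incidence

# Matching the wavefront projections along the interface is Snell's law:
# n1 sin(theta1) = n2 sin(theta2).
theta2 = np.arcsin(n1 * np.sin(theta1) / n2)

# The trace (phase) velocity along the interface is the same from both
# sides -- and exceeds c when the input side is air or vacuum.
v_trace_1 = (c / n1) / np.sin(theta1)
v_trace_2 = (c / n2) / np.sin(theta2)
print(f"refracted angle: {np.rad2deg(theta2):.2f} deg")
print(f"trace velocity, incident side : {v_trace_1:.3e} m/s (> c)")
print(f"trace velocity, refracted side: {v_trace_2:.3e} m/s (the same)")
```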
For problem 4(b), look around and think about everyday phenomena – and
these can be acoustic or mechanical as well as visual.
In Problem 4(c), full derivations for the Fresnel reflection coefficients are
quite involved, but the final results are very definitely of interest. In terms of
amplitude reflection coefficients and transmission coefficients we have, for
waves incident from a region of index n1 entering a region of index n2 via a
planar interface:

rs = (n1 cos θ1 − n2 cos θ2) / (n1 cos θ1 + n2 cos θ2)
ts = 2n1 cos θ1 / (n1 cos θ1 + n2 cos θ2)
rp = (n2 cos θ1 − n1 cos θ2) / (n2 cos θ1 + n1 cos θ2)
tp = 2n1 cos θ1 / (n2 cos θ1 + n1 cos θ2)

Here s and p denote polarisations perpendicular and parallel to the plane of
incidence, and θ1 and θ2 are related by Snell’s law.
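A small sketch evaluating the s-polarisation pair is given below; the indices
and angle are assumed test values, with normal incidence from air into glass
as a familiar check (about 4% power reflectance):

```python
import numpy as np

def fresnel_s(n1, n2, theta1):
    """Amplitude reflection and transmission coefficients, s polarisation."""
    theta2 = np.arcsin(n1 * np.sin(theta1) / n2)  # Snell's law
    denom = n1 * np.cos(theta1) + n2 * np.cos(theta2)
    rs = (n1 * np.cos(theta1) - n2 * np.cos(theta2)) / denom
    ts = 2 * n1 * np.cos(theta1) / denom
    return rs, ts

rs, ts = fresnel_s(1.0, 1.5, 0.0)   # normal incidence, air to glass
print(f"rs = {rs:.3f}, power reflectance = {rs**2:.3f}")  # ~0.04
```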
A6.3 Chapter 3
The problems here all explore what happens when light passes through a
material. Here, the basic possibilities are absorption and scattering, with
absorbed light turning into heat, light at other wavelengths, electronic carrier
generation, or refractive index changes. Consider what happens at interfaces, as already
mentioned in Chapter 2. In some places the discussion here is a preamble to
topics discussed in Chapter 4, for example, birefringence.
Problem 1
This concerns the possible ways in which radiation from the sun (assume a
black body at the temperature mentioned in the text) is altered by passing
through the earth’s atmosphere. First, calculate the spectrum as a function of
wavelength. Then consider the various phenomena which can modify the
spectrum between its arrival at our atmosphere and its arrival at earth’s surface.
You will need to explore absorption and scattering and also the ways in which
these phenomena vary with weather conditions, where for example both scatter
(from clouds in particular) and atmospheric content (water vapour) can change.
There are also implications regarding altitude – how would things change at
8000 m?
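For the first step, here is a minimal sketch of the black-body calculation; the
5800 K surface temperature is our assumed value and may differ slightly from
the figure quoted in the text:

```python
import numpy as np

h, c, k = 6.626e-34, 3.0e8, 1.381e-23  # Planck, speed of light, Boltzmann (SI)
T = 5800.0                              # assumed solar surface temperature, K

def planck(wavelength, T):
    """Black-body spectral radiance, W sr^-1 m^-3."""
    return (2 * h * c**2 / wavelength**5) / \
           (np.exp(h * c / (wavelength * k * T)) - 1)

wavelengths = np.linspace(0.1e-6, 3e-6, 500)   # 0.1 to 3 microns
spectrum = planck(wavelengths, T)
peak = wavelengths[np.argmax(spectrum)]
print(f"peak wavelength ~ {peak*1e9:.0f} nm")  # ~500 nm, in the visible
```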
And, as for most of our problems, there is scope for teamwork in this
problem. Again, there is no totally right answer, so a good basic understand-
ing is essential in order to weigh up the different possibilities!
Problem 2
The approach here could be to start by taking 1 mW as the same power level all
along the fibre, for 50 km, and see what comes out of that. Thereafter, examine
the situation where the average loss is a typical value (look this up – or assume,
for a start, that it is 0.5 dB per km). What is the input power needed to average
1 mW over the fibre length? Integrate this distribution over the length to obtain
the total Kerr-induced phase change. Next, examine the same process for, say,
1 dB/km and 0.1 dB/km.
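A sketch of the averaging step, assuming the suggested 0.5 dB/km over 50 km:

```python
import numpy as np

loss_db_per_km = 0.5
length_km = 50.0
alpha = loss_db_per_km * np.log(10) / 10.0   # power attenuation, 1/km

# Power along the fibre: P(z) = P_in * exp(-alpha * z), so the average over
# the length is P_in * (1 - exp(-alpha * L)) / (alpha * L).
aL = alpha * length_km
avg_factor = (1 - np.exp(-aL)) / aL
P_in_mW = 1.0 / avg_factor      # input needed for a 1 mW average
print(f"averaging factor = {avg_factor:.3f}")
print(f"input power for 1 mW average: {P_in_mW:.2f} mW")
```

Repeating the calculation for 1 dB/km and 0.1 dB/km then needs only the first
line changed.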
Part 2 of the problem is about the mixing processes which occur in non-
linear media and the generation of sum and difference signals. You will need to
think this process through, but the clue lies in the fact that changes induced by
frequency 1 will also impact on the transmission properties of frequency 2. As
for ‘significant’ – you can access the literature on four-wave mixing in optical
communications when two signals very close to each other in wavelength are
transmitted along a fibre, or you can decide on appropriate-sounding criteria
and then compare with actual practice. An appropriate starting point might be
that the amplitude of the sum and difference signals is 10% of the separate
signals (20 dB down), for example. In cases like this it is always helpful to
initially establish some guidelines to get a feel for the situation of interest.
Problem 3
This is about detecting light and how detection is almost inevitably a function
of optical wavelength. The differences between bandgap (photon) and bolo-
meter (power-into-temperature-change) systems are discussed in the text. You
might like to analyse this further, however, and think about the impact of the
modulation bandwidth on the performance of the two types of system and also
the achievable signal to noise ratios for each type – how small an input power
change could be detected by the bolometer and the (shot noise limited)
bandgap detector? How might these thresholds vary with the optical
incident power?
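For the shot-noise-limited bandgap detector, a rough sketch along these lines;
the responsivity, steady power and bandwidth are all assumed values:

```python
import numpy as np

q = 1.602e-19   # electronic charge, C
R = 0.5         # assumed responsivity, A/W
P = 1e-3        # assumed steady optical power, W
B = 1e3         # assumed detection bandwidth, Hz

I = R * P                           # mean photocurrent
i_shot = np.sqrt(2 * q * I * B)     # rms shot-noise current
dP_min = i_shot / R                 # power change giving unity SNR
print(f"photocurrent                 : {I:.2e} A")
print(f"shot-noise current           : {i_shot:.2e} A rms")
print(f"minimum detectable power step: {dP_min:.2e} W")
```

Note that this threshold rises only as the square root of the steady incident
power.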
The final part of this problem, concerning the design of an ‘ideal’ photo-
detector, requires some thought on what is ideal. Yes, it needs to respond over
the wavelength region specified. But the input power range is not specified, so
making some assumptions seems appropriate. Start with an
input of 1 mW and then consider the design implications for inputs of 1 micro-
watt and 1 watt. Also, we have not stated the operational bandwidth for a
modulated optical signal. However, here it is reasonable to assume for a start
that the source is continuous.
Working from these assumptions leads to a starting point based on a small,
thermally well-insulated bolometer with appropriate thermal conductivity
coefficients (look up some reasonable values) designed into a suitable resist-
ance measurement bridge. In contrast you will have a shot noise limited
bandgap detector with the implications on responsivity previously explored.
Could band gap detectors be stacked to enhance the lower wavelength
responsivity?
There’s much to explore in this problem, and again it is a discussion topic
among a group more than an individual endeavour.
Problem 4
This is about Fresnel reflection from interfaces and optimising the combination
of the reflection from the lens-to-air interface together with the LED-to-glass
interface with a view to getting the maximum power release (remember that we
are usually looking at amplitude reflection; see the comments above on the
Chapter 2 problems).
The other aspect here is that it is possible to design coatings made from a
material with an intermediate index, which for one single wavelength will
allow through all the light from one medium to another. There is a Wikipedia
article on optical coatings, and you may wish to demonstrate the expression for
the matching-layer thickness mentioned there using the Fresnel relationships.
So, in principle, for a single-wavelength LED, with careful design all the light
could be emitted.
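A sketch of the standard single-layer matching numbers; the glass index and
LED wavelength are assumed for illustration:

```python
import numpy as np

n_air, n_glass = 1.0, 1.5   # assumed indices
wavelength = 850e-9         # assumed LED wavelength, m

# The reflections from the two faces of the coating cancel when the coating
# index is the geometric mean of its neighbours and the layer is a quarter
# of a wavelength thick *in the coating material*.
n_coating = np.sqrt(n_air * n_glass)
thickness = wavelength / (4 * n_coating)
print(f"matching index : {n_coating:.3f}")
print(f"layer thickness: {thickness*1e9:.0f} nm")
```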
As for the laser issue: coatings are an important factor here, but this time
they are used to reflect, rather than match for transmission. The basic aim with
this question is to encourage an understanding of the lasing process, the
thinking that eventually leads to the laser equation and the concept of recom-
bination rates. Figure A6.1 may provide a useful starting point.
Problem 5
This problem is about the way in which stressed plastics placed between two
crossed polarisers exhibit stress-related birefringence, so that a component
comes through the second polariser. In general this component will be coloured, since the
birefringence will change (almost double) in going from long (red) to shorter
wavelengths.
With careful analysis this can be quantified. However, accurate knowledge
of the stress-optic coefficients involved is sparse (but how would they be
used?), and in practice a very rough guide can be derived from the variations
in colour. Blue is more sensitive (less stress needed to introduce this colour)
than red. Why is this the case?
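One way to see part of the answer is the standard crossed-polariser
transmission formula; the retardation value below is an assumption, and
holding it fixed while changing wavelength actually understates the effect,
since (as noted above) the birefringence itself grows towards the blue:

```python
import numpy as np

# A birefringent layer at 45 degrees between crossed polarisers transmits
# I/I0 = sin^2(pi * retardation / wavelength), retardation = dn * thickness.
retardation = 150e-9                    # assumed optical path difference, m
for wavelength in (450e-9, 650e-9):     # blue and red
    T = np.sin(np.pi * retardation / wavelength) ** 2
    print(f"{wavelength*1e9:.0f} nm: transmission = {T:.2f}")
```

The same stress lets through noticeably more blue than red, so blue appears
first.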
Problem 6
Here are some hints on this problem.
– In a depletion layer, an applied voltage will remove all the free carriers. This
leaves behind the charges on the remaining lattice at a charge density
equivalent to the doping (free-carrier) density times the electronic charge.
This can then be put into Gauss’s law to obtain a field slope. However, for
an intrinsic region it is a reasonable approximation to assume that remaining
charge density is low enough that the slope of the field is zero in this region.
In the p+ and n+ (substrate) regions it is also safe to assume zero field slopes.
This simplification gives the need for a bias voltage of 20 kV/cm times
10 microns to achieve saturated velocity. The calculated bias voltage is
worthy of comment! (See the numerical sketch after this list.)
– The depletion layer width has to be traversed in a time which is small
compared with any modulation on the optical signal. You may like to
consider why this is the case, and also what might happen if, for example,
the modulation frequency is twice the inverse of the transit time. The carrier
velocity can be taken as 10⁷ cm/s.
– The next issue lies in how far the photons will reach into the depletion
region before the carrier generation rate drops below, say, 10% of the
maximum reached very close to the surface. The p+ contact is 1 micron
thick. Can this thickness be ignored? How significant is this absorption of
photons in the overall quantum efficiency of the device? How do all these
answers change with wavelength at photon energies above the band gap?
The graph at the end of this section shows carrier velocity plots in silicon
and gallium arsenide (see Figure A6.2).
Figure A6.2 Carrier velocity vs. electric field characteristics for Si and GaAs.
– This discussion should have indicated that, while the basic design of a
photodetector is relatively straightforward, the proverbial devil lies in the
detail. In particular, compatibility with external circuit values (bias voltages
and dimensions especially) and operational bandwidth come to the fore,
coupled strongly with the operating wavelength. These considerations form
another topic for discussion!
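As flagged above, here is a small numerical sketch pulling the first two bullet
points together (the 1/(2π × transit time) bandwidth figure is a common rule of
thumb, not a statement from the text):

```python
import math

E_sat = 20e3 / 1e-2   # 20 kV/cm expressed in V/m
width = 10e-6         # 10 micron intrinsic region, m
v_sat = 1e7 * 1e-2    # 1e7 cm/s expressed in m/s

bias = E_sat * width            # voltage for saturated velocity throughout
transit = width / v_sat         # carrier transit time
bandwidth = 1 / (2 * math.pi * transit)
print(f"bias voltage ~ {bias:.0f} V")            # 20 V -- worth commenting on!
print(f"transit time ~ {transit*1e12:.0f} ps")   # 100 ps
print(f"rough bandwidth limit ~ {bandwidth/1e9:.1f} GHz")
```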
A6.4 Chapter 4
Problem 1
For a thin lens, perhaps the simplest way to conceive this might be to consider
a stack of thin isosceles-triangle prisms made from the lens glass, and then to
calculate the angle of exit from this prism as a function of distance from the
principal axis of the lens. The angles in the isosceles triangle will be deter-
mined by the slope of the curved surface of the lens at a particular point.
To get the perfect curvature for focussing regardless of the distance from the
axis, i.e. away from the paraxial approximation for a thin lens, all that is needed
is to generalise the paraxial approach used above – but to use it ‘backwards’ in
order to obtain the necessary slope as a function of distance from the axis.
This will not give the exact answer or a prescription for a practical lens, but
it will give the general idea. And it is interesting to discuss the reasons why it
won’t be exact.
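A rough sketch of the prism-stack construction, with an assumed glass index
and (spherical) surface curvature; the constant crossing point that emerges is
just the paraxial focal length R/(n − 1):

```python
import numpy as np

n = 1.5      # assumed lens glass index
R = 0.1      # assumed radius of curvature of the curved face, m

# Treat each height h as a thin prism whose apex angle is the local surface
# slope, roughly h/R for a spherical face near the axis.  The thin-prism
# deviation is then delta ~ (n - 1) * apex angle.
for h in (0.002, 0.005, 0.010):          # heights above the axis, m
    delta = (n - 1) * (h / R)
    crossing = h / delta                 # where the deviated ray meets the axis
    print(f"h = {h*1e3:4.0f} mm: deviation {np.rad2deg(delta):.2f} deg, "
          f"crosses axis at {crossing:.3f} m")
```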
Problem 2
This is about a generalised approach to Snell’s law, but in a variety of possible
planes of incidence. This would be a good problem for group discussion, using
a large sheet of paper on which to sketch the various possible situations.
Problem 3
The first part concerns Rayleigh scattering, the route through the atmosphere
and the inverse fourth-power law. But the story for clouds will need you to
explore a somewhat different type of scattering – Mie scattering.
Problem 4
The first part is about the geometry of a parabola and ‘specular’ reflection, for
which the angle of incidence equals the angle of reflection. The second part is
about image receptor sizes, and what to do to compensate for the many sources
of turbulence. When one considers the logistics of realising these ideas, things
really become interesting.
Problem 5
This is simply diffraction – with a laser pointer and a piece of cloth. Quantify
the process, measure distances, find the wavelength for the pointer (or make an
informed estimate) and explain the multiple spots which appear in terms of a
Fourier transform which could describe the essential features of the cloth. And
try a different sample – a very sharp edge for example.
Problem 6
This is basically a manipulation to find the Fourier series for A + B cos kz, for
A < B and even for A << B. Thereafter there are hints in the main text.
Problem 7
The first part of this question is about integrating all the possible diffraction
contributors at an ‘input angle = output angle’ condition (see Figure 4.12).
Incidentally, why should these two angles be equal? Does the idea work for
unequal angles? The integration process is somewhat involved though
straightforward.
As for the second part, the answer is that the frequency of the directed beam
is shifted by an amount equal to the ultrasonic frequency in the travelling
wave. The wavelength of the ultrasound in water (sound velocity ~1500 m/s)
needs to be small compared with the optical beam width and also small enough
to produce a diffracted beam well separated from the input. The normal
Doppler-shift concepts apply, but they do need to be carefully configured to
fully appreciate what's happening here!
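For a feel for the scales involved, a one-line sketch with assumed drive
frequencies:

```python
v_sound = 1500.0          # sound velocity in water, m/s
for f in (10e6, 100e6):   # assumed ultrasonic frequencies
    print(f"{f/1e6:.0f} MHz drive: wavelength = {v_sound/f*1e6:.0f} microns")
```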
Problem 8
The first part implicitly points to an assumption that the electric fields in the
waveguide are zero at the interface between the high-index (core) and low-index
(cladding) materials. You will then need to look at a fixed guide width and
construct angles at which these conditions can be met using two interfering beams
directed symmetrically about the guide axis. There will be a set of discrete angles
for the emerging beams, and there will also be a threshold at which these angles
will correspond to incident beams at angles exceeding the critical angle between
core and cladding. Then the cut-off for that waveguide has been reached.
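A sketch of this intuitive mode-angle construction; the wavelength, guide width
and indices are all assumed values:

```python
import numpy as np

wavelength = 1.55e-6         # assumed free-space wavelength, m
n_core, n_clad = 1.48, 1.46  # assumed core and cladding indices
width = 8e-6                 # assumed guide width, m

# Zero field at both walls requires the transverse standing-wave condition
# sin(theta_m) = m * wavelength / (2 * n_core * width), theta measured from
# the guide axis; guidance fails once theta exceeds arccos(n_clad/n_core).
theta_max = np.arccos(n_clad / n_core)   # total-internal-reflection limit
m = 1
while True:
    s = m * wavelength / (2 * n_core * width)
    if s > np.sin(theta_max):
        break                            # the guide is cut off for this mode
    print(f"mode m={m}: {np.rad2deg(np.arcsin(s)):.2f} deg from the axis")
    m += 1
```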
The second part asks you to critically analyse these results and deduce why,
and by roughly how much, these first intuitive estimates need to be modified to
get closer to actuality. This is another topic for exploration and discussion
within your group.
A6.5 Chapter 5
Problem 1
This problem concerns the design for an optical receiver. First you will need to
calculate the shot noise (along the lines of the calculations underlying
Figure 5.1 but for a different wavelength). Then there is the question of how
much shot noise there is in the specified bandwidth. However, is it really
necessary to consider this aspect at this stage? In fact all that is needed is the
minimum load resistance needed for the shot noise to exceed the thermal noise.
This defines the resistance needed for the load, which should be as low as
possible to get the necessary RC, i.e. load resistance times local capacitance
(the latter assumed here to be the photodiode operational capacitance). Is this
value of capacitance, needed for a 1 GHz bandwidth, compatible with a viable
photodiode? If not, then what can be done about it? Lots of scope here for
critical insights and productive discussion.
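A sketch of the load-resistance condition; the photocurrent and photodiode
capacitance are assumed values:

```python
import math

k, q, T = 1.381e-23, 1.602e-19, 300.0   # Boltzmann, charge, temperature
I = 50e-6                                # assumed mean photocurrent, A
C = 0.5e-12                              # assumed photodiode capacitance, F

# Shot noise (2qIB) exceeds thermal noise (4kTB/R) when R > 2kT/(qI).
R_min = 2 * k * T / (q * I)
bandwidth = 1 / (2 * math.pi * R_min * C)
print(f"minimum load resistance ~ {R_min:.0f} ohm")
print(f"RC-limited bandwidth with that load ~ {bandwidth/1e9:.2f} GHz")
```

Whether that bandwidth reaches 1 GHz with a realistic capacitance is exactly
the tension the problem is probing.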
Problem 2
This requires a combination of appropriate SNR (signal to noise ratio) esti-
mates for shot and thermal noise, but in the context of a photodiode array. You
will also need to make assumptions about the total power from the illuminating
source and the spectral uniformity of this white light source.
The other aspects of this problem include consideration of the aperture of
the lens used to focus the absorption spectrum onto the array and the resulting
‘diffraction-limited blur’ of the spot on each detector. Is it OK to assume that
the lens will be diffraction limited, and if not what would you do about it? The
final point is to highlight the immense potential contributions from the use of
long integration times, reducing the bandwidths to a few Hz or often far less, in
the detection process. Looked at in reverse, there are many atomic processes
which can be best understood using transient (nanosecond and better) obser-
vation. What are the implications here?
Problem 3
The whole of this story can be deduced from Figure 5.3, the text around it and
some basics on Fourier transforms: notably, what determines the resolution
and dynamic range in a Fourier transform calculation? The only distraction is
that the resolution in the first part of the question is specified in wavelengths,
whereas the Fourier transform system fundamentally works in optical frequen-
cies. Additionally, the frequency range is specified in wavelength terms (sometimes you
will see spectroscopic results in terms of ‘wavenumbers’ – the inverse of
wavelengths).
Problem 4
Part (a) of this problem is in principle straightforward – what is the minimum
bandwidth on a 1 micron source required to achieve a coherence length of less
than 5 microns? There is some juggling to be done with the units (time, distance,
frequency, wavelength in microns and bandwidths in nm) used here. There is
also a very important but unstated need here in that the source needs to be
spatially coherent. Another unstated need is depth resolution, but in what
medium? It is probably safe to assume that this medium is air, since all the other
media will have a refractive index greater than or equal to 1. Why is this the case?
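Part (a) reduces to one line once the order-of-magnitude relation between
coherence length and bandwidth is written down:

```python
wavelength = 1e-6   # 1 micron source
L_coh = 5e-6        # required coherence length, 5 microns

# Order-of-magnitude form: L_coh ~ wavelength^2 / delta_lambda.
delta_lambda = wavelength**2 / L_coh
print(f"required source bandwidth ~ {delta_lambda*1e9:.0f} nm")   # ~200 nm
```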
The second part of the problem brings together the mechanics and drives, the
mirrors, the data storage and processing for the whole system and the need to
put numerical requirements on them all. Hence explore among your group how
practical it might be to go to 1 micron, or even better, spatial resolution.
The last part of the problem is about illumination wavelengths, the transpar-
ency of the sample and the wavelengths which might characterise artefacts of
interest in a particular sample. There is some interesting and useful exploration
to be done around these issues.
Problem 5
The first part of this question is basically a standard derivation, but the ideas
should be within the grasp of readers without any need for hunting elsewhere.
Think about the basic situation. The light is injected in both directions at once
from a particular point (the beam splitter or directional coupler). It then
propagates around the loop, and one can assume for simplicity that it propa-
gates in air. After a delay in the loop, the light then meets itself coming in the
opposite direction, as it were, but the lengths of the clockwise and counter-
clockwise paths will be slightly different by an amount corresponding to the
rotation rate times the time delay (multiplied by 2 or not? Think that one
through carefully).
This time delay corresponds to a phase delay and the rest is straightforward.
Interestingly, the result is independent of the refractive index of the medium
through which the light is travelling around the loop.
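As a sketch of the sort of numbers involved, the standard Sagnac phase
expression for a multi-turn fibre coil; the coil size, turn count and rotation
rate are all assumed values:

```python
import numpy as np

c = 3.0e8
wavelength = 1.55e-6
radius = 0.05                        # assumed coil radius, m
turns = 1000                         # assumed number of fibre turns (~314 m)
omega = np.deg2rad(15.0) / 3600.0    # Earth-rate-class rotation, rad/s

# Sagnac phase: delta_phi = 8 * pi * N * A * Omega / (lambda * c) --
# note the refractive index of the loop medium does not appear.
area = np.pi * radius**2
dphi = 8 * np.pi * turns * area * omega / (wavelength * c)
print(f"Sagnac phase shift ~ {dphi:.2e} rad")
```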
The second part concerns making reasonable estimates. How long could the
fibre be without too much loss, allowing for consistent bending? How sensitive
could such a precision fibre interferometer be, when designed by a skilled
person?
Problem 6
Straightforward radiation pressure calculations are needed for part (a), and
some careful descriptive discussion around the situation which pertains for part
(b), including the very pertinent query, in which direction does or should
gravity work?
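For part (a), the basic numbers are tiny; a minimal sketch with an assumed
1 mW beam:

```python
P = 1e-3    # assumed optical power, W
c = 3.0e8

# Radiation pressure force: P/c if the light is absorbed, 2P/c if reflected.
print(f"force on an absorber: {P/c:.1e} N")        # ~3 pN
print(f"force on a mirror   : {2*P/c:.1e} N")
```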
A6.6 Chapter 6
Problem 1
The key to this problem lies in the fact that lithium niobate is electro-optic: its
dielectric constant can be changed by applying an electric field via the surface
electrodes. Therefore, the phase delay in the upper path in the diagram can be
modulated. This is an interferometer and, from the symmetry in the design, it
can be assumed that equal optical power goes along each path. Consequently,
if the two paths emerge via the ‘light out’ path in phase, all the optical power
goes into that port. (The two paths will be in antiphase at the other port, so
nothing exits.) Consequently, electrically changing the phase difference
changes the output. There are more subtleties in this when the output ports
are not in phase.
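A sketch of the resulting transfer function, assuming equal powers in the two
arms as argued above:

```python
import numpy as np

# Two-path interference at the output port: I_out/I_in = cos^2(dphi/2).
for dphi in (0.0, np.pi/2, np.pi):
    print(f"phase difference {dphi:.2f} rad -> output fraction "
          f"{np.cos(dphi/2)**2:.2f}")
```

The electro-optically controlled phase difference therefore sweeps the output
smoothly between the fully ‘on’ and fully ‘off’ conditions.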
In part (b) the frequency limits are to do with transit times through the
modulator and what RC time constants are possible. What would be needed for
(i) a phase modulation system and (ii) a frequency modulator? You may also
like to consider a frequency shifter system. Although this ventures into com-
munication system theory, it is relevant since such systems are in use in optical
communications.
Finally, another aspect of this – what might happen if these functions were
implemented in silicon photonics? This leads into the next problem.
Problem 2
The answers to this problem (and more background on the foundations of
silicon photonics) can be gleaned from G. T. Reed et al., ‘Silicon optical
modulators’, Nature Photonics, vol. 4, pp. 518–526, August 2010, and other
articles in the ‘Focus’ special section on silicon photonics in that issue.
Problem 3
The first part of this problem is about phase matching along the interface. You
will need to dig a little to derive an accurate value for the phase velocity of the
wave along the interface. However, some useful insight into how this all works
is gained by making some ‘first guesses’.
Note that, whilst this is rarely stated, the top overlay (often referred to as the
‘analyte’) is often water based and is implicitly of a lower refractive index than
the substrate material. The phase velocity along the interface is consequently
higher than in the bulk material. We also have a mirror in the metallic layer and
this will preferentially reflect the incident wave, unless there is an exact
coupling into this hybrid ‘substrate–metal–analyte-layer’ wave structure (as
indicated, and with an effective index close to the average value between the
two sides of the very thin metal layer). At this point there will be a dip in the
reflected wave when this coupling takes place, corresponding to phase match-
ing between the substrate incident wave and the surface wave defined by the
properties of the interfaces of the analyte, thin metallic and substrate layers.
There’s a nice review (30 pages) of the principles of all this in K. M. Mayer
and J. H. Hafner, ‘Localised Surface Plasmon Resonance Sensors’, Chemical
Reviews, vol. 111, pp. 3828–3857, 2011 (doi:10.1021/cr100313v).
The second part of this problem on TM or TE polarisation states at the
interface was discussed in the text. As for the ‘needle’ structure in (b) in the
figure: think about the incoming direction of the incident wave, the dimensions
of the ‘needles’ and the operation of a lightning conductor!
Problem 4
The basic feature here is that, by changing the bias on the central electrode, it
is possible to change the delay time for the current flowing through the
graphene by the equivalent of 180° at 10.6 microns, which corresponds to
about 28 THz or a period of about 35 fs. Thus the time-delay change needed
across the ~300 nm device corresponding to this change in transit time is
equivalent to a change in the electron velocity of ~20 × 10⁶ m/s! This should
be checked carefully. It gives some insight into the velocity of the carriers
in graphene compared with the velocity of the carriers in silicon (see
Figure A6.2).
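The suggested check is a few lines (all the input numbers are those quoted
above):

```python
c = 3.0e8
wavelength = 10.6e-6    # operating wavelength, m
device = 300e-9         # device dimension, m

f = c / wavelength      # optical frequency
period = 1 / f          # optical period
dt = period / 2         # 180 degrees of phase is half a period
dv = device / dt        # implied change in carrier velocity
print(f"frequency      : {f/1e12:.1f} THz")     # ~28 THz
print(f"optical period : {period*1e15:.1f} fs") # ~35 fs
print(f"velocity change: {dv:.1e} m/s")         # ~2e7 m/s
```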
You could also work through the sheet resistance figures in Figure 6.12 to
gain some insight into the mobility of electrons as carriers in graphene.
Investigation into the electron density in the graphene sheet is needed in order
to gain this insight. You will also need to investigate the basic ideas of sheet
resistance.
Problem 5
For this problem, look at the order of magnitude of the dimensions of the
resonator as shown in the diagram. Might the structure have a resonance in the
optical range? The overall device does need to be cooled down to work;
thermally induced vibrations in such a tiny structure could cause considerable
trouble. They would also reduce the effective optical Q factor of the cavity
(why?). Even so, only 0.5 dB of squeezing is produced.
The original idea was published in A. H. Safavi-Naeini et al., Nature,
vol. 500, p. 185, August 2013. There was also a follow-up based on silicon
but with somewhat better performance, this time using very high-Q-factor
resonators. You can find it at arXiv:1309.6371v1, where it is openly
available.
As for the future – well – what do you think?
Problems 6–8
These problems are intended as joint research projects with your colleagues.
They give an opportunity to explore factors which will come into play as
photonics continues to expand both its contributions into society and its ‘tool
kit’ of esoteric devices performing intriguing, in many cases as yet undreamt
of, tasks. There is a fascinating future yet to evolve – we are just at the
beginning!
Further Reading
D. Some Experiments
1. R. N. Zare, B. H. Spencer and M. P. Jacobson, Laser Experiments for Beginners,
University Science Books, Sausalito, CA, 1995.
2. R. N. Compton and M. A. Duncan, Laser Experiments for Chemistry and Physics,
Oxford University Press, 2016.