Course 36
SIGGRAPH 2001
Course Organizers
Jill Smolin, Cinesite Visual Effects, co-chair V.E.S. Education Committee
Pam Hogarth, Gnomon, Inc. School of Visual Effects, V.E.S. Education Committee
Table of Contents
10:00 BREAK
11:00 THE SECRET LAB
Research and Production at The Secret Lab
John P. Lewis, Software Director, The Secret Lab
David Oliver, Technical Director, The Secret Lab
NOON LUNCH
1:30 PDI/DREAMWORKS
Getting Under Shrek’s Skin
Jonathan Gibbs, Senior Effects Animator
Bert Poole, Senior Lighting Animator
Introduction
In our seats at the local multiplex, we experience incredible images that make us cheer for the computer-generated hero, laugh when a pig converses with a dog, or cower at monsters that exist only as pixels. The work that goes into creating these effects starts in the minds of artists, scholars, programmers and scientists who turn their work into images that manipulate space, animate extinct species or make time stand still. Over the course of the day, presenters from six of the industry's top visual effects companies will discuss the myriad ways in which they use research to augment the production of this year's major feature films, including How the Grinch Stole Christmas, Monsters, Inc., Shrek, Stuart Little 2 and A.I. Topics covered in this course include using research and development to create dynamics and fur; the collaboration between an ornithologist, engineers and artists to create computer-generated feathers and flight; research and development created in concert with production; a perspective on different research and software development timelines; developing skin for an animated character; and the partnership of software engineers and artists in creating custom toolsets that are then integrated into the production pipeline. Because film is a visual medium, our presenters will rely on images to present this in-depth, technical look at the collaboration of R&D and art.
From Ivory Tower to Silver Screen
Six Visual Effects Companies reveal how academic,
SIGGRAPH and studio-based Research and Development finds its way into production.
Speaker Biographies
Armin Bruderlin
Software Department
Sony Pictures Imageworks
Robert Cook
Software Production Manager
Digital Domain
Bob Cook is the Production Manager for the Software Department at Digital Domain. He is a filmmaker and artist who has been making images professionally for 23 years and has been involved with digital imaging since the early 1980s. His work has won awards at film festivals and has been collected by major museums.
He has also worked in the administration of systems and software development for 10 years, primarily within academia. Additionally, while in academia he taught photography, digital imaging, computer animation, WWW development and design, and critical theory. He received a B.S. in Photography from Texas A&M-Commerce in 1984, and an M.A. in Humanities from the University of Texas at Arlington in 1994.
Michael Fong
Technical Director
Pixar Animation Studios
Michael Fong's life at Pixar began in 1995, during his last semester at UC Berkeley. Beginning as a render wrangler, he climbed his way to technical director as a modeller/articulator for A Bug's Life. Working first with props and then with characters, Michael found himself researching simulation and rendering with the A Bug's Life research and development effects group. Moving into the role of technical lead/supervisor of the crowds department, he worked with Bill Reeves and a talented group of artists researching hair and fur. He is currently sequence supervisor on Monsters, Inc.
Michael's credits include: Toy Story, A Bug's Life (Crowds Technical Supervisor), Toy Story 2 (for which he created a preliminary version of monster fur), and Monsters, Inc., for which he is fur technical lead and Sequence Supervisor.
Jonathan Gibbs
Senior Effects Animator
PDI/DreamWorks
Jonathan Gibbs is currently serving as a Lead Effects Animator on the forthcoming PDI/DreamWorks feature, SHREK. For SHREK he has been leading shader development, working on fur and hair rendering, and continues to develop the crowd system he co-developed for PDI/DreamWorks' first feature, ANTZ. A native of St. Louis, Jonathan has a B.S. in Computer Science from Principia College and a master's degree in Computer Science from UC-Santa Cruz.
Johnny Gibson
Digital Shader Lead
Digital Domain
John M. Gibson is currently working at Digital Domain where, since 1997, he has worked as a Digital Artist and Technical Director. His film credits include How the Grinch Stole Christmas, O Brother, Where Art Thou?, Supernova and Titanic. Prior to working at Digital Domain, Johnny worked as a Software Engineer at Disney, a Character Animator at Marvel Films, and as a System Designer and Software Engineer at TRW's Ballistic Missiles Division. Johnny's various research areas of interest include applied global illumination techniques, shading systems and renderers. Johnny holds an M.S. in computer science from the University of California, Riverside.
Mark Henne
Technical Director
Pixar Animation Studios
Mark Henne has spent ten years in production, the last six with Pixar and earlier at Rhythm & Hues.
Lately he has been bringing dynamic clothing into practical use as a Technical Director on Monsters, Inc., a
Pixar/Disney film due out at the end of 2001. His previous projects have included A Bug’s Life and Toy Story,
and the highlights while at R&H were three of the “Polar Bears” spots for Coca-Cola and a car cloth effect
for Lexus.
Mr. Henne has an M.S. in Computer Science from the University of California, Santa Cruz, and a B.S. from the University of New Mexico. His expertise is in using simulations for special effects, and the design and implementation of facial articulation software and controls.
Tony Hudson
Digital Model Supervisor
Industrial Light & Magic a division of Lucas Digital Ltd. LLC
Tony Hudson has been with Industrial Light & Magic in various capacities since 1985. In the 1980s he
worked in ILM’s model shop as a creature maker, model maker and puppeteer, as well as supervising various
projects. Hudson left ILM in 1990 for Walt Disney Imagineering to work on theme park design, and then he
took a year off to train himself in computer graphics, returning to ILM in 1995 to work on Dragonheart. He
has been interested in animation and visual effects since he was a small child, and is thrilled to be practicing
his passion at ILM. Prior to ILM, Hudson was lead puppeteer and show supervisor for Vagabond Marionettes
and John Hardman Productions.
John P. Lewis
Software Director
The Secret Lab
John (J.P.) Lewis is a software director at the Walt Disney Company's digital production studio, The Secret Lab (TSL). Previously John worked in software R&D for Dream Quest Images, Disney's feature film visual
effects division, now merged with The Secret Lab. He has also worked for visual effects companies
Centropolis and Industrial Light and Magic, as well as think tanks such as Paul Allen’s Interval Research.
John holds several patents and has published research in the areas of computer graphics, neural networks,
computer-human interfaces and computer vision. His research is discussed in books including “The Science of
Fractal Images”, “Computer Facial Animation” and “Music and Connectionism.”
Bert Poole
Senior Lighting Animator
PDI/DreamWorks
Bert Poole is currently a Senior Lighting Animator in the Feature Division of PDI/DreamWorks. Having
joined PDI 5 years ago during the visual development phase of ANTZ, Poole was responsible for surfacing
lead characters such as Bala and Mandible. While spending most of his time on ANTZ and SHREK lighting
production shots, he has been called into visual development on many occasions to handle technically difficult
lighting and surfacing issues. A senior member of the Lighting Department, Poole graduated with a B.F.A.
from Columbus College of Art and Design, Columbus, Ohio, in 1996.
Jay K. Redd
Digital Effects Supervisor
Sony Pictures Imageworks
Jay Redd is currently the Digital Effects Supervisor on Columbia Pictures' "Stuart Little 2", the sequel to the successful film "Stuart Little". Functioning as CG Supervisor on "Stuart Little", Jay started his involvement with the feature in the early days of pre-production and character design, and supervised the technical and aesthetic research and development for the film's extensive hair, fur, and lighting requirements. As an amateur astronomer, he animated and supervised "Contact's" Opening Shot, a 4,710-frame journey from Earth to the
end of the universe, which has received many international awards.
Before joining Imageworks, Jay spent four years at Rhythm & Hues Studios working as a Technical
Director and then Computer Graphics Supervisor on numerous features, commercials, and theme-park rides,
including the Academy Award-winning "Babe" and the award-winning Seafari. Jay studied at the University of
Utah with an emphasis in film, music composition, and Japanese. His projects thus far have had a running
theme of space and animals.
Doug Roble
Software Creative Director
Digital Domain
Dr. Doug Roble is the Creative Director of the Software Department at Digital Domain, the multiple
Academy Award-winning visual effects studio located in Venice, CA. He has been writing software and doing
research since he joined the company in 1993. In 1999, Doug won a Scientific and Technical Achievement
Academy Award for his 3D tracking/scene reconstruction program, “track”.
Doug’s interests cover widely different areas of computer graphics. He has written many different soft-
ware tools including a fluid dynamics package, a motion capture editing and manipulation tool, a complete
computer vision toolkit and package and many others. He is currently interested in computer vision, physical
simulation, level set theory and motion capture manipulation. Before Digital Domain, Doug received his Ph.D.
in Computer Science from The Ohio State University in 1993. His dissertation was on extracting three
dimensional information from a photograph. He received his Master’s from Ohio State and his Bachelor’s
degree in Electrical Engineering from the University of Colorado.
Jill Smolin
Cinesite Visual Effects
Visual Effects Society
Jill Smolin is the Manager of Artist Development and Education at Cinesite Visual Effects. Prior to working
at Cinesite, Jill worked at Digital Domain. She has been working in and around computer graphics for the past
15 years or so. Having co-chaired the Electronic Schoolhouse for SIGGRAPH 99, Jill also organized and pre-
sented a SIGGRAPH Course in 1999 entitled “A Visual Effects Galaxy.” In addition, she presented a facial ani-
mation panel for SIGGRAPH 2000. Jill is the chair of the Visual Effects Society’s Education Committee, and
currently serves on that organization’s board.
Steve Sullivan
Computer Graphics Software Group
Principal Software Engineer
Industrial Light & Magic, a division of Lucas Digital Ltd. LLC
Steve Sullivan joined Industrial Light & Magic in 1998 as a Principal Engineer. His role specifically focuses
on vision algorithms and computer vision applications for film and video production. Sullivan also heads up
the computer vision team for ILM’s Research and Development group which is currently developing tools for
matchmoving and photogrammetry, as well as facial and full-body motion capture.
In 1996, Sullivan received his PhD in Electrical Engineering from the University of Illinois at Urbana-
Champaign, with an emphasis on automatic object modeling, recognition, and surface representations. After
graduation, Sullivan joined Rhythm & Hues Studios in Los Angeles to develop animation and 3D tracking soft-
ware.
Stuart Sumida
Associate Professor of Biology
California State University, San Bernardino
Stuart Sumida is an Associate Professor of Biology at California State University, San Bernardino. He is a vertebrate paleontologist and provides anatomical and locomotor expertise to the film industry.
His academic research focuses on the structure, function, and evolution of fossil animals near the amphibian-to-amniote transition: animals found in sediments of late Pennsylvanian and early Permian age (approximately 270 to 300 million years old), nearly 100 million years older than the earliest known dinosaurs. Other activities in the CSUSB Laboratory for Vertebrate Paleontology include study of the structure and biology of theropod and ceratopsian dinosaurs. When he finds time, Dr. Sumida consults extensively for film and animation studios on animal anatomy and function and has worked on several films with the animators at Sony Pictures Imageworks, including "Stuart Little" and "Hollow Man". At Imageworks, Sumida is currently providing expertise for the feather development of several leading characters on "Stuart Little 2". His many publications include a phylogenetic context for the origin of feathers for American Zoologist. Sumida also participated on the 1999 SIGGRAPH Panel "Visual Effects: Incredible Effects vs. Credible Science".
Introduction
One of the more challenging issues tackled by the crew working on the movie Monsters, Inc. was the creation of a believable main character, James P. Sullivan. Sullivan, an endearing monster who learns that scaring children is not the only way of life, was dreamed up by Pete Docter, the director of Monsters, Inc. Fortunately for his production crew, Pete had clear ideas in his mind of what he wanted the fully CG Sullivan to look like. Unfortunately for the production crew, Pete's ideas generally revolved around one of the more complex problems of computer graphics: fur.
Computer-generated fur is generally considered a difficult problem because there is no official solution or accepted methodology. It is essentially a research issue that spans many different areas in computer graphics, from modeling to rendering to animation. These notes will attempt to explore some of the research that went into the animation of Sullivan's fur.
RESEARCHING HAIR AND FUR
Research began by examining real-life examples of fur. Videos of wolves, dogs, bears, llamas, etc. were obtained and religiously pored over. Art directors would watch the videos, tagging the different clips as having desirable characteristics or not. Scenes where the fur gave a lot of secondary motion to the animal's walk cycle were often pointed to and described as must-have. Short, clean, well-groomed fur was to be avoided; instead, direction led toward a long, thick, clumpy coat that seemed to fall somewhere between the fur of a bear and a llama. Technical members of the fur team would then pore over the same set of videos, attempting to isolate the key features the art directors were responding to. After the general characteristics of the fur were identified, samples of bear, llama, and faux fur were brought in. These physical samples were invaluable because they gave the team a concrete base to work from. The samples could be examined whenever a question about physical characteristics came up, and they also provided common ground for the art department and the TDs. Real fur samples could be pointed to with no confusion from either department about what was being described. Surprisingly, the faux fur samples turned out to be just as informative as the real fur. The fur team was able to develop ideas about what made the faux fur look and behave in a "fake" way. By contrasting the real fur against the synthetic, certain real fur characteristics were isolated and identified as being mandatory for believability.
Armed with lofty goals, the next step was to discover what research and work had been done previously in the field of fur. Previous work is always priceless: even if it doesn't seem to apply directly to the problem at hand, there is always something to learn and perhaps augment. Fortunately for the fur team, there is a wealth of knowledge and studies out there regarding hair and fur. The majority of that wealth doesn't come from the computer graphics field, however, but rather from cosmetics and textile research. Quite a bit of work has gone into researching what gives hair body, what causes baldness, what induces style retention, and what promotes a thicker, healthier coat. All this work, though not directly related to Monsters, Inc., helped shed some light on the underlying hair structure that the fur team was hoping to duplicate.
A Physical Description
A hair is a long, thin appendage that grows through the epidermis from follicles set deep in the skin layer known as the dermis. Examination of a cross section of a hair often starts with the thick, transparent, overlapping scale structure on the outside of the shaft known as the cuticle. One layer below the cuticle is the cortex, where pigmented, fibrous protein cells are found. Within the cortex is the medulla, a center shaft of loosely packed cells. Intercellular cement binds everything together, and cystine linkages give the hair structural stability. As the hair extends from the follicle, the cuticle, cortex and medulla become dehydrated and cornified, giving the hair the majority of its motion- and behavior-related properties. The more important of these properties are:
* elastic deformation (stretching, bending and torsion)
* follicle angle
* cross-section
* friction
* static charge
* cohesive forces (oils, dirt, hairspray, etc.)
An animal's coat of fur is just a heterogeneous collection of densely packed hairs. The fur obtains its properties from the single-fiber characteristics of the different types of strands. This point about different types of hairs making up the coat is a simple but important one. Cheaper versions of faux fur have a very regular quality to them that gives away their synthetic roots. Real animal fur is made of many different types of hairs that have different functions and thus different physical characteristics.
Development of the Sullivan character design was happening concurrently with the above hair and fur research. The design that Pete Docter and the art department were narrowing in on involved a character, 10 feet tall, covered from head to toe in long wavy fur. Only Sullivan's palms and the bottoms of his feet would be bare. Long wavy fur meant there would be a lot of secondary motion as well as a lot of complex interaction with the environment. Compounding this, millions of hairs would be needed to cover a 10-foot monster, which meant that the typical key-frame animation done at the studio was impractical. Instead, the fur team focused on a more automatic solution in the form of a hair simulator.
The Simulator
Like almost all simulators, the heart of the Monsters, Inc. fur simulator is its physics model. Early on, the model chosen for Sullivan's fur was a simple mass/spring system. This model, despite its lack of complexity, has the advantages of being fast and efficient. It was also easily incorporated into an existing physics simulator previously used by the studio. The first version of Sullivan's fur was developed with a modified version of the clothing simulator used in the short film Geri's Game. This simulator was later replaced with a completely new version, but despite the massive reworking, both simulators operated around the same five main concepts:
• Points - are infinitely small masses
• Springs - connect two points and exert a force on them when the spring is stretched or compressed
• Hinges - prevent the angle between two springs from varying from some rest value
• Nailed points - are points that derive their motion from sources external to the simulation
• Forces - like gravity and collisions exert an influence on points
Unfortunately, the five main simulator concepts were not enough to model all the hair properties identified in previous paragraphs as contributing to the motion and behavior of fur. Stretching and bending could be modeled, but some properties, like torsion and static charge, couldn't be incorporated easily and so were left out. This reduction of reality simplified the hair model but would lead to problems during production.
A single hair from Sullivan's coat ended up being represented by a small number of points linked in a chain by a set of stiff springs. Spring constants and rest lengths controlled how the hair stretched. Hinges were established along the chain at each interior point to give rigidity to the hair. The hinge constants and rest angles, along with the point masses, dictated how much the hair would bend. The root point of each hair was then "nailed" to Sullivan's skin and thus derived its position and motion directly from Sullivan's movement. In this way, as Sullivan's skin moves, the nailed points are dragged with it. As the nailed points move, forces are exerted through the springs and hinges to cause the rest of the hair to follow along.
[Figure: points-and-springs hair model. ©Disney/Pixar 2001]
[Figure: various causes of hair movement. ©Disney/Pixar 2001]
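As a rough illustration of the point/spring/hinge chain just described, the following is a minimal sketch with invented constants (not Pixar's simulator): a single keyhair advanced with its root nailed to an animated skin position and many sub-frame timesteps.

import numpy as np

# Minimal sketch of one simulated keyhair: point masses in a chain joined by
# stiff springs, crude "hinge" straightening forces at interior points, and a
# root point nailed to the animated skin. All constants are invented.

N_POINTS = 6
REST_LEN = 1.0
K_SPRING = 400.0        # resists stretching/compression of each segment
K_HINGE  = 20.0         # resists bending away from the (straight) rest shape
MASS     = 0.01
DT       = 1.0 / 240.0  # several sub-steps per 24 fps frame
GRAVITY  = np.array([0.0, -9.8, 0.0])

pos = np.array([[0.0, -i * REST_LEN, 0.0] for i in range(N_POINTS)])
vel = np.zeros_like(pos)

def skin_root_position(t):
    """Motion of the nailed root, driven by the (hypothetical) character animation."""
    return np.array([np.sin(t), 0.0, 0.0])

def step(t):
    force = np.tile(GRAVITY * MASS, (N_POINTS, 1))
    # Springs: pull neighbouring points back toward the rest length.
    for i in range(N_POINTS - 1):
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        f = K_SPRING * (length - REST_LEN) * d / max(length, 1e-8)
        force[i] += f
        force[i + 1] -= f
    # Hinges: crude straightening force at each interior point.
    for i in range(1, N_POINTS - 1):
        force[i] += K_HINGE * ((pos[i - 1] - pos[i]) + (pos[i + 1] - pos[i]))
    # Integrate all points except the nailed root (explicit Euler for brevity).
    vel[1:] += (force[1:] / MASS) * DT
    pos[1:] += vel[1:] * DT
    # The root derives its position directly from the skin.
    pos[0] = skin_root_position(t)

for frame in range(48):
    for sub in range(10):   # timesteps much smaller than a single frame
        step(frame / 24.0 + sub * DT)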
CG FUR IN PRACTICE
From a production pipeline perspective, character animators are asked to animate a bald Sullivan through a shot. This animation is then fed into the fur simulator, which nails a huge number of point/spring/hinge chains to Sullivan's skin. As the simulator walks through the shot in time, with timesteps much smaller than a single frame, the skin moves and the simulator computes the corresponding movement of the hairs. In essence, the fur simulator plays the role of a secondary animator, taking cues from the work done by a lead character animator.
The ideal solution to Sullivan's fur would be to model and simulate each and every strand of the millions of hairs on Sullivan's body. Unfortunately, this solution suffers from obvious time and space concerns and was, rather early on, ruled out as impractical. Instead, the decision was made to approximate Sullivan's fur by simulating only a sparse set of hairs, the keyhairs. These keyhairs, which are found at the control vertices of Sullivan's skin mesh, describe the macro-level features of Sullivan's fur. Instead of representing a single strand, a keyhair represents the nearby region of fur. Keyhairs are converted into a vector description and fed into a generalized Catmull-Clark subdivision algorithm to produce interpolated hairs known as inbetween hairs. In this way, the same subdivision algorithm used to generate Sullivan's skin is used to interpolate the hairs and attach them to the surface.
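The production system reused the generalized Catmull-Clark subdivision machinery of the skin to interpolate keyhairs; the sketch below substitutes simple bilinear interpolation over one quad face purely to show the idea of deriving inbetween hairs from keyhair vector descriptions (all data here is made up).

import numpy as np

# Keyhairs live at the control vertices of the skin mesh; inbetween hairs are
# derived from them. Bilinear blending here stands in for the subdivision step.

def hair_to_vector(points):
    """Convert a keyhair (a chain of 3D points) into a flat vector description."""
    return np.asarray(points, dtype=float).ravel()

def vector_to_hair(vec):
    return vec.reshape(-1, 3)

def inbetween_hair(corner_vectors, u, v):
    """Blend the four corner keyhairs of a quad face at parameter (u, v)."""
    k00, k10, k01, k11 = corner_vectors
    return ((1 - u) * (1 - v) * k00 + u * (1 - v) * k10 +
            (1 - u) * v * k01 + u * v * k11)

# Four made-up keyhairs at the corners of one skin face.
corners = [hair_to_vector([[0, 0, 0], [0, 1, 0.1 * c], [0, 2, 0.3 * c]])
           for c in range(4)]

# Grow a small grid of inbetween hairs across the face.
inbetweens = [vector_to_hair(inbetween_hair(corners, u, v))
              for u in np.linspace(0, 1, 5) for v in np.linspace(0, 1, 5)]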
hairType = NORMAL;
r = random(0.0, 1.0);
if (r < 0.1)        hairType = GUARD;
else if (r < 0.3)   hairType = UNDER;
if (this region of skin has been identified as having extra wavy hair)
{
    hairWave = hairWave * 2.0;
}
The above builder pseudocode produces guard hairs 10% of the time, underhairs 20% of the time and normal hairs all other times. Guard hairs, often referred to as shield hairs, are longer and wider than normal hairs and generally act as a form of sensory input for an animal. They are particularly important to consider when developing any type of faux fur because they break up the silhouette edge in an organic way. Underhairs are very fine hairs found "under" the majority of fur and serve to keep the animal warm and insulated. It was originally believed that these underhairs would help increase the apparent density of Sullivan's fur, but they were dropped because they didn't add much to the overall picture and turned out to be expensive to render. Continuing with the above builder example, depending on where on the skin the hair is being grown, a hair may also end up being wavier than normal. This type of individual strand control was achieved with a combination of scalar fields and texture maps that dictate hair characteristics in particular regions of the body. Texture maps put the control into the art department's hands, which allowed Sullivan's look to progress much more quickly than if it had been transcribed by technical hands.
Simulation Error
Builders gave the fur team the ability to produce the necessary visual complexity required for CG fur, but they also introduced simulation error because the inbetween hairs were no longer behaving exactly like the keyhairs. This disparity leads to situations where the keyhairs respond correctly to collisions but the inbetween hairs behave poorly because they are being procedurally modified without taking collisions into account. Interpolation, by its very nature, also introduces error because it suffers from sampling problems. Since collisions can only be detected by keyhairs, very thin collision objects can come into contact with inbetween hairs and no collision is registered.
These were fundamental problems with Sullivan's fur design that were never actually solved. Instead, workarounds were developed to prevent visual artifacts. When needed, collision objects were scaled up in size. This scaling happened mathematically, not visually, so thin objects would appear larger than they were to the keyhairs. The keyhair resolution was also increased, both to better match what the builder was producing and to increase the number of samples taken for collisions. Lastly, a fix tool was developed to allow the user to select and modify portions of Sullivan's fur pre- and post-simulation to correct problems. This final aid, despite having only a small feature set, turned out to be invaluable in getting shots approved. It allowed simulation parameters to be tweaked on a keyhair basis as well as allowed the user to directly manipulate the output of the simulator.
[Figure: a thin collision object falling between keyhairs. ©Disney/Pixar 2001]
The inability to fully model all the different characteristics governing hair motion often led to undesirable behavior even for the simulated hairs. Real hairs can only be stretched a certain amount (depending on the humidity) until either the root is torn from the follicle or the hair breaks. Nailed point-and-spring chains, on the other hand, can be stretched to any length if the necessary force is applied to them. This unrealistic behavior caused problems when Sullivan's fur was snagged by collision objects. Keyhairs would get sandwiched by objects and then yanked with great force. Hairs would stretch impossible amounts and the illusion of believable fur would break down.
Sandwiching keyhairs was a very common problem, and it would appear all over Sullivan's body even when other collision objects weren't present. Fur could get sandwiched inside the armpits, the elbows, the neck folds, etc. Sandwiched hairs, because they are in an illegal state and are feeling strong collision forces from many different directions, would often oscillate in distracting ways that made Sullivan's fur almost seem as if it were infested with rats.
The solution to many of the inadequacies of the physics model was to identify and attack each problem separately. The simulation coders of the fur team spent many hours tracking down the cause of each stretching and oscillation case that came their way. In the end, a variety of different heuristics were developed to handle the problem scenarios.
Over-stretching was solved by endowing the simulator with the concept of a hair. Previously, everything was just points, springs, hinges, nails and collision objects. Now, a chain of points, springs and hinges could be grouped into a hair object, which could then have a maximum stretch amount attached to it. When a keyhair is stretched beyond its maximum, the points realize that they are in an invalid state and essentially nail themselves into place until the offending collision objects have moved away. In practice, this appears as if the hairs have simply slipped out of the grasp of the collision object.
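A minimal sketch of this over-stretch heuristic follows; the threshold and data layout are invented, not the actual simulator's.

import numpy as np

# If a keyhair is stretched past its limit, flag its points as temporarily
# nailed so they ignore further forces until the collision objects move away.

MAX_STRETCH = 1.25  # hair may reach at most 125% of its total rest length (invented)

def enforce_max_stretch(pos, rest_lengths, nailed):
    """pos: (N, 3) points along one keyhair; rest_lengths: (N-1,) segment rest
    lengths; nailed: (N,) bool flags consumed by the solver."""
    seg = np.linalg.norm(np.diff(pos, axis=0), axis=1)
    if seg.sum() / rest_lengths.sum() > MAX_STRETCH:
        nailed[:] = True  # freeze the hair in place for this step
    return nailed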
Oscillations were caused by many different things, and some of the cases were solved by endowing the simulator with the concept of a scalp. The Sullivan skin collision object was identified as the fur's scalp and was treated differently than other collision objects. Hairs could detect when they were buried inside the scalp and, instead of oscillating, would simply ignore the extreme forces on them and behave in a more subdued way.
CONCLUSION
The research into Sullivan's fur began well before the production of Monsters, Inc. started and didn't stop even when production was in high gear. Research like this never really stops because the problem is too open-ended. The software, ideas, and solutions go through evolutionary stages during production as the different components are put to the test. Oftentimes the original ideas failed because of the demanding scenarios a monster movie can require. When these ideas failed, the code was augmented or reworked to hopefully produce a better piece of software.
REFERENCES
D. Baraff. Linear-time dynamics using Lagrange multipliers. Computer Graphics Proceedings, Annual Conference Series: 137-146, 1996.
T. DeRose, M. Kass, and T. Truong. Subdivision surfaces in character animation. Computer Graphics Proceedings, Annual Conference Series: 85-94, 1998.
Creating Clothing for Monsters, Inc.
Mark Henne
Pixar Animation Studios
This talk covers topics on jumping from the ivory tower down to the pavement.
Real Physics vs. Cartoon Physics
When making a movie, you’ve got it right when the director says it’s right. In making an animated film,
stylization is the rule; just because something works a particular
way in real life doesn’t mean it’s right for the film.
Our shirt does not have folds as small as one normally gets
with t-shirt fabric. This is intentional, because Pete Docter, the
director of Monsters, Inc., wanted broader folds. Pete felt that
the tight and busy folds characteristic of a real t-shirt would
have too much detail and be distracting. He preferred a smaller
number of clean lines, helping to accentuate the body arcs. The
fabric weight we chose was closer to that of a sweatshirt.
In animation, characters can move much faster than people do in real life, yet we still want a reasonable result in the simulation. Reasonable may not be the same as realistic. In our prototype testing, we had Moo (the test version of Boo) jumping down from on top of Sullivan's head. Trouble was, her shirt would ride up too far because of the fast acceleration. The solution? An Inertial Field Generator (patent pending), which caps excessive acceleration from the base coordinate frame.
[Figure: Boo's shirt resolution. ©Disney/Pixar 2001]
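The sketch below illustrates the general idea of capping the base coordinate frame's acceleration before the solver sees it; it is not Pixar's Inertial Field Generator, and the limit value and finite-difference scheme are invented for illustration.

import numpy as np

# Clamp the garment base frame's acceleration so extreme cartoon motion does
# not fling the shirt around. Constants and scheme are illustrative only.

MAX_ACCEL = 30.0  # m/s^2, invented cap

def capped_acceleration(p_prev, p_curr, p_next, dt):
    """Finite-difference acceleration of the base frame, clamped in magnitude."""
    accel = (p_next - 2.0 * p_curr + p_prev) / (dt * dt)
    mag = np.linalg.norm(accel)
    if mag > MAX_ACCEL:
        accel *= MAX_ACCEL / mag
    return accel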
Next, adjacent triangles fold along their shared edge. Bending along the fold line is given a resistance
value.
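A small sketch of what a fold-line resistance term can look like follows, assuming a dihedral-angle penalty between the two triangles sharing an edge; the stiffness value and quadratic form are invented, only the idea of a "resistance value" comes from the text.

import numpy as np

# Fold resistance between two triangles sharing the edge p0-p1
# (third vertices p2 and p3): penalise deviation from the rest fold angle.

def dihedral_angle(p0, p1, p2, p3):
    n1 = np.cross(p1 - p0, p2 - p0)
    n2 = np.cross(p3 - p0, p1 - p0)
    n1 /= np.linalg.norm(n1)
    n2 /= np.linalg.norm(n2)
    return np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0))

def bend_energy(p0, p1, p2, p3, rest_angle, k_bend=1.0):
    """Quadratic resistance to folding away from the rest angle along the shared edge."""
    return 0.5 * k_bend * (dihedral_angle(p0, p1, p2, p3) - rest_angle) ** 2

# Flat rest configuration: the measured fold angle is zero.
p = [np.array(v, dtype=float) for v in ([0, 0, 0], [1, 0, 0], [0.5, 1, 0], [0.5, -1, 0])]
print(dihedral_angle(*p))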
The next most important force is shape retention. Because of internal friction between the threads of real fabric, once a fold pattern is achieved, the garment tends to hold onto it. Some fabrics, like silk, have less internal friction and come out of their folds more easily. Finding a good shape retention force will keep the cloth from oozing once a character's pose is relatively constant.
The cloth also needs to have knowledge of its own form, and not pass through itself. That means noticing during a calculation timestep that a point has passed through a triangular cloth face. Since a cloth surface does not have an inside and an outside, this test is challenging to get right, but increasing the number of timesteps within a frame makes it easier for the solver. There will be times that a knot forms, especially when the cloth is sandwiched between skin layers, and the best thing the solver can do is notice the knot and temporarily deactivate internal forces to allow the knot to simply fall away.
For some garments, it may be of value to simulate a semi-rigid curve along a seam, like a wire stiffener.
We didn’t find this of value in our t-shirt, but can see the
potential for other clothing articles.
Collision Objects
Collision objects are things in the environment colliding against the shirt. The most important collision object in our case is Boo's body. Our character collision objects share a subset of the data points from the original model, but with unnecessary detail culled.
The biggest problem with collision objects is sandwiching and interpenetration. Sandwiching occurs when the cloth is trapped between two body parts, or between the body and another collision object. A typical example of this is what can happen if your garment is a pair of pants, trapped between the calf and thigh of a squatting character. The simulation software needs to detect that the cloth points are inside two surfaces, and try to find a place of compromise between the two. Most importantly, it needs to be stable and not wiggle under these conditions.
[Figure: pants fabric sandwiched between calf and thigh. ©Disney/Pixar 2001]
[Figure: Moo trying a somersault; cutaways show Moo's chin still outside her chest, then tucked inside her chest. ©Disney/Pixar 2001]
This case of Moo trying a somersault illustrates a particularly evil example. Early in the motion her chin is clear of her chest, but as she tucks into the roll, it suddenly passes inside. The shirt had a good surface to collide against at one frame, and that surface is suddenly gone the next frame. The end result is that the collar passes through her neck. In this test case, we cheated to get a good test by adjusting the animation data seen by the simulator so that her head doesn't tuck in as much. It still worked even though the rendered animation was kept in its original state.
Results
In the end, our software and model were very robust. Nine out of ten shots went through the first time with the default setup. For the remaining ten percent, we used tricks like turning off collisions in a particular region for a few frames, or temporarily gluing down a few points in the mesh. There were some shots where we asked for a change in the animation, but those were few in number. We only had to ask for a little cleanup, not anything that would change the acting.
Thus, Margalo is a somewhat imaginary bird, both in terms of her looks and, like Stuart, in terms of her rather human-like behaviors. With respect to our efforts to produce CG feathers, this meant that we didn't have to match an existing bird exactly, but rather satisfy the creative vision of the director. So instead of a "real" feather coat, Margalo needed "realistic", convincing and believable feathers. Currently, we are developing the look of another bird character in the movie, a peregrine falcon, which is required to look very much like a falcon in the real world.
From these requirements, we knew that we would have to be able to produce realistic CG feathers as an end result. But we also knew that our feather pipeline needed to be practical, flexible (easy to modify and add new effects), robust (reliable enough to push lots of frames through), efficient (as inexpensive as possible, in time and storage), and easy for animators to use. During an initial research phase — which included field trips to museums, live-bird demos, in-house lectures on bird anatomy, behaviors and looks, and studying books on birds and feathers — we realized that there was little published material in the computer graphics literature on this topic. Based on our experience and the information gathered, we decided to divide our research and development efforts into two camps: designing a realistic feather primitive, and designing a pipeline for generating a feather coat (consisting of these realistic feather primitives). Both are addressed briefly in the following sections.
Feather Primitive
The tools to generate a realistic, individual feather primitive fall into two major categories: design or mod-
eling tools, and rendering or shading tools.
Real feathers come in all kinds of different shapes, sizes, colors and configurations. This is certainly true for feathers of different birds, but also for the feathers that cover just a single bird. There are contour feathers, flight feathers (remiges), tail feathers (rectrices), down feathers, etc. We wanted to be able to create all these feather types within a single model. For this purpose, we have identified over 200 descriptive feather attributes, which our modeling software then maps to specific feather geometries. Examples of these attributes are: shaftLength, shaftRadius, vaneWidthLeft, vaneWidthRight and downHairDensityVaneRight. Each of the attributes has a default value, and certain combinations of attribute values define different feather types, such as down or contour feathers. The final feather geometry is composed of a NURBS surface for the shaft (quill) of the feather, one or more NURBS surfaces for each of the left and right vanes, plus any number of NURBS curves for the down hairs and barbs, depending on the attribute values. With this model, we have been successful in accurately defining the shapes of a wide variety of feathers.
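As a toy illustration of attribute-driven feather types: shaftLength, shaftRadius, vaneWidthLeft/Right and downHairDensityVaneRight are attribute names mentioned above, while every default value and the barbCount attribute are invented for this sketch.

from dataclasses import dataclass

# A tiny record of descriptive feather attributes with default values; the
# modeling software maps such attributes to specific feather geometries.

@dataclass
class FeatherAttributes:
    shaftLength: float = 4.0
    shaftRadius: float = 0.05
    vaneWidthLeft: float = 1.0
    vaneWidthRight: float = 1.2
    downHairDensityVaneRight: float = 0.0
    barbCount: int = 120  # invented attribute

# A particular combination of values defines a feather type, e.g. a down
# feather: short shaft, no stiff vanes, dense down hairs.
down_feather = FeatherAttributes(shaftLength=1.0, vaneWidthLeft=0.0,
                                 vaneWidthRight=0.0, downHairDensityVaneRight=50.0)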
Real feathers also interact with light in many intricate ways, and due to the simplified NURBS models of our feathers, special shading methods are necessary to account for effects like reflection, opacity, texturing, color and self-shadowing of feathers. We also need to simulate the anisotropic (*) nature of the vanes of a feather. We have developed special feather shaders to account for these effects.
(*) A surface is considered anisotropic if the light intensity reflected to the viewer varies when it is rotated around its normal while the light and viewer directions remain unchanged.
Feather Coat
Generating a convincing computer graphics feather coat that covers the skin of a whole bird requires dedicated solutions to a number of problems. First, it is infeasible to individually model, and perhaps animate, each of the very large number of feathers. Other problems that can arise are collisions of feathers, both between neighboring feathers and between feathers and the underlying surface. Finally, feathers are not static, but move and break up as a result of the motion of the underlying skin and muscles, as well as due to external influences such as wind.
We designed a feather generation system to address these issues. Based on the experience we gained from the development of a related system to generate hair/fur for "Stuart Little," we implemented a new, improved and more powerful pipeline to put feathers on a bird. Some of the design features have been:
- define and animate a few simple key curves, ribbons or feather primitives (hundreds), then automatically produce the full feather coat (tens of thousands) from these primitives plus their attributes. Each final feather produced this way is called a procedural feather.
- provide grooming (combing) and animation tools for these primitives.
- separate static (once-only, e.g. feather position on skin) and frame-dependent (e.g. feather orientation) calculations for efficiency. The static calculations are done before render time; frame-dependent information is computed during rendering.
- include a mathematical expression language to modify feather attributes, both during static calculations and at render time. For example, based on their feather id, certain feathers could be lengthened or shortened at render time from their predefined length according to an arbitrary expression; or, we might want to break up the groomed feather look a bit by small random offsets (expressions) in the rotations around the underlying surface normal axis, twist axis or lay-down axis of each final feather (see the sketch after this list).
- provide feather collision detection and response, i.e. prevent feathers from interpenetrating, on demand as a pre-rendering step.
- supply a variety of automated and semi-automatic feather placement tools. These range from random placement, grid positioning, placement based on density maps or particle repulsion, placement along isoparms, direct placement, or combinations of these. These tools are necessary to achieve various desired looks. Whereas in real birds feather follicles are often placed along well-defined dense tracks on the skin called pterylae, Margalo, as an imaginary bird, does not necessarily follow this rule.
- provide a mechanism to turn a procedural feather into a hand-animated "proxy" feather on demand, for instance if the script requires a feather or feathers in a certain area to move a certain way.
- accomplish seamless integration of all feathers: the procedural feathers, the proxy feathers and the "hand-animated" secondary and primary feathers along the wings. The latter are specially physiqued and part of the primary animation, as they express gesturing and are used to interact with objects.
- achieve smooth migration of all files through the pipeline via a reliable versioning and publishing system.
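The sketch referenced in the expression-language item above shows one way such render-time expressions could perturb feather attributes repeatably from a feather id; the formulas, ranges and function names are invented for illustration.

import hashlib
import math

# Render-time attribute expressions keyed on a feather id: each procedural
# feather gets a small but repeatable offset in length and in its rotation
# about the surface normal axis.

def feather_rand(feather_id, salt):
    """Deterministic pseudo-random value in [0, 1] for a given feather id."""
    h = hashlib.md5(f"{feather_id}:{salt}".encode()).hexdigest()
    return int(h[:8], 16) / 0xFFFFFFFF

def apply_expressions(feather_id, length, normal_rotation):
    # Lengthen or shorten by up to 10% of the predefined length.
    length *= 1.0 + 0.1 * (2.0 * feather_rand(feather_id, "len") - 1.0)
    # Break up the groomed look with a small rotation around the normal axis.
    normal_rotation += math.radians(5.0) * (2.0 * feather_rand(feather_id, "rot") - 1.0)
    return length, normal_rotation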
We are still adding new features to the feather pipeline and fine-tuning its performance as we apply the tools to the second computer-generated bird character, the falcon. As the look of Margalo is in its final approval stages, we have been able to produce convincing feathers with our approach.
Research and Production at the Secret Lab
John P. Lewis, David Oliver
The Secret Lab
Contents:
• Research-Production Lag
• Case studies
• Software-Meets-Production Anecdotes
Research-Production Lag
• Template Matching
(feature tracking, image registration)
Pratt, “Correlation Techniques of Image Registration,” 1974
Early production use: Forrest Gump 1994, Speed 1994
• Optic Flow
Horn and Schunck, “Determining Optical Flow,” 1980.
Commercial product: Cinespeed, 1995
Notable production: What Dreams May Come, 1998; The Matrix, 1999.
We know complexity when we see it, but how can we define it formally?
Kolmogorov complexity formalizes an intuitive notion of complexity. Consider
the three patterns:
11111111111111...
12312312312312...
30547430729732...
These strings may be of the same length, but the first two strings appear to be simpler than the third. This subjective ranking is reflected in the length of the programs needed to produce these strings. For the first string the program is only a few bytes in length, e.g., a short loop that prints the digit 1 over and over.
The program for the second string is slightly longer since it will contain either nested loops or the literal '123'. If there is no obvious pattern to the third string, the shortest program to produce it is the program that includes the whole string as literal data and prints it — the string is incompressible or algorithmically random.
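As a purely illustrative aside (these one-liners are not from the original notes), programs of roughly the required sizes for the first two strings might look like:

# Illustrative only: tiny programs generating the first two patterns.
n = 1000
print("1" * n)           # first string: a few bytes suffice
print("123" * (n // 3))  # second string: slightly longer; carries the literal "123"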
Ideally the complexity of an object should be a property only of the object itself,
but the choice of computer and programming language affects program lengths.
Kolmogorov complexity handles this issue by considering the complexity to be
well defined only for large objects. A translator or emulator from any language or
machine to any other is a fixed-size program, possibly of about 100K bytes or less,
so the choice of an inelegant language or machine adds only a constant amount to
the algorithmic complexity; this amount becomes insignificant in the limit of large
objects.
Objective Estimation is not Possible: Argument
Conclusions
• Objective estimation is not possible, but subjective estimation is. Find ex-
perienced people with good opinions. They won’t always be right.
In order to provide more realistic and efficient character deformation tools for feature film, we developed a technique that produces complex skin reaction to underlying bone animation. The system was developed in response to a production need for an easy-to-use system that would allow thousands of pieces of character animation to be sent through a production department with the least amount of per-scene adjustment. Critical to the success of the system was the extremely rapid turnaround of new software versions, due to the involvement of the software team in the production process.
Building characters
We use a multilayered approach, in which simple bone motion, created using a typical inverse kinematics system, drives the motion and shape of NURBS surfaces representing muscles. These muscles, in turn, drive the motion of the NURBS skin layer. The muscle layer is a collection of surfaces that move in response to bone motion. By changing the behavior of individual muscles involved in the motion of a joint, complex skin behavior can be produced based on less complex bone motion.
The process of building a muscled and skinned character begins with the creation of an IK animation rig, which determines the position and length of each bone, and the creation of a modeled character skin. The muscles are then applied, based on the physiology of real or similar characters, and on the visible muscles in the modeled skin. As the muscles are being created around a joint, the joint is moved through its range of motion to ensure the correct muscle response when the joint is moved during a production animation scene.
Once the muscles have been created, the modeled character skin is attached to the muscles, and to any other surfaces that have been created to model fat or simulate other non-muscle-based skin motion. Each vertex in the skin is uniquely associated with a point on a muscle, so that as the muscles are moved by the bone animation, the skin is then moved by the muscles.
This initial positioning of the skin vertices is based only on the motion of the muscles and other driving surfaces, and does not take into account the position of other skin vertices. This creates a rough surface, with sharp muscle boundary artifacts and excessive local stretching and folding. To reduce these artifacts, the skin is then relaxed by iteratively applying forces to each vertex based on the distance to neighboring skin vertices. These distances are compared to the neutral position, and forces are applied based on the change in length. The neutral length may be altered by using an attribute map to provide areas of increased skin tension. The movement created by these changes in length is constrained by the skin surface normal, so that movement against the normal is penalized, preventing excessive loss of volume during extremes of joint movement.
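A minimal sketch of this relaxation pass follows, assuming a simple per-vertex neighbour list; the constants, data layout and convergence strategy are invented, not the production system's.

import numpy as np

# Nudge each skin vertex by forces based on how its edge lengths to neighbours
# differ from the neutral pose (optionally scaled by a tension attribute map),
# and penalise motion against the surface normal to limit volume loss.

def relax_skin(pos, neighbours, rest_len, normals, tension=None,
               iterations=10, step=0.2, normal_penalty=0.8):
    """pos: (N, 3) vertices; neighbours[i]: indices adjacent to vertex i;
    rest_len[i][k]: neutral length of edge (i, neighbours[i][k]);
    normals: (N, 3) unit surface normals; tension: optional per-vertex scale."""
    pos = pos.copy()
    for _ in range(iterations):
        for i, nbrs in enumerate(neighbours):
            force = np.zeros(3)
            for k, j in enumerate(nbrs):
                d = pos[j] - pos[i]
                length = np.linalg.norm(d)
                target = rest_len[i][k] * (tension[i] if tension is not None else 1.0)
                force += (length - target) * d / max(length, 1e-8)
            move = step * force
            inward = np.dot(move, -normals[i])   # component pushing against the normal
            if inward > 0.0:
                move += normals[i] * inward * normal_penalty
            pos[i] += move
    return pos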
Use in Production
This system has been used to create over 2000 production shots using over 40 characters across several productions. One of the primary goals in developing the system was to provide animators and technical directors with the most effective way to produce realistic and subtle skin motion in the shortest time, with the least amount of adjustment on a per-shot basis. In order to achieve this, we developed a series of tools and techniques:
1) Scripts to automatically do a first-pass application of animation onto a muscled and skinned character.
2) Groupings of muscles which allow for limb-by-limb control of muscle response.
3) Geometry blending tools to correct skinning artifacts.
4) Overall skin tension controls that allow folds to be pulled when needed.
5) Editing methods for changing skin-to-muscle association.
An animation typically is applied to a muscled character after the animation has been approved as seen on a low-resolution, marionette-type rig. The first results of the skinning process are generated automatically from scripts, and, because the basic skin system does not use previous-frame information, the task of generating per-frame geometry files does not have to be performed sequentially and can be distributed across available CPUs. This means that initial turnaround from marionette to skinned geometry is quite fast. After evaluating the results of the first pass, changes can be made either to the animation performance, to the muscle behavior, or to the skin parameters, and the animation re-run. In production, the vast majority of shots are approved, in terms of skin geometry, in two or three passes. The remaining shots that require more attention are handled by adjusting local skin tension through attribute map values, changing muscle attachments and response, specific temporal geometry targets, successive skin relaxation passes, the addition of shot-specific muscles or other skin-driving geometry, and a variety of other muscle and skin system tools.
Because of the intermediate muscle layer between bone and skin, the motion of the skin above a joint can be much more complex than can be achieved using a simple weighted binding technique. Although the motion of the skin is, ultimately, based on the motion of the bones, the muscle layer provides a way to build surfaces that interpret bone motion into a more complex association of muscle surface changes.
The more convincing our characters become, the more aware we become of the deficiencies of this technique of character deformation. Although the system successfully models a number of skin parameters, there is no method for allowing the skin to move transversely over its underlying structure. This creates more pinching and localized stretching than should occur. Similarly, by using a fixed attachment between muscle and skin to initially position the skin vertices, we often create folds that are difficult to resolve in the relaxer phase.
We continue to develop the system toward both realistic and non-realistic applications. Each production develops its own approach to building the intermediate muscle layer, depending on its own needs. We have incorporated a layer-upon-layer approach that uses an inner skin layer to allow for more control and effects in the final, outer skin. As well, we are developing dynamic response, and alternate surface description methods.
Acknowledgements
The muscle and skin system at Walt Disney Feature Animation / The Secret Lab was created through the unwavering efforts of Ross Kameny, Dr. Don Alvarez, and myself. Also critical to the success of the system are the contributions of Kevin Rogers, Philip Schneider, Sean Phillips, Mark Empey and the rest of the character finalling team for the "Dinosaur" project.
Pose Space Deformation: A Unified Approach to Shape Interpolation and
Skeleton-Driven Deformation
J. P. Lewis∗, Matt Cordner, Nickson Fong
Centropolis
Abstract

Pose space deformation generalizes and improves upon both shape interpolation and common skeleton-driven deformation techniques. This deformation approach proceeds from the observation that several types of deformation can be uniformly represented as mappings from a pose space, defined by either an underlying skeleton or a more abstract system of parameters, to displacements in the object local coordinate frames. Once this uniform representation is identified, previously disparate deformation types can be accomplished within a single unified approach. The advantages of this algorithm include improved expressive power and direct manipulation of the desired shapes yet the performance associated with traditional shape interpolation is achievable. Appropriate applications include animation of facial and body deformation for entertainment, telepresence, computer gaming, and other applications where direct sculpting of deformations is desired or where real-time synthesis of a deforming model is required.

CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—Curve, surface, solid and object modeling; I.3.6 [Computer Graphics]: Methodology and Techniques—Interaction techniques; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism—Animation

Keywords: Animation, Deformation, Facial Animation, Morphing, Applications.

While a purely image-based approach can achieve very realistic images, this advantage may be lost if one needs to introduce geometry and surface reflectance in order to re-light characters to match preexisting or dynamically computed environments. Film and entertainment applications require fanciful creatures that fall outside the scope of image-based approaches.

Some of the most impressive examples of geometry-based (as opposed to image-based) human and creature animation have been obtained in the entertainment industry. These efforts traditionally use shape interpolation for facial animation and a standard but variously-named algorithm that we will term skeleton subspace deformation (SSD) for basic body deformation [25, 9]. While shape interpolation is well-liked by production animators, it is not suitable for skeleton-driven deformation. On the other hand SSD produces characteristic defects and is notoriously difficult to control.

These issues, which will be detailed in the next section, lead us to look for a more general approach to surface deformation. We consider the following to be desirable characteristics of a skeleton-based surface deformation algorithm:

• The algorithm should handle the general problem of skeleton-influenced deformation rather than treating each area of anatomy as a special case. New creature topologies should be accommodated without programming or considerable setup efforts.
Figure 8a. Comparison of PSD and SSD on an animating shoulder – PSD using only two sculpted poses.
Figure 8b. SSD on an animating shoulder. The shoulder area is especially problematic for SSD due to the large range of rotational movement.
Figure 9. Comparison of PSD (at left) and SSD on the extreme pose of an elbow.
Figure 10. Smooth interpolation of four expressions (frown, neutral, smirk, smile) arranged along a single axis in a pose space, c.f. the discussion of Figure 7.
Pose Space Deformation Notes
Pose Space Deformation (PSD) is a simple algorithm (it can be implemented in a few
dozen lines of code) that combines skinning and blend-shape approaches to deforma-
tion, and offers improvements and additional control.
• Shapes are not independent. A major consideration in designing face models for
shape interpolation is finding sets of shapes that do not “fight” with each other.
Animators describe this common problem with shape interpolation: the model
is adjusted to look as desired with two targets. Now a third target is added; it
interferes with the other two, so the animator must go back and adjust the previ-
ous two sliders. And so on for the fourth and subsequent sliders. Sophisticated
models (e.g. those on Disney’s Dinosaur) can have as many as 100 blend shapes,
so this is a lot of adjustment due to shape “fighting”.
Likewise, the authors of shape interpolation programs have described artists’
complaints relating to lack of shape independence – with highly correlated shapes
it is not clear which slider should be moved. Some shapes reinforce, others can-
cel, sometimes a small slider movement results in a large change, sometimes
not.
• Animation control is dictated by sculpting. Each slider controls one key shape, and each key shape is controlled by one slider, as it has been for 15 years of facial animation.
• Linear interpolation
The problem of shapes "fighting" arises because the shapes are simply added. PSD interpolates, so keyshapes do not interfere.
It is necessary to be able to control the influence of each keyshape, but the one-for-one
mapping is not the only way to do this.
• Non-control shapes. Suppose “excited” and “happy” are two distinct target
shapes, but in a direct crossfade the intermediate shape is not adequate and a
new model is required. With SI one would need to introduce a new slider for the
intermediate “half-excited-half-happy” model, and this simple crossfade then re-
quires manipulating three sliders. Arguably this is complexity caused by the
system rather than desired by the animator. With PSD, place the halfway shape
halfway between the key shapes and it will automatically be interpolated during
the crossfade.
PSD allows smooth interpolation if desired, whereas with shape interpolation, in going
from shape A to B and then to C, an individual cv moves in a piecewise linear way –
there is a kink at B. Easing in/out of the transition does not change this fact.
Linear Algebra View of Shape Interpolation
Linear algebra gives another viewpoint on the character of motion resulting from shape interpolation: shape interpolation of $n$ shapes, each having $m$ control vertices,

$$S = \sum_{k=1}^{n} w_k S_k$$

can be written as a vector-matrix multiply with the keyshape vertices arranged in the columns of a $3m \times n$ matrix:

$$\begin{pmatrix} c_{1x} \\ c_{1y} \\ c_{1z} \\ c_{2x} \\ c_{2y} \\ \vdots \\ c_{mz} \end{pmatrix} = \begin{pmatrix} | & | & \cdots & | \\ S_1 & S_2 & \cdots & S_n \\ | & | & \cdots & | \end{pmatrix} \begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix}$$
The range of this matrix is at most of dimension n, so the animation is restricted to this
subspace of dimension n << 3m reflecting the fact that individual cvs cannot move
independently. The ‘Bruton’ Dino model appears to have 60*52 + 4*21 + 4*18 + 4*21
+ 15*35 + 16*35 + 11*35 + 18*35 + 17*21 + 17*16 + 17*21 + 11*21 = 6677 cvs and
so can be represented in a 3 (x,y,z) * 2 (symmetry) * 6677 length vector. On the other
hand it appears that there are under a hundred key shapes used to animate this head.
The preceding vector interpretation is valid; the next analogy is only that (an analogy).
Consider the cv’s as “samples” representing the resolution of the model – so the Bruton
model has 18k samples. Also consider the number of samples needed to represent an
object in the subspace of possible movement: 100 or less. This ratio of 100/18k reflects
a movement deficiency - it indicates how much modeling resolution is not used in the
animated movement.
A similar vector space interpretation of PSD is more complex but indicates that the
PSD motion is richer than that produced by shape interpolation. A single coordinate of
a particular cv is deformed as
\[
c = \sum_k w_k \, R(\lVert \theta - \theta_k \rVert)
\]
where θ is the vector of PSD parameters. The matrix R changes depending on θ, and
wk are different from one coordinate to the next, so the range is not a simple subspace
– each cv has some amount of independent movement.
PSD versus Shape-by-Example
whereas PSD is
\[
x_j(p) = \sum_k w_k \, R(\lVert p - p_k \rVert)
\]
Since the sqrt involved in the distance can be folded into R, PSD deformation appears
to be more efficient.
The shape-by-example paper has some useful tips on improving RBF interpolation.
Implementation notes
Sample code is available at www.idiom.com/~zilla/Work/PSD/. The Gaussian RBF routine there is constructed from pts, the 2-D locations with data, values, the values at those locations, and width, the width of the Gaussian kernel. A thin-plate RBF routine has a similar constructor.
This routine does a thin plate + affine fit to the data. The thin plate minimizes the
integrated second derivative of the fitted surface (approximate curvature).
In fact any radial kernel different than a linear function will work, so one can choose a
smooth piecewise polynomial for efficiency, or store the kernel in a precomputed table.
It is often desirable that the interpolated function decay to zero far from the data, which
is not true of the thin-plate interpolation. The affine component of the thin-plate code
is useful; this should be incorporated in the Gaussian RbfScatterInterp routine.
Both routines implement the same evaluation interface.
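As an illustration only (the real routines live at the URL above; the class name is borrowed from the notes, but the signature and internals here are assumptions), a minimal Gaussian RBF scattered-data interpolator might look like this:

import numpy as np

class RbfScatterInterp:
    """Illustrative Gaussian RBF scattered-data interpolation (a hypothetical
    reconstruction, not the code at www.idiom.com/~zilla/Work/PSD/)."""

    def __init__(self, pts, values, width):
        # pts:    (n, 2) array of 2-D sample locations
        # values: (n,)   array of data values at those locations
        # width:  width (sigma) of the Gaussian kernel
        self.pts = np.asarray(pts, dtype=float)
        self.values = np.asarray(values, dtype=float)
        self.width = float(width)
        # Solve A w = values, where A[i, j] = R(|pts[i] - pts[j]|).
        d = np.linalg.norm(self.pts[:, None, :] - self.pts[None, :, :], axis=-1)
        self.weights = np.linalg.solve(self._kernel(d), self.values)

    def _kernel(self, r):
        # Gaussian radial kernel; any radial kernel other than a linear one works.
        return np.exp(-(r / self.width) ** 2)

    def __call__(self, p):
        # Evaluate the interpolant at a query point p.
        r = np.linalg.norm(self.pts - np.asarray(p, dtype=float), axis=-1)
        return float(self._kernel(r) @ self.weights)

# Example: interpolate four scattered samples and query the interpolant.
interp = RbfScatterInterp(pts=[[0, 0], [1, 0], [0, 1], [1, 1]],
                          values=[0.0, 1.0, 1.0, 2.0],
                          width=0.7)
print(interp([0.5, 0.5]))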
Research and development does not always transition smoothly into production.
Can we learn anything from cases where the R&D was not used?
On Disney’s 102 Dalmatians, the story requires that the lead dog character have no
spots through most of the movie; the spots reappear at the end. Animal handlers
determined that there was no effective and safe makeup that could be applied to
the dogs to remove the spots. This left Disney TSL with the task of painting out
all the spots on the dog.
...”It turned out to be a huge job, much bigger than any of us had
imagined.”
— Jody Duncan, “Out, Out, Damned Spot,” Cinefex 84 (January 2001), p. 56.
The Disney TSL software group produced two rapid prototypes of automated spot removal algorithms (one is described in the SIGGRAPH 2001 Technical Sketch “Lifting Detail from Darkness”). That algorithm had advantages over the painting procedure both in time savings and in that it recovered the fur detail from the spots when possible, resulting in accurate detail without the chattering or sliding that results from imperfect painting.
Lessons
1. Early prototyping is beneficial, but is not enough in itself
2. Work directly with the artists who will use the software.
(Comment from a 102 artist: “why aren’t we using this?”)
3. Need a project champion in management
• A facility had a software help phone number, to which each programmer was
assigned on a rotating basis. Programmers shared offices. When one particular
programmer answered the phone, we would usually hear "Hello" (silence for a
while as the artist explains the issue) "Why would you want to do that?"
• I never had any problem finding the soft dev person on our show. At the end of
the day I'd always run into him when I was walking out the door.
• In the early stages of the particle software for our show, we'd get together with
our soft dev person to see the progress. He'd show us a bunch of vectors moving
around and ask "What do you think?" We would ask in reply "Are you making
fun of us?"
• All I wanted was a plug-in for our tracking software. What I got was a two year
plan for writing our own tracking software.
• A particular show was using custom software to produce its signature effect. The
show and its effect were recognized at an industry conference, and an effects
supervisor gave presentations showcasing the work. At an internal meeting he
said that he never wanted to use custom software again.
• Someone was offended when we couldn't say how long it would take to make one
of the features of the program work better. Work better how?
• A programmer was working literally every possible waking hour to produce the signature software for a show. On his birthday his colleagues arranged a surprise party for him because he had been working so hard, but he was told that he could not leave to go to the party.
• "You guys get away with murder." -- A visual effects supervisor, speaking of the
software team.
• A software company sent around an announcement saying that their new feature
would be shown at Siggraph, but it wasn't. Two years later, they told us that their
new feature would be ready in time for Siggraph...
• A particular program was requiring massive resources, both time and memory.
Speeding the program up had been on the production's to-do list for months. The
particular program ran on shared memory multiprocessor machines. A consultant
with experience in the esoteric cache/processor interactions on these machines
was called in, and he quickly wrote a simulation program that showed some
simple reorganization could speed the program up by a dramatic factor (like
10,000), since it was spending most of its time waiting to lock down memory on
other processors in order to write. The consultant believed that the required
reorganization would take a few days. Production was asked whether they wanted
to pay for the consultant to return. They didn't.
• Compositors were complaining that the 3D elements they received were not lit
properly, so a software TD was assigned to investigate. The software TD asked
one of the responsible artists, "what color space are you working in? Are the
background plates linear?" The artists responded, "what space are you talking
about? This is MAYA SPACE!"
• Another color complaint came in, and once again someone was assigned to investigate. The investigator found that
one artist had his monitor gamma correction arbitrarily set to 0.8 and the color
balance set to a strong green. The investigator told the artist that he needed to use
the latest standard lookup table. The artist responded "no, it hurts my eyes when
I surf the web."
Simulating the visual quality of skin is critical to the success of having reasonably realistic computer-
generated characters occupy lead roles in feature films. During close-ups, skin is the single surface that
occupies most of our field of view -- the canvas on which the essence of the story unfolds as the emotions of
the characters are revealed.
Skin has complex structures and its reaction to light is not simple. Most of us don't really understand why
skin looks and behaves the way it does, but we are all intimately familiar with its appearance. We know when
it looks right and are distracted when it looks wrong.
So the minimum challenge is to create skin that is not distracting, which is acceptable enough in its
appearance that it doesn't detract from the emotional message that the characters are trying to convey. Skin
that lets the acting of the characters come through and helps us accept them as real.
Beyond that, if we have an understanding of what makes skin appear the way it does, and with control
over those qualities, it should be possible to go beyond mere acceptance. This makes it possible to create skin
that actually enhances the character’s performance; skin that in subtle ways might be beautiful, luminous, or
creepy; perhaps more so than real skin could ever be.
With these goals in mind, we developed a process for simulating the look and reflective quality of real skin
under a wide range of lighting conditions and character motion. This process was applied to SHREK’s lead and
secondary characters. The fundamental component of the process is a shader that defines a lighting model
based on a simulation of skin's physical structure. Hand-painted maps that add blemishes, control the thickness
of different skin layers, and define the distribution of moisture augment the results of the shader. In addition,
portions of the shader can be controlled by the animation system. For example, wrinkles can be applied
around areas of compression or stretching like around the corners of the eyes.
In addition, we had certain practical requirements that the shader must meet. The shader had to be cheap
enough to use on multiple characters in over 1500 shots. This meant it couldn’t take much longer to shade than our typical standard shaders do. It also had to fit into our standard rendering pipeline. Our standard renderer is a modified a-buffer renderer, and although we can do ray tracing, we could not afford to use it on a regular
basis for the main characters. A more subtle requirement was that control is more important than strict physical realism. Since we were not creating a photo-realistic film, the shader had to be adaptable to the wishes of the directors and art directors. At the time this was being developed, SHREK was still in visual development and
designs were apt to change. Also, we wanted to use this shader on a wide variety of human and human-like
characters.
There are three visual properties of skin that immediately stood out. First, the color of the skin is view
dependent. This is because of the sub-surface scattering, but the effect is that skin is a different color
depending on whether you view it head on or at a grazing angle. Second, the color of the skin is dependent on
light hitting the surface “nearby”. This is also because of the sub-surface scattering. Unlike hard objects, the
light hitting the skin at a certain location may affect the color at a nearby location because light can be
transported through the skin and emerge at a different location than it entered. Finally, the specular behavior of
skin is also more complex than the standard specular formula currently in use. There are at least two different
sorts of specular behavior: one term is from the skin itself, and one is from the oils and moisture resting on
top of the skin. The specular term from the skin itself is particularly anisotropic.
With this information, we set out to find a solution in the computer graphics literature. Unfortunately, we
quickly found out that there has been very little published on rendering skin. The only appropriate reference
we found was the Hanrahan/Krueger paper from SIGGRAPH 1993 titled “Reflection from Layered Surfaces
due to Subsurface Scattering” [1]. This covers skin as well as other similar layered materials. However, we
quickly decided that this approach would be too expensive for Shrek to use and possibly too difficult to
control. So, back at the drawing board we came up with two separate plans. Plan A was to explore the use of
the Kubelka-Munk model for layered pigments. Plan B was to follow a standard CG tradition and look for
ideas in other fields.
Plan A: Kubelka-Munk
Kubelka and Munk devised the Kubelka-Munk model in 1931 to describe the optical property of a turbid
medium that absorbs and scatters light. A medium that transmits light largely by a diffuse transmission such
that objects are not directly seen through it is considered a turbid medium. It was originally developed for
paint films, but seemed to be a good possibility for skin due to its elegant treatment of layered models. The
Kubelka-Munk model was recently applied to metallic patinas [2] and watercolor rendering [3], which is where we first noticed it. Haase and Meyer first introduced the model to the CG community in 1992, in a paper called “Modeling Pigmented Materials for Realistic Image Synthesis” [4].
In this model, each layer is described by three values: the thickness, the diffuse back-scattering coefficients
and the absorption coefficients. Immediately this seemed like a great idea for skin as it seemed to fit the model
perfectly. It is computationally cheap as it is an implicit model and doesn’t require taking multiple samples
across the depth of the layers or firing many rays to investigate the region below the surface. We began
playing with the model and found it powerful, but very unintuitive. It is difficult to specify a desired color using these terms, and the same color can be made in various ways, especially with a multi-layered system. Excited about the idea, but unsure whether it would lead to a robust model of skin or be usable in production, we also began on Plan B.
Due to this layered nature, the optical pathways in skin are very complex, as shown in the next diagram.
About 5% of the incident light will reflect off the skin in a specular manner. The rest will penetrate the
epidermis and bounce around inside it. Some of it will bounce back out of the epidermis, some will bounce off
the boundary between epidermis and dermis, some will be absorbed, and some will proceed down into the
dermis. Once inside the dermis the light will bounce around more, possibly getting absorbed, possibly making it back up to the epidermis, and possibly making it back out and becoming visible again.
The final skin shader simulates several key aspects of real skin: translucency, oily highlights, softness, and
selective backlighting. There are four important layers: the oily layer on top, the two layers of the skin itself (the epidermis and the dermis), and the bone layer underneath. Controlling the thickness of the epidermis
layer via maps allows more or less of the dermis to be visible in different regions (such as the lips). This
layered model was shaded according to an implicit model of sub-surface scattering, giving the warm "flush",
softness, and tonal variation so critical to representing realistic skin. Another key feature of the skin shader is
the ability to map bone regions, which allowed areas that do not contain actual bone, such as the ears, to be painted so that backlighting can glow through. The specular aspect of the lighting model has one
component that accounts for the moisture of the skin and another that accounts for the anisotropic effects
caused by the pores.
Texture maps for freckles, blemishes, and other detail are applied to either the dermis or epidermis in order
to get effects that work well in conjunction with the lighting model. If all of the detail sits in one layer, you do
not get the dynamic accumulation and loss of detail as the character moves through light in a scene.
The implicit model of sub-surface scattering has two primary parts. The first is that the skin layers are
combined using the Kubelka-Munk model. The thickness and back-scattering parameters are parameters to
the shader. Instead of providing the absorption as a parameter, we gave the “forward scattering” and computed the absorption as one minus the forward scattering. This makes the shader much easier to use, since the animator or TD setting up the shader need only think about scattering and not about absorption. In the simple case, the back and forward scattering terms can be identical. The apparent thickness of a layer depends on the view angle, so this model gave us the view-dependent colors in a very natural way.
Underneath the skin is the “bone” layer. It is a white diffuse layer with an adjustable transparency. Under most
of the skin it would be opaque, but under certain parts like the nose and ears it would be partially transparent
to simulate cartilage instead of bone. When the bone was not opaque, we considered lights from behind the surface as well as those in front to get a natural backlit effect.
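As a rough, monochrome sketch of this layered combination (a minimal illustration using the standard Kubelka-Munk layer and compositing formulas, not PDI/DreamWorks' shader; all parameter names and default values are assumptions), each layer's reflectance and transmittance come from its thickness and scattering terms, the apparent thickness grows with the viewing angle, and the layers are composited down to a diffuse "bone" backing:

import math

def km_layer(thickness, back_scatter, absorption):
    """Kubelka-Munk reflectance and transmittance of one turbid layer.
    back_scatter (S) and absorption (K) are per-unit-thickness coefficients."""
    S = back_scatter
    K = max(absorption, 1e-4)          # avoid the degenerate zero-absorption case
    a = (S + K) / S
    b = math.sqrt(a * a - 1.0)
    sh = math.sinh(b * S * thickness)
    ch = math.cosh(b * S * thickness)
    denom = a * sh + b * ch
    return sh / denom, b / denom       # (reflectance R, transmittance T)

def composite(top, bottom):
    """Composite an upper K-M layer over a lower one; each is an (R, T) pair."""
    (R1, T1), (R2, T2) = top, bottom
    k = 1.0 - R1 * R2                  # accounts for inter-reflection between layers
    return R1 + T1 * T1 * R2 / k, T1 * T2 / k

def skin_reflectance(cos_view, epi_thick, derm_thick,
                     epi_fwd=0.8, epi_back=0.4,
                     derm_fwd=0.6, derm_back=0.5,
                     bone_reflect=1.0):
    """Monochrome sketch of the layered model described above: absorption is
    taken as one minus forward scattering, apparent thickness grows as the
    surface tilts away from the viewer, and everything sits on a diffuse
    "bone" backing."""
    stretch = 1.0 / max(cos_view, 1e-3)
    epi = km_layer(epi_thick * stretch, epi_back, 1.0 - epi_fwd)
    derm = km_layer(derm_thick * stretch, derm_back, 1.0 - derm_fwd)
    skin = composite(epi, derm)
    bone = (bone_reflect, 0.0)         # opaque white layer underneath
    return composite(skin, bone)[0]

# Grazing views see more epidermis, so the result shifts toward its color.
print(skin_reflectance(1.0, 0.3, 1.0), skin_reflectance(0.2, 0.3, 1.0))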
The second part of the implicit sub-surface scattering model is less elegant, but effective nonetheless. As is
typical in most renderers, the lights only compute information about how much light falls on the current
shading sample. This makes it hard to get lighting information for “nearby” points that play an important role
in skin illumination. We solved this problem in a purely visual manner. The effect of nearby light was only
important where the amount of light changed rapidly, due to shading or shadow boundaries or high-frequency bumps. Identifying those locations allows us to mimic the effect by shading certain samples twice and tinting and mixing the results. For example, we may query the lights with the shadows both on and off and mix a slightly redder version of the shadows-off color into the shadows-on color based on how much the current point is in shadow. Similar techniques can be applied to other aspects of the shader.
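A minimal sketch of that double-shade mix, assuming simple RGB stand-ins for the renderer's light queries (the tint and strength values are illustrative, not the production settings):

def shadowed_skin_color(lit_no_shadow, lit_with_shadow, shadow_amount,
                        scatter_tint=(1.0, 0.85, 0.8), strength=0.35):
    """Mix a slightly reddened version of the shadows-off color into the
    shadows-on color, weighted by how deep in shadow the sample is.
    Colors are RGB tuples; shadow_amount is in [0, 1]."""
    return tuple(on + (off * t - on) * strength * shadow_amount
                 for on, off, t in zip(lit_with_shadow, lit_no_shadow, scatter_tint))

# A sample that is 70% in shadow picks up a warm bleed from the unshadowed result.
print(shadowed_skin_color((0.8, 0.6, 0.5), (0.2, 0.15, 0.12), 0.7))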
The following images were the early tests of the skin shader before it was given to production. The first image
is a very early proto-Fiona model. The second image is the same skin parameters, but with only one back light
turned on. The next two are very early proto-Shrek models. As you can see, the basic effects are in place, but
we are far from finished. The result of the process is tied up in the visual development of Shrek’s main
characters, particularly Princess Fiona. How we got from these prototype images to the final look you see in
the film is the subject of the rest of these notes.
Final Shrek
REALISTIC SKIN IN A FANTASY WORLD
In completing the research and design of any computer animation technology, the true marks of
refinement can only be found through the fires of production. The skin technology at PDI/DreamWorks is the
classic example, where thorough investigation and design formed only the husk of what was to come. Real
world experience was needed.
In September 1998, SHREK first entered the world of visual development at PDI/DreamWorks. With a
host of humanoid characters filling most of the screen, suspending disbelief was going to be a huge challenge.
Having proved its facial animation techniques on ANTZ were more than adequate to push the boundaries, the
lighting sophistication would need an advance of its own to pull off the illusion of humanoid characters in a real
world. In ANTZ the surfaces were visually dynamic and rich, but lacked depth and volume. Much like the hard
shells of their real world counterparts, the closest that the surfaces in ANTZ got to fleshy was a slightly satiny
feel. Much like Shrek’s favorite food, onions, the skin surfaces in Shrek would need layers. The skin system was
developed to do just that, and so in that September, Fiona entered visual development.
One of the main challenges from the outset of SHREK was to discover just how much reality there was
versus how much fantasy there was to be. It was definitely a fine line walked on all sides of production, and in
Fiona, we found the pinnacle of challenges. How fantastic was she? Should she have the visual interest of real
skin, with all its blemishes and black heads? Should her skin be without blemish, in a state of perfection like
that of a Barbie doll? Or should she be given the spa treatment so as to have her face plastered with a
foundation and airbrushed with blush-like make-up? Having modeled a less than real-looking Fiona with
exaggerated features, the initial decision was to explore the latter, where there were no visible blemishes, and
her face mainly took on a made-up quality. The fine line had been drawn, moving away from the real world,
and favoring the fantasy world of SHREK.
When development hits production, as in the case of the skin shader, the needs of the moment dictate
the path of evolution, and testing becomes something that happens on the fly. In the world of proprietary
software, we are all highly paid beta testers. Dailies sessions featuring Fiona became a period of personal pain as
those of us who slaved the most stumbled about and tried to establish what humans in SHREK’s world would
look like. Some of this was certainly due to the fact that Fiona was a unique experiment that would need much
iteration no matter how good the effort. Creating a world not yet seen is not an easy task. But part of the
pain was certainly due to the fact that we were putting a new technology created with the intention of
mimicking real world phenomena into a fantasy situation and exploring that complex issue in the context of
that artificial world. It was not going to work. SHREK would need a helping hand from the world of reality.
Almost one year later, that help would come from PDI/DreamWorks’ other division for commercial and film
effects in the form of a test for a potential film job.
We moved ahead, and decided we would make changes where it was necessary. The decision was made
to develop a slightly custom version of the skin, so as not to disturb SHREK’s production, and have the
freedom to explore alterations. With our small staff of three or four, this gave us the latitude to explore the issue even more in depth and find those cracks that still had yet to be filled.
The Dermis and Epidermis in Practice
At this point, we did some quick testing of real world conditions versus what the shader could do. We
soon came to the conclusion that the epidermis and dermis relationships established by the initial research
would work well. Experimenting with various combinations, we were able to get believable fleshy feeling
spheres and teapots that seemed to react well to the lighting situations in which we put them. How to then
combine these delicate settings with useful data to generate believable human skin was another thing
altogether. We soon discovered that with the skin system, it was very easy to get the same visual result for a
single frame 6 or 7 different ways. The crucial and extremely difficult task was deciding which of those ways
was the right path. The physically based dermis and epidermis relationship was a hard thing to wrangle and
required a change in both thinking and practice when surfacing. Typically in surfacing jobs at
PDI/DreamWorks, materials involve a diffuse, specular and bump component with the occasional need for a
few other attributes. This is a very straightforward manner of working. What you see is what you get. If you
need a rock painted, you go and dig up your favorite rock textures and go to work. Unfortunately, in the case
of skin, this doesn’t work (as the many plastic-skinned zombies out there in the film world will attest).
Texturally, skin is easy, as we have billions of photo references from around the world to pull from. But as we
have seen from the skin research, this is but the cumulative effect of layers and layers of translucent pores and
cells blanketed over sheets of veins and bone. A great looking still image of skin only takes a decent scan of
someone’s face. A system that works and behaves properly when put in motion through a number of lighting
scenarios is something else altogether.
The dermis-epidermis relationship is a complex one involving abstract methods to achieve a complete
fleshy appearance. Unfortunately, up to that point, we too tried to cheat the function and feed it insufficient
and misplaced data. As a result, the skin we got was a thick deadened plastic look. If we were going to do it
right, we would have to apply our skin in layers. Veins, muscles, blood and subsurface discolorations are in the
dermis, and freckles, pimples, flaked skin, and superficial discolorations are in the epidermis. To break this
honor system would result in failure. As we worked the skin, we had to be honest and clean about it, keeping
these various attributes separate. It really forced us to search and examine each portion and its real world
counterpart. For instance, let us say we wanted to create something as subtle as sun-freckles. There are four
conceivable ways of achieving essentially the same look. The first would be to darken the underlying dermis
portion significantly enough as to dampen down the epidermis layer, and create a darkened blemish.
Unfortunately, this might work for something straight on, but as the model rotated away from camera and
into some light, over those pixels we would have an accumulation of epidermis along the profile edge
obscuring the freckle. The blemish would appear to come and go as the models moved through the scene.
This is an effect that is desirable for both veins and underlying discolorations, but not appropriate for
something as superficial as a freckle. We could indicate the blemish through a thinning of the epidermal
thickness, and use some of the darkened tissue from underneath to reveal it, but this color would shift
depending on the accumulation of epidermis as the normals change with respect to view. We could use the
diffuse multiplier to darken the effect of the skin altogether, but this would result in a deadened inconsistent
appearance. In general, this type of simplistic linear multiplier on top of the skin effect is a bad idea for
something as layered and varying as skin. Unfortunately, it is also probably the most tempting resource
because of its instant gratification. Finally, the proper solution is to let the sun-freckle lie right out there on the
epidermis, and thicken that skin up a bit so as to receive little of the color accumulation from below. Not by
accident, this is in fact the situation with a thickened freckled chunk of cells on any one of us. The freckle is an
easy example, but the entire complexion works this way, and requires that type of investigation and becomes
extremely difficult when the cumulative effect consists of several layers. This is exactly what we found as we
worked through the problem on this real world test. The alien would require an aged look, with freckles and
pores to match a real world human counterpart we had been given that was shot against a blue screen. The
task was daunting and took some time to get the hang of.
The top image shows Fiona’s cheek dead on with lots of accumulated detail coming from thickness variations in the epidermis with a single
freckle actually residing on the surface. The bottom image shows how the information is lost when the thickness of the epidermis is increased
due to the viewing angle, while the freckle riding the surface remains visible.
If You Can Cheat, Cheat
As we worked through the problem concerning the dermis and epidermis relationship, we found that
something was lacking. We soon found that matching reality isn’t necessarily the best goal when making
the unbelievable believable. In regards to the translucency, we found we had to set our aims a little higher, and
go a bit over what we would consider a match for the human actor. When looking at skin, we want to
perceive that fleshy feel. It’s easy to have a cadaver on your screen. So we in fact pushed the epidermis to a
point where it was both brighter and thinner than one would expect to find in reality and in fact much
different than the approximations made based on the papers. What this did was provide a hypersensitivity to
lighting conditions and the incident viewing direction. Where the skin rolled across the surface toward the
profile edge, if the surface at that point was perpendicular to a key light, you would have an accumulation of
bright epidermis—the result being a pleasing thick fleshy contrast to the skin found in the regions where you
were looking through less epidermis and into more dermis, or flat on from the viewing direction. This was, in
fact, discovered after watching a one-year-old baby walking around in the early weekend sun. The appearance
of baby skin was the proper goal for an egg-shaped alien head, and because we lacked the sophistication of a
volume-based accumulation of light through the pigment of fatty baby tissue, we cheated where we could.
Another discovery we made was that skin rarely features hard shadows. This provided some interesting
discussion and insights regarding the nature of skin and shadow. We determined that the results of sub-surface
scattering and the possibility of retro-reflective properties on a pore-scale may diffuse both penumbra regions
of shadows and terminator boundaries in shading. Three hacks came out of this to simulate these effects. First,
stemming from discussions regarding the retro-reflective qualities of the moon’s surface and the resulting hard
terminator that gives it the crescent nature in waxing and waning periods, we used a diffuse exponent term to
broaden the diffuse region and pinch the shading region. Second, we applied a shading color, so as the surface
rolled away from a light and into shadow, it would receive an additional color term in that falloff region. This
was especially useful for tinting high-frequency bump mapped regions of the skin where, in reality, light should
be entering the lit side of the pore, moving through the skin, and influencing the back side of the pore with a
color accumulation because of the translucent scattering properties. In a bump mapped situation, with no
displaced geometry on this high-frequency scale, there is no way to take advantage of the physically accurate
way of handling this. Third, we added a shadow color term, so that shadows could roll through a warm
contribution in the penumbra region. All three of these heightened the fleshy feel and helped sell the illusion of
volume and depth to our flat surfaces.
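A minimal sketch of those three diffuse-side cheats, layered on a conventional Lambert term (the exponent, colors, and blending choices here are illustrative placeholders, and represent one reading of the description above rather than the actual shader):

def hacked_diffuse(n_dot_l, in_shadow,
                   diffuse_exponent=0.6,
                   shading_color=(1.0, 0.75, 0.65),
                   shadow_color=(0.9, 0.55, 0.45)):
    """Three cheats layered on a Lambert term (one reading of the description):
    1. a diffuse exponent broadens the lit region and pinches the terminator,
    2. a shading color tints the surface as it rolls away from the light,
    3. a shadow color warms the penumbra instead of going straight to black."""
    lit = max(n_dot_l, 0.0) ** diffuse_exponent            # hack 1
    falloff = 1.0 - lit                                    # strongest near the terminator
    tinted = [lit * ((1.0 - falloff) + falloff * c)        # hack 2
              for c in shading_color]
    return tuple(t * ((1.0 - in_shadow) + in_shadow * s)   # hack 3
                 for t, s in zip(tinted, shadow_color))

# Grazing light (n_dot_l = 0.2), half in shadow:
print(hacked_diffuse(0.2, 0.5))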
Having worked through these issues to a point where we had a fairly believable fleshy soft-skinned alien, we
now turned to the specular nature of skin. Our reference actor was extremely greasy at the photo shoot, and
fortunately for us, this provided a thorough and excellent test for our skin system from which to learn.
Certainly this aspect would be one we would not have tackled on our SHREK heroine.
First, skin generally has a broad sheen that occurs in grazing lighting conditions. We observed that as we
held an arm out on a bright sunny day, a hard broad bright area would rim the arm. At first we suspected that
hair and follicles were responsible, as in the case with peach fuzzed cheeks. Soon, though, we realized this
specularity was a result of a dry dead skin layer that occupies the top layer of the epidermis. We believe this
accounts for the broad specular nature of the skin, not including the oils and moisture that is found. So we
allowed the specular term that existed in the shader to speak for these conditions and be represented by a
large dull sheen on the skin’s surface.
Second, we determined that the other forms of specularity generally associated with skin are that of the
moisture and oils that reside on the surface and in the pores. These are micro-scale events, occurring in
surfaces that actually ride on top of the skin and are apart from the skin itself. The topology of these globules
of moisture is much more varied and non-uniform than that of the larger skin surface they ride, and as a
result, over a patch of moisture, there will be several incidences of the surface being in the specular direction.
This along with the smooth hard surface of the moisture accounts for the many sharp specular highlights that
are kicked back. To compensate for our inability to reproduce this effect with our large approximations, we
gave the moisture its own bump term that would be applied on top of the skin’s term. In order to be able to
push this bump to the severity needed to catch highlights at times when the skin surface was not in the
specular direction, we allowed the underlying shading of the skin to be unaffected. In short, this was only used
to calculate the specular contribution of the moisture. No other aspects were considered or needed. This
component took the role of our moisture specular term.
On the alien in particular though, we ran into a problem and an unexpected solution. The large folding
eyelids of the alien screamed for having some moisture accumulation, as most eyelids do. As the skin folded
into the contracted position, we wanted small glints of specularity to be reflected back indicating the fleshiness
of the skin. This looked great once we got it in place, but we saw that as the lid relaxed back down during
blinks, the highlights would be strewn across eyelids like zebra stripes. The problem was that our illusion of
moisture was falling apart. In reality, as the skin bunches up and folds, the moisture should be collecting in the
pores and kicking back in lines across the folds. As the eyelid then unfolds back over the eye, the skin is pulled
tight and the moisture is no longer concentrated in the specular direction. This is what would happen in
reality, but there is no such complexity to our cheat. Fortunately, a technology that was providing for the
wrinkles in our alien would come to the droplet’s rescue. We had been using a wrinkle shader to handle both
bump and displacement wrinkles in the eyelids as they contracted. This was done by applying a function based on the compression of U and V. As long as the eyelid remained relaxed, so did the UVs. As the eyelid
folded, the compression would occur, and the desired effect would be multiplied in. We then took this same
evaluation, and applied it to our moisture. As the eyelid folded up, the particular highlights would take their
marks. As the UVs decompressed, the highlights would disappear. It worked extremely well.
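A minimal sketch of gating the moisture term by UV compression, assuming the shader can compare the current parametric edge length against its rest-pose length (the names and thresholds are hypothetical):

def moisture_gate(rest_du_length, current_du_length, start=0.95, full=0.75):
    """Return 0..1: how strongly to apply the moisture bump/specular term.
    Compression is the current parametric edge length relative to its rest
    length: the highlights fade in as the eyelid folds and vanish again as
    the skin pulls tight."""
    ratio = current_du_length / max(rest_du_length, 1e-6)
    if ratio >= start:        # relaxed or stretched: no concentrated moisture
        return 0.0
    if ratio <= full:         # fully folded: moisture term fully on
        return 1.0
    return (start - ratio) / (start - full)

# A partially folded eyelid (surface compressed to 80% of its rest length):
print(moisture_gate(1.0, 0.8))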
Having added all of these features, we found that our specular model was still wanting. While our highlights
were of the right size and intensity, we found that they lacked visual interest. As we went back to our human
drawing board and looked at more images, we found that our highlights were lacking the interesting shape and
contour found in real skin. As light moves across a face, it bends and contorts to fit the skin. We had known
that skin pulled tightly over bone shows a tight, directed highlight, while thicker muscular or fatty skin hanging loosely tends to have a more uniform, diffused specularity. After looking into it further, we
determined that the skin has an anisotropic quality due to the stretching of pores in a particular direction.
Depending on the flex and bow of the skin, the highlight will tighten and loosen with the stretching of the pore structure. The best way to describe this is with the classic example of brushed aluminum. Minute directional grooves or events, when bunched together, affect the broader reflection. As a result
we added an additional anisotropic direction for the specular and moisture specular to compensate. This
perhaps made the greatest difference in regards to specularity. It made everything look that much better and
helped fight the uniform specular treatment found in most computer-generated surfaces.
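The notes do not give the exact formulation, but a standard anisotropic specular model such as Ward's captures the idea: roughness differs along the pore-stretch direction and across it. In this sketch the tangent frame and roughness values are assumptions:

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ward_anisotropic(N, T, B, L, V, alpha_x=0.35, alpha_y=0.1, rho_s=0.3):
    """Ward anisotropic specular term times N.L; alpha_x and alpha_y are the
    roughness along and across the pore-stretch (tangent) direction."""
    n_l, n_v = dot(N, L), dot(N, V)
    if n_l <= 0.0 or n_v <= 0.0:
        return 0.0
    H = normalize(tuple(l + v for l, v in zip(L, V)))
    h_n = dot(H, N)
    expo = -((dot(H, T) / alpha_x) ** 2 + (dot(H, B) / alpha_y) ** 2) / (h_n * h_n)
    brdf = rho_s * math.exp(expo) / (4.0 * math.pi * alpha_x * alpha_y
                                     * math.sqrt(n_l * n_v))
    return brdf * n_l

# Tangent T lies along the pore stretch; the highlight is tighter across it.
N, T, B = (0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
L = normalize((0.3, 0.2, 1.0))
V = normalize((-0.3, 0.1, 1.0))
print(ward_anisotropic(N, T, B, L, V))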
At this point, our alien was complete, and our skin system had undergone a large transformation, both in
terms of user knowledge and actual features. As the alien test shipped, and Shrek’s skin continued to struggle,
the decision was made to take the newfound knowledge and apply it to SHREK’s main human character, Fiona.
Back to the Drawing Board
At that point, Fiona’s model had undergone a number of changes. Her appearance was much more human
and much less like one of SHREK's fantasy freaks. While this was a great improvement, the skin still took on a deadened, flat appearance. As the good and the bad were sifted from the previously separated skin systems, work
began on updating Fiona with the new knowledge and features. An additional directional diffuse term was later
added to the specular components that allowed for a ramp from the diffuse region into the specular region.
This came out of observations where it seemed that areas of the skin approaching the specular direction
warmed up due to the light penetrating through more dermis and less epidermis. After we added this feature,
her ramp into specular regions appeared much more natural, and her skin had a bit more added depth and
volume to it.
As Fiona grew closer to completion, a make-up artist was brought in to do a one-day session with a certain production coordinator who had a fair complexion and reddish auburn hair. Many photos were taken, both indoors and outdoors, for skin and hair reference. These became invaluable. Not only were they used
for makeup and hair reference, but we used them to study how much loss of detail occurred at certain ranges,
and tried to match this effect through the use of different setups. While this was rarely utilized in the end,
some very thorough observations helped us fine-tune what was to be our final setup.
Soon, Fiona’s skin had undergone the royal treatment and giant leaps had been made over previous
incarnations. One day, as she came up during dailies, she looked vibrant and alive. Fiona took on some
additional tweaks here and there, but generally her new self was found. Other SHREK characters in
development, such as Farquaad, were retrofitted using Fiona as a template.
BIBLIOGRAPHY
1. Pat Hanrahan and Wolfgang Krueger. Reflection from Layered Surfaces due to Subsurface Scattering.
Proceedings of SIGGRAPH 93, Computer Graphics Proceedings, Annual Conference Series, pages 165-
174.
2. Julie Dorsey and Pat Hanrahan. Modeling and Rendering of Metallic Patinas. Proceedings of SIGGRAPH 96,
Computer Graphics Proceedings, Annual Conference Series, pages 387-396.
3. Cassidy J. Curtis, Sean E. Anderson, Joshua E. Seims, Kurt W. Fleischer, and David H. Salesin. Computer-Generated Watercolor. Proceedings of SIGGRAPH 97, Computer Graphics Proceedings, Annual Conference Series, pages 421-430.
4. Chet S. Haase and Gary W. Meyer. Modeling Pigmented Materials for Realistic Image Synthesis. ACM Transactions on Graphics, 11(4), pages 305-335 (October 1992).
5. R. R. Anderson and J. A. Parrish. Optical Properties of Human Skin. The Science of Photomedicine, 1982,
pages 147-194.
From Research and Development to Production:
A Studio’s Perspective
Robert Cook
Digital Domain
At Digital Domain research and development is a production driven activity.The research may be in
response to immediate production needs, or in anticipation of what might be desired in the future.We organ-
ize these two types of activities — immediate and anticipated needs — into a continuum of projects distin-
guished by their respective development cycle timeframes: immediate production needs tend toward short-
term development, while more speculative needs demand mid to long-term development.Thus we have
short-term, mid-term, and long-term projects, each requiring specific research and development methods par-
ticular to their development cycle.
The research and development cycle in long, mid, and short-term projects differs slightly — yet those
slight differences dictate where the work gets done organizationally. Digital Domain divides the projects
between two departments: Software and Technical Directors. Generally, the Software Department handles the
long-term projects, while the Technical Directors handle the short-term projects. Mid-term projects are gen-
erally split between the two, based on the specific demands of the project.The primary distinguishing feature
between the two departments is that technical directors are attached to specific productions, while software
developers are show independent.This division of labor creates a more efficient and cost-effective research
and development pipeline for each of the three different types of projects.
Long-term Development Cycle
This development cycle tends to be much more recursive than the other two in that by the time the
project is complete, it is time to reconsider its features and functionality. It begins with a recognized need.
This may be in the form of a production or infrastructural demand, or a suggestion of what “might be possi-
ble” that would give Digital Domain a competitive edge in future production seasons. Extensive research is
completed by reading and considering academic research, journal publications, books, trade shows, SIG-
GRAPH publications and presentations, and one-on-one conversations with people in the field. A project pro-
posal is created, with an assessment of the current issues, and the proposed ideal solution. Of course, we all
know that in this business ideal is a luxury we rarely get to realize, so a scaled-back, real-world implementa-
tion plan is established.The proposal is considered in light of other projects, budget constraints, and viability.
We then establish a development timeline and budget, then pass that forward to management for approval.
Now don’t get me wrong here: I’m not saying our long-term project programmers spend a lot of time putting
together slick, well-written documents! Sometimes they do, it’s true, but sometimes these proposals are sim-
ply discussed conversationally, then Doug (Software Creative Director) and myself (Software Production
Manager) put together the timeline and budget, and we go from there. I don’t expect thesis proposals, just
great ideas!
Next we go into development which consists of a loop of coding, implementation, user feedback, more
coding, further implementation, etc . . . Eventually, with many of these projects, such as Nuke, our in-house
compositing software, by the time we have reached our goals for the proposed project, technology and mar-
ket forces cause us to re-examine the project, and begin the process all over again from the top.
Mid-term Development Cycle
This cycle generally begins by someone hearing a talk at SIGGRAPH, or reading a journal article or book
and seeing how that research might lead to something useful within our production pipeline.These projects
generally become part of our internal software tools that augment either third party packages, or our own
proprietary packages. Such projects are forwarded, again, in either written or verbal form.We establish a
timeline and budget then discuss the merits of such a project with artists and CG supervisors. If the idea
seems to have relevance and merit (within the context of who we are and what we do), a test, proof-of-con-
cept is developed.That proof-of-concept is implemented sparingly, say as a small effect in a feature or com-
mercial. If the outcome is positive, further time is spent developing it into a full-blown application. If it does
not work, it goes back into the proof-of-concept stage, then is released again and tested. At some point we
either move forward with the idea, or the idea is shelved or scrapped altogether.
This cycle is the most difficult to manage and budget for because these projects often become victims of “feature creep,” or because some other product is released which does the same thing. It is extremely important to define the
feature set before the project commences so that expectations can be managed and the project actually
ends! It is not often that we go down the development path on these sorts of projects and abandon the proj-
ect because it just didn’t work. Most of the time we stop such projects because a commercial product
becomes available which is better or more cost-effective than the one we created. There is no discredit to the
developer here. For example, we might be creating a new plugin for Maya.We have good relationships with
Alias so we often know that they are working on a similar addition to Maya. However, we may be able to cre-
ate something more specific to our particular needs in a more timely manner, but without the full functionali-
ty that the Maya developers are creating. If the Maya release actually does what we need it to, then we gener-
ally quit development and/or support for our own internal plugin and use the Maya product. Other times the
release does not have a good match for our production pipeline and artists end up using both.
Short-term Development Cycle
This cycle almost always begins with a specific demand from a show in production. Generally it is the artists who create the initial specification. This ranges from, “It would be really cool if we could . . .” to very
detailed parameters outlined by the artists and visual effects team.The problem is assigned to a Technical
Director, or sometimes Software production support person, and some preliminary research is completed.
There is a short development cycle here largely due to the production time constraints.The developer cre-
ates a quick, short-term implementation, usually testing it themselves. The project will cycle through this phase (design, implementation, and testing, all by a single person) for as long as possible, each iteration improving the desired effect. Once the developer feels they have something interesting to show, it will be shown to the vfx
team for discussion and input. At that point a determination is made as to whether more work should be
done, or if the problem is solved for the moment.
Many times these projects are then revisited when the developer is between productions. Full documen-
tation of the problem presented and the solution created is completed. Since these projects tend to be very
narrow in focus, solving a very specific problem within a particular production pipeline, many times the proj-
ect is then considered for continued development in order to broaden and generalize the scope of the proj-
ect.The project development might then continue with that one person, or be handed to the Software
department for further research, development and implementation.
Managing Research and Development in a Production Environment
The production environment, as I’m sure you all know, is quite different than most other working envi-
ronments, especially when it comes to research and development of software. Managing projects in this envi-
ronment demands a great deal of flexibility and agility on the part of the developer.There is usually not the
luxury of project managers, design architects, coders, and testers. Most of the time the programmer is all
those things at one time.They must be able to identify projects, create proposals, make scheduling estimates,
design the feature sets, do the coding, complete testing, compile, release, and bug fix.This puts an enormous
amount of responsibility on the programmer.
All of this responsibility is the result of a small staff with a big list.The development times are as short as
possible, even for the long-term projects. Being late on a delivery may mean not just pushing a release date,
but holding up a production involving millions of dollars. Research must be completed as quickly as possible. If
a problem comes up in production, then part of the budget for that project will be earmarked for research.
However, we also wanted a way to track how much time we spent developing new ideas.To that end, we cre-
ated a line item specifically in the Software annual budget for general research. Developers bill to this item
when researching new ideas that are not yet attached to a production. Since part of what is expected of the
developers is the infusion of new ideas and methods, we needed a way to account for that time, and establish
its importance.
Another difficulty for the developers is the quick implementation times, which usually means a very tight
development, testing and debugging cycle. On all projects, short, mid and long-term, the software is released
as soon as it is compiled. For very large projects we might have a small group of artists who do the initial
testing, but most of the time the software is released to the entire company. Now if it works great – no
problem! However, if it doesn’t work perfectly, or perhaps breaks something else people are using, the response will be immediate and usually made directly to the developer. No tech support or marketing buffers
here! So not only do developers have to be great coders, they have to be resilient to direct criticism from
the users.We have established a release policy in order to minimize these problems.The policy includes rules
for not releasing on Fridays or before holidays, and insists on the creation of user documentation.
Furthermore, all the developers are expected to multi-task projects. Some may be working on as many as
four major projects, as well as dealing with day-to-day bug fixing and routine updating — not to mention
maintaining code written by people who don’t even work there anymore. Thus they must be specialists and
generalists. All the developers we hire have an area of expertise, and we expect them to move us forward in
those areas. But we also throw a lot of other issues at them, with varying degrees of relevance to their area
of expertise.
For myself, as Production Manager, identifying the staff’s strengths is crucial to completing the projects
slated. Some are better at managing projects, others at coding; others are more interested in research. No
one has the luxury of doing the thing they do best, or enjoy the most, to the exclusion of everything else —
but finding those points and organizing projects around them creates a much more positive working environ-
ment. Some of the long-term projects may have several people on them, and assigning tasks based on known strengths and interests improves the success rate of the projects. Sometimes developers express an
interest in some area they had not been involved with previously.We work to try and give that person an
opportunity to pursue those interests. In asking the staff to be as flexible as possible, the management must be
flexible as well, and open to new ideas and interests.
In Conclusion
As a production company, we unfortunately do not have the luxury of having our own research group in
software.We must rely on the work of others within academia and elsewhere. Communication is the key.
Everything from formal publications to informal phone calls and e-mail keeps those lines open. Once in devel-
opment, that communication must continue, in order to push the limits of the ideas — without it we run the
risk of stagnating the imagination.
Software Development and
Research at Digital Domain
Doug Roble
Digital Domain
Introduction
After I received my Ph.D. and had accepted a job at Digital Domain, I asked Wayne Carlson (faculty at
Ohio State and an executive at Cranston-Csuri, a Visual Effects company that was big in the 1980s) what I
could expect. He said that I was going to spend most of my time programming and I’d get much faster at it.
He was completely right and he had a very good grasp of the type of work I’d be doing. In this talk I will
examine what it is like to work at a visual effects production company as a software engineer/researcher. It is
very different from grad school!
The Difference between a Visual Effects Production Company and Grad School
By and large, production companies don’t have the time or money to fund pure research in computer
graphics. In general, the software developers are working on problems that have a known (or at least fairly well understood) solution. We look at papers in journals and try to figure out a way to approach a problem that will, in all likelihood, work.
Then, we start designing a tool that is solid - it’s going to be released into our artist community and it
just doesn’t do to have the software crashing all the time.
We are lucky in that our users are in the same company (we rarely sell the code we write).This user
base knows the developer’s phone extension and will not hesitate to call if something goes wrong. It’s also,
for the most part, a very understanding user base. If something doesn’t quite work, yet there is a slightly
inconvenient work-around, the users will accept it to get the current job out the door.
More Differences
The software we develop has to work on all kinds of data, not just idealized data. It never fails that an
artist, after being given a new tool that the software engineer is particularly proud of, will call the engineer
up, complaining about how it doesn’t work on a particular instance of data.
This data is usually something the engineer didn’t even consider, and it’s the first thing that the artist
tried. If you develop a tool, it really has to be able to handle anything.
Research projects from academic institutes normally don’t require this. Most projects are proof of con-
cept and leave the details up to the software engineers who make the concept into a real tool.This is how it
should be, but it is something that we have to deal with.
Industry Secrets
Companies keep things secret so that they have an advantage over other companies.The bidding process
for films is very competitive and if one company has the ability to do an effect that another company can’t
(or can do an effect cheaper), then that company has a leg up on getting a film.
This flies in the face of traditional research behavior.We understand this, but really have to shrug our
shoulders and say, “Hey, that’s the way it is.”
As mentioned above, Digital Domain tries to help by collaborating with researchers in ways that it can.
Transfer of Knowledge
SIGGRAPH and other conferences and journals are very important to visual effects companies. It is
through these things that we can see what other parts of the industry are doing and get ideas for new proj-
ects.
As an example of this, we will discuss a project that has been developed in the long term group of the
software department at Digital Domain.
3D Tracking
When I got to Digital Domain, one of the first things I did was take my Ph.D. research and make a robust,
working tool out of it. It analyzed images and figured out where the camera was when the image was pho-
tographed.This, in itself, was a project that had a long history in computer vision.
My tool added sub-pixel precision techniques, and a lot of control to help the tracking artist figure out
the information for the hardest of shots. In computer vision, the test images that are used are usually very
controlled.The lighting doesn’t change and the thing that is photographed certainly doesn’t catch fire!
Unfortunately, in filmmaking, that kind of thing happens all the time. Modifications to typical computer vision
techniques needed to be made.
The tool has been one of my successes and I have worked on it over the years, adding new features,
improving the work-flow, removing some really bone-headed design decisions...
Optical Flow
Track has a very nice pattern tracker built into the program. If you indicate a section of an image, it will
follow that section as it moves through the sequence.This makes a lot of the tracking fairly automatic.
Optical flow is a computer vision technique that’s been around for a while. Basically, in optical flow, one
tries to figure out where every pixel in one image moves to in the next image. If it works, it’s awesome, you
can figure out all sorts of information about the contents of the image.
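The notes do not name a specific algorithm; as a flavor of the idea, here is a minimal single-window Lucas-Kanade sketch (the classic local least-squares formulation, not Digital Domain's tool):

import numpy as np

def lucas_kanade_window(img0, img1):
    """Estimate a single (dx, dy) displacement for a small image window by
    solving the optical-flow constraint Ix*dx + Iy*dy + It = 0 in the
    least-squares sense over all pixels of the window."""
    img0 = img0.astype(float)
    img1 = img1.astype(float)
    # Spatial gradients (central differences) and temporal difference.
    Iy, Ix = np.gradient(img0)
    It = img1 - img0
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow   # (dx, dy)

# Toy example: a bright blob shifted one pixel to the right.
x = np.arange(32)
X, Y = np.meshgrid(x, x)
blob = lambda cx: np.exp(-((X - cx) ** 2 + (Y - 16) ** 2) / 20.0)
print(lucas_kanade_window(blob(15), blob(16)))   # roughly (1, 0)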
I had been aware of this technique in computer vision and noodled around with it, but had little success.
Optical flow is not really suited to real-world images; most of the papers on it assume some constraint on the camera to make the problem more tractable.
Within the last 4 or 5 years, these limitations have started to be addressed by the computer vision
research community. In fact, Richard Szeliski and James Coughlan have written some lovely papers on a tech-
nique that increased the robustness of optical flow and sped it up as well!
In fact, I had one after-a-SIGGRAPH-panel conversation with Richard Szeliski (that he probably doesn’t
remember) where we discussed some of his techniques.This convinced me that further effort would be
worth it. I already knew how I could use the technique in my tracking program and now that it was better
suited for real world images, I pushed for the project at Digital Domain.
Another point in my favor was that an external company was already working on an Optical Flow prod-
uct.This company was already showing off their version 2.0 of their product and it proved that really interest-
ing things could be done with the concepts.
We decided to develop the solution in-house: it wouldn’t be too big a project, having a site license was a real plus, there were limitations with the external solution (one couldn’t access some of the results of the program for other uses, and it was expensive), and finally, I had some ideas to make it even better.
In the development of the project, I collected many papers on the topic and talked with some of my
friends at Universities. After developing a framework for experimentation, I implemented a couple of different
optical flow techniques and finally merged a couple for my final solution. I will present some results at the
course presentation.
Conclusion
Working at a Visual Effects Production Facility is quite different than working at a University.The financial
pressures are different and the type of work that is expected is different.
We appreciate the work that “The Ivory Tower” does to no end and use the concepts developed there all
the time. Our main challenge is modifying the algorithms so that they perform robustly in a real-world envi-
ronment.
Photomodeling and Hair Combing for AI
Steve Sullivan and Tony Hudson
Industrial Light and Magic
ILM R&D worked closely with several production groups to develop photomodeling and hair combing
tools for work on AI (to be released Summer 2001).
The first part of the presentation describes the use of digital reference and plate photography to reconstruct geo-
metrically accurate models of set pieces, which provide the basis for 3d camera and object matchmoves.
R&D at ILM
-28 people, 7 teams
-focused on applications
-long-term solutions
-also: Systems R&D and R&D TDs
AI Matchmove/Photomodeling
2d tracking
Follow a 2d pattern
--relatively mature
--many commercial solutions
--still plenty of room for improvement
Tracking Reference:
Three-Dimensional Computer Vision, O. Faugeras, MIT Press, 1993
Matchmove/Photomodel
compute camera/object position
--Used to be manual (restricted camera moves)
--Anything goes now - required for most shots
--Relatively mature (commercial solutions)
--Still time-consuming, still an art
--Structure solvers very handy
Nonlinear optimization
--Fundamentally hard
--Generally useful
--Initial conditions important (user input); see the sketch below
Optimization reference:
Linear and Nonlinear Programming, D. Luenberger, Addison Wesley, 1984
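As a toy illustration of that optimization step (a generic pinhole-camera pose solve by nonlinear least squares on reprojection error; this is not ILM's code, and it assumes numpy and scipy are available), a user-supplied rough initial guess is refined against tracked 2D points:

import numpy as np
from scipy.optimize import least_squares

def rotation(rx, ry, rz):
    """Euler-angle rotation matrix (XYZ order)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(params, points3d):
    """Pinhole projection: params = (rx, ry, rz, tx, ty, tz, focal)."""
    cam = points3d @ rotation(*params[:3]).T + params[3:6]
    return params[6] * cam[:, :2] / cam[:, 2:3]

def residuals(params, points3d, observed2d):
    return (project(params, points3d) - observed2d).ravel()

# Synthetic "set survey" points and their tracked 2D positions from a true camera.
rng = np.random.default_rng(1)
points3d = rng.uniform(-1, 1, size=(20, 3)) + [0, 0, 6]
true_params = np.array([0.1, -0.2, 0.05, 0.3, -0.1, 0.5, 800.0])
observed2d = project(true_params, points3d)

# User-supplied rough initial guess, then nonlinear least-squares refinement.
guess = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 700.0])
solution = least_squares(residuals, guess, args=(points3d, observed2d))
print(solution.x.round(3))   # recovers something close to true_params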
AI FC37 - conclusion
-Combination of techniques
-Mix 'n match manual with automatic
-Always driven by a user
Developing tools to model and animate hair
Tony Hudson
Industrial Light and Magic
The second part of this talk discusses a suite of tools developed for hair modeling, animation, and editing.
They permit accurate, interactive placement of guide hairs, with visualization tools for previewing the behav-
ior of interpolated hairs.
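As a rough sketch of what interpolated hairs mean (a generic inverse-distance blend of nearby guide hairs, not ILM's implementation), each rendered hair can be blended from the guide hairs nearest its root:

import numpy as np

def interpolate_hair(root, guide_roots, guide_curves, k=3):
    """Blend a dense hair from its k nearest guide hairs.
    guide_roots:  (g, 3) root positions of the guide hairs
    guide_curves: (g, p, 3) control points of each guide hair (p points per curve)
    Returns a (p, 3) curve attached at `root`."""
    d = np.linalg.norm(guide_roots - root, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)
    w /= w.sum()
    # Blend guide shapes relative to their own roots, then attach at this root.
    local = guide_curves[nearest] - guide_roots[nearest][:, None, :]
    return root + np.tensordot(w, local, axes=1)

# Two straight guide hairs leaning in opposite directions; a root in between
# produces an intermediate lean.
guide_roots = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
t = np.linspace(0, 1, 4)[:, None]
guide_curves = np.stack([guide_roots[0] + t * [0.3, 0.0, 1.0],
                         guide_roots[1] + t * [-0.3, 0.0, 1.0]])
print(interpolate_hair(np.array([0.5, 0.0, 0.0]), guide_roots, guide_curves, k=2))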
Even though digital hair has been around for years, the ways we use the technology have evolved, and as a result tools are now being developed to the point where “styling” digital hair has become possible, and just in the nick of time.
Past uses of hair have mostly been limited to overall "furry" characters, where styling was accomplished
by a limited set of paint tools for controlling length and distribution over the topology. However, on Artificial
Intelligence or "A.I.", it quickly became obvious that a technique for "styling" digital hair would be necessary
to achieve the look for characters, which have specific haircuts, and that these hairstyles would need to be
able to receive dynamic forces. This eventually led to the development of hair creation and styling tools. These give the modelers a very flexible interface in which hair can be modeled with low-resolution splines, plus a quick "preview" function that allows the artist to interact in a meaningful way with the hair and see exactly what he’s doing.
ILM's software R & D department ultimately provided us with tools to place and plant hairs along a sur-
face, to "style" the hair while keeping the root planted, and to clone sculpted hairs to other locations along
the topology.We can also import hair distribution maps, so that we can see how the hair will ultimately be
rendered.
In addition to styling, the R&D group has also provided us with new tools that allow the animation of hair.
Our "Caricature" animation tool now gives us the ability to add dynamically animated hair and to tweak the
parameters for specific uses. Also possible is a form of "digital hairspray", to keep those nasty locks of
hair in place.