Animation Techniques

Unit III

3D Animation & Its Concepts


Introduction
In 3D computer graphics, 3D modeling is the process of developing a mathematical,
wireframe representation of any three-dimensional object (either inanimate or living) via
specialized software. The product is called a 3D model. It can be displayed as a two-
dimensional image through a process called 3D rendering or used in a computer simulation of
physical phenomena. The model can also be physically created using 3D Printing devices.

Models may be created automatically or manually. The manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting.

Models
3D models represent a 3D object using a collection of points in 3D space, connected
by various geometric entities such as triangles, lines, curved surfaces, etc. Being a collection
of data (points and other information), 3D models can be created by hand, algorithmically
(procedural modeling), or scanned.

Representation

Almost all 3D models can be divided into two categories.

Solid - These models define the volume of the object they represent (like a rock).
They are more realistic but more difficult to build. Solid models are mostly used for non-visual
simulations such as medical and engineering simulations, for CAD, and for specialized
visual applications such as ray tracing and constructive solid geometry.

Shell/boundary - These models represent the surface, i.e. the boundary of the object,
not its volume (like an infinitesimally thin eggshell). They are easier to work with than solid
models. Almost all visual models used in games and film are shell models.

Because the appearance of an object depends largely on the exterior of the object,
boundary representations are common in computer graphics. Two dimensional surfaces are a
good analogy for the objects used in graphics, though quite often these objects are non-
manifold. Since surfaces are not finite, a discrete digital approximation is required: polygonal
meshes (and to a lesser extent subdivision surfaces) are by far the most common
representation, although point-based representations have been gaining some popularity in
recent years. Level sets are a useful representation for deforming surfaces which undergo
many topological changes such as fluids.

The process of transforming representations of objects, such as the center coordinate
of a sphere and a point on its circumference, into a polygon representation of a
sphere is called tessellation. This step is used in polygon-based rendering, where objects are
broken down from abstract representations ("primitives") such as spheres, cones, etc., to so-
called meshes, which are nets of interconnected triangles. Meshes of triangles (instead of, e.g.,
squares) are popular because they have proven to be easy to render using scanline rendering.
Polygon representations are not used in all rendering techniques, and in these cases the
tessellation step is not included in the transition from abstract representation to rendered
scene.
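
As an illustration, the following short Python sketch (the language and the function name are assumptions, since the text names no particular tool) tessellates a sphere primitive, described only by a center and radius, into a mesh of interconnected triangles:

import math

def tessellate_sphere(center, radius, n_lat=8, n_lon=16):
    # Approximate a sphere primitive with a triangle mesh (tessellation).
    cx, cy, cz = center
    vertices = []
    for i in range(n_lat + 1):
        theta = math.pi * i / n_lat            # latitude angle, pole to pole
        for j in range(n_lon):
            phi = 2.0 * math.pi * j / n_lon    # longitude angle around the axis
            vertices.append((cx + radius * math.sin(theta) * math.cos(phi),
                             cy + radius * math.sin(theta) * math.sin(phi),
                             cz + radius * math.cos(theta)))
    triangles = []
    for i in range(n_lat):
        for j in range(n_lon):
            a = i * n_lon + j                  # indices of one quad of the grid
            b = i * n_lon + (j + 1) % n_lon
            c = (i + 1) * n_lon + j
            d = (i + 1) * n_lon + (j + 1) % n_lon
            triangles.append((a, b, d))        # split the quad into two triangles
            triangles.append((a, d, c))
    return vertices, triangles

Increasing n_lat and n_lon produces a finer mesh that better approximates the curved surface, at the cost of more triangles.
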
3D models are widely used anywhere in 3D graphics. Actually, their use predates the
widespread use of 3D graphics on personal computers. Many computer games used pre-
rendered images of 3D models as sprites before computers could render them in real-time.

Today, 3D models are used in a wide variety of fields. The medical industry uses
detailed models of organs. The movie industry uses them as characters and objects for
animated and real-life motion pictures. The video game industry uses them as assets for
computer and video games. The science sector uses them as highly detailed models of
chemical compounds. The architecture industry uses them to demonstrate proposed buildings
and landscapes through Software Architectural Models. The engineering community uses
them as designs of new devices, vehicles and structures as well as a host of other uses. In
recent decades the earth science community has started to construct 3D geological models as
a standard practice.

Modeling processes

There are five popular ways to represent a model:

Polygonal modeling - Points in 3D space, called vertices, are connected by line segments to
form a polygonal mesh. Used for example by 3DS Max. The vast majority of 3D models
today are built as textured polygonal models, because they are flexible and because
computers can render them so quickly. However, polygons are planar and can only
approximate curved surfaces using many polygons.

NURBS modeling - NURBS surfaces are defined by spline curves, which are influenced by
weighted control points. The curve follows (but does not necessarily interpolate) the points.
Increasing the weight of a point will pull the curve closer to that point. NURBS are truly
smooth surfaces, not approximations made of small flat surfaces, and so are particularly suitable
for organic modeling. Maya and Rhino 3D are the best-known commercial packages that
use NURBS natively.
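
The effect of the weights can be sketched with a small rational curve evaluator. The Python code below is a simplified illustration (real NURBS implementations also handle arbitrary knot vectors, higher point counts and surfaces): it evaluates a cubic curve from four weighted control points, and raising a point's weight pulls the curve toward it.

def basis(i, k, t, knots):
    # Cox-de Boor recursion for B-spline basis functions.
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] > knots[i]:
        left = (t - knots[i]) / (knots[i + k] - knots[i]) * basis(i, k - 1, t, knots)
    if knots[i + k + 1] > knots[i + 1]:
        right = (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) * basis(i + 1, k - 1, t, knots)
    return left + right

def nurbs_point(t, controls, weights, degree=3):
    # Rational curve point: weighted basis blend of control points, then normalize.
    knots = [0.0] * (degree + 1) + [1.0] * (degree + 1)   # clamped knot vector for 4 control points
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for i, ((x, y, z), w) in enumerate(zip(controls, weights)):
        b = w * basis(i, degree, t, knots)
        num[0] += b * x; num[1] += b * y; num[2] += b * z
        den += b
    return tuple(c / den for c in num)

controls = [(0, 0, 0), (1, 2, 0), (3, 2, 0), (4, 0, 0)]
print(nurbs_point(0.5, controls, [1, 1, 1, 1]))   # neutral weights
print(nurbs_point(0.5, controls, [1, 5, 1, 1]))   # heavier second point pulls the curve toward (1, 2, 0)
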

Splines & Patches modeling - Like NURBS, Splines and Patches depend on curved lines to
define the visible surface. Patches fall somewhere between NURBS and polygons in terms of
flexibility and ease of use.

Primitives modeling - This procedure takes geometric primitives like balls, cylinders, cones
or cubes as building blocks for more complex models. The benefits are quick, easy
construction and forms that are mathematically defined and thus absolutely precise; the
definition language can also be much simpler. Primitives modeling is well suited to technical
applications and less so to organic shapes. Some 3D software can render directly from
primitives (e.g. POV-Ray); others use primitives only for modeling and convert them to
meshes for further operations and rendering.

Sculpt modeling - A still fairly new method, 3D sculpting has become very popular in the
few short years it has been around. There are currently two variants: displacement, which is
the most widely used among applications at the moment, and volumetric. Displacement uses
a dense model (often generated by subdivision surfaces of a polygon control mesh) and stores
new vertex positions in a 32-bit image map of adjusted locations. Volumetric sculpting, based
loosely on voxels, has similar capabilities to displacement but does not suffer from polygon
stretching when there are not enough polygons in a region to achieve a deformation. Both
methods allow for very artistic exploration, as a new topology is created over the model once
its form, and possibly its details, have been sculpted. The new mesh usually has the original
high-resolution mesh information transferred into displacement data or normal map data if it
is destined for a game engine.
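
A minimal sketch of the displacement idea in Python, assuming a caller-supplied sample_map function that reads a height value from the floating-point displacement image at a UV coordinate (both names are hypothetical):

def apply_displacement(vertices, normals, uvs, sample_map, strength=1.0):
    # Push each vertex of the dense mesh along its normal by the height stored
    # in the displacement map at that vertex's UV coordinate.
    displaced = []
    for (x, y, z), (nx, ny, nz), (u, v) in zip(vertices, normals, uvs):
        d = strength * sample_map(u, v)
        displaced.append((x + nx * d, y + ny * d, z + nz * d))
    return displaced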

The modeling stage consists of shaping individual objects that are later used in the
scene. There are a number of modeling techniques, including:
 constructive solid geometry
 implicit surfaces
 subdivision surfaces

Modeling can be performed by means of a dedicated program (e.g., form•Z, Maya,
3DS Max, Blender, Lightwave, Modo) or an application component (Shaper, Lofter in 3DS
Max) or some scene description language (as in POV-Ray). In some cases, there is no strict
distinction between these phases; in such cases modeling is just part of the scene creation
process (this is the case, for example, with Caligari trueSpace and Realsoft 3D).

Complex materials such as blowing sand, clouds, and liquid sprays are modeled with
particle systems: a mass of 3D coordinates to which points, polygons, texture splats, or
sprites are assigned.

Scene setup
Scene setup involves arranging virtual objects, lights, cameras and other entities on a
scene which will later be used to produce a still image or an animation.

Lighting is an important aspect of scene setup. As is the case in real-world scene
arrangement, lighting is a significant contributing factor to the resulting aesthetic and visual
quality of the finished work. As such, it can be a difficult art to master. Lighting effects can
contribute greatly to the mood and emotional response evoked by a scene, a fact which is
well known to photographers and theatrical lighting technicians.

It is usually desirable to add color to a model's surface in a user-controlled way prior
to rendering. Most 3D modeling software allows the user to color the model's vertices, and
that color is then interpolated across the model's surface during rendering. This is often how
models are colored by the modeling software while the model is being created. The most
common method of adding color information to a 3D model is by applying a 2D texture
image to the model's surface through a process called texture mapping. Texture images are no
different than any other digital image, but during the texture mapping process, special pieces
of information (called texture coordinates or UV coordinates) are added to the model that
indicate which parts of the texture image map to which parts of the 3D model's surface.
Textures allow 3D models to look significantly more detailed and realistic than they would
otherwise.
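
The sampling side of texture mapping can be sketched as follows (Python with NumPy, an assumption; real renderers add filtering modes, wrapping and mipmapping). Given a UV coordinate stored on the model, the renderer looks up a color in the 2D texture image:

import numpy as np

def sample_texture(image, u, v):
    # Bilinear lookup of an H x W x 3 texture at a UV coordinate in [0, 1].
    h, w = image.shape[:2]
    x = min(max(u, 0.0), 1.0) * (w - 1)
    y = min(max(v, 0.0), 1.0) * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = image[y0, x0] * (1 - fx) + image[y0, x1] * fx
    bottom = image[y1, x0] * (1 - fx) + image[y1, x1] * fx
    return top * (1 - fy) + bottom * fy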

Other effects, beyond texturing and lighting, can be applied to 3D models to increase their
realism. For example, the surface normals can be tweaked to affect how the surfaces are lit,
certain surfaces can have bump mapping applied, and any number of other 3D rendering
tricks can be used.
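
As a rough sketch of why tweaking normals changes the lighting (Python with NumPy; the bump_gradient input is a hypothetical stand-in for values read from a bump map), a simple Lambert term depends only on the normal and the light direction, so perturbing the normal changes the shading without changing the geometry:

import numpy as np

def lambert(normal, light_dir):
    # Diffuse term: cosine of the angle between surface normal and light direction.
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return max(float(np.dot(n, l)), 0.0)

def perturb_normal(normal, bump_gradient, strength=0.3):
    # Nudge the geometric normal by the bump map's local gradient, then renormalize.
    n = np.asarray(normal, float) + strength * np.asarray(bump_gradient, float)
    return n / np.linalg.norm(n)
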
3D models are often animated for some uses. They can sometimes be animated from
within the 3D modeler that created them, or else exported to another program. If used for
animation, this phase usually makes use of a technique called "keyframing", which facilitates
the creation of complicated movement in the scene. With the aid of keyframing, one needs only
to choose where an object stops or changes its direction of movement, rotation, or scale; the
states in the frames between these points are interpolated. These moments of change are known as
keyframes. Often extra data is added to the model to make it easier to animate. For example,
some 3D models of humans and animals have entire bone systems so they will look realistic
when they move and can be manipulated via joints and bones, in a process known as skeletal
animation.
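
A minimal sketch of the keyframe interpolation described above, in Python (linear interpolation only; real animation software also offers spline and stepped interpolation):

def interpolate_keyframes(keyframes, frame):
    # keyframes: sorted list of (frame_number, value) pairs chosen by the animator.
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    if frame >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)      # fraction of the way between the two keys
            return v0 + t * (v1 - v0)

position_keys = [(0, 0.0), (24, 10.0), (48, 10.0), (72, 0.0)]
print(interpolate_keyframes(position_keys, 12))   # 5.0 - halfway between the first two keys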

Types of 3D Animation
Animation is the process of rendering images in quick succession so as to give an
impression of continuous motion. In 3D animation, the images that are used are three
dimensional. They can be shown from all possible directions, are made to adhere to natural
physics laws, and therefore appear exceedingly real to the viewers.

There are many different types of three dimensional animation. Some of the main
ones are stop motion animation, claymation, computer generated imagery (CGI) and motion
capture animation. These animations are done using hand-crafted as well as computer-
generated puppets, objects, backgrounds and special effects.

In stop motion animation, the 3D characters and their backgrounds are designed by
hand. First the animator makes an armature or skeleton using metal and then fashions the
objects on this using clay, plastic or rubber. Different materials like wood, cloth, hair gel,
toothpaste and cotton wool may be used to create finer details.

The handmade objects are arranged in a scene and a photograph is taken. Then the
objects are moved slightly and photographed again. After photographing the objects every
time their position is altered, the animator uses the photographic frames to create the 3D
animation.

Claymation is a stop motion animation using clay models. The clay is worked on
over an aluminum wire frame and shaped with fingers and clay tools. To prevent the sculpted
figures from being misshapen in handling, quick hardening clay is used. Once it has dried, the
figures are painted as required.

The use of computers has, in many ways, revolutionized three dimensional
animation. Animators are no longer required to draw, sculpt or color everything by hand. By
switching to the digital medium, the entire 3D animation process can be carried out in a
faster, less cumbersome and more cost-effective manner.

So much so that CGI is also used in non-animated movies. The computer graphics
create scenes that might have otherwise been very expensive or difficult to produce. Some
scenes would be even impossible to create without computer generated imagery.

In computer generated 3D animation, the animator builds virtual skeletons and models
3D figures over them. Then the digital models are rigged and can be manipulated as required
for the animation. The animator must take care to get all the physical movements and details
right so that the 3D animation looks real.
Motion capture animation goes a step further and makes use of live action. Here, an
actor is outfitted with sensory devices that plot the movement and positioning of the actor's
joints. The actor then goes through the motions in place of an already designed character that
is to be animated.

The sensory devices record the resulting motions and transmit the recording to a
computer, which uses these inputs to give movement to the character. This makes motion
capture animation very close to movements seen in real life.

Some well-known animated films are Shrek, Finding Nemo, and Wall-E. 3D
animation films are becoming more and more inclined towards realism.

Skeleton & Kinetic 3D Animation


Overview
K-3D introduced a procedural Visualization Pipeline workflow. The skeletal animation
project has been split into three parts: deformation bones, kinematics, and a dynamics engine.

Skeletal / Deformation Bones


This is what most people think of when they hear "skeletal" animation. Simply put,
it is a way to deform a mesh: you associate one group of points in a mesh with one bone and
another group with a second bone, and when a bone is reoriented the associated points move
accordingly.

However there are several takes on how to deform the mesh because the deformation
at the joints is always a problem. A bone may be associated with an entire mesh, e.g. when
dealing with mechanical assemblies (think of animating a solid robotic arm).

A bone object in K-3D would have the following properties:


 input mesh
 output mesh
 input matrix
 output matrix

The main processing for the bone would be to deform the input mesh, based on the
difference between the input matrix and some "resting" state.
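
A minimal sketch of such a bone deformer (linear blend skinning, written in Python with NumPy; this is not K-3D's actual implementation, and the function and argument names are hypothetical). Each point is moved by the bones it is associated with, according to the difference between each bone's current matrix and its resting matrix:

import numpy as np

def deform_mesh(points, influences, rest_matrices, pose_matrices):
    # influences[i] is a list of (bone_index, weight) pairs for point i, weights summing to 1.
    deformed = []
    for p, bones in zip(points, influences):
        p_h = np.append(np.asarray(p, float), 1.0)          # homogeneous point
        out = np.zeros(4)
        for bone, w in bones:
            # The bone's contribution: its current pose relative to its resting state.
            delta = pose_matrices[bone] @ np.linalg.inv(rest_matrices[bone])
            out += w * (delta @ p_h)
        deformed.append(tuple(out[:3]))
    return deformed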

Kinematics
Bones have to be linked together using joints to make a skeleton. In ODE's
documentation, bodies connected to each other with joints are called islands.

Kinematics is the process of calculating the position in space of the end of a linked
structure given the angles of all the bones at the joints. It is easy, and there is only one
solution. Inverse Kinematics does the reverse. Given the end point of the structure, what
angles do the joints need to be in to reach that end point? It can be difficult, and there are
usually many or infinitely many solutions.
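
The two directions can be sketched for a simple planar chain in Python (a toy example; real skeletons are three-dimensional and usually solved numerically). Forward kinematics walks down the chain accumulating joint angles; the two-bone inverse kinematics below returns one of the two possible "elbow" solutions:

import math

def forward_kinematics(lengths, angles):
    # End position of a planar chain given each joint's angle (radians).
    x = y = total = 0.0
    for length, angle in zip(lengths, angles):
        total += angle
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y

def two_bone_ik(l1, l2, x, y):
    # Analytic inverse kinematics for a two-bone planar chain reaching (x, y).
    cos_a2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    cos_a2 = max(-1.0, min(1.0, cos_a2))     # clamp when the target is out of reach
    a2 = math.acos(cos_a2)
    a1 = math.atan2(y, x) - math.atan2(l2 * math.sin(a2), l1 + l2 * math.cos(a2))
    return a1, a2

a1, a2 = two_bone_ik(1.0, 1.0, 1.2, 0.8)
print(forward_kinematics([1.0, 1.0], [a1, a2]))   # approximately (1.2, 0.8)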

Dynamics engine
After creating a skeletal object (deformation bones + joints), one really wants to animate it
using kinematics, inverse kinematics, and even physics.
ODE is a very nice and competent library for simulating rigid-structure physics. It has
advanced joint types and integrated collision detection with friction.

There are issues when it comes to ODE, such as K-3D's nonlinear approach to time, but
time is going to be an issue with any dynamics library.

Texturing & Lighting of 3D Animation


Understanding the Art of Lighting
Like every aspect of 3D, lighting must be created from scratch. Unfortunately, the
techniques for emulating the real world are not always obvious or intuitive. Luckily, a wealth
of lighting theory exists in the form of historical artwork, photography, and motion pictures.
There are five different categories of lighting schemes. Before moving into the various lighting
methods, there are a few common lighting terms to be defined.

Key - the most intense light in a scene. The key light's source is generally identifiable (the sun,
a lamp, and so on). The key light usually produces the strongest shadow in the scene.

Fill - a secondary light that is less intense than the key. This light "fills" in the dark areas of a
subject and the shadows produced by the key. Fill lights often represent light from a key that
has bounced off a surface, such as a wall.

Rim - an intense light source placed behind a subject that strikes the subject along the edge.
Rim lights are often employed as hair lights. These lights are commonly known as backlights
or kickers.

Using 1-Point Lighting


The 1-point lighting scheme is dramatic, sometimes stark, and often foreboding. The
lighting involves a single, easily identifiable key light source, with no significant
supplemental sources. 1-point lighting can be found in the following situations:

 A man lights a cigarette in an otherwise dark alley.


 A woman drives a car down a dark country road, lit only by the car’s instrument panel.
 Sunbeams burst through the window of an otherwise unlit interior.
 A theater audience is illuminated by the light of the movie screen.

The motion picture genre that most closely emulates 1-point lighting is film noir. Film
noir is a style historically associated with crime dramas of the 1940s and 1950s. The style is
typified by black-and-white film stock, sparsely lit characters, and deep black shadows.

Aesthetically, the lighting stemmed from stories with cynical, paranoid, or nihilistic
outlooks. Technically, the stark lighting was the result of placing only a few lights on the set,
in some cases because of budgetary restrictions. Although multiple lights were generally
needed for any given shot for proper exposure, the result often appears as if a single light
source exists.

Using 2-Point Lighting


The 2-point lighting scheme matches many of the lighting scenarios that we encounter in
our everyday lives. The scheme often involves a strong key and an extremely diffuse fill. The
following are examples of 2-point lighting:
 Sunlight streams through a window. The light bounced from the interior walls serves as a
fill.
 Office workers sit in a windowless room lit with overhead fluorescent lights. The light
bounced from the walls, desks, and floor serves as a fill.
 A couple walks down a sidewalk on a sunny day. The light bounces off the concrete,
providing fill to the bottoms of their hands, the undersides of their chins, and their eye
sockets.

2-point lighting is often found in painted portraits. For example, in the figure above, a man is lit
by a strong key light arriving from the left. A second light source delivers fill from the right;
thus, no part of the person or his outfit is left unlit. This painting was created by Frans Hals,
whose loose, powerful brushstrokes inspired the Impressionism movement. This style of
lighting is called short lighting in studio photography; the side of the head facing away from
the camera receives the key. The opposite style of lighting is called broad lighting, in which
the side of the head facing the camera receives the key.

Using 3-Point Lighting


Perhaps the most commonly discussed and applied lighting technique is 3-point
lighting. Descriptions can be found in numerous 3D, film, and video instructional materials.
Although 3-point lighting is a reliable way to light many scenes, it has inherent drawbacks. In
the standard 3-point lighting scheme, a strong key is placed to one side of a subject
(approximately 15 to 45 degrees off the camera axis). A fill light is placed on the opposite
side and is at least half the intensity of the key (Figure below). A rim light is placed behind
the subject so that it grazes the subject’s edge.
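
A toy Python sketch of the rig described above (all numbers and names are illustrative assumptions, not a prescription from the text): the key sits roughly 30 degrees off the camera axis, the fill mirrors it at half the key's intensity, and the rim sits behind the subject.

import math

def three_point_rig(subject, camera, key_intensity=1.0, offset_deg=30.0, distance=5.0):
    # Place key, fill and rim lights in the ground plane around a subject,
    # measured relative to the direction from the subject toward the camera.
    sx, sy, sz = subject
    to_camera = math.atan2(camera[2] - sz, camera[0] - sx)
    def place(angle_deg):
        a = to_camera + math.radians(angle_deg)
        return (sx + distance * math.cos(a), sy + 2.0, sz + distance * math.sin(a))
    return {
        "key":  {"position": place(+offset_deg), "intensity": key_intensity},
        "fill": {"position": place(-offset_deg), "intensity": key_intensity * 0.5},
        "rim":  {"position": place(180.0),       "intensity": key_intensity * 0.75},
    }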

Using Naturalistic Lighting


Naturalistic lighting is an adaptable scheme that matches the natural lighting scenario
of the subject location. Any light that is visible is logically driven by a recognizable source.
Naturalistic lighting is sometimes called “transparent” in that no artificial lighting methods
can be detected. Another way to define naturalistic lighting is to list what it lacks:
 Unmotivated shadows
 Impossibly distinct rim light
 Perfectly placed lights that never permit a character to fall into shadow or be
unglamorously lit

In the field of motion pictures, there are numerous examples of non-naturalistic lighting.
Many films feature stylized or exaggerated lighting. This is particularly evident with musicals,
which are fantastic by their very nature. Such films as The Band Wagon (1953) and Silk
Stockings (1957) employ high-key lighting, in which the fill light is intense and there is a low
key-to-fill ratio. The characters in these films are therefore evenly lit and carry a minimum
number of deep, dark shadows. High-key lighting is also evident in many television sitcoms,
in which it is necessary to keep a character well lit at all positions on the set. Similar lighting
is employed for advertising and catalog art (see Figure Below).

In other situations, non-naturalistic lighting is a result of technical limitations or time
and budget restrictions. A common problem with older motion pictures is the unintended
creation of unmotivated, multiple shadows. For example, light representing the sun casts
multiple shadows of a character on the ground. More commonly, a lamp casts multiple,
distinct shadows of its own fixture (see Figure below). This is caused by a need to illuminate
a set with multiple lights to attain correct exposure even though the desired light source—in
terms of the story—is singular.

Using Stylized Lighting


Stylized lighting pays no heed to the real world but fabricates fantastic sources of
light or simply ignores the lighting information altogether. The oldest form of stylized
lighting can be called 0-point lighting. In this case, lighting plays no part in the artistic
representation. You can see this in prehistoric art, as well as in the art of ancient or primitive
cultures (see Figure below). To this day, 0-point lighting survives as line-art cartoons.

Stylized lighting is well suited for 3d animation, since the medium places no
limitation on the type of lighting employed.

3D Textures
3D textures are procedural. That is, they are generated mathematically through
predefined algorithms. Procedural textures are resolution independent and do not have
defined edges or borders. Many of the algorithms employed by Maya make use of fractal
math, which defines non regular geometric shapes that have the same degree of non
regularity at all scales. Thus, Maya 3D textures are suitable for many shading scenarios found
in the natural world. For example, the addition of 3D textures to a shading network can
distress and dirty a clean floor and wall (see Figure).
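
As a rough illustration of the fractal idea (plain Python; Maya's own texture algorithms are not shown here and are certainly more sophisticated), the sketch below sums a cheap hash-based value noise at several scales, so the same kind of irregularity appears at every scale:

import math

def value_noise3d(x, y, z, seed=0):
    # Hash-based value noise: a pseudo-random value per lattice corner, trilinearly blended.
    def corner(ix, iy, iz):
        h = (ix * 374761393 + iy * 668265263 + iz * 1274126177 + seed) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1103515245) & 0xFFFFFFFF
        return (h & 0xFFFF) / 65535.0
    ix, iy, iz = math.floor(x), math.floor(y), math.floor(z)
    fx, fy, fz = x - ix, y - iy, z - iz
    def lerp(a, b, t):
        return a + (b - a) * t
    c = {(dx, dy, dz): corner(ix + dx, iy + dy, iz + dz)
         for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)}
    return lerp(lerp(lerp(c[0, 0, 0], c[0, 0, 1], fz), lerp(c[0, 1, 0], c[0, 1, 1], fz), fy),
                lerp(lerp(c[1, 0, 0], c[1, 0, 1], fz), lerp(c[1, 1, 0], c[1, 1, 1], fz), fy), fx)

def fractal_texture(point, octaves=4):
    # Fractal sum (fBm): add noise at doubling frequencies and halving amplitudes.
    x, y, z = point
    total = norm = 0.0
    amplitude, frequency = 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * value_noise3d(x * frequency, y * frequency, z * frequency)
        norm += amplitude
        amplitude *= 0.5
        frequency *= 2.0
    return total / norm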

When you MMB-drag a 3D texture into the Hypershade work area or choose it
through the Create Render Node window, a 3D placement utility is automatically connected to
the texture and named place3dTexture (see Figure).
The scale, translation, and rotation of the 3D placement utility's placement box affect
the way in which the texture is applied to the assigned object. If the assigned object is scaled,
translated, or rotated, it will pick up different portions of the texture. By default, new
placement boxes are positioned at 0, 0, 0 in world space and are 2 × 2 × 2 units large. If the
3D placement utility is deleted or its connection is broken, Maya assumes that the 3D texture
sample is at its default size and position.

The 3D placement utility determines the color of each surface point by locating the
point's position within the placement box. Each position yields a potentially unique color.
This process is analogous to a surface dipped into a square bucket of swirled paint or a
surface chiseled from a solid cube of veined stone. Should the surface sit outside the
placement box, the surface continues to receive a piece of the texture border at the edge
of the placement box. A significant advantage of 3D textures, and of the 3D
placement utility, is the disregard of a surface's UV texture space.
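
In code terms, the lookup described above might be sketched like this (Python with NumPy; the real place3dTexture node has options not shown here): the surface point is carried into the placement box's local space by the inverse of its transform, and the procedural texture is evaluated at that local position, ignoring the surface's UVs entirely.

import numpy as np

def texture_color_at(point_world, placement_matrix, texture3d):
    # placement_matrix: the 4x4 world transform of the 3D placement utility's box.
    p = np.append(np.asarray(point_world, float), 1.0)
    local = np.linalg.inv(placement_matrix) @ p
    return texture3d(tuple(local[:3]))    # e.g. the fractal_texture() sketch above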

Camera Tracked 3D Animations


Introduction
One of the biggest advancements in 3D animation is camera tracking. Put simply, it is
the process of combining real video footage with 3D animation. The finished result is
amazing, as the subtle irregularities of reality are picked up perfectly in 3D. A well-prepared
scene will have 3D additions that aren't noticeable at all.

This kind of special effect used to be restricted to big-budget Hollywood, but now
most companies can have access to it.

Shooting the video footage


At Archiform 3D, people now use helicopters and high-definition video cameras the
most. Aerial footage is the best way to portray large, unique projects, and helicopters give us
the flexibility we need to get almost any kind of shot.

Advancements in camera technology also mean that the typical gyro-stabilized
camera mounts are not necessary, as the image stabilization in the camera and the work we
can do in post get a shot that is rock steady. Large-budget movies can afford
expensive stabilization pods and even better cameras, but made-for-TV, corporate or real
estate budgets can't stretch that far. Archiform 3D has always been a specialist in getting
maximum value for money, so hand-held, in-harness shots are 90% of our work.

High-definition video cameras also give us an incredibly clear, sharp image with the
resolution we need to accurately process the footage. And to make the process even faster,
these cameras interface either directly or via a tape deck to our work stations.

While Archiform 3D's rates are fixed, we do not include the helicopter costs, so you
need to budget for them. The helicopters can't be the least expensive ones either - they should
preferably be large jet-turbine models that don't bounce around and have the power to carry
out the maneuvers we need to get the best shots. For an hour of time in the air you
should allow between $800 and $1200.

Tracking the video


Firstly the video is stabilized. This process, while taxing on a workstation, doesn't
actually require much human time - just hours of CPU time. All the bumps and jumps are
removed from the footage and the result is silky-smooth video. If the shot is good, then less
stabilization is needed; but if it was taken from a small helicopter or in turbulent weather, then
more is needed and there is a slight reduction in finished quality.

The actual tracking of the video is a partly automated task, although it still requires a great
deal of human time. The simple fact is that it is a very complex process that cannot be done
by man or computer alone. Points in the video are marked, and through a process of manual
input and automated tracking the computer determines a set of moving 3D coordinates that
depict where the camera was at each point in time. As you may imagine, especially if you
have a mathematical background, the computational power needed to determine this and track
the points is huge and would have been next to impossible in the recent past.

The process of tracking a few minutes of video footage can take days, possibly weeks,
depending on the quality of the shoot and the subject.
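
The automated part of the point tracking can be sketched with OpenCV in Python (an assumption about tooling; the text does not say which software is used, and the file name is hypothetical). This only produces 2D point tracks per frame; solving for the moving 3D camera from those tracks is the harder, partly manual step described above.

import cv2

cap = cv2.VideoCapture("stabilized_footage.mp4")
ok, first = cap.read()
prev_gray = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=10)

tracks = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    good = new_points[status.flatten() == 1]      # keep only points tracked successfully
    tracks.append(good)
    prev_gray, points = gray, good.reshape(-1, 1, 2)
cap.release()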

Adding the 3D animation


Building the 3D model itself is a different process again, so from this point on we will
assume it is done. The camera coordinates are brought into the 3D scene as is the stabilized
footage. Then there is another painstaking process, which is aligning the camera and the 3D
model with the video background. Even though the footage is tracked the coordinates will not
actually relate to the 3D scene until adjustments are made. This may take hours or days to get
right. Then the lighting in the 3D scene needs to be adjusted to match the lighting on the
actual day the video was shot. The idea is to get a seamless combination of the model and
reality.

Other factors come into play as well, such as foreground objects in the video. These
require a technique called rotoscoping: a moving layer of transparency is added
to the final 3D-rendered footage to ensure that anything that is supposed to be in front of the
building is still there after the 3D footage has been added. Once again, this is a
computer/human combination that can take some time. Normal rates for Archiform 3D to
create camera-tracked 3D animations do NOT include rotoscoping, and we will shoot to avoid
the need for it, but if you require it we can do it for an additional fee.

Uses for tracked 3D animations


It is likely you have seen camera-tracked 3D animations many times; you just haven't
known it. The technique started where all cool 3D techniques come from - Hollywood. The
use of 3D has grown so much, and the need for virtual sets has grown with it. But virtual sets
can't always be perfect, and the need to blend in reality became overwhelming.

For example, imagine you want a scene of travelers on camels riding to the pyramids
of Egypt, but set in BC times. You could build it all in 3D, but getting it perfect, with your
actors in the scene, is hard and expensive. Camera tracking allows you to use real actors,
real camels and real desert, fly over in a helicopter, and then add the pyramids, city and
people in 3D. You now have a perfect scene that places your actors somewhere real, but
thousands of years in the past.

Archiform 3D has used the technique for new buildings and features. We fly over in a
helicopter and then add in the new work so the viewer sees it as though it was complete. If
the property has some kind of unique or attractive feature, like a beach, lake, park or great
location, then this technique is quite valuable.
Applications & Software of 3D Animation
Uses
3D modelers are used in a wide variety of industries. The medical industry uses them
to create detailed models of organs. The movie industry uses them to create and manipulate
characters and objects for animated and real-life motion pictures. The video game industry
uses them to create assets for video games. The science sector uses them to create highly
detailed models of chemical compounds. The architecture industry uses them to create
models of proposed buildings and landscapes. The engineering community uses them to
design new devices, vehicles and structures as well as a host of other uses. There are typically
many stages in the "pipeline" that studios and manufacturers use to create 3D objects for film,
games, and production of hard goods and structures.

Features
Many 3D modelers are general-purpose and can be used to produce models of various
real-world entities, from plants to automobiles to people. Some are specially designed to
model certain objects, such as chemical compounds or internal organs.

3D modelers allow users to create and alter models via their 3D mesh. Users can add,
subtract, stretch and otherwise change the mesh to their desire. Models can be viewed from a
variety of angles, usually simultaneously. Models can be rotated and the view can be zoomed
in and out.

3D modelers can export their models to files, which can then be imported into other
applications as long as the metadata is compatible. Many modelers allow importers and
exporters to be plugged-in, so they can read and write data in the native formats of other
applications.
Most 3D modelers contain a number of related features, such as ray tracers and other
rendering alternatives and texture mapping facilities. Some also contain features that support
or allow animation of models. Some may be able to generate full-motion video of a series of
rendered scenes (i.e. animation).

Major packages

A comparison of significant 3D packages and software is as follows:

3ds Max (Autodesk), originally called 3D Studio MAX. 3ds Max is used in many industries
that utilize 3D graphics. It is used in the video game industry for developing models and
creating cinema cut-scenes. It is used in architectural visualizations because it is highly
compatible with AutoCAD--also developed by Autodesk. Additionally 3ds Max is used in
film production. With its price of around $3500 USD, it is one of the more expensive
products in the market for this type of work.

AC3D (Inivis) is another 3D modeling application that began in the 1990s on the Amiga
platform. While it is used in a number of industries, MathWorks actively recommends it in
many of their aerospace-related articles due to price and compatibility. Additionally, it is the
first commercial 3D modeler to integrate full support for exporting models to the metaverse
platform Second Life. AC3D is priced in the range of $79 USD and is available for Mac OS
X, Windows and Linux.
Aladdin4D (DiscreetFX), first developed for the Amiga, was originally developed by
Adspec Programming in Ohio. It developed an enthusiastic following and sold over 18,000
copies on the Amiga platform alone. After being acquired by visual effects company
DiscreetFX, the package has finally been repositioned as a multi-platform product for Mac
OS X, Amiga OS 4.1, MorphOS, Linux, AROS and Windows. It is priced at $99.95 USD and
is available for Windows, Linux, MorphOS, Amiga OS 4 and AROS.

Cinema 4D (MAXON) is a slightly lighter package than the others in its basic configuration.
The software is claimed to be artist-friendly, and is designed with the less-technical user in
mind. It has a lower initial entry cost due to a modular a-la-carte design for purchasing
additional functions as users need them. For example, a module called MoGraph allows the
user to create motion graphics titles more easily. Originally developed for the Commodore
Amiga, it is also available for Mac OS X, Windows and Linux.

Electric Image Animation System (EI Technology Group) is a 3D animation and
rendering package available on both Mac OS X and Windows. Mostly known for its
rendering quality and rendering speed, it does not include a built-in modeler. EIAS features
the ability to handle very large polygon counts. Recently, the blockbuster film "Pirates of the
Caribbean: The Curse of the Black Pearl" and the television hit "Lost" utilized the software.

Inventor (Autodesk) The Autodesk Inventor product line provides a comprehensive and
flexible set of software for 3D mechanical design, product simulation, tooling creation, and
design communication that helps users cost-effectively take advantage of a Digital Prototyping
workflow to design and build better products in less time.

LightWave 3D (NewTek), first developed for the Amiga, was originally bundled as part of
the Video Toaster package and entered the market as a low cost way for TV production
companies to create quality CG for their programming. It first gained public attention with its
use in the TV series "Babylon 5". Lightwave is used extensively in film production. It is
priced at $795 USD and is available for both Windows and Mac OS X.

Maya (Autodesk) is currently used in the film and television industry. Maya has a high
learning curve but has developed over the years into an application platform in and of itself
through extensibility via its MEL programming language. A common alternative to the
default built-in rendering system, mental ray, is Pixar's RenderMan. In 2005, Autodesk
acquired Alias, the original creator of Maya. Maya comes in two versions: Maya Complete
($1999 USD) and Maya Unlimited ($4995 USD).

Softimage (Autodesk) is feature-similar to Maya and is sold as a competitive alternative. It
is used in the production of professional films, commercials, video games, and other media.
Softimage (formerly Softimage|XSI) is a 3D modeling and animation package that integrates
with mental ray rendering. Prior to its acquisition by Avid and later Autodesk, Softimage, Co.
originally promoted its predecessor for use in the video game industry and secured its
promotion as part of the Nintendo N64 SDK.

Solid Edge (Siemens PLM Software) is a commercial application for design, drafting,
analysis, and simulation of products, systems, machines and tools. All versions include
feature-based parametric modeling, assembly modeling, drafting, sheet metal, weldment,
freeform surface design, and data management. Application programming interfaces enable
scripting in Visual Basic and C. The Solid Edge Velocity series is
Siemens' mid-range CAD software product family.

Swift 3D (Electric Rain) is a powerful, approachable, and relatively inexpensive 3D design,
modeling, and animation application targeted at entry-level 3D users and Adobe Flash
designers. Swift 3D is the de facto Mac OS X and Windows application for vector- and
raster-based 3D animations for Adobe Flash and Microsoft Silverlight XAML. Swift 3D is
the only 3D software to directly integrate with Flash through the Swift 3D File Importer and
SmartLayer Technology, which separates the exported 3D animation into different layers
(Colors, Outlines, Shadows, Highlights, Reflections and Transparency), giving designers
additional creative control. Initially a pioneer in the 3D-to-vector Flash world, Swift 3D is
now accepted as a highly capable and versatile 3D authoring application, while maintaining
its reputation as the marquee tool for producing vector-based 3D content for Flash and
Microsoft Silverlight. It is priced at $249 USD.

*** End of Unit III ***
