• Computer graphics refers to:
o Digital representation and manipulation of image data
o Technologies for creating and modifying images
o A sub-field of computer science focused on methods for incorporating and
manipulating visual content, often called rendering
• Types of Computer Graphics:
o Interactive Computer Graphics
o Non-Interactive Computer Graphics
Applications of Computer Graphics:
• Graphical User Interface (GUI):
o Multiple windows, icons, and menus make it faster and easier to interact with the computer.
• Computer Art:
o Painting packages allow artists to create electronic paintings simulating
various brush strokes, widths, and colors.
o Photorealistic techniques, morphing, and animation are essential in
commercial art and film; film runs at 24 frames/sec, while video monitors
refresh at about 30 frames/sec.
• Computer-Aided Design (CAD) for Engineering and Architecture:
o Objects displayed in wireframe form.
o Multi-window environments enable different zooming scales and views.
o Animations help test performance.
Two Basic Color Display Techniques:
1. Beam–Penetration Method:
o Used with a random scan monitor for displaying pictures.
o Two phosphor layers, red and green, are coated on the inside of the screen.
o Electron beam penetration determines the displayed color:
▪ A powerful beam passes through the red layer, exciting the green
layer.
▪ A slower beam excites only the red layer.
▪ Medium-speed electrons create a mix of red and green, producing
orange and yellow.
o Advantages:
▪ Better resolution
▪ Inexpensive
o Disadvantages:
▪ Only four possible colors (red, orange, yellow, green)
▪ Time-consuming, since color is selected by controlling the beam's speed
2. Shadow–Mask Method:
o Used with a raster scan monitor for displaying pictures.
o Provides a wider color range than the beam penetration method.
o Structure:
▪ Three phosphor color dots (Red, Green, Blue) at each pixel position.
▪ A metal plate, the shadow mask, mounted just behind the phosphor-coated
screen, perforated with holes aligned to the triangular dot pattern.
o Working:
▪ The shadow mask directs the electron beam to specific dots on the
screen, displaying a picture.
▪ It uses three electron guns that direct beams through the shadow
mask to hit the desired pixel.
▪ The shadow mask allows for a broader range of colors and images
than the beam penetration method.
Two Kinds of Redrawing Mechanisms:
1. Raster / Bitmap-Scan:
o Each pixel has one of 2ⁿ intensities or colors, where n is the number of bits
per pixel.
o For true color, 24 bits per pixel are used (8 bits each for red, green, and
blue, giving 256 intensity levels per channel).
o Picture data is stored in a frame buffer (called a bitmap in black-and-white
systems, a pixmap in color systems).
o The electron beam scans the screen from top to bottom, refreshing the
screen one row at a time, turning the beam on and off to create illuminated
spots.
o This scanning process is called refreshing, and a full scan is called a frame.
o Frame rate is typically 60–80 frames per second (60 Hz to 80 Hz).
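The frame buffer's memory requirement follows directly from the resolution and the number of bits per pixel. A minimal sketch (the function name and the 1024×768 example are illustrative, not from the notes):

```python
def frame_buffer_bytes(width, height, bits_per_pixel):
    """Memory needed to store one frame, in bytes."""
    return width * height * bits_per_pixel // 8

# A 1024x768 display with 24-bit true color (8 bits each for R, G, B):
size = frame_buffer_bytes(1024, 768, 24)
print(size)          # 2359296 bytes
print(size / 2**20)  # 2.25 MiB
```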
2. Random / Vector-Scan:
o The image is represented by continuous geometric objects (lines, curves, etc.)
using mathematical formulas.
o The electron beam is directed only to the parts of the screen where a picture
is to be drawn.
o The picture definition is stored as line-drawing commands in a refresh display
file or buffer.
o Random-scan systems generally have higher resolution than raster systems
and can produce smooth line drawings, but they cannot display realistic
shaded scenes.
Display Controller:
• A raster display device reads the frame buffer and generates control signals for
horizontal and vertical scanning.
• Display controllers include a color map, which maps the input pixel value to the
output color.
Anti-Aliasing:
• Aliasing is the jagged, stair-step appearance of lines and edges that arises
because pixels can only occupy discrete integer positions.
• One way to reduce aliasing is by displaying objects at a higher resolution, but there is
a limit to how large the frame buffer can be while maintaining an acceptable refresh
rate.
Scan Conversion:
• The process of converting basic, low-level objects into their corresponding pixel map
representations.
• This is often an approximation of the object, as the frame buffer is a discrete grid.
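As a concrete illustration of scan conversion, here is a simple DDA-style sketch (a technique not named in the notes) that approximates a line by the nearest grid pixels:

```python
def scan_convert_line(x1, y1, x2, y2):
    """Approximate a line by grid pixels: step in equal increments and
    round each sample to the nearest integer pixel position."""
    steps = max(abs(x2 - x1), abs(y2 - y1))
    if steps == 0:
        return [(round(x1), round(y1))]
    dx, dy = (x2 - x1) / steps, (y2 - y1) / steps
    return [(round(x1 + i * dx), round(y1 + i * dy)) for i in range(steps + 1)]

# The ideal line from (0,0) to (4,2) becomes a discrete pixel approximation:
print(scan_convert_line(0, 0, 4, 2))
```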
Types of Graphics Software:
1. General Programming Package:
o Provides an extensive set of graphics functions for high-level programming
languages (e.g., C, FORTRAN).
o Includes basic drawing elements like lines, curves, polygons, color
transformations, etc.
o Example: OpenGL.
2. Special-Purpose Application Package:
o Customized for specific applications, providing required facilities with user
interfaces, so users don't need to worry about programming.
o Example: CAD, medical, and business systems.
Coordinate Representations:
• Most general packages use Cartesian coordinates for graphics.
• If coordinates are in a different reference frame, they must be converted to Cartesian
coordinates before being input.
• Special-purpose packages may allow other coordinate systems suited to the
application.
• Modeling Coordinates (or Local/Master Coordinates) are used to define the shape
of objects.
• Once shapes are defined, objects are placed in World Coordinates.
• The World Coordinates are converted to Device Coordinates (or Screen Coordinates)
for display on output devices.
• Often, Normalized Device Coordinates (ranging from 0 to 1) are used before
converting to specific device coordinates.
Graphics Functions:
• A general-purpose graphics package provides functions for creating and manipulating
pictures.
• Output Primitives: Basic building blocks such as characters, strings, points, lines,
curves, filled areas, and shapes.
• Input Functions: Used for controlling and processing input from devices like mouse
or tablet.
• Control Operations: Manage housekeeping tasks, such as clearing the display screen.
• Graphics Functions: Built-in functions available for user tasks, such as drawing,
manipulation, and control of graphic elements.
Images on the Screen:
• Objects in the real world have size and location, which can be represented on the
screen using a screen coordinate system.
• The screen coordinate system ensures the object is displayed in the same relative
position as in the real world.
Windows and Clipping:
• World Coordinate System: Defines the position of objects in the real world.
• Windowing: Refers to displaying only part of an object inside a window, which is a
rectangular pane.
• Viewport: A section of the screen where the images encompassed by the window in
the world coordinate system will be drawn.
• Coordinate Transformation: Required to transform the image from the world
coordinate system to the screen's coordinate system (viewport).
Clipping Process:
• Sometimes, only part of an object is visible, and parts outside the window are "cut
off" from view. This is known as clipping.
• Clipping involves checking each line or object to determine if it is inside or outside
the window.
• If a line crosses the window's boundary, the intersection is calculated and only the
portion inside the window is displayed.
Coordinate Transformation Equations:
To map the coordinates of an object from the world to the viewport:
1. Normalize Window Coordinates:
Normalized point on window = ( (XW − XWmin) / (XWmax − XWmin), (YW − YWmin) / (YWmax − YWmin) )
2. Normalize Viewport Coordinates:
Normalized point on viewport = ( (XV − XVmin) / (XVmax − XVmin), (YV − YVmin) / (YVmax − YVmin) )
3. Final Transformation (equate the two normalized points and solve for XV, YV):
XV = XVmin + (XW − XWmin) × Sx
YV = YVmin + (YW − YWmin) × Sy
Where the scaling factors are Sx = (XVmax − XVmin) / (XWmax − XWmin) and
Sy = (YVmax − YVmin) / (YWmax − YWmin).
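The window-to-viewport mapping can be sketched as a small Python function (names are illustrative; rectangles are given as (xmin, ymin, xmax, ymax)):

```python
def window_to_viewport(xw, yw, window, viewport):
    """Map a world point (xw, yw) from a window to a viewport."""
    wx0, wy0, wx1, wy1 = window
    vx0, vy0, vx1, vy1 = viewport
    sx = (vx1 - vx0) / (wx1 - wx0)  # scaling factor Sx
    sy = (vy1 - vy0) / (wy1 - wy0)  # scaling factor Sy
    return (vx0 + (xw - wx0) * sx, vy0 + (yw - wy0) * sy)

# The window's centre maps to the viewport's centre:
print(window_to_viewport(5, 5, (0, 0, 10, 10), (100, 100, 300, 200)))
# (200.0, 150.0)
```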
Clipping Algorithms:
1. Point Clipping:
o Determines whether a point lies within the window.
o If the point's X and Y coordinates fall within the window's bounds, it is inside
the window.
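The point test is just a pair of range checks; a minimal sketch (names are illustrative):

```python
def point_inside(x, y, xmin, ymin, xmax, ymax):
    """A point is inside the window iff both coordinates are in range."""
    return xmin <= x <= xmax and ymin <= y <= ymax

print(point_inside(5, 5, 0, 0, 10, 10))   # True
print(point_inside(5, 15, 0, 0, 10, 10))  # False
```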
2. Line Clipping (Cohen-Sutherland Algorithm):
o Clips line segments that fall outside the window.
o Uses a 4-bit code for each endpoint of a line to determine its position relative
to the window.
o Logical operations (AND, OR) of the endpoint codes help decide whether the
line is inside, outside, or intersects the window boundary.
Cohen-Sutherland Line Clipping Steps:
o Assign a 4-bit code for each endpoint (left, right, bottom, top).
o If both endpoints are inside the window, accept the line.
o If both endpoints are outside, reject the line.
o If one endpoint is outside, calculate the intersection with the window and
continue the process.
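The steps above can be sketched in Python (the region-code constants and function names are illustrative):

```python
# Region codes: one bit set per window boundary the point lies beyond.
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def region_code(x, y, xmin, ymin, xmax, ymax):
    code = INSIDE
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def cohen_sutherland(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Clip a segment to the window; return None if it lies entirely outside."""
    c1 = region_code(x1, y1, xmin, ymin, xmax, ymax)
    c2 = region_code(x2, y2, xmin, ymin, xmax, ymax)
    while True:
        if not (c1 | c2):   # both endpoints inside: trivially accept
            return (x1, y1, x2, y2)
        if c1 & c2:         # both beyond the same boundary: trivially reject
            return None
        c = c1 or c2        # pick an endpoint that is outside
        if c & TOP:
            x, y = x1 + (x2 - x1) * (ymax - y1) / (y2 - y1), ymax
        elif c & BOTTOM:
            x, y = x1 + (x2 - x1) * (ymin - y1) / (y2 - y1), ymin
        elif c & RIGHT:
            x, y = xmax, y1 + (y2 - y1) * (xmax - x1) / (x2 - x1)
        else:               # LEFT
            x, y = xmin, y1 + (y2 - y1) * (xmin - x1) / (x2 - x1)
        if c == c1:         # replace the outside endpoint and iterate
            x1, y1, c1 = x, y, region_code(x, y, xmin, ymin, xmax, ymax)
        else:
            x2, y2, c2 = x, y, region_code(x, y, xmin, ymin, xmax, ymax)

# A horizontal segment crossing the window (0,0)-(10,10) is clipped to it:
print(cohen_sutherland(-5, 5, 15, 5, 0, 0, 10, 10))
```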
3. Liang-Barsky Line Clipping:
o Uses parametric equations to find intersection points, solving inequalities to
find the parameter range for the line segment within the window.
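A Python sketch of the parametric approach (variable names are illustrative): the segment is P(t) = P1 + t·(P2 − P1) for 0 ≤ t ≤ 1, and each window boundary contributes one inequality that tightens the valid range [t0, t1].

```python
def liang_barsky(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Clip a parametric segment; return None if it lies entirely outside."""
    dx, dy = x2 - x1, y2 - y1
    p = (-dx, dx, -dy, dy)                            # left, right, bottom, top
    q = (x1 - xmin, xmax - x1, y1 - ymin, ymax - y1)
    t0, t1 = 0.0, 1.0
    for pi, qi in zip(p, q):
        if pi == 0:            # segment parallel to this boundary
            if qi < 0:
                return None    # parallel and outside: reject
        else:
            t = qi / pi
            if pi < 0:         # potentially entering: raise lower bound
                t0 = max(t0, t)
            else:              # potentially leaving: lower upper bound
                t1 = min(t1, t)
    if t0 > t1:
        return None
    return (x1 + t0 * dx, y1 + t0 * dy, x1 + t1 * dx, y1 + t1 * dy)
```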
4. Polygon Clipping (Sutherland-Hodgman Algorithm):
o Clips polygons against each edge of the clipping window.
o Handles cases of edges that are inside, outside, or crossing the window.
Polygon Clipping Types:
o Edges completely inside the window: Add the inside vertex point.
o Edges leaving the window: Add intersection points.
o Edges entirely outside: Discard.
o Edges entering the window: Save intersection and inside points.
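The four cases above map directly onto the inner loop of a sketch like the following (written for an axis-aligned rectangular window; names are illustrative):

```python
def sutherland_hodgman(polygon, xmin, ymin, xmax, ymax):
    """Clip a polygon (list of (x, y) vertices) against a rectangular window
    by clipping it against each of the four window edges in turn."""
    def clip_edge(verts, inside, intersect):
        out = []
        for i, cur in enumerate(verts):
            prev = verts[i - 1]
            if inside(cur):
                if not inside(prev):              # edge enters the window
                    out.append(intersect(prev, cur))
                out.append(cur)                   # keep the inside vertex
            elif inside(prev):                    # edge leaves the window
                out.append(intersect(prev, cur))
            # edge entirely outside: discard
        return out

    def x_cut(x):  # intersection with a vertical boundary x = const
        def f(p, q):
            t = (x - p[0]) / (q[0] - p[0])
            return (x, p[1] + t * (q[1] - p[1]))
        return f

    def y_cut(y):  # intersection with a horizontal boundary y = const
        def f(p, q):
            t = (y - p[1]) / (q[1] - p[1])
            return (p[0] + t * (q[0] - p[0]), y)
        return f

    verts = polygon
    verts = clip_edge(verts, lambda v: v[0] >= xmin, x_cut(xmin))
    verts = clip_edge(verts, lambda v: v[0] <= xmax, x_cut(xmax))
    verts = clip_edge(verts, lambda v: v[1] >= ymin, y_cut(ymin))
    verts = clip_edge(verts, lambda v: v[1] <= ymax, y_cut(ymax))
    return verts
```

For example, a triangle poking out of the left side of the window (0, 0)–(10, 10) comes back with the protruding corner cut off and two intersection points added.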
5. Text Clipping:
o There are three methods for text clipping:
▪ All or None String Clipping: if the entire string is inside the window,
keep it; otherwise discard the whole string.
▪ All or None Character Clipping: discard any individual character that is
not entirely inside the window.
▪ Character (Component) Clipping: clip individual characters, discarding
only the portion of a character that lies outside the window.
Summary of Key Concepts:
• Windowing and viewporting allow us to display part of an object in a specific region
of the screen.
• Clipping ensures only the visible part of an object is displayed on the screen,
removing anything outside the viewing area.
• Several algorithms (Cohen-Sutherland, Liang-Barsky, Sutherland-Hodgman) are used
to handle clipping for points, lines, polygons, and text efficiently.
THREE-DIMENSIONAL VIEWING
Key Concepts
• 3D viewing lets us see objects from different spatial angles (front, side, inside, etc.).
• 3D objects must be projected onto 2D viewing surfaces like a computer screen.
• Clipping boundaries define the view volume.
• Projection operations convert 3D coordinates to 2D coordinates on the view plane.
Modeling Transformation
• Transforms objects from local coordinates to a world coordinate system.
• Advanced 3D view methods:
o Flexible vibrating mirrors used for depth simulation.
o Stereoscopic devices provide separate views for each eye to simulate depth.
PROJECTION METHODS
1. Parallel Projection
• Projects 3D points to the 2D plane using parallel lines.
• Preserves size and proportions, but not realistic.
• No perspective depth—used in technical drawings.
Types of Parallel Projection
1. Orthographic Projection
o Projection lines are perpendicular to the view plane.
o Types:
▪ Top
▪ Front
▪ Side
▪ Axonometric (e.g., Isometric)
o Isometric Projection:
▪ Equal scale along all 3 axes.
▪ Preserves line parallelism but not angles.
2. Oblique Projection
o Projection lines are not perpendicular to the view plane.
o Types:
▪ Cavalier Projection: 45° angle; true length preserved.
▪ Cabinet Projection: ~63.4° angle; depth reduced to ½ length.
2. Perspective Projection
• More realistic: parallel lines converge at a point (vanishing point).
• Size of object decreases with distance.
• Center of projection is finite.
Perspective Projection Types
• One-point: Simple, one vanishing point.
• Two-point: Adds more depth, two vanishing points.
• Three-point: Most realistic, includes vertical vanishing point.
Steps in Perspective Projection
1. Modeling Transformation: To world coordinates.
2. Viewing Transformation: To viewing coordinates.
3. Perspective Transformation: 3D object appears deformed for realism.
4. Parallel Projection: Final 2D projection.
Formula concept (using similar triangles):
y = (f · py) / pz
Where f is the focal length and pz is the depth.
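The similar-triangles formula can be applied per coordinate; a minimal sketch (function name is illustrative, and the centre of projection is assumed at the origin with pz > 0):

```python
def perspective_project(px, py, pz, f):
    """Project a 3D point onto the view plane at distance f from the
    centre of projection, using similar triangles: x = f*px/pz, y = f*py/pz."""
    return (f * px / pz, f * py / pz)

# A point twice as far away appears half as large:
print(perspective_project(4, 2, 10, 5))  # (2.0, 1.0)
print(perspective_project(4, 2, 20, 5))  # (1.0, 0.5)
```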
VIEW VOLUMES
• Static view: View plane at origin; projection center adjusted.
• Animated view: Move through scene by shifting the viewing coordinate system.
THREE-DIMENSIONAL TRANSFORMATIONS
1. Translation
• Moves object in x, y, z directions.
• Translation vector: Tx, Ty, Tz
• Equations:
o x1 = x + Tx
o y1 = y + Ty
o z1 = z + Tz
2. Scaling
• Changes the size of objects.
• Scaling factors: Sx, Sy, Sz
• Uniform scaling: Sx = Sy = Sz
• Differential scaling: scaling factors differ.
Scaling relative to a fixed point (a, b, c):
1. Translate object to origin.
2. Scale.
3. Translate back.
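The translate–scale–translate sequence above can be sketched as follows (names are illustrative):

```python
def scale_about_point(point, sx, sy, sz, fixed):
    """Scale a 3D point about a fixed point (a, b, c)."""
    x, y, z = point
    a, b, c = fixed
    x, y, z = x - a, y - b, z - c        # 1. translate fixed point to origin
    x, y, z = x * sx, y * sy, z * sz     # 2. scale
    return (x + a, y + b, z + c)         # 3. translate back

# Doubling the size about (1, 1, 1) leaves the fixed point unchanged:
print(scale_about_point((1, 1, 1), 2, 2, 2, (1, 1, 1)))  # (1, 1, 1)
print(scale_about_point((3, 1, 1), 2, 2, 2, (1, 1, 1)))  # (5, 1, 1)
```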