Unit 4
Visible Surface Detection
(Hidden Surface Removal)
Hidden surface removal or visible surface determination is the process used to determine which
surfaces and parts of surfaces are not visible from a certain viewpoint.
Hidden surface determination is necessary to render an image correctly: features hidden behind the model itself must not be drawn, so that only the portion of the scene that is naturally visible from the viewpoint appears in the image.
Visible surface detection methods are broadly classified according to whether they deal with
objects or with their projected images.
These two approaches are:
1) Object space method: An object space method compares objects and parts of objects
to each other within the scene definition to determine which surfaces are visible.
2) Image space method: Visibility is decided point by point at each pixel position on the
projection plane.
Most visible surface detection algorithms use the image-space method, but in some cases
object-space methods are also used.
Algorithms
Back-face Detection (Plane equation Method)
A fast and simple object-space method for removing hidden surfaces from a 3D object drawing
is the plane-equation method.
A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C and D if
Ax + By + Cz + D < 0
i.e., we are looking at the back face of that polygon.
We can simplify this test by considering the normal vector N to a polygon surface which has
Cartesian components (A, B, C).
If V is a vector in the viewing direction from the eye position, then this polygon is a back face if
V · N > 0
If object descriptions have been converted to projection coordinates and our viewing direction
is parallel to the viewing zv axis, then V = (0, 0, Vz) and
V · N = Vz C
In a right-handed viewing system with viewing direction along the negative zv axis, the
polygon is a back face if C < 0. Also, we cannot see any face whose normal has z component
C = 0, since our viewing direction grazes that polygon. Thus, in general, we can label any
polygon as a back face if its normal vector has a z-component value
C ≤ 0
In a left-handed viewing system with viewing direction along the positive zv axis, the
polygon is a back face if C > 0. Also, we cannot see any face whose normal has z component
C = 0, since our viewing direction grazes that polygon. Thus, in general, we can label any
polygon as a back face if its normal vector has a z-component value
C ≥ 0
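As an illustration, here is a minimal sketch of this test, assuming a right-handed viewing system with the viewer looking along the negative zv axis; the polygon normal (A, B, C) is computed from three of its vertices, and the helper names are only illustrative:

def surface_normal(p0, p1, p2):
    # Normal (A, B, C) of the polygon's plane, from three non-collinear
    # vertices listed in counter-clockwise order.
    ux, uy, uz = p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2]
    vx, vy, vz = p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2]
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

def is_back_face(polygon):
    # Right-handed system, viewing along the negative zv axis:
    # the polygon is a back face if the z component C of its normal is <= 0.
    A, B, C = surface_normal(polygon[0], polygon[1], polygon[2])
    return C <= 0

# Example: a triangle facing the viewer (its normal points toward +z).
front = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(is_back_face(front))        # False
print(is_back_face(front[::-1]))  # True: reversed winding faces away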
Limitations:
This method works fine for convex polyhedra, but not necessarily for concave
polyhedra or overlapping objects. So, we need to apply other methods to determine
whether faces are partially or completely hidden by other objects
(e.g. using the Depth-Buffer Method or the Depth-Sort Method).
This method can only be used on solid objects modeled as a polygon mesh.
Depth Buffer Method ( Z-Buffer Method)
It is an image space method for detecting visible surface.
A commonly used image-space approach to detecting visible surfaces is the depth-buffer
method, which compares surface depths at each pixel position on the projection plane.
This procedure is also referred to as the z-buffer method, since object depth is usually
measured from the view plane along the z axis of a viewing system.
Each surface of a scene is processed separately, one point at a time across the surface. The
method is usually applied to scenes containing only polygon surfaces, because depth values
can be computed very quickly and the method is easy to implement. However, the method can
also be applied to non-planar surfaces.
With object descriptions converted to projection coordinates, each (x, y, z) position on a
polygon surface corresponds to the orthographic projection point (x, y) on the view plane.
Therefore, for each pixel position (x, y) on the view plane, object depths can be compared
by comparing z values.
The figure below shows three surfaces at varying distances along the orthographic projection
line from position (x, y) on a view plane taken as the xv yv plane. Surface S1 is closest at this
position, so its surface intensity value at (x, y) is saved. As implied by the name of this
method, two buffer areas are required.
The Z-buffer method compares surface depths at each pixel position on the projection
plane. Normally, the z-axis represents depth.
In this method, two buffers are used:
1) Depth Buffer: Stores depth values for each (x, y) position.
2) Frame Buffer: Stores the intensity values for each position.
Initially, all positions in the depth buffer are set to 0 (minimum depth), and the refresh buffer
is initialized to the background intensity. Each surface listed in the polygon tables is then
processed, one scan line at a time, calculating the depth (z value) at each (x, y) pixel position.
The calculated depth is compared to the value previously stored in the depth buffer at that
position. If the calculated depth is greater than the value stored in the depth buffer, the new
depth value is stored, and the surface intensity at that position is determined and placed in the
same xy location in the refresh buffer.
Algorithm:
1) Set the buffer values:
DepthBuffer(x, y)=0,
FrameBuffer(x, y)=Background color
2) Process each polygon, one at a time as follows:
For each projected (x, y) pixel position of a polygon, calculate depth ‘z’.
If z>DepthBuffer(x,y), compute surface color and set
DepthBuffer(x, y)=z,
FrameBuffer(x, y)=surfacecolor(x, y)
After all surfaces have been processed, the depth buffer contains depth values for the
visible surfaces and the frame buffer contains the corresponding color values for those
surfaces.
In the figure, at view plane position (x, y), surface 𝑠1 has the smallest depth from the view
plane, so it is visible at that position.
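A minimal sketch of this procedure is given below, using the same convention as the algorithm (a larger z value means the point is closer to the viewer). The rasterize helper, which yields the projected (x, y, z, color) samples of a surface, is an assumption of this sketch:

WIDTH, HEIGHT = 640, 480
BACKGROUND = (0, 0, 0)

# 1) Set the buffer values.
depth_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]

def z_buffer(surfaces, rasterize):
    # 2) Process each polygon, one at a time.
    # rasterize(surface) is assumed to yield an (x, y, z, color) sample for
    # every projected pixel position the surface covers.
    for surface in surfaces:
        for x, y, z, color in rasterize(surface):
            if z > depth_buffer[y][x]:        # closer than anything stored so far
                depth_buffer[y][x] = z        # keep the new depth ...
                frame_buffer[y][x] = color    # ... and the surface color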
Advantages:
- It is easy to implement.
- It reduces the speed problem if implemented in hardware.
- It processes one object at a time.
Disadvantages:
- It requires large memory.
- It is a time-consuming process as it requires comparison for each pixel instead of for the
entire polygon.
- It deals only with opaque objects, not with transparent objects.
A-Buffer Method
The A-buffer method is an extension of the depth-buffer method.
This method is also known as anti-aliased or area-averaged or accumulation buffer.
The A-buffer method is a general hidden-surface detection mechanism suited to
medium-scale virtual-memory computers.
Because the depth-buffer method can be used only for opaque objects and not for transparent
ones, the A-buffer method provides an advantage in this scenario.
Although the A-buffer method requires more memory, different surface colors can be
correctly composited using it.
As a descendant of the Z-buffer algorithm, it allows each position in the buffer to reference a
linked list of surfaces.
The key data structure in the A buffer is the accumulation buffer.
Each position in the A buffer has 2 fields :
1) Depth field
2) Surface data field or Intensity field
A depth field stores a positive or negative real number. A surface data field can store
surface intensity information or a pointer to a linked list of surfaces that contribute to that
pixel position.
If the depth value is >= 0:
The number stored at that position is the depth of the single surface overlapping the
corresponding pixel area.
The second field, i.e., the intensity field, then stores the RGB components of the surface
color at that point and the percent of pixel coverage.
If the depth value is < 0:
Multiple surfaces contribute to the pixel intensity.
The second field, i.e., the intensity field, then stores a pointer to a linked list of surface
data.
It proceeds just like the depth-buffer algorithm.
Here, the depth and opacity are used to determine the final color of the pixel.
As shown in the figure above, the A-buffer method can be used to show
transparent objects.
The surface buffer in the A buffer method includes :
1. Depth
2. Surface Identifier
3. Opacity Parameter
4. Percent of area coverage
5. RGB intensity components
6. Pointer to the next surface
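A minimal sketch of this two-field cell is shown below; the field names are illustrative, and the surface record carries the items listed above:

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SurfaceData:
    depth: float                              # depth of this surface fragment
    surface_id: int                           # surface identifier
    opacity: float                            # opacity parameter (0..1)
    coverage: float                           # percent of pixel area covered
    rgb: Tuple[float, float, float]           # RGB intensity components
    next: Optional["SurfaceData"] = None      # pointer to the next surface

@dataclass
class ABufferCell:
    depth: float      # >= 0: depth of the single surface covering this pixel
                      # <  0: several surfaces contribute to this pixel
    data: object      # color/coverage data when depth >= 0,
                      # head of a SurfaceData linked list when depth < 0

def has_single_surface(cell: ABufferCell) -> bool:
    return cell.depth >= 0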
Advantage:
- It provides anti-aliasing in addition to what Z-buffer does.
- In A-buffer method, each pixel is made up of a group of sub-pixels.
- The final color of a pixel is computed by summing up all of its sub-pixels.
- Due to this accumulation taking place at sub-pixel level, A-buffer method gets the name
accumulation buffer.
Disadvantage:
- It is slightly more costly than the Z-buffer method because it requires more memory.
Scan-Line Method
1. This method is an extension of the scan-line algorithm for filling polygon interiors.
2. This method is an example of image space method.
3. For all polygons intersecting each scan line:
Processed from left to right
Depth calculations are made for each overlapping surface
The intensity of the nearest surface is entered into the refresh buffer
Polygon tables:
The following polygon tables are used to store coordinate descriptions of polygons along with
their surfaces.
Vertex Table: contains all vertices and their coordinates
Edge table: contains all edge names and their coordinate endpoints
Surface facet table: contains all surfaces along with their corresponding edge names.
Edge table
Coordinate endpoints for each line
Slope of each line
Pointers into the polygon table
Identify the surfaces bounded by each line
Surface table
Coefficients of the plane equation for each surface
Intensity information for the surfaces
Pointers into the edge table
4. For each scan line, maintain the following: an active edge table and a surface flag for each surface (a simplified sketch follows the example below).
Active edge list
Contain only edges across the current scan line
Sorted in order of increasing x
Flag for each surface
Indicate whether inside or outside of the surface
At the leftmost boundary of a surface, the surface flag is turned on
At the rightmost boundary of a surface, the surface flag is turned off
Example:
Active list for scan line 1
Edge table - AB, BC, EH, and FG
Between AB and BC, only the flag for surface S1 is on
No depth calculations are necessary
Intensity for surface S1 is entered into the refresh buffer
Similarly, between EH and FG, only the flag for S2 is on
For scan lines 2 and 3
Edge Table - AD, EH, BC, and FG
Between AD and EH, only the flag for S1 is on
Between EH and BC, the flags for both surfaces are on
Depth calculation is needed
Intensities for S1 are loaded into the refresh buffer until BC
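The following simplified sketch processes one scan line in this spirit; the active edge list is assumed to be already built and sorted by increasing x, and each surface is assumed to provide its color and a depth_at(x, y) helper (a smaller depth meaning nearer to the view plane):

def render_scan_line(y, active_edges, surfaces, width, refresh_buffer, background):
    # active_edges: list of (x_intersection, surface_id) pairs for this scan line,
    # assumed to be sorted in order of increasing x.
    flags = set()                                   # surfaces whose flag is "on"
    refresh_buffer[y] = [background] * width
    for i, (x_left, sid) in enumerate(active_edges):
        flags ^= {sid}                              # toggle the flag at each boundary
        x_right = active_edges[i + 1][0] if i + 1 < len(active_edges) else width
        for x in range(int(x_left), int(x_right)):
            if len(flags) == 1:                     # only one flag on: no depth test
                (only,) = flags
                refresh_buffer[y][x] = surfaces[only].color
            elif len(flags) > 1:                    # overlap: compare depths
                nearest = min(flags, key=lambda s: surfaces[s].depth_at(x, y))
                refresh_buffer[y][x] = surfaces[nearest].color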
Advantage:
- Any number of overlapping polygon surfaces can be processed with this method.
Limitation:
- The scan-line method runs into trouble when surfaces cut through each other or otherwise
cyclically overlap. Such surfaces need to be divided.
Depth-Sorting Method
Also known as the Painter's Algorithm. This algorithm uses both image-space and
object-space techniques.
The algorithm is called the Painter's Algorithm because it works the way a painter creates
an oil painting. The artist starts with an empty canvas, first paints a background layer for the
painting, and then adds layers of objects one by one. In this way he completes the painting,
each new layer partially or fully covering the previous ones as the painting requires.
This algorithm paints the polygons in the view plane according to their distance from the
viewer. The polygons that are farther from the viewer are painted first; the nearer polygons
are then painted on or over the more distant ones as required.
In this algorithm, the polygons or surfaces in the scene are first sorted and then
painted into the frame buffer in order of decreasing distance from the viewpoint,
starting with the polygon of maximum depth (that is, minimum z-value).
First, a depth sort is performed in which the polygons are listed according to
their visibility order or depth priority.
Because this algorithm uses the concept of depth priority, it is also called the depth-priority
algorithm or priority algorithm.
The frame buffer is first painted with the background color. The farthest polygon is then
entered into the frame buffer: the pixel information changes, i.e. the background
information is replaced by that of the farthest polygon. This replacement is repeated as we
move from one polygon to the next, ending with the nearest polygon.
Comparisons are usually performed whenever polygons may overlap each other. The most
common method used for the comparison is called the mini-max method.
For this purpose, rectangles are drawn around the polygons such that each rectangle
exactly fits its polygon.
The rectangles are then checked to see whether they overlap each other. If the
rectangles do not overlap, the surfaces do not overlap either. If the rectangles do overlap,
the surfaces may also overlap and must be examined further, which is
as shown in the following figure:
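As an illustration, a minimal sketch of the depth sort and the mini-max rectangle test follows; each polygon is assumed to be given as a list of projected (x, y, z) vertices:

def min_z(polygon):
    # The polygon of minimum z-value is the one of maximum depth
    # (farthest from the viewer) under the convention used above.
    return min(z for _, _, z in polygon)

def painters_order(polygons):
    # Farthest polygon first, nearest polygon last.
    return sorted(polygons, key=min_z)

def bounding_rect(polygon):
    xs = [x for x, _, _ in polygon]
    ys = [y for _, y, _ in polygon]
    return min(xs), max(xs), min(ys), max(ys)

def rects_overlap(p, q):
    # Mini-max test: if the enclosing rectangles do not overlap, the polygons
    # cannot overlap; if they do overlap, further tests are needed.
    pxmin, pxmax, pymin, pymax = bounding_rect(p)
    qxmin, qxmax, qymin, qymax = bounding_rect(q)
    return not (pxmax < qxmin or qxmax < pxmin or pymax < qymin or qymax < pymin)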
BSP Tree Method
A binary space partitioning tree (bsp–tree) is a binary tree whose nodes contain polygons.
Binary space partitioning, or BSP, divides space into distinct sections by building a tree
representing that space.
Used to sort polygons.
A BSP tree takes the polygons and divides them into two groups by choosing a plane, usually
taken from the set of polygons, which splits the world into two spaces.
It decides which side of the plane each polygon is on; a polygon may also lie on the plane itself.
If a polygon intersects the splitting plane, it must be split into two separate polygons, one on
each side of the plane.
The tree is built by choosing a partitioning plane and dividing the remaining polygons into
two or three lists: Front, Back and On lists – done by comparing the normal vector of the
plane with that of each polygon.
For each node in a bsp-tree the polygons in the left subtree lie behind the polygon at the node
while the polygons in the right subtree lie in front of the polygon at the node.
Each polygon has a fixed normal vector, and front and back are measured relative to this
fixed normal.
Once a bsp-tree is constructed for a scene, the polygons are rendered by an in-order traversal
of the bsp-tree.
Recursive algorithms for generating a bsp-tree and then using the bsp-tree to render a scene
are presented below.
Algorithm for Generating a BSP–Tree
1. Select any polygon (plane) in the scene for the root.
2. Partition all the other polygons in the scene to the back (left subtree) or the front (right
subtree).
3. Split any polygons lying on both sides of the root (see below).
4. Build the left and right subtrees recursively.
BSP-Tree Rendering Algorithm (In order tree traversal)
If the eye is in front of the root, then
Display the left subtree (behind)
Display the root
Display the right subtree (front)
If the eye is behind the root, then
Display the right subtree (front)
Display the root
Display the left subtree (back)
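A minimal sketch of this traversal is given below; the node fields and the plane stored with each polygon are assumptions of the sketch, and tree construction is not shown:

from dataclasses import dataclass
from typing import Optional

@dataclass
class BSPNode:
    polygon: object                        # partitioning polygon stored at this node
    plane: tuple                           # its plane coefficients (A, B, C, D)
    back: Optional["BSPNode"] = None       # left subtree: polygons behind the plane
    front: Optional["BSPNode"] = None      # right subtree: polygons in front of the plane

def eye_in_front(eye, plane):
    # The eye is on the front side if it lies on the side the fixed normal points to.
    A, B, C, D = plane
    x, y, z = eye
    return A * x + B * y + C * z + D > 0

def render_bsp(node, eye, draw):
    # In-order traversal: the far subtree is always drawn before the node,
    # and the near subtree after it (back-to-front order).
    if node is None:
        return
    if eye_in_front(eye, node.plane):
        render_bsp(node.back, eye, draw)   # behind the plane: farther, draw first
        draw(node.polygon)
        render_bsp(node.front, eye, draw)  # in front of the plane: nearer, draw last
    else:
        render_bsp(node.front, eye, draw)
        draw(node.polygon)
        render_bsp(node.back, eye, draw)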
Area-Subdivision Method (Warnock’s Algorithm):
An interesting approach to the hidden-surface problem was developed by Warnock.
He developed an area-subdivision algorithm that subdivides each area into four equal
squares.
In this method, the total viewing area is successively divided into smaller and smaller
rectangles until each small area is simple, i.e. it is a single pixel, is covered wholly by a
part of a single visible surface, or is covered by no surface at all.
Fig: Dividing a square area into equal-sized quadrants at each step.
At each stage in the recursive-subdivision process, the relationship between projection of
each polygon and the area of interest is checked for four possible relationships:
1. Surrounding polygon – One that completely encloses the (shaded) area of interest.
2. Overlapping or Intersecting Polygon –One that is partly inside and partly outside the
area.
3. Inside or Contained Polygon – One that is completely inside the area.
4. Outside or Disjoint Polygon - One that is completely outside the area.
After checking these four relationships, we can handle each case as follows:
1. If all the polygons are disjoint from the area, then the background colour is displayed in the
area.
2. If there is only one intersecting or only one contained polygon, then the area is first filled with
background colour, and then the part of the polygon contained in the area is filled with colour of
polygon.
3. If there is a single surrounding polygon, but no intersecting or contained polygons, then the
area is filled with the colour of the surrounding polygon.
4. If more than one polygon is intersecting, contained in, or surrounding the area, then we
must do some more processing.
Fig a) The four intersections of the surrounding polygon are all closer to the viewpoint than any of
the other intersections. Therefore, the entire area is filled with the colour of the surrounding
polygon.
Fig b) The surrounding polygon is not completely in front of the intersecting polygon.
Warnock's algorithm stops subdividing an area only when the problem is simplified, or when
the area is a single pixel.
Algorithm:
1. Initialize the area to be the whole screen.
2. Create the list of polygons by sorting them on the z-values of their vertices. Do not include
disjoint polygons in the list because they are not visible.
3. Find the relationship of each polygon.
4. Perform the visibility decision test,
a. If all the polygons are disjoint from the area, then fill area with background colour.
b. If there is only one intersecting or only one contained polygon then first fill entire area
with background colour and then fill the part of the polygon contained in the area with the
colour of polygon.
c. If there is a single surrounding polygon, but no intersecting or contained polygons, then
fill the area with the colour of the surrounding polygon.
d. If the surrounding polygon is closer to the viewpoint than all other polygons, so that all
other polygons are hidden by it, fill the area with the colour of the surrounding polygon.
e. If the area is the pixel (x, y) and neither a, b, c, nor d apply, compute the z coordinate at
pixel (x, y) of all polygons in the list. The pixel is then set to colour of the polygon which is
closer to the viewpoint.
5. If none of the above tests are true, then subdivide the area and go to step 2.
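A simplified sketch of this procedure is shown below. The classification of a polygon against an area, the fill routine, and the depth tests are abstracted into assumed helpers passed in through a single "helpers" object; clipping a polygon to the area is omitted for brevity:

def warnock(area, polygons, helpers):
    # area is (x, y, width, height); helpers is assumed to provide:
    #   classify(p, area) -> 'surrounding' | 'intersecting' | 'contained' | 'disjoint'
    #   fill(area, color), color_of(p), nearest_at(x, y, polys),
    #   front_of_all(p, polys, area)  (the "closest surrounding polygon" test)
    #   background                    (the background colour)
    x, y, w, h = area
    relevant = [p for p in polygons if helpers.classify(p, area) != 'disjoint']

    if not relevant:                                   # a) only disjoint polygons
        helpers.fill(area, helpers.background)
    elif len(relevant) == 1:                           # b)/c) a single simple polygon
        helpers.fill(area, helpers.background)
        helpers.fill(area, helpers.color_of(relevant[0]))   # clipping omitted in this sketch
    elif any(helpers.classify(p, area) == 'surrounding' and
             helpers.front_of_all(p, relevant, area) for p in relevant):
        front = next(p for p in relevant               # d) a surrounding polygon hides the rest
                     if helpers.classify(p, area) == 'surrounding' and
                     helpers.front_of_all(p, relevant, area))
        helpers.fill(area, helpers.color_of(front))
    elif w <= 1 and h <= 1:                            # e) single pixel: explicit depth test
        helpers.fill(area, helpers.color_of(helpers.nearest_at(x, y, relevant)))
    else:                                              # otherwise subdivide into four quadrants
        hw, hh = w / 2, h / 2
        for sub in ((x, y, hw, hh), (x + hw, y, hw, hh),
                    (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)):
            warnock(sub, relevant, helpers)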
Advantages:
1. It follows the divide-and-conquer strategy; therefore, parallel computers can be used to speed
up the process.
2. Extra memory buffer is not required.
Octree Method
In these methods, octree nodes are projected onto the viewing surface in a front-to-back
order.
Any surfaces toward the rear of the front octants (0,1,2,3) or in the back octants (4,5,6,7)
may be hidden by the front surfaces.
With the numbering method (0,1,2,3,4,5,6,7), nodes representing octants 0,1,2,3 for the
entire region are visited before the nodes representing octants 4,5,6,7.
Similarly, the nodes for the front four sub-octants of octant 0 are visited before the nodes for
the four back sub-octants.
When a colour is encountered in an octree node, the corresponding pixel in the frame buffer
is painted only if no previous color has been loaded into the same pixel position.
In most cases, both a front and a back octant must be considered in determining the correct
color values for a quadrant. But
o If the front octant is homogeneously filled with some color, we do not process the
back octant.
o If the front is empty, it is necessary only to process the rear octant.
o If the front octant has heterogeneous regions, it has to be subdivided and the sub-
octants are handled recursively.
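A minimal sketch of this front-to-back traversal follows. The octant numbering (children 0-3 in front of children 4-7), the paint helper that colours only still-unpainted pixels, and the subregion helper that maps an octant pair to its screen quadrant are all assumptions of the sketch:

def display_octree(node, region, paint, subregion):
    # node is None (empty), a colour value (homogeneous octant), or a list of
    # eight children numbered so that 0-3 lie in front of 4-7 for this view.
    # paint(region, colour) is assumed to colour only the still-unpainted pixels
    # of the region; subregion(region, i) is assumed to return the quadrant
    # onto which octants i and i + 4 project.
    if node is None:
        return
    if not isinstance(node, list):
        paint(region, node)                   # homogeneous: paint its colour
        return
    for i in range(4):
        quad = subregion(region, i)
        display_octree(node[i], quad, paint, subregion)       # front octant first
        display_octree(node[i + 4], quad, paint, subregion)   # back octant fills only
                                                              # what the front left empty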
Ray-Casting Method
Ray casting, also referred to as ray tracing, is an efficient method for detecting the visible
surfaces of objects.
A ray is fired from the eye through each pixel on the screen in order to locate the
polygon in the scene closest to the eye.
The color and intensity of this polygon are displayed at the pixel.
Ray casting is easy to implement for polygonal models because the only calculation
required is the intersection of a line with a plane.
Fig: A ray is fired from the center of projection through each pixel to which the window
maps, to determine the closest object intersected.
Ray Casting Algorithm
For each pixel, fire a ray from the eye through that pixel:
Intersect the ray with each polygonal plane.
Reject intersections that lie outside the polygon.
Accept the closest remaining intersection -- that is, the intersection with the
smallest value of the parameter along the line.
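A minimal sketch of this loop for polygonal models is shown below, with the ray-plane intersection written out explicitly; the point-in-polygon ("inside") test and the plane/color attributes of each polygon are assumptions of the sketch:

def ray_plane_t(origin, direction, plane):
    # Parameter t of the intersection of the ray origin + t*direction with the
    # plane Ax + By + Cz + D = 0, or None if there is no hit in front of the eye.
    A, B, C, D = plane
    denom = A * direction[0] + B * direction[1] + C * direction[2]
    if abs(denom) < 1e-9:                     # ray parallel to the plane
        return None
    t = -(A * origin[0] + B * origin[1] + C * origin[2] + D) / denom
    return t if t > 0 else None

def cast_ray(eye, direction, polygons, inside):
    # Returns the color of the closest polygon hit by the ray, or None.
    # inside(point, polygon) is an assumed point-in-polygon test.
    best_t, best_color = None, None
    for poly in polygons:
        t = ray_plane_t(eye, direction, poly.plane)
        if t is None:
            continue
        point = tuple(e + t * d for e, d in zip(eye, direction))
        if not inside(point, poly):           # reject hits outside the polygon
            continue
        if best_t is None or t < best_t:      # keep the smallest parameter value
            best_t, best_color = t, poly.color
    return best_color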
Advantage:
Ray casting can be used even with non-polygonal surfaces. All that is needed to implement
the ray casting algorithm for hidden surfaces is a line/surface intersection algorithm for each
distinct surface type.
Disadvantage:
This method is slow. Ray casting is a brute force technique that makes no use of pixel
coherence.
Polygon (surface) Rendering Method
Applying an illumination model to the rendering of standard graphics objects, those formed
with polygon surfaces, is the key technique of polygon-rendering algorithms.
Calculating the surface normal at each visible point and applying the desired illumination
model at that point is expensive. We can describe more efficient shading models for surfaces
defined by polygons and polygon meshes.
Scan line algorithms typically apply a lighting model to obtain polygon surface rendering
in one of
two ways. Each polygon can be rendered with a single intensity, or the intensity can be obtained at
each point of the surface using an interpolating scheme.
1. Constant Intensity Shading (Flat Shading)
The simplest shading model for a polygon is constant-intensity shading, also called
faceted shading or flat shading.
This approach applies an illumination model once to determine a single intensity value that
is then used to render an entire polygon.
Constant shading is useful for quickly displaying the general appearance of a curved
surface.
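As an illustration, a minimal flat-shading sketch follows: one intensity is computed per polygon from its normal N and the light direction L, and is then reused for every pixel of the polygon. The simple ambient-plus-diffuse (N.L) term here stands in for whatever illumination model is actually used:

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def flat_shade(normal, light_dir, ambient=0.1, diffuse=0.9):
    # One intensity for the whole polygon: ambient term plus a diffuse N.L term.
    n, l = normalize(normal), normalize(light_dir)
    n_dot_l = max(0.0, sum(a * b for a, b in zip(n, l)))
    return min(1.0, ambient + diffuse * n_dot_l)

# The same intensity is then used for every pixel the polygon covers.
intensity = flat_shade(normal=(0, 0, 1), light_dir=(0, 0.5, 1))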
This approach is valid if several assumptions are true:
a) The light source is sufficiently far so that N.L is constant across the polygon face.