
Visible Surface Detection / Hidden Surface Removal

When rendering a 3D scene on a 2D screen, we need to determine which surfaces are
visible and which ones are hidden behind other objects.

This is known as Visible Surface Detection or Hidden Surface Removal (HSR).

Without HSR, all surfaces would be drawn, causing overlapping objects to appear
incorrectly and leading to visual artifacts and false depth perception.
(HSR can be performed before or after conversion to a 2D scene.)
Back-face detection algorithm

The simplest visibility test is the back-face detection algorithm:
- fast and effective as an initial screening to eliminate many polygons
from further visibility tests.
- For a single convex polyhedron, back-face detection eliminates all hidden surfaces.
- In general, however, back-face detection cannot completely identify all hidden
surfaces.
Depth-buffer algorithm
A commonly used method for identifying all visible surfaces in a scene:
- highly efficient, but it does have extra storage requirements.
- Two buffers are needed: one to store pixel colors and one to store the depth values
for the pixel positions.
Fast, incremental, scan-line methods are used to process each polygon in a scene to
calculate surface depths.
As each surface is processed, the two buffers are updated.
An extension of the depth-buffer approach is the A-buffer, which provides additional
information for displaying antialiased* and transparent surfaces.
*Antialiased surfaces are smooth-rendered surfaces in computer graphics where
jagged edges or pixelation (aliasing) are minimized.
Other visibility-detection methods
The scan-line method processes all surfaces at once for each scan line.
With the depth-sorting method (painter’s algorithm), objects are “painted” into
the refresh buffer according to their distances from the viewing position.
Subdivision schemes for identifying visible parts of a scene include the BSP-tree
method, area subdivision, and octree representations.
Visible surfaces can also be detected using ray-casting methods, which project
lines from the pixel plane into a scene to determine object intersection positions
along these projected lines.
Ray-casting methods are an integral part of ray-tracing algorithms, which allow
scenes to be displayed with global-illumination effects.
I) Back-Face Detection
A fast and simple object-space method* for locating the back faces of a polyhedron is
based on front-back tests.
A point (x, y, z) is behind a polygon surface if: Ax + By + Cz + D < 0
where A, B, C, and D are the plane parameters for the polygon.
___________________________________________________________________________________
*In computer graphics, transformations and visibility determination are often performed in two
different coordinate spaces: Object Space (Model Space) and Image Space (Screen Space).
Object space refers to the local coordinate system of an object before any transformations are applied
- the coordinate system in which an object is first defined before being placed into a world or scene.
Image space (also called screen space or pixel space) refers to the final 2D projection of objects onto
the screen.
We can simplify the back-face test by considering the direction of the normal
vector N for a polygon surface.

If Vview is a vector in the viewing direction from our camera position, as shown in
Figure 1, then a polygon is a back face if Vview · N > 0.

Furthermore, if object descriptions have been converted to projection coordinates (the 3D
scene has been transformed into a 2D representation) and our viewing direction is
parallel to the viewing zv axis, then we need to consider only the z component of
the normal vector N.
In a right-handed viewing system with the viewing direction along the
negative zv axis (Figure 2), a polygon is a back face if :
the z component, C, of its normal vector N satisfies C < 0.
Also, we cannot see any face whose normal has z component C = 0, because
our viewing direction is grazing that polygon. (If a polygon's normal vector
has Nz = 0, then its surface is parallel to the view direction.)
Thus, in general, we can label any polygon as a back face if its normal vector
has a z component value that satisfies the inequality C ≤ 0
For other objects, such as the concave polyhedron in Figure 3, more tests must be carried
out to determine whether there are additional faces that are totally or partially obscured by
other faces.
Back-Face Detection Algorithm

1⃣ Compute the normal vector N for each polygon using the cross product.
2⃣ Find the viewing vector V (often along the z-axis).
3⃣ Compute the dot product N⋅V
4⃣ If N⋅V≥0, discard the polygon (back face).
5⃣ If N⋅V<0, render the polygon (front face).

In 3D rendering, objects are represented as polygon meshes, which are collections of polygons defining the
shape.
(In many 3D graphics systems, including OpenGL, the default viewing direction is along the negative z-axis.
The viewing vector is: V=(0,0,−1), The negative z-axis points into the screen, meaning the viewer is looking towards decreasing
z-values.)
Consider a triangle in 3D space with vertices:

A(1,1,1), B(2,3,1), C(3,1,1)


1⃣ Compute two edge vectors:
V1=B−A=(1,2,0)
V2=C−A=(2,0,0)

2⃣ Compute the normal using the cross product:

N=V1×V2=(0,0,−4)

3⃣ The z-component Nz=−4, which is less than 0, so the triangle is a back face and can be culled.
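As a rough illustration of the five steps above (not code from the text), a Python sketch of the same test on this example triangle might look like this:

```python
import numpy as np

def is_back_face(vertices, view_vector=(0.0, 0.0, -1.0)):
    """Back-face test: returns True if the polygon faces away from the viewer.

    vertices: three (x, y, z) points of the polygon, in order.
    view_vector: viewing direction; (0, 0, -1) assumes a viewer looking along
    the negative z-axis, as in the example above.
    """
    a, b, c = (np.array(v, dtype=float) for v in vertices)
    normal = np.cross(b - a, c - a)           # N = V1 x V2
    return np.dot(normal, view_vector) >= 0   # N . V >= 0 -> back face (cull)

# Example triangle from the text: N = (0, 0, -4), N . V = 4 >= 0 -> culled
print(is_back_face([(1, 1, 1), (2, 3, 1), (3, 1, 1)]))  # True
```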
Other issues
Back-face detection only removes faces that point away from the viewer but does not
check if a front face is hidden behind another object.

If the object is concave or has holes, back-face detection may incorrectly classify some
faces as visible.

Faces that are nearly parallel to the viewing direction (grazing faces, whose normals are
nearly perpendicular to the view vector) may still be visible but are not handled reliably
by back-face detection.

In scenes with transparent materials such as glass or water, back faces can contribute to
the image, so removing them causes objects to be rendered incorrectly.
II) Depth-Buffer Method (Z-buffer)
Compares surface depth values throughout a scene for each pixel position on the projection
plane
It is an image-space method: image-space methods resolve visibility per pixel in the 2D projection.
The algorithm is usually applied to scenes containing only polygon surfaces, because depth
values can be computed very quickly and the method is easy to implement.
But we could also apply the same procedures to nonplanar surfaces.
Also called z-buffer method, because object depth is usually measured along the z axis of a
viewing system.
However, rather than using actual z coordinates within the scene, depth-buffer algorithms
often compute a distance from the view plane along the z axis
Figure 4 shows three surfaces at varying distances along the orthographic projection line
from position (x, y) on a view plane.
These surfaces can be processed in any order.
As each surface is processed, its depth from the view plane is compared to previously
processed surfaces.
If a surface is closer than any previously processed surfaces, its surface color is
calculated and saved, along with its depth
For these methods, the running time complexity is proportional to the number of pixels times
the number of objects.
The space complexity is twice the number of pixels, because two pixel arrays are
required: one for the frame buffer and the other for the depth buffer.
The visible surfaces in a scene are represented by the set of surface
colors that have been saved after all surface processing is completed.
Implementation of the depth-buffer algorithm is typically carried out in
normalized coordinates, so that depth values range from 0 at the near
clipping plane (the view plane) to 1.0 at the far clipping plane.
As implied by the name of this method, two buffer areas are required.
A depth buffer is used to store depth values for each (x, y) position as
surfaces are processed,
and the frame buffer stores the surface-color values for each pixel
position.
Initially, all positions in the depth buffer are set to 1.0 (maximum depth),
and the frame buffer (refresh buffer) is initialized to the background
color.
Each surface listed in the polygon tables is then processed, one scan line at a
time, by calculating the depth value at each (x, y) pixel position.
This calculated depth is compared to the value previously stored in the
depth buffer for that pixel position.
If the calculated depth is less than the value stored in the depth buffer, the
new depth value is stored.
Then the surface color at that position is computed and placed in the
corresponding pixel location in the frame buffer.
Depth-buffer processing

This algorithm assumes that depth values are normalized on the range
from 0.0 to 1.0 with the view plane at depth = 0 and the farthest depth =
1.
We can also apply this algorithm for any other depth range, and some
graphics packages allow the user to specify the depth range over which the
depth-buffer algorithm is to be applied.
Within the algorithm, the variable z represents the depth of the polygon (that
is, its distance from the view plane along the negative z axis).
Depth-Buffer Algorithm
1. Initialize the depth buffer and frame buffer so that for all buffer positions (x, y),
depthBuff (x, y) = 1.0, frameBuff (x, y) = backgndColor
2. Process each polygon in a scene, one at a time, as follows:
● For each projected (x, y) pixel position of a polygon, calculate the depth z (if
not already known).
● If z < depthBuff (x, y), compute the surface color at that position and set
depthBuff (x, y) = z,
frameBuff (x, y) = surfColor (x, y)
After all surfaces have been processed, the depth buffer contains depth values for the
visible surfaces and the frame buffer contains the corresponding color values for
those surfaces.
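A minimal Python sketch of this two-buffer loop, assuming a caller-supplied rasterize() helper that yields projected pixel positions with their depths (the names here are illustrative, not from the text):

```python
def depth_buffer_render(polygons, width, height, backgnd_color, rasterize):
    """Depth-buffer (z-buffer) sketch: keep the closest surface at each pixel.

    Depths are assumed normalized to [0.0, 1.0], with 0.0 at the view plane.
    `rasterize(poly)` is a caller-supplied (hypothetical) helper yielding
    (x, y, z, color) for each projected pixel position of a polygon.
    """
    depth_buff = [[1.0] * width for _ in range(height)]            # 1. init to max depth
    frame_buff = [[backgnd_color] * width for _ in range(height)]  # 1. init to background

    for poly in polygons:                       # 2. process each polygon
        for x, y, z, color in rasterize(poly):
            if z < depth_buff[y][x]:            # closer than anything seen so far
                depth_buff[y][x] = z
                frame_buff[y][x] = color
    return frame_buff
```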
Given the depth values for the vertex positions of any polygon in a scene, we can
calculate the depth at any other point on the plane containing the polygon.
At surface position (x, y), the depth is calculated from the plane equation as
z = (−Ax − By − D) / C
For any scan line (Figure below), adjacent horizontal x positions across the line
differ by ±1, and vertical y values on adjacent scan lines differ by ±1.
If the depth of position (x, y) has been determined to be z, then the depth z′ of the
next position (x + 1, y) along the scan line is obtained from the equation above as
z′ = z − A/C
The ratio −A/C is constant for each surface, so succeeding depth values across a scan line
are obtained from preceding values with a single addition.
Processing pixel positions from left to right across each scan line, we start by calculating
the depth on a left polygon edge that intersects that scan line (Figure 6).

For each successive position across the scan line, we then calculate the depth value using
Eq. above.
We can implement the depth-buffer algorithm by starting at a top vertex of
the polygon.
Then, we could recursively calculate the x-coordinate values down a left edge
of the polygon.
The x value for the beginning position on each scan line can be calculated
from the beginning (edge) x value of the previous scan line as x′ = x − 1/m,
where m is the slope of the edge. Depth values down this edge are obtained
recursively as
z′ = z + (A/m + B) / C

If we are processing down a vertical edge, the slope is infinite and the recursive
calculation reduces to
z′ = z + B/C
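A small sketch of the incremental depth update across one scan line, under the same assumptions (plane coefficients A, B, C, D with C ≠ 0):

```python
def scanline_depths(x_start, x_end, y, A, B, C, D):
    """Incremental depth calculation across one scan line.

    The depth at the left edge is computed once from the plane equation; each
    step to the right then adds the constant -A/C instead of re-evaluating it.
    """
    z = (-A * x_start - B * y - D) / C   # depth at the left polygon edge
    dz = -A / C                          # constant per-pixel increment
    depths = []
    for x in range(x_start, x_end + 1):
        depths.append((x, z))
        z += dz
    return depths
```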
For polygon surfaces, the depth-buffer method is very easy to implement, and it
requires no sorting of the surfaces in a scene.
But it does require the availability of a second buffer in addition to the refresh
buffer.
A system with a resolution of 1280 × 1024, for example, would require over 1.3
million positions in the depth buffer, with each position containing enough bits to
represent the number of depth increments needed.
One way to reduce storage requirements is to process one section of the scene at a
time, using a smaller depth buffer.
After each view section is processed, the buffer is reused for the next section.
Example
Let's consider three triangles:

Triangle 1: (Red)

○ Vertices: A(1,1,0.2), B(4,1,0.2), C(2.5,4,0.2)

Triangle 2: (Blue)

○ Vertices: D(2,2,0.5), E(5,2,0.5), F(3.5,5,0.5)

Triangle 3: (Green)

● Vertices A=(1, 1, 0.5), B=(4, 1, 2.0), and C=(2.5, 4, 1.0)

Implement Depth buffer…

We need to compute the depth (Z-value) for any pixel (x, y) inside the triangle using
the plane equation.
Step 1: Initialize Buffers
● Depth Buffer: Set all values to 1.0 (max depth).
● Frame Buffer: Set all pixel colors to the background.

Step 2: Rasterize Each Triangle


● Compute the depth (Z) value for each pixel.
● Compare Z with the existing depth buffer:
○ If the new Z is smaller, update both framebuffer (color) and depth buffer
(Z-value).
○ If the new Z is larger, discard it.
For a given pixel (x, y), the depth Z is computed using the plane equation
Ax + By + Cz + D = 0, that is, z = (−Ax − By − D) / C.
Given the vertices of a triangle, we can compute A, B, C, and D using the normal
vector.
The normal vector to a plane is found using the cross-product of two edges.
For Triangle 1 (Red):
Take two edges:
Edge 1: AB=(4−1,1−1,0.2−0.2)=(3,0,0)
Edge 2: AC=(2.5−1,4−1,0.2−0.2)=(1.5,3,0)
N = AB × AC = (0·0 − 0·3, 0·1.5 − 3·0, 3·3 − 0·1.5)

N = (0, 0, 9)

So, (A, B, C) = (0, 0, 9).

● Using point A (1,1,0.2) to find D:

0(1)+0(1)+9(0.2)+D=0
D=−1.8

Thus, the plane equation for Triangle 1 is:

0x+0y+9z−1.8=0

z=1.8/9=0.2

Since C ≠ 0, Z remains constant at 0.2 for every pixel in this triangle.
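The same plane-coefficient computation can be sketched as a small helper function (illustrative only; the test values are those of Triangle 1 above):

```python
import numpy as np

def plane_coefficients(p0, p1, p2):
    """Return (A, B, C, D) of the plane through three points.

    (A, B, C) is the normal from the cross product of two edges;
    D comes from substituting p0 into Ax + By + Cz + D = 0.
    """
    p0, p1, p2 = np.array(p0, float), np.array(p1, float), np.array(p2, float)
    normal = np.cross(p1 - p0, p2 - p0)   # (A, B, C)
    d = -np.dot(normal, p0)               # D = -(A*x0 + B*y0 + C*z0)
    return (*normal, d)

# Triangle 1 (Red): expect approximately (0, 0, 9, -1.8)
print(plane_coefficients((1, 1, 0.2), (4, 1, 0.2), (2.5, 4, 0.2)))
```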


For Triangle 2 (Blue):
Take two edges:
Edge 1: DE = (5−2, 2−2, 0.5−0.5) = (3, 0, 0)
Edge 2: DF = (3.5−2, 5−2, 0.5−0.5) = (1.5, 3, 0)
Compute the cross product:
N = DE × DF = (0, 0, 9), so (A, B, C) = (0, 0, 9).
Using vertex D (2, 2, 0.5) to find the plane coefficient D:
0(2) + 0(2) + 9(0.5) + D = 0
D = −4.5
Thus, the plane equation for Triangle 2 is:
0x + 0y + 9z − 4.5 = 0
z = 4.5/9 = 0.5
Since C ≠ 0, Z remains constant at 0.5 for every pixel in this triangle.
Step 3: Compare Depth Values
For each pixel (x, y):
● Triangle 1 has Z = 0.2
● Triangle 2 has Z = 0.5
Since 0.2 is smaller than 0.5, Triangle 1 (Red) remains visible, and
Triangle 2 (Blue) is hidden where they overlap.
The pixels of Triangle 1 replace those of Triangle 2 in the frame buffer.
Here, Triangle 1 is closer (Z = 0.2) than Triangle 2 (Z = 0.5).
The expected output is that Triangle 1 occludes Triangle 2 where they
overlap.
A drawback of the depth-buffer method is that it identifies only one visible surface at each
pixel position.

In other words, it deals only with opaque surfaces and cannot accumulate color values for
more than one surface, as is necessary if transparent surfaces are to be displayed.
III) Scan-Line Method
This image-space method for identifying visible surfaces computes and
compares depth values along the various scan lines for a scene.
As each scan line is processed, all polygon surface projections intersecting
that line are examined to determine which are visible.
Across each scan line, depth calculations are performed to determine which
surface is nearest to the view plane at each pixel position.
When the visible surface has been determined for a pixel, the surface color
for that position is entered into the frame buffer.
IV) A-buffer method

- expands the depth-buffer algorithm so that each position in the buffer can
reference a linked list of surfaces.
- This allows a pixel color to be computed as a combination of different
surface colors for transparency or antialiasing effects.
Each position in the A-buffer has two fields:
• Depth field: Stores a real-number value (positive, negative, or zero).
• Surface data field: Stores surface data or a pointer.
If the depth field is nonnegative, the number stored at that position is the
depth of a surface that overlaps the corresponding pixel area.
The surface data field then stores various surface information, such as the
surface color for that position and the percent of pixel coverage.
If the depth field for a position in the A-buffer is negative, this indicates
multiple-surface contributions to the pixel color.
The surface data field then stores a pointer to a linked list of surface data, as in
the figure below.
Surface information in the A-buffer includes

• RGB intensity components

• Opacity parameter (percent of transparency)

• Depth

• Percent of area coverage

• Surface identifier

• Other surface-rendering parameters


Using the opacity factors and percent of surface coverage, the rendering
algorithms calculate the color for each pixel as an average of the contributions
from the overlapping surfaces.
(*In the A-buffer algorithm, the depth field usually holds a nonnegative value
representing the distance from the viewer to a single surface. However, if a pixel
receives contributions from more than one surface, the algorithm may use a
negative value in the depth field as a flag.)
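As a rough sketch, one way to represent such a buffer position as a data structure (the field and class names are illustrative, not a standard API):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SurfaceData:
    """Per-surface information kept for one pixel (see the list above)."""
    rgb: tuple        # RGB intensity components
    opacity: float    # opacity parameter (percent of transparency)
    depth: float      # depth of this surface at the pixel
    coverage: float   # percent of pixel area covered
    surface_id: int   # surface identifier

@dataclass
class ABufferEntry:
    """One A-buffer position: a single surface, or a list of contributions.

    A non-negative depth means `surfaces` holds exactly one entry; a negative
    depth is the flag meaning several surfaces contribute to this pixel.
    """
    depth: float = 1.0
    surfaces: List[SurfaceData] = field(default_factory=list)
```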
V) Depth-Sorting Method

Using both image-space and object-space operations, the depth-sorting method
performs the following basic functions:
1. Surfaces are sorted in order of decreasing depth.
2. Surfaces are scan-converted in order, starting with the surface of greatest depth.
This visibility-detection method is often referred to as the painter’s algorithm.
Why? In creating an oil painting, an artist first paints the background colors.
Next, the most distant objects are added, then the nearer objects, and so forth.
At the final step, the foreground is painted on the canvas over the background
and the more distant objects.
Each color layer covers up the previous layer.
Using a similar technique, we first sort surfaces according to their distance from the view
plane.

The color values for the farthest surface can then be entered into the refresh buffer.

Taking each succeeding surface in turn (in decreasing depth order), we “paint” the surface
onto the frame buffer over the colors of the previously processed surfaces (blended color).
Steps: (Painting polygon surfaces into the frame buffer according to depth is carried out in several
steps.)

1) Assuming we are viewing along the z direction, surfaces are ordered on the first pass
according to the smallest z value on each surface.
2) The surface S at the end of the list (with the greatest depth) is then compared to the
other surfaces in the list to determine whether there are any depth overlaps.
3) If no depth overlaps occur, S is the most distant surface and it is scan-converted.
Figure shows two surfaces that overlap in the xy plane but have no depth overlap.
4) This process is then repeated for the next surface in the list.
5) So long as no overlaps occur, each surface is processed in depth order until all have
been scan-converted.
If a depth overlap is detected at any point in the list, we need to make some additional
comparisons to determine whether any of the surfaces should be reordered
Let us take the polygon P at the end of the list.
Consider all polygons Q whose z-extents overlap P’s.
Before drawing P, we make the following tests.
If any of the following tests is positive, then we can assume P can
be drawn before Q.
1. Do the x-extents not overlap?
2. Do the y-extents not overlap?
3. Is P entirely on the opposite side of Q’s plane from the viewpoint?
4. Is Q entirely on the same side of P’s plane as the viewpoint?
5. Do the projections of the polygons not overlap?
If all the tests fail, then:
we split either P or Q using the plane of the other.
The new cut polygons are inserted into the depth order and the process
continues.
We make the following tests for each surface that has a depth overlap with S. If any
one of these tests is true, no reordering is necessary for S and the surface being tested.
The tests are listed in order of increasing difficulty: (explained in previous two slides)

1. The bounding rectangles (coordinate extents) in the xy directions for the two
surfaces do not overlap.

2. Surface S is completely behind the overlapping surface relative to the viewing


position.

3. The overlapping surface is completely in front of S relative to the viewing position.

4. The boundary-edge projections of the two surfaces onto the view plane do not
overlap
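As an illustration, the first and cheapest of these tests, comparing the bounding extents of two surfaces, might be sketched as follows (a hypothetical helper, not code from the text):

```python
def extents_overlap(p_vertices, q_vertices, axis):
    """Test 1: do the coordinate extents of two surfaces overlap along one axis?

    Vertices are (x, y, z) tuples; axis is 0 for x, 1 for y, 2 for z.
    If the x or y extents do not overlap, no reordering of S and the tested
    surface is needed.
    """
    p_min = min(v[axis] for v in p_vertices)
    p_max = max(v[axis] for v in p_vertices)
    q_min = min(v[axis] for v in q_vertices)
    q_max = max(v[axis] for v in q_vertices)
    return p_max >= q_min and q_max >= p_min
```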
The algorithm follows these steps:

1. Sort all polygons (or objects) by depth (Z-value).


○ Objects farthest from the camera are drawn first.
○ Objects closer to the camera are drawn last.
2. Render objects in sorted order.
○ Each object is drawn on top of the previous ones.
○ Overlapping objects naturally obscure those behind them.
3. Repeat for every frame to handle dynamic scenes.

This method assumes that later drawings will always cover earlier ones, effectively handling hidden
surface removal.
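Setting aside the overlap tests and polygon splitting, the basic sort-and-paint loop can be sketched as follows (depth() and scan_convert() are assumed, caller-supplied helpers, not part of the text):

```python
def painters_algorithm(polygons, frame_buffer, depth, scan_convert):
    """Painter's algorithm sketch: paint surfaces in back-to-front order.

    `depth(poly)` and `scan_convert(poly, buffer)` are caller-supplied
    (hypothetical) helpers; a larger depth value is assumed to mean farther
    from the viewer, and any unresolved depth overlaps are assumed to have
    been handled already by the tests above.
    """
    for poly in sorted(polygons, key=depth, reverse=True):  # farthest first
        scan_convert(poly, frame_buffer)   # nearer polygons overwrite farther ones
    return frame_buffer
```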
Painter’s Algorithm is useful in scenarios like:

● Rendering transparent objects (e.g., glass, water, fog).


● Offline rendering where sorting isn't expensive.
● Older graphics systems without hardware Z-buffer support.
VI) BSP-Tree Method

A BSP tree is a binary tree that partitions space into two halves at each
node using a hyperplane (object space technique).
Each node in the tree represents a region of space, and the two children of
that node represent the subregions on either side of the partitioning
hyperplane.
Applying a BSP tree to visibility testing involves identifying surfaces that
are behind or in front of the partitioning plane at each step of the space
subdivision, relative to the viewing direction.
With plane P1, we first partition the space into two sets of objects.
One set of objects is in back of plane P1 relative to the viewing direction, and the
other set is in front of P1.
Because one object is intersected by plane P1, we divide that object into two
separate objects, labeled A and B.
Objects A and C are in front of P1, and objects B and D are behind P1.
Because each object list contains more than one object, we partition the space again
with plane P2, recursively processing the front and back object lists.

This process continues until all object lists contain no more than one object.

This partitioning can be easily represented using a binary tree such as the one shown
in Figure.

In this tree, the objects are represented as terminal nodes, with front objects
occupying the left branches and back objects occupying the right branches.

The location of an object in the tree exactly represents its position relative to each of
the partitioning planes.
When the BSP tree is complete, we interpret the tree relative to the position of
our viewpoint, beginning at the root node.
If the viewpoint is in front of that partitioning plane, we recursively process
the back subtree, then recursively process the front subtree.
If the viewpoint is behind the partitioning plane, we process the front subtree
followed by the back subtree.
Thus, the surfaces are generated for display in the order back to front, so that
foreground objects are painted over the background objects.
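That back-to-front traversal can be sketched recursively as follows (a minimal illustration; the node structure and draw routine are assumptions, not from the text):

```python
def in_front_of(point, plane):
    """True if `point` lies on the front side of plane (A, B, C, D)."""
    A, B, C, D = plane
    x, y, z = point
    return A * x + B * y + C * z + D > 0

def paint_back_to_front(node, viewpoint, draw):
    """Traverse a BSP tree so that surfaces are drawn back to front.

    `node` is assumed to have `plane`, `front`, `back`, and `polygons`
    attributes (an illustrative structure); `draw` paints one polygon into
    the frame buffer.
    """
    if node is None:
        return
    if in_front_of(viewpoint, node.plane):
        paint_back_to_front(node.back, viewpoint, draw)   # farther half first
        for poly in node.polygons:
            draw(poly)
        paint_back_to_front(node.front, viewpoint, draw)  # nearer half last
    else:
        paint_back_to_front(node.front, viewpoint, draw)
        for poly in node.polygons:
            draw(poly)
        paint_back_to_front(node.back, viewpoint, draw)
```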
Fast hardware implementations for constructing and processing BSP trees are
used in some systems.
BSP trees partition the scene into convex regions, which allows for a
systematic, back-to-front (or front-to-back) traversal.
This makes it easier to decide which surfaces are visible without needing to
test every pixel individually.
VII) Area Subdivision

An image-space method, although object-space operations can be used to
accomplish depth ordering of surfaces.
The area-subdivision method takes advantage of area coherence in a scene by
locating those projection areas that represent part of a single surface.
We apply this method by successively dividing the total view-plane area into
smaller and smaller rectangles until each rectangular area satisfies one of the following:
- it contains the projection of part of a single visible surface (only one surface,
or part of one surface, is visible, so the region is "simple" because there is no
ambiguity about what is seen in that area),
- it contains no surface projections (no part of any surface appears in that area,
which makes it trivial to process since there is nothing to render), or
- the area has been reduced to the size of a pixel (if the region is that small,
then even if it contains parts of several surfaces, its final color can be decided by
sampling or blending because further subdivision is unnecessary).
There are four possible relationships that a surface can have with an area of the subdivided
view plane.

We can describe these relative surface positions using the following classifications:
- Surrounding surface: a surface that completely encloses the area.
- Overlapping surface: a surface that is partly inside and partly outside the area.
- Inside surface: a surface that is completely inside the area.
- Outside surface: a surface that is completely outside the area.
An easy way to do this is to successively divide the area into four equal parts at
each step, as shown in Figure 20.
This approach is similar to that used in constructing a quadtree.
A viewing area with a pixel resolution of 1024 × 1024 could be subdivided ten
times in this way before a subarea is reduced to the size of a single pixel.
No further subdivisions of a specified area are needed if one of the
following conditions is true.
Condition 1: An area has no inside, overlapping, or surrounding surfaces
(all surfaces are outside the area).
Condition 2: An area has only one inside, overlapping, or surrounding
surface.
Condition 3: An area has one surrounding surface that obscures all other
surfaces within the area boundaries (even if there are other surfaces present behind or
inside the area, the single "surrounding" surface is the only one that will be visible within that
region. Therefore, the final color (or other visual attribute) of that entire area is determined
solely by that surface, and no additional subdivision is necessary to resolve hidden details.)
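A schematic of this recursive subdivision, with placeholder predicates standing in for the classification tests (none of these names come from the text):

```python
def subdivide(area, surfaces, render):
    """Area-subdivision sketch: recurse until one of the stopping conditions holds.

    The predicates used here (is_outside, has_obscuring_surround, is_pixel_sized,
    closest_surface, split_into_four) are placeholders for the classification
    tests described above; they are not part of any real library.
    """
    relevant = [s for s in surfaces if not is_outside(s, area)]
    if not relevant:                               # Condition 1: all surfaces outside
        return
    if len(relevant) == 1:                         # Condition 2: a single surface
        render(relevant[0], area)
        return
    if has_obscuring_surround(relevant, area):     # Condition 3: one surface hides the rest
        render(closest_surface(relevant, area), area)
        return
    if is_pixel_sized(area):                       # pixel-sized: resolve by sampling/blending
        render(closest_surface(relevant, area), area)
        return
    for quadrant in split_into_four(area):         # quadtree-style split into four parts
        subdivide(quadrant, relevant, render)
```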
Ray tracing - self-study component
Comparison of visible-surface detection method

The effectiveness of a visible-surface detection method depends on the
characteristics of a particular application.
If the surfaces in a scene are widely distributed along the viewing direction so
that there is very little depth overlap, a depth-sorting or BSP-tree method is
often most efficient.
When there are few overlaps of the surface projections on the view plane, a
scan-line or area-subdivision approach is a fast way to locate visible surfaces.
As a general rule, either the depth-sorting algorithm or the BSP-tree method is a
highly effective approach for scenes with only a few surfaces.

We can use the scan-line, depth-sorting, or BSP-tree method to identify visible
surfaces effectively for scenes with up to several thousand polygon surfaces.

With scenes that contain more than a few thousand surfaces, the depth-buffer
method or octree approach performs best.
The depth-buffer method exhibits relatively low performance with simple scenes
and relatively high performance with complex scenes.
BSP trees are useful when multiple views are to be generated using different view
reference points.
If a scene contains curved-surface representations, we can use octree or
ray-casting methods to identify visible parts of the scene.
Rough

When using perspective projection, back-face detection is applied before
rendering the 2D scene to eliminate faces that are not visible to the viewer.
In perspective projection, all faces of 3D objects are projected into a 2D scene.
This means: Each 3D point (x, y, z) is transformed into a 2D screen coordinate
(x', y') using perspective transformation.
Depth (z-value) is preserved in a depth buffer to determine visibility.
Even hidden surfaces are initially projected, but visibility tests (like Z-buffering)
ensure that only the closest surface is displayed.
● The 3D scene is transformed into a normalized viewing space where:
○ Z = 0 represents the near clipping plane (the closest possible depth).
○ Z = 1 represents the far clipping plane (the farthest possible depth).
● Depth values outside this range are clipped (discarded).

Normalization ensures that all depth values fall within a fixed interval,
regardless of the actual scene depth range.
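For a simple linear mapping, this normalization can be written as z_norm = (z − z_near) / (z_far − z_near), so that z = z_near maps to 0 and z = z_far maps to 1. (This linear form is only an illustration; perspective projections commonly store a non-linear function of z in the depth buffer.)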
Perspective projection does not remove hidden surfaces by itself.
Instead, it flattens 3D objects into a 2D view while retaining depth information.
Once in 2D, depth values help determine which surfaces are visible using
algorithms like:
● Z-buffering (Depth Buffer)
● Painter’s Algorithm
● Scanline Z-buffering
● Ray tracing
These algorithms can be applied with parallel projections as well as perspective projections.

Do not overlap: After projecting the 3D shapes onto a 2D plane, each polygon occupies a
distinct, non-intersecting region on the screen. In other words, the 2D areas where each
polygon is drawn do not cover or interfere with one another.

The view plane is an imaginary flat surface onto which the 3D scene is projected to
create the 2D image, while the camera view position refers to the actual location of the
virtual camera in the 3D space from which it views the scene; essentially, the camera
is "looking" at the view plane.
