Game Programming
A. Avinash, Ph.D.,
Assistant Professor
School of Computer Science and Engineering (SCOPE)
Vellore Institute of Technology (VIT), Chennai
RAY
A ray is a mathematical object that has an origin (starting
position) and a direction.
The direction specifies where the ray is going, and it is
considered infinite.
ray 🡪 infinite line
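Parametrically, every point on a ray is origin + t · direction for t ≥ 0. A minimal sketch (names are illustrative):

```python
# A ray: every point on it is origin + t * direction, for t >= 0.
def point_on_ray(origin, direction, t):
    """Return the point at parameter t along the ray (component-wise)."""
    return tuple(o + t * d for o, d in zip(origin, direction))

ray_origin = (0.0, 0.0, 0.0)
ray_direction = (0.0, 0.0, 1.0)  # unit-length direction

print(point_on_ray(ray_origin, ray_direction, 5.0))  # (0.0, 0.0, 5.0)
```

Because t is unbounded above, the ray extends infinitely far in its direction.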
RAY CASTING VS RAY TRACING
• Ray casting: uses raster images with no ray splitting; the shading of a ray depends only on the first object it hits; not recursive; faster.
• Ray tracing: uses the physical properties of light; follows the reflection and refraction paths of each ray; recursive; slower.
RAY CASTING
Ray casting is a technique for testing objects against a ray. It can be used to test
whether one object can see another, and for collision detection, rendering,
user interaction, and so forth.
RAY TRACING
• A technique for generating rendered images on the screen
• Used in the entertainment industry (e.g., animated movies)
RAY TRACING
• Ray tracing is a rendering technique that can realistically
simulate the lighting of a scene and its objects by rendering
physically accurate reflections, refractions, shadows, and
indirect lighting.
• Ray tracing generates computer graphics images by tracing the
path of light from the view camera (which determines your view
into the scene), through the 2D viewing plane (pixel plane), out
into the 3D scene, and back to the light sources.
RAY TRACING
• As it traverses the scene, the light may reflect from one object
to another (causing reflections), be blocked by objects (causing
shadows), or pass through transparent or semi-transparent
objects (causing refractions).
• All of these interactions are combined to produce the final color
and illumination of a pixel that is then displayed on the screen.
This reverse tracing process of eye/camera to light source is
chosen because it is far more efficient than tracing all light rays
emitted from light sources in multiple directions.
WHY RAY TRACING
RAY TRACING
The objects you’re seeing are illuminated by beams of light.
Now turn that around and follow the path of those beams backwards
from your eye to the objects that light interacts with. That’s ray tracing.
RAY TRACING
RAY TRACING
• In ray tracing the idea is to create and cast multiple rays
from all light sources into a scene going in different
directions.
• If a ray hits an object in the scene and if that object is in the
camera’s view, then that point of intersection is recorded.
• That 3D point of intersection can be projected to screen
space where the corresponding pixel takes on the color of
the object that was intersected.
• Types
• Forward Ray Tracing
• Backward Ray Tracing
TYPES OF RAY TRACING
Forward Ray Tracing        Backward Ray Tracing
RAY TRACING ALGORITHMS
rasterized shadows ray-traced shadows
Ray Tracing Algorithm-How it works
1. Create an image made of pixels.
2. For each pixel in the image, shoot a primary ray into the scene.
   • The direction of that primary ray is obtained by tracing a line
     from the eye through the center of that pixel.
3. Check every object in the scene to see whether the ray intersects any of them.
4. Select the intersection point closest to the eye.
5. Then shoot a shadow ray from that intersection point to the light.
   • If this shadow ray does not intersect an object on its way to
     the light, the hit point is illuminated.
   • If it does intersect another object, that object casts a shadow
     on the hit point.
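The steps above can be sketched against a scene of spheres and a single point light (a minimal illustration, not Unity's API; the scene setup and all names are assumptions):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest positive hit distance t along the ray, or None (direction must be unit-length)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None   # small epsilon avoids self-intersection

def trace(origin, direction, spheres, light_pos):
    # Steps 3-4: test every object, keep the closest intersection.
    hits = [(t, s) for s in spheres
            if (t := ray_sphere(origin, direction, s[0], s[1])) is not None]
    if not hits:
        return "background"
    t_hit = min(h[0] for h in hits)
    hit_point = [o + t_hit * d for o, d in zip(origin, direction)]
    # Step 5: shoot a shadow ray from the hit point toward the light.
    to_light = [l - p for l, p in zip(light_pos, hit_point)]
    dist = math.sqrt(sum(x * x for x in to_light))
    shadow_dir = [x / dist for x in to_light]
    blocked = any((t2 := ray_sphere(hit_point, shadow_dir, s[0], s[1])) is not None
                  and t2 < dist for s in spheres)
    return "in shadow" if blocked else "lit"

sphere = ((0.0, 0.0, 5.0), 1.0)          # (center, radius)
print(trace((0, 0, 0), (0, 0, 1), [sphere], (0.0, 5.0, 4.0)))  # lit
```

Adding a second sphere between the hit point and the light flips the result to "in shadow".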
RAY TRACING ALGORITHMS-Pseudocode
RAY TRACING ALGORITHMS
RAY TRACING-UNITY
• The Raycast function is extremely useful for creating connections
between objects in Unity.
• For example,
• if you want to select an object just by looking at it or clicking on
it.
• Or if you want to shoot an object with a weapon or obstruct a
line of sight.
• Or perhaps if you need to check how high the player is from the
ground, or what it is they’re standing on.
RAY TRACING-UNITY
The Raycast function in Unity allows you to check if a Ray collides
with another object in the scene, saving the hit data to a RaycastHit
variable if it does.
RAY TRACING ALGORITHMS
https://www.pcmag.com/how-to/what-is-ray-tracing-and-what-it-means-for-pc-gaming
https://www.youtube.com/watch?v=gsZiJeaMO48
https://www.youtube.com/watch?v=H-CMDCmehy4
https://gamedevbeginner.com/raycasts-in-unity-made-easy/
LIGHT SOURCES
• Lighting is often managed in a game engine by a system
of lights and materials that specify how objects in the
scene are illuminated and interact with light.
• In order to calculate the shading of a GameObject that is
affected by lighting, Unity needs to know the intensity,
direction, and color of the light that falls on it. This
information is provided by light sources.
TYPES OF LIGHT SOURCES
• Point Light: located at a point in the Scene and emits light in all directions equally.
• Spot Light: located at a point in the Scene and emits light in a cone shape.
• Directional Light: located infinitely far away and emits light in one direction only.
• Area Light: defined by a rectangle or disc in the Scene and emits light in all directions uniformly across its surface area, but only from one side.
TYPES OF LIGHT SOURCES IN A SCENE
Spot Light
Point Light
Directional
Light Area Lights
TYPES OF LIGHT SOURCES
• Point Light: the intensity diminishes with distance from the light, reaching zero at a specified range; light intensity is inversely proportional to the square of the distance from the source. Useful for simulating lamps, candles, sparks, or explosions, and for dynamic lighting effects such as flickering lights or pulsing neon signs.
• Spot Light: constrained to an angle, resulting in a cone-shaped region of illumination; the center of the cone points in the forward (Z) direction of the light object, and light diminishes at the edges of the cone. Used for artificial light sources such as flashlights, car headlights, and searchlights.
• Directional Light: all objects in the scene are illuminated as if the light always comes from the same direction; the distance of the light from the target object isn't defined, so the light doesn't diminish. Used to simulate the sun or moon and to replicate natural lighting effects.
• Area Light: the intensity of the illumination diminishes at a rate determined by the inverse square of the distance from the light source. Used to create a realistic streetlight or a bank of lights close to the player.
LIGHTS IN UNITY
Create a light of the desired type and place it where you
want it in the scene
GameObject > Light > Point Light
• A directional light can be placed anywhere in the scene (unless it’s using a Cookie) with the
forward/Z axis indicating the direction.
• A Spot Light also has a direction but since it has a limited range, its position does matter.
• The shape parameters of spot, point, and area lights can be adjusted from the inspector or by
using the lights’ Gizmos directly in the scene view
A light’s color and intensity (brightness) are properties you can set from the
inspector.
LIGHTING TERMS
• Emissive
• Ambient
• Diffuse
• Specular
• Attenuation
• Visibility
Emissive (glowing object)
• Surface: light that an object itself emits, making it appear self-luminous.
• Very expensive: requires code to calculate the light leaving and entering each surface throughout the scene several times to calculate this one term.
• Global illumination: used for preprocessed lighting algorithms such as radiosity.
• Constant color: emissive lighting is often not used, or is a small constant value.
Emissive = e
Ambient
• Surface: incident light coming into the surface; indirect, scattered light in a scene that is reflected off multiple surfaces.
• Very expensive: requires code to calculate the light leaving and entering each surface throughout the scene several times to calculate this one term.
• Global illumination: used for preprocessed lighting algorithms such as radiosity.
• Constant color: ambient lighting is often not used, or is a small constant value.
• Ambient occlusion: the calculation of per-vertex or per-pixel visibility terms that is multiplied by the ambient light.
Ambient = ka * light
Diffuse
• Surface: light gets reflected and scattered in many different directions after hitting the surface. Examples: carpet, cloth; dirt, rough rock; dry grass.
• Rough surface: a diffuse surface is microscopically rough, causing light to reflect in many directions rather than mirroring the incoming ray's direction.
• Scattered: because the light is scattered essentially all around each point of the surface, the viewing angle at which a diffusely lit surface is viewed does not affect how it appears.
Diffuse = kd * diffuse_cal
Specular (shininess of object)
• Surface: light is reflected in, or near, the mirror (opposite) direction of the incoming light. Examples: plastic, metal, polished leather.
• Smooth surface: able to reflect light sharply in the mirror direction.
• Mirror image: mirrors are so smooth that they reflect light nearly perfectly, creating a direct mirror image on the surface.
Specular = ks * specular_cal
Visibility
Visibility is a Boolean flag used to represent whether a light ray is being blocked by another piece of geometry.
[Figure: rays from the light source are blocked by another surface, casting shadows on part of the surface being rendered.]
Attenuation
⮚ It is used to represent how light loses intensity over distance.
⮚ Assuming the light does not become more intense as it travels through distance, this
value is often between 0 and 1.
⮚ Attenuation only affects the diffuse and specular lights.
Light = Emissive + Ambient + Visibility * (Diffuse + Specular) * Attenuation
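Evaluating the combined equation per RGB channel is straightforward; a minimal sketch with illustrative values:

```python
def shade(emissive, ambient, diffuse, specular, visibility, attenuation):
    """Light = Emissive + Ambient + Visibility * (Diffuse + Specular) * Attenuation,
    applied per RGB channel. visibility is 1.0 (lit) or 0.0 (blocked)."""
    return tuple(e + a + visibility * (d + s) * attenuation
                 for e, a, d, s in zip(emissive, ambient, diffuse, specular))

# A fully shadowed point keeps only its emissive and ambient terms.
print(shade((0, 0, 0), (0.1, 0.1, 0.1), (0.6, 0.3, 0.2), (0.2, 0.2, 0.2),
            visibility=0.0, attenuation=0.8))  # (0.1, 0.1, 0.1)
```

Note that visibility zeroes out only the diffuse and specular terms, which matches the statement that attenuation (and shadowing) affect only those two contributions.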
LIGHT MODELS
A lighting model refers to an algorithm that is used to perform
lighting.
Two of the most popular types of lighting:
• Lambert diffuse
(diffuse)
• Phong lighting
(diffuse+specular)
LIGHT MODELS-Lambert diffuse
• Lambertian Reflectance, or Diffuse Light, is one of the most common
lighting models used
• Use Lambert’s law to compute the diffuse light
• This lighting model is view independent, meaning the surface's apparent
brightness should not change based on viewpoint
• It is good for rough surfaces without specular highlights
• The Lambertian diffuse model assumes that light reflected from a rough
surface is dependent only on the surface normal and light direction, and
therefore a Lambertian surface is equally bright in all directions.
Lambert’s law
The intensity of light reflected from a surface is
proportional to the cosine of the angle between
the light source and the surface normal
Lambertian reflectance is defined as that
property of substances due to which they appear
equally bright when viewed from any angle.
LIGHT MODELS-Lambert diffuse
• Because of the diffusion of light, it does not matter at which angle we look
at the material.
LIGHT MODELS-Lambert diffuse
• All that matters is on which angle the light actually reaches the material.
The reflected intensity decreases as the angle
between the light direction and the surface normal
increases, which means the brightness is highest
when the light strikes the surface head-on, along
the normal.
LIGHT MODELS-Lambert diffuse
How much light reaches a surface
unit?
the amount of light received
and thus reflected is directly
related to the angle between
the surface and the light
direction
LIGHT MODELS-Lambert diffuse
Lambert’s law states that the diffuse contribution of the lighting
equation can be calculated using two vectors
• vertex normal (normal vector)
• light vector
The light vector is calculated by subtracting the vertex position
from the light's position and normalizing the result, so that it
points from the surface toward the light.
light vector = normalize(light position - vertex position)
Since the only two properties used are the light and normal
vectors, the camera’s position and orientation would not affect
how a diffuse surface appears.
LIGHT MODELS-Lambert diffuse
• The angle between two vectors that are
unit-length (normalized) can be calculated
by using the dot product as follows.
Angle = dot(normal, light_vec)
• Using the dot product on two unit-length
vectors gives the cosine angle between
them, and this value’s maximum is 1 for
unit-length vectors.
• The angle value can be multiplied by the
light’s color to calculate the diffuse
contribution as follows.
Diffuse = light_color * Angle
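Because both vectors are unit-length, the dot product directly yields cos(θ); a quick check with an illustrative 60-degree light angle:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

n = (0.0, 1.0, 0.0)                        # surface normal (unit length)
l = (0.0, math.cos(math.radians(60)),      # light vector 60 degrees off the normal
     math.sin(math.radians(60)))
angle_term = max(0.0, dot(n, l))
print(round(angle_term, 3))   # 0.5, i.e. cos(60 degrees)

light_color = (1.0, 1.0, 1.0)
diffuse = tuple(c * angle_term for c in light_color)
```

The max with 0 clamps away surfaces facing away from the light, which would otherwise produce a negative contribution.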
LIGHT MODELS-Lambert diffuse
float3 SurfaceColor;     // object's color
float3 LightColor;       // light's color * intensity
float LightAttenuation;  // value of light at this point (shadow/falloff)
float NdotL = max(0.0, dot(normalDirection, lightDirection));
float3 LambertDiffuse = NdotL * SurfaceColor;
float3 finalColor = LambertDiffuse * LightAttenuation * LightColor;
LIGHT MODELS-Lambert diffuse
Summary
LIGHT MODELS-Lambert diffuse-script
using System;
using System.Numerics; // Use System.Numerics for Vector3 support
public class LambertDiffuseModel
{
// Calculate the Lambertian Diffuse lighting
public static Vector3 CalculateLambertDiffuse(Vector3 normal, Vector3 lightDirection, Vector3 lightColor, float lightIntensity,
Vector3 surfaceColor)
{
// Normalize the normal and light direction vectors
Vector3 normalizedNormal = Vector3.Normalize(normal);
Vector3 normalizedLightDirection = Vector3.Normalize(lightDirection);
// Compute the dot product between the normal and light direction (cosine of the angle)
float dotProduct = Math.Max(0, Vector3.Dot(normalizedNormal, normalizedLightDirection));
// Calculate the diffuse intensity using Lambert's law
Vector3 diffuse = lightColor * lightIntensity * dotProduct;
// Apply the surface color to the diffuse lighting
Vector3 finalColor = Vector3.Multiply(diffuse, surfaceColor);
return finalColor;
}
LIGHT MODELS-Lambert diffuse
public static void Main(string[] args)
{
// Example values for testing
Vector3 normal = new Vector3(0, 1, 0); // Surface normal pointing upward
Vector3 lightDirection = new Vector3(0, 1, 0); // Direction from the surface toward the light (light directly above)
Vector3 lightColor = new Vector3(1, 1, 1); // White light
float lightIntensity = 1.0f; // Light intensity
Vector3 surfaceColor = new Vector3(1, 0, 0); // Red surface color
// Calculate the Lambert diffuse lighting
Vector3 resultColor = CalculateLambertDiffuse(normal, lightDirection, lightColor, lightIntensity,
surfaceColor);
// Output the result
Console.WriteLine("Diffuse color: " + resultColor);
}
}
LIGHT MODELS-Lambert diffuse
vec4(1, 1, 1, 1): This represents a color in RGBA format.
In this case, it corresponds to white (1, 1, 1) with full opacity (1)
LIGHT MODELS-Lambert diffuse
Surface Color: (0.8, 0.5, 0.3) // RGB values (normalized between 0 and 1)
Light Color: (1.0, 1.0, 1.0) // White light with full intensity
Light Attenuation: 0.8 // Light intensity at the point
Normal Direction: (0.0, 0.0, 1.0) // Normal pointing directly up
Light Direction: (0.0, 0.0, 1.0) // Light aligned with the normal
Solution:
Calculate NdotL:
NdotL = max(0.0, dot((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))
Dot product calculation:
dot((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)) = (0.0 * 0.0) + (0.0 * 0.0) + (1.0 * 1.0) = 1.0
NdotL = max(0.0, 1.0) = 1.0
Compute LambertDiffuse:
LambertDiffuse = NdotL × SurfaceColor = 1.0 × (0.8, 0.5, 0.3) = (0.8, 0.5, 0.3)
Determine finalColor:
finalColor = LambertDiffuse × LightAttenuation × LightColor
finalColor = (0.8, 0.5, 0.3) × 0.8 × (1.0, 1.0, 1.0) = (0.64, 0.4, 0.24)
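The hand calculation can be reproduced directly (a small sketch; the helper name is illustrative):

```python
def lambert(surface_color, light_color, attenuation, normal, light_dir):
    """Lambert diffuse: surface * max(0, N.L) * attenuation * light, per channel."""
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(s * ndotl * attenuation * lc
                 for s, lc in zip(surface_color, light_color))

final = lambert((0.8, 0.5, 0.3), (1.0, 1.0, 1.0), 0.8,
                (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
print(tuple(round(c, 2) for c in final))  # (0.64, 0.4, 0.24)
```

Reversing the light direction makes N·L negative, so the clamp to zero yields black, as expected for a surface facing away from the light.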
LIGHT MODELS-Phong Lighting
• The Phong model is a lighting model used in computer graphics
to simulate how surfaces reflect light.
• developed by Bui Tuong Phong in 1973
• It provides a way to calculate the color and brightness of a
surface point based on three main components: ambient,
diffuse, and specular reflection.
LIGHT MODELS-Phong Lighting
LIGHT MODELS-Phong Lighting
• It is based on the assumption that the way a surface reflects
light is a combination of the diffuse reflection of rough
surfaces with the specular reflection of shiny surfaces.
• The specular reflectance, unlike diffuse light, is dependent
on the view.
• As the object moves or the eyes move around the object, the
highlight changes.
LIGHT MODELS-Phong Lighting
LIGHT MODELS-Phong Lighting
• Phong specular model uses two vectors to compute the
specular contribution
The dot product of
• view vector these two vectors is
• reflected light vector often raised to an
exponent to simulate
surface roughness
• The view vector is calculated by subtracting the camera’s
position from the vertex position.
• The reflected light vector is the light vector reflected off of
the surface normal.
• As the view and reflected vectors become more aligned, the
highlight looks brighter. This is equivalent to saying that the
specular highlight is strongest when the viewer looks along the
direction in which the light reflects off the surface.
LIGHT MODELS-Phong Lighting
LIGHT MODELS-Phong Lighting
It combines three types of reflection to create a realistic
representation of how light interacts with surfaces
• ambient
• diffuse
• specular
LIGHT MODELS-Phong Lighting
If position is given instead of
vectors
LIGHT MODELS-Phong Lighting
LIGHT MODELS-Phong Lighting
LIGHT MODELS-Phong Lighting
R = 2Np - L
where Np = (N · L)N is the projection of L onto the unit normal N,
giving R = 2(N · L)N - L.
(In general, the projection of x onto v is ((x · v) / (v · v)) v;
for a unit-length N this reduces to (N · L)N.)
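For a unit normal N and light vector L, the reflection formula R = 2(N · L)N - L can be checked numerically (vectors chosen for round numbers; L is left un-normalized since the formula is scale-preserving):

```python
def reflect(light, normal):
    """R = 2*(N.L)*N - L, with N unit-length; L points from the surface toward the light."""
    ndotl = sum(n * l for n, l in zip(normal, light))
    return tuple(2.0 * ndotl * n - l for n, l in zip(normal, light))

# Light arriving 45 degrees off a flat floor reflects to the mirrored 45-degree direction.
print(reflect((1.0, 1.0, 0.0), (0.0, 1.0, 0.0)))  # (-1.0, 1.0, 0.0)
```

A light vector pointing straight along the normal reflects back onto itself, which is why a head-on light gives the brightest highlight when viewed from the same direction.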
LIGHT MODELS-Phong Lighting
function PhongLighting(surfaceNormal, lightDirection, viewDirection):
    // Normalize the input vectors
    surfaceNormal = normalize(surfaceNormal)
    lightDirection = normalize(lightDirection)
    viewDirection = normalize(viewDirection)
    // Compute Ambient lighting
    ambientComponent = ambientColor * ambientIntensity
    // Compute Diffuse lighting (Lambertian reflection)
    diffuseFactor = max(dot(surfaceNormal, lightDirection), 0)
    diffuseComponent = objectColor * lightColor * diffuseFactor * diffuseIntensity
    // Compute Specular lighting (Phong reflection)
    reflectDirection = reflect(-lightDirection, surfaceNormal)  // reflect light around the normal
    specularFactor = max(dot(viewDirection, reflectDirection), 0) ^ shininess
    specularComponent = lightColor * specularFactor * specularIntensity
    // Final color = ambient + diffuse + specular
    finalColor = ambientComponent + diffuseComponent + specularComponent
    return finalColor
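The pseudocode above translates almost line-for-line into runnable form; the material and light constants below are illustrative defaults, not values from the slides:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def phong(normal, light_dir, view_dir,
          object_color=(1.0, 0.0, 0.0), light_color=(1.0, 1.0, 1.0),
          ambient=0.1, diffuse_k=0.7, specular_k=0.5, shininess=32):
    n = normalize(normal)
    l = normalize(light_dir)   # from the surface toward the light
    v = normalize(view_dir)    # from the surface toward the camera
    # Diffuse (Lambert) term
    diff = max(0.0, sum(a * b for a, b in zip(n, l)))
    # Reflect the light vector about the normal: R = 2(N.L)N - L
    ndotl = sum(a * b for a, b in zip(n, l))
    r = tuple(2.0 * ndotl * a - b for a, b in zip(n, l))
    spec = max(0.0, sum(a * b for a, b in zip(v, r))) ** shininess
    # ambient + diffuse + specular, per RGB channel
    return tuple(ambient * oc + diffuse_k * oc * lc * diff + specular_k * lc * spec
                 for oc, lc in zip(object_color, light_color))
```

Looking straight down the reflection direction maximizes the specular term (the result can exceed 1 and would be clamped before display); a grazing view kills the highlight entirely while leaving the diffuse term untouched.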
LIGHT MODELS-Phong Lighting
MATERIALS
A material defines how a surface is rendered in
computer graphics and is usually a collection of
properties and assets that collectively create a specific
effect.
• Decals
• Bump and normal maps
• Height maps
• Gloss (specular) maps
• Alpha maps
• Displacement maps
• Programmable shaders
• Diffuse material (kd)
• Ambient material (ka)
• Specular material (ks)
• Emissive material
MATERIALS
Decals 2D or 3D textures that are applied to the surfaces of 3D models, similar
to stickers
Bump Maps grayscale images that simulate the appearance of surface texture on a
3D model.
Normal Maps an advanced type of bump map that uses RGB color information to
represent surface normals
Height maps refer to grayscale images that indicate the height at each point on a
surface.
Gloss (specular) maps control the shininess and reflective properties of a surface
Alpha maps define transparency levels in textures
Displacement maps modify the geometry of a 3D model based on height information
provided in a grayscale image
Programmable shaders custom programs written to control how graphics hardware renders
pixels and vertices in 3D graphics
Diffuse material (kd) base color of a surface under direct light
Ambient material (ka) how much ambient light is reflected by a surface regardless of direct
lighting conditions
Specular material (ks) how reflective a surface is under direct lighting conditions
MATERIALS - UNITY
Project window 🡪 Assets 🡪 Right click and choose Create 🡪 Material
Inspect its properties in Inspector tab
MATERIALS
Textures🡪 https://polyhaven.com/
BUMP MAPPING
BUMP MAPPING
• Bump mapping is a texture mapping technique in computer
graphics for simulating bumps and wrinkles on the surface of an
object.
• make a rendered surface look more realistic by simulating small
displacements of the surface
• unlike displacement mapping, the surface geometry is not
modified.
• Instead only the surface normal is modified as if the surface had
been displaced.
BUMP MAPPING
Bump Map
• Bump maps are grayscale textures you map to objects to create the illusion of
surface relief (elevations and depressions) on an otherwise flat object.
• Bump maps just change the direction of the surface’s normals based on the
bump map’s Alpha Gain value.
• Bump maps create the illusion of depth and texture on the surface of a 3D model
using computer graphics
Bump maps🡪 https://stock.adobe.com/in/search/images?k=bump%20map
terrain without a bump map (left) and with (right)
BUMP MAPPING
Bump Map –How it works
• Bump maps are grayscale images limited to 8 bits of color information:
only 256 shades ranging from black through gray to white.
• These values in a bump map tell the 3D software basically one of two
things about the surface: up or down.
• When values in a bump map are close to 50% gray, there's little to no detail that
comes through on the surface.
• When values get brighter, working their way to white, details appear to pull out
of the surface.
• When values get darker and closer to black, they appear to be pushing into the
surface.
A bump map does not alter the shape of the 3D
mesh.
Rather, the degree of grayscale at each point
determines the variation in the surface’s apparent
depth, from black (down) to white (up).
BUMP MAPPING
In bump mapping, instead of using the interpolated normal, the
algorithm uses a normal retrieved from a texture image known as a
bump map.
The pixels in a bump map allow normal vectors to be specified at a
per-pixel level.
By altering these normal vectors relative to one another, the lighting
can be changed at the pixel level. This change in lighting can be used
to create patterns and detail that aren't actually there.
These patterns simulate detail on the surface, such as cracks and
bumps, that can have real-time lighting applied to it without adding
more polygons.
BUMP MAPPING
•For a flat surface, the normal at each point is the same (perpendicular to the
plane).
•For a curved or bumpy surface, the normals vary at different points.
Normal maps use a 2D image (texture) to store information about how light
should reflect off the surface at each point. Instead of using color data (as in
a regular texture), the RGB values in a normal map represent the direction
of the normal vector at each texel (texture pixel).
•RGB values in a normal map correspond to the X, Y, and Z components of a
3D normal vector:
• R (Red) = X component of the normal vector (horizontal direction)
• G (Green) = Y component of the normal vector (vertical direction)
• B (Blue) = Z component of the normal vector (depth direction, usually
the one sticking out of the surface)
BUMP MAPPING
CREATING BUMP MAP TEXTURE
• A bump map image is essentially a color map image, but the image data
represent normal vectors instead of colors.
• In a bump map each vector has a length of 1.
• The data in images are stored as unsigned values, which means they
have a range of [0, 1].
• Normal vectors, on the other hand, are signed and have a range of [-1, 1].
• When reading a normal vector from the bump map in the pixel shader, it
must be converted to the signed range: normal = color * 2 - 1.
• To turn a signed vector into an unsigned one, which is useful when
calculating and storing normal vectors in a bump map, the conversion is
reversed: color = normal * 0.5 + 0.5.
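Converting between signed normal components ([-1, 1]) and unsigned texture values ([0, 1]) is a simple affine map, and the two conversions are inverses of each other. A quick sketch:

```python
def encode(n):
    """Signed normal component in [-1, 1] -> unsigned texture value in [0, 1]."""
    return n * 0.5 + 0.5

def decode(c):
    """Unsigned texture value in [0, 1] -> signed normal component in [-1, 1]."""
    return c * 2.0 - 1.0

# The default 'flat' normal (0, 0, 1) stores as the familiar light-blue (0.5, 0.5, 1.0).
print(tuple(encode(n) for n in (0.0, 0.0, 1.0)))  # (0.5, 0.5, 1.0)
print(tuple(decode(c) for c in (0.5, 0.5, 1.0)))  # (0.0, 0.0, 1.0)
```

This is why untouched regions of a tangent-space normal map look uniformly light blue: every texel there encodes the straight-up normal.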
BUMP MAPPING
CREATING BUMP MAP TEXTURE
• Convert the image to black and white.
• For each pixel in the height map, sample the current pixel, the
pixel to the right, and the pixel above the current pixel.
• Create two edge vectors by subtracting the above pixel from
the current pixel (edge 1) and the right pixel from the current
pixel (edge 2).
• Calculate the cross-product of the two edges.
• Normalize the cross product.
• Convert the normal to the unsigned range.
• Store the normal in the image.
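The steps above can be sketched for a tiny grayscale height map (pure Python, no image library; edge pixels reuse their nearest neighbor, and the "pixel above = image row y-1" convention is an assumption):

```python
import math

def height_to_normal(height, x, y):
    """Build one normal-map texel from a 2D grayscale height map (values in [0, 1])."""
    h, w = len(height), len(height[0])
    here  = height[y][x]
    right = height[y][min(x + 1, w - 1)]
    above = height[max(y - 1, 0)][x]
    # Two edge vectors on the height surface...
    e1 = (1.0, 0.0, right - here)   # step toward the pixel to the right
    e2 = (0.0, 1.0, above - here)   # step toward the pixel above
    # ...and their cross product, which is normal to both edges.
    n = (e1[1] * e2[2] - e1[2] * e2[1],
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0])
    length = math.sqrt(sum(c * c for c in n))
    n = tuple(c / length for c in n)
    return tuple(c * 0.5 + 0.5 for c in n)  # signed -> unsigned for storage

flat = [[0.5, 0.5], [0.5, 0.5]]
print(height_to_normal(flat, 0, 1))  # flat surface -> (0.5, 0.5, 1.0)
```

A uniform height map yields the straight-up normal everywhere, while a sloped region tilts the normal away from the rise; repeating this per pixel and writing the results back out produces the normal-map texture.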
BUMP MAPPING
CREATING BUMP MAP TEXTURE
BUMP MAPPING
REFERENCES
https://docs.unity3d.com/Manual/Lighting.html
https://www.jordanstevenstechart.com/lighting-models
http://www.cs.ucr.edu/~shinar/courses/cs130-fall-2015/lectures/Lecture8.pdf
Lambert Model
https://cglearn.eu/pub/computer-graphics/shading-and-lighting#material-lambert-lighting-model
https://www.youtube.com/watch?v=xoLpDLeiAlU
Specular reflection
https://en.m.wikipedia.org/wiki/Specular_reflection#cite_note-9
https://www.youtube.com/watch?v=VTV1GTrrtBQ&list=PLW3Zl3wyJwWMcLmUYXMIIxCiLKGOWLETh&index=9
https://www.youtube.com/watch?v=naaeH1qbjdQ&t=56s