Explanation:

1. create_cube(size=2):
a. Computes the eight 3D corner points of a cube centered at the origin, in
homogeneous coordinates.
b. size/2 gives the half-edge length s; all combinations of +/- s in x, y, and z are
listed, and a 1 is appended to each point for homogeneous form.
2. rot_y(theta):
a. Builds a 3x3 rotation matrix about the y-axis by angle theta.
b. Uses the standard cosine-sine pattern, so that applying it "turns" the camera
left or right (both helpers are sketched after this list).
3. create_camera(K, R, t):
a. Stacks rotation R and translation t into a 3x4 matrix [R|t], then left-multiplies by the
intrinsic matrix K to get the full projection matrix P = K[R|t].
4. project_points(P, points_3d):
a. Projects each homogeneous 3D point into the image (sketched, together with
create_camera, after this list):
i. Compute P @ points_3d -> 3xN homogeneous image coordinates.
ii. Divide the first two rows by the third to normalize to pixel (x, y) coordinates.
5. triangulate_point_multi(pts_2d, P_list):
a. For one 3D point seen in multiple views, sets up a linear (DLT) system A.X = 0 by
stacking two constraints per view (x.P2 - P0 and y.P2 - P1, where P0, P1, P2 are
the rows of that view's projection matrix).
b. Solves via SVD, taking the last row of Vt as the homogeneous solution and
normalizing by its fourth component (see the sketch after this list).
6. generate_camera_positions(n, radius=7):
a. Evenly samples n angles from -90 to +90 degrees, computes each camera's world
position on a circle (x = r.cos(angle), z = r.sin(angle)), then derives R = rot_y(-angle)
and t = -R.pos so the camera faces the origin (see the sketch after this list).
7. Main loop in main():
a. Initializes the cube points and the intrinsic matrix K.
b. For each n_cams from 1 to 10:
i. Gets the camera extrinsics via generate_camera_positions.
ii. Builds each projection matrix with create_camera.
iii. Projects the cube into each view.
iv. If there is only one camera, skips triangulation (reconstruction = original);
otherwise, runs triangulate_point_multi on every point.
v. Computes per-point Euclidean errors, then tracks their mean and standard
deviation.
vi. For 2+ cameras, plots a 3D scatter of original vs. reconstructed points
alongside a bar chart of the errors.
8. Final error plot:
a. After the loop, draws an error-versus-number-of-cameras graph with error bars
showing the mean +/- std (a condensed sketch of the loop and this plot follows the list).
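
The following is a minimal sketch of the first two helpers, assuming NumPy; the exact
variable names and the 4x8 return shape are assumptions, not necessarily the original code.

import numpy as np
from itertools import product

def create_cube(size=2):
    # Eight corners of an axis-aligned cube centered at the origin,
    # returned as a 4x8 array of homogeneous coordinates.
    s = size / 2.0
    corners = np.array(list(product([-s, s], repeat=3)))    # 8x3, all +/- s combinations
    return np.hstack([corners, np.ones((8, 1))]).T           # append 1, transpose to 4x8

def rot_y(theta):
    # 3x3 rotation about the y-axis by angle theta (radians),
    # using the standard cosine-sine pattern.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])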
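
A sketch of the projection step, assuming t is a length-3 vector and points_3d is a 4xN
array of homogeneous points:

import numpy as np

def create_camera(K, R, t):
    # Full 3x4 projection matrix P = K [R | t].
    Rt = np.hstack([R, t.reshape(3, 1)])
    return K @ Rt

def project_points(P, points_3d):
    # Project 4xN homogeneous 3D points into the image; return 2xN pixel coordinates.
    proj = P @ points_3d          # 3xN homogeneous image coordinates
    return proj[:2] / proj[2]     # divide the first two rows by the third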
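
A sketch of the multi-view triangulation, assuming pts_2d is a list of (x, y) observations
and P_list the matching 3x4 projection matrices:

import numpy as np

def triangulate_point_multi(pts_2d, P_list):
    # DLT triangulation of one 3D point seen in several views: stack two rows per view
    # (x*P[2] - P[0] and y*P[2] - P[1]) into A and solve A X = 0 via SVD.
    A = []
    for (x, y), P in zip(pts_2d, P_list):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.array(A))
    X = Vt[-1]                    # last row of Vt: the homogeneous solution
    return X[:3] / X[3]           # normalize by the fourth component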
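
A sketch of the camera-arc generator, reusing rot_y from above; the look-at convention
(R = rot_y(-angle), t = -R @ pos) follows the description and is assumed here:

import numpy as np

def generate_camera_positions(n, radius=7):
    # n camera poses evenly spaced on an arc from -90 to +90 degrees,
    # each oriented toward the origin. Returns a list of (R, t) pairs.
    angles = np.linspace(-np.pi / 2, np.pi / 2, n)
    poses = []
    for a in angles:
        pos = np.array([radius * np.cos(a), 0.0, radius * np.sin(a)])  # camera center in world frame
        R = rot_y(-a)          # turn the camera so it faces the origin
        t = -R @ pos           # world-to-camera translation
        poses.append((R, t))
    return poses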
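
A condensed sketch of the main loop and the final error plot, reusing the helper sketches
above; the intrinsic values are illustrative assumptions, and the per-iteration 3D scatter
and bar chart are omitted for brevity.

import numpy as np
import matplotlib.pyplot as plt

def main():
    cube = create_cube(size=2)                               # 4x8 homogeneous cube corners
    K = np.array([[800.0, 0.0, 320.0],                       # assumed pinhole intrinsics
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])

    mean_errors, std_errors = [], []
    for n_cams in range(1, 11):
        poses = generate_camera_positions(n_cams)
        P_list = [create_camera(K, R, t) for R, t in poses]
        views = [project_points(P, cube) for P in P_list]    # one 2x8 array per camera

        if n_cams == 1:
            recon = cube[:3].T                               # single view: reconstruction = original
        else:
            recon = np.array([triangulate_point_multi([v[:, i] for v in views], P_list)
                              for i in range(cube.shape[1])])

        errors = np.linalg.norm(recon - cube[:3].T, axis=1)  # per-point Euclidean error
        mean_errors.append(errors.mean())
        std_errors.append(errors.std())

    # final plot: mean error vs number of cameras, with +/- std error bars
    plt.errorbar(range(1, 11), mean_errors, yerr=std_errors, marker='o', capsize=3)
    plt.xlabel('Number of cameras')
    plt.ylabel('Reconstruction error')
    plt.show()

if __name__ == '__main__':
    main()
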
When the cameras are nearly co-located (sitting almost on top of each other), their lines of sight
to each 3D point become almost parallel. In that configuration the linear system built for
triangulation (the matrix A) is very close to rank-deficient, so small image noise or numerical
rounding translates into huge depth errors: the mean error shoots up and the per-point
error bars become enormous.

Graph evaluation summary: the Error vs. Number of Cameras curve clearly shows that a small
number of well-spaced cameras is critical for acceptable depth accuracy, and adding
viewpoints steadily improves the reconstruction until the numerical noise floor is reached.
