Add more comments to hypot() #102817


Merged: 4 commits, Mar 18, 2023
30 changes: 12 additions & 18 deletions Modules/mathmodule.c
@@ -2447,9 +2447,8 @@ Since lo**2 is less than 1/2 ulp(csum), we have csum+lo*lo == csum.
To minimize loss of information during the accumulation of fractional
values, each term has a separate accumulator. This also breaks up
sequential dependencies in the inner loop so the CPU can maximize
-floating point throughput. [4] On a 2.6 GHz Haswell, adding one
-dimension has an incremental cost of only 5ns -- for example when
-moving from hypot(x,y) to hypot(x,y,z).
+floating point throughput. [4] On an Apple M1 Max, hypot(*vec)
+takes only 3.33 µsec when len(vec) == 1000.

The square root differential correction is needed because a
correctly rounded square root of a correctly rounded sum of
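
Note on the "separate accumulator" remark above: it is about instruction-level parallelism. The following stand-alone sketch is hypothetical (it is not code from this patch, and sum_with_two_accumulators is an invented name); it only illustrates why two independent running sums let the CPU overlap floating-point additions instead of serializing them.

#include <stddef.h>

/* Hypothetical illustration only: s0 and s1 carry no data dependency on
   each other, so consecutive additions can be pipelined rather than each
   one waiting for the previous result. */
static double
sum_with_two_accumulators(const double *a, size_t n)
{
    double s0 = 0.0, s1 = 0.0;
    size_t i;
    for (i = 0; i + 1 < n; i += 2) {
        s0 += a[i];        /* independent of s1 */
        s1 += a[i + 1];    /* independent of s0 */
    }
    if (i < n) {
        s0 += a[i];        /* leftover element when n is odd */
    }
    return s0 + s1;
}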
@@ -2473,7 +2472,7 @@ step is exact. The Neumaier summation computes as if in doubled
precision (106 bits) and has the advantage that its input squares
are non-negative so that the condition number of the sum is one.
The square root with a differential correction is likewise computed
-as if in double precision.
+as if in doubled precision.

For n <= 1000, prior to the final addition that rounds the overall
result, the internal accuracy of "h" together with its correction of
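
For context, the "as if in doubled precision" behaviour comes from error-free transformations: each addition or multiplication is done in ordinary double precision and its rounding error is recovered exactly in a second double. The diff below uses the module's dl_fast_sum() and dl_mul() helpers for this; the sketch here is only an illustration of the technique, with hypothetical names (hilo, fast_two_sum, two_prod).

#include <math.h>

typedef struct { double hi, lo; } hilo;

/* Fast2Sum: requires |a| >= |b|; hi + lo equals a + b exactly. */
static hilo
fast_two_sum(double a, double b)
{
    double hi = a + b;
    double lo = (a - hi) + b;   /* the rounding error of the addition */
    return (hilo){hi, lo};
}

/* 2ProdFMA: hi + lo equals a * b exactly, using a fused multiply-add. */
static hilo
two_prod(double a, double b)
{
    double hi = a * b;
    double lo = fma(a, b, -hi); /* the rounding error of the product */
    return (hilo){hi, lo};
}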
@@ -2514,12 +2513,9 @@ vector_norm(Py_ssize_t n, double *vec, double max, int found_nan)
}
frexp(max, &max_e);
if (max_e < -1023) {
-/* When max_e < -1023, ldexp(1.0, -max_e) would overflow.
-   So we first perform lossless scaling from subnormals back to normals,
-   then recurse back to vector_norm(), and then finally undo the scaling.
-*/
+/* When max_e < -1023, ldexp(1.0, -max_e) would overflow. */
for (i=0 ; i < n ; i++) {
-vec[i] /= DBL_MIN;
+vec[i] /= DBL_MIN; // convert subnormals to normals
}
return DBL_MIN * vector_norm(n, vec, max / DBL_MIN, found_nan);
}
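
A small, hypothetical demo (not part of the patch) of why this branch is lossless: DBL_MIN is a power of two, so dividing a subnormal by it is exact, the scaled entries are normal numbers, and multiplying the recursive result by DBL_MIN undoes the scaling.

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 3e-320, y = 4e-320;              /* both subnormal */
    double xs = x / DBL_MIN, ys = y / DBL_MIN;  /* exact: DBL_MIN is 2**-1022 */
    double h = DBL_MIN * hypot(xs, ys);         /* undo the scaling */
    printf("%g\n", h);                          /* approximately 5e-320 */
    return 0;
}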
@@ -2529,17 +2525,14 @@ vector_norm(Py_ssize_t n, double *vec, double max, int found_nan)
for (i=0 ; i < n ; i++) {
x = vec[i];
assert(Py_IS_FINITE(x) && fabs(x) <= max);

-x *= scale;
+x *= scale; // lossless scaling
assert(fabs(x) < 1.0);

-pr = dl_mul(x, x);
+pr = dl_mul(x, x); // lossless squaring
assert(pr.hi <= 1.0);

-sm = dl_fast_sum(csum, pr.hi);
+sm = dl_fast_sum(csum, pr.hi); // lossless addition
csum = sm.hi;
-frac1 += pr.lo;
-frac2 += sm.lo;
+frac1 += pr.lo; // lossy addition
+frac2 += sm.lo; // lossy addition
}
h = sqrt(csum - 1.0 + (frac1 + frac2));
pr = dl_mul(-h, h);
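
The x *= scale step in the loop above is what keeps the squares representable: scale is a power of two, so the multiplication is exact, and the scaled values satisfy fabs(x) < 1.0. A hypothetical stand-alone demo (not from this patch) of the same idea, showing the naive formula overflowing while the scaled one does not:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double x = 3e200, y = 4e200;
    double naive = sqrt(x * x + y * y);          /* x*x overflows to +inf */

    int e;
    frexp(fmax(fabs(x), fabs(y)), &e);           /* exponent of the larger input */
    double scale = ldexp(1.0, -e);               /* exact power-of-two factor */
    double scaled = sqrt((x * scale) * (x * scale) +
                         (y * scale) * (y * scale)) / scale;

    printf("naive  = %g\n", naive);              /* inf */
    printf("scaled = %g\n", scaled);             /* about 5e+200 */
    return 0;
}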
@@ -2548,7 +2541,8 @@ vector_norm(Py_ssize_t n, double *vec, double max, int found_nan)
frac1 += pr.lo;
frac2 += sm.lo;
x = csum - 1.0 + (frac1 + frac2);
-return (h + x / (2.0 * h)) / scale;
+h += x / (2.0 * h); // differential correction
+return h / scale;
}

#define NUM_STACK_ELEMS 16
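
The two added lines implement the square-root differential correction described in the comment block earlier in the diff: if h = sqrt(s) is a rounded square root, then to first order sqrt(s) is approximately h + (s - h*h) / (2*h). The patch computes the residual s - h*h with dl_mul() and dl_fast_sum(); the sketch below is a hypothetical, simplified version of the same step (corrected_sqrt is an invented name) that uses fma() to form the residual without rounding the product.

#include <math.h>

/* Hypothetical sketch, not the patch itself: one first-order Newton/Taylor
   correction step applied to a rounded square root. */
static double
corrected_sqrt(double s)
{
    double h = sqrt(s);
    double residual = fma(-h, h, s);   /* s - h*h, with h*h computed exactly */
    return h + residual / (2.0 * h);
}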