Double-to-Decimal conversions are not as faithful as they could be. #42775
Comments
I couldn't figure out the best area label to add to this issue. If you have write-permissions please help me learn by adding exactly one area label.
Addendum: Of course, a faster way that I could use to do this would be to directly reimplement parts of a
Tagging subscribers to this area: @tannergooding, @pgovind, @jeffhandley
I do agree the current behavior seems confusing. However, it's worth noting that

Additionally, it looks like the issue here is that https://source.dot.net/#System.Private.CoreLib/Decimal.DecCalc.cs,1726 assumes

I think there are 3 possible paths that could be taken here:
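The competing conversion behaviors can be seen side by side outside .NET. As an illustrative analogue (this is Python's `decimal` module, not the .NET code path under discussion), an exact binary-to-decimal conversion and a shortest-round-trip conversion give visibly different results for the same double:

```python
from decimal import Decimal

# Exact conversion: the full binary expansion of the double 0.1.
exact = Decimal(0.1)
# Shortest round-trip conversion: the smallest string that parses back to 0.1.
shortest = Decimal(repr(0.1))

print(exact)     # 0.1000000000000000055511151231257827021181583404541015625
print(shortest)  # 0.1
```

Either result faithfully identifies the same double; the debate here is essentially about which of these (or some truncation in between) the conversion operator should pick.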
Indeed -- sorry, I should have been more precise in those first few sentences. I'll fix them now.
Indeed -- from my experimentation before reporting this, I got the impression that the intended behavior of the

This is very similar to how

Stopping at 15 digits like this seems inconsistent with any of the answers that I can give for "what should it do?". Other than, of course, "it should do what it's always done", which is of course valid, but if that's the resolution, then I'd ask for at least some kind of mention in the documentation for the conversion operator to say how it deals with these issues.
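The round-trip loss from stopping at 15 significant digits can be reproduced in any language with shortest-round-trip formatting; here is a small Python sketch (an analogue, not the .NET implementation):

```python
# A double whose shortest decimal form needs 17 significant digits.
x = 0.1 + 0.2
print(repr(x))               # 0.30000000000000004

# Keep only 15 significant digits, mimicking a conversion that stops early.
fifteen = format(x, '.15g')
print(fifteen)               # 0.3
print(float(fifteen) == x)   # False: the 15-digit form maps to a different double
```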
I see the concern about confusion, but it's also confusing that

Plus, truncating to 15 places still doesn't make the issue of compounding rounding error go away; it just means you have to compound more rounding error before it happens.
I considered proposing something like this, but IMO the downsides of

I honestly wouldn't be too sad about this resolution, though, because at least it would be consistent and sane: after all, converting

I'd have to find another way to achieve my goal, of course, but that's just how it goes sometimes.
Due to lack of recent activity, this issue has been marked as a candidate for backlog cleanup. It will be closed if no further activity occurs within 14 more days. Any new comment (by anyone, not necessarily the author) will undo this process. This process is part of our issue cleanup automation.
This issue will now be closed since it had been marked
Description

Converting a `System.Double` value to a `System.Decimal` value does not seem to follow consistent rules. Packing more significant digits into the source value seems to result in a converted value that's more precise, but only to a point.

I probably would have let that go if not for the fact that `System.Double.ToString()` is (now) perfectly capable of taking a value and finding the smallest decimal string that represents that value, so I'm now looking at a situation where I'm encouraged to format a value to a string and then parse it right back out.

Considering the below code snippet, given how things work with inputs like `0.1`, `0.2`, `0.3`, etc., I would expect all lines to write the same value, but they do not. Of course, I wouldn't attempt to run this code on anything older than .NET Core 3.0, because the IEEE-754 formatting / parsing improvements are a significant quality-of-life improvement when investigating this.

It would also make some sense if the `valToDecimal` line were to write a value with more precise digits, something closer to `174.28491752098176448271260597`, since that's also a faithful representation of the `double` value.

**ConsoleApp0.csproj**
**Program.cs**

**dotnet run**

**Output**

**Configuration**
Regression?
Unknown.
Other information
Context: in NetTopologySuite, we're porting an algorithm from JTS that involves trying to infer the precision of a set of coordinate values. The fastest way I found to do this (in cases where the precision is actually realistic) is by converting the `System.Double` values to `System.Decimal` and extracting the scale byte from it, with a fallback that converts through `System.String` if needed.

I was expecting this fallback to only get hit for "haha, gotcha!" cases, where either double --> decimal --> double can't work (NaN / too large / too small) or would be lossy because the value is exceptionally close to zero. However, it seems to get hit for the overwhelming majority of values that I randomly generate.
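The scale-extraction idea maps onto other decimal implementations too. As a hypothetical sketch (Python's `decimal` module standing in for `System.Decimal`, and the helper name is an invention for illustration), the precision of a coordinate can be read off the exponent of its shortest decimal form:

```python
from decimal import Decimal

# Hypothetical sketch: infer how many decimal places a coordinate value
# carries by reading the exponent of its shortest decimal representation.
def inferred_scale(x: float) -> int:
    exponent = Decimal(repr(x)).normalize().as_tuple().exponent
    return max(0, -exponent)

print(inferred_scale(123.456))  # 3
print(inferred_scale(10.25))    # 2
```

In .NET, the analogous information lives in the scale byte of the flags word returned by `decimal.GetBits`, which is the path described above.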
Round-tripping through a string (which, I guess, I could `stackalloc`) is an OK workaround, and it also reveals some insight into the issue: whenever I format a randomly generated `System.Double` value (ABS(value) between 1 and 180), chop off the last two characters of its string representation, and parse it back to `System.Double`, the double --> decimal --> double pipeline has been perfectly faithful. I've only tried a few hundred million of these, though, so there might still be some sleepers in there.

Here's how I'm testing my hundred million:
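The original harness isn't reproduced here; the described procedure can be sketched in Python, with a 28-digit context roughly standing in for `System.Decimal`'s precision (a hypothetical analogue, not the actual test code):

```python
import random
from decimal import Decimal, localcontext

def via_decimal(x: float) -> float:
    # Exact double -> decimal conversion, rounded to 28 significant digits
    # (roughly System.Decimal's precision), then back to a double.
    with localcontext() as ctx:
        ctx.prec = 28
        return float(+Decimal(x))

random.seed(42)
failures = 0
for _ in range(10_000):  # scaled down from a few hundred million
    s = repr(random.uniform(1.0, 180.0))
    x = float(s[:-2])    # chop the last two characters and parse back
    if via_decimal(x) != x:
        failures += 1
print("failures:", failures)  # failures: 0
```

In this analogue the round trip always succeeds because the conversion keeps 28 significant digits; the discussion above suggests the .NET operator stops earlier, around 15 digits, which would explain why the fallback gets hit so often.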