#8823 closed bug (invalid)
showFloat for higher precision types produces strange results for some values
Reported by: | axman6 | Owned by: | |
---|---|---|---|
Priority: | low | Milestone: | |
Component: | Prelude | Version: | 7.8.1-rc1 |
Keywords: | Cc: | ||
Operating System: | Unknown/Multiple | Architecture: | Unknown/Multiple |
Type of failure: | Incorrect result at runtime | Test Case: | |
Blocked By: | Blocking: | ||
Related Tickets: | Differential Rev(s): | ||
Wiki Page: |
Description
I've written a library implementing a quad-double type à la the QD C/C++ package, and showFloat does not behave correctly for numbers with such high precision.
My type has ~212 bits of precision, and when using showFloat from Numeric, I get strange results for integer values:
```haskell
show (1 :: QDouble)     = "0.00000000000000000000000000000000000000000000001e47"
show (1.1 :: QDouble)   = "1.1"
show (1000 :: QDouble)  = "0.00000000000000000000000000000000000000000000001e50"
-- These seem to suggest it happens for any number with only a
-- few high bits set to 1 in the result of decodeFloat
show (1.125 :: QDouble) = "0.00000000000000000000000000000000000000000000001125e47"
show (1.625 :: QDouble) = "0.00000000000000000000000000000000000000000000001625e47"
```
The problem seems to be related to the result of floatDigits, which starts causing problems when it's larger than 56. Each pair below is (floatDigits x, show x):
```haskell
(56,  "1.0")
(57,  "01.0")
(60,  "001.0")
(212, "0.00000000000000000000000000000000000000000000001e47")
```
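For contrast, a plain `Double` (where `floatDigits` is 53 and the `decodeFloat` invariant holds) produces the expected digit lists from `Numeric.floatToDigits`; this is just a sanity check, not part of the attached test case:

```haskell
import Numeric (floatToDigits)

main :: IO ()
main = do
  -- floatToDigits base x returns (ds, e) with x == 0.d1d2...dn * base^e
  print (floatToDigits 10 (1.0   :: Double))  -- ([1],1)
  print (floatToDigits 10 (1000  :: Double))  -- ([1],4)
  print (floatToDigits 10 (1.125 :: Double))  -- ([1,1,2,5],1)
```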
My fix has been to use a modified version of showFloat from Numeric by changing the floatToDigits function to include a fix for times when large numbers of zeros are produced:
```haskell
let fixup2 (xs, k) = let (zs, ys) = span (== 0) xs
                     in (ys, k - length zs)
in fixup2 (map fromIntegral (reverse rds), k)
```
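As a standalone sketch (the name `fixupLeadingZeros` and its use in isolation are mine, not the attached code), the fix strips leading zero digits and compensates in the exponent:

```haskell
-- Strip leading zero digits from a floatToDigits-style result
-- and lower the decimal exponent by the number of digits removed.
fixupLeadingZeros :: ([Int], Int) -> ([Int], Int)
fixupLeadingZeros (ds, k) =
  let (zs, rest) = span (== 0) ds
  in (rest, k - length zs)

main :: IO ()
main =
  -- the broken "0.0...01e47" digit list collapses back to 1.0
  print (fixupLeadingZeros (replicate 47 0 ++ [1], 48))  -- ([1],1)
```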
This fixes the symptom but not the issue itself (though it seems like a reasonable invariant to enforce on any result returned by floatToDigits).
I have attached as minimal a test case as I could come up with. Using floatToDigits from Numeric causes the strange behaviour, while the floatToDigits' included in the test case does not.
Attachments (1)
Change History (4)
Changed 4 years ago by
Attachment: | TestCase.hs added |
---|
comment:1 Changed 4 years ago by
To reproduce the bug, try changing the value of floatDigits to ~56 and running `floatToDigits 10 {FOne,FOneThousand,FTwo}`. As can be seen with FPi, the correct results are produced.
comment:2 Changed 4 years ago by
Resolution: | → invalid |
---|---|
Status: | new → closed |
Your `RealFloat` instance is invalid. See the documentation for `decodeFloat`:

> If `decodeFloat x` yields `(m,n)`, then `x` is equal in value to `m*b^n`, where `b` is the floating-point radix, and furthermore, either `m` and `n` are both zero or else `b^(d-1) <= abs m < b^d`, where `d` is the value of `floatDigits x`.
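The invariant is easy to confirm for a well-behaved type such as `Double` (a quick check, not from the ticket):

```haskell
main :: IO ()
main = do
  let x      = 1.0 :: Double
      (m, _) = decodeFloat x
      b      = floatRadix x
      d      = floatDigits x
  -- documented invariant: b^(d-1) <= abs m < b^d
  print (b ^ (d - 1) <= abs m && abs m < b ^ d)  -- True
```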
comment:3 Changed 4 years ago by
Hmmm, it seems you're right, but nonetheless my implementation of decodeFloat is the only one that makes sense for my type, since it can represent more than 212 bits of data (for example, 1.0000000...005 is represented with the doubles 1.0 and 0.5eN, and 0 for the two other doubles). It seems that in my case I may need floatDigits to be very large indeed, or use another method to show the number.
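For what it's worth, the misbehaviour can be reproduced without a quad-double library by wrapping `Double` in a newtype that over-reports `floatDigits` while delegating `decodeFloat`, thereby violating the documented invariant; the type `Q` below is a hypothetical stand-in, not the reporter's QDouble:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Numeric (floatToDigits)

-- A Double that lies about its precision: floatDigits claims 212,
-- but decodeFloat still returns Double's 53-bit mantissa.
newtype Q = Q Double
  deriving (Eq, Ord, Num, Fractional, Real, RealFrac)

instance RealFloat Q where
  floatRadix     (Q x) = floatRadix x
  floatDigits    _     = 212            -- violates the decodeFloat invariant
  floatRange     (Q x) = floatRange x
  decodeFloat    (Q x) = decodeFloat x  -- still a 53-bit mantissa
  encodeFloat m e      = Q (encodeFloat m e)
  isNaN          (Q x) = isNaN x
  isInfinite     (Q x) = isInfinite x
  isDenormalized (Q x) = isDenormalized x
  isNegativeZero (Q x) = isNegativeZero x
  isIEEE         (Q x) = isIEEE x

main :: IO ()
main = do
  print (floatToDigits 10 (1.0 :: Double))  -- ([1],1)
  print (floatToDigits 10 (Q 1.0))          -- a long run of leading zeros
```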
Minimal test showing strange behaviour