NSDecimalNumber confusion

Here is a small program I wrote to help me understand NSDecimalNumber, but I’m even more confused now and hope someone can explain what’s going on. The example simply inverts a number, then inverts that result, with the goal of getting the original value back. This works with small numbers, but as the power of 10 increases, inaccuracies creep in that I cannot explain (and therefore cannot correct). Further, the more times a number is inverted, the greater the inaccuracies become; yet these behaviors only occur with certain powers of ten.

The sample program centers on inverting the number 7 scaled by various powers of 10: 7e+0 through 7e+99. For each of these numbers a recursive function is called that performs the actual inversion a given number of times. There is a particular power of ten that demarcates “good” behavior from “bad”: around e+16 (ah, double precision). I am aware of the standard floating-point gotchas; all I need is approximately 15 significant digits, but with repeatable results. Apparently I have not yet discovered how to use scale and rounding properly (the only control I have found is an NSDecimalNumberHandler; see the sketch at the end of this post).
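Here is a stripped-down sketch of what the program does (the helper name and loop bounds are simplified for posting, not the exact code):

```objc
#import <Foundation/Foundation.h>

// Applies 1/x to `value` `count` times via NSDecimalNumber division,
// using the default rounding behavior.
static NSDecimalNumber *invertRepeatedly(NSDecimalNumber *value, NSUInteger count) {
    if (count == 0) {
        return value;
    }
    NSDecimalNumber *inverted = [[NSDecimalNumber one] decimalNumberByDividingBy:value];
    return invertRepeatedly(inverted, count - 1);
}

int main(void) {
    @autoreleasepool {
        // 7e+0 through 7e+99
        for (short exponent = 0; exponent < 100; exponent++) {
            NSDecimalNumber *start =
                [NSDecimalNumber decimalNumberWithMantissa:7
                                                  exponent:exponent
                                                isNegative:NO];
            // Even recursion counts should hand back the original value.
            for (NSUInteger inversions = 0; inversions <= 4; inversions += 2) {
                NSDecimalNumber *result = invertRepeatedly(start, inversions);
                NSLog(@"7e+%d, %lu inversions: %@",
                      exponent, (unsigned long)inversions, result);
            }
        }
    }
    return 0;
}
```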

If I use doubles I can perform this inversion any number of times.
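For comparison, a sketch of the same round trip with a plain double:

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Repeated 1/x with a double stays within the ~15 significant
        // digits I need, regardless of the power of ten.
        double value = 7e20;
        for (int i = 0; i < 4; i++) {
            value = 1.0 / value;
        }
        NSLog(@"double round trip: %.17g", value);  // ~7e+20, within a few ulps
    }
    return 0;
}
```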

It makes no difference whether the initial NSDecimalNumber is created with decimalNumberWithString: or decimalNumberWithMantissa:exponent:isNegative:.
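That is, both of these starting points behave the same way for me (a sketch, assuming the exponent string form parses as expected):

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Both construction paths appear to yield the same starting value.
        NSDecimalNumber *fromString   = [NSDecimalNumber decimalNumberWithString:@"7e20"];
        NSDecimalNumber *fromMantissa = [NSDecimalNumber decimalNumberWithMantissa:7
                                                                           exponent:20
                                                                         isNegative:NO];
        NSLog(@"%@ vs %@ (equal: %d)",
              fromString, fromMantissa,
              [fromString compare:fromMantissa] == NSOrderedSame);
    }
    return 0;
}
```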

This is a subset of the results, ranging from 7e+15 through 7e+42, using recursion counts of 0, 2 and 4. When the recursion count is 0 the NSDecimalNumber is never actually inverted, so we see its initial value, which itself looks odd to me in many cases. Note that the recursion count must be even to get the original value back.

With 2 inversions, failures begin at 7e+17 and run through 7e+34 - and what is going on with 7e+32 ~= 10e+32?

With 4 inversions 7e+20 isn’t even close, nor are various others. Interestingly, as the power of 10 becomes even larger, the results return to “reasonable”.

There must be something very basic I am missing.
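For what it’s worth, the only per-operation scale/rounding control I have found so far is an NSDecimalNumberHandler attached to each division; note that its scale counts digits after the decimal point, not significant digits, which may be part of my problem. A minimal sketch:

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Scale here means digits AFTER the decimal point, not significant
        // digits, so 15 means "round to 15 decimal places".
        NSDecimalNumberHandler *handler =
            [NSDecimalNumberHandler decimalNumberHandlerWithRoundingMode:NSRoundPlain
                                                                   scale:15
                                                        raiseOnExactness:NO
                                                         raiseOnOverflow:NO
                                                        raiseOnUnderflow:NO
                                                     raiseOnDivideByZero:NO];
        NSDecimalNumber *seven = [NSDecimalNumber decimalNumberWithString:@"7"];
        NSDecimalNumber *inverse =
            [[NSDecimalNumber one] decimalNumberByDividingBy:seven withBehavior:handler];
        NSLog(@"1/7 rounded to 15 decimal places: %@", inverse);  // 0.142857142857143
    }
    return 0;
}
```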

Replies

I did a quick search and found a number of people who have examined NSDecimalNumber and found some pretty serious integer overflow bugs. Apparently, it can only handle data in the same range as other basic types. If you try to go beyond that, you risk wildly inaccurate values. I can't post external links here in the forums, but just search for "NSDecimalNumber accuracy".

Why are you bothering? Why not just use double, or long double?

Thanks for your response. I have an HP-25c simulator and I’m trying to replicate its BCD results. Doubles work, mostly; NSDecimalNumber works better in some situations, but obviously not here. I just assumed NSDecimalNumber would magically work better, but that may be true only for “small” numbers, somewhere below e±13 or so; more experimenting is required.