Here is a small program to help me understand NSDecimalNumber, but I’m even more confused now and hope someone can explain what’s going on. This example simply inverts a number, then inverts that result with the goal of obtaining the original value. This works with small numbers, but as the power of 10 increases, inaccuracies creep in that I cannot explain (and therefore cannot correct). Further, the more times a number is inverted, the greater the inaccuracies become; however, these behaviors occur only with certain powers of ten.
The sample program centers on inverting the number 7, and 7 raised to various powers of 10: 7e+0 through 7e+99. For each of these numbers a recursive function is called that performs the actual inversion a given number of times. There is a particular power of ten that demarcates “good” behavior from “bad” behavior: around e+16 (ah, double accuracy). I am aware of the standard floating-point gotchas; all I need is approximately 15 significant digits, but with repeatable results. Apparently I have not yet discovered how to use scale and rounding properly.
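In outline, the inversion driver looks like this (a condensed sketch of the program; the helper name `invert` is just for illustration):

```objc
#import <Foundation/Foundation.h>

// Invert n repeatedly: an even count should reproduce the original value.
static NSDecimalNumber *invert(NSDecimalNumber *n, NSUInteger count) {
    if (count == 0) return n;
    NSDecimalNumber *reciprocal = [[NSDecimalNumber one] decimalNumberByDividingBy:n];
    // I have also tried decimalNumberByDividingBy:withBehavior: with an
    // NSDecimalNumberHandler, without finding settings that help.
    return invert(reciprocal, count - 1);
}

int main(void) {
    @autoreleasepool {
        for (short power = 0; power < 100; power++) {
            NSDecimalNumber *seven =
                [NSDecimalNumber decimalNumberWithMantissa:7
                                                  exponent:power
                                                isNegative:NO];
            NSLog(@"7e+%d: start %@, after 2 inversions %@",
                  power, invert(seven, 0), invert(seven, 2));
        }
    }
    return 0;
}
```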
If I use doubles I can perform this inversion any number of times.
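For comparison, the double version of the same loop stays accurate to roughly 15–16 significant digits no matter how many times it runs:

```objc
double x = 7e+20;
for (int i = 0; i < 4; i++) {
    x = 1.0 / x;  // repeated inversion; an even count comes back to ~7e+20
}
```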
It doesn’t make any difference whether the initial NSDecimalNumber is created with decimalNumberWithString: or decimalNumberWithMantissa:exponent:isNegative:.
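For example, both of these appear to produce the same starting value, and both show the same inversion failures:

```objc
NSDecimalNumber *a = [NSDecimalNumber decimalNumberWithString:@"7e+17"];
NSDecimalNumber *b = [NSDecimalNumber decimalNumberWithMantissa:7
                                                       exponent:17
                                                     isNegative:NO];
```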
This is a subset of the results, ranging from 7e+15 through 7e+42, using recursion counts 0, 2 and 4. When the recursion count is 0 the NSDecimalNumber is never actually inverted, so we see its initial value, which itself seems odd to me in many cases. Note that the recursion count should always be even to get the original value back.
With 2 inversions, failures begin at 7e+17 and continue through 7e+34. And what’s with 7e+32 ~= 10e+32?
With 4 inversions, 7e+20 isn’t even close, nor are various others. Interestingly, as the power of 10 grows even larger, the results return to “reasonable”.
There must be something very basic I am missing.