Why does Float generate a random number after converting from String?

let a = "45.9"

let b = Float(a ?? "0") ?? 0

Why does b get a value of 45.9000015?

How do I get a value of exactly 45.9?

Replies

When I test in a playground, I get b = 45.9

let a = "45.9"
let b = Float(a) ?? 0
print(a, b)

45.9 45.9


Note: a cannot be nil, so what is the purpose of a ?? "0"?


How do you get this result? Which version of Xcode? Playground or code?


If you get 45.9000015, it is not a random number, it is the precision limit of Float (even though I would have thought the precision was higher).
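
You can see the stored value by asking for more digits than the default description shows (a quick playground sketch; the trailing digits are the nearest representable Float, not random noise):

let a = "45.9"
let b = Float(a)!                    // nearest Float to 45.9
print(String(format: "%.10f", b))    // 45.9000015259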


Try this (it truncates the value to one decimal place):

let c = Float(Int(Float(a)! * 10)) / 10

Also tested in code (Xcode 10.0 beta), same correct result.


You can also use:

let b = Float(String(format: "%.2f", (Float(a) ?? 0)))!

If you get 45.9000015, it is not a random number, it is the precision limit of Float

Correct. The classic paper on this is What Every Computer Scientist Should Know About Floating-Point Arithmetic. However, I find that a little too abstract. I’ve just finished reading this series of blog posts and I highly recommend them.

Share and Enjoy

Quinn “The Eskimo!”
Apple Developer Relations, Developer Technical Support, Core OS/Hardware

let myEmail = "eskimo" + "1" + "@apple.com"

Thanks Quinn. I'll bookmark your document.


What I do not understand is why I get the "correct" result and Joe does not.

In addition, 45.9000015 seems to be below the expected precision?

Float has a 24-bit mantissa (including the hidden leading `1`), so its approximate decimal precision is 7 digits (24 × log10(2) ≈ 7.2).


It seems the first 7 digits express the right result precisely.

As already noted, 45.9 (decimal) is a repeating infinite fraction in binary. So Float, a 32-bit floating-point type, can only hold an approximation of 45.9.
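
Both points can be checked in a playground (a rough sketch; the printed digits may differ in the last place):

import Foundation

// ~7 significant decimal digits for a 24-bit significand: 24 × log10(2)
print(24 * log10(2.0))      // ≈ 7.22

// 45.9 has no exact binary representation, so the Float value is a
// rounded approximation and differs from the (closer) Double approximation.
let f = Float("45.9")!
print(Double(f) == 45.9)    // false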


And the default String representations of floating-point numbers may differ between Swift versions.

let a: String? = "45.9"
let b = Float(a ?? "0") ?? 0
print(b)
debugPrint(b)
/* Xcode 8.3.3
 45.9
 45.9000015
 */
/* Xcode 9.4.1
 45.9
 45.9000015
 */
/* Xcode 10 beta 4
 45.9
 45.9
 */

You had better not depend on the default representation of floating-point numbers in Swift.


Other than using `String(format:...)`, you can use NumberFormatter.

let nf = NumberFormatter()
nf.usesSignificantDigits = true
nf.maximumSignificantDigits = 7
print(nf.string(from: b as NSNumber)!) //->45.9

24 bits should mean 0.000 000 1 precision (10^-7.2), better than 0.000 001, shouldn't it?

In fact, if I print

print(Float.ulpOfOne)

I get

1.1920929e-07 = 0.000 000 119


Tested in Xcode 9.4 (beta) and Xcode 7.3: I get 45.9

24 bits should mean 0.000 000 1 precision

That's true. But the 24-bit mantissa needs to represent both the integer part and the fraction part, not only the fraction part.

Think about how many bits are left for the fraction part when you represent 45 (it takes 6 bits).
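
You can check that split directly in a playground (a sketch; the exact last digits may differ):

let f: Float = 45.9

// 45 = 0b101101 takes 6 of the 24 significand bits, leaving 18 bits for
// the fraction, so neighbouring Floats near 45.9 are 2^-18 ≈ 0.0000038 apart.
print(f.ulp)             // 3.8146973e-06
print(45.9 - Double(f))  // about -1.5e-06, i.e. within half an ulp of 45.9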

My version was Xcode 9.4.1

What's the difference between print and debugPrint? The data that was sent to the iCloud server was the same as the debugPrint output.

I get the same output for

let b = Float(String(format: "%.2f", (Float(a) ?? 0)))!
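
That is expected: "45.90" parses to exactly the same Float as "45.9", so the extra formatting round trip cannot remove the approximation. A quick check (a sketch, reusing a = "45.9" from above):

let a = "45.9"
let direct    = Float(a)!
let formatted = Float(String(format: "%.2f", Float(a) ?? 0))!  // parses "45.90"

// Same bit pattern: formatting and re-parsing did not change the stored value.
print(formatted.bitPattern == direct.bitPattern)   // true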

The output from debugPrint is expected to contain enough digits to reproduce the original bits of the Float value.

(It seems Swift 4.2 changed the algorithm.)
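
A quick check of the round trip (a sketch using String(reflecting:), which produces the same text that debugPrint writes):

let b = Float("45.9")!

let debugText = String(reflecting: b)       // e.g. "45.9000015" in Swift 4.1
let restored  = Float(debugText)!
print(restored.bitPattern == b.bitPattern)  // true: the extra digits make the round trip lossless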


The data that was sent to the iCloud server was the same as the debugPrint output.

Then keep the value as is, and when you show it to users, format it properly.