Float/Double equality and comparison in Swift

I cannot find any mention of this anywhere: does Swift's implementation of Float and Double provide any kind of distance checking when doing comparisons? For example, does it consider 0.1 + 0.2 == 0.3? Or does the imprecision of floating point numbers throw that off? Traditionally I have been able to use type constants (e.g., CGFLOAT_MIN) to implement this myself in C/Obj-C, but I have not seen any such constants for Swift.

Replies

does Swift's implementation of Float and Double provide any kind of distance checking when doing comparisons? For example, does it consider 0.1 + 0.2 == 0.3?

Let's find out!


let float1 = 0.1
let float2 = 0.2
let float3 = float1 + float2

print("equal: \(float3 == 0.3)")


outputs:


equal: false


Looks like Signs Point To No.

[Edited to reflect some of the additional comments after posting this]


Thanks. I was hoping I had just missed something, given the lack of class constants for the minimum and maximum values of the floating point types. In case this helps anyone else, here are some comparison operators for Double. The delta value is chosen based on the note in "The Swift Programming Language" that states Doubles have a precision of at least 15 decimal digits. The delta needs to be tailored to your range of expected values, whether that ends up being 0.000000000000001 or 100,000,000,000. For more detail, see Jens' reply below, which illustrates the extremes of the Double type with a few examples.


infix operator ==~ { precedence 130 }
func ==~ (left: Double, right: Double) -> Bool
{
  return fabs(left.distanceTo(right)) <= 1e-15
}
infix operator !=~ { precedence 130 }
func !=~ (left: Double, right: Double) -> Bool
{
  return !(left ==~ right)
}
infix operator <=~ { precedence 130 }
func <=~ (left: Double, right: Double) -> Bool
{
  return left ==~ right || left <~ right
}
infix operator >=~ { precedence 130 }
func >=~ (left: Double, right: Double) -> Bool
{
  return left ==~ right || left >~ right
}
infix operator <~ { precedence 130 }
func <~ (left: Double, right: Double) -> Bool
{
  return left.distanceTo(right) > 1e-15
}
infix operator >~ { precedence 130 }
func >~ (left: Double, right: Double) -> Bool
{
  return left.distanceTo(right) < -1e-15
}

Sorry to be a killjoy, but this is just plain wrong.


The problem with 0.1 + 0.2 != 0.3 in Swift is not that floating point is "imprecise", but that those Swift constants don't actually represent the real numbers 0.1, 0.2 and 0.3, so the comparison that's actually being made isn't the one that you intended.


If your answer is "yes, but the actual values used are the nearest floating point representations of the numbers I want to compare, so some error is introduced into the calculations", then it's still not correct to use an arbitrary delta to mean "near enough", because the representational error depends on the numbers involved. The floating point representations of 0.1 and 0.2 aren't "wrong" by the same amount.
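
You can make the representational error visible by printing the literals with extra digits. Here's a small sketch (the digit strings in the comments are the binary64 values, rounded to 20 fractional digits):

import Foundation

// None of these literals is the real number it looks like; each is the nearest Double:
print(String(format: "%.20f", 0.1))       // 0.10000000000000000555
print(String(format: "%.20f", 0.2))       // 0.20000000000000001110
print(String(format: "%.20f", 0.3))       // 0.29999999999999998890
print(String(format: "%.20f", 0.1 + 0.2)) // 0.30000000000000004441
// 0.1 and 0.2 are both rounded up, 0.3 is rounded down, so the sum lands
// exactly one ULP above the Double nearest to 0.3, hence 0.1 + 0.2 != 0.3.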


Next, you are incorrect that the "precision" of doubles is 1e-15. The 15 decimal digits are *significant* digits, not an absolute error. You can't compare 0.1e20, 0.2e20 and 0.3e20 to a tolerance of 1e-15; at that magnitude, 15 significant digits only resolves differences on the order of 1e4 or 1e5, which is a pretty big number.
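
Here's a small sketch of what that means in code (Swift 3 syntax, using the ulp property; figures are for 64-bit IEEE-754 Doubles):

// At a magnitude of 1e19, consecutive representable Doubles are 2048 apart,
// so an absolute tolerance of 1e-15 is far below anything the type can resolve:
print((1e19).ulp)            // 2048.0
print(1e19 + 1.0 == 1e19)    // true: adding 1.0 cannot change the stored value
print(1e19 + 1000.0 == 1e19) // true: even adding 1000 rounds back to the same Double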


Next, in practice, many floating point numbers used in software are actually measurements of something outside the code, like a hardware sensor reading. So in practice, the numerical error may be much larger than the representational "error", in which case the fixed precision error delta is meaningless. Near-enough comparisons must take real-world information into account.


Finally, in terms of error, whether representational or from measurement, the error grows with every numerical operation in a floating point calculation. Even if you start with values that are "correct" to within 1e-15, if you add up 100 of them the sum is correct only to within about 1e-13 (unless your data is subject to a theoretical argument about the error distribution that guarantees something better), and it gets worse as you make more calculations. Comparing the result with something using your suggested operators isn't the right approach.
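
To make the accumulation concrete, here is a small sketch (Swift 3 syntax) that sums 0.1 a hundred times; the accumulated rounding error is already an order of magnitude larger than the fixed 1e-15 delta:

var sum = 0.0
for _ in 0..<100 { sum += 0.1 }
print(sum)                          // about 9.99999999999998, not 10.0
print(sum == 10.0)                  // false
print(abs(sum.distance(to: 10.0)))  // about 2e-14, well outside a 1e-15 tolerance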


Numerical computing is a deep and difficult subject. There's a reason why Swift and other languages don't have built-in "approximate" operations along the lines you envisage — they just don't work.

Here's an example of why and how your code is wrong:

import Foundation

infix operator ==~ { precedence 130 }
func ==~ (left: Double, right: Double) -> Bool
{
    return fabs(left.distanceTo(right)) <= 1e-15
}

// Testing that operator:

let a = DBL_MAX // Largest number (except +infinity) that is representable as a Double.
let b = a - 10_000.0 // Guess what this is?

print(a ==~ b) // true
// And in fact, a and b are actually equal even when using the standard == operator:
print(a == b) // true
// Because they _are_ exactly the same: at that magnitude, the floating(!) precision is
// a lot lot lot lot worse than eg 1e-15, it's actually a lot lot lot worse than even 10 000.
// The precision (smallest distance between representable numbers) at that magnitude is:
// 1.99584030953472e+292
// Here's proof in code:
let c = nextafter(a, 0.0) // (Returns next representable Double going from a towards zero.)
print(a - c) // 1.99584030953472e+292
// This difference (a - c) is the _gap_ between Double-representable numbers there, and it is even a lot lot lot
// larger than the largest representable UInt64, which is:
print(Double(UInt64.max)) // 1.84467440737096e+19


And also, there's no difference between Swift's and eg ObjC/C's floating point representations; they are both just standard IEEE-754.
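
A quick sketch to convince yourself of that (Swift 3 syntax; assumes a platform where C's double is binary64, as on all Apple platforms):

// Swift's Double has the same 64-bit IEEE-754 (binary64) layout as C's double:
print(MemoryLayout<Double>.size)                      // 8 (bytes)
print(String((1.0 as Double).bitPattern, radix: 16))  // 3ff0000000000000, the binary64 encoding of 1.0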

You can do anything in Swift that you can in ObjC/C although names of constants etc might not be the same. Here are some constants available in Swift from Darwin:

/* Characteristics of floating point types, C99 5.2.4.2.2 */

public var FLT_EVAL_METHOD: Int32 { get }

public var FLT_RADIX: Int32 { get }

public var FLT_MANT_DIG: Int32 { get }
public var DBL_MANT_DIG: Int32 { get }
public var LDBL_MANT_DIG: Int32 { get }

public var DECIMAL_DIG: Int32 { get }

public var FLT_DIG: Int32 { get }
public var DBL_DIG: Int32 { get }
public var LDBL_DIG: Int32 { get }

public var FLT_MIN_EXP: Int32 { get }
public var DBL_MIN_EXP: Int32 { get }
public var LDBL_MIN_EXP: Int32 { get }

public var FLT_MIN_10_EXP: Int32 { get }
public var DBL_MIN_10_EXP: Int32 { get }
public var LDBL_MIN_10_EXP: Int32 { get }

public var FLT_MAX_EXP: Int32 { get }
public var DBL_MAX_EXP: Int32 { get }
public var LDBL_MAX_EXP: Int32 { get }

public var FLT_MAX_10_EXP: Int32 { get }
public var DBL_MAX_10_EXP: Int32 { get }
public var LDBL_MAX_10_EXP: Int32 { get }

public var FLT_MAX: Float { get }
public var DBL_MAX: Double { get }

public var FLT_EPSILON: Float { get }
public var DBL_EPSILON: Double { get }

public var FLT_MIN: Float { get }
public var DBL_MIN: Double { get }

public var FLT_TRUE_MIN: Float { get }
public var DBL_TRUE_MIN: Double { get }

See eg FLT_MAX, DBL_MAX, FLT_MIN, DBL_MIN.


You might want to read up on IEEE-754 (single and double precision) and make sure you understand how they really work.

Here's a little program that demonstrates some of the basic properties of the IEEE-754 (single precision / 32 bit) floating point format:

import Foundation

let maxFloatFractionBits: UInt32 = (1 << 23) - 1
let maxFloatExponentBits: UInt32 = 254 << 23
let maxFloatBits = maxFloatExponentBits | maxFloatFractionBits // (sign bit 0)
let maxFloatFromBits = unsafeBitCast(maxFloatBits, Float.self)
print("Max float 1:", String(format: "%.2f", maxFloatFromBits))
print("Max float 2:", String(format: "%.2f", FLT_MAX))

let minFloatFractionBits: UInt32 = 0
let minFloatExponentBits: UInt32 = 1 << 23
let minFloatBits = minFloatExponentBits | minFloatFractionBits // (sign bit 0)
let minFloatFromBits = unsafeBitCast(minFloatBits, Float.self)
print("Min float 1:", String(format: "%.160f", minFloatFromBits))
print("Min float 2:", String(format: "%.160f", FLT_MIN))

let trueMinFloatFractionBits: UInt32 = 1
let trueMinFloatExponentBits: UInt32 = 0
let trueMinFloatBits = trueMinFloatExponentBits | trueMinFloatFractionBits // (sign bit 0)
let trueMinFloatFromBits = unsafeBitCast(trueMinFloatBits, Float.self)
print("Min float 1:", String(format: "%.160f", trueMinFloatFromBits))
print("Min float 2:", String(format: "%.160f", FLT_TRUE_MIN))

In order to (successfully!) do numeric computation, you should have no problem writing (off the top of your head) and understanding that code and its output.


And as QuinceyMorris already pointed out, Double having "15 significant decimal digits precision" doesn't mean it has 15 fractional digits precision, it means 15 decimal digits in total, counting the digits both before and after the decimal mark, as demonstrated here:

import Foundation
func makeDecimalNumberStringWithMaximumNumberOfSignificantDigits(num: Int) -> String {
    // Generate an array of num random decimal digits:
    var digits = (0 ..< num).map { _ in String(arc4random_uniform(10)) }
    // Insert randomly positioned decimal point:
    digits.insert(".", atIndex: 1 + Int(arc4random_uniform(UInt32(digits.count - 2))))
    // Remove leading and trailing zeroes (for it to match the String representation of a Float):
    while digits[0] == "0" && digits[1] != "." { digits = Array(digits.dropFirst()) }
    while digits.last == "0" && digits[digits.count - 2] != "." { digits = Array(digits.dropLast()) }
    // Join the array to a string and return it:
    return digits.joinWithSeparator("")
}
for i in 0 ..< 10_000 {
    let str = makeDecimalNumberStringWithMaximumNumberOfSignificantDigits(15) // See what happens with eg 20 instead of 15!
    guard let strToDbl = Double(str) else { fatalError() }
    let dblBackToStr = String(strToDbl)
    if str != dblBackToStr {
        print(
            "Converting this decimal string:\n\(str)\n" +
            "to Double looses precision:\n\(dblBackToStr)")
        print(String(format: "%.50f", strToDbl))
        exit(0)
    }
    if i < 5 {
        print("Example", i + 1)
        print("  Random decimal String:", str)
        print("    converted to Double:", strToDbl) // Prints using Swift's default Double-to-decimal-string conversion.
        print("     with more decimals:", String(format: "%.50f", strToDbl)) // Prints with 50 decimal fraction digits.
    }
}
print("\nAll String -> Double -> String-conversions produced the same String!")

Thanks for the detailed response. I have unmarked my followup as correct because it obviously isn't except in a narrow set of potential values. I am going to leave that post in place but add the qualification to it.


My misunderstanding arose from my reading of the Swift book and its wording of the limitation. With my corrected understanding, it looks like it's impractical to have a one-size-fits-all test for Double equality/comparison. As you said, the value of the delta has to change based on the left and right operands and the tolerance the developer wants/needs.


In this case, the custom operators are useful only insofar as the delta value fits the range of expected values. They're not universally usable once the values reach enough digits of significance on either side of the decimal that the step size between values becomes larger than the hardcoded delta.
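
If you still want a reusable "near enough" check, one common pattern is a relative tolerance scaled by the operands, with an absolute floor for values near zero. This is only a sketch under those assumptions (the nearlyEqual name and the default tolerances are made up for illustration; choosing them is still the caller's responsibility):

/// Hypothetical helper: true when a and b differ by at most relTol times the
/// larger magnitude, or by at most absTol (for values very close to zero).
func nearlyEqual(_ a: Double, _ b: Double,
                 relTol: Double = 1e-12, absTol: Double = 1e-15) -> Bool {
    if a == b { return true }                 // exact matches (and infinities) short-circuit
    let diff = abs(a - b)
    let scale = max(abs(a), abs(b))
    return diff <= max(relTol * scale, absTol)
}

print(nearlyEqual(0.1 + 0.2, 0.3))            // true: difference is ~5.6e-17, under the scaled tolerance
print(nearlyEqual(0.1e20 + 0.2e20, 0.3e20))   // true: the tolerance scales with the operands
print(nearlyEqual(1.0, 1.1))                  // false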

Thanks for the detailed response. Like I said in another response above, I have unmarked my followup as correct because it obviously isn't except in a narrow set of potential values. I am going to leave that post in place but add the qualification to it.

But you could use nextafter() to test whether values are equal to within 1 ULP:

import Foundation
var str = "Use nextafter() to test if values are equal within 1 ULP"
let float1 = 0.1
let float2 = 0.2
let float3 = float1 + float2
print("equal: \(float3 == nextafter(0.3,DBL_MAX) || float3 == nextafter(0.3,-DBL_MAX) || float3 == 0.3 )")

let float1b = 0.1e20
let float2b = 0.2e20
let float3b = float1b + float2b
print("equalb: \(float3b == nextafter(0.3e20,DBL_MAX) || float3b == nextafter(0.3e20,-DBL_MAX) || float3b == 0.3e20)")

let float1c = 0.1e-20
let float2c = 0.2e-20
let float3c = float1c + float2c
print("equalc: \(float3c == nextafter(0.3e-20,DBL_MAX) || float3c == nextafter(0.3e-20,-DBL_MAX) || float3c == 0.3e-20)")


Results


equal: true
equalb: true
equalc: true

Here is the Swift 3.0 implementation of the operators:


infix operator ==~ : AssignmentPrecedence
public func ==~ (left: Double, right: Double) -> Bool
{
    return fabs(left.distance(to: right)) <= 1e-15
}
infix operator !=~ : AssignmentPrecedence
public func !=~ (left: Double, right: Double) -> Bool
{
    return !(left ==~ right)
}
infix operator <=~ : AssignmentPrecedence
public func <=~ (left: Double, right: Double) -> Bool
{
    return (left ==~ right) || (left <~ right)
}
infix operator >=~ : AssignmentPrecedence
public func >=~ (left: Double, right: Double) -> Bool
{
    return (left ==~ right) || (left >~ right)
}
infix operator <~ : AssignmentPrecedence
public func <~ (left: Double, right: Double) -> Bool
{
    return left.distance(to: right) > 1e-15
}
infix operator >~ : AssignmentPrecedence
public func >~ (left: Double, right: Double) -> Bool
{
    return left.distance(to: right) < -1e-15
}

In Swift, all comparison operators are declared with ComparisonPrecedence, not AssignmentPrecedence. You should follow suit.
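
For reference, the corrected declarations would look like this (a sketch only; the function bodies stay exactly as in the Swift 3.0 code above, and the fixed 1e-15 delta still has all the limitations discussed earlier in the thread):

infix operator ==~ : ComparisonPrecedence
infix operator !=~ : ComparisonPrecedence
infix operator <=~ : ComparisonPrecedence
infix operator >=~ : ComparisonPrecedence
infix operator <~  : ComparisonPrecedence
infix operator >~  : ComparisonPrecedence
// The func implementations are unchanged; only the precedence group differs,
// so these operators now parse like the built-in comparisons,
// e.g. a ==~ b || c groups as (a ==~ b) || c.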