I'm encountering an issue where the following code produces different results depending on the simulator architecture:
let minInset = UIEdgeInsets(top: .leastNonzeroMagnitude,
                            left: .leastNonzeroMagnitude,
                            bottom: .leastNonzeroMagnitude,
                            right: .leastNonzeroMagnitude)
print(minInset == .zero)
print(minInset.top.isZero)
// Output on the Rosetta (Intel-based) iOS simulator:
// true
// false
// Output on the non-Rosetta (ARM-based) iOS simulator:
// false
// false
Environment:
Xcode Version: Xcode 15.3 (15E204a)
iOS Target: iOS 17.4
Simulator Architectures: Rosetta (Intel) and non-Rosetta (ARM)
I would expect minInset to be a non-zero UIEdgeInsets value, since it's constructed using .leastNonzeroMagnitude. However, on the Rosetta simulator, minInset == .zero evaluates to true while minInset.top.isZero evaluates to false, which seems contradictory. On the non-Rosetta simulator, both expressions evaluate to false as expected, indicating that .leastNonzeroMagnitude is treated as a non-zero value there.
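As a sanity check, something along these lines should take UIEdgeInsets out of the picture and compare a plain subnormal CGFloat against zero directly (just a rough sketch; the variable name tiny is mine and I haven't verified the output on both architectures):

import CoreGraphics

// A plain subnormal CGFloat, independent of any UIKit types.
let tiny: CGFloat = .leastNonzeroMagnitude

// Standard IEEE 754 comparison against zero.
print(tiny == 0)

// Classification-based check that inspects the value's representation.
print(tiny.isZero)

// Confirm the value really is the smallest subnormal (bit pattern 0x1).
print(tiny.isSubnormal, String(tiny.bitPattern, radix: 16))

If the two architectures also disagree here, the difference would seem to lie in how subnormal values are handled under Rosetta rather than in the UIEdgeInsets == overload itself.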
Can someone explain why this behavior differs between the two simulator architectures?
I can roughly guess at a possible reason: a difference in floating-point behavior between the two CPU architectures, in particular how double-precision values are handled on Intel under Rosetta. However, that still does not explain the contradictory behavior where minInset == .zero evaluates to true while minInset.top.isZero evaluates to false.
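To narrow down where the two code paths diverge, a breakdown along these lines could show whether a memberwise comparison and UIKit's C helper UIEdgeInsetsEqualToEdgeInsets agree with the Swift == operator (again only a sketch; the memberwise check is my own spelling-out of the comparison, not how UIKit necessarily implements it):

import UIKit

let minInset = UIEdgeInsets(top: .leastNonzeroMagnitude,
                            left: .leastNonzeroMagnitude,
                            bottom: .leastNonzeroMagnitude,
                            right: .leastNonzeroMagnitude)

// Memberwise comparison written out by hand, bypassing the UIEdgeInsets == overload.
let memberwiseZero = minInset.top == 0 && minInset.left == 0 &&
    minInset.bottom == 0 && minInset.right == 0
print("memberwise:", memberwiseZero)

// UIKit's C helper, which may take a different code path than the Swift operator.
print("UIEdgeInsetsEqualToEdgeInsets:", UIEdgeInsetsEqualToEdgeInsets(minInset, .zero))

// isSubnormal should be true for .leastNonzeroMagnitude.
print("isSubnormal:", minInset.top.isSubnormal)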
Additionally, I'm curious whether there are any other pitfalls or potential issues to watch out for in the Rosetta environment.