vU128 (and vU256) multiplication behavior

I'm implementing some fast finite field arithmetic and found the big number types in the Accelerate framework. Unfortunately, the multiplication behavior is strange, and the Accelerate framework is not well documented.

Here is some code:

public typealias U128 = vU128

public func add(_ a: U128) -> U128 {
    return U128(v: vU128Add(self.v, a.v))
}

public func mul(_ a: U128) -> U256 {
    var result = U256(v: (BigNumber.vZERO, BigNumber.vZERO)) // just empty init
    var aCopy = a
    var selfCopy = self
    withUnsafePointer(to: &selfCopy) { (selfPtr: UnsafePointer<vU128>) -> Void in
        withUnsafePointer(to: &aCopy, { (aPtr: UnsafePointer<vU128>) -> Void in
            withUnsafeMutablePointer(to: &result, { (resultPtr: UnsafeMutablePointer<vU256>) -> Void in
                vU128FullMultiply(selfPtr, aPtr, resultPtr)
            })
        })
    }
    return result
}


Addition of two numbers works fine (1 + 2 == 3), but full-width multiplication produces a result shifted left by one byte (1 * 2 = 512!).

Replies

I'm very curious how you checked your result, but first of all, you can write your `mul` method a little more simply.


extension U128 {
    public func add(_ a: U128) -> U128 {
        return U128(v: vU128Add(self.v, a.v))
    }

    public func mul(_ a: U128) -> U256 {
        var result = U256()
        var aCopy = a
        var selfCopy = self
        vU128FullMultiply(&selfCopy, &aCopy, &result)
        return result
    }
}
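
(This works because Swift implicitly bridges `&selfCopy`, `&aCopy`, and `&result` to the pointer parameters of the imported C function for the duration of the call, so the nested `withUnsafePointer` closures are unnecessary.)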


And to make checking the result easy, I have prepared some extensions.

extension vUInt32 {
    public var isZero: Bool {
        return self.x == 0 && self.y == 0 && self.z == 0 && self.w == 0
    }
    
    public init(_ value: UInt32) {
        self = vUInt32(x: value, y: 0, z: 0, w: 0)
    }
}

public typealias U128 = vU128
extension U128: CustomStringConvertible {
    public init(_ value: UInt32) {
        self = U128(v: vUInt32(value))
    }
    
    public var description: String {
        var str = ""
        var num = self
        let div = vUInt32(10)
        var rem = vUInt32()
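        // Peel off decimal digits by repeated division by 10; the least
        // significant digit comes out first, hence the insert at startIndex.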
        repeat {
            num = U128(v: vU128Divide(num.v, div, &rem))
            let ch = UnicodeScalar(rem.x + ("0" as UnicodeScalar).value)!
            str.insert(Character(ch), at: str.startIndex)
        } while !num.isZero
        return str
    }
    
    public var isZero: Bool {
        return self.v.isZero
    }
}

public typealias U256 = vU256
extension U256: CustomStringConvertible {
    public init(_ value: UInt32) {
        self = U256(v: (vUInt32(value), vUInt32(0)))
    }
    
    public var description: String {
        var str = ""
        var num = self
        var div = U256(10)
        var result = U256()
        var rem = U256()
        repeat {
            vU256Divide(&num, &div, &result, &rem)
            num = result
            let ch = UnicodeScalar(rem.v.0.x + ("0" as UnicodeScalar).value)!
            str.insert(Character(ch), at: str.startIndex)
        } while !num.isZero
        return str
    }
    
    public var isZero: Bool {
        return v.0.isZero && v.1.isZero
    }
}


And tested.

print(U128(1).add(U128(2))) //-> 3
print(U128(1).mul(U128(2))) //-> 2
print(U128(1_000_000_000).mul(U128(2_000_000_000))) //-> 2000000000000000000
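
(The last product, 2 * 10^18, needs 61 bits, so it also confirms that results spanning multiple 32-bit lanes come out right.)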

Nothing wrong, nothing strange.


So, how have you checked your result? Please show the code you have used for checking.

Looks like my init didn't work properly. I had assumed the vUInt32 vector structure was big-endian-like, so in my code the value was placed in the .w part of the vector. I'll reimplement the init from raw bytes and get back.

I see. In fact, vecLib/vBigNum on x86_64 and ARM64 places all fragments in little-endian order.
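
For example (a minimal sketch reusing the `U128` extension above; the lane-to-bit mapping here is my reading of the little-endian layout):

// .x holds bits 0...31, .y bits 32...63, .z bits 64...95, .w bits 96...127.
let low  = U128(v: vUInt32(x: 2, y: 0, z: 0, w: 0))
let high = U128(v: vUInt32(x: 0, y: 0, z: 0, w: 2))
print(low)  //-> 2
print(high) //-> 158456325028528675187087900672 (2 * 2^96), not 2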

Hope you can adjust your code to the actual vecLib behavior soon.