I'm trying to convert a CGImage to an MTLTexture, and for this I'm using the following code:
let width = cgimage.width
let height = cgimage.height
let colorSpace = cgimage.colorSpace!
let bitsPerComponent = cgimage.bitsPerComponent
let bytesPerPixel = cgimage.bitsPerPixel / 8
let bytesPerRow = width * bytesPerPixel
let options = CGImageAlphaInfo.premultipliedLast.rawValue
var pixelValues = [UInt8](repeating: 0, count: height * bytesPerRow)
pixelValues.withUnsafeMutableBytes { buffer in
    // The context keeps the pointer, so create and draw inside the
    // closure where the buffer address is guaranteed to stay valid.
    let contextRef = CGContext(data: buffer.baseAddress, width: width, height: height,
                               bitsPerComponent: bitsPerComponent,
                               bytesPerRow: bytesPerRow,
                               space: colorSpace, bitmapInfo: options)
    contextRef?.draw(cgimage, in: CGRect(x: 0, y: 0, width: CGFloat(width), height: CGFloat(height)))
}
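For context, once the pixels are drawn, my plan is to upload pixelValues into the texture roughly like this (device here is an assumed, already-created MTLDevice, and the pixel format is my guess at what matches an 8-bit premultipliedLast RGBA context):

```swift
import Metal

// Assumes `device: MTLDevice` exists, and `pixelValues`, `width`,
// `height`, `bytesPerRow` come from the CGContext code above.
let descriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .rgba8Unorm,   // guessed match for 8-bit RGBA premultipliedLast
    width: width,
    height: height,
    mipmapped: false)
let texture = device.makeTexture(descriptor: descriptor)
// Copy the CPU-side pixel buffer into mip level 0 of the texture.
texture?.replace(region: MTLRegionMake2D(0, 0, width, height),
                 mipmapLevel: 0,
                 withBytes: pixelValues,
                 bytesPerRow: bytesPerRow)
```

So the CGContext step is the only part that's failing on me.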
My problem is that CGContext is extremely finicky: it just returns nil, with no explanation, if any of its obscure options is not set exactly right, for example bitmapInfo.
Now, I've seen some Objective-C tutorials online that used CGBitmapContextCreate and got nice, detailed error messages logged.
In Swift, however, all I get is nil, with no warning or log output at all. This makes it extremely hard to develop anything like this; I'm pretty much guessing in the dark.
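The only lead I've found so far: the console error that CGBitmapContextCreate sometimes prints apparently mentions an environment variable that turns on verbose diagnostics. I'm not certain this is the right switch, but setting it before creating the context (in code, or in the Xcode scheme's environment variables) seems like it should at least get some explanation logged:

```swift
import Foundation

// Supposedly makes CoreGraphics log *why* a bitmap context
// couldn't be created, instead of silently returning nil.
// Must be set before the first CGContext(...) call.
setenv("CGBITMAP_CONTEXT_LOG_ERRORS", "1", 1)
```

Is this the intended way to debug these failures, or is there something better?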
For example, I'm trying to figure out why this function works with UInt8 and UInt16 but not with Float, and I don't even know where to start debugging.
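For the Float case, my best guess from the Quartz 2D supported-pixel-formats table is that a 32-bit-per-component context additionally needs the floatComponents flag (and possibly an explicit byte order) ORed into bitmapInfo; this is the combination I've been trying, so please correct me if it's wrong:

```swift
import CoreGraphics

let width = 256, height = 256
let bitsPerComponent = 32                                // Float is 32 bits
let bytesPerRow = width * 4 * MemoryLayout<Float>.size   // RGBA, 16 bytes/pixel
// Guessed flags: alpha info alone is not enough for float contexts;
// floatComponents (and maybe byteOrder32Little) seem to be required too.
let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue
    | CGBitmapInfo.floatComponents.rawValue
    | CGBitmapInfo.byteOrder32Little.rawValue

var pixels = [Float](repeating: 0, count: width * height * 4)
pixels.withUnsafeMutableBytes { buffer in
    let context = CGContext(data: buffer.baseAddress, width: width, height: height,
                            bitsPerComponent: bitsPerComponent,
                            bytesPerRow: bytesPerRow,
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: bitmapInfo)
    print(context == nil ? "still nil" : "float context created")
}
```

Even with this I can't tell whether the colour space, the byte order, or the alpha info is the part being rejected.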