NSImage from Video Data

Hi

I am getting a frame of video data from a third-party SDK. The frame object contains the data buffer, data length, Y buffer, U buffer, V buffer, and a few more SDK-specific fields. The data is I420 (judging by the object's name). I am using the following code to try to make an NSImage from the data.

var pseudoVideoData = Data(bytes: buffer!, count: Int(bufSize))

let cgImg = pseudoVideoData.withUnsafeMutableBytes { (ptr) -> CGImage in
    let ctx = CGContext(
        data: ptr.baseAddress,
        width: Int(Double(windowWidth) * 0.562),      // Why?
        height: Int(Double(windowHeight) * 0.519),    // Why? again???
        bitsPerComponent: 8,
        bytesPerRow: Int(4 * streamWidth),
        space: CGColorSpace(name: CGColorSpace.sRGB)!,
        bitmapInfo: bmInfo
    )!

    return ctx.makeImage()!
}

let imgSize = NSSize(width: CGFloat(windowWidth), height: CGFloat(windowHeight))

let img = NSImage(cgImage: cgImg, size: imgSize)
self.pseudoVideoView.image = img

I can blast the result into an NSImageView's image property, but the image is missing colour. I can get at the Y buffer, U buffer, and V buffer, but I don't know how to combine all that data into a properly coloured image.

Can someone point me to a URL or some sample code that I can look at to get over this problem?

Thanks and Best Regards

John

I can point you in the right direction: I420 (YUV) has three planes (or buffers), and you're only decoding one of them. Not knowing how you obtained your initial buffer, I can't tell you how to get the rest. But once you have all three, you have to combine them, typically using a fragment shader.
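If a full Metal/OpenGL fragment-shader pipeline is more than you need for producing a still NSImage, a CPU-side alternative is Accelerate's vImage YpCbCr conversion routines. The sketch below is only an illustration, not your SDK's API: the function name, the plane pointers, strides, and dimensions are placeholders for whatever the SDK hands you, and it assumes video-range BT.601 I420, so adjust to match your source.

import Accelerate
import AppKit

// Sketch only: plane pointers, strides, and dimensions are placeholders
// for whatever the SDK delivers. Assumes video-range BT.601 I420.
func makeNSImage(yPlane: UnsafeMutableRawPointer, uPlane: UnsafeMutableRawPointer,
                 vPlane: UnsafeMutableRawPointer,
                 width: Int, height: Int,
                 yStride: Int, uStride: Int, vStride: Int) -> NSImage? {

    // Wrap the three I420 planes. The chroma planes are quarter size
    // (half width, half height).
    var srcYp = vImage_Buffer(data: yPlane,
                              height: vImagePixelCount(height),
                              width: vImagePixelCount(width),
                              rowBytes: yStride)
    var srcCb = vImage_Buffer(data: uPlane,
                              height: vImagePixelCount(height / 2),
                              width: vImagePixelCount(width / 2),
                              rowBytes: uStride)
    var srcCr = vImage_Buffer(data: vPlane,
                              height: vImagePixelCount(height / 2),
                              width: vImagePixelCount(width / 2),
                              rowBytes: vStride)

    // Destination ARGB8888 buffer.
    var dest = vImage_Buffer()
    guard vImageBuffer_Init(&dest, vImagePixelCount(height), vImagePixelCount(width),
                            32, vImage_Flags(kvImageNoFlags)) == kvImageNoError else {
        return nil
    }
    defer { free(dest.data) }

    // Conversion set up for video-range BT.601; swap in the matrix and
    // pixel range your SDK actually produces (e.g. BT.709, full range).
    var pixelRange = vImage_YpCbCrPixelRange(Yp_bias: 16, CbCr_bias: 128,
                                             YpRangeMax: 235, CbCrRangeMax: 240,
                                             YpMax: 235, YpMin: 16,
                                             CbCrMax: 240, CbCrMin: 16)
    var conversionInfo = vImage_YpCbCrToARGB()
    guard vImageConvert_YpCbCrToARGB_GenerateConversion(
            kvImage_YpCbCrToARGBMatrix_ITU_R_601_4!,
            &pixelRange,
            &conversionInfo,
            kvImage420Yp8_Cb8_Cr8,
            kvImageARGB8888,
            vImage_Flags(kvImageNoFlags)) == kvImageNoError else { return nil }

    // Combine Y + Cb + Cr into ARGB (alpha forced to 255).
    let permuteMap: [UInt8] = [0, 1, 2, 3]   // keep ARGB channel order
    guard vImageConvert_420Yp8_Cb8_Cr8ToARGB8888(&srcYp, &srcCb, &srcCr, &dest,
                                                 &conversionInfo, permuteMap, 255,
                                                 vImage_Flags(kvImageNoFlags)) == kvImageNoError else {
        return nil
    }

    // Wrap the ARGB buffer in a CGImage, then an NSImage.
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var format = vImage_CGImageFormat(
        bitsPerComponent: 8,
        bitsPerPixel: 32,
        colorSpace: Unmanaged.passUnretained(colorSpace),
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipFirst.rawValue),
        version: 0,
        decode: nil,
        renderingIntent: .defaultIntent)
    var error = kvImageNoError
    guard let cgImage = vImageCreateCGImageFromBuffer(&dest, &format, nil, nil,
                                                      vImage_Flags(kvImageNoFlags),
                                                      &error)?.takeRetainedValue(),
          error == kvImageNoError else { return nil }

    return NSImage(cgImage: cgImage, size: NSSize(width: width, height: height))
}

For continuous playback you'd still want the shader route (or something like AVSampleBufferDisplayLayer), since converting every frame on the CPU and rebuilding NSImages gets expensive, but for grabbing the odd frame into an NSImageView this kind of conversion is usually enough.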
