Re-scaling mlmodel response back to 0-255 range

I think resolving this problem will be useful to many developers.

I built a semantic image segmentation network. It takes RGB images and outputs a grayscale image at the same resolution: 224x224 RGB in, 224x224 grayscale out.

Here is the problem: the model expects input in the 0-1 range and produces output in the 0-1 range.
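For context, the prediction call looks roughly like this. SegmentationModel and segmentationMap are only placeholder names standing in for my generated Core ML interface; the real names differ:

    import CoreML
    import CoreVideo

    // Placeholder names: my actual generated model class and its output feature
    // are called something else; this only shows the shape of the call.
    func runSegmentation(on inputPixelBuffer: CVPixelBuffer) throws -> CVPixelBuffer {
        let model = try SegmentationModel(configuration: MLModelConfiguration())
        let output = try model.prediction(image: inputPixelBuffer)   // 224x224 RGB in
        return output.segmentationMap                                 // 224x224 grayscale out, values 0-1
    }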

While building the mlmodel, coremltools nicely scales the input signal, so Swift knows what format to pass. But when I try to display the network's response, it is a black image. I did some investigation (I paused the code, viewed the PixelBuffer through the preview, and then examined the values in Python), and the model does produce a proper mask: ones where the object is, and zeros everywhere else.



Is there any way to make coremltools scale the model response back to the 0-255 range?


Or maybe there is a way to do the rescaling in Swift?
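For example, I imagine something along these lines could work, assuming the output buffer is single-channel 8-bit (kCVPixelFormatType_OneComponent8) and really does hold 0/1 values; I have not verified that this is the right approach:

    import CoreVideo

    extension CVPixelBuffer {

        /// Hypothetical helper: multiplies every byte of a single-channel 8-bit buffer
        /// by a factor, e.g. to map a 0/1 mask to 0/255.
        /// Assumes kCVPixelFormatType_OneComponent8.
        func scaleGrayValues(by factor: UInt8 = 255) {
            CVPixelBufferLockBaseAddress(self, [])
            defer { CVPixelBufferUnlockBaseAddress(self, []) }

            guard let base = CVPixelBufferGetBaseAddress(self) else { return }
            let width = CVPixelBufferGetWidth(self)
            let height = CVPixelBufferGetHeight(self)
            let bytesPerRow = CVPixelBufferGetBytesPerRow(self)
            let pixels = base.assumingMemoryBound(to: UInt8.self)

            // Walk row by row because bytesPerRow may include padding.
            for y in 0..<height {
                let row = pixels + y * bytesPerRow
                for x in 0..<width {
                    // Saturate so 1 -> 255 and anything larger clamps to 255.
                    let scaled = UInt16(row[x]) * UInt16(factor)
                    row[x] = UInt8(min(scaled, 255))
                }
            }
        }
    }

I would call this on the output buffer right after the prediction, before converting it to a CGImage, but I don't know whether mutating the model's output buffer in place like this is safe or idiomatic.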


Here is my code so far:


This function grabs the pixelBuffer straight from the model response and translates it into a CGImage that I immediately display:

extension CVPixelBuffer {

    func toCGImage() -> CGImage? {
        CVPixelBufferLockBaseAddress(self, .readOnly)
        let width = CVPixelBufferGetWidth(self)
        let height = CVPixelBufferGetHeight(self)
        let data = CVPixelBufferGetBaseAddress(self)!

        // Wrap the buffer's bytes in an 8-bit grayscale bitmap context.
        let outContext = CGContext(data: data,
                                   width: width,
                                   height: height,
                                   bitsPerComponent: 8,
                                   bytesPerRow: CVPixelBufferGetBytesPerRow(self),
                                   space: CGColorSpaceCreateDeviceGray(),
                                   bitmapInfo: CGImageAlphaInfo.none.rawValue)!
//                                 space: CGColorSpaceCreateDeviceRGB(),
//                                 bitmapInfo: CGImageByteOrderInfo.order32Little.rawValue | CGImageAlphaInfo.noneSkipFirst.rawValue)!
//      outContext.translateBy(x: 0, y: 1)
//      outContext.scaleBy(x: 0, y: 255)
        let outImage = outContext.makeImage()!
        CVPixelBufferUnlockBaseAddress(self, .readOnly)

        return outImage
    }
}
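For reference, this is roughly how I imagine chaining everything after the prediction (using the placeholder names from the sketches above, inside a throwing method):

    let maskBuffer = try runSegmentation(on: inputPixelBuffer)   // placeholder helper from above
    maskBuffer.scaleGrayValues()                                  // hypothetical: map 0/1 to 0/255
    guard let cgImage = maskBuffer.toCGImage() else { return }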

I also experimented with bitmapInfo set to orderMask, which only crashed the app:


bitmapInfo: CGImageByteOrderInfo.orderMask.rawValue | CGImageAlphaInfo.noneSkipFirst.rawValue)!


Is there any out-of-the-box method to rescale the output of this function from 0-1 to 0-255?


This is how I display the results; of course, the image view stays black:

    DispatchQueue.main.async {
        self.imageView?.image = UIImage(cgImage: cgImage)
    }

I apologize; this is my first Swift app, so I am probably oblivious to simple solutions.

Please help.