
Post not yet marked as solved
1 Replies
1.2k Views
My mlmodel is similar to style-transfer models: input and output are both images. It is a version of U-Net; the input is a color image and the output is a grayscale image. This is what the model's generated .h file shows:

```swift
// input:
/// image as color (kCVPixelFormatType_32BGRA) image buffer, 224 pixels wide by 224 pixels high
open var image: CVPixelBuffer
public init(image: CVPixelBuffer)

// output:
/// mask as grayscale (kCVPixelFormatType_OneComponent8) image buffer, 224 pixels wide by 224 pixels high
open var mask: CVPixelBuffer
public init(mask: CVPixelBuffer)
```

I am feeding the model an image taken from a Mat object (OpenCV) and attempting to return the output into another Mat object. Before feeding the image to the model I transform it: Mat -> CVPixelBuffer -> mlmodel -> CVPixelBuffer -> Mat.

Here is how I use the model. Everything works fine until I attempt to transform the output into a Mat (the last line):

```objc
Mat toSqeeze = obrazek.clone();
cvtColor(toSqeeze, toSqeeze, CV_BGRA2BGR);
resize(toSqeeze, toSqeeze, cv::Size(224, 224));
imageIn = [self matToImageBuffer:toSqeeze];
NSError *error;
output = [self->_myModel predictionFromImage:imageIn error:&error];
Mat imageOut = [self imageBufferToMat:output.mask];
```

I use the following methods. (Color) Mat -> CVPixelBuffer works OK:

```objc
- (CVImageBufferRef)matToImageBuffer:(cv::Mat)mat {
    cv::cvtColor(mat, mat, CV_BGR2BGRA);
    int width = mat.cols;
    int height = mat.rows;
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
        [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
        [NSNumber numberWithInt:width], kCVPixelBufferWidthKey,
        [NSNumber numberWithInt:height], kCVPixelBufferHeightKey,
        nil];
    CVPixelBufferRef imageBuffer;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorMalloc, width, height,
                                          kCVPixelFormatType_32BGRA,
                                          (CFDictionaryRef)CFBridgingRetain(options),
                                          &imageBuffer);
    NSParameterAssert(status == kCVReturnSuccess && imageBuffer != NULL);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *base = CVPixelBufferGetBaseAddress(imageBuffer);
    memcpy(base, mat.data, mat.total() * 4);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return imageBuffer;
}
```

(Grayscale) CVPixelBuffer -> Mat is where the problem is (see the comment on the lock call):

```objc
- (cv::Mat)imageBufferToMat:(CVImageBufferRef)buffer {
    cv::Mat mat;
    CVPixelBufferLockBaseAddress(buffer, 0); // I get here <- EXC_BAD_ACCESS (code=1, address=0x59b2b0290)
    void *address = CVPixelBufferGetBaseAddress(buffer);
    int width = (int)CVPixelBufferGetWidth(buffer);
    int height = (int)CVPixelBufferGetHeight(buffer);
    mat = cv::Mat(height, width, CV_8UC1, address, 0);
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    return mat;
}
```

While attempting to run the code I get the error: Thread 1: EXC_BAD_ACCESS (code=1, address=0x59b2b0290). The closest thing I have narrowed it down to is the grayscale nature of the output, kCVPixelFormatType_OneComponent8.

Am I doing something wrong? Is there another approach when the model output is grayscale? Please help. Thanks in advance.
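For what it's worth, two things in this path seem worth checking. A crash inside CVPixelBufferLockBaseAddress itself usually means the CVPixelBufferRef is already invalid, e.g. because the prediction output object that owns output.mask was released before the buffer was read, so retaining the mask (CVPixelBufferRetain / CVPixelBufferRelease) around the conversion is one thing to try. Separately, even once the lock succeeds, the Mat above wraps memory that is unlocked before the method returns, and it ignores row padding (bytesPerRow can be larger than the width for a OneComponent8 buffer). Below is a minimal copy-based sketch under those assumptions; the function name GrayBufferToMat is made up for illustration:

```objc
#import <opencv2/opencv.hpp>
#import <CoreVideo/CoreVideo.h>

// Sketch: copy a grayscale (kCVPixelFormatType_OneComponent8) buffer into a Mat
// that owns its own storage, so it stays valid after the buffer is unlocked.
static cv::Mat GrayBufferToMat(CVPixelBufferRef buffer) {
    CVPixelBufferRetain(buffer); // keep the buffer alive while we read it
    CVPixelBufferLockBaseAddress(buffer, kCVPixelBufferLock_ReadOnly);
    void *address = CVPixelBufferGetBaseAddress(buffer);
    size_t width = CVPixelBufferGetWidth(buffer);
    size_t height = CVPixelBufferGetHeight(buffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer); // rows may be padded
    cv::Mat wrapped((int)height, (int)width, CV_8UC1, address, bytesPerRow);
    cv::Mat copy = wrapped.clone(); // deep copy before unlocking
    CVPixelBufferUnlockBaseAddress(buffer, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferRelease(buffer);
    return copy;
}
```

As an aside, matToImageBuffer: is normally written with kCFAllocatorDefault rather than kCFAllocatorMalloc, and (CFDictionaryRef)CFBridgingRetain(options) leaks the options dictionary on every call; (__bridge CFDictionaryRef)options avoids that.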
Posted by arucoCode.
Post not yet marked as solved
1 Replies
1.8k Views
I am working with an mlmodel whose output is an MLMultiArray, but in fact the model produces an image: a grayscale mask, 224x224. My app is a mixture of OpenCV, Objective-C, and C++, and I am working in Objective-C. I am trying to translate the MLMultiArray to a Mat matrix for further OpenCV processing.

My approach is to perform the conversion MLMultiArray -> UIImage -> Mat. There is a convenience method in OpenCV to obtain a Mat from a UIImage, but I am stuck on the MLMultiArray -> UIImage portion. I see various posts around the forums that partially resolve the problem, but I have no idea how to put these pieces together. I am good at deep learning but not exactly skilled in Objective-C / Swift. Here are some links that get close to the problem, but none shows the whole formula:

https://stackoverflow.com/questions/47828706/how-to-access-elements-inside-mlmultiarray-in-coreml
https://developer.apple.com/documentation/coreml/mlmultiarray/2879222-strides?language=objc
https://developer.apple.com/documentation/coreml/mlmultiarray/2879231-objectforkeyedsubscript?language=objc

Should I use a loop to go over all elements of the MLMultiArray, or is there a more efficient way to transform the array? Maybe there is a direct way to convert an MLMultiArray into an OpenCV Mat object? Here is a mysterious hint about MLMultiArray -> Mat through dataPointer, but I couldn't find anything more: https://stackoverflow.com/questions/47828706/how-to-access-elements-inside-mlmultiarray-in-coreml

Please help, I am chasing my tail on various forums. Thanks in advance.
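There is indeed a direct MLMultiArray -> Mat route through dataPointer, with no UIImage detour needed (the UIImage step was only ever a bridge to reach Mat). A minimal sketch in Objective-C++, assuming the mask is a contiguous float32 array whose last two dimensions are height and width (e.g. shape [1, 224, 224]); the function name MultiArrayToGrayMat is made up for illustration:

```objc
#import <opencv2/opencv.hpp>
#import <CoreML/CoreML.h>

// Sketch: view a float32 MLMultiArray as a Mat, then convert to 8-bit grayscale.
static cv::Mat MultiArrayToGrayMat(MLMultiArray *array) {
    NSCAssert(array.dataType == MLMultiArrayDataTypeFloat32, @"expected float32 output");
    NSUInteger n = array.shape.count;
    int height = array.shape[n - 2].intValue;
    int width  = array.shape[n - 1].intValue;
    // Zero-copy view of the backing store; valid only while `array` is alive.
    cv::Mat floatMat(height, width, CV_32FC1, (float *)array.dataPointer);
    cv::Mat gray;
    // Scale 0..1 floats to 0..255; drop the 255.0 factor if the model
    // already emits values in the 0..255 range.
    floatMat.convertTo(gray, CV_8UC1, 255.0);
    return gray; // convertTo allocates fresh storage, so gray outlives array
}
```

If the array turns out not to be contiguous, the strides property (second link above) gives the per-dimension stride in elements; multiplying by sizeof(float) gives the byte step to pass to the Mat constructor instead. A loop over objectForKeyedSubscript: also works, but is far slower than wrapping dataPointer.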
Posted by arucoCode.