I have a project where I capture live video from the camera, pass it through a chain of CIFilters, and render the result into an MTLTexture. It all works well, except that each time CIContext's render:toMTLTexture:commandBuffer:bounds:colorSpace: method is called, memory usage increases by ~150 MB and never goes back down. As a result, the app is killed by the OS for memory pressure after about 15-20 frames are processed.
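For context, this function runs once per captured frame. Here is a simplified sketch of the call site (the delegate wiring and the processImage:commandBuffer: name are illustrative, not my exact code):

- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Wrap the camera frame in a CIImage and run it through the filter chain
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];
    id<MTLTexture> phaseTexture = [self processImage:image commandBuffer:commandBuffer];
    [commandBuffer commit];
    // phaseTexture is then consumed by later Metal passes
}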
I have isolated the issue to the following image-processing function:
- (id<MTLTexture>)processImage:(CIImage *)image commandBuffer:(id<MTLCommandBuffer>)commandBuffer {
    // Initialise the required filters
    CIFilter *grayScaleFilter = [CIFilter filterWithName:@"CIColorMatrix"
                                           keysAndValues:@"inputRVector", [CIVector vectorWithX:1.0 / 3.0 Y:1.0 / 3.0 Z:1.0 / 3.0 W:0.0], nil];
    CIFilter *blurFilter = [CIFilter filterWithName:@"CIBoxBlur"
                                      keysAndValues:kCIInputRadiusKey, @3.0f, nil];
    // Sobel kernels for the horizontal (dx) and vertical (dy) derivatives
    const CGFloat dxFilterValues[9] = { 1, 0, -1, 2, 0, -2, 1, 0, -1 };
    CIFilter *dxFilter = [CIFilter filterWithName:@"CIConvolution3X3"
                                    keysAndValues:kCIInputWeightsKey, [CIVector vectorWithValues:dxFilterValues count:9], nil];
    const CGFloat dyFilterValues[9] = { 1, 2, 1, 0, 0, 0, -1, -2, -1 };
    CIFilter *dyFilter = [CIFilter filterWithName:@"CIConvolution3X3"
                                    keysAndValues:kCIInputWeightsKey, [CIVector vectorWithValues:dyFilterValues count:9], nil];
    // Phase filter is my custom filter implemented with a Metal kernel
    CIFilter *phaseFilter = [CIFilter filterWithName:@"PhaseFilter"];
    // Apply the filter chain to the input image
    [grayScaleFilter setValue:image forKey:kCIInputImageKey];
    [blurFilter setValue:grayScaleFilter.outputImage forKey:kCIInputImageKey];
    [dxFilter setValue:blurFilter.outputImage forKey:kCIInputImageKey];
    [dyFilter setValue:blurFilter.outputImage forKey:kCIInputImageKey];
    [phaseFilter setValue:dxFilter.outputImage forKey:@"inputX"];
    [phaseFilter setValue:dyFilter.outputImage forKey:@"inputY"];
    // Initialise the output MTLTexture (R8Unorm: only a single channel is needed for the phase)
    MTLTextureDescriptor *desc = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatR8Unorm
                                                                                    width:720
                                                                                   height:1280
                                                                                mipmapped:NO];
    desc.usage = MTLTextureUsageShaderWrite | MTLTextureUsageShaderRead;
    id<MTLTexture> phaseTexture = [CoreImageOperations::device newTextureWithDescriptor:desc];
    // Render to the MTLTexture
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Memory usage increases by ~150 MB after the following call!!!
    [context render:phaseFilter.outputImage
       toMTLTexture:phaseTexture
      commandBuffer:commandBuffer
             bounds:phaseFilter.outputImage.extent
         colorSpace:colorSpace];
    CGColorSpaceRelease(colorSpace);
    return phaseTexture;
}
I profiled the memory usage with Instruments and found that most of the memory was held by IOSurface objects, with Core Image listed as the responsible library and CreateCachedSurface as the responsible caller (see the screenshot below).
This is very strange, because I set up my CIContext not to cache intermediates with the following line:
CIContext *context = [CIContext contextWithMTLCommandQueue:commandQueue options:@{
    kCIContextWorkingFormat: @(kCIFormatRGBAf),
    kCIContextCacheIntermediates: @NO,
    kCIContextName: @"Image Processor"
}];
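The growth should be reproducible without the camera by rendering the same image in a loop. A hypothetical repro sketch (testImage is any fixed CIImage; the iteration count is arbitrary):

// Each pass through this loop should add another ~150 MB of IOSurface memory
for (int i = 0; i < 20; i++) {
    @autoreleasepool { // rules out transiently autoreleased objects as the cause
        id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
        id<MTLTexture> texture = [self processImage:testImage commandBuffer:commandBuffer];
        [commandBuffer commit];
        [commandBuffer waitUntilCompleted];
    }
}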
Any thoughts or advice would be greatly appreciated!