There seems to be a new MLE5Engine in iOS 17 and macOS 14 that causes issues with our style transfer models:
1. The output is wrong (just gray pixels) and not the same as on iOS 16.
2. There is a large memory leak: memory consumption increases rapidly with each new frame.
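Both issues show up in a plain per-frame prediction loop, roughly like the sketch below. This is a simplified illustration rather than our actual code: the `StyleModelInput` type, its `image` input name, and the frame source are placeholders based on the usual Xcode-generated Core ML wrapper.

```swift
import CoreML
import CoreVideo

// Simplified sketch of the per-frame style transfer loop.
// "StyleModelInput" and its "image" input are assumptions based on the
// usual Xcode-generated Core ML interface; adjust to the real model.
func runStyleTransfer(on frames: [CVPixelBuffer]) throws {
    let model = try StyleModel(configuration: MLModelConfiguration())

    for frame in frames {
        autoreleasepool {
            // One prediction per video frame. On iOS 17 / macOS 14 the output
            // is gray and memory grows with every iteration of this loop.
            let input = StyleModelInput(image: frame)
            guard let output = try? model.prediction(input: input) else { return }
            // Consume the stylized output here (display, encode, ...).
            _ = output
        }
    }
}
```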
Concerning 2): There are a lot of CVPixelBuffers leaking during prediction. Those buffers somehow have references to themselves and are not released properly. Here is a stack trace of how the buffers are created:
0 _malloc_zone_malloc_instrumented_or_legacy
1 _CFRuntimeCreateInstance
2 CVObject::alloc(unsigned long, _CFAllocator const*, unsigned long, unsigned long)
3 CVPixelBuffer::alloc(_CFAllocator const*)
4 CVPixelBufferCreate
5 +[MLMultiArray(ImageUtils) pixelBufferBGRA8FromMultiArrayCHW:channelOrderIsBGR:error:]
6 MLE5OutputPixelBufferFeatureValueByCopyingTensor
7 -[MLE5OutputPortBinder _makeFeatureValueFromPort:featureDescription:error:]
8 -[MLE5OutputPortBinder _makeFeatureValueAndReturnError:]
9 __36-[MLE5OutputPortBinder featureValue]_block_invoke
10 _dispatch_client_callout
11 _dispatch_lane_barrier_sync_invoke_and_complete
12 -[MLE5OutputPortBinder featureValue]
13 -[MLE5OutputPort featureValue]
14 -[MLE5ExecutionStreamOperation outputFeatures]
15 -[MLE5Engine _predictionFromFeatures:options:usingStream:operation:error:]
16 -[MLE5Engine _predictionFromFeatures:options:error:]
17 -[MLE5Engine predictionFromFeatures:options:error:]
18 -[MLDelegateModel predictionFromFeatures:options:error:]
19 StyleModel.prediction(input:options:)
When we manually disable the MLE5Engine, the models run as expected.
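For what it's worth, the behavior can also be compared across the public compute-unit settings. The sketch below (a hedged diagnostic, not the mechanism we use to disable the MLE5Engine) loads the same model once per MLComputeUnits value and runs a single prediction, which makes it easy to check which execution paths show the gray output and the memory growth; `modelURL` and the generated StyleModel types are placeholders.

```swift
import Foundation
import CoreML
import CoreVideo

// Hedged diagnostic sketch: load the same model with each compute-unit
// setting and run one prediction per setting.
func compareComputeUnits(modelURL: URL, frame: CVPixelBuffer) throws {
    let allUnits: [MLComputeUnits] = [.all, .cpuAndNeuralEngine, .cpuAndGPU, .cpuOnly]

    for units in allUnits {
        let configuration = MLModelConfiguration()
        configuration.computeUnits = units

        let model = try StyleModel(contentsOf: modelURL, configuration: configuration)
        let output = try model.prediction(input: StyleModelInput(image: frame))
        print("computeUnits=\(units.rawValue): got output \(output)")
    }
}
```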
Is this an issue caused by our model, or is it a bug in Core ML?