Core ML model with the same input produces different results in the Simulator on an M1 chip

Hey

We are testing a project on Xcode 14 beta 5 and have an issue with a model that is simply Apple's Vision Feature Print (embeddings). The model takes a 299x299 image as input, runs a visionFeaturePrint layer, and outputs a float64[2048] vector. It is packaged as a Core ML Package v3 and was created with CoreML Tools by cutting off the classifier layer that Create ML adds to a classification model. In the Simulator (on an Apple M1 chip), the result depends solely on which iteration invokes the prediction, regardless of the input image. On the device it works as expected.


let config = MLModelConfiguration()

#if targetEnvironment(simulator)
config.computeUnits = .cpuOnly
#else
config.computeUnits = .all
#endif

model = try! ImageSemanticInfo_iOS(configuration: config)

let buffer = thumb!.toCVPixelBuffer()!

for _ in 0..<3 {
    // Same pixel buffer every time, yet the sceneprint changes per call in the Simulator.
    let results = try! model!.prediction(image: buffer).sceneprint
    print(results[0])
}
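For reference, toCVPixelBuffer() is a standard UIImage-to-CVPixelBuffer conversion; a minimal sketch of such a helper (assuming a UIImage source rendered into a 32BGRA buffer; our real implementation may differ slightly, but the conversion itself shouldn't matter here):

import UIKit
import CoreVideo

extension UIImage {
    // Sketch: render the image into a plain 32BGRA CVPixelBuffer.
    func toCVPixelBuffer() -> CVPixelBuffer? {
        let width = Int(size.width)
        let height = Int(size.height)
        let attrs: [CFString: Any] = [
            kCVPixelBufferCGImageCompatibilityKey: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey: true
        ]
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                         kCVPixelFormatType_32BGRA,
                                         attrs as CFDictionary, &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

        CVPixelBufferLockBaseAddress(buffer, [])
        defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

        guard let cgImage = cgImage,
              let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                      width: width, height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue |
                                                  CGBitmapInfo.byteOrder32Little.rawValue)
        else { return nil }

        // Draw the source image into the buffer's backing memory.
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return buffer
    }
}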

For example, if we take just the first entry of the embedding, we will always get the following results, regardless of the input image used:

First call: 0.474750816822052
Second call: 0.3231460750102997
Third call: 0.37376347184181213
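As an additional data point, the same Vision Feature Print can be exercised directly through the Vision framework; comparing its first element with the model's sceneprint in the Simulator should show whether the converted model or the Simulator runtime is at fault. A minimal sketch, assuming the pixel buffer from the snippet above (VNGenerateImageFeaturePrintRequest is the standard Vision API for this; the scene feature print's element type should be Float, which can be verified via observation.elementType):

import Vision

func featurePrintFirstElement(for buffer: CVPixelBuffer) throws -> Float? {
    let request = VNGenerateImageFeaturePrintRequest()
    #if targetEnvironment(simulator)
    // Mirror the CPU-only configuration used for the Core ML model above.
    request.usesCPUOnly = true
    #endif
    let handler = VNImageRequestHandler(cvPixelBuffer: buffer, options: [:])
    try handler.perform([request])
    guard let observation = request.results?.first else { return nil }

    // The feature print is exposed as raw Data; read the first Float element.
    return observation.data.withUnsafeBytes { raw in
        raw.bindMemory(to: Float.self).first
    }
}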

Replies

Just noticed that this is also happening with Xcode 13.4.1 on an Intel chip, so this looks like an "old" bug in the simulator...

Yes, it does sound like a bug in the simulator. Could you please file a bug report on feedbackassistant.apple.com?