I have code that ran roughly 7x faster on Ventura than it now runs on Sonoma.
For the basic model training I used:

import CreateML

let pmst = MLBoostedTreeRegressor.ModelParameters(validation: .split(strategy: .automatic), maxIterations: 10000)
let model = try MLBoostedTreeRegressor(trainingData: trainingdata, targetColumn: columntopredict, parameters: pmst)
This took around 2 seconds on Ventura and now takes between 10 and 14 seconds on Sonoma.
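For reference, those figures come from a plain wall-clock timing around the training call, along these lines (a sketch reusing pmst, trainingdata and columntopredict from the snippet above, rather than the exact measurement code from my project):

import Foundation

// Wall-clock timing around the training call (illustrative only)
let start = CFAbsoluteTimeGetCurrent()
let model = try MLBoostedTreeRegressor(trainingData: trainingdata, targetColumn: columntopredict, parameters: pmst)
let elapsed = CFAbsoluteTimeGetCurrent() - start
print("Training took \(elapsed) seconds")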
I have tried to investigate why, and when I inspect the model configuration I see these results:
useWatchSPIForScribble: NO,
allowLowPrecisionAccumulationOnGPU: NO,
allowBackgroundGPUComputeSetting: NO,
preferredMetalDevice: (null),
enableTestVectorMode: NO,
parameters: (null),
rootModelURL: (null),
profilingOptions: 0,
usePreloadedKey: NO,
trainWithMLCompute: NO,
parentModelName: ,
modelName: Unnamed_Model,
experimentalMLE5EngineUsage: Enable,
preparesLazily: NO,
predictionConcurrencyHint: 0,
Why is the preferred Metal Device null?
If I do

import Metal

let devices = MTLCopyAllDevices()
for device in devices {
    // Assign each available GPU to the configuration and list its name
    config.preferredMetalDevice = device
    print(device.name)
}
I can see that the M1 GPU is available but not selected (although from reading the documentation, the default is supposed to be nil?).
Is this the reason why it is so slow? Is there a way to force a change in the config or elsewhere? Why has the default changed, if it has?
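To illustrate what I mean by forcing a change in the config: the only knobs I can see on MLModelConfiguration are preferredMetalDevice and computeUnits, something like the sketch below (the compiled model path is just a placeholder, and I do not know whether the CreateML training path respects this configuration at all):

import CoreML
import Foundation
import Metal

let config = MLModelConfiguration()
// Explicitly request the system default GPU instead of leaving the device nil
config.preferredMetalDevice = MTLCreateSystemDefaultDevice()
// Allow CPU, GPU and the Neural Engine
config.computeUnits = .all

// Placeholder path, just to show where the configuration would be applied
let compiledURL = URL(fileURLWithPath: "/path/to/Model.mlmodelc")
let loaded = try MLModel(contentsOf: compiledURL, configuration: config)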
I have accidentally updated to Sonoma and found that my Core ML models are being generated nearly 7x more slowly since the update. I also no longer get the verbose information in the terminal (i.e. time taken per cycle, deviation from the actual result, etc.). This is using Xcode and Swift, developed for macOS.
The M1 laptop I am using is also under considerably less stress (i.e. it is no longer getting warm).
Is there a flag I need to set or a button I need to press to increase performance? Any suggestions would be helpful.
Please note it has been 24+ hours since the update, so this should no longer be affected by the usual post-upgrade background tasks.
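As a partial substitute for the missing per-iteration console output, the summary metrics are still available on the trained model once the run finishes, for example (a sketch assuming the model variable from the training code above; this only gives the final figures, not the per-cycle log):

// Summary metrics after training (not the per-iteration console log)
print("Training RMSE: \(model.trainingMetrics.rootMeanSquaredError)")
print("Training max error: \(model.trainingMetrics.maximumError)")
print("Validation RMSE: \(model.validationMetrics.rootMeanSquaredError)")
print("Validation max error: \(model.validationMetrics.maximumError)")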