I had code that ran 7x faster in Ventura compared to how it runs now in Sonoma.
For the basic model training I used
let pmst = MLBoostedTreeRegressor.ModelParameters(validation: .split(strategy: .automatic), maxIterations: 10000)
let model = try MLBoostedTreeRegressor(trainingData: trainingdata, targetColumn: columntopredict, parameters: pmst)
This took around 2 seconds in Ventura and now takes between 10 and 14 seconds in Sonoma.
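For reference, here is a minimal, self-contained way to reproduce the timing. The column names, row count and synthetic data below are placeholders I've made up for illustration (my real table is different), but the training parameters match the code above:

import CreateML
import Foundation

// Synthetic stand-in data purely for timing the training call;
// the columns and row count here are illustrative only.
let rowCount = 50_000
let columns: [String: MLDataValueConvertible] = [
    "feature1": (0..<rowCount).map { _ in Double.random(in: 0...1) },
    "feature2": (0..<rowCount).map { _ in Double.random(in: 0...1) },
    "target":   (0..<rowCount).map { _ in Double.random(in: 0...1) }
]

do {
    let trainingdata = try MLDataTable(dictionary: columns)
    let pmst = MLBoostedTreeRegressor.ModelParameters(
        validation: .split(strategy: .automatic),
        maxIterations: 10000
    )

    // Time only the training call itself.
    let start = Date()
    _ = try MLBoostedTreeRegressor(
        trainingData: trainingdata,
        targetColumn: "target",
        parameters: pmst
    )
    print("Training took \(Date().timeIntervalSince(start)) seconds")
} catch {
    print("Training failed: \(error)")
}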
I have tried to investigate why, and have noticed that when I inspect the configuration used during training, I see these results:
useWatchSPIForScribble: NO,
allowLowPrecisionAccumulationOnGPU: NO,
allowBackgroundGPUComputeSetting: NO,
preferredMetalDevice: (null),
enableTestVectorMode: NO,
parameters: (null),
rootModelURL: (null),
profilingOptions: 0,
usePreloadedKey: NO,
trainWithMLCompute: NO,
parentModelName: ,
modelName: Unnamed_Model,
experimentalMLE5EngineUsage: Enable,
preparesLazily: NO,
predictionConcurrencyHint: 0,
Why is the preferred Metal Device null?
If I do
import Metal

// List every available Metal device and try assigning each one to the configuration.
let devices = MTLCopyAllDevices()
for device in devices {
    config.preferredMetalDevice = device
    print(device.name)
}
I can see that the M1 GPU is available but not selected (from reading the documentation, the default should be nil?).
Is this the reason why it is so slow? Is there a way to force a change in the config or elsewhere? Why has the default changed, if it has?
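For completeness, this is the kind of explicit setup I was hoping would force the GPU. I'm assuming here that config is an MLModelConfiguration, and I don't know whether the CreateML training path actually honours it:

import CoreML
import Metal

// Attempt to pin the configuration to the system GPU explicitly.
// Whether MLBoostedTreeRegressor training respects this is exactly
// what I'm unsure about.
let config = MLModelConfiguration()
config.computeUnits = .all
if let gpu = MTLCreateSystemDefaultDevice() {
    config.preferredMetalDevice = gpu
    print("Requesting Metal device: \(gpu.name)")
}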