ML Compute


Accelerate training and validation of neural networks using the CPU and GPUs.

Posts under ML Compute tag

37 Posts

Post · Replies · Boosts · Views · Activity

WebGPU Enabled but WKWebView doesn't have GPU Access
We enabled the WebGPU feature flag in Safari on iOS 18.2. This gives Safari itself GPU access, but WKWebView still does not have GPU access. Can WKWebView not gain GPU access through the Safari feature flag? Is there some other mechanism by which we can enable GPU access for WKWebView? We are testing GPU access by loading https://webgpureport.org/. Regards, Saalis Umer, Microsoft
Safari feature flag: webgpu = true
Safari GPU access:
WKWebView GPU access:
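As context, a minimal sketch (my own illustration, not code from the post) of probing for WebGPU from a WKWebView by evaluating navigator.gpu after a page load; the URL matches the test page above, while the class and method names are placeholders:

import WebKit

final class GPUProbe: NSObject, WKNavigationDelegate {
    // Plain default configuration; the post's question is whether any extra setup can expose the GPU here.
    let webView = WKWebView(frame: .zero, configuration: WKWebViewConfiguration())

    func start() {
        webView.navigationDelegate = self
        webView.load(URLRequest(url: URL(string: "https://webgpureport.org/")!))
    }

    func webView(_ webView: WKWebView, didFinish navigation: WKNavigation!) {
        // navigator.gpu is only defined when the page actually has WebGPU access.
        webView.evaluateJavaScript("typeof navigator.gpu !== 'undefined'") { result, _ in
            print("WebGPU available in WKWebView:", result as? Bool ?? false)
        }
    }
}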
Replies: 0 · Boosts: 0 · Views: 97 · Activity: 5d
The YOLO11 object detection model I exported to Core ML stopped working in the macOS 15.2 beta.
After updating to the macOS 15.2 beta, the YOLO11 object detection model exported to Core ML outputs incorrect, abnormal bounding boxes. It also doesn't work in iOS apps built on a 15.2 Mac. The same model worked fine on macOS 14.1. When I train a custom YOLO11 model in Python, export it to Core ML, and test it in the preview tab of the mlpackage on macOS 15.2 with Xcode 16.0, I get the result described above.
Replies: 1 · Boosts: 1 · Views: 202 · Activity: 1w
Unable to Use M1 Mac Pro Max GPU for TensorFlow Model Training
Hi everyone, I'm currently facing an issue where TensorFlow is unable to detect the GPU on my M1 Mac for model training. When I run the following code to check for available GPUs:

import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))

the output is:

Num GPUs Available: 0

I have already applied the steps described in the Apple developer documentation: https://developer.apple.com/metal/tensorflow-plugin/
System information: Device: M1 Mac Pro Max; Python version: 3.12.2; TensorFlow version: 2.17.0; OS: macOS Sequoia (15.1).
Questions: Is there any additional configuration required to enable GPU support on M1 Macs? Are there specific TensorFlow versions I should be using for better compatibility? Has anyone else faced this issue, and how did you resolve it?
Replies: 0 · Boosts: 1 · Views: 362 · Activity: Nov ’24
macOS 15.x crashes in MetalPerformanceShadersGraph
In our app we use Core ML, but ever since macOS 15.x was released we have been getting a large number of crashes like this one:

Incident Identifier: 424041c3-884b-4e50-bb5a-429a83c3e1c8
CrashReporter Key: B914246B-1291-4D44-984D-EDF84B52310E
Hardware Model: Mac14,12
Process: <REMOVED> [1509]
Path: /Applications/<REMOVED>
Identifier: com.<REMOVED>
Version: <REMOVED>
Code Type: arm64
Parent Process: launchd [1]
Date/Time: 2024-11-13T13:23:06.999Z
Launch Time: 2024-11-13T13:22:19Z
OS Version: Mac OS X 15.1.0 (24B83)
Report Version: 104
Exception Type: SIGABRT
Exception Codes: #0 at 0x189042600
Crashed Thread: 36

Thread 36 Crashed:
0 libsystem_kernel.dylib 0x0000000189042600 __pthread_kill + 8
1 libsystem_c.dylib 0x0000000188f87908 abort + 124
2 libsystem_c.dylib 0x0000000188f86c1c __assert_rtn + 280
3 Metal 0x0000000193fdd870 MTLReportFailure.cold.1 + 44
4 Metal 0x0000000193fb9198 MTLReportFailure + 444
5 MetalPerformanceShadersGraph 0x0000000222f78c80 -[MPSGraphExecutable initWithMPSGraphPackageAtURL:compilationDescriptor:] + 296
6 Espresso 0x00000001a290ae3c E5RT::SharedResourceFactory::GetMPSGraphExecutable(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, NSDictionary*) + 932
...
43 CoreML 0x0000000192d263bc -[MLModelAsset modelWithConfiguration:error:] + 120
44 CoreML 0x0000000192da96d0 +[MLModel modelWithContentsOfURL:configuration:error:] + 176
45 <REMOVED> 0x000000010497b758 -[<REMOVED> <REMOVED>] (<REMOVED>)

There were no similar crashes on macOS 12-14!
MetalPerformanceShadersGraph.log
Any clue what is causing this? Thanks! :)
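For reference, the crashing frames sit underneath +[MLModel modelWithContentsOfURL:configuration:error:] (frame 44). A minimal Swift sketch of that load follows; the commented compute-units line is purely speculative on my part (an attempt to keep the model off the MPSGraph/GPU path) and is not something the post confirms as a workaround:

import CoreML

func loadModel(at url: URL) throws -> MLModel {
    let configuration = MLModelConfiguration()
    // Speculative mitigation, not confirmed by the post:
    // configuration.computeUnits = .cpuAndNeuralEngine
    return try MLModel(contentsOf: url, configuration: configuration)
}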
Replies: 2 · Boosts: 1 · Views: 324 · Activity: 1w
How to Fine-Tune the SNSoundClassifier for Custom Sound Classification in iOS?
Hi Apple Developer Community, I'm exploring ways to fine-tune the SNSoundClassifier so that users of my iOS app can personalize the model by adding custom sounds or adjusting predictions. Apple's WWDC session on sound classification explains how to train from scratch, but I'm specifically interested in using SNSoundClassifier as the base model and building or fine-tuning on top of it. Here are a few questions I have:
1. Fine-tuning SNSoundClassifier: Is there a way to fine-tune this model programmatically through APIs? The manual approach on macOS shown in the documentation is clear, but how can this be done dynamically, within the app for users or in a cloud backend (AWS/iCloud)? Are there APIs or classes that support such on-device or cloud-based fine-tuning or incremental learning? If not directly, can the classifier's embeddings be used to train a lightweight custom layer? Training is likely computationally intensive and drains too much battery; doing it in the cloud may be the right way, but I need the right APIs to get this done. Sample code would help.
2. Recommended approach for in-app model customization: If SNSoundClassifier doesn't support fine-tuning, would transfer learning on models like MobileNetV2, YAMNet, OpenL3, or FastViT be more suitable? Among these models (SNSoundClassifier, MobileNetV2, YAMNet, OpenL3, FastViT), which would be best for accuracy and performance/efficiency on iOS? I aim to maintain real-time performance without sacrificing battery life. It is also important to see architecture retention and accuracy after conversion to a Core ML model.
3. Cost-effective backend setup for training: Mac EC2 instances on AWS have a 24-hour minimum billing period, which can become expensive for limited user requests. Are there better alternatives for deploying and training models on demand when a user uploads training data?
4. TensorFlow vs PyTorch: Between TensorFlow and PyTorch, which framework would you recommend for iOS Core ML integration? TensorFlow Lite offers mobile-optimized models, but I'm also curious about PyTorch's performance when converted to Core ML.
5. Metrics: The metrics I have in mind while picking a model are: publisher, accuracy, fine-tuning capability, real-time/live use, suitability for iPhone 16, architectural retention after Core ML conversion, reasons for unsuitability, and recommended use case.
Any insights or recommended approaches would be greatly appreciated. Thanks in advance!
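For context, a minimal sketch of running the built-in classifier the post wants to build on, using standard SoundAnalysis API; nothing here performs fine-tuning, and the function and class names are placeholders of my own:

import AVFoundation
import SoundAnalysis

final class ResultsObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
}

func classify(fileAt url: URL) throws {
    // .version1 is the built-in sound classifier referenced in the post.
    let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
    let analyzer = try SNAudioFileAnalyzer(url: url)
    let observer = ResultsObserver()
    try analyzer.add(request, withObserver: observer)
    analyzer.analyze() // Runs synchronously until the whole file has been processed.
}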
Replies: 6 · Boosts: 1 · Views: 548 · Activity: 2w
Does iPhone 15 Pro Use a Single Microphone or Multiple Microphones for Voice and Sound Recognition?
Hello, I have a question regarding the voice and sound recognition features on the iPhone 15 Pro. The iPhone 15 Pro is equipped with four microphones, and I understand that for features like Apple’s sound recognition and when invoking Siri, the microphone(s) must always be active. My question is whether the device uses a single microphone (mono channel) for these functions or if multiple microphones are activated simultaneously. I would appreciate clarification on how the microphones are utilized in sound and voice recognition features. Thank you for your assistance. Best regards.
Replies: 1 · Boosts: 0 · Views: 428 · Activity: Oct ’24
Optimizing YOLOv8 for Real-Time Object Detection in a Specific Screen Area
I’m working on real-time object detection using YOLOv8, but I only need to detect objects in approximately 40% of the screen area. Is it possible to limit the captureOutput method to focus solely on that specific region of the screen? If this isn’t feasible, I’m considering an approach where the full-screen pixel buffer is captured and then cropped to the target area before running detection. However, I’m concerned about how this might affect real-time performance. I’d appreciate any insights on how to maintain real-time performance, or suggestions for better alternatives. Thank you!
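For illustration, a sketch of the cropping approach described above: the full-frame pixel buffer is wrapped in a CIImage and cropped to the target region before the Vision/Core ML request runs. The region, function, and request names are placeholder assumptions, not code from the post. Vision's VNImageBasedRequest also exposes a regionOfInterest property that limits processing to a normalized sub-rectangle without an explicit crop.

import CoreImage
import CoreVideo
import Vision

// Placeholder: normalized region covering roughly 40% of the frame.
let detectionRegion = CGRect(x: 0.0, y: 0.3, width: 1.0, height: 0.4)

func runDetection(on pixelBuffer: CVPixelBuffer, with request: VNCoreMLRequest) throws {
    let fullImage = CIImage(cvPixelBuffer: pixelBuffer)
    let extent = fullImage.extent
    // Convert the normalized region to pixel coordinates and crop before inference.
    let cropRect = CGRect(x: detectionRegion.minX * extent.width,
                          y: detectionRegion.minY * extent.height,
                          width: detectionRegion.width * extent.width,
                          height: detectionRegion.height * extent.height)
    // Note: detection results come back in the cropped image's coordinate space.
    let handler = VNImageRequestHandler(ciImage: fullImage.cropped(to: cropRect), options: [:])
    try handler.perform([request])
}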
Replies: 2 · Boosts: 0 · Views: 371 · Activity: Oct ’24
Urgent Issue with SoundAnalysis in iOS 18 - Critical Background Permissions Error
We are experiencing a major issue with the native .version1 classifier of the SoundAnalysis framework in iOS 18, which has left all of our users without recordings. Our core feature relies heavily on sound analysis in the background, and it previously worked flawlessly on prior iOS versions. On iOS 18, however, sound analysis stops working in the background and triggers a critical warning.
Details of the issue: We are using SoundAnalysis to analyze background sounds and have enabled the necessary background permissions. We are using the latest Xcode. A warning now appears, and sound analysis fails in the background. Below is the warning message we are encountering:
Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background)
[Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted); code=7 status=-1
Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).
CoreML prediction failed with Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 0 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 0 in pipeline, NSUnderlyingError=0x30330e910 {Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 1 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 1 in pipeline, NSUnderlyingError=0x303307840 {Error Domain=com.apple.CoreML Code=0 "Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1)." UserInfo={NSLocalizedDescription=Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).}}}}}
We urgently need guidance or a fix for this, as our application's main functionality is severely impacted by this background-permission error. Please let us know the next steps, or whether this is a known issue with iOS 18.
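For context, a minimal sketch of the kind of streaming setup the post describes: microphone buffers fed into an SNAudioStreamAnalyzer running a .version1 classification request. The engine wiring, buffer size, and observer are illustrative assumptions on my part, not the app's actual code, and nothing here changes the background-permission behavior reported above.

import AVFoundation
import SoundAnalysis

final class SoundObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
}

func startSoundAnalysis() throws -> (AVAudioEngine, SNAudioStreamAnalyzer, SoundObserver) {
    let engine = AVAudioEngine()
    let format = engine.inputNode.outputFormat(forBus: 0)
    let analyzer = SNAudioStreamAnalyzer(format: format)
    let observer = SoundObserver()
    let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
    try analyzer.add(request, withObserver: observer)

    // Feed microphone buffers to the analyzer as they arrive.
    engine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, time in
        analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
    }
    try engine.start()
    return (engine, analyzer, observer) // Caller must keep these alive.
}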
Replies: 12 · Boosts: 11 · Views: 1.4k · Activity: 2w
Code with Swift Assist
Hello, I would like to inquire about the release date of Swift Assist’s beta version. Apple has stated that it will be released later this year, but they have not provided a specific date or time. Could you please provide information on the beta version’s release date? Additionally, is there a trial version available? If so, when was it released? Thank you for your assistance.
Replies: 1 · Boosts: 0 · Views: 1.5k · Activity: Sep ’24
MLTensor computation took more time than expected.
func testMLTensor() {
    let t1 = MLTensor(shape: [2000, 1], scalars: [Float](repeating: Float.random(in: 0.0...10.0), count: 2000), scalarType: Float.self)
    let t2 = MLTensor(shape: [1, 3000], scalars: [Float](repeating: Float.random(in: 0.0...10.0), count: 3000), scalarType: Float.self)
    for _ in 0...50 {
        let t = Date()
        let x = (t1 * t2)
        print("MLTensor", t.timeIntervalSinceNow * 1000, "ms")
    }
}

testMLTensor()

The above code took more time than expected, especially in the early iterations.
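As a point of comparison (my own sketch, not from the post), the same multiply can be timed with the result explicitly materialized via shapedArray(of:), since MLTensor work completes asynchronously and the elapsed time around the * operator alone may not capture the full computation; the leading minus sign just turns timeIntervalSinceNow into positive milliseconds:

import CoreML
import Foundation

func testMLTensorMaterialized() async {
    let t1 = MLTensor(shape: [2000, 1], scalars: (0..<2000).map { _ in Float.random(in: 0...10) }, scalarType: Float.self)
    let t2 = MLTensor(shape: [1, 3000], scalars: (0..<3000).map { _ in Float.random(in: 0...10) }, scalarType: Float.self)
    for _ in 0...50 {
        let start = Date()
        let product = t1 * t2
        // Pull the result back to the CPU so the timing includes the actual compute.
        _ = await product.shapedArray(of: Float.self)
        print("MLTensor", -start.timeIntervalSinceNow * 1000, "ms")
    }
}

// Usage: Task { await testMLTensorMaterialized() }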
Replies: 1 · Boosts: 0 · Views: 518 · Activity: Aug ’24
iOS 18.1 beta - App crashes at runtime while using Translation.TranslationError in project
I'm trying to cast the error thrown by TranslationSession.translations(from:) as Translation.TranslationError. However, the app crashes at runtime whenever Translation.TranslationError is used in the project.
Environment: iOS version: 18.1 beta; Xcode version: 16 beta.
dyld[14615]: Symbol not found: _$s11Translation0A5ErrorVMa
Referenced from: <3426152D-A738-30C1-8F06-47D2C6A1B75B> /private/var/containers/Bundle/Application/043A25BC-E53E-4B28-B71A-C21F77C0D76D/TranslationAPI.app/TranslationAPI.debug.dylib
Expected in: /System/Library/Frameworks/Translation.framework/Translation
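For reference, a minimal sketch of the cast being described; the request construction and the surrounding function are assumptions on my part, while translations(from:) and Translation.TranslationError come from the post:

import Translation

func translate(_ texts: [String], with session: TranslationSession) async {
    let requests = texts.map { TranslationSession.Request(sourceText: $0) }
    do {
        let responses = try await session.translations(from: requests)
        responses.forEach { print($0.targetText) }
    } catch let error as TranslationError {
        // Merely referencing Translation.TranslationError is what reportedly triggers the dyld crash.
        print("Translation failed:", error)
    } catch {
        print("Unexpected error:", error)
    }
}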
Replies: 1 · Boosts: 1 · Views: 849 · Activity: Aug ’24
Use iPad M1 processor as GPU
Hello, I’m currently working on TinyML, or ML on the edge, using the Google Colab platform. Having exhausted my free compute units, I’m being prompted to pay. I’ve been considering leveraging the GPU capabilities of my M1 iPad and my Intel-based Mac. Both devices have Thunderbolt ports capable of connections of up to 30GB/s. Since I’m primarily using a classification model, extensive GPU usage isn’t necessary. I’m looking for assistance or guidance on using the iPad’s processor as an eGPU for my Mac, possibly through an API or Apple technology. Any help would be greatly appreciated!
Replies: 2 · Boosts: 0 · Views: 817 · Activity: Sep ’24
CoreML 6 beta 2 - Failed to create CVPixelBufferPool
Hello everyone, I am trying to train using Create ML version 6.0 beta (146.1) with the Image Feature Print v2 feature extractor. I am using 100K images, about 4 GB in total, on my M3 Max with 48GB (macOS 15.0 beta (24A5279h)). The images seem to be read and visualized correctly in the Data Source section (there appear to be no images with corrupted data). When I start the training, all is fine for the first 6k-7k pictures, then I receive the following error:
Failed to create CVPixelBufferPool. Width = 0, Height = 0, Format = 0x00000000
This is the first time I am using it, so I don't have much experience. Could you help me understand what the problem could be? Thanks a lot.
Replies: 6 · Boosts: 1 · Views: 689 · Activity: 4d
No prints or logs when debugging watchOS
I wrote a watch-only app using Bluetooth which is running on the watch, but no prints or logs appear in the output. I only get:
[S:1] Error received: Connection invalidated.
[S:3] Error received: Connection invalidated.
[S:4] Error received: Connection invalidated.
[S:5] Error received: Connection invalidated.
Message from debugger: killed
Program ended with exit code: 9
In the launch log I find:
Showing Recent Messages
Launch com.apple.Carousel
Platform: watchOS
Device Identifier: 00008310-001244D611D1A01E
Operating System Version: 10.5 (21T576)
Model: Apple Watch Series 9 (Watch7,1)
Apple Watch von Draha is connected via network
Installing com.apple.Carousel on Apple Watch von Draha
Installing on Apple Watch von Draha
Successfully installed
XPC/App Extension Debugging Setup
XPC Debugging for: gwe.WatchBleTest.watchkitapp.WatchBleWidget
Console logging policy: Synchronously obtain os_logs via libLogRedirect, and read stdio from File Descriptors
Stop XPC Debugging for: gwe.WatchBleTest.watchkitapp.WatchBleWidget
View debugging: disabled
Insert view debugging dylib on launch: enabled
Queue debugging: enabled
Memory graph on resource exception: disabled
Address sanitizer: disabled
Thread sanitizer: disabled
Using LLDBRPC. The LLDB framework is from /Applications/Xcode.app/Contents/SharedFrameworks
Device support directory: /Users/gertelsholz/Library/Developer/Xcode/watchOS DeviceSupport/Watch7,1 10.5 (21T576)/Symbols
Attached to process with pid 469
What could be the cause of the issue, and how can I fix it? Thanks for your help!
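As a general aside (my own suggestion, not something the post tried), output that never shows up in Xcode's console can sometimes still be captured through the unified logging system and read in Console.app. A minimal sketch using os.Logger, with placeholder subsystem and category strings and a hypothetical call site:

import os

let logger = Logger(subsystem: "gwe.WatchBleTest.watchkitapp", category: "bluetooth")

func didDiscover(peripheralNamed name: String) {
    // Visible in Console.app even when plain print output does not reach Xcode's console.
    logger.debug("Discovered peripheral: \(name, privacy: .public)")
}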
Replies: 1 · Boosts: 0 · Views: 840 · Activity: May ’24