Explore the power of machine learning within apps. Discuss integrating machine learning features, share best practices, and explore the possibilities for your app.


Poor Quality 2021 MBP Speakers
I've been using this late 2021 MBP 16 for barely two years, and the speaker is now producing a crackling sound. When I inquired about repairs, customer service quoted $728 to replace the speaker, a third of the price of the laptop itself. It's absolutely absurd that a $2200 laptop's speaker would fail within such a short period without any external damage, and that the repair should cost a third of the laptop's price. I intend to start a petition in the US, hoping to connect with others experiencing the same problem. This is indicative of a subpar product, and customers shouldn't bear the burden of Apple's shortcomings. I plan to share my grievances on social media, and if the issue persists I will escalate it to the media for further exposure.
2 replies · 0 boosts · 623 views · Feb ’24
Better Results with Separate Sound Classifier Models?
I'm working with MLSoundClassifier to look for two different sounds in a live audio stream. I have been debating with the team whether it is better to train two separate models, one for each sound, or one model on both sounds. Has anyone had any experience with this? Some of us believe we have gotten better results with separate models, and some with a single model trained on both sounds. Thank you!
0 replies · 0 boosts · 656 views · Feb ’24
Tensorflow-Metal Errors
Hi, I am trying to set up tensorflow-metal as instructed by https://developer.apple.com/metal/tensorflow-plugin/. When running python -m pip install tensorflow-metal I get the following error:

```
ERROR: Could not find a version that satisfies the requirement tensorflow-metal (from versions: none)
ERROR: No matching distribution found for tensorflow-metal
```

According to the troubleshooting section: "Check that the Python version used in the environment is supported (Python 3.8, Python 3.9, Python 3.10)." My current version is Python 3.9.12. Any insight would be great!
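Since the Python version checks out, a plausible culprit (a hedged guess, not something stated in the post) is an x86_64 Rosetta interpreter: tensorflow-metal ships arm64-only wheels, so pip reports "versions: none" even on a supported Python. A quick diagnostic sketch:

```python
import platform
import sys

print(sys.version)          # should match a supported version (3.8, 3.9, or 3.10 per the docs)
print(platform.machine())   # must print 'arm64'; 'x86_64' means a Rosetta Python with no wheel
print(platform.platform())  # the wheel also requires a sufficiently recent macOS
```

If platform.machine() prints x86_64, recreating the environment with a native arm64 Python (for example an arm64 conda/miniforge install) is the usual remedy.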
5 replies · 1 boost · 1.2k views · Mar ’24
Transferable Item in Reality View
Can you drag a view that conforms to Transferable from one WindowGroup to an ImmersiveSpace with a RealityView? I can drag, but the drop event isn't captured when a RealityView is involved:

```swift
var body: some View {
    let droppable = Droppable(model: model)
    RealityView { content in
        // Add the initial RealityKit content
        content.add(floorEntity)
    }
    .onDrop(of: ... )
    // or
    .dropDestination(for: ... ) { }
    // or
    .gesture(
        DragGesture()
            .targetedToAnyEntity()
            .onChanged { value in ... }
    )
}
```

None of them triggers the drop.
0 replies · 0 boosts · 343 views · Mar ’24
Python - Complex-valued linear algebra on GPU
Hi, I am looking for a routine to perform complex-valued linear algebra on the GPU in Python for scientific programming, in particular quantum physics simulations. At the moment I am looking for a routine for complex-valued matrix multiplication. I found that MLX has a routine for float matrix multiplication, but it does not directly work for complex-valued matrices. I figured out a workaround by splitting the complex-valued matrix into real and imaginary parts and working with the pair, but it makes it cumbersome to integrate with the remainder of the code. I was hoping for a library-based implementation similar to CuPy.

I also tried the TensorFlow linear algebra routines, but I couldn't get them to run on the GPU so far. Specifically, a test file with a tensorflow.keras.applications.ResNet50 routine runs on the GPU, but the routines from tensorflow.linalg and tensorflow.math that I tested (matmul, expm, eigh) were not running on the GPU.

Any advice on how to make linear algebra calculations work on Mac GPUs is highly appreciated! For my application the unified memory might be especially beneficial. Thank you!
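The real/imaginary split the poster describes follows from (A + iB)(C + iD) = (AC - BD) + i(AD + BC), and it can be hidden behind a small helper so the rest of the code stays clean. A minimal sketch, assuming only MLX's float matmul:

```python
import mlx.core as mx

def complex_matmul(a_re, a_im, b_re, b_im):
    # (A + iB)(C + iD) = (AC - BD) + i(AD + BC)
    c_re = mx.matmul(a_re, b_re) - mx.matmul(a_im, b_im)
    c_im = mx.matmul(a_re, b_im) + mx.matmul(a_im, b_re)
    return c_re, c_im

# Example: two random 4x4 complex matrices kept as (real, imag) pairs
a_re, a_im = mx.random.normal((4, 4)), mx.random.normal((4, 4))
b_re, b_im = mx.random.normal((4, 4)), mx.random.normal((4, 4))
c_re, c_im = complex_matmul(a_re, a_im, b_re, b_im)
print(c_re.shape, c_im.shape)
```

This costs four real matmuls per complex one (three with a Gauss/Karatsuba-style trick) but keeps everything on the GPU.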
0 replies · 0 boosts · 705 views · Mar ’24
Xcode 15.3 AppIntentsSSUTraining warning: missing the definition of locale # variables.1.definitions
Hello! I've noticed that adding localizations for AppShortcuts triggers the following warnings in Xcode 15.3:

```
warning: missing the definition of zh-Hans # variables.1.definitions
warning: missing the definition of zh-Hans # variables.2.definitions
```

This occurs with both legacy strings files and String Catalogs. Example project: https://github.com/gongzhang/AppShortcutsLocalizationWarningExample
7 replies · 4 boosts · 1.8k views · Mar ’24
The Metal Performance Shaders operations encoded on it may not have completed.
Tensorflow-metal was working on my M3 MacBook Pro until yesterday; then my code started freezing. I ran the test script from https://developer.apple.com/metal/tensorflow-plugin/ and it now crashes. This used to work fine, but all of a sudden it does not. The results are shown below. Has anyone seen anything like this? Could this be a hardware problem?

```
MacBook-Pro-3: carl$ python mac_tensorflow_test.py
Epoch 1/5
1/782 [..............................] - ETA: 51:53 - loss: 6.0044 - accuracy: 0.0312
Error: command buffer exited with error status.
  The Metal Performance Shaders operations encoded on it may not have completed.
  Error: (null) Ignored (for causing prior/excessive GPU errors) (00000004:kIOGPUCommandBufferCallbackErrorSubmissionsIgnored)
  <AGXG15XFamilyCommandBuffer: 0x1172515e0>
    label = <none>
    device = <AGXG15SDevice: 0x1588e6000>
      name = Apple M3 Pro
    commandQueue = <AGXG15XFamilyCommandQueue: 0x17427e400>
      label = <none>
      device = <AGXG15SDevice: 0x1588e6000>
        name = Apple M3 Pro
    retainedReferences = 1
Error: command buffer exited with error status.
  The Metal Performance Shaders operations encoded on it may not have completed.
  <AGXG15XFamilyCommandBuffer: 0x117257b40>
```

Many more rows of similar printouts follow.
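One way to separate a hardware fault from a problem in the Metal path (a diagnostic sketch, not a fix): hide the GPU and rerun the same test script on the CPU. If training then completes cleanly, the GPU/plugin path is the suspect rather than the machine itself:

```python
import tensorflow as tf

# Hide the GPU before building any ops so everything falls back to the CPU.
tf.config.set_visible_devices([], "GPU")
print(tf.config.get_visible_devices())  # should now list only the CPU device
```

Placing these two lines at the top of mac_tensorflow_test.py, before the model is built, is enough for the comparison.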
1 reply · 0 boosts · 870 views · Mar ’24
Unable to scan barcode from an Image using vision
Hello, I have been working for a few days to create a scanner that scans a PDF417 barcode from your photos library, and I have come to a dead end. Every time I run my function on the photo, my array of observations returns as []. This example uses an automatically generated image, because I think that if it works with this, it will work with a real screenshot. That said, I have already tried all sorts of images that aren't pre-generated, and they have still failed to work. Code below.

Calling the function:

```swift
createVisionRequest(image: generatePDF417Barcode(from: "71238-12481248-128035-40239431")!)
```

Creating the barcode:

```swift
static func generatePDF417Barcode(from key: String) -> UIImage? {
    let data = key.data(using: .utf8)!
    let filter = CIFilter.pdf417BarcodeGenerator()
    filter.message = data
    filter.rows = 7
    let transform = CGAffineTransform(scaleX: 3, y: 4)
    if let outputImage = filter.outputImage?.transformed(by: transform) {
        let context = CIContext()
        if let cgImage = context.createCGImage(outputImage, from: outputImage.extent) {
            return UIImage(cgImage: cgImage)
        }
    }
    return nil
}
```

Main function for scanning the barcode:

```swift
static func desynthesizeIDScreenShot(from image: UIImage, completion: @escaping (String?) -> Void) {
    guard let ciImage = CIImage(image: image) else {
        print("Empty image")
        completion(nil)
        return
    }
    let imageRequestHandler = VNImageRequestHandler(ciImage: ciImage, orientation: .up)
    let request = VNDetectBarcodesRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNBarcodeObservation] else {
            completion(nil)
            return
        }
        print("Observations", observations)
        if let result = observations.first?.payloadStringValue {
            print(result)
            completion(result)
        } else {
            completion(nil)
        }
    }
    // Configure the request before performing it.
    request.revision = VNDetectBarcodesRequestRevision2
    request.symbologies = [VNBarcodeSymbology.pdf417]
    try? imageRequestHandler.perform([request])
}
```

Thanks!
1 reply · 0 boosts · 440 views · Mar ’24
torch device on cpu0 or cpu1 or mps0
I cannot find the bug, but running this code (Python) on the torch device mps0 is slower than on cpu0 or cpu1. Where is the bug? Or does cpu1 run it on the Neural Engine? You need a setup like this:

```bash
#!/bin/bash
export HOMEBREW_BREW_GIT_REMOTE="https://github.com/Homebrew/brew"  # put your Git mirror of Homebrew/brew here
export HOMEBREW_CORE_GIT_REMOTE="https://github.com/Homebrew/homebrew-core"  # put your Git mirror of Homebrew/homebrew-core here
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
eval "$(/opt/homebrew/bin/brew shellenv)"
brew update --force --quiet
chmod -R go-w "$(brew --prefix)/share/zsh"
export OPENBLAS=$(/opt/homebrew/bin/brew --prefix openblas)
export CFLAGS="-falign-functions=8 ${CFLAGS}"
brew install wget
brew install unzip
conda init --all
conda create -n torch-gpu python=3.10
conda activate torch-gpu
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 -c pytorch
conda install -c conda-forge jupyter jupyterlab
python3 -m pip install --upgrade pip
python3 -m pip install insightface==0.2.1 onnx imageio scikit-learn scikit-image moviepy
python3 -m pip install googledrivedownloader
python3 -m pip install imageio==2.4.1
python3 -m pip install Cython
python3 -m pip install --no-use-pep517 numpy
python3 -m pip install torch
python3 -m pip install image
python3 -m pip install timm
python3 -m pip install Pillow
python3 -m pip install h5py

for i in `seq 1 6`; do
    python3 test.py
done

conda deactivate
exit 0
```

test.py:

```python
import torch
import math

# this ensures that the current macOS version is at least 12.3+
print(torch.backends.mps.is_available())
# this ensures that the current PyTorch installation was built with MPS activated.
print(torch.backends.mps.is_built())

dtype = torch.float
device = torch.device("cpu", 0)
# device = torch.device("cpu", 1)
# device = torch.device("mps", 0)

# Create random input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Randomly initialize weights
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update weights using gradient descent
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
```
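A likely explanation rather than a bug (a hedged guess, not from the post): with only 2,000 elements per tensor, per-op dispatch overhead dominates on the GPU, so mps loses to the CPU. A timing sketch that makes the comparison explicit; torch.mps.synchronize(), available in recent PyTorch releases, ensures queued GPU work finishes before the clock stops:

```python
import math
import time
import torch

def fit(device, n=2000, steps=2000, lr=1e-6):
    # Same cubic-polynomial fit as test.py, timed end to end on one device.
    x = torch.linspace(-math.pi, math.pi, n, device=device)
    y = torch.sin(x)
    a, b, c, d = (torch.randn((), device=device) for _ in range(4))
    start = time.perf_counter()
    for _ in range(steps):
        y_pred = a + b * x + c * x ** 2 + d * x ** 3
        grad = 2.0 * (y_pred - y)
        a = a - lr * grad.sum()
        b = b - lr * (grad * x).sum()
        c = c - lr * (grad * x ** 2).sum()
        d = d - lr * (grad * x ** 3).sum()
    if device.type == "mps":
        torch.mps.synchronize()  # wait for queued GPU kernels before stopping the timer
    return time.perf_counter() - start

print("cpu:", fit(torch.device("cpu")), "s")
if torch.backends.mps.is_available():
    print("mps:", fit(torch.device("mps")), "s")
```

At n = 2000 the CPU is expected to win; growing n by a couple of orders of magnitude is where mps should start to pull ahead. There is also no torch device for the Neural Engine; PyTorch only exposes cpu and mps on Apple silicon.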
0 replies · 0 boosts · 595 views · Mar ’24
jax-metal segfaults when running Gemma inference
I tried running inference with the 2B model from https://github.com/google-deepmind/gemma on my M2 MacBook Pro, but it segfaults during sampling: https://pastebin.com/KECyz60T

Note: out of the box it will try to load bfloat16 weights, which will fail. To avoid this, I patched line 30 in gemma/params.py to explicitly cast to float32:

```python
param_state = jax.tree_util.tree_map(lambda p: jnp.array(p, jnp.float32), params)
```
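For anyone reproducing this, a small probe (a sketch assuming a jax-metal install; nothing here is from the original report) shows whether the current backend handles bfloat16 at all, independent of the Gemma code:

```python
import jax
import jax.numpy as jnp

print(jax.devices())  # should list the Metal device when jax-metal is active
try:
    x = jnp.ones((4, 4), dtype=jnp.bfloat16)
    print("bfloat16 matmul ->", (x @ x).dtype)
except Exception as e:
    print("bfloat16 unsupported on this backend:", e)
```

If the probe fails, the float32 cast above sidesteps that limitation; the remaining segfault during sampling would then be a separate jax-metal issue worth reporting along with the pastebin trace.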
2 replies · 0 boosts · 765 views · Mar ’24
NLEmbedding.wordEmbedding unsupported locale ko.
NLEmbedding.wordEmbedding is not available for Korean. This is a very serious issue for any service that caters to Koreans; please fix it quickly. Sample code below:

```swift
import UIKit
import CoreML
import NaturalLanguage

class MLTextViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        execute()
    }

    func execute() {
        if let embedding = NLEmbedding.wordEmbedding(for: .korean) {
            let word = "bicycle"
            if let vector = embedding.vector(for: word) {
                print(vector)
            }
            let specificDistance = embedding.distance(between: word, and: "motorcycle")
            print("✅ \(specificDistance.description)")
            embedding.enumerateNeighbors(for: word, maximumCount: 5) { neighbor, distance in
                print("\(neighbor): \(distance.description)")
                return true
            }
        }
    }
}
```
0 replies · 0 boosts · 500 views · Mar ’24
VNRecognizeTextRequest on Chinese text does not recognize vertical text or allow individual character recognition
I am using VNRecognizeTextRequest to read Chinese characters. It works fine with text written horizontally, but if even two characters are written vertically, then nothing is recognized. Does anyone know how to get the Vision framework to either handle vertical text or recognize characters individually when working with Chinese?

I am setting VNRequestTextRecognitionLevel to accurate, since setting it to fast does not recognize any Chinese characters at all. I would love to be able to use fast recognition and handle the characters individually, but it just doesn't seem to work with Chinese. And, when using accurate, if I take a picture of any amount of text arranged vertically, nothing is recognized. I can take a picture of one character and it works, but if I add just one more character below it, nothing is recognized. It's bizarre.

I've tried setting usesLanguageCorrection = false and tried using VNRecognizeTextRequestRevision3, ...Revision2 and ...Revision1. Strangely enough, revision 2 seems to recognize some text if it's vertical, but the bounding boxes are off, or sometimes the recognized text is wrong.

I tried playing with DataScannerViewController and it's able to recognize characters in vertical text, but I can't figure out how to replicate it with VNRecognizeTextRequest. The problem with using DataScannerViewController is that it treats the whole text block as one item, and it uses the live camera buffer. As soon as I capture a photo, I still have to use VNRecognizeTextRequest.

Below is a code snippet of how I'm using VNRecognizeTextRequest. There's not much to it, and there aren't many other parameters to try (plus I've already played around with them). I've also attached a sample image with text laid out vertically.

```swift
func detectText(
    in sourceImage: CGImage,
    oriented orientation: CGImagePropertyOrientation
) async throws -> [VNRecognizedTextObservation] {
    return try await withCheckedThrowingContinuation { continuation in
        let request = VNRecognizeTextRequest { request, error in
            // ...
            continuation.resume(returning: observations)
        }
        request.recognitionLevel = .accurate
        request.recognitionLanguages = ["zh-Hant", "zh-Hans"]
        // doesn't seem to have any impact:
        // request.usesLanguageCorrection = false
        do {
            let requestHandler = VNImageRequestHandler(
                cgImage: sourceImage,
                orientation: orientation
            )
            try requestHandler.perform([request])
        } catch {
            continuation.resume(throwing: error)
        }
    }
}
```
1 reply · 0 boosts · 328 views · Mar ’24
Intermittent Audio Recording Failure and UI Freezing Issue in iOS App.
The application is developed in SwiftUI. Our application is responsible for recording audio, transcribing the audio file, and uploading it to the backend. The two main components in the iOS application are AVAudioRecorder and SFSpeechRecognizer. The UI comprises a visual design that showcases the recording of audio and uses a Text component to let the user know whether audio is being recorded.

Lately, customers have been complaining that although the application says "Recording" in the UI, their audio is not being received at the backend. The customers tried restarting their device (iPad), and the application started working normally. We haven't been able to reproduce the issue, but we suspect an intermittent failure in audio transmission or potential UI freezing.

Note: I have tried using the Leaks instrument and did not encounter any memory leaks while using the application.

Is there a way to determine whether the issue lies with the audio recorder, the speech recognizer, or elsewhere in the app? Are there any known issues or limitations with the audio recorder lately on iOS that could be causing this behaviour? Please let me know if you have any suggestions for diagnosing this issue, and do let me know if more information is required. Thank you in advance.
0 replies · 0 boosts · 500 views · Apr ’24
NLModel won't initialize in MessageFilterExtension
I'm trying to create an NLModel within a MessageFilterExtension handler. The code works fine in the main app, but when I try to use it in the extension it fails to initialize. Even this single line fails (SMS_Classifier is the class Xcode generated for my model; the same line works fine in the main app):

```swift
let mlModel = try SMS_Classifier(configuration: MLModelConfiguration()).model
```

Error:

```
Unable to locate Asset for contextual word embedding model for local en.
MLModelAsset: load failed with error Error Domain=com.apple.CoreML Code=0 "initialization of text classifier model with model data failed" UserInfo={NSLocalizedDescription=initialization of text classifier model with model data failed}
```

Any ideas?
1 reply · 0 boosts · 557 views · Apr ’24
AI text format style in Pages and Numbers
Hi, can you add a new AI feature to Pages and Numbers that applies a style from a PDF or template to documents? The AI would arrange footers, headers, fonts, page breaks, and page numbers to match the ones in the PDF or template, so we could auto-format documents to a desired standard look. We could then upload raw text together with a PDF of another document or report and get a document in that style, ready to export to PDF or print. Best regards,
1 reply · 0 boosts · 586 views · Apr ’24
Tensorflow-Metal TFLite Inference orders of magnitude slower than regular Tensorflow
Hardware: 16" 2023 MBP M3 Pro OS: 14.4.1 Memory: 36 GB python version: 3.8.16 TF-Metal version: tensorflow-metal 1.0.1 installed via pip TF version: 2.13.0 Tensorflow-Metal starts pretty slow, approximately 10s/iteration and over the course of 36 iteration progressively slows down to over 120s/iteration. Info log prints out that TFLite is using XNNPack. Can't share the TFLite model but it is relatively shallow, small, and simple. Uninstalled TF-Metal, and installed tensorflow. Inference speed picks right up and is rock solid at 0.78s/iteration. What is going on??? **TLDR, TFLite inference speed: TF Metal = 120s/iteration TF = 0.78s/iteration**
2 replies · 0 boosts · 662 views · Apr ’24
Request for Code Material from "Improve Core ML Integration with Async Prediction" Session
I hope this message finds you well. I recently had the opportunity to watch the insightful session titled "Improve Core ML Integration with Async Prediction" and was thoroughly impressed by the depth of information and the practical demonstration provided. The session offered valuable insights that I believe would greatly benefit my ongoing projects and my understanding of Core ML integration. As I am keen on implementing the demonstrated workflows and techniques within my own work, I am reaching out to kindly request access to the source code and any related material presented during the session. Having access to the code would enable me to better understand the concepts discussed and apply them more effectively in real-world scenarios. I believe that being able to review and experiment with the actual code would significantly enhance my learning experience and the implementation efficiency of my projects. It would also serve as a valuable resource for referencing best practices in Core ML integration and async prediction techniques. Thank you very much for considering my request. I greatly appreciate the effort that went into creating such an informative session and am looking forward to potentially exploring the material in greater depth. Best regards, Fabio G.
0 replies · 0 boosts · 624 views · Apr ’24