Machine Learning


Create intelligent features and enable new experiences for your apps by leveraging powerful on-device machine learning.

Posts under Machine Learning tag

78 Posts
Post not yet marked as solved
0 Replies
481 Views
First of all, this Vision API is amazing; the OCR is very accurate. I've been looking into multiprocessing with the Vision API. I have about 2 million PDFs I want to OCR, and I want to run multiple threads or parallel processes to OCR each one. I tried pyobjc, but it does not work so well. Any suggestions on tackling this problem?
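As an aside, one way to fan this kind of workload out from Python is a process pool around the pyobjc Vision bridge. The sketch below is only an illustration of that pattern, not a tested pipeline: it assumes the pyobjc-framework-Vision package is installed, that the PDFs have already been rendered to page images, and the paths and pool size are placeholders.

```python
# Rough sketch: OCR one image per worker process with Vision via pyobjc.
# Assumes pyobjc-framework-Vision is installed; paths and pool size are placeholders.
from multiprocessing import Pool

import Vision
from Foundation import NSURL


def ocr_image(path):
    # Build a request handler for this file and run text recognition on it.
    url = NSURL.fileURLWithPath_(path)
    handler = Vision.VNImageRequestHandler.alloc().initWithURL_options_(url, {})
    request = Vision.VNRecognizeTextRequest.alloc().init()
    success, error = handler.performRequests_error_([request], None)
    if not success:
        return path, []
    # Each observation exposes ranked candidates; keep the top string per line.
    lines = [obs.topCandidates_(1)[0].string() for obs in request.results()]
    return path, lines


if __name__ == "__main__":
    image_paths = ["page_0001.png", "page_0002.png"]  # placeholder page images
    # Separate processes sidestep the GIL; each worker owns its own Vision objects.
    with Pool(processes=4) as pool:
        for path, lines in pool.imap_unordered(ocr_image, image_paths):
            print(path, len(lines), "lines")
```

Whether this scales cleanly to two million documents is an open question; batching paths per worker and writing results incrementally will likely matter more than the OCR call itself.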
Posted by jsunghop. Last updated.
Post not yet marked as solved
0 Replies
492 Views
Hi all, I just tried to integrate my ML model (TF to CoreML) into my Xcode project, but couldn't create a performance report. As far as I'm aware, you only need to drag your .mlmodel file into the Navigator. I took this model from TF Hub and converted it to CoreML, and it has images as inputs and MultiArray as outputs (don't know if that has any significance). Other than that, I haven't made any changes to the model itself. If anyone could point me in the right direction that would be very much appreciated! I've included a screenshot of the error here:
Posted. Last updated.
Post not yet marked as solved
0 Replies
450 Views
I am trying to convert a model I found on TensorFlow Hub to CoreML so I can use it in an iOS app I'm developing. Converting the model has been quite simple so far, except that I get a NotImplementedError when specifying ImageType as the output. This is the code I used:

```python
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(256, 256, 3)),
    tf_hub.KerasLayer("https://tfhub.dev/rishit-dagli/mirnet-tfjs/1")
])
model.build([1, 256, 256, 3])  # Batch input shape.
mlmodel = ct.convert(model,
                     convert_to="mlprogram",
                     inputs=[ct.ImageType()],
                     outputs=[ct.ImageType()])
```

If only the inputs are specified as ImageType, then no error occurs, but when I also specify the outputs as ImageType, I get this error:

```
NotImplementedError: Image output 'Identity' has symbolic dimensions in its shape
```

FYI: I'm using TensorFlow version 2.12 and CoreML 6.3. Is there any way around this? Or am I doing this wrong? I'm quite new to machine learning and CoreML, so any helpful input is much appreciated. Thanks in advance!
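For what it's worth, the error message points at a symbolic output shape, so two hedged workarounds come to mind: pin the input shape so the converter can infer a concrete output shape, or leave the output as a tensor and build the image in app code. The scale value and the output name "Identity" below are assumptions taken only from the snippet and error above, not from the original model.

```python
import coremltools as ct

# Workaround sketch 1: give the input a fixed shape so the output shape is no
# longer symbolic, then ask for an image output again.
mlmodel = ct.convert(
    model,
    convert_to="mlprogram",
    inputs=[ct.ImageType(shape=(1, 256, 256, 3), scale=1 / 255.0)],
    outputs=[ct.ImageType()],
)

# Workaround sketch 2: keep the output as a MultiArray and convert it to an
# image later in app code; the name "Identity" is assumed from the error message.
mlmodel = ct.convert(
    model,
    convert_to="mlprogram",
    inputs=[ct.ImageType(shape=(1, 256, 256, 3), scale=1 / 255.0)],
    outputs=[ct.TensorType(name="Identity")],
)
```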
Posted. Last updated.
Post marked as solved
1 Reply
785 Views
In the video here, the speaker refers to MPSGraphTool, which is supposed to convert from CoreML and other formats to the new MPSGraphPackage format. Searching for MPSGraphTool on Google returns only that video, and there is no mention of it on the forums here or elsewhere. When can we expect the tool to be released? How can we find out more information about it? My use case is that the ANECompilerService that runs on the Mac / iOS devices to compile CoreML Models / Programs is extremely slow and unreliable for large models. It often crashes entirely, sitting at 100% CPU usage forever and never completing the task at hand, meaning the user is stuck in a loading state. This also applies in Xcode when running a performance test. I would really like to compile the graph once and just run it on device directly.
Posted by ephemer. Last updated.
Post not yet marked as solved
2 Replies
795 Views
Hi all, I am new to Metal in PyTorch. I am trying to implement the demo code for customized ops in PyTorch (the demo code sample). However, I think the torch namespace doesn't have "mps" now? "torch::mps" cannot be found when I try to compile the .mm file into a PyTorch C++ extension. After some digging, I think everybody is using the ATen namespace with "at::"? How can I use the functions in mps and make this demo code work? Thanks in advance.

Error message:

```
In file included from /Users/ethan/Downloads/CustomizingAPyTorchOperation/CustomSoftshrink.mm:10:
/Users/ethan/Downloads/CustomizingAPyTorchOperation/CustomSoftshrink.h:11:30: warning: ISO C++11 does not allow conversion from string literal to 'char *' [-Wwritable-strings]
static char *CUSTOM_KERNEL = R"MPS_SOFTSHRINK(
                             ^
/Users/ethan/Downloads/CustomizingAPyTorchOperation/CustomSoftshrink.mm:43:53: error: no member named 'mps' in namespace 'torch'
    id<MTLCommandBuffer> commandBuffer = torch::mps::get_command_buffer();
                                         ~~~~~~~^
/Users/ethan/Downloads/CustomizingAPyTorchOperation/CustomSoftshrink.mm:47:47: error: no member named 'mps' in namespace 'torch'
    dispatch_queue_t serialQueue = torch::mps::get_dispatch_queue();
                                   ~~~~~~~^
/Users/ethan/Downloads/CustomizingAPyTorchOperation/CustomSoftshrink.mm:76:20: error: no member named 'mps' in namespace 'torch'
    torch::mps::commit();
    ~~~~~~~^
1 warning and 3 errors generated.
ninja: build stopped: subcommand failed.
```

CustomSoftshrink.mm code:

```objective-c++
/*
See the LICENSE.txt file for this sample's licensing information.

Abstract:
The code that registers a PyTorch custom operation.
*/

#include <torch/extension.h>
#include "CustomSoftshrink.h"

#import <Foundation/Foundation.h>
#import <Metal/Metal.h>

// Helper function to retrieve the `MTLBuffer` from a `torch::Tensor`.
static inline id<MTLBuffer> getMTLBufferStorage(const torch::Tensor& tensor) {
    return __builtin_bit_cast(id<MTLBuffer>, tensor.storage().data());
}

torch::Tensor& dispatchSoftShrinkKernel(const torch::Tensor& input,
                                        torch::Tensor& output,
                                        float lambda) {
    @autoreleasepool {
        id<MTLDevice> device = MTLCreateSystemDefaultDevice();
        NSError *error = nil;

        // Set the number of threads equal to the number of elements within the input tensor.
        int numThreads = input.numel();

        // Load the custom soft shrink shader.
        id<MTLLibrary> customKernelLibrary = [device newLibraryWithSource:[NSString stringWithUTF8String:CUSTOM_KERNEL]
                                                                  options:nil
                                                                    error:&error];
        TORCH_CHECK(customKernelLibrary, "Failed to create custom kernel library, error: ", error.localizedDescription.UTF8String);

        std::string kernel_name = std::string("softshrink_kernel_") + (input.scalar_type() == torch::kFloat ? "float" : "half");
        id<MTLFunction> customSoftShrinkFunction = [customKernelLibrary newFunctionWithName:[NSString stringWithUTF8String:kernel_name.c_str()]];
        TORCH_CHECK(customSoftShrinkFunction, "Failed to create function state object for ", kernel_name.c_str());

        // Create a compute pipeline state object for the soft shrink kernel.
        id<MTLComputePipelineState> softShrinkPSO = [device newComputePipelineStateWithFunction:customSoftShrinkFunction error:&error];
        TORCH_CHECK(softShrinkPSO, error.localizedDescription.UTF8String);

        // Get a reference to the command buffer for the MPS stream.
        id<MTLCommandBuffer> commandBuffer = torch::mps::get_command_buffer();
        TORCH_CHECK(commandBuffer, "Failed to retrieve command buffer reference");

        // Get a reference to the dispatch queue for the MPS stream, which encodes the synchronization with the CPU.
        dispatch_queue_t serialQueue = torch::mps::get_dispatch_queue();

        dispatch_sync(serialQueue, ^(){
            // Start a compute pass.
            id<MTLComputeCommandEncoder> computeEncoder = [commandBuffer computeCommandEncoder];
            TORCH_CHECK(computeEncoder, "Failed to create compute command encoder");

            // Encode the pipeline state object and its parameters.
            [computeEncoder setComputePipelineState:softShrinkPSO];
            [computeEncoder setBuffer:getMTLBufferStorage(input) offset:input.storage_offset() * input.element_size() atIndex:0];
            [computeEncoder setBuffer:getMTLBufferStorage(output) offset:output.storage_offset() * output.element_size() atIndex:1];
            [computeEncoder setBytes:&lambda length:sizeof(float) atIndex:2];

            MTLSize gridSize = MTLSizeMake(numThreads, 1, 1);

            // Calculate a thread group size.
            NSUInteger threadGroupSize = softShrinkPSO.maxTotalThreadsPerThreadgroup;
            if (threadGroupSize > numThreads) {
                threadGroupSize = numThreads;
            }
            MTLSize threadgroupSize = MTLSizeMake(threadGroupSize, 1, 1);

            // Encode the compute command.
            [computeEncoder dispatchThreads:gridSize threadsPerThreadgroup:threadgroupSize];
            [computeEncoder endEncoding];

            // Commit the work.
            torch::mps::commit();
        });
    }

    return output;
}

// C++ op dispatching the Metal soft shrink shader.
torch::Tensor mps_softshrink(const torch::Tensor &input, float lambda = 0.5) {
    // Check whether the input tensor resides on the MPS device and whether it's contiguous.
    TORCH_CHECK(input.device().is_mps(), "input must be a MPS tensor");
    TORCH_CHECK(input.is_contiguous(), "input must be contiguous");

    // Check the supported data types for soft shrink.
    TORCH_CHECK(input.scalar_type() == torch::kFloat || input.scalar_type() == torch::kHalf,
                "Unsupported data type: ", input.scalar_type());

    // Allocate the output, same shape as the input.
    torch::Tensor output = torch::empty_like(input);

    return dispatchSoftShrinkKernel(input, output, lambda);
}

// Create Python bindings for the Objective-C++ code.
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
    m.def("mps_softshrink", &mps_softshrink);
}
```
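As a side note, the torch::mps C++ helpers that the sample calls (get_command_buffer, get_dispatch_queue, commit) only appear in fairly recent PyTorch builds, roughly 2.0 onward as far as I can tell, so an older wheel would produce exactly this "no member named 'mps'" error. A minimal sketch for checking the installed build and compiling the sample as a JIT extension follows; the source path is a placeholder.

```python
# Sketch: verify the PyTorch build and JIT-compile the sample extension.
# Assumes a PyTorch recent enough to ship the torch::mps C++ API (roughly 2.0+);
# the source path is a placeholder for wherever CustomSoftshrink.mm lives.
import torch
from torch.utils import cpp_extension

print(torch.__version__)                   # the installed build
print(torch.backends.mps.is_available())   # whether the MPS backend is usable

custom_ops = cpp_extension.load(
    name="custom_softshrink",
    sources=["CustomSoftshrink.mm"],       # placeholder path
    extra_cflags=["-std=c++17"],
    verbose=True,
)

x = torch.randn(8, device="mps")
print(custom_ops.mps_softshrink(x, 0.5))
```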
Posted by Waxpple. Last updated.
Post not yet marked as solved
6 Replies
3k Views
Built and installed JAX and jax-metal following the instructions on an M2 Pro Mac mini from here: https://developer.apple.com/metal/jax/. However, the following check seems to suggest XLA is using the CPU and not the GPU:

```python
>>> from jax.lib import xla_bridge
>>> print(xla_bridge.get_backend().platform)
cpu
```

Has anyone got it to report the GPU? Thanks in advance!
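Not an authoritative answer, but listing the registered devices directly shows whether the Metal plug-in loaded at all. The sketch below assumes a working jax-metal install; the exact device label reported for the Metal backend may vary between plug-in releases.

```python
import jax
import jax.numpy as jnp

# If the Metal plug-in registered, the device list should contain a non-CPU
# entry and the default backend should no longer be "cpu".
print(jax.devices())
print(jax.default_backend())

# A small computation confirms the backend can actually run work.
x = jnp.ones((1024, 1024))
print(jnp.dot(x, x).block_until_ready()[0, 0])
```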
Posted by shibd. Last updated.
Post not yet marked as solved
0 Replies
596 Views
Hello everyone, I encountered some compiler errors while following a WWDC video on converting a colorization PyTorch model to CoreML. I have followed all the steps correctly, but I'm facing issues with the following lines of code provided in the video. In the colorize() method, there is this line:

```swift
let modelInput = try ColorizerInput(inputWith: lightness.cgImage!)
```

This line expects a cgImage as input, but the auto-generated model class only accepts an MLMultiArray or MLShapedArray, not an image. The conversion step in the video did not cover setting the input or output as ImageType. In the extractColorChannels() method, there are a couple of lines:

```swift
let outA: [Float] = output.output_aShapedArray.scalars
let outB: [Float] = output.output_bShapedArray.scalars
```

However, I only have output.var183_aShapedArray available. In other words, there is no var183_bShapedArray. I would appreciate any thoughts or suggestions you may have regarding these issues. Thank you. Link to the WWDC22 session 10017: https://developer.apple.com/videos/play/wwdc2022/10017/
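One hedged guess at the conversion side of this: declaring the input as an ImageType and renaming the auto-generated output features during conversion should make the generated Swift interface line up with the names used in the video. The sketch below assumes a traced PyTorch model in traced_model; the grayscale input shape and the old output names (var_183_a, var_183_b) are placeholders, and the real names come from inspecting the converted model's spec.

```python
import coremltools as ct

# Sketch: convert with an image input and rename the outputs to friendlier names.
# traced_model, the shapes, and the old output names are assumptions, not facts
# about the WWDC sample; check mlmodel.get_spec() for the actual names.
mlmodel = ct.convert(
    traced_model,
    convert_to="mlprogram",
    inputs=[ct.ImageType(name="input", shape=(1, 1, 256, 256),
                         color_layout=ct.colorlayout.GRAYSCALE)],
)

spec = mlmodel.get_spec()
ct.utils.rename_feature(spec, "var_183_a", "output_a")  # old names are placeholders
ct.utils.rename_feature(spec, "var_183_b", "output_b")
mlmodel = ct.models.MLModel(spec, weights_dir=mlmodel.weights_dir)
mlmodel.save("Colorizer.mlpackage")
```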
Posted. Last updated.
Post not yet marked as solved
1 Reply
804 Views
I implemented a custom PyTorch layer on both CPU and GPU following [Hollemans' amazing blog](https://machinethink.net/blog/coreml-custom-layers). The CPU version works well, but when I implemented this op on the GPU it never activates the "encode" function and always runs on the CPU. I have checked the coremltools.convert() options with compute_units=coremltools.ComputeUnit.CPU_AND_GPU, but it still doesn't work. This problem is also mentioned in https://stackoverflow.com/questions/51019600/why-i-enabled-metal-api-but-my-coreml-custom-layer-still-run-on-cpu and https://developer.apple.com/forums/thread/695640. Any help with this would be appreciated. System Information: macOS 11.6.1 Big Sur, Xcode 12.5.1, coremltools 5.1.0, test device: iPhone 11.
Posted by stx-000. Last updated.
Post marked as solved
5 Replies
1.3k Views
Hello, I'm interested in trying the new JAX Metal plug-in and followed the steps in https://developer.apple.com/metal/jax/. Upon installation, I don't see any difference between the backend device detected by JAX and a pure CPU setup:

```python
>>> import jax
>>> jax.devices()
[CpuDevice(id=0)]
>>> jax.devices()[0].platform
'cpu'
>>> jax.devices()[0].device_kind
'cpu'
>>> jax.devices()[0].client.platform
'cpu'
>>> jax.devices()[0].client.runtime_type
'tfrt'
```

Is this really using a Metal backend? How can I determine for sure? Thank you!
Posted by pcuenca. Last updated.
Post not yet marked as solved
1 Reply
963 Views
I wish there was a tool to create a Memoji from a photo using AI 📸➡️👨 It is a pity there are no tools for artists
Posted by Lebizhor. Last updated.
Post not yet marked as solved
0 Replies
548 Views
I am seeing an issue in jax.numpy.dot and jax.numpy.matmul, as illustrated by this example using jax.numpy.dot:

```python
import jax.numpy as jnp
import numpy as np

x = np.array(np.random.rand(3, 3))
y = np.array(np.random.rand(3))
z = np.array(np.random.rand(3))

print("X: ", x)
print("Y: ", y)
print("Z: ", z)
print("Numpy 1D*1D: ", np.dot(y, z))
print("Jax Numpy 1D*1D: ", jnp.dot(y, z))
print("Numpy 2D*1D: ", np.dot(x, y))
print("Jax Numpy 2D*1D: ", jnp.dot(x, y))
```

```
loc("-":4:5): error: type of return operand 0 ('tensor<*xf32>') doesn't match function result type ('tensor<3xf32>') in function @main
/AppleInternal/Library/BuildRoots/1a7a4148-f669-11ed-9d56-f6357a1003e8/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:1950: failed assertion `Error: MLIR pass manager failed'
zsh: abort      python test.py
```

As can be seen, the dot product between two 1D arrays works for both standard NumPy and jax.numpy. However, 2D*1D only works for standard NumPy, while jax.numpy throws an error. I am using Jax 0.4.11, jax-metal 0.0.2 and jaxlib 0.4.10. Has anyone else seen this issue?
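A possible stopgap while the plug-in matures, assuming only the matrix-times-vector case misbehaves: promote the 1-D operand to a column, multiply 2-D by 2-D, and squeeze the result, which keeps every shape in the lowered graph fully ranked. This is a guess at a workaround, not a confirmed fix.

```python
import numpy as np
import jax.numpy as jnp

x = jnp.asarray(np.random.rand(3, 3))
y = jnp.asarray(np.random.rand(3))

# Same math as jnp.dot(x, y), expressed as a 2D*2D product followed by a squeeze,
# so no operand in the graph is rank-1.
result = jnp.matmul(x, y[:, None])[:, 0]
print(result)
```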
Posted. Last updated.
Post not yet marked as solved
2 Replies
4.2k Views
Hi everyone, I'm a Machine Learning Engineer, and I'm planning to buy the MacBook Pro M2 Max with the 38-core GPU variant. I'm uncertain about whether to choose the 32GB RAM or 64GB RAM option. Based on my research and use case, it seems that 32GB should be sufficient for most tasks, including the 4K video rendering I occasionally do. However, I'm concerned about the longevity of the device, as I'd like to keep the MacBook up to date for at least five years. Additionally, considering the 38-core GPU, I wonder if 32GB of unified memory might be insufficient, particularly when I need to train machine learning models or run Docker or even a Kubernetes cluster. I don't have any budget constraints, as the additional $400 cost isn't an issue, but I want to make a wise decision. I would appreciate any advice on this matter. Thanks in advance!
Posted by Aditya-ai. Last updated.
Post marked as solved
1 Reply
1.2k Views
I'm trying to use the randomTensor function from MPSGraph to initialize the weights of a fully connected layer. I can create the graph and run inference using the randomly initialized values, but when I try to train and update these randomly initialized weights, I'm hitting a crash:

```
Assertion failed: (isa<To>(Val) && "cast<Ty>() argument of incompatible type!"), function cast, file Casting.h, line 578.
```

I can train the graph if I instead initialize the weights myself on the CPU, but I thought using the randomTensor functions would be faster and allow initialization to occur on the GPU. Here's my code for building the graph, including both methods of weight initialization:

```swift
func buildGraph(variables: inout [MPSGraphTensor]) -> (MPSGraphTensor, MPSGraphTensor, MPSGraphTensor, MPSGraphTensor) {
    let inputPlaceholder = graph.placeholder(shape: [2], dataType: .float32, name: nil)
    let labelPlaceholder = graph.placeholder(shape: [1], name: nil)

    // This works for inference but not training
    let descriptor = MPSGraphRandomOpDescriptor(distribution: .uniform, dataType: .float32)!
    let weightTensor = graph.randomTensor(withShape: [2, 1], descriptor: descriptor, seed: 2, name: nil)

    // This works for inference and training
    // let weights = [Float](repeating: 1, count: 2)
    // let weightTensor = graph.variable(with: Data(bytes: weights, count: 2 * MemoryLayout<Float32>.size), shape: [2, 1], dataType: .float32, name: nil)

    variables += [weightTensor]

    let output = graph.matrixMultiplication(primary: inputPlaceholder, secondary: weightTensor, name: nil)
    let loss = graph.softMaxCrossEntropy(output, labels: labelPlaceholder, axis: -1, reuctionType: .sum, name: nil)

    return (inputPlaceholder, labelPlaceholder, output, loss)
}
```

And to run the graph I have the following in my sample view controller:

```swift
override func viewDidLoad() {
    super.viewDidLoad()

    var variables: [MPSGraphTensor] = []
    let (inputPlaceholder, labelPlaceholder, output, loss) = buildGraph(variables: &variables)
    let gradients = graph.gradients(of: loss, with: variables, name: nil)

    let learningRate = graph.constant(0.001, dataType: .float32)
    var updateOps: [MPSGraphOperation] = []
    for (key, value) in gradients {
        let updates = graph.stochasticGradientDescent(learningRate: learningRate, values: key, gradient: value, name: nil)
        let assign = graph.assign(key, tensor: updates, name: nil)
        updateOps += [assign]
    }

    let commandBuffer = MPSCommandBuffer(commandBuffer: Self.commandQueue.makeCommandBuffer()!)

    let executionDesc = MPSGraphExecutionDescriptor()
    executionDesc.completionHandler = { (resultsDictionary, nil) in
        for (key, value) in resultsDictionary {
            var output: [Float] = [0]
            value.mpsndarray().readBytes(&output, strideBytes: nil)
            print(output)
        }
    }

    let inputDesc = MPSNDArrayDescriptor(dataType: .float32, shape: [2])
    let input = MPSNDArray(device: Self.device, descriptor: inputDesc)
    var inputArray: [Float] = [1, 2]
    input.writeBytes(&inputArray, strideBytes: nil)
    let source = MPSGraphTensorData(input)

    let labelMPSArray = MPSNDArray(device: Self.device, descriptor: MPSNDArrayDescriptor(dataType: .float32, shape: [1]))
    var labelArray: [Float] = [1]
    labelMPSArray.writeBytes(&labelArray, strideBytes: nil)
    let label = MPSGraphTensorData(labelMPSArray)

    // This runs inference and works
    // graph.encode(to: commandBuffer, feeds: [inputPlaceholder: source], targetTensors: [output], targetOperations: [], executionDescriptor: executionDesc)
    //
    // commandBuffer.commit()
    // commandBuffer.waitUntilCompleted()

    // This trains but does not work
    graph.encode(
        to: commandBuffer,
        feeds: [inputPlaceholder: source, labelPlaceholder: label],
        targetTensors: [],
        targetOperations: updateOps,
        executionDescriptor: executionDesc)

    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
}
```

And a few other relevant variables are created at the class scope:

```swift
let graph = MPSGraph()
static let device = MTLCreateSystemDefaultDevice()!
static let commandQueue = device.makeCommandQueue()!
```

How can I use these randomTensor functions on MPSGraph to randomly initialize weights for training?
Posted. Last updated.
Post not yet marked as solved
0 Replies
428 Views
I'm referring to this talk: https://developer.apple.com/videos/play/wwdc2021/10152 I was wondering if the code for the "Image composition" project he demonstrates at the end of the talk (around 24:00) is available somewhere? Would much appreciate any help.
Posted by kapsystk. Last updated.
Post not yet marked as solved
1 Reply
478 Views
I want to know how the preview function is implemented. I have an mlmodel for object detection, and I found that when I open the model in Xcode, Xcode provides a preview function: I put a photo into it and get the predicted bounding boxes. I would like to know how this visualization is implemented. At present, I can only get the three data items Label, Confidence, and BoundingBox in a playground, and drawing the prediction boxes still requires me to write my own processing code.

```swift
import Vision

func performObjectDetection() {
    do {
        let model = try VNCoreMLModel(for: court().model)

        let request = VNCoreMLRequest(model: model) { (request, error) in
            if let error = error {
                print("Failed to perform request: \(error)")
                return
            }

            guard let results = request.results as? [VNRecognizedObjectObservation] else {
                print("No results found")
                return
            }

            for result in results {
                print("Label: \(result.labels.first?.identifier ?? "No label")")
                print("Confidence: \(result.labels.first?.confidence ?? 0.0)")
                print("BoundingBox: \(result.boundingBox)")
            }
        }

        guard let image = UIImage(named: "nbaPics.jpeg"), let ciImage = CIImage(image: image) else {
            print("Failed to load image")
            return
        }

        let handler = VNImageRequestHandler(ciImage: ciImage, orientation: .up, options: [:])
        try handler.perform([request])
    } catch {
        print("Failed to load model: \(error)")
    }
}

performObjectDetection()
```

These are my code and results.
Posted. Last updated.
Post not yet marked as solved
1 Reply
574 Views
We have CoreML models in our app, each encrypted with a separate key generated in Xcode. After an app update we are receiving the following error:

```
[coreml] Could not create persistent key blob for EFD428E8-CDE7-4E0A-B379-FC169E50DE4D : error=Error Domain=com.apple.CoreML Code=8 "Fetching decryption key from server failed." UserInfo={NSLocalizedDescription=Fetching decryption key from server failed., NSUnderlyingError=0x281d80ab0 {Error Domain=CKErrorDomain Code=6 "CKInternalErrorDomain: 2022" UserInfo={NSDebugDescription=CKInternalErrorDomain: 2022, RequestUUID=D5CF13CF-6A10-436B-AB93-4C5C04859FFE, NSLocalizedDescription=Request failed with http status code 503, CKErrorDescription=Request failed with http status code 503, CKRetryAfter=35, NSUnderlyingError=0x281d80000 {Error Domain=CKInternalErrorDomain Code=2022 "Request failed with http status code 503" UserInfo={CKRetryAfter=35, CKHTTPStatus=503, CKErrorDescription=Request failed with http status code 503, RequestUUID=D5CF13CF-6A10-436B-AB93-4C5C04859FFE, NSLocalizedDescription=Request failed with http status code 503}}, CKHTTPStatus=503}}}
```

We tried deleting the app and restarting the device, but nothing works. This was released on the App Store earlier and was working fine; it stopped working after the update. Any help is appreciated.
Posted by appmast. Last updated.