Hello,
I’m currently working on TinyML (ML on the edge) using the Google Colab platform. Having exhausted my free compute units, I’m now being prompted to pay. I’ve been considering leveraging the GPU capabilities of my M1 iPad and my Intel-based Mac. Both devices have Thunderbolt ports, which support connections of up to 40 Gb/s. Since I’m primarily using a classification model, extensive GPU usage isn’t necessary.
I’m looking for assistance or guidance on using the iPad’s processor as an eGPU for my Mac, possibly through an API or an Apple technology. Any help would be greatly appreciated!
ML Compute
Accelerate training and validation of neural networks using the CPU and GPUs.
Posts under ML Compute tag
We have to convert a local DOC file to PDF without any server interaction; it must work fully offline.
Any suggestions would be appreciated.
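Not a confirmed answer, but a hedged sketch of one fully offline route, assuming this runs on a Mac where LibreOffice can be installed (the soffice binary converts locally, with no server round-trip; all paths below are placeholders):

import subprocess
from pathlib import Path

# Hedged sketch: LibreOffice's headless mode converts DOC to PDF entirely
# on the local machine. Assumes 'soffice' is on PATH.
def doc_to_pdf(doc_path: str, out_dir: str) -> Path:
    subprocess.run(
        ["soffice", "--headless", "--convert-to", "pdf",
         "--outdir", out_dir, doc_path],
        check=True,
    )
    # LibreOffice names the output after the input file's stem.
    return Path(out_dir) / (Path(doc_path).stem + ".pdf")

# Example (hypothetical paths): doc_to_pdf("/tmp/report.doc", "/tmp")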
Hello everyone,
I am trying to train using CreateML Version 6.0 Beta (146.1), feature extractor Image Feature Print v2.
I am using 100K images, totaling ~4 GB, on my M3 Max with 48 GB (macOS 15.0 Beta (24A5279h)).
The images seem to be read and visualized correctly in the Data Source section (none appear to have corrupted data).
When I start the training, everything is fine for the first 6k–7k pictures; then I receive the following error:
Failed to create CVPixelBufferPool. Width = 0, Height = 0, Format = 0x00000000
This is the first time I am using it, so I don't have much experience with it.
Could you help me understand what the problem could be?
Thanks a lot
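For anyone hitting the same error: since the pool reports Width = 0 and Height = 0, a hedged first step is to force-decode every image in the data source outside Create ML and flag any file that cannot be fully read. A minimal sketch using Pillow (the dataset path is a placeholder):

from pathlib import Path
from PIL import Image

# Flag images whose pixels cannot actually be decoded, even if their
# thumbnails display fine in the Data Source section.
dataset = Path("/path/to/data/source")  # placeholder
for f in sorted(dataset.rglob("*")):
    if f.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
        continue
    try:
        with Image.open(f) as img:
            img.load()  # force a full decode, not just a header read
            if img.width == 0 or img.height == 0:
                print("zero-sized image:", f)
    except Exception as exc:
        print("undecodable image:", f, exc)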
Hi everyone,
I was wondering how accurate the hand classification ML is. For example, is it possible to distinguish the different letters of the sign language alphabet, or is it only capable of recognizing simple poses like a thumbs-up?
I wrote a watch-only app using Bluetooth which runs on the watch, but no prints or logs appear in the output.
I only get:
[S:1] Error received: Connection invalidated.
[S:3] Error received: Connection invalidated.
[S:4] Error received: Connection invalidated.
[S:5] Error received: Connection invalidated.
Message from debugger: killed
Program ended with exit code: 9
In the launch log I find:
Showing Recent Messages
Launch com.apple.Carousel
Platform: watchOS
Device Identifier: 00008310-001244D611D1A01E
Operating System Version: 10.5 (21T576)
Model: Apple Watch Series 9 (Watch7,1)
Apple Watch von Draha is connected via network
Installing com.apple.Carousel on Apple Watch von Draha
Installing on Apple Watch von Draha
Successfully installed
XPC/App Extension Debugging
Setup XPC Debugging for: gwe.WatchBleTest.watchkitapp.WatchBleWidget
Console logging policy: Synchronously obtain os_logs via libLogRedirect, and read stdio from File Descriptors
Stop XPC Debugging for: gwe.WatchBleTest.watchkitapp.WatchBleWidget
View debugging: disabled
Insert view debugging dylib on launch: enabled
Queue debugging: enabled
Memory graph on resource exception: disabled
Address sanitizer: disabled
Thread sanitizer: disabled
Using LLDBRPC. The LLDB framework is from /Applications/Xcode.app/Contents/SharedFrameworks
Device support directory: /Users/gertelsholz/Library/Developer/Xcode/watchOS DeviceSupport/Watch7,1 10.5 (21T576)/Symbols
Attached to process with pid 469
What could be causing this issue, and how can I fix it?
Thanks for your help!
Hi all,
I'm having trouble even getting the latest version of jax-metal to work on my M1 MacBook Pro. In a clean conda environment, I pip install jax-metal and get:
In [1]: import jax; print(jax.numpy.arange(10))
Platform 'METAL' is experimental and not all JAX functionality may be correctly supported!
---------------------------------------------------------------------------
XlaRuntimeError Traceback (most recent call last)
[... skipping hidden 1 frame]
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/xla_bridge.py:977, in _init_backend(platform)
976 logger.debug("Initializing backend '%s'", platform)
--> 977 backend = registration.factory()
978 # TODO(skye): consider raising more descriptive errors directly from backend
979 # factories instead of returning None.
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/xla_bridge.py:666, in register_plugin.<locals>.factory()
665 if not xla_client.pjrt_plugin_initialized(plugin_name):
--> 666 xla_client.initialize_pjrt_plugin(plugin_name)
667 updated_options = {}
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jaxlib/xla_client.py:176, in initialize_pjrt_plugin(plugin_name)
169 """Initializes a PJRT plugin.
170
171 The plugin needs to be loaded first (through load_pjrt_plugin_dynamically or
(...)
174 plugin_name: the name of the PJRT plugin.
175 """
--> 176 _xla.initialize_pjrt_plugin(plugin_name)
XlaRuntimeError: INVALID_ARGUMENT: Mismatched PJRT plugin PJRT API version (0.47) and framework PJRT API version 0.51).
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
Cell In[1], line 1
----> 1 import jax; print(jax.numpy.arange(10))
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/numpy/lax_numpy.py:2952, in arange(start, stop, step, dtype)
2950 ceil_ = ufuncs.ceil if isinstance(start, core.Tracer) else np.ceil
2951 start = ceil_(start).astype(int) # type: ignore
-> 2952 return lax.iota(dtype, start)
2953 else:
2954 if step is None and start == 0 and stop is not None:
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/lax/lax.py:1282, in iota(dtype, size)
1277 def iota(dtype: DTypeLike, size: int) -> Array:
1278 """Wraps XLA's `Iota
1279 <https://www.tensorflow.org/xla/operation_semantics#iota>`_
1280 operator.
1281 """
-> 1282 return broadcasted_iota(dtype, (size,), 0)
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/lax/lax.py:1292, in broadcasted_iota(dtype, shape, dimension)
1289 static_shape = [None if isinstance(d, core.Tracer) else d for d in shape]
1290 dimension = core.concrete_or_error(
1291 int, dimension, "dimension argument of lax.broadcasted_iota")
-> 1292 return iota_p.bind(*dynamic_shape, dtype=dtype, shape=tuple(static_shape),
1293 dimension=dimension)
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/core.py:387, in Primitive.bind(self, *args, **params)
384 def bind(self, *args, **params):
385 assert (not config.enable_checks.value or
386 all(isinstance(arg, Tracer) or valid_jaxtype(arg) for arg in args)), args
--> 387 return self.bind_with_trace(find_top_trace(args), args, params)
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/core.py:391, in Primitive.bind_with_trace(self, trace, args, params)
389 def bind_with_trace(self, trace, args, params):
390 with pop_level(trace.level):
--> 391 out = trace.process_primitive(self, map(trace.full_raise, args), params)
392 return map(full_lower, out) if self.multiple_results else full_lower(out)
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/core.py:879, in EvalTrace.process_primitive(self, primitive, tracers, params)
877 return call_impl_with_key_reuse_checks(primitive, primitive.impl, *tracers, **params)
878 else:
--> 879 return primitive.impl(*tracers, **params)
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/dispatch.py:86, in apply_primitive(prim, *args, **params)
84 prev = lib.jax_jit.swap_thread_local_state_disable_jit(False)
85 try:
---> 86 outs = fun(*args)
87 finally:
88 lib.jax_jit.swap_thread_local_state_disable_jit(prev)
[... skipping hidden 17 frame]
File ~/opt/anaconda3/envs/metal/lib/python3.11/site-packages/jax/_src/xla_bridge.py:902, in backends()
900 else:
901 err_msg += " (you may need to uninstall the failing plugin package, or set JAX_PLATFORMS=cpu to skip this backend.)"
--> 902 raise RuntimeError(err_msg)
904 assert _default_backend is not None
905 if not config.jax_platforms.value:
RuntimeError: Unable to initialize backend 'METAL': INVALID_ARGUMENT: Mismatched PJRT plugin PJRT API version (0.47) and framework PJRT API version 0.51). (you may need to uninstall the failing plugin package, or set JAX_PLATFORMS=cpu to skip this backend.)
jax.__version__ is 0.4.27.
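For what it's worth, the error itself names the mismatch: the installed jax-metal plugin speaks PJRT API 0.47 while jax 0.4.27 expects 0.51. A hedged sketch of the two usual escapes (the exact compatible jax version is an assumption, not verified):

# Option 1 (shell): pin jax/jaxlib back to a release whose PJRT API matches
# the plugin, e.g. (version number is an assumption):
#   python3 -m pip install 'jax==0.4.26' 'jaxlib==0.4.26'

# Option 2: skip the Metal backend entirely, as the error message suggests.
# JAX_PLATFORMS must be set before jax is imported.
import os
os.environ["JAX_PLATFORMS"] = "cpu"

import jax
print(jax.numpy.arange(10))  # runs on the CPU backend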
Will macOS support the AMD RX 7600?
I hope this message finds you well. I recently had the opportunity to watch the insightful session titled "Improve Core ML Integration with Async Prediction" and was thoroughly impressed by the depth of information and the practical demonstration provided. The session offered valuable insights that I believe would greatly benefit my ongoing projects and my understanding of Core ML integration.
As I am keen on implementing the demonstrated workflows and techniques within my own work, I am reaching out to kindly request access to the source code and any related material presented during the session. Having access to the code would enable me to better understand the concepts discussed and apply them more effectively in real-world scenarios.
I believe that being able to review and experiment with the actual code would significantly enhance my learning experience and the implementation efficiency of my projects. It would also serve as a valuable resource for referencing best practices in Core ML integration and async prediction techniques.
Thank you very much for considering my request. I greatly appreciate the effort that went into creating such an informative session and am looking forward to potentially exploring the material in greater depth.
Best regards,
Fabio G.
Hi
Could you add a new feature to Pages and Numbers that uses AI to apply a style from a PDF or template to a document? The AI would arrange footers, headers, fonts, page breaks, and page numbers to match those in the PDF or template, so we could auto-format documents to a desired standard look, in Numbers as well. For example, we could upload raw text plus a PDF of another document or report and get a document in that style, ready to export to PDF or print.
Best regards,
NLEmembedding.wordEmbedding is not available in your language.
This is a very serious issue for any service that caters to Korean users; please fix it quickly. We have included the sample code below.
import UIKit
import CoreML
import NaturalLanguage

class MLTextViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        execute()
    }

    func execute() {
        // Returns nil when no Korean word embedding is available on this OS.
        if let embedding = NLEmbedding.wordEmbedding(for: .korean) {
            let word = "bicycle"
            if let vector = embedding.vector(for: word) {
                print(vector)
            }
            let specificDistance = embedding.distance(between: word, and: "motorcycle")
            print("✅ \(specificDistance.description)")
            embedding.enumerateNeighbors(for: word, maximumCount: 5) { neighbor, distance in
                print("\(neighbor): \(distance.description)")
                return true
            }
        }
    }
}
I cannot find the bug, but running this code (Python) with the torch device mps:0 is slow; it is quicker on cpu:0 or cpu:1. But where is the bug? Or should it run on the Neural Engine with cpu:1?
You need a setup like this:
#!/bin/bash
export HOMEBREW_BREW_GIT_REMOTE="https://github.com/Homebrew/brew" # put your Git mirror of Homebrew/brew here
export HOMEBREW_CORE_GIT_REMOTE="https://github.com/Homebrew/homebrew-core" # put your Git mirror of Homebrew/homebrew-core here
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
eval "$(/opt/homebrew/bin/brew shellenv)"
brew update --force --quiet
chmod -R go-w "$(brew --prefix)/share/zsh"
export OPENBLAS=$(/opt/homebrew/bin/brew --prefix openblas)
export CFLAGS="-falign-functions=8 ${CFLAGS}"
brew install wget
brew install unzip
conda init --all
conda create -n torch-gpu python=3.10
conda activate torch-gpu
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 -c pytorch
conda install -c conda-forge jupyter jupyterlab
python3 -m pip install --upgrade pip
python3 -m pip install insightface==0.2.1 onnx imageio scikit-learn scikit-image moviepy
python3 -m pip install googledrivedownloader
python3 -m pip install imageio==2.4.1
python3 -m pip install Cython
python3 -m pip install --no-use-pep517 numpy
python3 -m pip install torch
python3 -m pip install image
python3 -m pip install timm
python3 -m pip install Pillow # "PlL" was presumably a typo for PIL; Pillow is the installable fork
python3 -m pip install h5py
for i in `seq 1 6`; do
    python3 test.py
done
conda deactivate
exit 0
test.py:
import torch
import math

# this ensures that the current macOS version is at least 12.3+
print(torch.backends.mps.is_available())
# this ensures that the current PyTorch installation was built with MPS activated.
print(torch.backends.mps.is_built())

dtype = torch.float
device = torch.device("cpu", 0)
#device = torch.device("cpu", 1)
#device = torch.device("mps", 0)

# Create random input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Randomly initialize weights
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update weights using gradient descent
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
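One possible explanation rather than a bug: MPS kernels are dispatched asynchronously and each op carries launch overhead, so a 2000-element polynomial fit is dominated by dispatch cost, not compute, and the GPU can easily lose to the CPU. A minimal timing sketch, assuming a recent PyTorch where torch.mps.synchronize() exists:

import time
import torch

# Compare wall-clock time per device; MPS must be synchronized before the
# timer is read, otherwise only the (cheap) kernel enqueue is measured.
def bench(device: torch.device, size: int = 2000, iters: int = 200) -> float:
    x = torch.linspace(-3.14, 3.14, size, device=device)
    a = torch.randn((), device=device)
    start = time.perf_counter()
    for _ in range(iters):
        y = a + a * x + a * x ** 2 + a * x ** 3
    if device.type == "mps":
        torch.mps.synchronize()  # wait for queued GPU work to finish
    return time.perf_counter() - start

print("cpu:", bench(torch.device("cpu")))
if torch.backends.mps.is_available():
    print("mps:", bench(torch.device("mps")))

With tensors this small, the CPU winning is expected; MPS typically only pulls ahead at much larger sizes.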
When attempting to load an mlmodel and run it on the CPU/GPU by passing the ComputeUnit you'd like to use when creating the model with:
model = ct.models.MLModel('mymodel.mlmodel', ct.ComputeUnit.CPU_ONLY)
Documentation for coremltools v7.0 says:
compute_units: coremltools.ComputeUnit
coremltools.ComputeUnit.ALL: Use all compute units available, including the neural engine.
coremltools.ComputeUnit.CPU_ONLY: Limit the model to only use the CPU.
coremltools.ComputeUnit.CPU_AND_GPU: Use both the CPU and GPU, but not the neural engine.
coremltools.ComputeUnit.CPU_AND_NE: Use both the CPU and neural engine, but not the GPU. Available only for macOS >= 13.0.
coremltools 7.0 (and previous versions I've tried) now seems to ignore that hint and only runs my models on the ANE. The same model, when loaded into Xcode and run through a performance test with CPU only selected, runs happily on the CPU in the Xcode performance tool.
Is there a way in python to get our models to run on different compute units?
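One thing worth double-checking, as a hedged guess: in the call above the ComputeUnit is passed positionally, but in the coremltools API reference compute_units is a keyword argument (the second positional parameter of MLModel is a different option), so spelling it out by name may restore the expected behavior:

import coremltools as ct

# Pass compute_units explicitly by keyword rather than positionally.
model = ct.models.MLModel('mymodel.mlmodel',
                          compute_units=ct.ComputeUnit.CPU_ONLY)

# Predictions should then be restricted to the CPU:
# out = model.predict({'input': input_data})  # names are placeholders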
I am working on the neural network classifier provided on the coremltools.readme.io in the updatable->neural network section(https://coremltools.readme.io/docs/updatable-neural-network-classifier-on-mnist-dataset).
I am using the same code, but I get an error saying that coremltools.converters.keras.convert does not exist. I know this may be a coremltools version issue; right now I am using coremltools version 6.2. I converted this model to an mlmodel with .convert only, and it converted successfully.
But then I face an error in the make_updatable function saying the loss layer input must be a softmax layer output. From the coremltools package API reference, I found this is because the layer type is softmaxND when it should be softmax.
The problem arises when I convert the model from a Keras sequential model to a Core ML model: the layer name and type change, and the softmax becomes softmaxND.
Has anyone faced this issue?
If I execute builder.inspect_layers(last=4), I get this output:
[Id: 32], Name: sequential/dense_1/Softmax (Type: softmaxND)
Updatable: False
Input blobs: ['sequential/dense_1/MatMul']
Output blobs: ['Identity']
[Id: 31], Name: sequential/dense_1/MatMul (Type: batchedMatmul)
Updatable: False
Input blobs: ['sequential/dense/Relu']
Output blobs: ['sequential/dense_1/MatMul']
[Id: 30], Name: sequential/dense/Relu (Type: activation)
Updatable: False
Input blobs: ['sequential/dense/MatMul']
Output blobs: ['sequential/dense/Relu']
In the make_updatable function, when I execute
builder.set_categorical_cross_entropy_loss(name='lossLayer', input='Identity')
I get this error:
ValueError: Categorical Cross Entropy loss layer input (Identity) must be a softmax layer output.
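A hedged workaround sketch, not a confirmed fix: since set_categorical_cross_entropy_loss insists on a plain softmax, one option is to delete the converter's softmaxND layer and re-add an equivalent softmax layer with the same input and output wiring (the model path is a placeholder; the 'Identity' name is taken from the inspect_layers output above):

import coremltools as ct
from coremltools.models.neural_network import NeuralNetworkBuilder

spec = ct.utils.load_spec('mnist_classifier.mlmodel')  # placeholder path
builder = NeuralNetworkBuilder(spec=spec)

# Swap the softmaxND layer for a plain softmax with identical wiring.
layers = builder.spec.neuralNetwork.layers
for i, layer in enumerate(layers):
    if layer.WhichOneof('layer') == 'softmaxND':
        in_name, out_name = layer.input[0], layer.output[0]
        del layers[i]
        builder.add_softmax(name='softmax',
                            input_name=in_name,
                            output_name=out_name)
        break

# With a true softmax now producing 'Identity', the loss layer should attach:
builder.set_categorical_cross_entropy_loss(name='lossLayer', input='Identity')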