Hi,
I'm a French developer. I downloaded the Recognizing Speech in Live Audio sample code from the Apple Developer website and tried to execute the data generator command after changing the locale identifier from 'en_US' to 'fr' in the data generator's main file, but when I ran the command in Xcode, I got this error message: "Identifier 'fr' does not parse into two elements."
I checked the XML files associated with the bin archive file, and the identifiers are not correct (they keep the 'en-US' value). The error message suggests the command expects a two-element identifier (language and region, e.g. 'fr_FR') rather than a bare language code.
Thanks for your help !
Running grouped convolutions on an M2 with the Metal plugin, I get an error. Example code below.
Using TF 2.11 with no Metal plugin, I get:
import tensorflow as tf
tf.keras.layers.Conv1D(5,1,padding="same", kernel_initializer="ones", groups=5)(tf.ones((1,1,5)))
# displays
<tf.Tensor: shape=(1, 1, 5), dtype=float32, numpy=array([[[1., 1., 1., 1., 1.]]], dtype=float32)>
On TF 2.14 with the plugin, I get:
import tensorflow as tf
tf.keras.layers.Conv1D(5,1,padding="same", kernel_initializer="ones", groups=5)(tf.ones((1,1,5)))
# displays
...
NotFoundError: Exception encountered when calling layer 'conv1d_3' (type Conv1D).
could not find registered platform with id: 0x104d8f6f0 [Op:__inference__jit_compiled_convolution_op_78]
Call arguments received by layer 'conv1d_3' (type Conv1D):
• inputs=tf.Tensor(shape=(1, 1, 5), dtype=float32)
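One possible workaround sketch, assuming the failure is specific to the Metal plugin's kernel for grouped convolutions: pin just that op to the CPU, leaving the rest of the graph on the GPU.
import tensorflow as tf

layer = tf.keras.layers.Conv1D(5, 1, padding="same", kernel_initializer="ones", groups=5)
# run only the grouped convolution on the CPU to bypass the plugin's kernel
with tf.device("/CPU:0"):
    out = layer(tf.ones((1, 1, 5)))
print(out)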
I have a neural network that should run on my device with 3 different input shapes. When I convert it to an mlmodel or mlpackage file with a fixed input size, it runs on the ANE.
But when I convert it with EnumeratedShapes, it runs only on the CPU.
Why?
I think the problematic layer is the slice (which is converted to SliceStatic in the flexible model), but I don't understand why, or whether there is any way to solve it and run the enumerated model on the ANE.
Here is my code
import torch
import coremltools as ct

class TestModel(torch.nn.Module):
    def __init__(self):
        super(TestModel, self).__init__()
        self.dw1 = torch.nn.Conv2d(in_channels=641, out_channels=641, kernel_size=(5,4), groups=641)
        self.pw1 = torch.nn.Conv2d(in_channels=641, out_channels=512, kernel_size=(1,1))
        self.relu = torch.nn.ReLU()
        self.pw2 = torch.nn.Conv2d(in_channels=512, out_channels=641, kernel_size=(1,1))
        self.dw2 = torch.nn.Conv2d(in_channels=641, out_channels=641, kernel_size=(5,1), groups=641)
        self.pw3 = torch.nn.Conv2d(in_channels=641, out_channels=512, kernel_size=(1,1))
        self.block1_dw = torch.nn.Conv2d(in_channels=512, out_channels=512, kernel_size=(5,1), groups=512)
        self.block1_pw = torch.nn.Conv2d(in_channels=512, out_channels=512, kernel_size=(1,1))

    def forward(self, inputs):
        x = self.dw1(inputs)
        x = self.pw1(x)
        x = self.relu(x)
        x = self.pw2(x)
        x = self.dw2(x)
        x = self.pw3(x)
        x = self.relu(x)
        y = self.block1_dw(x)
        y = self.block1_pw(y)
        y = self.relu(y)
        z = x[:,:,4:,:] + y
        return z

ex_input = torch.rand(1, 641, 44, 4)
traced_model = torch.jit.trace(TestModel().eval(), [ex_input,])

# enum_shape was not shown in the original post; hypothetical example shapes
# standing in for the "3 different input shapes":
enum_shape = ct.EnumeratedShapes(shapes=[(1, 641, 44, 4), (1, 641, 88, 4), (1, 641, 132, 4)])

ct_enum_inputs = [ct.TensorType(name='inputs', shape=enum_shape)]
ct_outputs = [ct.TensorType(name='out')]
mlmodel_enum = ct.convert(traced_model, inputs=ct_enum_inputs, outputs=ct_outputs, convert_to="neuralnetwork")
mlmodel_enum.save(...)
Thanks.
I did a clean install of Python (v3.10), then TensorFlow and tensorflow-metal, following exactly the process stated on Apple's plugin support page. Now, every time I run any Python code with TensorFlow, it crashes at the model.fit instruction. It does not matter what I feed into it, even code that used to run perfectly on my previous MacBook (Intel). I've researched ad nauseam for answers, but Apple washes its hands, stating that it is a TensorFlow issue, and TensorFlow does the same. The fact is that exactly the same code runs flawlessly on my Windows NVIDIA PC setup.
I purchased the M3 laptop with the hope of being able to train my neural networks "on the go"... now I've lost $5,000 USD, I can't make it work, and it's a total disaster.
I am extremely competent in Python development and have been developing neural networks for years, so if you are going to comment, please avoid suggestions like "check your Python version". This is definitely due to the M3 Mac; the exact same setup works fine on an M1 Ultra Mac Studio. It is just not portable...
Does anyone have specific advice on how to set up TensorFlow properly for the M3 Mac?
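In the meantime, a minimal smoke test can at least confirm whether the crash is specific to the Metal GPU path (a sketch of mine, not an official diagnostic): if the same fit call succeeds with the GPU hidden, the plugin is the variable.
import tensorflow as tf

# hide the GPU; comment this line out to exercise the Metal path
tf.config.set_visible_devices([], 'GPU')

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
model.fit(tf.random.normal((64, 4)), tf.random.normal((64, 1)), epochs=2, verbose=2)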
The WWDC22 video "Explore the machine learning development experience" provides Python code for an interesting application (real-time ML image colorization), but it doesn't provide the complete Xcode project, and it assumes the viewer knows how to run Python in Xcode (I haven't heard of such a thing in 10 years of iOS development!).
Any pointers to either the video's example Xcode project, or how to create a suitable Xcode project capable of running Python code?
Hello! I'm implementing a mechanism for cropping an object out of an image.
@MainActor static func detectObjectOnImage(image: UIImage) async throws -> UIImage {
    let analyser = ImageAnalyzer()
    let interaction = ImageAnalysisInteraction()
    // request subject-lifting ("visual look up") analysis
    let configuration = ImageAnalyzer.Configuration([.visualLookUp])
    let analysis = try await analyser.analyze(image, configuration: configuration)
    interaction.analysis = analysis
    // composite all detected subjects into a single cropped image
    return try await interaction.image(for: interaction.subjects)
}
My app supports iOS 16, and the compiler doesn't complain about this code. However, when I run it on a simulator with iOS 16, I get a "symbol not found" error at app launch. Does anybody know what the issue could be?
Kia ora,
I've been having heaps of trouble recently trying to get TensorFlow working; it just suddenly stopped, and the kernel crashes every time I try to import tf.
I've tried just about everything, e.g. a fresh install of Python and reinstalling the Xcode dev tools.
Below are the relevant lines of pip freeze (using Python 3.10.13, btw):
tensorboard==2.15.1
tensorboard-data-server==0.7.2
tensorboard-plugin-wit==1.8.1
tensorflow==2.15.0
tensorflow-estimator==2.15.0
tensorflow-io-gcs-filesystem==0.34.0
tensorflow-macos==2.15.0
tensorflow-metal==0.5.0
Below is the cell in question that is killing the kernel:
import tensorflow as tf
import matplotlib.pyplot as plt
import tensorflow_datasets as tfds
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dense, Flatten, InputLayer, BatchNormalization, Dropout
from tensorflow.keras.losses import BinaryCrossentropy
from tensorflow.keras.optimizers.legacy import Adam
I'll be around all day so if you have anything that can help, I'll be sure to give it a go as soon as you post it and get back to you!
Looking forward to your replies.
Nga mihi,
Kane
After training my dataset, the training, validation, and testing sets all show 0% detection accuracy, and all my test photos show false negatives. The dataset has 1,032 photos and 2 classes, and I used Roboflow for the image annotation. For the network, I chose the full network. Is there any way to fix this?
Is there a way to extract the list of words recognized by the Speech framework?
I'm trying to filter out words that won't appear in the transcription output, but to do that I'll need a list of words that can appear. SFSpeechLanguageModel.Configuration can be initialized with a vocabulary, but there doesn't seem to be a way to read it, and while there are ways to create custom vocabularies, I have yet to find a way to retrieve it.
I added the Natural Language tag in case the framework might contribute to a solution
On an Apple M1 with Ventura 13.6.
I followed the steps on the Get started with tensorflow-metal page here:
https://developer.apple.com/metal/tensorflow-plugin/
python3 -m venv ~/venv-metal
source ~/venv-metal/bin/activate
python -m pip install -U pip
python -m pip install tensorflow
python -m pip install tensorflow-metal
With a clean start I also tried pinning the version:
python -m pip install tensorflow==2.13.0
which reported "Successfully installed tensorflow-metal-1.0.0".
The table here suggested this should work.
https://pypi.org/project/tensorflow-metal/
But I got the same error...
Running Python code without the tensorflow import was not a problem. I found forum threads describing a similar error on the M1, but none of the proposed solutions worked.
Are there suggested steps to get the "get started" tutorial working?
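For reference, a quick sanity check (a sketch, assuming the install completed) that the Metal device is registered before running the tutorial's training script:
import tensorflow as tf

print(tf.__version__)
# tensorflow-metal should expose one GPU PhysicalDevice here
print(tf.config.list_physical_devices('GPU'))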
Hello,
My understanding of the paper below is that iOS ships with a MobileNetv3-based ML model backbone, which then uses different heads for specific tasks in iOS.
I understand that this backbone is accessible for various uses through the Vision framework, but I was wondering if it is also accessible for on-device fine-tuning for other purposes. Just as an example, if I want a model that detects some unique object in a photo, can I use the built-in backbone, or do I have to include my own in the app?
Thanks very much for any advice and apologies if I didn't understand something correctly.
Source: https://machinelearning.apple.com/research/on-device-scene-analysis
I am currently facing a performance issue while using Core ML on iOS 16+ devices to run a simple grid_sample model. When profiling the model with the Xcode profiler, I noticed that before each Neural Engine computation there is a significant delay caused by the "input copy" and "neural engine-data copy" operations. I have specified that both the input and output of the model are of type float16, so there shouldn't be any data type conversion.
I would appreciate any insights or suggestions regarding the reasons behind this delay and possible solutions.
My simple model is:
import os
import numpy as np
import torch
import torch.nn.functional as F
import coremltools

class GridSample(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, input: torch.Tensor, grid: torch.Tensor) -> torch.Tensor:
        output = F.grid_sample(
            input, grid.to(input), mode='nearest', padding_mode='zeros', align_corners=True,
        )
        return output

tr_input = torch.randn((8, 64, 512, 512))
tr_grid = torch.randn((8, 256, 256, 2))
simple_model = GridSample()
simple_model.eval()
traced_model = torch.jit.trace(simple_model, [tr_input, tr_grid])
coreml_input = [coremltools.TensorType(name="image_input", shape=tr_input.shape, dtype=np.float16),
                coremltools.TensorType(name="warp_grid", shape=tr_grid.shape, dtype=np.float16)]
mlmodel = coremltools.converters.convert(traced_model, inputs=coreml_input,
                                         convert_to="mlprogram",
                                         minimum_deployment_target=coremltools.target.iOS16,
                                         compute_units=coremltools.ComputeUnit.ALL,
                                         compute_precision=coremltools.precision.FLOAT16,
                                         outputs=[coremltools.TensorType(name="x0", dtype=np.float16)],
                                         debug=False)
mlmodel.save("./grid_sample.mlpackage")
os.system("xcrun coremlcompiler compile './grid_sample.mlpackage' './'")
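A rough timing sketch (my addition, not from the original post): on macOS, the converted model can be exercised from Python via coremltools, which makes the per-call overhead visible. It reuses the mlmodel object from the conversion above.
import time

sample = {"image_input": np.random.rand(8, 64, 512, 512).astype(np.float16),
          "warp_grid": np.random.rand(8, 256, 256, 2).astype(np.float16)}
for i in range(5):
    t0 = time.perf_counter()
    mlmodel.predict(sample)  # includes any per-call input-copy overhead
    print(f"call {i}: {(time.perf_counter() - t0) * 1e3:.1f} ms")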
I haven't used the GPU implementation for over a year now due to constant issues (I use tf.config.set_visible_devices([], 'GPU') to use the CPU only).
I have also had a couple of issues with model convergence using the GPU; however, this issue seems more prominent, and possibly unrelated.
Here is an example of code that causes a memory leak using the GPU (I cannot link the dataset, but it is called "Text classification documentation", by TANISHQ DUBLISH on Kaggle):
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
df = pd.read_csv('df_file.csv')
df.head()
train_df = df.sample(frac=0.7, random_state=42)
val_df = df.drop(train_df.index).sample(frac=0.5, random_state=42)
test_df = df.drop(train_df.index).drop(val_df.index)
train_dataset = tf.data.Dataset.from_tensor_slices((train_df['Text'].values, train_df['Label'].values)).batch(32).prefetch(tf.data.AUTOTUNE)
val_dataset = tf.data.Dataset.from_tensor_slices((val_df['Text'].values, val_df['Label'].values)).batch(32).prefetch(tf.data.AUTOTUNE)
test_dataset = tf.data.Dataset.from_tensor_slices((test_df['Text'].values, test_df['Label'].values)).batch(32).prefetch(tf.data.AUTOTUNE)
text_vectorizer = tf.keras.layers.TextVectorization(max_tokens=100_000, output_mode='int', output_sequence_length=1000, pad_to_max_tokens=True)
text_vectorizer.adapt(train_df['Text'].values)
embedding = tf.keras.layers.Embedding(input_dim=len(text_vectorizer.get_vocabulary()), output_dim=128, input_length=1000)
inputs = tf.keras.layers.Input(shape=[], dtype=tf.string)
x = text_vectorizer(inputs)
x = embedding(x)
x = tf.keras.layers.LSTM(64)(x)
outputs = tf.keras.layers.Dense(5, activation='softmax')(x)
model_2 = tf.keras.Model(inputs, outputs, name='model_2_lstm')
model_2.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(), optimizer=tf.keras.optimizers.legacy.Adam(), metrics=['accuracy'])
model_2_history = model_2.fit(train_dataset, epochs=50, validation_data=val_dataset, callbacks=[
tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True),
tf.keras.callbacks.ModelCheckpoint(model_2.name, save_best_only=True),
tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', patience=5, verbose=1)
])
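To confirm the growth is on the GPU path, here is a small callback sketch that logs device memory each epoch; it assumes tf.config.experimental.get_memory_info('GPU:0') is supported by the Metal backend, which may not be the case.
import tensorflow as tf

class MemoryLogger(tf.keras.callbacks.Callback):
    """Logs current and peak device memory after each epoch."""
    def on_epoch_end(self, epoch, logs=None):
        info = tf.config.experimental.get_memory_info('GPU:0')
        print(f"epoch {epoch}: current={info['current'] / 1e6:.1f} MB, peak={info['peak'] / 1e6:.1f} MB")
Adding MemoryLogger() to the callbacks list above would show whether memory climbs monotonically across epochs.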
I'm using DataScannerViewController with SwiftUI to scan text and barcodes from a card. I would like the user to be able to hold the card in front of the device, but I am not finding a way to select the front camera with DataScannerViewController.
Does anyone know of a way to select the front camera?
After migrating my Ionic Cordova app to Ionic Capacitor, I am encountering a persistent white screen on a particular page. Along with this, I have observed the following error messages in the console:
Error Message: [com.apple.VisionKit.RemoveBackground] Request to remove background on an unsupported device. Error Domain=com.apple.VisionKit.RemoveBackground Code=-8 "(null)"
Error Message: [UILog] Called -[UIContextMenuInteraction updateVisibleMenuWithBlock:] while no context menu is visible. This won't do anything.
The actual page becomes visible after tapping on the white screen.
The same code works fine in the Android build, but I'm facing this issue on iOS.
Hello,
We all face issues with the latest TensorFlow GPU support: incorrect results, errors, etc. We all agreed to pay extra for the M1/M2/M3 so we could work on a professional-grade computer, but in the end we must use the CPU. When will Apple actually comment on that and provide updates? I totally understand these issues aren't fixed overnight and take some time, but I've never seen any Apple dev answer saying that they understand and are working on a fix.
I basically bought a Mac M3 Pro to be able to run some stuff on the GPU without having to purchase a server, and it's now useless. It's really frustrating.
I'm exploring my Vision Pro and finding it unclear whether I can even achieve things like body pose detection etc.
https://developer.apple.com/videos/play/wwdc2023/111241/
It's clear that I can apply it to self-provided images, but what about the data coming from the visionOS SDKs?
All I can find is this mesh data from ARKit, https://developer.apple.com/documentation/arkit/arkit_in_visionos - am I missing something, or do we not yet have good APIs for this?
Appreciate any guidance! Thanks.
Problem
I am trying to use the jax.numpy.einsum function (https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.einsum.html). However, for some subscripts, this seems to fail.
Hardware
Apple M1 Max, 32GB RAM
Steps to Reproduce
follow installation steps from https://developer.apple.com/metal/jax/
conda create -n 'jax_metal_demo' python=3.11
conda activate jax_metal_demo
python -m pip install numpy wheel ml-dtypes==0.2.0
python -m pip install jax-metal
Save the following code in a file called minimal_example.py
import numpy as np
from jax import device_put
import jax.numpy as jnp
np.random.seed(0)
a = np.random.rand(11, 12, 13, 11, 12)
b = np.random.rand(11, 12, 13)
subscripts = 'ijklm,ijk->lmk'
# intended result
print(np.einsum(subscripts, a, b))
# will cause crash
a, b = device_put(a), device_put(b)
print(jnp.einsum(subscripts, a, b))
run the code
python minimal_example.py
Output
I was expecting the same result as np.einsum, but instead got:
Platform 'METAL' is experimental and not all JAX functionality may be correctly supported!
2024-02-12 16:45:34.684973: W pjrt_plugin/src/mps_client.cc:563] WARNING: JAX Apple GPU support is experimental and not all JAX functionality is correctly supported!
Metal device set to: Apple M1 Max
systemMemory: 32.00 GB
maxCacheSize: 10.67 GB
Traceback (most recent call last):
File "/Users/linus/workspace/minimal_example.py", line 15, in <module>
print(jnp.einsum(subscripts, a, b))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/linus/miniforge3/envs/jax_metal_demo/lib/python3.11/site-packages/jax/_src/numpy/lax_numpy.py", line 3369, in einsum
return _einsum_computation(operands, contractions, precision, # type: ignore[operator]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/linus/miniforge3/envs/jax_metal_demo/lib/python3.11/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
jaxlib.xla_extension.XlaRuntimeError: UNKNOWN: /Users/linus/workspace/minimal_example.py:15:6: error: failed to legalize operation 'mhlo.dot_general'
print(jnp.einsum(subscripts, a, b))
^
/Users/linus/workspace/minimal_example.py:15:6: note: see current operation: %0 = "mhlo.dot_general"(%arg1, %arg0) {dot_dimension_numbers = #mhlo.dot<lhs_batching_dimensions = [2], rhs_batching_dimensions = [2], lhs_contracting_dimensions = [0, 1], rhs_contracting_dimensions = [0, 1]>, precision_config = [#mhlo<precision DEFAULT>, #mhlo<precision DEFAULT>]} : (tensor<11x12x13xf32>, tensor<11x12x13x11x12xf32>) -> tensor<13x11x12xf32>
--------------------
For simplicity, JAX has removed its internal frames from the traceback of the following exception. Set JAX_TRACEBACK_FILTERING=off to include these.
Conclusion
I would greatly appreciate any ideas for workarounds.
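One workaround sketch with the right semantics, assuming the failure is specific to how this subscript lowers to mhlo.dot_general: expand the contraction into an explicit broadcast-multiply-reduce, which avoids that op.
import numpy as np
import jax.numpy as jnp
from jax import device_put

np.random.seed(0)
a = np.random.rand(11, 12, 13, 11, 12)
b = np.random.rand(11, 12, 13)
a, b = device_put(a), device_put(b)

# 'ijklm,ijk->lmk': broadcast b over the l and m axes, multiply,
# reduce over i and j, then move k to the last axis
out = jnp.moveaxis((a * b[:, :, :, None, None]).sum(axis=(0, 1)), 0, -1)
print(out.shape)  # (11, 12, 13), matching np.einsum('ijklm,ijk->lmk', a, b)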
I want to use Core ML to process video data. The ML model will take multiple frames as input. How should I get multiple frames on iOS and process them?
Thanks in advance for any suggestions.
InvalidArgumentError: Cannot assign a device for operation don_nn/model_2/branch_hidden0/MatMul/ReadVariableOp: Could not satisfy explicit device specification '' because the node {{colocation_node don_nn/model_2/branch_hidden0/MatMul/ReadVariableOp}} was colocated with a group of nodes that required incompatible device '/job:localhost/replica:0/task:0/device:GPU:0'. All available devices [/job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:GPU:0].