I've fixed the issue and published it to iOS. After reviewing the documentation and source code for coremltools v8.1, I realised that I didn't need to specify the outputs – coremltools handles them automatically. :grin:
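For anyone landing here later, a minimal sketch of a conversion that omits the outputs argument (the model and names below are stand-ins for illustration, not the exact project code):

import torch
import coremltools as ct

# Stand-in model and input, purely for illustration.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
).eval()
example_input = torch.rand(1, 3, 192, 192)
trace = torch.jit.trace(model, example_input)

# No `outputs` argument: coremltools infers the output names and
# shapes from the traced graph.
mlmodel = ct.convert(
    trace,
    inputs=[ct.ImageType(name="input_1", shape=(1, 3, 192, 192))],
    minimum_deployment_target=ct.target.iOS14,
)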
I was able to get results with this code, and the Preview tab loads, but in Preview it says "unable to retrieve results from the vision request".
import torch
import coremltools as ct
from fastai.vision.all import *
import json
from PIL import Image
from torchvision import transforms
# Load your Fastai model (replace with your actual path)
learn = load_learner('lemonmodel.pkl')
# Set the model to eval mode before tracing
learn.model.eval()
# Example input image (you can use any image from your dataset)
input_image = PILImage.create('example.jpg')
# Preprocess the image (assuming you used these transforms during training)
resize = transforms.Resize((192, 192)) # Resize image to match model input size (e.g., 192x192)
to_tensor = transforms.ToTensor()
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
# Apply preprocessing: Resize, Convert to tensor, Normalize
input_image = resize(input_image) # Resize to expected size
input_tensor = to_tensor(input_image) # Convert to tensor
input_tensor = normalize(input_tensor) # Normalize with mean and std
# Add a batch dimension
input_tensor = input_tensor.unsqueeze(0)
# Ensure float32 type
input_tensor = input_tensor.float()
# Trace the model (using the batch-size of 1)
trace = torch.jit.trace(learn.model, input_tensor)
# Define the Core ML input type (image type with correct shape for Core ML)
_input = ct.ImageType(
    name="input_1",
    shape=(1, 3, 192, 192),  # Core ML shape: [batch_size, channels, height, width]
    bias=[-0.485 / 0.229, -0.456 / 0.224, -0.406 / 0.225],  # Mean normalization
    scale=1.0 / 255  # Scale normalization
)
# Define the Core ML output type (we do NOT specify the shape; let Core ML infer it)
_output = ct.TensorType(
    name="output_1",  # Name for the output
)
# Convert the model to Core ML format
mlmodel = ct.convert(
    trace,
    inputs=[_input],
    outputs=[_output],  # Let Core ML infer the output shape
    minimum_deployment_target=ct.target.iOS14  # iOS deployment target
)
# Note: MLModel has no writable `type` attribute, so assigning
# `mlmodel.type = 'imageClassifier'` has no effect; the Preview tab type
# is set via the user_defined_metadata key below instead.
# Define labels for classification
labels_json = {
    "labels": ["lemon", "lime"],  # Replace with your actual class labels
}
# Setting up the metadata with correct 'preview' params
mlmodel.user_defined_metadata['com.apple.coreml.model.preview.params'] = json.dumps(labels_json)
# Set model metadata for Xcode integration
mlmodel.user_defined_metadata["com.apple.coreml.model.preview.type"] = "imageClassifier"
mlmodel.input_description["input_1"] = "Input image to be classified"
mlmodel.output_description["output_1"] = "Classification probabilities for each label"
# Set additional metadata for the Xcode UI (optional)
mlmodel.author = "Your Name or Organization"
mlmodel.short_description = "A classifier for detecting lemon and lime in images."
mlmodel.version = "1.0"
# Save the model as .mlmodel
mlmodel.save("LemonClassifier333.mlmodel")
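As a side note, a common way to get the Xcode Preview tab working for classifiers is to convert with a ClassifierConfig, so Core ML marks the model as a classifier and exposes the class labels directly. A hedged sketch, reusing `trace` and `_input` from above (not a confirmed fix for this exact model):

# Convert as a classifier: Core ML then exposes the class labels and a
# top-class output, which the Xcode Preview tab understands natively.
classifier_config = ct.ClassifierConfig(class_labels=["lemon", "lime"])
mlmodel = ct.convert(
    trace,
    inputs=[_input],
    classifier_config=classifier_config,
    minimum_deployment_target=ct.target.iOS14,
)
mlmodel.save("LemonClassifier.mlmodel")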
@Marco Seraphin, have you found a solution?
I have the same question in FB16079804 and posted here.
The error message suggests that the tensorflow-metal package isn't available in your current Python environment. This could be due to the environment's configuration or compatibility issues with Apple Silicon.
First, ensure you're using the latest versions of Python and pip, as older versions can cause compatibility issues. Run the following commands in Terminal:
python3 -m pip install --upgrade pip
python3 --version
Next, make sure you're using the Miniforge build made specifically for Apple Silicon (M1, M2, M3).
You can create a new environment using Miniforge, and ensure it's correctly set up for Apple Silicon. Run:
conda create -n tf_env python=3.9
conda activate tf_env
Once you're in the correct environment, install TensorFlow with Metal support:
pip install tensorflow-macos
pip install tensorflow-metal
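To confirm the Metal plugin is picked up, a quick check from the same environment (it should list at least one GPU device on Apple Silicon):

import tensorflow as tf

# On a working tensorflow-metal install this prints something like
# [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')].
print(tf.config.list_physical_devices("GPU"))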
Since you're trying to access your existing Jupyter notebooks, make sure that your new environment is accessible from Jupyter. Install Jupyter into the environment you created:
pip install jupyter
Then, link your environment to Jupyter by running:
python -m ipykernel install --user --name=tf_env --display-name "Python (tf_env)"
Tips:
Apple Silicon uses a different architecture from Intel-based Macs, so some Python packages or compiled libraries may need to be reinstalled or recompiled to work with the ARM-based M3 chip.
It's possible that some dependencies or configurations that worked on your Intel Mac aren't fully compatible with the M3. It may be worth recreating the environment from scratch, ensuring all dependencies are built for Apple Silicon.
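One quick sanity check along these lines is to confirm the Python interpreter itself is running natively on arm64 rather than under Rosetta:

import platform

# Prints "arm64" for a native Apple Silicon build, "x86_64" under Rosetta.
print(platform.machine())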
Many thanks. I do not actually need the image set or the files; I was looking for either clarification or the files to look at myself. I am happy with the answer.
Done under FB15811077.
It has been resolved on my side on Xcode 16.1. The issue was that the subscription has to be in a certain status (not In Review) to appear synced. After clearing the subscription group of rejected submissions and having only new ones ready to submit, I was able to perform sandbox and TestFlight tests. The file is synced correctly.
The message seen is the standard response generated by SubscriptionStoreView when a StoreKit purchase fails with an "Unknown" error during the purchase process. This is the default behaviour, which suggests retrying the purchase at a later time. Unfortunately, this is how SubscriptionStoreView is designed to handle such errors.
We believe this should be sufficient to guide the user to try again later.
As SubscriptionStoreView is the recommended approach for subscription management in iOS apps, we are unable to customise the error handling beyond what the framework provides. While the products load correctly, the purchase fails with an error, and at that point the only options are to retry the purchase or wait for the network issue to resolve.
To offer a smoother experience we would need to abandon SubscriptionStoreView, but that would not align with standard app guidelines and could lead to a less intuitive experience for users.
We need help with this.
In the StoreKit configuration file, simulate an "Unknown" purchase error, then attempt a purchase. As designed, SubscriptionStoreView shows a popup saying:
"Unable to Complete Request. Something went wrong. Try again."
We thought this was fine, as it is self-explanatory.
The App Review team says this is a poor user experience, and I need code-level help to tackle it.
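One avenue worth exploring (a sketch assuming iOS 17+, with a placeholder group ID; not App Review-vetted guidance) is the onInAppPurchaseCompletion modifier, which hands you the purchase result so the app can present its own messaging rather than relying only on the default popup:

import StoreKit
import SwiftUI

struct SubscriptionShop: View {
    @State private var showRetryAlert = false

    var body: some View {
        SubscriptionStoreView(groupID: "0123456789")  // placeholder group ID
            // Observe the purchase result so a failure can be surfaced
            // with app-specific wording instead of only the generic popup.
            .onInAppPurchaseCompletion { _, result in
                if case .failure = result {
                    await MainActor.run { showRetryAlert = true }
                }
            }
            .alert("Purchase Incomplete", isPresented: $showRetryAlert) {
                Button("OK", role: .cancel) { }
            } message: {
                Text("The purchase could not be completed. Please check your connection and try again in a few minutes.")
            }
    }
}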
I am facing a similar issue with the same error. @jabakobob, have you resolved this, and how? Many thanks
Tap, you'd need to create a fresh new subscription (a replicate); then the section to add it will appear under the Add Binary form. Only subscriptions that are not In Review, or not yet approved but in "Waiting for Review", will appear there. Submit them from the app as one bundle; do not submit the subscription separately.
I’m referring to the injection of a model container into the environment. We accomplish this by declaring the type as follows:
.modelContainer(for: [Book.self, Author.self])
For instance:
struct Example: App {
    @State private var appData = ApplicationData.shared

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(appData)
                .modelContainer(for: [Book.self, Author.self])
        }
    }
}
I was wondering if we could implement a feature similar to the NavigationPath() type-erasing functionality. This allows us to define a heterogeneous set of types, as we can do with:
@State private var path = NavigationPath()
Instead of explicitly declaring:
@State private var path = [Book]()
Is there a way (if one doesn't already exist) to build a similar capability into the Swift language for environment injection, enabling the injection of a heterogeneous set of types without needing to declare the types explicitly?
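For context, the NavigationPath behaviour referred to above looks like this: the path type-erases its elements, so heterogeneous Hashable types can share one stack (Book and Author here are stand-in types):

import SwiftUI

// Stand-in model types, purely for illustration.
struct Book: Hashable { let title: String }
struct Author: Hashable { let name: String }

struct LibraryView: View {
    // NavigationPath erases element types, so values of Book and
    // Author can be pushed onto the same path.
    @State private var path = NavigationPath()

    var body: some View {
        NavigationStack(path: $path) {
            List {
                Button("Show a book") { path.append(Book(title: "1984")) }
                Button("Show an author") { path.append(Author(name: "George Orwell")) }
            }
            .navigationDestination(for: Book.self) { book in Text(book.title) }
            .navigationDestination(for: Author.self) { author in Text(author.name) }
        }
    }
}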
Apparently it also deletes automatically, as in this snippet:
for index in indexSet {
    let animalToDelete = animals[index]
    if navigationContext.selectedAnimal?.persistentModelID == animalToDelete.persistentModelID {
        navigationContext.selectedAnimal = nil
    }
    modelContext.delete(animalToDelete)
}
The sample above uses the SwiftData autosave feature, which is enabled by default when creating a ModelContainer using the modelContainer(for:inMemory:isAutosaveEnabled:isUndoEnabled:onSetup:) modifier. If this feature is disabled, the removeAnimals method needs to explicitly save the change by calling the ModelContext method save(); for example:
do {
    for index in indexSet {
        let animalToDelete = animals[index]
        if navigationContext.selectedAnimal?.persistentModelID == animalToDelete.persistentModelID {
            navigationContext.selectedAnimal = nil
        }
        modelContext.delete(animalToDelete)
    }
    try modelContext.save()
} catch {
    // Handle error.
}
https://developer.apple.com/documentation/swiftdata/deleting-persistent-data-from-your-app
My app is approved and in the App Store. I have never changed anything on the subscriptions. Perhaps I should add a blank or a dot and resubmit? Right now the subscription wasn't approved – it was rejected due to localisation. The message says:
We have returned your in-app purchase products to you as the required binary was not submitted. When you are ready to submit the binary, please resubmit the in-app purchase products with the binary.
Starting points:
https://developer.apple.com/app-store/pathway/
https://developer.apple.com/storekit/
https://developer.apple.com/design/human-interface-guidelines/in-app-purchase/