FB:FB16079804
Hello,
I've adapted fastai's Cat vs Dog model into a model that distinguishes lemons from limes, and it all works fine in a notebook.
I am now looking to convert this model to Core ML for my iOS app using TorchScript and Apple's official coremltools guidelines.
The model converts, but I cannot see the Preview tab in Xcode. Has anyone here tried converting to Core ML? I suspect my input types don't match what coremltools expects for the preview, but I am stuck. Here is my code.
import torch
import coremltools as ct
from fastai.vision.all import *
import json
from torchvision import transforms
# Load your Fastai model (replace with your actual path)
learn = load_learner('lemonmodel.pkl')
# Example input image (you can use any image from your dataset)
input_image = PILImage.create('example.jpg')
# Preprocess the image (assuming you used these transforms during training)
to_tensor = transforms.ToTensor()
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
input_tensor = to_tensor(input_image)
input_tensor = normalize(input_tensor) # Apply normalization
# Add a batch dimension
input_tensor = input_tensor.unsqueeze(0)
# Ensure float32 type
input_tensor = input_tensor.float()
# Trace the model
trace = torch.jit.trace(learn.model, input_tensor)
# Define the Core ML input type (considering your model's input shape)
_input = ct.ImageType(
    name="input_1",
    shape=input_tensor.shape,
    bias=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
    scale=1./(255*0.226)
)
# Convert the model to Core ML format
mlmodel = ct.convert(
    trace,
    inputs=[_input],
    minimum_deployment_target=ct.target.iOS14  # Optional, set deployment target
)
# Set model type as 'imageClassifier' for the Preview tab
mlmodel.type = 'imageClassifier'
# Correct structure for preview parameters (assuming two classes: 'lemon' and 'lime')
labels_json = {
    "imageClassifier": {
        "labels": ["lemon", "lime"],
        "input": {
            "shape": list(input_tensor.shape),  # Provide the actual input shape
            "mean": [0.485, 0.456, 0.406],      # Match normalization mean
            "std": [0.229, 0.224, 0.225]        # Match normalization std
        },
        "output": {
            "shape": [1, 2]                     # Output shape for your model (2 classes)
        }
    }
}
# Setting up the metadata with correct 'preview' params
mlmodel.user_defined_metadata['com.apple.coreml.model.preview.params'] = json.dumps(labels_json)
# Save the model as .mlmodel
mlmodel.save("LemonClassifierGemini.mlmodel")
My model is:
Input batch shape: torch.Size([32, 3, 192, 192])
Labels batch shape: torch.Size([32])
Validation Loss: None, Validation Metric: None
Predictions shape: torch.Size([63, 2])
Targets shape: torch.Size([63])
Code for the model:
searches = 'lemon','lime'
path = Path('lemon_or_not')
for o in searches:
    dest = (path/o)
    dest.mkdir(exist_ok=True, parents=True)
    download_images(dest, urls=search_images(f'{o} photo'))
    time.sleep(5)
    resize_images(path/o, max_size=400, dest=path/o)
dls = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,
    item_tfms=[Resize(192, method='squish')]
).dataloaders(path, bs=32)
dls.show_batch(max_n=6)
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)
is_lemon,_,probs = learn.predict(PILImage.create('lemon.jpg'))
print(f"This is a: {is_lemon}.")
print(f"Probability it's a lemon: {probs[0]:.4f}")
This is a: lemon.
Probability it's a lemon: 1.0000
learn.export('lemonmodel.pkl')
I am stuck on why it doesn't show the Preview tab.
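For reference, below is the alternative, classifier-style conversion I have been comparing my code against. It is only a minimal sketch based on my reading of the coremltools docs: I am assuming the Preview tab needs the model converted as a classifier via ClassifierConfig (rather than a type attribute), that the label order matches dls.vocab, and that appending a softmax is an acceptable way to turn the logits into probabilities. The output file name is just a placeholder.
import torch
import coremltools as ct
from fastai.vision.all import load_learner

learn = load_learner('lemonmodel.pkl')

# The traced fastai model returns raw logits, so append a softmax
# to get probabilities out of the Core ML classifier.
model = torch.nn.Sequential(learn.model.eval(), torch.nn.Softmax(dim=1))

example = torch.rand(1, 3, 192, 192)  # matches the 192x192 training size
trace = torch.jit.trace(model, example)

# Converting with a ClassifierConfig marks the model as an image
# classifier; my assumption is that this, not a 'type' attribute,
# is what Xcode keys the Preview tab off.
mlmodel = ct.convert(
    trace,
    inputs=[ct.ImageType(
        name="input_1",
        shape=(1, 3, 192, 192),
        # Approximate ImageNet normalisation folded into scale/bias.
        scale=1.0 / (255.0 * 0.226),
        bias=[-0.485 / 0.226, -0.456 / 0.226, -0.406 / 0.226],
    )],
    classifier_config=ct.ClassifierConfig(['lemon', 'lime']),  # assumed label order
    minimum_deployment_target=ct.target.iOS14,
)
mlmodel.save("LemonClassifierSketch.mlmodel")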
Submitted as: FB16052050
I am looking to adopt machine learning in a more granular manner, going beyond just the pre-built Metal, Core ML, or Create ML approaches. Specifically, I want to train models using open Python libraries such as PyTorch, as these offer greater flexibility compared to Apple's native tools. However, these PyTorch APIs are primarily optimised for NVIDIA GPUs (or TPUs), not Apple's M3 or the Apple Neural Engine (ANE).
My goal is to train the models locally without resorting to cloud-based solutions for training or inference, and to then convert the models into Core ML format for deployment on Apple hardware. This would allow me to leverage Apple's hardware acceleration (via ANE, Metal, and MPS) while maintaining control over the training process in PyTorch.
I want to know:
What are my options for training models in PyTorch on local hardware (Apple M3 or equivalent), and how can I ensure that the PyTorch model can eventually be converted to Core ML without losing flexibility in model training and customisation?
How can I perform training in PyTorch and avoid being restricted to the inference-only workflows that Core ML typically imposes? Is it possible to use PyTorch's training capabilities and still get the performance benefits of Apple's hardware for both training and inference?
What are the best practices or tools to ensure that my training pipeline in PyTorch is compatible with Apple's hardware constraints and optimised for local execution?
In short, I'm seeking a practical, cloud-free approach on Apple hardware only that allows me to train models in PyTorch (keeping control over the training process) while ensuring they can be deployed efficiently using Core ML.
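To make this concrete, the kind of workflow I have in mind is sketched below. It is only a rough sketch with a toy model and random data (both are my own placeholders): train locally on PyTorch's MPS backend, then trace and convert the result with coremltools so inference can run on Apple hardware.
import torch
import coremltools as ct

# Train on Apple Silicon via the Metal Performance Shaders (MPS)
# backend, falling back to the CPU if MPS is unavailable.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Toy model and random data as stand-ins for a real training pipeline.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
x = torch.randn(256, 10, device=device)
y = torch.randint(0, 2, (256,), device=device)

for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Move back to the CPU, trace, and convert to an ML Program so the
# Core ML runtime can schedule it on the ANE, GPU, or CPU at inference time.
model = model.to("cpu").eval()
traced = torch.jit.trace(model, torch.randn(1, 10))
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=(1, 10))],
    convert_to="mlprogram",
)
mlmodel.save("TrainedOnMPS.mlpackage")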
In the 2019 WWDC session Training Object Detection Models in Create ML, a JSON file named annotations_832_newdice_copy.json was shown alongside an images folder named Dice Training Images Two Sets.
Are these resources made available to developers?
I am also trying to understand whether the roughly 6000 annotations had to be done manually. In other words, did they annotate around 1000 images by hand, with about 6 labels each, to produce this dataset? The video shows around 1000 images.
Can someone please clarify?
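For context, my understanding of that annotation file (an assumption based on the documented Create ML object detection format, not the actual contents of annotations_832_newdice_copy.json) is that it is a list of per-image entries with one labelled bounding box per die, which is why roughly 1000 images with about 6 dice each would add up to around 6000 annotations. The file and label names below are made up:
import json

# Hypothetical entries illustrating the Create ML object detection
# annotation format; x/y are the centre of each bounding box in pixels.
annotations = [
    {
        "image": "dice_0001.jpg",
        "annotations": [
            {"label": "three", "coordinates": {"x": 120, "y": 164, "width": 52, "height": 50}},
            {"label": "five",  "coordinates": {"x": 310, "y": 98,  "width": 54, "height": 51}},
        ],
    },
]
print(json.dumps(annotations, indent=2))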
On Xcode 16 and 16.1, the StoreKit Configuration file is syncing the rejected subscription that is in the "In Review" status, but not the new one in "Waiting for Review".
What are the rules for what gets synced into the file?
I am getting the subject error in the console while simulating a free trial.
The free trial proceeds, but I can't figure out what the error means.
I could not find anything similar on the web or in the documentation.
iOS 18.1, StoreKit 2, SwiftUI.
Has anyone had a similar issue and could advise?
The guidance provided only explains how to remove the entire app from review, but not how to remove a single subscription. Currently, I have one incorrect subscription that has been rejected due to localisation issues, and it is stuck in the "In Review" status. Unfortunately, I am unable to make changes to it.
The correct subscription is in the "Waiting for Review" status, and while I am able to remove it, there is no need to do so. What I would like to do is remove the previous, incorrect subscription.
PLATFORM AND VERSION
iOS
Development environment: Xcode 16, macOS 15.0.1
Run-time configuration: iOS 18.1
DESCRIPTION OF THE PROBLEM
I am experiencing inconsistent behaviour in SwiftUI when attempting to hide the status bar programmatically. Specifically, I want the status bar to be hidden only during AreaQuizView, but the behaviour is not consistent across different views. On some views, the status bar hides correctly, while on others, it does not. I am using .statusBarHidden() to achieve this.
STEPS TO REPRODUCE
In AreasView, apply .statusBarHidden() to the outermost VStack. The status bar hides as expected.
In AreaQuizView, apply the same .statusBarHidden() to the outermost stack. However, in this view, the status bar remains visible.
In the build target, if I set "Status bar is initially hidden" to YES in the Info.plist, I lose control over the status bar entirely.
EXPECTED BEHAVIOUR
I would expect that .statusBarHidden() would behave consistently across all views, allowing me to programmatically hide the status bar only during AreaQuizView and leave it visible for other views.
ADVICE REQUESTED
Could you please advise on how to achieve the desired behaviour, where the status bar is only hidden during AreaQuizView? I would also appreciate any guidance on ensuring consistent behaviour across different views.
Please advise.
Hi everyone,
I’m facing an issue with my app version (I changed the subscription from weekly to monthly) being rejected due to a "Loading Subscription" message at first, followed by an "Unable to Complete Request" error in the SubscriptionStoreView. I noticed in the screenshot that the review team was using a VPN, and since my app isn’t available in the EU, I believe this may be related to the problem.
I cannot replicate the issue on my end in the UK. Does anyone have advice on how to address this?
Thanks in advance!
Hi all,
Is this allowed:
environment(\.container, MyContainer() as Any)
When injecting a model container into the environment, we use a concrete type. Is there a universal injection type that can inject the container as a set of any types rather than an array, similar to the NavigationPath() type-eraser?
Re SwiftData: is my understanding correct that, generally speaking and by default, the insert method inserts objects into the context and the context automatically persists them (i.e. inserts them into the container), while the delete method does not - it only deletes them from the context, and the context does not delete them from the container unless save is called? It is not clear from the documentation nor from the definitions:
public func delete<T>(model: T.Type, where predicate: Predicate<T>? = nil, includeSubclasses: Bool = true) throws where T : PersistentModel
// How can I test it?
I’m keen to learn where I can confirm this in Apple’s documentation, official articles, or code definitions, apart from experimenting or consulting third-party materials. Where does it explicitly state that SwiftData includes an automatic saving feature but does not offer automatic deletion?
"Meet SwiftData" (WWDC23): around the 14:30 mark, Apple mentions that SwiftData automatically saves changes "at opportune moments." But nothing is said about deleting.
Are we supposed to be taking hints:
"Build an app with SwiftData" (WWDC23): this session demonstrates using context.save() to persist changes after deleting an object, which implies that deletion isn't automatic.
How can you truly learn if you do not have official materials? This is an exact science, not archaeology or history.
I feel like a speleologist.
The app is approved and on the App Store, but the subscription is in review and its localisations were rejected, with no way to edit them.
Has anyone here had this flow resolved, and how?