am very new to CoreMl and I want to retrieve a model from Coreml model deployment which was released this year at WWDC.
I made an app that just classifies special and rare things and I uploaded that model.archive to the CoreMl Model deployment dashboard.
I successfully deployed the model and its showing as active.
now the problem is I am unable to retrieve that model, I have tried a lot, I even saw all the WWDC sessions on that one and even copied that code from that session but all in vain.
Here is my whole model loading and retrieving code
my classification code which takes an image and does all the loading of CoreML and from CoreML model deployment.
func updateClassifications(for image: UIImage) {
				classificationLabel.text = "Classifying..."
				
				var models = try? VNCoreMLModel(for: SqueezeNet().model)
				
				if let modelsss = models {
						extensionofhandler(ciimage: image, vnmodel: modelsss)
						return
				}
				
				_ = MLModelCollection.beginAccessing(identifier: "TestingResnetModel") { [self] result in
						var modelUrl: URL?
						switch result {
						case .success(let collection):
								modelUrl = collection.entries["class"]?.modelURL
						case .failure(let error):
								fatalError("sorry \(error)")
						}
						let result = loadfishcallisier(from: modelUrl)
						
						switch result {
						case .success(let modelesss):
								models = try? VNCoreMLModel(for: modelesss)
								extensionofhandler(ciimage: image, vnmodel: models!)
						case .failure(let error):
								fatalError("plz \(error)")
						}
				}
			
		}
func loadfishcallisier(from modelUrl: URL?) -> Result<MLModel,Error> {
				if let modelUrl = modelUrl {
						return Result { try MLModel(contentsOf: modelUrl)}
				} else {
						return Result { try MLModel(contentsOf: modelUrl!, configuration: .init())}
				}
		}
		
		func extensionofhandler(ciimage: UIImage,vnmodel: VNCoreMLModel) {
				let orientation = CGImagePropertyOrientation(ciimage.imageOrientation)
				guard let ciImage = CIImage(image: ciimage) else { fatalError("Unable to create \(CIImage.self) from \(ciimage).")
				}
				DispatchQueue.global(qos: .userInitiated).async { [self] in
						
						let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation)
						
						do {
								
						try handler.perform([coremlmodel(using: vnmodel)])
						
						} catch {
								fatalError("Check the error")
						}
				}
		}
my Vision request code
func coremlmodel(using: VNCoreMLModel) -> VNCoreMLRequest {
						let request = VNCoreMLRequest(model: using, completionHandler: { [weak self] request, error in
								self?.processClassifications(for: request, error: error)
						})
						request.imageCropAndScaleOption = .centerCrop
						return request
						
		}
my classification code
func processClassifications(for request: VNRequest, error: Error?) {
				DispatchQueue.main.async {
						guard let results = request.results else {
								self.classificationLabel.text = "Unable to classify image.\n\(error!.localizedDescription)"
								return
						}
						// The `results` will always be `VNClassificationObservation`s, as specified by the Core ML model in this project.
						let classifications = results as! [VNClassificationObservation]
				
						if classifications.isEmpty {
								self.classificationLabel.text = "Nothing recognized."
						} else {
								// Display top classifications ranked by confidence in the UI.
								let topClassifications = classifications.prefix(2)
								let descriptions = topClassifications.map { classification in
										// Formats the classification for display; e.g. "(0.37) cliff, drop, drop-off".
									 return String(format: "	(%.2f) %@", classification.confidence, classification.identifier)
								}
								self.classificationLabel.text = "Classification:\n" + descriptions.joined(separator: "\n")
						}
				}
		}
I am pretty sure something is wrong with my model loading code.
Xcode throws no error but it's not recognising anything.
if I have done anything wrong in my code I humbly ask you to show it to me and solve it
and is there any tutorial for retrieving the model from coreml model deployment.
Post
Replies
Boosts
Views
Activity
I am making am app in which I will use Google ADMob Native Ads to earn some money and use it for maintenance of that app
I just want to know does doing like this will create lesser chances of app getting featured.
in WWDC 2020 apple introduced Core ML Model Deployment
I have created a CoreML model which is around 20GB I just want to know
(1) is it free to deploy this model on Core ML Model Deployment platform.
(2) if its not free then what are the charges.
I am making an app with the best interface and following all Human Interface guidelines, solving a problem and using the latest apple technologies like widgets, CoreML, MapKit etc.
i just wanted to know that using cocopods will decrease the chance app getting featured.
Can anyone please help me with this question
here is the link to the detailed question on stackoverflow
https://stackoverflow.com/q/63424040/13748710?sem=2
please help thanks in advance.
How can you recognise a text and display 3D contents according to it
For example: you have a text “GO” when u point the camera at the text you should recognise the text and a 3D arrow-mark should be placed in the real world.
how can it be done using CoreMl, Vision and ARKit.
if it is not possible in these frameworks, what are the other Frameworks for it.