Hello everybody,
I am trying to navigate from one view to another using NavigationView and NavigationLink.
I can't figure out what I am doing wrong, but as written I get a warning saying:
"Result of NavigationLink<Label, Destination> is unused".
In fact, I can't navigate to the view I want to open.
Here is my code so far:
NavigationView {
    VStack {
        Button(action: {
            let observation = self.createObservation()
            self.records.addObservation(observation)
            self.isPresented.toggle()
            // This NavigationLink is created and immediately discarded inside the
            // action closure, which is what triggers the unused-result warning.
            NavigationLink(destination: ObservationDetails(observation: observation).environmentObject(self.records)) {
                EmptyView()
            }
        }) {
            Text("Classify")
        }
    }
}
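For comparison, a minimal sketch of the pattern that is usually suggested: keep the NavigationLink in the view hierarchy and drive it with an isActive binding from the button action. Observation, ObservationDetails, createObservation(), and the records object are the names from above; the Records type name is a hypothetical placeholder.

import SwiftUI

// A minimal sketch; Records is a hypothetical stand-in for the environment
// object that owns addObservation(_:), and createObservation() is the helper
// from the question.
struct ClassifyView: View {
    @EnvironmentObject var records: Records
    @State private var observation: Observation?
    @State private var showDetails = false

    var body: some View {
        NavigationView {
            VStack {
                // The link stays in the hierarchy; the button only flips the binding.
                NavigationLink(
                    destination: Group {
                        if let observation = observation {
                            ObservationDetails(observation: observation)
                                .environmentObject(records)
                        }
                    },
                    isActive: $showDetails
                ) {
                    EmptyView()
                }

                Button("Classify") {
                    let newObservation = createObservation()
                    records.addObservation(newObservation)
                    observation = newObservation
                    showDetails = true
                }
            }
        }
    }
}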
Thank you for your help!
Hello everyone!
I am trying to implement push notifications in my watchOS app. I have created the delegate class that handles registration for remote notifications and lets me obtain the device token. Then I take the token and send it to Firebase, like this:
func didRegisterForRemoteNotifications(withDeviceToken deviceToken: Data) {
    // FirebaseApp.configure() is normally called once at app launch rather than here.
    FirebaseApp.configure()
    let ref: DatabaseReference = Database.database().reference()
    // Convert the token bytes to a hex string.
    let tokenParts = deviceToken.map { data in String(format: "%02.2hhx", data) }
    let token = tokenParts.joined()
    if let userID = UserDefaults.standard.object(forKey: "id") as? String {
        ref.child("users/\(userID)/token").setValue(token)
    }
}
Then I use a Python script to communicate with APNs, using the httpx library for HTTP/2 support. This is what I have:
import asyncio
import httpx

payload = {
    "aps": {
        "alert": {
            "title": "Hello Push",
            "body": "This is a notification!"
        },
        "category": "myCategory"
    }
}

dev_server = "https://api.sandbox.push.apple.com:443"
device_token = "9fe2814b6586bbb683b1a3efabdbe1ddd7c6918f51a3b83e90fce038dc058550"

headers = {
    # provider_token should be the JWT built from the .p8 key (see my question below)
    "authorization": "bearer " + provider_token,
    "apns-topic": "com.example.yourapp",  # your app's bundle ID
    "apns-push-type": "alert",  # must be one of the defined push types, not a custom category
    "apns-expiration": "0",
    "apns-priority": "10",
}

async def test():
    async with httpx.AsyncClient(http2=True) as client:
        url = dev_server + "/3/device/{}".format(device_token)
        r = await client.post(url, headers=headers, json=payload)
        print(r.text)

asyncio.run(test())
I have also downloaded the .p8 auth key file, but I don't really understand from the Apple documentation what I have to do with it.
What is the provider token in the headers?
Am I doing the right thing with the token I receive from didRegisterForRemoteNotifications?
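In case it helps to frame the question: the provider token is a JSON Web Token (JWT) signed with the .p8 key using ES256. Its header carries the key ID of the .p8 key, and its payload carries the team ID as issuer plus the issue time. Below is a minimal sketch in Swift of how such a token is put together; it is normally generated server-side, and the keyID, teamID, and p8PEM values are hypothetical placeholders.

import Foundation
import CryptoKit

// A minimal sketch of building an APNs provider token (a JWT signed with the
// .p8 key). keyID, teamID, and p8PEM are hypothetical placeholders.
func makeProviderToken(keyID: String, teamID: String, p8PEM: String) throws -> String {
    func base64URL(_ data: Data) -> String {
        data.base64EncodedString()
            .replacingOccurrences(of: "+", with: "-")
            .replacingOccurrences(of: "/", with: "_")
            .replacingOccurrences(of: "=", with: "")
    }
    // Header: the signing algorithm and the key ID of the .p8 key.
    let header = try JSONSerialization.data(withJSONObject: ["alg": "ES256", "kid": keyID])
    // Payload: the team ID as issuer and the issue timestamp.
    let payload = try JSONSerialization.data(withJSONObject: ["iss": teamID, "iat": Int(Date().timeIntervalSince1970)])
    let signingInput = base64URL(header) + "." + base64URL(payload)
    // Sign with the P-256 private key contained in the .p8 file.
    let key = try P256.Signing.PrivateKey(pemRepresentation: p8PEM)
    let signature = try key.signature(for: Data(signingInput.utf8))
    return signingInput + "." + base64URL(signature.rawRepresentation)
}

The same structure can be produced in Python with a JWT library and pasted into the authorization header of the script above.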
Hello everybody,
I am trying to run inference on a Core ML model I created with Create ML. I am following the sample code provided on the Core ML documentation page, and every time I try to classify an image I get this error: "Could not create Espresso context".
Has this ever happened to anyone? How did you solve it?
Here is my code:
import Foundation
import Vision
import UIKit
import ImageIO

final class ButterflyClassification {

    var classificationResult: Result?

    lazy var classificationRequest: VNCoreMLRequest = {
        do {
            let model = try VNCoreMLModel(for: ButterfliesModel_1(configuration: MLModelConfiguration()).model)
            return VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
                self?.processClassification(for: request, error: error)
            })
        }
        catch {
            fatalError("Failed to load model.")
        }
    }()

    func processClassification(for request: VNRequest, error: Error?) {
        DispatchQueue.main.async {
            guard let results = request.results else {
                print("Unable to classify image.")
                return
            }
            let classifications = results as! [VNClassificationObservation]
            if classifications.isEmpty {
                print("No classification was provided.")
                return
            }
            else {
                let firstClassification = classifications[0]
                self.classificationResult = Result(speciesName: firstClassification.identifier, confidence: Double(firstClassification.confidence))
            }
        }
    }

    func classifyButterfly(image: UIImage) -> Result? {
        guard let ciImage = CIImage(image: image) else {
            fatalError("Unable to create ciImage")
        }
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: ciImage, options: [:])
            do {
                try handler.perform([self.classificationRequest])
            }
            catch {
                print("Failed to perform classification.\n\(error.localizedDescription)")
            }
        }
        // Note: the request runs asynchronously, so classificationResult is
        // usually still nil here; it is set later on the main queue.
        return classificationResult
    }
}
Thank you for your help!
Hello everybody,
Two days ago I submitted my first iOS app. I was super excited. Yesterday my app came back rejected with "metadata rejected". I am very inexperienced, and I kind of panicked and tried everything to solve it. I replied in the Resolution Center and then made the mistake of clicking "Submit for Review". And now I am waiting.
Sorry to bother you, but I would really like to hear from somebody with experience what is going to happen now:
- Since I clicked "Submit for Review", will my app go back to the beginning of the queue?
- What happens to my response in the Resolution Center? Will my reviewer still read it?
- In the worst-case scenario, how long will it take to get some feedback?
Thank you so much.
Best regards,
Hello everyone,
I am trying to draw a custom view inside a ForEach (list style), which is inside a ScrollView, which is inside a NavigationView, like this:
NavigationView {
    ScrollView {
        ForEach(arrayOfObjects) { object in
            CustomView()
        }
    }
}
The custom view brings up a sheet with a button that can delete elements from the collection used in the ForEach.
Unless I use this asyncAfter after dismissing the sheet, I always get an index-out-of-bounds error when I try to remove the last element of the array backing the ForEach:
DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
    workouts.removeAll(where: { $0.id == workoutToRemoveID })
}
I have been trying to solve this bug, but so far no luck. Could you give me a hand?
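One direction that might avoid the timed delay, sketched below under assumptions (a hypothetical Workout type, deletion requested from inside the sheet): defer the removal to the sheet's onDismiss, so the array is never mutated while the sheet is still showing one of its elements.

import SwiftUI

// A minimal sketch; Workout is a hypothetical Identifiable stand-in for the
// real element type. The removal runs only once the sheet is fully gone.
struct Workout: Identifiable {
    let id = UUID()
    let name: String
}

struct WorkoutList: View {
    @State private var workouts: [Workout] = []
    @State private var selectedWorkout: Workout?
    @State private var pendingDeletionID: UUID?

    var body: some View {
        NavigationView {
            ScrollView {
                ForEach(workouts) { workout in
                    Text(workout.name)
                        .onTapGesture { selectedWorkout = workout }
                }
            }
            .sheet(item: $selectedWorkout, onDismiss: {
                // Runs only after the sheet has been dismissed.
                if let id = pendingDeletionID {
                    workouts.removeAll { $0.id == id }
                    pendingDeletionID = nil
                }
            }) { workout in
                Button("Delete this workout") {
                    pendingDeletionID = workout.id
                    selectedWorkout = nil // dismisses the sheet
                }
            }
        }
    }
}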
Thank you for your help!
Hello everybody,
I am new to iOS development and I found an error that I cannot get past. I have read a lot online and on Stack Overflow, but I don't understand why this error keeps coming up.
I have a table view controller, and I want to pass text to another view controller pushed with:
navigationController?.pushViewController(viewController, animated: true)
I have this class, where I already have an outlet for the label:
import UIKit
class PetitionDetailsViewController: UIViewController {

    @IBOutlet weak var PetitionDetailsOutlet: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()
    }
}
In the other class I have this code, where I try to set the label's text in PetitionDetailsViewController after one of the rows in the table view is tapped:
override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
    let viewController = PetitionDetailsViewController()
    print(petitions[indexPath.row].body)
    viewController.PetitionDetailsOutlet.text = petitions[indexPath.row].body
    navigationController?.pushViewController(viewController, animated: true)
}
I don't understand why this error keeps coming up. I have the outlet, and initially the label is empty. Why is it nil?
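For what it's worth, a minimal sketch of the fix that is usually suggested: instantiating the class directly with PetitionDetailsViewController() never loads the storyboard scene, so the outlet is never connected; instantiating through the storyboard wires it up, and assigning the text in viewDidLoad avoids touching the outlet before the view exists. The "PetitionDetails" identifier is a hypothetical placeholder.

import UIKit

class PetitionDetailsViewController: UIViewController {

    @IBOutlet weak var PetitionDetailsOutlet: UILabel!

    // Store the text itself; the label does not exist until the view loads.
    var detailsText: String?

    override func viewDidLoad() {
        super.viewDidLoad()
        PetitionDetailsOutlet.text = detailsText
    }
}

// In the table view controller:
override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
    // Instantiate from the storyboard so the outlet gets connected.
    guard let viewController = storyboard?.instantiateViewController(
        withIdentifier: "PetitionDetails") as? PetitionDetailsViewController else { return }
    viewController.detailsText = petitions[indexPath.row].body
    navigationController?.pushViewController(viewController, animated: true)
}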
Hello everybody,
I am new to Machine Learning but I want to get started with developing CoreML models to try them out in a few apps of my own.
What is the best way to build a dataset from Apple Watch data to build an activity model?
Do I build an iPhone app that works with the Apple Watch in order to get the data I need, or is there a more direct way to do it through Xcode, maybe?
Thank you for your help.
Best regards,
Tomás
Hello everybody,
For the past week I have been struggling to run inference on a classifier I built using Google's AutoML Vision tool.
At first I thought everything would go smoothly because Google lets you export a Core ML version of the final model, so I assumed I would only need Apple's Core ML framework to make it work. The export provides a .mlmodel file and a dict.txt file with the classification labels. The current model has 100 labels.
This is my Swift code to run inference on the model.
private lazy var classificationRequest: VNCoreMLRequest = {
    do {
        let classificationModel = try VNCoreMLModel(for: NewGenusModel().model)
        let request = VNCoreMLRequest(model: classificationModel, completionHandler: { [weak self] request, error in
            self?.processClassifications(for: request, error: error)
        })
        request.imageCropAndScaleOption = .scaleFit
        return request
    }
    catch {
        fatalError("Error! Can't use Model.")
    }
}()

func classifyImage(receivedImage: UIImage) {
    // Note: UIImage.Orientation and CGImagePropertyOrientation do not share
    // raw values, so this direct conversion can yield the wrong orientation.
    let orientation = CGImagePropertyOrientation(rawValue: UInt32(receivedImage.imageOrientation.rawValue))
    if let image = CIImage(image: receivedImage) {
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: image, orientation: orientation!)
            do {
                try handler.perform([self.classificationRequest])
            }
            catch {
                fatalError("Error classifying image!")
            }
        }
    }
}
The problem started when I tried to pass a UIImage to run inference on the model. The input type of the original model was MultiArray (Float32 1 x 224 x 224 x 3). Using the coremltools library in Python, I was able to convert the input type to Image (Color 224 x 224).
This worked, and here is my code:
import coremltools
import coremltools.proto.FeatureTypes_pb2 as ft
spec = coremltools.utils.load_spec("model.mlmodel")
input = spec.description.input[0]
input.type.imageType.colorSpace = ft.ImageFeatureType.RGB
input.type.imageType.height = 224
input.type.imageType.width = 224
coremltools.utils.save_spec(spec, "newModel.mlmodel")
My problem now is with the output type. I want to be able to access the confidence of the classification as well as the resulting label. Again using coremltools, I was able to access the output description, and I got this:
name: "scores"
type {
	multiArrayType {
		dataType: FLOAT32
	}
}
I am trying to change it this way:
f = open("dict.txt", "r")
labels = f.read()
class_labels = labels.splitlines()
print(class_labels)

class_labels = class_labels[1:]
assert len(class_labels) == 57

for i, label in enumerate(class_labels):
    if isinstance(label, bytes):
        class_labels[i] = label.decode("utf8")

classifier_config = coremltools.ClassifierConfig(class_labels)

output = spec.description.output[0]
output.type = ft.DictionaryFeatureType  # this is the assignment that fails
Unfortunately this is not working, and I can't find information online that can help me, so I don't know what to do next.
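In case the spec surgery stays stubborn, here is a fallback sketch (a workaround, not the coremltools answer): keep the MLMultiArray "scores" output and map the highest score to a label on the Swift side, assuming classLabels holds the lines of dict.txt in the same order as the model's output.

import CoreML

// A minimal sketch: scan the scores array for the best index and look the
// label up in the dict.txt lines.
func topLabel(from scores: MLMultiArray, classLabels: [String]) -> (label: String, confidence: Float)? {
    var bestIndex = 0
    var bestScore = -Float.infinity
    for i in 0..<scores.count {
        let score = scores[i].floatValue
        if score > bestScore {
            bestScore = score
            bestIndex = i
        }
    }
    guard classLabels.indices.contains(bestIndex) else { return nil }
    return (classLabels[bestIndex], bestScore)
}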
Thank you for your help!
Hello everybody,
I have very little experience developing applications for the Apple Watch. I want to use the Apple Watch to capture accelerometer and gyroscope data to create a CoreML model.
Could you give me some pointers on what I would have to do to be able to gather the data I need from the Apple Watch?
Do I need to create a simple Watch app to gather this data first and save it to a file, for example?
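In case it helps, a minimal sketch of that approach, assuming device motion is available on the watch and using a hypothetical CSV file name; a real recorder would add start/stop UI and a workout session to keep sampling in the background.

import CoreMotion
import Foundation

// A minimal sketch: sample user acceleration and rotation rate at 50 Hz and
// append each sample as a CSV row in the app's documents directory.
final class MotionRecorder {

    private let motionManager = CMMotionManager()
    private let fileURL: URL = {
        let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        return documents.appendingPathComponent("motion-log.csv") // hypothetical name
    }()

    func start() {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 50.0 // 50 Hz
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let self = self, let motion = motion else { return }
            // One CSV row: timestamp, user acceleration, rotation rate.
            let a = motion.userAcceleration
            let r = motion.rotationRate
            self.append("\(motion.timestamp),\(a.x),\(a.y),\(a.z),\(r.x),\(r.y),\(r.z)\n")
        }
    }

    func stop() {
        motionManager.stopDeviceMotionUpdates()
    }

    private func append(_ line: String) {
        guard let data = line.data(using: .utf8) else { return }
        if let handle = try? FileHandle(forWritingTo: fileURL) {
            handle.seekToEndOfFile()
            handle.write(data)
            handle.closeFile()
        } else {
            try? data.write(to: fileURL)
        }
    }
}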
Thank you for your help.
Best regards,
Tomás
Hello everybody,
I am relatively new to ARKit and SceneKit and I have been experimenting with it.
I have been exploring plane detection, and I want to keep only one plane in the view; if other planes are found, I want the old ones to be removed.
This is the solution I found: I keep a dictionary of all found anchors, and before adding a new child node I remove the previous anchors from the session.
What do you think of this solution? Do you think I should do it in any other way? Thank you!
private var planes = [UUID: Plane]()
private var anchors = [UUID: ARAnchor]()

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // We only care about planes.
    guard let planeAnchor = anchor as? ARPlaneAnchor else {
        return
    }
    print("Found plane: \(planeAnchor)")

    // Remove every previously found anchor so only the new plane remains.
    for anchor in anchors {
        sceneView?.session.remove(anchor: anchor.value)
    }

    let plane = Plane(anchor: planeAnchor)
    planes[anchor.identifier] = plane
    anchors[anchor.identifier] = anchor
    node.addChildNode(plane)
}
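For comparison, a minimal sketch of an alternative that avoids the bookkeeping: once the first plane arrives, re-run the session with plane detection disabled, so no further plane anchors are delivered. It assumes the same Plane type and sceneView as above.

import ARKit

// A minimal sketch: add the first plane, then turn plane detection off.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    node.addChildNode(Plane(anchor: planeAnchor))

    // Re-running with an updated configuration keeps the session alive
    // but stops the delivery of new plane anchors.
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = []
    sceneView?.session.run(configuration)
}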
Hello everybody,
For the last couple of days I have been struggling with preparing my app for limited access to the photo library. I want the user to be able to choose limited access.
In my app I need to be able to access the photo, as well as photo metadata like the creation date and location (if available).
Before showing the picker, I ask the user for authorization to access the photo library, like this:
let accessLevel: PHAccessLevel = .readWrite
let status = PHPhotoLibrary.authorizationStatus(for: accessLevel)

switch status {
case .authorized:
    self.imagePickerIsPresented.toggle()
case .limited:
    print("Limited access - show picker.")
    self.imagePickerIsPresented.toggle()
case .denied:
    self.showPhotosAccessDeniedAlert.toggle()
case .notDetermined:
    PHPhotoLibrary.requestAuthorization(for: accessLevel) { newStatus in
        switch newStatus {
        case .limited:
            print("Limited access.")
        case .authorized:
            print("Full access.")
        case .denied:
            break
        default:
            break
        }
    }
default:
    break
}
I have used breakpoints to debug this code, and this is working just fine.
When the user wants to import a photo into the application, I use a PHPicker. But when the picker is presented to select a photo, it still shows all the user's photos, regardless of the limited selection the user made earlier.
However, I want to show only the photos the user selected. How can I handle this correctly?
Furthermore, regardless of the selection, my picker can still access the photo as well as its metadata.
Here is the code for my PHPicker.
//
//  ImagePicker.swift
//  Lepidoptera
//
//  Created by Tomás Mamede on 26/09/2020.
//  Copyright © 2020 Tomás Santiago. All rights reserved.
//

import SwiftUI
import PhotosUI

@available(iOS 14, *)
struct ImagePicker: UIViewControllerRepresentable {

    @Binding var imageToImport: UIImage
    @Binding var isPresented: Bool
    @Binding var imageWasImported: Bool

    func makeUIViewController(context: UIViewControllerRepresentableContext<ImagePicker>) -> some UIViewController {
        var configuration = PHPickerConfiguration(photoLibrary: PHPhotoLibrary.shared())
        configuration.filter = .images
        configuration.selectionLimit = 1
        let imagePicker = PHPickerViewController(configuration: configuration)
        imagePicker.delegate = context.coordinator
        return imagePicker
    }

    func updateUIViewController(_ uiViewController: ImagePicker.UIViewControllerType, context: UIViewControllerRepresentableContext<ImagePicker>) {}

    func makeCoordinator() -> ImagePicker.Coordinator {
        return Coordinator(parent: self)
    }

    class Coordinator: NSObject, PHPickerViewControllerDelegate {

        var parent: ImagePicker

        init(parent: ImagePicker) {
            self.parent = parent
        }

        func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
            picker.dismiss(animated: true)

            if results.count != 1 {
                return
            }

            if let image = results.first {
                if image.itemProvider.canLoadObject(ofClass: UIImage.self) {
                    image.itemProvider.loadObject(ofClass: UIImage.self) { image, error in
                        guard error == nil else {
                            print(error!)
                            return
                        }
                        if let image = image {
                            // Fetch the PHAsset to read metadata such as the creation date.
                            let identifiers = results.compactMap(\.assetIdentifier)
                            let fetchResult = PHAsset.fetchAssets(withLocalIdentifiers: identifiers, options: nil)
                            let imageMetadata = fetchResult[0]
                            print(imageMetadata.creationDate!)
                            print("Image imported.")
                            self.parent.imageToImport = image as! UIImage
                            self.parent.imageWasImported.toggle()
                        }
                    }
                }
            }
            self.parent.isPresented.toggle()
        }
    }
}
My fundamental problem is: how can I respect the user's selection by only showing the photos they chose?
That way I can be sure I only access metadata for photos the user approved.
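One thing that may be relevant here: PHPickerViewController runs out of process and deliberately shows the whole library without needing any photo permission, so limited access does not change what the picker displays. If the goal is to work only with the approved photos, a sketch of an alternative (assuming iOS 14 and .limited authorization already granted) is to fetch assets directly with PHAsset, which is scoped to the user's selection under limited access, and to let the user change that selection with the limited-library picker:

import UIKit
import Photos

// A minimal sketch: under .limited authorization, PHAsset fetches only return
// the photos the user selected for this app.
func showSelectedPhotos(from viewController: UIViewController) {
    guard PHPhotoLibrary.authorizationStatus(for: .readWrite) == .limited else { return }

    // Only the user-approved assets are returned here.
    let selection = PHAsset.fetchAssets(with: .image, options: nil)
    selection.enumerateObjects { asset, _, _ in
        print(asset.localIdentifier, asset.creationDate ?? "no creation date")
    }

    // Lets the user modify which photos the app can see.
    PHPhotoLibrary.shared().presentLimitedLibraryPicker(from: viewController)
}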
Thank you for your help.
Hello everyone,
I am new to Core Data and I am trying to implement it in my app. I am concerned about memory leaks, and I want to make sure that I am doing things the proper and safe way.
At the moment I have two views. Putting it simply, I have one where I create the object and the other where I just display the attributes.
I have a var whose type is my Core Data entity, and I declare it like this:
@State var observation: Observation?
Then inside my view when I press the button I have:
let newObservation = Observation(entity: Observation.entity(), insertInto: managedObjectContext)
newObservation.id = UUID()
newObservation.speciesName = finalLabel
...
do {
    try managedObjectContext.save()
    observation = newObservation
} catch {
    activeAlert = .canNotSaveCoreData
    showAlert.toggle()
}
I then send the observation object to my other view like this:
Details(sheetIsOpen: $sheetIsPresented, observation: observation!)
What intrigues me is the way I am sending the observation object. Is this correct / the standard way?
What should I be doing differently?
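For comparison, a minimal sketch of a common pattern: NSManagedObject already conforms to ObservableObject, so the detail view can observe the entity directly, and a conditional avoids the force unwrap. Details and Observation are the types from above; speciesName is the attribute set when saving.

import SwiftUI

// A minimal sketch; the detail view observes the managed object directly.
struct Details: View {
    @Binding var sheetIsOpen: Bool
    @ObservedObject var observation: Observation

    var body: some View {
        Text(observation.speciesName ?? "Unknown species")
    }
}

// At the call site, unwrap instead of force unwrapping:
// if let observation = observation {
//     Details(sheetIsOpen: $sheetIsPresented, observation: observation)
// }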
Thank you for your help!
Hello everybody,
I have been working on an app for quite some time now. This week I updated to iOS 14.1 and updated Xcode to support this latest version.
I have also been making some changes to the user interface, the app flow, and how I manage my Core Data entities.
I don't know if these two things are related in any way, but when I run the app on the device for the first time I get unexpected behaviour and the message:
[Memory] Resetting zone allocator with 68631 allocations still alive
If I continue using the app and leave it on my iPhone, it never happens again. I can launch it as many times as I want and I don't get any crash or unexpected behaviour.
This is what intrigues me.
I tried looking for the issue in the Instruments app, and I see a memory leak with the responsibility message "Allocated prior to attach." I can't figure out what this means.
Has anyone experienced anything similar?
Cheers,
Hello everybody,
I am trying to export a folder to the Files App using SwiftUI. I am using the fileExporter view modifier.
This is my code so far.
.fileExporter(isPresented: self.$presentExportCSV, document: FolderExport(url: createFolderURL()), contentType: .folder) { (res) in
}
struct FolderExport: FileDocument {

    let url: String

    static var readableContentTypes: [UTType] { [.folder] }

    init(url: String) {
        self.url = url
    }

    init(configuration: ReadConfiguration) throws {
        url = ""
    }

    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        let file = try! FileWrapper(url: URL(fileURLWithPath: url), options: .immediate)
        return file
    }
}
func createFolderURL() -> String {
    var folder: String = ""
    // Note: this block runs on a background queue, so the function returns
    // before the work finishes and `folder` is still "" at that point.
    DispatchQueue.global(qos: .userInitiated).async {
        let fileManager = FileManager.default

        // Get the app's documents directory.
        let path = fileManager.urls(for: .documentDirectory, in: .userDomainMask)

        // Create the folder with the observation data.
        let folderName = "Observations Data"
        let dataDirectory = path[0].appendingPathComponent("\(folderName)", isDirectory: true)
        try? fileManager.createDirectory(at: dataDirectory.absoluteURL, withIntermediateDirectories: true, attributes: nil)

        // Create the folder with all the images.
        let imagesFolder = "Observation Images"
        let imagesDirectory = dataDirectory.appendingPathComponent("\(imagesFolder)", isDirectory: true)
        try? fileManager.createDirectory(at: imagesDirectory.absoluteURL, withIntermediateDirectories: true, attributes: nil)

        for observation in observationList {
            let image = UIImage(data: observation.image!)
            do {
                let imageName = observation.id!.description
                let imageURL = imagesDirectory.appendingPathComponent("\(imageName)" + ".jpeg")
                try image?.jpegData(compressionQuality: 1.0)?.write(to: imageURL)
            } catch {
                print(error.localizedDescription)
            }
        }

        let csvFileURL = dataDirectory.appendingPathComponent("Observation data.csv")
        let csvFile = createCSVFile()
        do {
            try csvFile?.write(to: csvFileURL, atomically: true, encoding: .utf16)
        } catch {
            print(error.localizedDescription)
        }

        folder = dataDirectory.description
    }
    return folder
}
When the fileWrapper function is called in the FolderExport struct the app crashes.
I get this error: Error Domain=NSCocoaErrorDomain Code=263 "The item “System” couldn’t be opened because it is too large." UserInfo={NSFilePath=/, NSUnderlyingError=0x2834e2850 {Error Domain=NSPOSIXErrorDomain Code=27 "File too large"}}
I can't figure out what I am doing wrong.
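One observation that may help frame this: because createFolderURL dispatches its work to a background queue, it returns the empty string before the folder exists, so FolderExport ends up wrapping URL(fileURLWithPath: ""), which may be why the error reports NSFilePath=/. A minimal sketch of a synchronous variant, assuming the same observationList and createCSVFile() helpers as above:

import Foundation

// A minimal sketch: build the folder synchronously and hand back its URL, so
// FolderExport never sees an empty path.
func createFolderURL() -> URL {
    let fileManager = FileManager.default
    let documents = fileManager.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let dataDirectory = documents.appendingPathComponent("Observations Data", isDirectory: true)
    try? fileManager.createDirectory(at: dataDirectory, withIntermediateDirectories: true, attributes: nil)
    // ... write the images and "Observation data.csv" here, as in the original ...
    return dataDirectory
}

FolderExport could then store this URL directly instead of a String path.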
I would really appreciate your help.
Hello everybody,
I am trying to parse a JSON file that has this structure:
{
"species":[
{
"name":"Aglais io",
"info": {
"family":"Nymphalidae",
"iNatLink":"https://www.inaturalist.org/observations?place_id=any&taxon_id=207977",
"WikipediaLink":"https://en.wikipedia.org/wiki/Aglais_io",
"otherName":"European Peacock Butterfly"
}
},
{
"name":"Aglais urticae",
"info": {
"family":"Nymphalidae",
"iNatLink":"https://www.inaturalist.org/observations?place_id=any&taxon_id=54468",
"WikipediaLink":"https://en.wikipedia.org/wiki/Small_tortoiseshell",
"otherName":"Small Tortoiseshell"
}
}
]
}
I am using a Codable struct to model the data, and this code to read the JSON file:
struct Species: Codable {
    let name: String
    struct info: Codable {
        let family: String
        let iNatLink: String
        let WikipediaLink: String
        let otherName: String
    }
}
func loadSpeciesInfoJSOn() {
    if let filePath = Bundle.main.url(forResource: "secondJSON", withExtension: "json") {
        do {
            let data = try Data(contentsOf: filePath)
            let decoder = JSONDecoder()
            let speciesList = try decoder.decode([Species].self, from: data)
            print(speciesList)
        } catch {
            print("Can not load JSON file.")
        }
    }
}
I cannot figure out what I am doing wrong.
Thank you for your help.
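For reference, a sketch of what may be going wrong: the root of this JSON is an object containing a "species" array, while the code decodes [Species].self directly, and the Species struct declares info as a nested type without storing it as a property. Types that mirror the structure would look something like this:

import Foundation

// A minimal sketch of Codable types matching the JSON above.
struct SpeciesList: Codable {
    let species: [Species]
}

struct Species: Codable {
    let name: String
    let info: Info

    struct Info: Codable {
        let family: String
        let iNatLink: String
        let WikipediaLink: String
        let otherName: String
    }
}

// Decoding the wrapper instead of [Species] directly:
// let speciesList = try JSONDecoder().decode(SpeciesList.self, from: data).species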