Posts

Post not yet marked as solved
1 Reply
4.8k Views
On Xcode 13.4.1, when trying to run unit tests specifically, I get the error `Undefined symbol: _OBJC_CLASS_$_DDLog`. After clicking on it in the Issue navigator, a longer error message says:

```
Undefined symbols for architecture arm64:
  "_OBJC_CLASS_$_DDLog", referenced from:
      objc-class-ref in
```

How might I fix this? I tried running both Xcode and the Simulator under Rosetta, but I get the same error message, just for x86.
Posted by Curiosity.
Post not yet marked as solved
2 Replies
1.9k Views
In the Health app, it appears that cells, and not sections, are styled in this way. The closest I know of to getting this appearance is setting the section to be inset grouped:

```swift
let listConfiguration = UICollectionLayoutListConfiguration(appearance: .insetGrouped)
let listLayout = UICollectionViewCompositionalLayout.list(using: listConfiguration)
collectionView.collectionViewLayout = listLayout
```

but I'm not sure of a good approach to giving each cell this appearance, like in the screenshot above. I'm assuming the list-style collection view shown is two sections with three total cells, rather than three inset-grouped sections.
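One possible direction, sketched minimally: give each cell its own grouped-style background with rounded corners and small vertical insets, so every cell reads as its own card even inside a single section. The registration shape, corner radius, and inset values below are assumptions, not Health's actual metrics.

```swift
let cellRegistration = UICollectionView.CellRegistration<UICollectionViewListCell, String> { cell, indexPath, item in
    var content = cell.defaultContentConfiguration()
    content.text = item
    cell.contentConfiguration = content

    // Round each cell independently so it resembles its own inset group.
    var background = UIBackgroundConfiguration.listGroupedCell()
    background.cornerRadius = 10
    background.backgroundInsets = NSDirectionalEdgeInsets(top: 4, leading: 0, bottom: 4, trailing: 0)
    cell.backgroundConfiguration = background
}
```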
Posted by Curiosity.
Post not yet marked as solved
0 Replies
564 Views
Apple's Displaying Cell Info tutorial shows using a default cell content configuration to set two lines of text:

```swift
func cellRegistrationHandler(cell: UICollectionViewListCell, indexPath: IndexPath, id: String) {
    let reminder = Reminder.sampleData[indexPath.item]
    var contentConfiguration = cell.defaultContentConfiguration()
    contentConfiguration.text = reminder.title
    contentConfiguration.secondaryText = reminder.dueDate.dayAndTimeText
    contentConfiguration.secondaryTextProperties.font = UIFont.preferredFont(forTextStyle: .caption1)
    cell.contentConfiguration = contentConfiguration
}
```

How would I get started extending this to include a third line of text? I would like to keep the built-in text, secondaryText, and accessory control (the tutorial has a done button on each cell), while also adding custom UI elements. I'm assuming this is possible since Apple uses the term "compositional collection views," but I'm not sure how to accomplish this. Is it possible, or would I instead need to register a custom UICollectionViewCell subclass?
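Since UIListContentConfiguration only exposes text and secondaryText, a third line generally does mean custom UI. A minimal sketch of the subclass route, assuming a custom cell is acceptable; ThreeLineCell and its label names are hypothetical, not from the tutorial, and list accessories still work on the subclass:

```swift
import UIKit

final class ThreeLineCell: UICollectionViewListCell {
    let titleLabel = UILabel()
    let subtitleLabel = UILabel()
    let detailLabel = UILabel()

    override init(frame: CGRect) {
        super.init(frame: frame)
        // Stack the three labels vertically inside the content view's margins.
        let stack = UIStackView(arrangedSubviews: [titleLabel, subtitleLabel, detailLabel])
        stack.axis = .vertical
        stack.spacing = 2
        stack.translatesAutoresizingMaskIntoConstraints = false
        contentView.addSubview(stack)
        NSLayoutConstraint.activate([
            stack.topAnchor.constraint(equalTo: contentView.layoutMarginsGuide.topAnchor),
            stack.leadingAnchor.constraint(equalTo: contentView.layoutMarginsGuide.leadingAnchor),
            stack.trailingAnchor.constraint(equalTo: contentView.layoutMarginsGuide.trailingAnchor),
            stack.bottomAnchor.constraint(equalTo: contentView.layoutMarginsGuide.bottomAnchor)
        ])
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}
```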
Posted by Curiosity.
Post not yet marked as solved
2 Replies
2.0k Views
Say that in this example here, this struct

```swift
struct Reminder: Identifiable {
    var id: String = UUID().uuidString
    var title: String
    var dueDate: Date
    var notes: String? = nil
    var isComplete: Bool = false
}
```

is instead decoded from JSON array values (rather than constructed like in the linked example). If each JSON value were to be missing an "id", how would id then be initialized? When trying this myself, I got the error:

```
keyNotFound(CodingKeys(stringValue: "id", intValue: nil), Swift.DecodingError.Context(codingPath: [_JSONKey(stringValue: "Index 0", intValue: 0)], debugDescription: "No value associated with key CodingKeys(stringValue: \"id\", intValue: nil) (\"id\").", underlyingError: nil))
```
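The synthesized Decodable initializer requires every key and ignores property default values, which is what produces the keyNotFound error. A minimal sketch of one workaround, assuming falling back to a fresh UUID when "id" is absent is acceptable:

```swift
extension Reminder: Codable {
    private enum CodingKeys: String, CodingKey {
        case id, title, dueDate, notes, isComplete
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        // Generate an id when the JSON doesn't supply one.
        id = try container.decodeIfPresent(String.self, forKey: .id) ?? UUID().uuidString
        title = try container.decode(String.self, forKey: .title)
        dueDate = try container.decode(Date.self, forKey: .dueDate)
        notes = try container.decodeIfPresent(String.self, forKey: .notes)
        isComplete = try container.decodeIfPresent(Bool.self, forKey: .isComplete) ?? false
    }
}
```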
Posted by Curiosity.
Post not yet marked as solved
1 Reply
540 Views
Say that in this example here, the struct

```swift
struct Reminder: Identifiable {
    var id: String = UUID().uuidString
    var title: String
    var dueDate: Date
    var notes: String? = nil
    var isComplete: Bool = false
    var city: String
}
```

is modified slightly to include a city string. In the collection view that displays the reminders, I'd like each section to be a unique city, so if two reminder cells have the same city string then they would be in the same section of the collection view. The progress I've made to this end is sorting the reminders array so that reminder cells are grouped together by city:

```swift
func updateSnapshot(reloading ids: [Reminder.ID] = []) {
    var snapshot = Snapshot()
    snapshot.appendSections([0])
    let reminders = reminders.sorted { $0.city < $1.city }
    snapshot.appendItems(reminders.map { $0.id })
    if !ids.isEmpty {
        snapshot.reloadItems(ids)
    }
    dataSource.apply(snapshot)
}
```

Where I'm stuck is in coming up with a way to make the snapshot represent sections by unique cities, and not just one flat section of all reminders.
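A minimal sketch of one way to get there, assuming the section identifier type is changed from Int to String so each city name can serve as its own section (the data source and Snapshot typealias would need the same change):

```swift
func updateSnapshot(reloading ids: [Reminder.ID] = []) {
    var snapshot = NSDiffableDataSourceSnapshot<String, Reminder.ID>()
    // Bucket the reminders by city, then add one section per unique city.
    let remindersByCity = Dictionary(grouping: reminders, by: { $0.city })
    for city in remindersByCity.keys.sorted() {
        snapshot.appendSections([city])
        snapshot.appendItems(remindersByCity[city, default: []].map { $0.id }, toSection: city)
    }
    if !ids.isEmpty {
        snapshot.reloadItems(ids)
    }
    dataSource.apply(snapshot)
}
```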
Posted by Curiosity.
Post not yet marked as solved
0 Replies
540 Views
This Mac Catalyst tutorial (https://developer.apple.com/tutorials/mac-catalyst/adding-items-to-the-sidebar) shows the following code snippet:

```swift
recipeCollectionsSubscriber = dataStore.$collections
    .receive(on: RunLoop.main)
    .sink { [weak self] _ in
        guard let self = self else { return }
        let snapshot = self.collectionsSnapshot()
        self.dataSource.apply(snapshot, to: .collections, animatingDifferences: true)
    }
```
Posted by Curiosity.
Post not yet marked as solved
2 Replies
781 Views
Reading a solution given in a book to summing the elements of an input array of doubles, an example is given with Accelerate:

```swift
func challenge52c(numbers: [Double]) -> Double {
    var result: Double = 0.0
    vDSP_sveD(numbers, 1, &result, vDSP_Length(numbers.count))
    return result
}
```

I can understand why Accelerate APIs don't adhere to Swift API design guidelines, but why don't they seem to follow Cocoa guidelines either? Are there other conventions or precedents that I'm missing?
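For what it's worth, the vDSP_* names follow Accelerate's own C naming scheme ("sve" abbreviates "sum of vector elements," and the trailing "D" marks double precision) rather than Cocoa's. A minimal sketch of the same reduction using the newer Swift overlay (macOS 10.15/iOS 13 and later), which does follow the Swift API design guidelines:

```swift
import Accelerate

// Same sum as vDSP_sveD, via the Swift-friendly vDSP enum.
func challenge52c(numbers: [Double]) -> Double {
    vDSP.sum(numbers)
}
```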
Posted by Curiosity.
Post not yet marked as solved
0 Replies
605 Views
My activity classifier is used in tennis sessions, where there are necessarily multiple people on the court. There is also a decent chance other courts' players will be in the shot, depending on the angle and lens. For my training data, would it be best to crop out adjacent courts?
Posted by Curiosity.
Post not yet marked as solved
0 Replies
598 Views
For a Create ML activity classifier, I’m classifying “playing” tennis (the points or rallies) and a second class, “not playing,” as the negative class. I’m not sure what to specify for the action duration parameter given how variable a tennis point or rally can be, but I went with 10 seconds since it seems like the average duration for both the “playing” and “not playing” labels. When choosing this parameter, however, I’m wondering whether it affects performance, in both speed of video processing and accuracy. Would the Vision framework return more results with smaller action durations?
Posted by Curiosity.
Post not yet marked as solved
0 Replies
497 Views
What might be a good approach to estimating the progress of a VNVideoProcessor operation? I'd like to show a progress bar that's useful enough, like one based on the progress Apple vends for the photo picker or for exports. This would make a world of difference compared to a UIActivityIndicatorView, but I'm not sure how to approach hand-rolling this (or whether that would even be a good idea). I filed an API enhancement request for this, FB9888210.
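One hand-rolled sketch, under the assumption that per-frame observations arrive roughly in presentation order: observations produced from video processing carry a timeRange, so dividing an observation's start time by the asset's duration gives a rough completion fraction. The function name and the choice of a body-pose request are hypothetical, not an official pattern:

```swift
import AVFoundation
import Vision

func analyzeVideo(at url: URL, onProgress: @escaping (Double) -> Void) throws {
    let asset = AVAsset(url: url)
    let duration = asset.duration.seconds

    let request = VNDetectHumanBodyPoseRequest { request, _ in
        guard duration > 0,
              let observation = request.results?.first as? VNHumanBodyPoseObservation else { return }
        // timeRange is stamped on observations that come from video processing.
        let fraction = observation.timeRange.start.seconds / duration
        DispatchQueue.main.async { onProgress(min(max(fraction, 0), 1)) }
    }

    let processor = VNVideoProcessor(url: url)
    try processor.addRequest(request, processingOptions: VNVideoProcessor.RequestProcessingOptions())
    // analyze(_:) blocks the calling thread, so run this off the main queue.
    try processor.analyze(CMTimeRange(start: .zero, duration: asset.duration))
}
```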
Posted by Curiosity.