Posts

How do you extend a modern collection view cell's default content configuration?
Apple's Displaying Cell Info tutorial shows using a default cell content configuration to set two lines of text:

    func cellRegistrationHandler(cell: UICollectionViewListCell, indexPath: IndexPath, id: String) {
        let reminder = Reminder.sampleData[indexPath.item]
        var contentConfiguration = cell.defaultContentConfiguration()
        contentConfiguration.text = reminder.title
        contentConfiguration.secondaryText = reminder.dueDate.dayAndTimeText
        contentConfiguration.secondaryTextProperties.font = UIFont.preferredFont(forTextStyle: .caption1)
        cell.contentConfiguration = contentConfiguration
    }

How would I get started extending this to include a third line of text? I would like to keep the built-in text, secondaryText, and accessory control (the tutorial has a done button on each cell), while also adding custom UI elements. I'm assuming this is possible since Apple uses the term "compositional collection views," but I'm not sure how to accomplish this. Is it possible, or would I instead need to register a custom UICollectionViewCell subclass?
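One direction I've been sketching (my own assumption, not something the tutorial shows): keep the default two-line configuration and attach the extra UI as a custom cell accessory, since accessories coexist with the default content configuration. A true third line of text would presumably still need a custom UIContentConfiguration or cell subclass. The notesLabel below is hypothetical:

    import UIKit

    func cellRegistrationHandler(cell: UICollectionViewListCell, indexPath: IndexPath, id: String) {
        let reminder = Reminder.sampleData[indexPath.item]

        // Keep the built-in two-line content configuration.
        var contentConfiguration = cell.defaultContentConfiguration()
        contentConfiguration.text = reminder.title
        contentConfiguration.secondaryText = reminder.dueDate.dayAndTimeText
        cell.contentConfiguration = contentConfiguration

        // Add custom UI as an accessory so it can coexist with the defaults.
        let notesLabel = UILabel()
        notesLabel.text = reminder.notes
        notesLabel.font = UIFont.preferredFont(forTextStyle: .caption2)
        let notesAccessory = UICellAccessory.customView(
            configuration: .init(customView: notesLabel, placement: .trailing(displayed: .always))
        )
        cell.accessories = [notesAccessory]
    }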
0 replies · 0 boosts · 623 views · May ’22
How do you configure collection view list cells to look inset with rounded corners?
In the Health app, it appears that the cells, not the sections, are styled this way. The closest I know of to getting this appearance is setting the section to be inset grouped:

    let listConfiguration = UICollectionLayoutListConfiguration(appearance: .insetGrouped)
    let listLayout = UICollectionViewCompositionalLayout.list(using: listConfiguration)
    collectionView.collectionViewLayout = listLayout

but I'm not sure of a good approach to giving each cell this appearance, like in the screenshot above. I'm assuming the list-style collection view shown is two sections with three total cells, rather than three inset grouped sections.
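One possibility I'm considering, only as a sketch and assuming iOS 14+ background configurations (not anything from a tutorial): keep a non-inset list appearance for the layout and round each cell's background individually inside the cell registration handler:

    // Inside cellRegistrationHandler(cell:indexPath:id:):
    var backgroundConfiguration = UIBackgroundConfiguration.listGroupedCell()
    backgroundConfiguration.backgroundColor = .secondarySystemGroupedBackground
    backgroundConfiguration.cornerRadius = 10
    // Inset the background so each cell reads as its own rounded card.
    backgroundConfiguration.backgroundInsets = NSDirectionalEdgeInsets(top: 4, leading: 16, bottom: 4, trailing: 16)
    cell.backgroundConfiguration = backgroundConfiguration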
2 replies · 0 boosts · 2.4k views · May ’22
What would be a good way to partition instances by a particular matching property, such that each partition is a collection view section?
Say that in this example here, the struct

    struct Reminder: Identifiable {
        var id: String = UUID().uuidString
        var title: String
        var dueDate: Date
        var notes: String? = nil
        var isComplete: Bool = false
        var city: String
    }

is modified slightly to include a city string. In the collection view that displays the reminders, I'd like each section to be a unique city, so if two reminder cells have the same city string they would be in the same section of the collection view. The progress I've made toward this is sorting the reminders array so that reminder cells are grouped together by city:

    func updateSnapshot(reloading ids: [Reminder.ID] = []) {
        var snapshot = Snapshot()
        snapshot.appendSections([0])
        let reminders = reminders.sorted { $0.city < $1.city }
        snapshot.appendItems(reminders.map { $0.id })
        if !ids.isEmpty {
            snapshot.reloadItems(ids)
        }
        dataSource.apply(snapshot)
    }

Where I'm stuck is in coming up with a way to make the snapshot represent sections by unique cities, and not just one flat section of all reminders.
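For reference, a rough sketch of the direction I have in mind. It assumes the diffable data source and snapshot are redeclared with String (the city) as the section identifier type instead of Int, and it relies on the tutorial's existing reminders and dataSource properties:

    typealias Snapshot = NSDiffableDataSourceSnapshot<String, Reminder.ID>

    func updateSnapshot(reloading ids: [Reminder.ID] = []) {
        var snapshot = Snapshot()
        // Group reminders by city; each unique city becomes a section.
        let remindersByCity = Dictionary(grouping: reminders, by: { $0.city })
        for city in remindersByCity.keys.sorted() {
            snapshot.appendSections([city])
            snapshot.appendItems(remindersByCity[city]!.map { $0.id }, toSection: city)
        }
        if !ids.isEmpty {
            snapshot.reloadItems(ids)
        }
        dataSource.apply(snapshot)
    }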
1 reply · 0 boosts · 581 views · May ’22
When decoding a Codable struct from JSON, how do you initialize a property not present in the JSON?
Say that in this example here, this struct

    struct Reminder: Identifiable {
        var id: String = UUID().uuidString
        var title: String
        var dueDate: Date
        var notes: String? = nil
        var isComplete: Bool = false
    }

is instead decoded from JSON array values (rather than constructed like in the linked example). If each JSON value were missing an "id", how would id then be initialized? When trying this myself I got the error:

    keyNotFound(CodingKeys(stringValue: "id", intValue: nil), Swift.DecodingError.Context(codingPath: [_JSONKey(stringValue: "Index 0", intValue: 0)], debugDescription: "No value associated with key CodingKeys(stringValue: \"id\", intValue: nil) (\"id\").", underlyingError: nil))
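A sketch of the approach I'm leaning toward, assuming a hand-written init(from:) is acceptable (the property's default value isn't consulted by the synthesized decoder, which is why the key comes up missing):

    import Foundation

    struct Reminder: Identifiable, Codable {
        var id: String = UUID().uuidString
        var title: String
        var dueDate: Date
        var notes: String? = nil
        var isComplete: Bool = false

        enum CodingKeys: String, CodingKey {
            case id, title, dueDate, notes, isComplete
        }

        init(from decoder: Decoder) throws {
            let container = try decoder.container(keyedBy: CodingKeys.self)
            // Fall back to a fresh UUID when the JSON has no "id" key.
            id = try container.decodeIfPresent(String.self, forKey: .id) ?? UUID().uuidString
            title = try container.decode(String.self, forKey: .title)
            dueDate = try container.decode(Date.self, forKey: .dueDate)
            notes = try container.decodeIfPresent(String.self, forKey: .notes)
            isComplete = try container.decodeIfPresent(Bool.self, forKey: .isComplete) ?? false
        }
    }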
2 replies · 0 boosts · 2.3k views · May ’22
How do you determine which thread's run loop receives events, in the context of Combine publishers?
This Mac Catalyst tutorial (https://developer.apple.com/tutorials/mac-catalyst/adding-items-to-the-sidebar) shows the following code snippet:

    recipeCollectionsSubscriber = dataStore.$collections
        .receive(on: RunLoop.main)
        .sink { [weak self] _ in
            guard let self = self else { return }
            let snapshot = self.collectionsSnapshot()
            self.dataSource.apply(snapshot, to: .collections, animatingDifferences: true)
        }
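My rough understanding so far, which is an assumption on my part rather than something the tutorial states, is that receive(on:) only controls where downstream values are delivered, along these lines:

    import Combine
    import Foundation

    let subject = PassthroughSubject<Int, Never>()

    let cancellable = subject
        .map { $0 * 2 }            // runs on whichever thread sent the value
        .receive(on: RunLoop.main) // everything downstream is delivered on the main run loop
        .sink { value in
            // Expecting Thread.isMainThread to be true here.
            print(value, Thread.isMainThread)
        }

    DispatchQueue.global().async {
        subject.send(1)            // sent from a background thread
    }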
0 replies · 0 boosts · 581 views · May ’22
Why does Accelerate appear so out of place in terms of naming style?
Reading a solution given in a book for summing the elements of an input array of doubles, an example is given with Accelerate as:

    func challenge52c(numbers: [Double]) -> Double {
        var result: Double = 0.0
        vDSP_sveD(numbers, 1, &result, vDSP_Length(numbers.count))
        return result
    }

I can understand why Accelerate APIs don't adhere to the Swift API design guidelines, but why is it that they don't seem to follow the Cocoa naming guidelines either? Are there other conventions or precedents that I'm missing?
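For comparison, the newer Swift overlay for vDSP does expose more Swift-style names for the same routine, for example:

    import Accelerate

    // Swift overlay equivalent of vDSP_sveD (iOS 13+/macOS 10.15+).
    func challenge52cSwift(numbers: [Double]) -> Double {
        return vDSP.sum(numbers)
    }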
2 replies · 0 boosts · 842 views · Apr ’22
How does the action duration parameter affect performance?
For a Create ML activity classifier, I’m classifying “playing” tennis (the points or rallies), with a second class, “not playing”, as the negative class. I’m not sure what to specify for the action duration parameter given how variable a tennis point or rally can be, but I went with 10 seconds since that seems like the average duration for both the “playing” and “not playing” labels. When choosing this parameter, however, I’m wondering whether it affects performance, both the speed of video processing and accuracy. Would the Vision framework return more results with smaller action durations?
0 replies · 0 boosts · 644 views · Feb ’22
Progress estimate for a `VNVideoProcessor` operation
What might be a good approach to estimating the progress of a VNVideoProcessor operation? I'd like to show a progress bar that's useful, like one based on the progress Apple vends for the photo picker or exports. This would make a world of difference compared to a UIActivityIndicatorView, but I'm not sure how to approach hand-rolling this (or whether that would even be a good idea). I filed an API enhancement request for this, FB9888210.
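The best hand-rolled idea I have so far is only a sketch and rests on assumptions: that requests are processed at a chosen time-interval cadence, and that counting per-frame request callbacks against an estimate derived from the asset duration is close enough for a progress bar. The 0.5-second cadence and the VNDetectHumanBodyPoseRequest are just placeholders:

    import AVFoundation
    import Vision

    func analyzeVideo(at url: URL) throws -> Progress {
        let asset = AVAsset(url: url)
        let durationSeconds = CMTimeGetSeconds(asset.duration)
        let cadenceSeconds: TimeInterval = 0.5   // hypothetical cadence
        let progress = Progress(totalUnitCount: max(1, Int64(durationSeconds / cadenceSeconds)))

        let request = VNDetectHumanBodyPoseRequest { _, _ in
            // Vision calls this once per processed frame; advance the estimate.
            progress.completedUnitCount += 1
        }

        let options = VNVideoProcessor.RequestProcessingOptions()
        options.cadence = VNVideoProcessor.TimeIntervalCadence(cadenceSeconds)

        let processor = VNVideoProcessor(url: url)
        try processor.addRequest(request, processingOptions: options)

        // analyze(_:) blocks until the time range is processed, so run it off the
        // main thread and observe `progress` from the UI.
        DispatchQueue.global(qos: .userInitiated).async {
            try? processor.analyze(CMTimeRange(start: .zero, duration: asset.duration))
            progress.completedUnitCount = progress.totalUnitCount
        }
        return progress
    }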
0 replies · 0 boosts · 532 views · Feb ’22
How do you conditionally show a button in a SwiftUI alert?
Say I have an alert:

    @State var showingAlert = false

    var body: some View {
        Text("Hello, world!")
            .alert("Here's an alert with multiple possible buttons.", isPresented: $showingAlert) {
                Button("OK") { }
                Button("Another button that may or may not show") { }
            }
    }

How could I display the second button based only on some condition? I tried factoring out one button into

    fileprivate func extractedFunc() -> Button<Text> {
        return Button("OK") { }
    }

and this would work for conditionally displaying the button content given a fixed number of buttons, but how could optionality of buttons be taken into account?
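A sketch of what I mean, assuming the alert's actions closure behaves like any other ViewBuilder (the showsSecondButton flag is hypothetical):

    import SwiftUI

    struct ConditionalAlertView: View {
        @State private var showingAlert = false
        @State private var showsSecondButton = false   // hypothetical condition

        var body: some View {
            Text("Hello, world!")
                .alert("Here's an alert with multiple possible buttons.", isPresented: $showingAlert) {
                    Button("OK") { }
                    // The actions closure is a ViewBuilder, so a plain `if`
                    // decides whether the second button is included.
                    if showsSecondButton {
                        Button("Another button that may or may not show") { }
                    }
                }
        }
    }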
1 reply · 0 boosts · 424 views · Jan ’22
Apple sample code "Detecting Human Actions in a Live Video Feed" - accessing the observations associated with an action prediction
I'm having trouble reasoning about and modifying the Detecting Human Actions in a Live Video Feed sample code since I'm new to Combine.

    // ---- [MLMultiArray?] -- [MLMultiArray?] ----
    // Make an activity prediction from the window.
    .map(predictActionWithWindow)
    // ---- ActionPrediction -- ActionPrediction ----
    // Send the action prediction to the delegate.
    .sink(receiveValue: sendPrediction)

These are the final two operators of the video-processing pipeline, where the action prediction occurs. In either the implementation of private func predictActionWithWindow(_ currentWindow: [MLMultiArray?]) -> ActionPrediction or of private func sendPrediction(_ actionPrediction: ActionPrediction), how might I access the results of a VNDetectHumanBodyPoseRequest that's retrieved and scoped in a function called earlier in the daisy chain? When I did this imperatively, I accessed results in the VNDetectHumanBodyPoseRequest completion handler, but I'm not sure how data flow would work with Combine's programming model. I want to associate predictions with the observation results they're based on so that I can store the time range of a given prediction label.
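The direction I've been toying with, written here with hypothetical stand-in types rather than the sample's own, is to widen the pipeline's element type so each multiarray travels together with the observation it came from:

    import Combine
    import CoreML
    import Vision

    // Hypothetical element type: pair each multiarray with its source observation.
    struct PoseSample {
        let observation: VNHumanBodyPoseObservation
        let multiArray: MLMultiArray
    }

    // Hypothetical stand-in for the sample's prediction type.
    struct LabeledPrediction {
        let label: String
        let observations: [VNHumanBodyPoseObservation]
    }

    func makePipeline(poseWindows: AnyPublisher<[PoseSample], Never>,
                      classify: @escaping ([MLMultiArray]) -> String) -> AnyCancellable {
        poseWindows
            // Predict from the window while keeping the observations it was built from.
            .map { window -> LabeledPrediction in
                let label = classify(window.map { $0.multiArray })
                return LabeledPrediction(label: label, observations: window.map { $0.observation })
            }
            // Downstream now sees both the prediction and its source observations,
            // which is what I'd need to record a time range per label.
            .sink { prediction in
                print(prediction.label, prediction.observations.count)
            }
    }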
0 replies · 0 boosts · 675 views · Jan ’22
How do you track when a VNVideoProcessor analysis is finished?
I'm adopting VNVideoProcessor and transitioning away from performing Vision requests on individual frames, since it does the same thing more concisely. However, I'm not sure how to detect when analysis of a video is finished. Previously, when reading frames with AVFoundation, I could check with:

    // Get the next sample from the asset reader output.
    guard let sampleBuffer = readerOutput.copyNextSampleBuffer() else {
        // The asset reader output has no more samples to vend.
        isDone = true
        break
    }

What would be an equivalent when using VNVideoProcessor?
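My working assumption, sketched below and not confirmed against any sample, is that analyze(_:) runs synchronously, so "finished" is simply the call returning or throwing; the completion closure here is my own addition, not Vision API:

    import AVFoundation
    import Vision

    func runAnalysis(on url: URL, requests: [VNRequest], completion: @escaping (Error?) -> Void) {
        DispatchQueue.global(qos: .userInitiated).async {
            do {
                let processor = VNVideoProcessor(url: url)
                for request in requests {
                    try processor.addRequest(request, processingOptions: VNVideoProcessor.RequestProcessingOptions())
                }
                let asset = AVAsset(url: url)
                // When this call returns, every frame in the time range has been processed.
                try processor.analyze(CMTimeRange(start: .zero, duration: asset.duration))
                completion(nil)
            } catch {
                completion(error)
            }
        }
    }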
1 reply · 0 boosts · 643 views · Jan ’22