Does Apple have Xcode Cloud sample code, or any sample code that contains tests?
In the Health app, it appears that cells and not sections are styled in this way:
The closest I've gotten to this appearance is setting the section to use the inset grouped style:
let listConfiguration = UICollectionLayoutListConfiguration(appearance: .insetGrouped)
let listLayout = UICollectionViewCompositionalLayout.list(using: listConfiguration)
collectionView.collectionViewLayout = listLayout
but I'm not sure of a good approach to giving each individual cell this appearance, as in the screenshot above. I'm assuming the list-style collection view shown is two sections with three total cells, rather than three inset grouped sections.
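For reference, here's a minimal sketch of the other interpretation, where every item gets its own inset grouped section so that each cell renders as its own rounded card. This is just my assumption about one way to approximate the look; the items, Item, and dataSource names are hypothetical.

var snapshot = NSDiffableDataSourceSnapshot<Int, Item.ID>()
for (index, item) in items.enumerated() {
    // One section per item, so each row draws its own inset grouped background.
    snapshot.appendSections([index])
    snapshot.appendItems([item.id], toSection: index)
}
dataSource.apply(snapshot)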
Apple's Displaying Cell Info tutorial shows how to use a default cell content configuration to set two lines of text:
func cellRegistrationHandler(cell: UICollectionViewListCell, indexPath: IndexPath, id: String) {
let reminder = Reminder.sampleData[indexPath.item]
var contentConfiguration = cell.defaultContentConfiguration()
contentConfiguration.text = reminder.title
contentConfiguration.secondaryText = reminder.dueDate.dayAndTimeText
contentConfiguration.secondaryTextProperties.font = UIFont.preferredFont(forTextStyle: .caption1)
cell.contentConfiguration = contentConfiguration
}
How would I get started extending this to include a third line of text? I would like to keep the built-in text, secondaryText, and accessory control (the tutorial has a done button on each cell), while also adding custom UI elements. I'm assuming this is possible since Apple uses the term "compositional collection views," but I'm not sure how to accomplish this. Is it possible, or would I instead need to register a custom UICollectionViewCell subclass?
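To frame the question a bit more, here's a hedged sketch of the one direction I've found so far: keeping the default content configuration and attaching custom UI as a custom-view accessory. The statusView here is a hypothetical element, and this still doesn't give a true third line of text below the secondary text.

func cellRegistrationHandler(cell: UICollectionViewListCell, indexPath: IndexPath, id: String) {
    let reminder = Reminder.sampleData[indexPath.item]
    var contentConfiguration = cell.defaultContentConfiguration()
    contentConfiguration.text = reminder.title
    contentConfiguration.secondaryText = reminder.dueDate.dayAndTimeText
    cell.contentConfiguration = contentConfiguration

    // Hypothetical extra UI element attached as a trailing custom-view accessory;
    // the tutorial's done-button accessory could be appended to this array as well.
    let statusView = UIImageView(image: UIImage(systemName: "exclamationmark.circle"))
    cell.accessories = [
        .customView(configuration: .init(customView: statusView, placement: .trailing()))
    ]
}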
Say that in this example here, this struct
struct Reminder: Identifiable {
var id: String = UUID().uuidString
var title: String
var dueDate: Date
var notes: String? = nil
var isComplete: Bool = false
}
is instead decoded from JSON array values (rather than constructed like in the linked example). If each JSON value were to be missing an "id", how would id then be initialized? When trying this myself I got an error keyNotFound(CodingKeys(stringValue: "id", intValue: nil), Swift.DecodingError.Context(codingPath: [_JSONKey(stringValue: "Index 0", intValue: 0)], debugDescription: "No value associated with key CodingKeys(stringValue: \"id\", intValue: nil) (\"id\").", underlyingError: nil)).
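For context, the workaround I've been experimenting with is a hand-written Decodable initializer that generates an id when the key is absent. This is only a sketch of my assumption about how it could work, not code from the linked example.

extension Reminder: Decodable {
    enum CodingKeys: String, CodingKey {
        case id, title, dueDate, notes, isComplete
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        // Fall back to a freshly generated UUID when the JSON has no "id" key.
        id = try container.decodeIfPresent(String.self, forKey: .id) ?? UUID().uuidString
        title = try container.decode(String.self, forKey: .title)
        dueDate = try container.decode(Date.self, forKey: .dueDate)
        notes = try container.decodeIfPresent(String.self, forKey: .notes)
        isComplete = try container.decodeIfPresent(Bool.self, forKey: .isComplete) ?? false
    }
}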
Say that in this example here, the struct
struct Reminder: Identifiable {
var id: String = UUID().uuidString
var title: String
var dueDate: Date
var notes: String? = nil
var isComplete: Bool = false
var city: String
}
is modified slightly to include a city string. In the collection view that displays the reminders, I'd like each section to be each unique city, so if two reminder cells have the same city string then they would be in the same section of the collection view.
The progress I've made to this end is sorting the reminders array so that reminder cells are grouped together by city:
func updateSnapshot(reloading ids: [Reminder.ID] = []) {
var snapshot = Snapshot()
snapshot.appendSections([0])
let reminders = reminders.sorted { $0.city < $1.city }
snapshot.appendItems(reminders.map { $0.id })
if !ids.isEmpty {
snapshot.reloadItems(ids)
}
dataSource.apply(snapshot)
}
Where I'm stuck is in coming up with a way to make the snapshot represent sections by unique cities, and not just one flat section of all reminders.
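To make the goal concrete, here's a sketch of what I imagine the grouped snapshot could look like, assuming the data source's section identifier type were changed from Int to String (that change is an assumption on my part, not something I've gotten working yet):

func updateSnapshot(reloading ids: [Reminder.ID] = []) {
    var snapshot = NSDiffableDataSourceSnapshot<String, Reminder.ID>()
    // Group reminders by city, one section per unique city.
    let remindersByCity = Dictionary(grouping: reminders, by: { $0.city })
    for city in remindersByCity.keys.sorted() {
        snapshot.appendSections([city])
        snapshot.appendItems(remindersByCity[city, default: []].map { $0.id }, toSection: city)
    }
    if !ids.isEmpty {
        snapshot.reloadItems(ids)
    }
    dataSource.apply(snapshot)
}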
Is Combine replacing NotificationCenter and Key-Value Observing?
This Mac Catalyst tutorial (https://developer.apple.com/tutorials/mac-catalyst/adding-items-to-the-sidebar) shows the following code snippet:
recipeCollectionsSubscriber = dataStore.$collections
.receive(on: RunLoop.main)
.sink { [weak self] _ in
guard let self = self else { return }
let snapshot = self.collectionsSnapshot()
self.dataSource.apply(snapshot, to: .collections, animatingDifferences: true)
}
#import <Cocoa/Cocoa.h>
int main(int argc, const char * argv[]) {
@autoreleasepool {
// Setup code that might create autoreleased objects goes here.
}
return NSApplicationMain(argc, argv);
}
Reading a solution given in a book for summing the elements of an input array of doubles, an example is given using Accelerate:
func challenge52c(numbers: [Double]) -> Double {
var result: Double = 0.0
vDSP_sveD(numbers, 1, &result, vDSP_Length(numbers.count))
return result
}
I can understand why Accelerate APIs don't adhere to the Swift API design guidelines, but why is it that they don't seem to follow Cocoa naming guidelines either? Are there other conventions or precedents that I'm missing?
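For comparison, the newer Swift overlay in Accelerate does wrap this operation in something closer to the Swift guidelines; a minimal sketch (the challenge52d name is just mine):

import Accelerate

func challenge52d(numbers: [Double]) -> Double {
    // vDSP.sum is the Swift-friendly wrapper around the C-style vDSP_sveD call.
    return vDSP.sum(numbers)
}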
In a SwiftUI scroll view with the page style, is it possible to change the page indicator color?
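For context, the one workaround I'm aware of goes through UIKit rather than SwiftUI, assuming the paging view is a TabView with the page style; a minimal sketch (the colors are arbitrary), though I'd prefer a SwiftUI-native way if one exists:

import SwiftUI

struct PagingView: View {
    var body: some View {
        TabView {
            Color.blue
            Color.green
        }
        .tabViewStyle(PageTabViewStyle())
        .onAppear {
            // The page indicator appears to be backed by UIPageControl, so the
            // appearance proxy can change the dot colors.
            UIPageControl.appearance().currentPageIndicatorTintColor = .systemOrange
            UIPageControl.appearance().pageIndicatorTintColor = UIColor.systemOrange.withAlphaComponent(0.3)
        }
    }
}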
My activity classifier is used in tennis sessions, where there are necessarily multiple people on the court. There is also a decent chance other courts' players will be in the shot, depending on the angle and lens.
For my training data, would it be best to crop out adjacent courts?
For a Create ML activity classifier, I’m classifying “playing” tennis (the points or rallies) and a second class “not playing” to be the negative class. I’m not sure what to specify for the action duration parameter given how variable a tennis point or rally can be, but I went with 10 seconds since it seems like the average duration for both the “playing” and “not playing” labels.
When choosing this parameter, however, I'm wondering whether it affects performance, both the speed of video processing and the accuracy. Would the Vision framework return more results with smaller action durations?
What might be a good approach to estimating the progress of a VNVideoProcessor operation? I'd like to show a progress bar that's genuinely useful, like one based on the progress Apple vends for the photo picker or for exports. This would make a world of difference compared to a UIActivityIndicatorView, but I'm not sure how to approach handrolling this (or whether that would even be a good idea).
I filed an API enhancement request for this, FB9888210.
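To make the handrolling idea concrete, here's a rough sketch of what I had in mind, assuming a pipeline where each processed sample buffer (and its presentation timestamp) is visible; VNVideoProcessor itself doesn't expose this, and the 1000-unit scale is arbitrary.

import AVFoundation
import Foundation

final class VideoProgressEstimator {
    let progress: Progress
    private let totalSeconds: Double

    init(asset: AVAsset) {
        totalSeconds = CMTimeGetSeconds(asset.duration)
        progress = Progress(totalUnitCount: 1000)
    }

    // Call with each sample buffer as it's processed to advance the bar.
    func update(with sampleBuffer: CMSampleBuffer) {
        guard totalSeconds > 0 else { return }
        let seconds = CMTimeGetSeconds(sampleBuffer.presentationTimeStamp)
        let fraction = min(max(seconds / totalSeconds, 0), 1)
        progress.completedUnitCount = Int64(fraction * 1000)
    }
}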
How should I think about video quality (if it's important) when gathering training videos? Does higher video quality of training data make for better predictions, or should it more closely match the common use case (1080p I suppose, thinking about iPhones broadly)?
Below, the sampleBufferProcessor closure is where the Vision body pose detection occurs.
/// Transfers the sample data from the AVAssetReaderOutput to the AVAssetWriterInput,
/// processing via a CMSampleBufferProcessor.
///
/// - Parameters:
/// - readerOutput: The source sample data.
/// - writerInput: The destination for the sample data.
/// - queue: The DispatchQueue.
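/// - sampleBufferProcessor: The closure that processes each sample buffer, if any.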
/// - completionHandler: The completion handler to run when the transfer finishes.
/// - Tag: transferSamplesAsynchronously
private func transferSamplesAsynchronously(from readerOutput: AVAssetReaderOutput,
to writerInput: AVAssetWriterInput,
onQueue queue: DispatchQueue,
sampleBufferProcessor: SampleBufferProcessor?,
completionHandler: @escaping () -> Void) {
/*
The writerInput continuously invokes this closure until finished or
cancelled. It throws an NSInternalInconsistencyException if called more
than once for the same writer.
*/
writerInput.requestMediaDataWhenReady(on: queue) {
var isDone = false
/*
While the writerInput accepts more data, process the sampleBuffer
and then transfer the processed sample to the writerInput.
*/
while writerInput.isReadyForMoreMediaData {
if self.isCancelled {
isDone = true
break
}
// Get the next sample from the asset reader output.
guard let sampleBuffer = readerOutput.copyNextSampleBuffer() else {
// The asset reader output has no more samples to vend.
isDone = true
break
}
// Process the sample, if requested.
do {
try sampleBufferProcessor?(sampleBuffer)
} catch {
/*
The `readingAndWritingDidFinish()` function picks up this
error.
*/
self.sampleTransferError = error
isDone = true
break
}
// Append the sample to the asset writer input.
guard writerInput.append(sampleBuffer) else {
/*
The writer could not append the sample buffer.
The `readingAndWritingDidFinish()` function handles any
error information from the asset writer.
*/
isDone = true
break
}
}
if isDone {
/*
Calling `markAsFinished()` on the asset writer input does the
following:
1. Unblocks any other inputs needing more samples.
2. Cancels further invocations of this "request media data"
callback block.
*/
writerInput.markAsFinished()
/*
Tell the caller the reader output and writer input finished
transferring samples.
*/
completionHandler()
}
}
}
The processor closure runs body pose detection on every sample buffer so that later in the VNDetectHumanBodyPoseRequest completion handler, VNHumanBodyPoseObservation results are fed into a custom Core ML action classifier.
private func videoProcessorForActivityClassification() -> SampleBufferProcessor {
let videoProcessor: SampleBufferProcessor = { sampleBuffer in
do {
let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer)
try requestHandler.perform([self.detectHumanBodyPoseRequest])
} catch {
print("Unable to perform the request: \(error.localizedDescription).")
}
}
return videoProcessor
}
How could I improve the performance of this pipeline? After testing with an hour-long 4K video at 60 FPS, it took several hours to process when running as a Mac Catalyst app on an M1 Max.
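One idea I've been considering, purely as a sketch of an assumption rather than a confirmed fix, is that the action classifier may not need all 60 frames per second, so the processor could skip frames; the throttledVideoProcessor name and the stride of 2 are mine.

private func throttledVideoProcessor(stride: Int = 2) -> SampleBufferProcessor {
    var frameIndex = 0
    let videoProcessor: SampleBufferProcessor = { sampleBuffer in
        defer { frameIndex += 1 }
        // Only run body pose detection on every `stride`-th sample buffer.
        guard frameIndex % stride == 0 else { return }
        do {
            let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer)
            try requestHandler.perform([self.detectHumanBodyPoseRequest])
        } catch {
            print("Unable to perform the request: \(error.localizedDescription).")
        }
    }
    return videoProcessor
}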