How do you create a picker where the user's selection corresponds to different values of an enumerated type?
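One common approach is to make the enum conform to CaseIterable and Identifiable and bind the Picker's selection to a property of that enum type. A minimal sketch, assuming a hypothetical Flavor enum:

import SwiftUI

// Hypothetical enumerated type; CaseIterable lets us list all cases in a ForEach.
enum Flavor: String, CaseIterable, Identifiable {
    case chocolate, vanilla, strawberry
    var id: Self { self }
}

struct FlavorPicker: View {
    // The selection is stored as the enum type itself.
    @State private var selectedFlavor: Flavor = .chocolate

    var body: some View {
        Picker("Flavor", selection: $selectedFlavor) {
            ForEach(Flavor.allCases) { flavor in
                Text(flavor.rawValue.capitalized).tag(flavor)
            }
        }
    }
}

Tagging each row with the corresponding case keeps the selection binding typed as Flavor rather than as a string or index.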
Say in a SwiftUI board game, the board view is composed of several cell views and each cell view encapsulates whether it was pressed. What would be a good approach to making the parent board view know about that @State?
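One approach, assuming each cell's pressed flag needs a single source of truth, is to lift that state up into the board view and hand each cell a @Binding instead of private @State. A rough sketch with made-up CellView and BoardView names:

import SwiftUI

// Hypothetical cell view: the pressed flag is owned by the parent
// and passed down as a binding rather than private @State.
struct CellView: View {
    @Binding var isPressed: Bool

    var body: some View {
        Rectangle()
            .fill(isPressed ? Color.accentColor : Color.secondary)
            .onTapGesture { isPressed.toggle() }
    }
}

struct BoardView: View {
    // The board owns the state for every cell, so it always knows which were pressed.
    @State private var pressedCells = Array(repeating: false, count: 9)

    var body: some View {
        VStack {
            ForEach(pressedCells.indices, id: \.self) { index in
                CellView(isPressed: $pressedCells[index])
            }
        }
    }
}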
How do you make a SwiftUI alert's primary and secondary buttons be navigation links to another view?
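An alert button can't contain a NavigationLink directly, since Alert buttons only take an action closure. One workaround is to have the button's action flip a state flag that a programmatic NavigationLink (isActive:) observes. A hedged sketch with illustrative names:

import SwiftUI

struct ContentView: View {
    @State private var showAlert = false
    @State private var goToDetail = false

    var body: some View {
        NavigationView {
            VStack {
                Button("Show alert") { showAlert = true }

                // Hidden link that activates when the flag flips.
                NavigationLink(destination: Text("Detail"),
                               isActive: $goToDetail) { EmptyView() }
            }
            .alert(isPresented: $showAlert) {
                Alert(title: Text("Continue?"),
                      primaryButton: .default(Text("Go")) { goToDetail = true },
                      secondaryButton: .cancel())
            }
        }
    }
}

The same pattern works for the secondary button by driving a second flag and NavigationLink.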
With diffable data sources you have to subclass UITableViewDiffableDataSource to support cell editing. What would be a good approach to performing an operation in the parent controller from that overridden method?
I'm adopting diffable data sources in an existing table view controller that supports cell deletion. Until now I simply implemented the tableView(_:commit:forRowAt:) delegate callback. To achieve the same behavior with diffable data sources, however, I have to subclass `UITableViewDiffableDataSource` and override that method there instead. I'm not sure what a good approach would be for performing an operation in the table view controller from within that override, like updating CloudKit records for example.
I don't think this difference in how cell editing is supported is documented, but in Apple's Using Collection View Compositional Layouts and Diffable Data Sources sample code you can see it in a `WiFiSettingsViewController` extension.
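One pattern that seems to work is to give the data source subclass a closure (or a delegate reference) supplied by the owning view controller, so the override just forwards the deletion. A sketch under my own assumptions; the deleteHandler name and the Section/Item types are invented:

import UIKit

enum Section { case main }
struct Item: Hashable { let id: UUID; let title: String }

// The subclass only enables editing and forwards the commit to a closure
// owned by the parent view controller.
final class EditableDataSource: UITableViewDiffableDataSource<Section, Item> {
    var deleteHandler: ((Item) -> Void)?

    override func tableView(_ tableView: UITableView,
                            canEditRowAt indexPath: IndexPath) -> Bool {
        true
    }

    override func tableView(_ tableView: UITableView,
                            commit editingStyle: UITableViewCell.EditingStyle,
                            forRowAt indexPath: IndexPath) {
        guard editingStyle == .delete,
              let item = itemIdentifier(for: indexPath) else { return }
        // Let the parent controller update CloudKit, then apply a new snapshot.
        deleteHandler?(item)
    }
}

In the table view controller you would then set dataSource.deleteHandler = { [weak self] item in ... }, delete the CloudKit record there, and apply an updated snapshot once the operation completes.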
What is a good approach to sharing a data model between a UIKit view controller and a SwiftUI view that it presents?
The model property source of truth is declared in the table view controller, and I'd like changes in the presented SwiftUI Form to mutate the model.
If you pass a value type model from a UIKit view controller to a SwiftUI view, how do you manipulate that data (say in a Form) and pass it back?
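One approach, assuming the model is a value type owned by the table view controller, is to wrap it in an ObservableObject that both sides share: the SwiftUI Form mutates the published property, and the controller reads it back (or observes it with Combine) after dismissal. A rough sketch; Settings, SettingsStore, and SettingsForm are invented names:

import SwiftUI
import UIKit

// Hypothetical value-type model.
struct Settings {
    var title = ""
    var isEnabled = false
}

// Reference-type wrapper shared by both UIKit and SwiftUI.
final class SettingsStore: ObservableObject {
    @Published var settings: Settings

    init(settings: Settings) { self.settings = settings }
}

struct SettingsForm: View {
    @ObservedObject var store: SettingsStore

    var body: some View {
        Form {
            TextField("Title", text: $store.settings.title)
            Toggle("Enabled", isOn: $store.settings.isEnabled)
        }
    }
}

// In the table view controller, present the form and keep a reference to the
// store so edits made in SwiftUI are visible after the sheet is dismissed.
final class SettingsTableViewController: UITableViewController {
    private let store = SettingsStore(settings: Settings())

    func presentForm() {
        let host = UIHostingController(rootView: SettingsForm(store: store))
        present(host, animated: true)
    }
}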
I'm currently migrating from a UITableView to a modern UICollectionView list with compositional layout - https://developer.apple.com/videos/play/wwdc2020/10097/ - to create a UITableView-like interface. In Apple's sample code - https://developer.apple.com/documentation/uikit/views_and_controls/collection_views/implementing_modern_collection_views and API documentation - https://developer.apple.com/documentation/uikit/uilistcontentconfiguration, as far as I can tell the only examples given are with preconfigured system styles (e.g. defaultContentConfiguration()) and convenience properties (e.g. text and secondaryText).
How would I, say, integrate my previous custom UITableViewCell made in Interface Builder (with outlets connected to the custom subclass)? I'm open to rewriting it as a fully programmatic cell since it's small enough in scope and needs UI work anyway.
More generally, in a UICollectionView list with compositional layout, how do you add a fully custom cell?
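For a fully custom cell in a list layout, one approach (as of iOS 14) is to subclass UICollectionViewListCell, lay out your own subviews, and register it with UICollectionView.CellRegistration instead of a content configuration. A minimal sketch; the ScoreCell name and Int item type are placeholders (a nib-based cell could be registered with CellRegistration's cellNib initializer instead):

import UIKit

// Hypothetical custom cell laid out in code.
final class ScoreCell: UICollectionViewListCell {
    let scoreLabel = UILabel()

    override init(frame: CGRect) {
        super.init(frame: frame)
        scoreLabel.translatesAutoresizingMaskIntoConstraints = false
        contentView.addSubview(scoreLabel)
        NSLayoutConstraint.activate([
            scoreLabel.leadingAnchor.constraint(equalTo: contentView.layoutMarginsGuide.leadingAnchor),
            scoreLabel.centerYAnchor.constraint(equalTo: contentView.centerYAnchor)
        ])
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}

// Typically created in viewDidLoad and captured by the data source's cell provider.
let cellRegistration = UICollectionView.CellRegistration<ScoreCell, Int> { cell, indexPath, score in
    cell.scoreLabel.text = "Score: \(score)"
}

// Inside the diffable data source's cell provider:
// collectionView.dequeueConfiguredReusableCell(using: cellRegistration,
//                                              for: indexPath,
//                                              item: score)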
For a Mac app, I am unable to get signing (release) working and don't understand the problem.
With automatic code signing on, I get the error "Provisioning profile "Mac Team Provisioning Profile: [redacted]" doesn't match the entitlements file's value for the com.apple.developer.networking.networkextension entitlement."
With automatic code signing off, I get the error "No 'Developer ID Application' signing certificate matching team ID "[redacted]" with a private key was found."
I've been looking through Apple's sample code Building a Feature-Rich App for Sports Analysis - https://developer.apple.com/documentation/vision/building_a_feature-rich_app_for_sports_analysis and its associated WWDC video to learn to reason about AVFoundation and VNDetectTrajectoriesRequest - https://developer.apple.com/documentation/vision/vndetecttrajectoriesrequest. My goal is to allow the user to import videos (this part I have working, the user sees a UIDocumentBrowserViewController - https://developer.apple.com/documentation/uikit/uidocumentbrowserviewcontroller, picks a video file, and then a copy is made), but I only want segments of the original video copied where trajectories are detected from a ball moving.
I've tried as best I can to grasp the two parts, at the very least finding where the video copy is made and where the trajectory request is made.
The full video copy happens in CameraViewController.swift (I'm starting with just imported video for now and not reading live from the device's video camera), line 160:

func startReadingAsset(_ asset: AVAsset) {
    videoRenderView = VideoRenderView(frame: view.bounds)
    setupVideoOutputView(videoRenderView)

    // Drive frame reading from a display link.
    let displayLink = CADisplayLink(target: self, selector: #selector(handleDisplayLink(_:)))
    displayLink.preferredFramesPerSecond = 0
    displayLink.isPaused = true
    displayLink.add(to: RunLoop.current, forMode: .default)

    guard let track = asset.tracks(withMediaType: .video).first else {
        AppError.display(AppError.videoReadingError(reason: "No video tracks found in AVAsset."), inViewController: self)
        return
    }

    let playerItem = AVPlayerItem(asset: asset)
    let player = AVPlayer(playerItem: playerItem)
    let settings = [
        String(kCVPixelBufferPixelFormatTypeKey): kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    ]
    let output = AVPlayerItemVideoOutput(pixelBufferAttributes: settings)
    playerItem.add(output)
    player.actionAtItemEnd = .pause
    player.play()

    self.displayLink = displayLink
    self.playerItemOutput = output
    self.videoRenderView.player = player

    // Derive the buffer orientation from the track's preferred transform.
    let affineTransform = track.preferredTransform.inverted()
    let angleInDegrees = atan2(affineTransform.b, affineTransform.a) * CGFloat(180) / CGFloat.pi
    var orientation: UInt32 = 1
    switch angleInDegrees {
    case 0:
        orientation = 1 // Recording button is on the right
    case 180, -180:
        orientation = 3 // abs(180) degree rotation recording button is on the right
    case 90:
        orientation = 8 // 90 degree CW rotation recording button is on the top
    case -90:
        orientation = 6 // 90 degree CCW rotation recording button is on the bottom
    default:
        orientation = 1
    }
    videoFileBufferOrientation = CGImagePropertyOrientation(rawValue: orientation)!
    videoFileFrameDuration = track.minFrameDuration
    displayLink.isPaused = false
}
@objc
private func handleDisplayLink(_ displayLink: CADisplayLink) {
    guard let output = playerItemOutput else {
        return
    }

    videoFileReadingQueue.async {
        let nextTimeStamp = displayLink.timestamp + displayLink.duration
        let itemTime = output.itemTime(forHostTime: nextTimeStamp)
        guard output.hasNewPixelBuffer(forItemTime: itemTime) else {
            return
        }
        guard let pixelBuffer = output.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil) else {
            return
        }
        // Create sample buffer from pixel buffer
        var sampleBuffer: CMSampleBuffer?
        var formatDescription: CMVideoFormatDescription?
        CMVideoFormatDescriptionCreateForImageBuffer(allocator: nil, imageBuffer: pixelBuffer, formatDescriptionOut: &formatDescription)
        let duration = self.videoFileFrameDuration
        var timingInfo = CMSampleTimingInfo(duration: duration, presentationTimeStamp: itemTime, decodeTimeStamp: itemTime)
        CMSampleBufferCreateForImageBuffer(allocator: nil,
                                           imageBuffer: pixelBuffer,
                                           dataReady: true,
                                           makeDataReadyCallback: nil,
                                           refcon: nil,
                                           formatDescription: formatDescription!,
                                           sampleTiming: &timingInfo,
                                           sampleBufferOut: &sampleBuffer)
        if let sampleBuffer = sampleBuffer {
            self.outputDelegate?.cameraViewController(self, didReceiveBuffer: sampleBuffer, orientation: self.videoFileBufferOrientation)
            DispatchQueue.main.async {
                let stateMachine = self.gameManager.stateMachine
                if stateMachine.currentState is GameManager.SetupCameraState {
                    // Once we received first buffer we are ready to proceed to the next state
                    stateMachine.enter(GameManager.DetectingBoardState.self)
                }
            }
        }
    }
}
Line 139, `self.outputDelegate?.cameraViewController(self, didReceiveBuffer: sampleBuffer, orientation: self.videoFileBufferOrientation)`, is where the video sample buffer is passed to the Vision framework subsystem for analyzing trajectories, the second part. This delegate callback is implemented in GameViewController.swift on line 335:
// Perform the trajectory request in a separate dispatch queue.
trajectoryQueue.async {
    do {
        try visionHandler.perform([self.detectTrajectoryRequest])
        if let results = self.detectTrajectoryRequest.results {
            DispatchQueue.main.async {
                self.processTrajectoryObservations(controller, results)
            }
        }
    } catch {
        AppError.display(error, inViewController: self)
    }
}
Trajectories found are drawn over the video in self.processTrajectoryObservations(controller, results).
Where I'm stuck now is modifying this so that instead of drawing the trajectories, the new video only copies parts of the original video to it where trajectories were detected in the frame.
As far as I can tell from Identifying Trajectories in Video - https://developer.apple.com/documentation/vision/identifying_trajectories_in_video, trajectory detection lets you use characteristics of the detected trajectories for, say, drawing over the video as it plays. However, is it possible to mark which time ranges of the video have detected trajectories, or perhaps access the frames for which there are trajectories?
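One hedged approach: since you know the presentation time of each frame you hand to the request (and, on iOS 14+, VNObservation also exposes a timeRange property), you can accumulate CMTimeRanges for the frames where results come back and then assemble only those segments with AVMutableComposition. A sketch under my own assumptions; detectedRanges and the merging logic are not part of the sample code:

import AVFoundation
import Vision

// Collect the time ranges of frames where a trajectory was observed.
var detectedRanges: [CMTimeRange] = []

func recordTrajectory(results: [VNTrajectoryObservation], at frameTime: CMTime, frameDuration: CMTime) {
    guard !results.isEmpty else { return }
    let range = CMTimeRange(start: frameTime, duration: frameDuration)
    // Merge with the previous range when frames are contiguous or overlapping.
    if let last = detectedRanges.last, CMTimeRangeContainsTime(last, time: frameTime) || last.end == frameTime {
        detectedRanges[detectedRanges.count - 1] = CMTimeRangeGetUnion(last, otherRange: range)
    } else {
        detectedRanges.append(range)
    }
}

// After analysis, copy only the detected ranges from the source asset.
func makeTrimmedAsset(from asset: AVAsset) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    for range in detectedRanges {
        try composition.insertTimeRange(range, of: asset, at: composition.duration)
    }
    return composition // Export with AVAssetExportSession if a new file is needed.
}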
I'd like to perform VNDetectHumanBodyPoseRequests on a video that the user imports through the system photo picker or document view controller. I started looking at the Building a Feature-Rich App for Sports Analysis - https://developer.apple.com/documentation/vision/building_a_feature-rich_app_for_sports_analysis sample code since it has an example where video is imported from disk and then analyzed. However, my end goal is to filter for frames that contain certain poses, so that all frames without them are edited out / deleted (instead of in the sample code drawing on frames with detected trajectories). For pose detection I'm looking at the Detecting Human Actions in a Live Video Feed - https://developer.apple.com/documentation/createml/detecting_human_actions_in_a_live_video_feed, but the live video capture isn't quite relevant.
I'm trying to break this down into smaller problems and have a few questions:
Should a full video file copy be made before analysis?
The Detecting Human Actions in a Live Video Feed - https://developer.apple.com/documentation/createml/detecting_human_actions_in_a_live_video_feed sample code uses a Combine pipeline for analyzing live video frames. Since I'm analyzing imported video, would Combine be overkill or a good fit here?
After I've detected which frames have a particular pose, how (in AVFoundation terms) do I filter for those frames or edit out / delete the frames without that pose? (One possible approach is sketched below.)
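For the detection step, one sketch under my own assumptions (not from either sample) is to read the imported file frame by frame with AVAssetReader, run VNDetectHumanBodyPoseRequest on each frame, and remember the presentation times of frames that contain a body pose:

import AVFoundation
import Vision

func detectPoseTimes(in asset: AVAsset) throws -> [CMTime] {
    guard let track = asset.tracks(withMediaType: .video).first else { return [] }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        String(kCVPixelBufferPixelFormatTypeKey): kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    ])
    reader.add(output)
    reader.startReading()

    var poseTimes: [CMTime] = []
    let request = VNDetectHumanBodyPoseRequest()

    while let sampleBuffer = output.copyNextSampleBuffer() {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { continue }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
        try handler.perform([request])
        if let observations = request.results, !observations.isEmpty {
            // Keep the timestamp of every frame that contains a detected pose.
            poseTimes.append(CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        }
    }
    return poseTimes
}

Those times can then be coalesced into CMTimeRanges and assembled with AVMutableComposition as in the trajectory sketch above. This also suggests an answer to the first question: a full up-front copy isn't strictly required for analysis, since AVAssetReader can read the imported file directly.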
"Code signing 'WatchDeuce Extension.appex' failed."
"View distribution logs for more information."
Does anyone have any suggestions for a solution or workaround? I've filed this as FB9171462 with the logs attached.
A quick web search shows that storing them in a plist is not recommended. What are the best practices here?
For example,
Operation A both fetches model data over the network and updates a UICollectionView backed by it.
Operation B filters model data.
What is a good approach to executing B only after A is finished?
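If A and B are Operations on an OperationQueue, one straightforward approach is a dependency, so B never starts before A finishes. A minimal sketch with placeholder block operations; note that a BlockOperation finishes as soon as its block returns, so if A kicks off asynchronous work (the network fetch) you would need an asynchronous Operation subclass or to wait for the fetch inside the block:

import Foundation

let queue = OperationQueue()

// Placeholder operations standing in for "fetch and update" (A)
// and "filter model data" (B).
let operationA = BlockOperation {
    print("A: fetch model data and update the collection view")
}

let operationB = BlockOperation {
    print("B: filter model data")
}

// B will not start until A has finished.
operationB.addDependency(operationA)
queue.addOperations([operationA, operationB], waitUntilFinished: false)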
When synchronizing model objects, local CKRecords, and CKRecords in CloudKit during swipe-to-delete, how can I make this as robust as possible? Error handling omitted for the sake of the example.
override func tableView(_ tableView: UITableView, commit editingStyle: UITableViewCell.EditingStyle, forRowAt indexPath: IndexPath) {
    if editingStyle == .delete {
        let record = self.records[indexPath.row]
        privateDatabase.delete(withRecordID: record.recordID) { recordID, error in
            self.records.remove(at: indexPath.row)
        }
    }
}
Since indexPath could change due to other changes in the table view / collection view during the time it takes to delete the record from CloudKit, how could this be improved upon?
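One way to sidestep the stale index path, assuming the records have stable recordIDs, is to capture the record itself rather than the row, and in the completion handler look its current index up again by identity on the main queue before mutating the array and the table view. A hedged sketch (with a diffable data source you would apply a new snapshot instead of calling deleteRows):

override func tableView(_ tableView: UITableView,
                        commit editingStyle: UITableViewCell.EditingStyle,
                        forRowAt indexPath: IndexPath) {
    guard editingStyle == .delete else { return }

    // Capture the record itself rather than relying on indexPath later.
    let record = records[indexPath.row]

    privateDatabase.delete(withRecordID: record.recordID) { [weak self] recordID, error in
        DispatchQueue.main.async {
            guard let self = self, error == nil else { return } // Surface the error in a real app.
            // Look the row up again by identity, in case the table changed meanwhile.
            if let row = self.records.firstIndex(where: { $0.recordID == record.recordID }) {
                self.records.remove(at: row)
                tableView.deleteRows(at: [IndexPath(row: row, section: indexPath.section)], with: .automatic)
            }
        }
    }
}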