Posts

Post not yet marked as solved
1 Reply
Oh I see, analyze(_:) returns when the video is finished processing.
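In case it helps anyone else, here's roughly how I'm calling it now (a sketch assuming this is Vision's VNVideoProcessor.analyze(_:); the function name and queue are my own placeholders, not from the sample). Since the call blocks until the whole time range has been processed, I dispatch it off the main thread:

```swift
import AVFoundation
import Foundation
import Vision

// Rough sketch, assuming this refers to Vision's VNVideoProcessor.analyze(_:).
// detectPoses and the completion handler are placeholders, not sample code.
func detectPoses(in videoURL: URL,
                 completion: @escaping ([VNHumanBodyPoseObservation]) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        var observations: [VNHumanBodyPoseObservation] = []
        let request = VNDetectHumanBodyPoseRequest { request, _ in
            if let poses = request.results as? [VNHumanBodyPoseObservation] {
                observations.append(contentsOf: poses)
            }
        }
        let processor = VNVideoProcessor(url: videoURL)
        let duration = AVAsset(url: videoURL).duration
        do {
            try processor.addRequest(request,
                                     processingOptions: VNVideoProcessor.RequestProcessingOptions())
            // Blocks until every frame in the time range has been analyzed.
            try processor.analyze(CMTimeRange(start: .zero, duration: duration))
        } catch {
            print("Video analysis failed: \(error)")
        }
        DispatchQueue.main.async { completion(observations) }
    }
}
```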
Post marked as solved
11 Replies
I've filed this as FB9051526. Currently the Swift project is difficult to reason about, as building it produces about 70 compiler errors. I'm not aware of any other modern Apple sample code or conceptual documentation on how to use AVAssetWriter.
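For anyone landing here, this is the basic AVAssetWriter flow as I understand it from the API reference (a minimal sketch, not the sample's code; the function name, outputURL, dimensions, and the source of the sample buffers are placeholders):

```swift
import AVFoundation
import Foundation

// Minimal AVAssetWriter lifecycle: create the writer and input, start a session,
// append sample buffers while the input is ready, then finish writing.
func writeMovie(to outputURL: URL,
                width: Int,
                height: Int,
                frames: [CMSampleBuffer]) throws {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height
    ])
    input.expectsMediaDataInRealTime = false
    writer.add(input)

    writer.startWriting()
    // Start the session at the first buffer's presentation time.
    let startTime = frames.first.map(CMSampleBufferGetPresentationTimeStamp) ?? .zero
    writer.startSession(atSourceTime: startTime)

    for buffer in frames {
        // Wait for the input to drain before appending the next buffer.
        while !input.isReadyForMoreMediaData {
            Thread.sleep(forTimeInterval: 0.01)
        }
        input.append(buffer)
    }

    input.markAsFinished()
    let finished = DispatchSemaphore(value: 0)
    writer.finishWriting { finished.signal() }
    finished.wait()
}
```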
Post marked as solved
26 Replies
A workaround (if you don't depend on iOS 15 APIs) is to use a Simulator device with an earlier runtime such as iOS 14.5, where CloudKit works.
Post not yet marked as solved
1 Reply
I've rewritten my problem more concisely below. I'd like to perform pose analysis on user-imported video and automatically produce an AVFoundation video output in which only the frames with a detected pose - https://developer.apple.com/documentation/createml/detecting_human_actions_in_a_live_video_feed are included. In the Building a Feature-Rich App for Sports Analysis - https://developer.apple.com/documentation/vision/building_a_feature-rich_app_for_sports_analysis sample code, analysis happens by implementing the func cameraViewController(_ controller: CameraViewController, didReceiveBuffer buffer: CMSampleBuffer, orientation: CGImagePropertyOrientation) delegate callback, for example on line 326 of GameViewController.swift.

Where I'm stuck is using that analysis to keep only the frames with a detected pose. Say I've analyzed all of the CMSampleBuffer frames and classified which ones contain the pose I want. How would I write only those specific frames to the new video output?
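To make the question concrete, this is roughly what I imagine the writing side looking like (a rough sketch, not from the sample): read the source with AVAssetReader, append only the frames that pass a classification predicate to an AVAssetWriter, and re-stamp the kept frames so the output plays without gaps. exportKeptFrames and shouldKeep are hypothetical names standing in for whatever per-frame classification I end up with.

```swift
import AVFoundation
import Foundation

// Sketch: copy only the frames that pass shouldKeep into a new movie file.
// shouldKeep is a placeholder for the per-frame pose classification;
// error handling is abbreviated.
func exportKeptFrames(from sourceURL: URL,
                      to outputURL: URL,
                      keeping shouldKeep: (CMSampleBuffer) -> Bool) throws {
    let asset = AVAsset(url: sourceURL)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
    reader.add(readerOutput)

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: Int(track.naturalSize.width),
        AVVideoHeightKey: Int(track.naturalSize.height)
    ])
    input.expectsMediaDataInRealTime = false
    writer.add(input)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    let fps = max(track.nominalFrameRate, 1)
    let frameDuration = CMTime(value: 1, timescale: CMTimeScale(fps.rounded()))
    var nextPTS = CMTime.zero

    while let buffer = readerOutput.copyNextSampleBuffer() {
        guard shouldKeep(buffer) else { continue }

        // Give the kept frame a new presentation time so the timeline stays continuous.
        var timing = CMSampleTimingInfo(duration: frameDuration,
                                        presentationTimeStamp: nextPTS,
                                        decodeTimeStamp: .invalid)
        var retimed: CMSampleBuffer?
        CMSampleBufferCreateCopyWithNewTiming(allocator: kCFAllocatorDefault,
                                              sampleBuffer: buffer,
                                              sampleTimingEntryCount: 1,
                                              sampleTimingArray: &timing,
                                              sampleBufferOut: &retimed)
        guard let retimedBuffer = retimed else { continue }

        // Simple offline backpressure: wait until the writer input can accept more data.
        while !input.isReadyForMoreMediaData {
            Thread.sleep(forTimeInterval: 0.01)
        }
        input.append(retimedBuffer)
        nextPTS = nextPTS + frameDuration
    }

    input.markAsFinished()
    let finished = DispatchSemaphore(value: 0)
    writer.finishWriting { finished.signal() }
    finished.wait()
}
```

The re-stamping is the part I'm least sure about, but appending the kept buffers with their original presentation times would seem to leave holes in the output's timeline.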
Post not yet marked as solved
4 Replies
Yes, it's here: Detecting Human Actions in a Live Video Feed - https://developer.apple.com/documentation/createml/detecting_human_actions_in_a_live_video_feed.
Post marked as solved
4 Replies
Here it is: https://cs193p.sites.stanford.edu.