Here it is: https://cs193p.sites.stanford.edu.
Yes, it's here: Detecting Human Actions in a Live Video Feed - https://developer.apple.com/documentation/createml/detecting_human_actions_in_a_live_video_feed.
I've rewritten my problem more concisely below.
I'd like to perform pose analysis on user-imported video, automatically producing an AVFoundation video output in which only frames with a detected pose - https://developer.apple.com/documentation/createml/detecting_human_actions_in_a_live_video_feed are included. In the Building a Feature-Rich App for Sports Analysis - https://developer.apple.com/documentation/vision/building_a_feature-rich_app_for_sports_analysis sample code, analysis happens by implementing the func cameraViewController(_ controller: CameraViewController, didReceiveBuffer buffer: CMSampleBuffer, orientation: CGImagePropertyOrientation) delegate callback, as on line 326 of GameViewController.swift.
Where I'm stuck is using this analysis to keep only the frames where a pose was detected. Say I've analyzed all the CMSampleBuffer frames and classified which ones contain the pose I want. How would I write only those specific frames to the new video output?
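For concreteness, here's a minimal sketch of the direction I'm imagining, assuming the classification result can be expressed as a shouldKeep(_:) predicate over decoded sample buffers. The function name, the predicate, and the output settings below are placeholders, and I'm ignoring audio, orientation, and real error handling:

import AVFoundation
import CoreMedia

/// Copies only the sample buffers that `shouldKeep` approves into a new movie.
/// `shouldKeep` stands in for the pose-classification step; everything here is a sketch.
func writeFilteredVideo(from sourceURL: URL,
                        to outputURL: URL,
                        shouldKeep: (CMSampleBuffer) -> Bool) throws {
    let asset = AVAsset(url: sourceURL)
    guard let videoTrack = asset.tracks(withMediaType: .video).first else { return }

    // Read decoded frames from the source movie.
    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(
        track: videoTrack,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                         kCVPixelFormatType_420YpCbCr8BiPlanarFullRange])
    reader.add(readerOutput)

    // Re-encode only the kept frames into the output movie.
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let writerInput = AVAssetWriterInput(
        mediaType: .video,
        outputSettings: [AVVideoCodecKey: AVVideoCodecType.h264,
                         AVVideoWidthKey: videoTrack.naturalSize.width,
                         AVVideoHeightKey: videoTrack.naturalSize.height])
    writerInput.expectsMediaDataInRealTime = false
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(writerInput)

    reader.startReading()
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    // Re-time the kept frames back-to-back so the output has no gaps.
    let fps = max(Int32(videoTrack.nominalFrameRate.rounded()), 1)
    let frameDuration = CMTime(value: 1, timescale: fps)
    var outputTime = CMTime.zero

    while let sampleBuffer = readerOutput.copyNextSampleBuffer() {
        guard shouldKeep(sampleBuffer),
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { continue }

        // Crude back-pressure handling; a real implementation would use
        // requestMediaDataWhenReady(on:using:) instead of polling.
        while !writerInput.isReadyForMoreMediaData {
            Thread.sleep(forTimeInterval: 0.01)
        }
        if !adaptor.append(pixelBuffer, withPresentationTime: outputTime) { break }
        outputTime = outputTime + frameDuration
    }

    writerInput.markAsFinished()
    writer.finishWriting {
        // Check writer.status / writer.error here.
    }
}

Alternatively, would it be better to record the time ranges of the kept frames and stitch them together with AVMutableComposition's insertTimeRange(_:of:at:), avoiding re-encoding altogether?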
Also, is this a known issue that others are facing?
Edit: I see in the Code Signing tag's search results that it is.
Also reproduced and filed under FB9171462.
The archive signed and uploaded successfully in Xcode 13.0 beta (13A5154h). Can I expect it to be accepted in review if the archive was generated with the public Xcode but uploaded with the Xcode beta? What about for TestFlight, both for public (external) and internal testing?
A workaround (if you don't depend on iOS 15 APIs) is to use a Simulator device with a previous runtime such as iOS 14.5; CloudKit works there.
I've filed this as FB9051526. Currently the Swift project is difficult to reason about, since opening it produces about 70 compiler errors. I'm not aware of any other modern Apple sample code or conceptual documentation on how to use AVAssetWriter.
Yes, sorry, I just accepted your answer on the other thread; it worked.
Turns out a simple solution was hiding in plain sight: I just set the gesture recognizer's view's frame to the superview's frame, since it has a parent scroll view.
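Roughly what that looks like (the view names here are placeholders for my actual hierarchy):

// gestureView is the view the recognizer is attached to; it sits inside a scroll view.
if let superview = gestureView.superview {
    // Match the superview's frame so touches across the scroll view reach the recognizer.
    gestureView.frame = superview.frame
}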
Sir, this is a Wendy's.
Oh I see, analyze(_:) returns when the video is finished processing.
I'm unable to get this working in a "Designed for iPad" app on the Apple Vision Pro simulator. Is this supported?
After opting out of SwiftData, this is no longer an issue.