Posts

Post marked as solved
1 Reply
Thanks for noticing this issue; you are correct, this seems like it should work. As a workaround, please hit Build (⌘-B) on your project, and then you should be able to click on the model class, as in the older interface.
Post not yet marked as solved
3 Replies
Hi Marco, I'm glad you found the issue, and I'm sorry this beta isn't working for you. For reference, here's the issue in the Xcode 13.2 release notes: Create ML Known Issues: "In the CreateML App, selecting data and clicking train doesn't trigger the expected actions. (84309240)" I recommend that anyone encountering this issue use a released version of Xcode, such as Xcode 13.1, for Create ML training.
Post not yet marked as solved
1 Reply
Hi there. I'm sorry you are having problems with the Create ML app. That sounds like a bug, and I'd like to help. It would be great to get some more information, and I have a set of steps that might work around your problem with the missing "Data Source".

- How much storage space do you have free on your system? Select Apple menu -> About This Mac -> Storage, and check how much is 'available'.
- How long did it take before 'stalling': 10 seconds, 10 minutes, or 10 hours?
- Can you let me know what path your training data is held in? e.g. ~/Documents/My Data/Training Data or /tmp/datafolder/

You can reply here, or (even better) create a report using the Feedback Assistant app. Or both.

It's possible that there is an issue with sandboxing that is preventing the app from reading your data correctly. To work around that, please try the following:

1. Open the Create ML app.
2. Cmd-click the Create ML icon in the Dock: this opens Finder and shows you where the Create ML app is.
3. From the Apple menu, open System Preferences, then go to "Security & Privacy" -> Privacy -> "Full Disk Access".
4. Check whether the Create ML app is in the list, and turn it on. If it's not there, drag the icon from Finder (from step 2) into the list.
5. Then repeat your experiment.

Please let me know if that works.
Post not yet marked as solved
1 Reply
Hi! If you haven't already taken a look, I recommend watching the video Classify hand poses and actions with Create ML. Geppy, Brittany and Nathan go over what hand poses are, how to create a model that identifies your custom hand poses, and how to integrate that into an app that adds special effects. There are also code examples. I hope that gets you started. If you have more questions, feel free to post them with the tag [wwdc21-10039]
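If it helps to see the shape of the code, here's a minimal sketch of the recognition side, assuming a hand pose classifier exported from Create ML. The model name MyHandPoseClassifier, its poses input, and its label output are placeholders for whatever your own model's generated interface provides.

```swift
import CoreML
import CoreVideo
import Vision

// Minimal sketch: classify a hand pose in a single frame.
// "MyHandPoseClassifier" is a hypothetical name for a hand pose classifier
// exported from Create ML; it is assumed to expose the usual `poses` input.
func classifyHandPose(in pixelBuffer: CVPixelBuffer) throws -> String? {
    // 1. Detect hand keypoints with Vision.
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
    try handler.perform([request])
    guard let observation = request.results?.first else { return nil }

    // 2. Convert the keypoints into the MLMultiArray layout the model expects.
    let keypoints = try observation.keypointsMultiArray()

    // 3. Run the Create ML hand pose classifier on the keypoints.
    let model = try MyHandPoseClassifier(configuration: MLModelConfiguration())
    let prediction = try model.prediction(poses: keypoints)
    return prediction.label
}
```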
Post marked as solved
1 Reply
Hi there; great question! Create ML and Core ML, together with the Vision API, provide the right technologies to do real-time image recognition. Because Core ML models are accelerated using the Neural Engine or GPU, they can provide multiple predictions per second (I recommend 2-5 fps), which will create a smooth user experience. I have personally built apps using SwiftUI and Core ML successfully, but for the moment I'm going to point you at some official Apple resources, such as the great demo apps and WWDC videos that talk about how to do this with UIKit. The other key technologies you will need to build your app are AVFoundation, which gets video frames from the camera and can display them onscreen, and the Vision framework.

Demo App
Here's a demo app which uses AVFoundation and Vision to do machine learning on the frames of live video. It uses UIKit, not SwiftUI.
https://developer.apple.com/documentation/createml/detecting_human_actions_in_a_live_video_feed

Videos
For context, I recommend these WWDC videos (I'm in one of them):
https://developer.apple.com/wwdc20/10043 Build an Action Classifier with Create ML
https://developer.apple.com/wwdc20/10673 Explore Computer Vision APIs
https://developer.apple.com/wwdc21/10039 Classify Hand Poses and Actions with Create ML

Technologies
This documentation should help you get familiar with some of the technologies involved:
https://developer.apple.com/documentation/vision/classifying_images_with_vision_and_core_ml
https://developer.apple.com/documentation/vision/tracking_multiple_objects_or_rectangles_in_video

Using SwiftUI
I don't have any links to official demos for SwiftUI and AVFoundation, but please try searching the web for examples. https://developer.apple.com/av-foundation/
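To make the per-frame flow a bit more concrete, here's a minimal sketch under a few assumptions: the model is an image classifier exported from Create ML (MyImageClassifier is a placeholder name), and frames arrive through an AVCaptureVideoDataOutput delegate that you have already wired into a capture session. Vision wraps the Core ML model and handles scaling each frame to the model's input size.

```swift
import AVFoundation
import CoreML
import Vision

// Minimal sketch: run a Create ML image classifier on live camera frames.
// "MyImageClassifier" is a hypothetical name; set an instance of this class
// as the sample buffer delegate of an AVCaptureVideoDataOutput.
final class FrameClassifier: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    // Wrap the Core ML model for use with Vision.
    private lazy var request: VNCoreMLRequest = {
        let coreMLModel = try! VNCoreMLModel(for: MyImageClassifier(configuration: MLModelConfiguration()).model)
        let request = VNCoreMLRequest(model: coreMLModel) { request, _ in
            // Take the top classification, if any.
            if let best = (request.results as? [VNClassificationObservation])?.first {
                print("\(best.identifier): \(best.confidence)")
            }
        }
        request.imageCropAndScaleOption = .centerCrop
        return request
    }()

    // Called once per camera frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
        try? handler.perform([request])
    }
}
```

You don't have to classify every frame; skipping frames so predictions land in the 2-5 fps range mentioned above keeps the experience smooth without wasting work.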
Post not yet marked as solved
5 Replies
Hi folks, Apple engineer here. I'm sorry the Create ML app is not working in the beta. I'd like to look into this issue: if you have time, please file a Feedback ticket, and, if possible and safe, share an example data set with the ticket so I can try what you are trying. Apple ML Engineer
Post marked as solved
2 Replies
Hi N070. I'm sorry the app is producing an error on your data; that's not good. I would like to help, so if you can share details such as the project or the data you are using, please do. If you have already created a feedback request, please add your data to that; if not, please create a feedback request, include the number here, and I will take a look. Apple Engineer
Post not yet marked as solved
6 Replies
Hi Paweł. Thanks for the sample data, it was very helpful, and I was able to confirm the issue exists on macOS Catalina. I was able to train on your data without an error using macOS Big Sur beta 2 and Xcode 12 beta 2.
Post not yet marked as solved
6 Replies
Thanks for supplying the data; that's really going to help us. It looks like this feedback is yours: FB7854032
Post not yet marked as solved
5 Replies
You can learn about the differences between ARKit's body pose estimation and the Vision pose estimator in the "Detect Body and Hand Pose with Vision" presentation. https://developer.apple.com/wwdc20/10653 Fast forward to 17:15, where they cover this topic and mention that while ARKit supports the rear-facing camera only, the Vision API can use either camera, or even process video frames from other sources.
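To show how source-agnostic the Vision path is, here's a minimal sketch that runs the body pose request on a single pixel buffer, wherever that buffer came from (the function name is just for illustration):

```swift
import CoreVideo
import Vision

// Minimal sketch: run the Vision body pose estimator on one pixel buffer.
// Unlike ARKit's body tracking, the buffer can come from the front camera,
// the rear camera, a video file, or a still image.
func detectBodyPose(in pixelBuffer: CVPixelBuffer) throws -> VNHumanBodyPoseObservation? {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
    try handler.perform([request])
    return request.results?.first
}
```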
Post not yet marked as solved
2 Replies
Thanks Nicholas. We had a lot of fun building the new Action Classification template, and we are excited to see what developers make with it.
Post marked as solved
4 Replies
You will need to use macOS Big Sur to train an action classifier model using the Create ML app or the Create ML framework. The action classifier model produced by Create ML is best experienced on iOS 14.0 and macOS Big Sur, in combination with the Vision API's body pose estimation feature. If you want to provide your own function to compute body pose data, you can even use the action classifier model on iOS 13 and macOS Big Sur.
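To illustrate how the pieces fit at prediction time, here's a minimal sketch under a few assumptions: the model is an action classifier exported from Create ML (MyActionClassifier, its 60-frame prediction window, and the poses input name are placeholders for your own model), and the per-frame poses come from Vision's body pose request. If you compute body pose data yourself instead, the same stacking step applies to your own keypoint arrays.

```swift
import CoreML
import Vision

// Minimal sketch: classify an action from a window of body pose observations.
// "MyActionClassifier" is a hypothetical name for an action classifier exported
// from Create ML, assumed to be trained with a 60-frame prediction window and
// to expose the usual `poses` input.
func classifyAction(poseWindow: [VNHumanBodyPoseObservation]) throws -> String {
    // Convert each frame's pose into its per-frame keypoint MLMultiArray.
    let frames = try poseWindow.map { try $0.keypointsMultiArray() }

    // Stack the frames along the time axis to match the model's input shape.
    let window = MLMultiArray(concatenating: frames, axis: 0, dataType: .float32)

    // Run the action classifier on the whole window.
    let model = try MyActionClassifier(configuration: MLModelConfiguration())
    let prediction = try model.prediction(poses: window)
    return prediction.label
}
```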
Post marked as Apple Recommended
These days, the best advice would be to "send feedback" using the Feedback Assistant app for iOS and macOS! You can find out more at https://developer.apple.com/bug-reporting/ which states: "You can now submit developer feedback and file bug reports to Apple using the native Feedback Assistant app for iOS and Mac, or the Feedback Assistant website. When you file a bug, you'll receive a Feedback ID to track the bug within the app or on the website. Feedback Assistant replaces Bug Reporter, which is no longer available." As it says, this tool replaces the older Bug Reporter, which was sometimes known as "filing a Radar" because it created tickets in the system that Apple engineers know as Radar.