Hey all, we are currently training a Hand Pose model with the current release of CreateML, and during the feature extraction phase, we get the following error:
Assertion failed: (/AppleInternal/Library/BuildRoots/d9889869-120b-11ee-b796-7a03568b17ac/Library/Caches/com.apple.xbs/Sources/TuriCore/turicreate_oss/src/core/storage/DataTable_data/DataColumn_v2_block_manager.cpp:105): seg->blocks.size()>column_id [0 > 0]
We have searched for this online and tried to mitigate the issue, but we are getting nowhere. Has anyone else experienced this issue?
Is there a way to mimic this UIKit functionality in SwiftUI?
Long story short, I am creating an interactive SceneKit view (currently in UIKit) that anchors 2D UIViews over nodes, for navigation and labelling.
I would like to migrate over to SwiftUI, but I am having difficulties mimicking this functionality.
let subViews = self.view.subviews.compactMap { $0 as? UIButton }
if let view = subViews.first(where: { $0.currentTitle == label }) {
    view.center = self.getScreenPoint(renderer: renderer, node: node)
}
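For reference, this is roughly what I imagine on the SwiftUI side - as far as I can tell, the .position(_:) modifier is the closest analogue to setting view.center. The type and property names below are placeholders, not my actual code:

import SwiftUI

struct LabelOverlay: View {
    let labelPositions: [String: CGPoint]   // label title -> projected screen point

    var body: some View {
        ForEach(Array(labelPositions.keys), id: \.self) { label in
            Text(label)
                .position(labelPositions[label] ?? .zero)   // SwiftUI analogue of view.center = ...
        }
    }
}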
Can anyone help me with this?
Thanks!
I have been using UIKit to build out a relatively complex SceneKit scene with 2D labels composed over nodes in the scene. I am using the following to fetch the subview 'names', project each node's 3D position as a 2D point, and then set the centre of the corresponding label view to that point. Code is below:
Project point:
func getScreenPoint(renderer: SCNSceneRenderer, node: SCNNode) -> CGPoint {
    let projectedPoint = renderer.projectPoint(node.position)
    let screenPoint = CGPoint(x: CGFloat(projectedPoint.x), y: CGFloat(projectedPoint.y))
    return screenPoint
}
Absolute position update:
viewTest.center = self.getScreenPoint(renderer: renderer, node: node)
What is the SwiftUI equivalent of this approach? I have tried using a published array containing all of the CGPoints, but it lags and becomes unresponsive after a few seconds of interaction.
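For context, here is a stripped-down sketch of the kind of bridge I have been experimenting with: a UIViewRepresentable wrapping the SCNView, with a coordinator acting as the SCNSceneRendererDelegate and publishing the projected points back to SwiftUI once per frame. All of the type and property names are placeholders, not my actual code:

import SwiftUI
import SceneKit

// Placeholder model holding the latest projected screen point for each labelled node.
final class NodeScreenPositions: ObservableObject {
    @Published var points: [String: CGPoint] = [:]
}

struct SceneKitContainer: UIViewRepresentable {
    let scene: SCNScene
    let trackedNodeNames: [String]
    @ObservedObject var positions: NodeScreenPositions

    func makeUIView(context: Context) -> SCNView {
        let view = SCNView()
        view.scene = scene
        view.allowsCameraControl = true
        view.delegate = context.coordinator   // per-frame render callbacks
        return view
    }

    func updateUIView(_ uiView: SCNView, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    final class Coordinator: NSObject, SCNSceneRendererDelegate {
        let parent: SceneKitContainer
        init(_ parent: SceneKitContainer) { self.parent = parent }

        // Runs on SceneKit's render thread once per frame.
        func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
            var newPoints: [String: CGPoint] = [:]
            for name in parent.trackedNodeNames {
                guard let node = parent.scene.rootNode.childNode(withName: name, recursively: true) else { continue }
                // projectPoint expects world-space coordinates.
                let projected = renderer.projectPoint(node.worldPosition)
                newPoints[name] = CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))
            }
            // Hop to the main queue so SwiftUI sees a single published update per frame.
            DispatchQueue.main.async {
                self.parent.positions.points = newPoints
            }
        }
    }
}

struct AnnotatedSceneView: View {
    @StateObject private var positions = NodeScreenPositions()
    let scene: SCNScene
    let labels: [String]   // node names to annotate

    var body: some View {
        ZStack {
            SceneKitContainer(scene: scene, trackedNodeNames: labels, positions: positions)
            ForEach(labels, id: \.self) { label in
                if let point = positions.points[label] {
                    Text(label)
                        .padding(4)
                        .background(Color.white.opacity(0.7))
                        .position(point)
                }
            }
        }
    }
}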
Thank you - happy to share more code if needed!
I have been reading up on the new Create ML Components documentation, mostly the sample code for 'Counting human body action repetitions in a live video feed', which can be found here.
I currently have a body action classifier model built with a UIKit/SwiftUI front-end and a relatively complex back-end, but this solution looks far cleaner and is 100% SwiftUI - which is a big plus for me.
Based on this sample code and documentation, how would I use my own body action classifier here? I am purely curious - amazed by how lightweight it is, and I would love to see how a Create ML model could be plugged in.
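For context, this is roughly how I drive my existing classifier today - i.e. the piece I would hope to swap in for the built-in counting model. The class name MyActionClassifier, the window length, and the poses/label feature names are placeholders for whatever the generated model class actually exposes:

import Vision
import CoreML

final class ActionPredictor {
    private let model: MyActionClassifier          // placeholder generated model class
    private var poseWindow: [MLMultiArray] = []
    private let windowLength = 60                  // frames per prediction window (model-dependent)

    init() throws {
        model = try MyActionClassifier(configuration: MLModelConfiguration())
    }

    // Feed one VNHumanBodyPoseObservation per frame; returns a label when a window is full.
    func add(_ observation: VNHumanBodyPoseObservation) throws -> String? {
        poseWindow.append(try observation.keypointsMultiArray())
        guard poseWindow.count == windowLength else { return nil }

        // Stack the per-frame keypoint arrays into the single multi-array the classifier expects.
        let input = MLMultiArray(concatenating: poseWindow, axis: 0, dataType: .float32)
        poseWindow.removeAll()
        return try model.prediction(poses: input).label
    }
}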