Posts

Post not yet marked as solved
18 Replies
Running Hello World in Xcode 15 beta 5 on an M1 Max still crashes, and adding @ObservationIgnored just increases the errors. Ultimately I just refactored ViewModel to conform to ObservableObject and converted all instances to @ObservedObject properties. It required a number of other tweaks too, but those were the primary changes.
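For anyone doing the same refactor, it boiled down to something like the following minimal sketch (ViewModel and its property are placeholders here, not the actual project code):

    import SwiftUI
    import Combine

    // Before: an @Observable class from the new Observation framework.
    // After: fall back to the ObservableObject protocol with @Published properties.
    final class ViewModel: ObservableObject {
        @Published var count = 0
    }

    struct ContentView: View {
        // Views that previously held the model as a plain property now take it
        // as an @ObservedObject (or @StateObject in the view that owns it).
        @ObservedObject var viewModel: ViewModel

        var body: some View {
            Button("Count: \(viewModel.count)") {
                viewModel.count += 1
            }
        }
    }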
Post not yet marked as solved
7 Replies
Extending what has already been suggested, you can modify the predicate of the wrappedValue to dynamically trigger an update to a SwiftUI List. In the example below, toggling the isFiltered @State causes the List to update and reload the FetchedResults with the filtered or unfiltered predicate.

    @FetchRequest private var items: FetchedResults<Item>
    @State var isFiltered = false

    let element: Element
    let filteredPredicate: NSPredicate
    let unfilteredPredicate: NSPredicate

    init(element: Element) {
        self.element = element
        filteredPredicate = NSPredicate(format: "element == %@ && score > 0.85", element)
        unfilteredPredicate = NSPredicate(format: "element == %@", element)
        self._items = FetchRequest<Item>(
            entity: Item.entity(),
            sortDescriptors: [NSSortDescriptor(keyPath: \Item.name, ascending: true)],
            predicate: unfilteredPredicate,
            animation: .default)
    }

    var listItems: FetchedResults<Item> {
        _items.wrappedValue.nsPredicate = isFiltered ? filteredPredicate : unfilteredPredicate
        return items
    }

    var body: some View {
        List {
            ForEach(Array(listItems.enumerated()), id: \.element) { index, item in
                Text(item.name)
            }
        }
        .toolbar {
            ToolbarItem {
                Button {
                    withAnimation {
                        isFiltered.toggle()
                    }
                } label: {
                    Label("Filter Items", systemImage: isFiltered ? "star.circle.fill" : "star.circle")
                }
            }
        }
    }
Post not yet marked as solved
2 Replies
For future viewers of this question, there is a more detailed description of the correlate method here: https://developer.apple.com/documentation/accelerate/vdsp/1d_correlation_and_convolution
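For a quick taste of the API (my own minimal sketch, not taken from that page; the sample values are arbitrary):

    import Accelerate

    // Correlate a signal with a short kernel; the result has
    // signal.count - kernel.count + 1 elements.
    let signal: [Float] = [1, 2, 3, 4, 5, 6, 7, 8]
    let kernel: [Float] = [0.25, 0.5, 0.25]
    let correlation = vDSP.correlate(signal, withKernel: kernel)
    print(correlation)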
Post not yet marked as solved
2 Replies
Thanks very much--the MDLTransform looks like just what I want! But the targetAlignment property is actually the problem: when using .any raycasts, most of the results come back with a targetAlignment of .any, which isn't particularly useful in my situation since I'm trying to distinguish between horizontal and vertical alignments.
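One workaround I'm considering (just a sketch, assuming a RealityKit ARView) is to raycast per alignment rather than with .any, so the alignment of each hit is known up front:

    import ARKit
    import RealityKit
    import UIKit

    func raycastDistinguishingAlignment(from point: CGPoint, in arView: ARView) -> ARRaycastResult? {
        // Query horizontal and vertical planes separately so the alignment of the
        // result is unambiguous, instead of relying on result.targetAlignment.
        if let horizontal = arView.raycast(from: point,
                                           allowing: .estimatedPlane,
                                           alignment: .horizontal).first {
            return horizontal
        }
        return arView.raycast(from: point,
                              allowing: .estimatedPlane,
                              alignment: .vertical).first
    }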
Post marked as solved
9 Replies
One tricky bit I have discovered is that when working with an iPhone, the screen aspect ratio does not match the aspect ratio of the depth buffer, so translating from the buffer width to the screen position requires disregarding some of the buffer width on each side.
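Something along these lines is what I mean (my own sketch; it assumes the camera image is aspect-filled to the screen, only the buffer width is cropped, and it ignores the rotation between the buffer and the interface orientation):

    import CoreGraphics

    // Map a normalized screen point (0...1 in each axis) to a column/row in the
    // depth buffer when the buffer is proportionally wider than the screen, so
    // equal amounts of the buffer width are cropped off each side.
    func depthBufferCoordinate(forNormalizedPoint point: CGPoint,
                               screenSize: CGSize,
                               bufferWidth: Int,
                               bufferHeight: Int) -> (column: Int, row: Int) {
        let screenAspect = screenSize.width / screenSize.height
        let bufferAspect = CGFloat(bufferWidth) / CGFloat(bufferHeight)

        // Portion of the buffer width (in pixels) that is actually visible on screen.
        let visibleWidth = CGFloat(bufferWidth) * (screenAspect / bufferAspect)
        let croppedPerSide = (CGFloat(bufferWidth) - visibleWidth) / 2

        let column = Int(croppedPerSide + point.x * visibleWidth)
        let row = Int(point.y * CGFloat(bufferHeight))
        return (column, row)
    }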
Post not yet marked as solved
2 Replies
I ended up removing my SCNNode instantiations and finding another way to handle my situation, but I have also found that I have problems modifying a node's transform while a UIKit animation is running. It seems to cause a hitch in my frame rate every second or so, even when the node in question is not visible. I tried out the SceneKit profiling as you suggested, but at least with my current setup I am not seeing any compile events, and no other clear culprits in the event durations.
Post not yet marked as solved
1 Reply
When working with Reality Composer and RealityKit, Xcode generates code with properties and methods specific to your scene. If you have a Reality Composer file called MyTestApp and a scene called MyScene, and you create a notification trigger with an identifier of HideObject, the generated code will expose a notification accessible from your scene object in your app. So for example:

    MyTestApp.loadMySceneAsync { result in
        switch result {
        case .success(let scene):
            scene.notifications.hideObject.post()
        default:
            break
        }
    }
Post not yet marked as solved
1 Reply
I store the y position of ARPlaneAnchors as they come in from session(_:didUpdate:) and then I create an ARAnchor for my ground plane with a matrix using that y position. If my ground plane changes, I remove that ARAnchor and replace it with a new one. Then when placing elements on the ground, I use that ARAnchor as a reference (in my case I've added a RealityKit AnchorEntity anchored to that ground ARAnchor).
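In rough outline it looks something like this (a sketch with my own type and method names, assuming a RealityKit ARView):

    import ARKit
    import RealityKit

    final class GroundPlaneManager {
        private(set) var groundAnchor: ARAnchor?
        private var groundAnchorEntity: AnchorEntity?

        // Replace the ground anchor whenever a better y estimate comes in from
        // the plane anchors delivered by session(_:didUpdate:).
        func updateGroundPlane(y: Float, session: ARSession, arView: ARView) {
            if let oldAnchor = groundAnchor {
                session.remove(anchor: oldAnchor)
            }
            if let oldEntity = groundAnchorEntity {
                arView.scene.removeAnchor(oldEntity)
            }

            var transform = matrix_identity_float4x4
            transform.columns.3.y = y
            let anchor = ARAnchor(name: "ground", transform: transform)
            session.add(anchor: anchor)
            groundAnchor = anchor

            // Anchor a RealityKit entity to the ARAnchor so content placed on the
            // ground can simply be parented to this entity.
            let anchorEntity = AnchorEntity(anchor: anchor)
            arView.scene.addAnchor(anchorEntity)
            groundAnchorEntity = anchorEntity
        }
    }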
Post not yet marked as solved
1 Reply
The standard first attack on retain cycles is making sure you've got weak references for any delegates. The next line of attack is ensuring you're using [weak self] for blocks on other threads, like in NSNotification blocks or sink blocks if you use Combine. I was having lots of retain cycle difficulties with my ARKit / SceneKit app, and addressing all of those items took care of the problem. Good luck!
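A minimal illustration of both points (hypothetical types, not from my actual app):

    import UIKit
    import Combine

    protocol SceneCoordinatorDelegate: AnyObject {
        func coordinatorDidFinish()
    }

    final class SceneCoordinator {
        // Delegates should be weak so the coordinator and its delegate
        // don't retain each other.
        weak var delegate: SceneCoordinatorDelegate?

        private var cancellables = Set<AnyCancellable>()

        func observe() {
            // Capture self weakly in long-lived blocks such as
            // NotificationCenter publishers or Combine sinks.
            NotificationCenter.default.publisher(for: UIApplication.didEnterBackgroundNotification)
                .sink { [weak self] _ in
                    self?.pause()
                }
                .store(in: &cancellables)
        }

        func pause() { /* pause rendering, timers, etc. */ }
    }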
Post not yet marked as solved
3 Replies
This document might also be helpful: https://developer.apple.com/documentation/arkit/validating_a_model_for_motion_capture
Post marked as solved
1 Reply
This seems to have been resolved by fixing a memory leak which prevented the ARView from properly unloading.
Post not yet marked as solved
2 Replies
I have had the most success with glb files for packaging models, materials, and animations.
Post not yet marked as solved
3 Replies
As Sunkenf250 points out, the occlusion actually comes as an ARKit feature, not specifically RealityKit. In addition, once you've added the frame semantics to your configuration, you will receive a pixel buffer in each frame's estimatedDepthData property for all recognized people, which can easily be translated to a CIImage.
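Roughly speaking, the two steps look like this (a sketch assuming an ARWorldTrackingConfiguration and an ARSessionDelegate; the type name is mine):

    import ARKit
    import CoreImage

    final class PersonDepthSessionDelegate: NSObject, ARSessionDelegate {

        func run(on session: ARSession) {
            let configuration = ARWorldTrackingConfiguration()
            // People occlusion with depth is the ARKit frame semantic that also
            // populates estimatedDepthData on each frame.
            if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
                configuration.frameSemantics.insert(.personSegmentationWithDepth)
            }
            session.delegate = self
            session.run(configuration)
        }

        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            // Per-pixel depth values are only meaningful where people are recognized.
            guard let depthBuffer = frame.estimatedDepthData else { return }
            let depthImage = CIImage(cvPixelBuffer: depthBuffer)
            _ = depthImage // e.g. hand off to a CIContext or a debug view
        }
    }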
Post marked as solved
3 Replies
After further testing, I think my strange experiences may have been unrelated to scene reconstruction. When I tested for person depth at a point more in the middle of the body, as opposed to at the feet where I had been testing, I got more consistent data quality.
Post marked as solved
3 Replies
Yes, thanks, I recognize the estimated depth data is specific to people in the segmentation buffer. I had some strange experiences yesterday--I turned on mesh scene reconstruction and the showSceneUnderstanding flag on my iPad running iOS 14 beta, and all of a sudden I got a continuous stream of estimated depth data to a depth of around 4-5 meters. But then I tried it again with a new build, and I got literally no estimated depth data (i.e. all depth values were 0). I restarted my device and Xcode and again got a steady stream for a single build, and then it again showed no data. Once I get a better handle on it I will file a bug report, but it did strike me that turning on one or both of those flags seemed to "wake up" the LiDAR for the estimated depth data. One other important note: in another test, I tried having someone hold the iPad instead of using a tripod, and that also seemed to improve my estimated depth data, again suggesting that the LiDAR is not activated, at least by default, for estimated depth data.
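For anyone trying to reproduce this, the flags I'm referring to are along these lines (a sketch assuming a RealityKit ARView, not the exact code from my app):

    import ARKit
    import RealityKit

    func enableSceneUnderstandingDebug(on arView: ARView) {
        let configuration = ARWorldTrackingConfiguration()
        // Turn on LiDAR mesh scene reconstruction where supported.
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            configuration.sceneReconstruction = .mesh
        }
        // The person depth semantic discussed above.
        configuration.frameSemantics.insert(.personSegmentationWithDepth)

        // Visualize the reconstructed mesh.
        arView.debugOptions.insert(.showSceneUnderstanding)
        arView.session.run(configuration)
    }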