Hi, thanks for the reply; I think I wasn't clear about what's going on. I have a window with a RealityView in it. Currently that RealityView presents a Reality Composer scene. When I look at that window in the compiled app, the contents sit physically in front of the actual window, and moving them back in the scene has no effect at all.
Since posting this, I have experimented with calling findEntity on the scene and pulling out a Transform that parents a ModelEntity. Doing that lets me manipulate the depth of the ModelEntity relative to that Transform. But it is surprising that I can't do the same thing with the scene itself; I have to extract individual scene elements to adjust their depth.
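Here's a rough sketch of that workaround, assuming the default visionOS setup with a RealityKitContent package; "Scene" and "ItemPivot" are just placeholder names for my Reality Composer entities:

import SwiftUI
import RealityKit
import RealityKitContent   // assumes the template's content package

struct DepthAdjustedView: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)

                // Moving the whole scene back has no visible effect for me, but moving a
                // child Transform pulled out with findEntity(named:) does change its depth.
                if let pivot = scene.findEntity(named: "ItemPivot") {
                    pivot.position.z = -0.25   // metres; negative z pushes it away from the viewer
                }
            }
        }
    }
}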
Running Hello World in Xcode 15 beta 5 on an M1 Max still crashes, and adding @ObservationIgnored just increases the errors. Ultimately I refactored ViewModel to conform to ObservableObject and converted all instances to @ObservedObject. It required a number of other tweaks too, but those were the primary changes.
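For reference, a minimal sketch of the kind of refactor I mean (the type and property names here are illustrative, not the actual sample code):

import SwiftUI

// ViewModel conforms to ObservableObject instead of using @Observable.
final class ViewModel: ObservableObject {
    @Published var isShowingGlobe = false
    @Published var navigationTitle = "Hello World"
}

struct GlobeToggle: View {
    // Views that previously held the @Observable model now take @ObservedObject
    // (or @StateObject at the point where the model is created).
    @ObservedObject var model: ViewModel

    var body: some View {
        Toggle("Show Globe", isOn: $model.isShowingGlobe)
    }
}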
Extending what has already been suggested, you can modify the nsPredicate of the FetchRequest's wrappedValue to dynamically trigger an update of a SwiftUI List. In the example below, toggling the isFiltered @State causes the List to update, and it reloads the FetchedResults with the filtered or unfiltered predicate.
@FetchRequest private var items: FetchedResults<Item>
@State var isFiltered = false

let element: Element
let filteredPredicate: NSPredicate
let unfilteredPredicate: NSPredicate

init(element: Element) {
    self.element = element
    filteredPredicate = NSPredicate(format: "element == %@ && score > 0.85", element)
    unfilteredPredicate = NSPredicate(format: "element == %@", element)
    self._items = FetchRequest<Item>(entity: Item.entity(),
                                     sortDescriptors: [NSSortDescriptor(keyPath: \Item.name, ascending: true)],
                                     predicate: unfilteredPredicate,
                                     animation: .default)
}

var listItems: FetchedResults<Item> {
    _items.wrappedValue.nsPredicate = isFiltered ? filteredPredicate : unfilteredPredicate
    return items
}

var body: some View {
    List {
        ForEach(Array(listItems.enumerated()), id: \.element) { index, item in
            Text(item.name)
        }
    }
    .toolbar {
        ToolbarItem {
            Button {
                withAnimation {
                    isFiltered.toggle()
                }
            } label: {
                Label("Filter Items", systemImage: isFiltered ? "star.circle.fill" : "star.circle")
            }
        }
    }
}
For future viewers of this question, there is a more detailed description of the correlate method here:
https://developer.apple.com/documentation/accelerate/vdsp/1d_correlation_and_convolution
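For anyone who just wants to see it in use, here's a small example using the vDSP.correlate convenience (the signal and kernel values are arbitrary):

import Accelerate

let signal: [Float] = [0, 0, 1, 2, 3, 2, 1, 0, 0, 0]
let kernel: [Float] = [1, 2, 3, 2, 1]

// Cross-correlate the kernel against the signal; the result covers each
// position where the kernel fully overlaps the signal.
let correlation = vDSP.correlate(signal, withKernel: kernel)
print(correlation)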
Just tested Beta 6, same thing
Just tested Beta 3, same thing
I cannot speak to your difficulties. I have a pre-existing app that successfully uses motion capture on iOS 13 and 14. In my test I ran the same app simultaneously on a device running the iOS 15 beta and on another running iOS 14.6, recorded a video of each, and compared them. Motion capture works on 13.5, 14, and 15, but the new precision in ARKit 5 is supposed to be limited to devices with the A14 chip.
Thanks very much; the MDLTransform looks like just what I want! But the targetAlignment property is actually the problem: when using .any raycasts, most of the results you get back have a targetAlignment of .any, which isn't particularly useful in my situation, since I'm trying to distinguish between horizontal and vertical alignments.
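To illustrate what I'm seeing (a simplified sketch, not my actual code):

import ARKit
import RealityKit

func logAlignments(at point: CGPoint, in arView: ARView) {
    let results = arView.raycast(from: point, allowing: .estimatedPlane, alignment: .any)
    for result in results {
        // With an .any query this frequently prints the .any case rather than
        // .horizontal or .vertical, so it can't tell me which kind of surface I hit.
        print(result.targetAlignment)
    }
}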
One tricky bit I have discovered is that when working with an iPhone, the screen aspect ratio does not match the aspect ratio of the depth buffer, so translating from a buffer position to a screen position requires disregarding some of the buffer width on each side.
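Something along these lines is what I mean, assuming the camera image (and so the depth buffer) is aspect-filled to the screen so that a symmetric band of buffer columns on each side is never displayed; displayAspect is the displayed width over height measured in the buffer's orientation:

import CoreGraphics

func depthBufferColumn(forNormalizedScreenX screenX: CGFloat,
                       bufferWidth: Int,
                       bufferHeight: Int,
                       displayAspect: CGFloat) -> Int {
    // Number of buffer columns that actually map onto the screen.
    let visibleWidth = CGFloat(bufferHeight) * displayAspect
    // Columns to disregard on each side.
    let cropPerSide = (CGFloat(bufferWidth) - visibleWidth) / 2
    let column = cropPerSide + screenX * visibleWidth
    return min(max(Int(column.rounded()), 0), bufferWidth - 1)
}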
I ended up removing my SCNNode instantiations and finding another way to handle my situation, but I have also found that I have problems modifying a node's transform while a UIKit animation is running. It seems to cause a roughly one-second stall in my frame rate every second or so, even when the node in question is not visible. I tried out the SceneKit profiling as you suggested, but at least with my current setup I am not seeing any compile events, and there are no other clear culprits in the event durations.
When working with Reality Composer and RealityKit, Xcode generates code with properties and methods specific to your scene. If you have a Reality Composer file called MyTestApp and a scene called MyScene, and you then create a notification trigger with an identifier of HideObject, the generated code will expose that notification on the scene object in your app. So, for example:
MyTestApp.loadMySceneAsync { result in
    switch result {
    case .success(let scene):
        scene.notifications.hideObject.post()
    default:
        break
    }
}
I store the y position of ARPlaneAnchors as they come in from session(_:didUpdate:), and then I create an ARAnchor for my ground plane with a transform using that y position. If my ground plane changes, I remove that ARAnchor and replace it with a new one. Then, when placing elements on the ground, I use that ARAnchor as a reference (in my case I've added a RealityKit AnchorEntity anchored to that ground ARAnchor).
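A rough sketch of that approach, assuming "ground" means the lowest horizontal plane seen so far (the property names are just illustrative):

import ARKit
import RealityKit

final class GroundTracker: NSObject, ARSessionDelegate {
    private var groundY: Float = .greatestFiniteMagnitude
    private var groundAnchor: ARAnchor?
    let groundEntity = AnchorEntity()

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors where plane.alignment == .horizontal {
            let planeY = plane.transform.columns.3.y
            guard planeY < groundY else { continue }
            groundY = planeY

            // Replace the old ground anchor with one at the new height.
            if let old = groundAnchor { session.remove(anchor: old) }
            var transform = matrix_identity_float4x4
            transform.columns.3.y = planeY
            let newAnchor = ARAnchor(name: "ground", transform: transform)
            session.add(anchor: newAnchor)
            groundAnchor = newAnchor

            // Re-target the RealityKit AnchorEntity at the new ARAnchor.
            groundEntity.anchoring = AnchoringComponent(.anchor(identifier: newAnchor.identifier))
        }
    }
}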
The standard first attack on retain cycles is making sure you've got weak references for any delegates. The next line of attack is ensuring you're using [weak self] in long-lived closures, like NSNotification blocks or sink blocks if you use Combine. I was having lots of retain-cycle difficulties with my ARKit / SceneKit app, and addressing all of those items took care of the problem. Good luck!
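For example, the Combine case looks something like this (the notification and method names are just illustrative):

import Combine
import UIKit

final class SessionController {
    private var cancellables = Set<AnyCancellable>()

    init(center: NotificationCenter = .default) {
        // Capture self weakly so the subscription doesn't keep the controller alive.
        center.publisher(for: UIApplication.didEnterBackgroundNotification)
            .sink { [weak self] _ in
                self?.pauseSession()
            }
            .store(in: &cancellables)
    }

    func pauseSession() { /* pause the AR session here */ }
}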
This document might also be helpful:
https://developer.apple.com/documentation/arkit/validating_a_model_for_motion_capture
This seems to have been resolved by fixing a memory leak which prevented the ARView from properly unloading.