I am trying to determine the corners of a RoomPlan-detected wall using the information available in the ARView session's frame, but can't quite figure out what I'm doing wrong. The corners appear to be correct relative to each other, but the wall appears too large when I render it. (I'm also not sure I'm handling the image rotation correctly, which may be compounding my problem.) Here is the code I currently have, along with a sample image and the resulting image when I pass it through the perspective filter. It is close, but it isn't cropping the walls and floors correctly.
func captureSession(_ session: RoomCaptureSession, didChange room: CapturedRoom) {
    for surface in room.walls {
        if let frame = self.arView.session.currentFrame {
            // Convert the captured pixel buffer to a CGImage.
            var image: CGImage? = nil
            VTCreateCGImageFromCVPixelBuffer(frame.capturedImage, options: nil, imageOut: &image)

            let wallTransform = surface.transform
            let cameraTransform = frame.camera.transform
            let intrinsics = frame.camera.intrinsics
            let projectionMatrix = frame.camera.projectionMatrix
            let width = surface.dimensions.y
            let height = surface.dimensions.x
            let inverseCameraTransform = simd_inverse(cameraTransform)

            // Wall corners in the wall's local coordinate space.
            let wallTopRight = simd_float4(width / 2, height / 2, 0, 1)
            let wallTopLeft = simd_float4(-width / 2, height / 2, 0, 1)
            let wallBottomRight = simd_float4(width / 2, -height / 2, 0, 1)
            let wallBottomLeft = simd_float4(-width / 2, -height / 2, 0, 1)

            // Corners in world space.
            let worldTopRight = wallTransform * wallTopRight
            let worldTopLeft = wallTransform * wallTopLeft
            let worldBottomRight = wallTransform * wallBottomRight
            let worldBottomLeft = wallTransform * wallBottomLeft

            // Corners transformed into camera space and projected.
            let cameraTopRight = projectionMatrix * inverseCameraTransform * worldTopRight
            let cameraTopLeft = projectionMatrix * inverseCameraTransform * worldTopLeft
            let cameraBottomRight = projectionMatrix * inverseCameraTransform * worldBottomRight
            let cameraBottomLeft = projectionMatrix * inverseCameraTransform * worldBottomLeft

            // Perspective divide, then map into image coordinates with the intrinsics.
            let imageTopRight = intrinsics * simd_float3(cameraTopRight.x / cameraTopRight.w, cameraTopRight.y / cameraTopRight.w, cameraTopRight.z / cameraTopRight.w)
            let imageTopLeft = intrinsics * simd_float3(cameraTopLeft.x / cameraTopLeft.w, cameraTopLeft.y / cameraTopLeft.w, cameraTopLeft.z / cameraTopLeft.w)
            let imageBottomRight = intrinsics * simd_float3(cameraBottomRight.x / cameraBottomRight.w, cameraBottomRight.y / cameraBottomRight.w, cameraBottomRight.z / cameraBottomRight.w)
            let imageBottomLeft = intrinsics * simd_float3(cameraBottomLeft.x / cameraBottomLeft.w, cameraBottomLeft.y / cameraBottomLeft.w, cameraBottomLeft.z / cameraBottomLeft.w)

            let topRight = CGPoint(x: CGFloat(imageTopRight.x), y: CGFloat(imageTopRight.y))
            let topLeft = CGPoint(x: CGFloat(imageTopLeft.x), y: CGFloat(imageTopLeft.y))
            let bottomRight = CGPoint(x: CGFloat(imageBottomRight.x), y: CGFloat(imageBottomRight.y))
            let bottomLeft = CGPoint(x: CGFloat(imageBottomLeft.x), y: CGFloat(imageBottomLeft.y))

            if let image {
                // Crop the wall out of the frame with a perspective correction filter.
                let filter = CIFilter.perspectiveCorrection()
                filter.inputImage = CIImage(image: UIImage(cgImage: image))
                filter.topRight = topRight
                filter.topLeft = topLeft
                filter.bottomRight = bottomRight
                filter.bottomLeft = bottomLeft

                if let transformedImage = filter.outputImage {
                    let context = CIContext()
                    if let outputImage = context.createCGImage(transformedImage, from: transformedImage.extent) {
                        let wall = Wall(id: surface.identifier, image: outputImage, surface: surface)
                        self.walls.append(wall)
                    }
                }
            }
        }
    }
}
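For comparison, here is a sketch of a simpler projection step using ARCamera.projectPoint, which maps a world-space point straight to 2D image coordinates without manually combining the projection matrix and intrinsics. It assumes the captured image's pixel size is the right viewport and that .landscapeRight matches the buffer orientation; both are assumptions that would need verifying against the rotation handling mentioned above.

// Sketch only: project one wall corner (top-left here) with ARCamera.projectPoint.
// The corner/world-space math is the same as above; only the projection step differs.
let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                       height: CVPixelBufferGetHeight(frame.capturedImage))
let worldCorner = wallTransform * simd_float4(-width / 2, height / 2, 0, 1)
let cornerPoint = simd_float3(worldCorner.x, worldCorner.y, worldCorner.z)
// .landscapeRight is assumed to match the sensor orientation of capturedImage.
let projectedTopLeft = frame.camera.projectPoint(cornerPoint,
                                                 orientation: .landscapeRight,
                                                 viewportSize: imageSize)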
I'm developing a 3D scanner that runs on an iPad (6th gen, 12-inch).
Photogrammetry with ObjectCaptureSession was successful, but my other attempts have not been.
I've tried Photogrammetry with URL inputs; these are pictures from AVCapturePhoto.
It is strange: if the metadata is not replaced, photogrammetry finishes, but it seems no depthData or gravity info was used (depth and gravity are in separate files). But if the metadata is injected, the attempt fails.
This time I tried Photogrammetry with a PhotogrammetrySample sequence, and it also failed.
The settings are:
camera: back LiDAR camera
image format: kCVPixelFormatType_32BGRA (failed with crash) or HEVC (just failed)
depth format: kCVPixelFormatType_DisparityFloat32 or kCVPixelFormatType_DepthFloat32
photoSettings: isDepthDataDeliveryEnabled = true, isDepthDataFiltered = false, embedded = true
I wonder whether the iPad supports Photogrammetry with PhotogrammetrySamples.
I've already tested some sample code provided by Apple:
https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app
https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera
https://developer.apple.com/documentation/realitykit/taking_pictures_for_3d_object_capture
What should I do to make Photogrammetry successful?
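For context, here is a stripped-down sketch of the PhotogrammetrySample flow being attempted. The pixel buffers are placeholders for the AVCapturePhoto output, and I'm not certain the sample-sequence initializer is even supported on iPadOS, which is part of the question.

import RealityKit

// Minimal sketch (placeholder pixel buffers; not the full pipeline):
// build PhotogrammetrySamples with color + depth and hand them to a session.
func makeSamples(from colorBuffers: [CVPixelBuffer],
                 depthBuffers: [CVPixelBuffer]) -> [PhotogrammetrySample] {
    zip(colorBuffers, depthBuffers).enumerated().map { index, pair in
        var sample = PhotogrammetrySample(id: index, image: pair.0)
        sample.depthDataMap = pair.1   // kCVPixelFormatType_DepthFloat32 / DisparityFloat32
        return sample
    }
}

func runPhotogrammetry(samples: [PhotogrammetrySample], outputURL: URL) throws {
    let session = try PhotogrammetrySession(input: samples,
                                            configuration: PhotogrammetrySession.Configuration())
    try session.process(requests: [.modelFile(url: outputURL, detail: .reduced)])
}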
I am trying to change the color of a usdz asset provided by my designer, but I am unable to. Can someone help me with some sample code?
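Not a definitive answer, but a minimal sketch of one way this is commonly done with RealityKit, assuming the usdz can be loaded as a single ModelEntity (the asset name "chair" is hypothetical): load the model and replace its materials with a SimpleMaterial in the desired color.

import RealityKit
import UIKit

// Rough sketch: load the usdz and swap every material for a SimpleMaterial
// in the new color.
func recolorModel() throws -> ModelEntity {
    let entity = try ModelEntity.loadModel(named: "chair")   // flattens the usdz into one ModelEntity
    let newMaterial = SimpleMaterial(color: .red, roughness: 0.5, isMetallic: false)
    if let materialCount = entity.model?.materials.count {
        entity.model?.materials = Array(repeating: newMaterial, count: materialCount)
    }
    return entity
}

If the usdz contains nested entities, the same material swap would need to be applied while traversing the children.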
When isAutoFocusEnabled is set to true, the entity in the scene keeps shaking.
When isAutoFocusEnabled is set to false, the camera doesn't focus.
How should I set this up to solve the problem?
override func viewDidLoad() {
    super.viewDidLoad()
    arView.session.delegate = self
    guard let arCGImage = UIImage(named: "111", in: .main, with: .none)?.cgImage else { return }
    let arReferenceImage = ARReferenceImage(arCGImage, orientation: .up, physicalWidth: CGFloat(0.1))
    let arImages: Set<ARReferenceImage> = [arReferenceImage]
    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = arImages
    configuration.maximumNumberOfTrackedImages = 1
    configuration.isAutoFocusEnabled = false
    arView.session.run(configuration)
}

func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    anchors.compactMap { $0 as? ARImageAnchor }.forEach {
        let anchor = AnchorEntity(anchor: $0)
        let mesh = MeshResource.generateBox(size: 0.1, cornerRadius: 0.005)
        let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
        let model = ModelEntity(mesh: mesh, materials: [material])
        model.transform.translation.y = 0.05
        anchor.children.append(model)
        arView.scene.addAnchor(anchor)
    }
}
Hi all,
I am trying to use ARWorldTrackingConfiguration to find any faces in my scene. However when I query the scene, using the same type of query one would use in ARFaceTrackingConfiguration, I don't get an Entity back. Here's my code:
var entityCollection: Set<Entity> = []

let faceEntity = scene.performQuery(query1).first {
    $0.components[SceneUnderstandingComponent.self]?.entityType == .face
}
Every single time faceEntity returns as empty. Any help/pointers would be appreciated!
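For completeness, query1 isn't shown above; presumably it is built roughly like this (an assumption, since the original definition is missing):

// Assumed definition of query1: match any entity carrying a
// SceneUnderstandingComponent, then filter by entityType in the closure above.
let query1 = EntityQuery(where: .has(SceneUnderstandingComponent.self))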
Is there any callback for the position reset triggered by the crown button (when you long press it)?
I have a simple visionOS app that uses a RealityView to map floors and ceilings using PlaneDetectionProvider and PlaneAnchors.
I can look at a location on the floor or ceiling, tap, and place an object at that location (I am currently placing a small cube with X-Y-Z axes sticking out at the location).
The tap locations are consistently about 0.35 m off from where I was looking, along the horizontal plane (they are never off vertically).
Has anyone else run into the issue of a spatial tap gesture resulting in a location offset from where they are looking?
And if I move to different locations, the offset is the same in real space, so the offset doesn't appear to be associated with the orientation of the Apple Vision Pro (e.g. it isn't off a little to the left of the headset of where I was looking).
Attached is an image showing this. I focused on the corner of the carpet (yellow circle), tapped my fingers to trigger a tap gesture in RealityView, extracted the location, and placed a purple cube at that location.
I stood in 4 different locations (where the orange squares are), looked at the corner of the rug (yellow circle), and tapped. All 4 purple cubes are placed at about the same location, ~0.35 m away from the look location.
Here is how I captured the tap gesture and extracted the 3D location:
var myTapGesture: some Gesture {
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { event in
            let location3D = event.convert(event.location3D, from: .global, to: .scene)
            let entity = event.entity
            model.handleTap(location: location3D, entity: entity)
        }
}
Here is how I set the position of the purple cube:
func handleTap(location: SIMD3<Float>, entity: Entity) {
    let positionEntity = Entity()
    positionEntity.setPosition(location, relativeTo: nil)
    ...
}
Flow:
User enters the app and starts an ARKit session with world tracking and scene reconstruction.
User closes the app, so we stop the session.
User re-enters the app and we try to run the session again, but the app crashes with the error: "It is not possible to re-run a stopped data provider."
If we remove the code that stops the session, when the user re-enters the app the scene reconstruction doesn't work properly and shows inaccurate meshing data.
Is this a bug or am I doing something wrong here? Any ideas or insight are appreciated
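For reference, a sketch of the pattern in question (names are placeholders): creating fresh provider instances, and a fresh ARKitSession, each time the session is started instead of re-running the stopped ones. Whether recreating the session itself is strictly necessary is an assumption.

import ARKit

final class SessionModel {
    private var session = ARKitSession()

    func start() async throws {
        // Fresh session and fresh providers on every start; stopped data
        // providers cannot be run again (this recreation is the assumed workaround).
        session = ARKitSession()
        let worldTracking = WorldTrackingProvider()
        let sceneReconstruction = SceneReconstructionProvider()
        try await session.run([worldTracking, sceneReconstruction])
    }

    func stop() {
        session.stop()
    }
}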
While WorldTrackingProvider.removeAnchor() completes without error, the WorldAnchor might be back the next time the app is run. This can easily be replicated with the ObjectPlacement sample: just add 10 objects, Remove All, then run the app again. On the first run the anchors might be gone, but run the app a couple more times and the anchors come back.
This becomes a big problem when paired with the issue that anchors are not always found when the app enters Immersive mode. When an anchor is not found, our app adds an anchor. That usually works fine for that run. On the next run, however, the other anchors show up again. Anchors accumulate and it becomes difficult to keep track of them.
Hello, I tried to build something with scene reconstruction, but I want to add occlusion on the surfaces. How can I do that? I tried to create an entity and then apply an OcclusionMaterial, but I received a ShapeResource, and I would need a MeshResource to create a mesh for the entity and then apply the material. Any suggestions?
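If this is on iOS with ARView rather than visionOS (an assumption), a shortcut worth noting is that the reconstructed mesh can occlude virtual content without building any MeshResource by hand; a minimal sketch:

import RealityKit
import ARKit

// Enable occlusion from the reconstructed scene mesh directly on ARView.
func configureOcclusion(for arView: ARView) {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh
    }
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    arView.session.run(config)
}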
I'm in Europe, Vision Pro isn't available here yet. I'm a developer / designer, and I want to find out whether it's worthwhile to try and sell the idea of investing in a bunch of Vision Pro devices as well as in app development for it, to the people overseeing the budget for a project I'm part of. The project is broadly in an "industry" where several constraints apply, most of them are security and safety.
So far, all the Vision Pro discussion I've seen is about consumer-level media consumption and tippy-tappy-app-stuff for a broad user base.
Now, the hardware and the OS features and SDK definitely look like professional niche use cases are possible. But some features, such as SharePlay, will require an Apple ID and an internet connection (I guess?). That, for example, is a strict nope in my case, for security reasons.
I'd like to start a discussion of what works and what doesn't work, outside the realm of watching Disney+ in your condo.
Potentially, this device ticks several boxes with regard to incredibly useful features in general.
very good indoor tracking
pass through with good fidelity
hands free operation
The first point especially, is kind of a really big deal, and for me, the biggest open question. I have multiple make or break questions with regard to this. (These features are not available in the simulator)
For the sake of argument, let's say the app I'm building is Cave Mapper.
It's meant to be used by archeologists inside a cave system where we have no internet, no reliable compass, and no GPS. We have a local network that we can carry around, though. We can also bring lights.
One feature of the app is to build out a catalog of cave paintings and store them in a database. The archeologist wants to walk around, look at a cave painting, and tap on it to capture its position relative to the cave entrance. The next day, another archeologist may work inside the same cave, and they would want to have synchronised access to the same spatial data from the day before. For that:
How good, precise, reliable, stable is the indoor tracking really? Hyped reviewers said it's rock solid, others have said it can drift.
How well do the persistent WorldAnchor objects work?
How well do they work when you're in a concrete bunker or a cave without GPS?
Can I somehow share a world anchor with another user? Is it possible to sync the ARKit map that one device has built with another device?
Other showstoppers?
In case you cannot share your mapped world or world anchors: how solid is the tracking of an ImageAnchor (which we could physically nail to the cave entrance to use as a shared positional / rotational reference)?
Other, practical stuff:
Can you wear Vision Pro with a safety helmet?
Does it work with gloves?
I have the following issue regarding running two AR services. I am trying to develop an app for my master's thesis.
Case 1: I first scan the room using the RoomPlan API. Then I stop the RoomPlan session and start the RealityKit session. When the RealityKit session starts, the camera shows nothing but a black screen.
Case 2: When I had the issue with case one, I tried a separate test app where I had two separate screens for the RoomPlan API and RealityKit, with no relation between them. But as soon as I introduced the RoomPlan API, RealityKit stopped working, showing the same black screen as above.
There might be some state changed by the RoomPlan API that prevents RealityKit from accessing the camera. Let me know if you have any ideas about it, or any sample.
I am using the following stack:
Xcode (latest); SwiftUI; latest OS on the Mac mini and iPhone.
I'm developing a motion tracking app that requires a real-time view of an iPhone camera to capture the person's body. The motion is mapped to a virtual body. Currently this appears overlaid on the person that the iPhone sees.
However, I want to transmit this real time 3D virtual body to a different Apple device, as an AR app, that the other user can place in their environment.
Any suggestions on how I can get this 3d model to be viewable by another user (and maintain live updating based on motion tracking)?
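One possible direction (a sketch, not a tested solution): serialize the joint transforms each frame and send them to the other device over MultipeerConnectivity, and have the receiving AR app apply them to its own copy of the rigged model. The BodyPose type and the surrounding session setup are placeholders.

import MultipeerConnectivity

// Placeholder payload: joint names plus 4x4 transforms flattened to 16 floats each.
struct BodyPose: Codable {
    var jointNames: [String]
    var transforms: [[Float]]
}

// Transport side only; MCSession setup and peer advertising are omitted.
func send(pose: BodyPose, over session: MCSession) {
    guard !session.connectedPeers.isEmpty,
          let data = try? JSONEncoder().encode(pose) else { return }
    // .unreliable keeps latency low for per-frame updates; occasional drops are fine.
    try? session.send(data, toPeers: session.connectedPeers, with: .unreliable)
}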
I have a RealityKit based app in TestFlight and I see the following crash happening twice.
It appears to be coming from the RealityKit framework itself, in cv3d::applecv3d::concurrent_sd::SurfaceDetection::PushAndDetect. Has anyone seen this before, and have you discovered what is causing it?
Thread 32 Crashed:
0 libsystem_kernel.dylib 0x00000001cfd81fbc __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001f271f680 pthread_kill + 268 (pthread.c:1681)
2 libsystem_c.dylib 0x000000019069ab90 abort + 180 (abort.c:118)
3 Recon3D 0x0000000211b8cd7c cv3d::acv::surfacedetection::DepthMapPlaneDetector::detect(cv3d::esn::arr::ArrayView<float const, cv3d::esn::dim::DX<2u>, float const*>, cv3d::esn::arr::ArrayView<float const, cv3d::esn::dim::DX<2u... + 6136 (DepthMapPlaneDetector.cpp:346)
4 Recon3D 0x0000000211bb0fe4 cv3d::acv::surfacedetection::SurfaceDetector::detectAndTrack(cv3d::acv::surfacedetection::SurfaceDetector::DetectAndTrackWithDepthParams const&) + 844 (SurfaceDetector.cpp:635)
5 Recon3D 0x000000021142fd24 cv3d::applecv3d::concurrent_sd::SurfaceDetection::PushAndDetect(cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle const&) + 2672 (SurfaceDetection.cpp:645)
6 Recon3D 0x00000002114678ec cv3d::kit::concurrency::detail::ProcessorInputMessageHandlingStrategy<cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle, std::experimental::expected<cv3d::applecv3d::concurrent_sd::Surf... + 92 (ProcessorInputMessageHandlingStrategy.h:136)
7 Recon3D 0x00000002114675b4 std::__1::__function::__func<void cv3d::kit::concurrency::detail::Processor<cv3d::applecv3d::concurrent_sd::InputSemanticsWithDepthBundle, std::experimental::expected<cv3d::applecv3d::concurrent_sd... + 184 (function.h:356)
8 Recon3D 0x0000000211794330 void std::__1::__invoke_void_return_wrapper<void, true>::__call<std::__1::future<void> cv3d::esn::thread::IWorkQueue::DispatchAsync<void>(std::__1::function<void ()>&&)::'lambda'()&>(std::__1::futu... + 68 (invoke.h:487)
9 Recon3D 0x0000000212387830 dispatch_async_C_CallBack + 76 (GrandCentralDispatchUtil.cpp:94)
10 libdispatch.dylib 0x00000001905e2300 _dispatch_client_callout + 20 (object.m:561)
11 libdispatch.dylib 0x00000001905e9964 _dispatch_lane_serial_drain + 956 (queue.c:3885)
12 libdispatch.dylib 0x00000001905ea3f8 _dispatch_lane_invoke + 432 (queue.c:3976)
13 libdispatch.dylib 0x00000001905eb6a8 _dispatch_workloop_invoke + 1756 (queue.c:4485)
14 libdispatch.dylib 0x00000001905f5004 _dispatch_root_queue_drain_deferred_wlh + 288 (queue.c:6913)
15 libdispatch.dylib 0x00000001905f4878 _dispatch_workloop_worker_thread + 404 (queue.c:6507)
16 libsystem_pthread.dylib 0x00000001f271b964 _pthread_wqthread + 288 (pthread.c:2629)
17 libsystem_pthread.dylib 0x00000001f271ba04 start_wqthread + 8 (:-1)
Hello there,
Do you know what happens if I access one of the following but the joint is not tracked?
var anchorFromJointTransform: simd_float4x4
The position and orientation of this joint relative to the base joint of the skeleton.
var parentFromJointTransform: simd_float4x4
The transform from the joint to its parent joint’s coordinate system.
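In case the concrete access pattern helps, this is roughly how the joint is read, guarded by isTracked (assuming HandSkeleton.Joint here; the question is what the transform values contain when isTracked is false):

import ARKit

// Only use the transforms when the joint reports isTracked == true.
func indexTipWorldTransform(for handAnchor: HandAnchor) -> simd_float4x4? {
    guard let joint = handAnchor.handSkeleton?.joint(.indexFingerTip),
          joint.isTracked else { return nil }
    return handAnchor.originFromAnchorTransform * joint.anchorFromJointTransform
}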
When I call queryDeviceAnchor in my Billboard system I get transform updates, but I'm unsure how to process them (similar to the Diorama sample app).
Is it a bug that I receive these updates? The documentation says that ARKit data is only provided in a Full Space, so I would expect this not to work at all.
But if that's the case, why am I getting deviceAnchor values in this situation?
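For clarity, this is roughly the call being made inside the system's update (simplified; the provider is assumed to have been started elsewhere):

import ARKit
import QuartzCore

// Query the device anchor for the current time and read its transform if tracked.
func currentDeviceTransform(using worldTracking: WorldTrackingProvider) -> simd_float4x4? {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()),
          deviceAnchor.isTracked else { return nil }
    return deviceAnchor.originFromAnchorTransform
}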
I've recently been working on a visionOS app that uses Core ML to identify specific body parts and display a window with information about the identified body part. Since use of the Vision Pro's cameras is blocked, I'm using an iPhone to perform the image classification and then sending the label to the headset using Multipeer Connectivity. I'd like to display a volume once the user selects a body part. Could my iPhone return enough spatial information for me to fully take advantage of the Vision Pro's mixed reality capabilities?
I was heavily reliant on ARGeoAnchor in my iOS application, and when I started porting the app to visionOS I found there is no equivalent there, which is a huge bummer and a showstopper for launching on Apple Vision Pro.
Is there any technical limitation that didn't allow devs to port this great piece of functionality? Can we expect it to be added in the future visionOS releases?
Hi, I am not sure what is going on...
I have been working on this model for a while in Reality Composer and had no problem testing it that way; it always worked out perfectly.
So I imported the file into a brand new Xcode project: I created a new AR app and used SwiftUI.
I actually did it twice ...
I also tested the version Apple provides with the box. In Apple's version, the app appears, but the part where it tries to detect planes didn't show up. So I am confused.
I found a question that mentions the error messages I am getting, but I am not sure how to get around it:
https://developer.apple.com/forums/thread/691882
//
// ContentView.swift
// AppToTest-02-14-23
//
// Created by M on 2/14/23.
//
import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        return ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        // Load the "Box" scene from the "Experience" Reality File
        //let boxAnchor = try! Experience.loadBox()
        let anchor = try! MyAppToTest.loadFirstScene()

        // Add the box anchor to the scene
        arView.scene.anchors.append(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

#if DEBUG
struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
#endif
This is what I get at the bottom:
2023-02-14 17:14:53.630477-0500 AppToTest-02-14-23[21446:1307215] Metal GPU Frame Capture Enabled
2023-02-14 17:14:53.631192-0500 AppToTest-02-14-23[21446:1307215] Metal API Validation Enabled
2023-02-14 17:14:54.531766-0500 AppToTest-02-14-23[21446:1307215] [AssetTypes] Registering library (/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
2023-02-14 17:14:54.716866-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/suFeatheringCreateMergedOcclusionMask.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.743580-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arKitPassthrough.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.744961-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/drPostAndComposition.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.745988-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arSegmentationComposite.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.747245-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute0.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.748750-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute1.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.749140-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute2.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.761189-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute3.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.761611-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute4.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.761983-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute5.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.762604-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute6.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.763575-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute7.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.764859-0500 AppToTest-02-14-23[21446:1307215] [Foundation.Serialization] Json Parse Error line 18: Json Deserialization; unknown member 'EnableARProbes' - skipping.
2023-02-14 17:14:54.764902-0500 AppToTest-02-14-23[21446:1307215] [Foundation.Serialization] Json Parse Error line 20: Json Deserialization; unknown member 'EnableGuidedFilterOcclusion' - skipping.
2023-02-14 17:14:55.531748-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534559-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534633-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534680-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534733-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534777-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534825-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534871-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534955-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:56.207438-0500 AppToTest-02-14-23[21446:1307383] [Technique] ARWorldTrackingTechnique <0x1149cd900>: World tracking performance is being affected by resource constraints [2]
2023-02-14 17:17:15.741931-0500 AppToTest-02-14-23[21446:1307414] [Technique] ARWorldTrackingTechnique <0x1149cd900>: World tracking performance is being affected by resource constraints [1]
2023-02-14 17:22:07.075990-0500 AppToTest-02-14-23[21446:1308137] [Technique] ARWorldTrackingTechnique <0x1149cd900>: World tracking performance is being affected by resource constraints [1]
I am working with MeshAnchors and I am having trouble getting to the classification of the triangles/faces.
This post references MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there?
I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any structs with this as a property.
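For what it's worth, here is the kind of access I have been attempting, stated as a sketch with several assumptions: that GeometrySource exposes buffer/count/offset/stride the way ARGeometrySource does on iOS, that the classification values are stored as one UInt8 per face, and that MeshAnchor.MeshClassification is the Int-backed enum those values map to.

import ARKit
import Metal

// Sketch under the assumptions stated above: read one UInt8 per face from the
// classifications GeometrySource and map it to MeshAnchor.MeshClassification.
func faceClassifications(of meshAnchor: MeshAnchor) -> [MeshAnchor.MeshClassification] {
    guard let source = meshAnchor.geometry.classifications else { return [] }
    let base = source.buffer.contents().advanced(by: source.offset)
    return (0..<source.count).map { index in
        let raw = base.advanced(by: index * source.stride)
            .assumingMemoryBound(to: UInt8.self).pointee
        return MeshAnchor.MeshClassification(rawValue: Int(raw)) ?? MeshAnchor.MeshClassification.none
    }
}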