I was able to fix the issue on the Unity side by using AddComponent<ARAnchor>() instead of ARAnchorManager.AddAnchor(pose). Unity seems to do things differently in the former approach.
Still, whatever the cause, I believe this is a regression in iOS 18, since it worked fine in iOS 17.
Just realized that these crashes happen only on the very first app start (which is a super bad UX, of course).
Subsequent starts work fine.
The main difference is that our app delays AR initialization on the first app start, because we show a couple of onboarding screens first.
If you want to try the app out, you can find it as "Marbleverse" in the App Store.
The last messages from Unity in the Xcode log are:
UnityARKit: Updating ARSession configuration with <ARWorldTrackingConfiguration: 0x3033ea900 worldAlignment=Gravity lightEstimation=Disabled frameSemantics=None videoFormat=<ARVideoFormat: 0x302861680 imageResolution=(1920, 1440) pixelFormat=(420f) framesPerSecond=(60) captureDeviceType=AVCaptureDeviceTypeBuiltInWideAngleCamera captureDevicePosition=(1)> autoFocus=Enabled environmentTexturing=None wantsHDREnvironmentTextures=Enabled planeDetection=None collaboration=Disabled userFaceTracking=Disabled sceneReconstruction=None maximumNumberOfTrackedImages=0 automaticImageScaleEstimation=Disabled detectionImages=[count: 5, <name="B6682C11-1D0B-F34C-9D56-ED47198E76EA", physicalSize=(0.460, 0.434)>, <name="2402EEA5-7849-3240-9BC2-EA2C74BA965A", physicalSize=(0.300, 0.371)>, <name="87D47FD5-B984-2F4F-8ACC-F139DEAD64ED", physicalSize=(0.400, 0.314)>, <name="D4982507-114D-8D49-9C70-6C513E6F17CF", physicalSize=(0.110, 0.064)>, <name="341728DD-2A44-0C41-AFA3-51C1E9F3B856", physicalSize=(0.430, 0.359)>] appClipCodeTracking=Disabled>
UnityARKit: Updating ARSession configuration with <ARWorldTrackingConfiguration: 0x3033fd700 worldAlignment=Gravity lightEstimation=Disabled frameSemantics=None videoFormat=<ARVideoFormat: 0x302861680 imageResolution=(1920, 1440) pixelFormat=(420f) framesPerSecond=(60) captureDeviceType=AVCaptureDeviceTypeBuiltInWideAngleCamera captureDevicePosition=(1)> autoFocus=Enabled environmentTexturing=None wantsHDREnvironmentTextures=Enabled planeDetection=None collaboration=Disabled userFaceTracking=Disabled sceneReconstruction=None maximumNumberOfTrackedImages=0 automaticImageScaleEstimation=Disabled detectionImages=[count: 6, <name="B6682C11-1D0B-F34C-9D56-ED47198E76EA", physicalSize=(0.460, 0.434)>, <name="2402EEA5-7849-3240-9BC2-EA2C74BA965A", physicalSize=(0.300, 0.371)>, <name="87D47FD5-B984-2F4F-8ACC-F139DEAD64ED", physicalSize=(0.400, 0.314)>, <name="256FA82D-DE1F-CE49-A4FF-5824728C7845", physicalSize=(0.500, 0.834)>, <name="D4982507-114D-8D49-9C70-6C513E6F17CF", physicalSize=(0.110, 0.064)>, <name="341728DD-2A44-0C41-AFA3-51C1E9F3B856", physicalSize=(0.430, 0.359)>] appClipCodeTracking=Disabled>
I am facing the same problems. I am trying to parallelize some tasks and make sure they access a common data structure in a thread-safe manner. With every approach I try, Xcode complains that I am accessing a reference to a captured var in concurrently executing code. Even ChatGPT does not seem to understand Swift's concurrency model; it jumped back and forth between the same (wrong) solutions.
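For reference, here is a minimal sketch of the actor-based pattern that is usually suggested for this; all type and function names below are made up for illustration, not from my actual project:

```swift
import Foundation

// Hypothetical shared state, wrapped in an actor so all access is serialized.
actor ResultStore {
    private var results: [Int: String] = [:]

    func add(_ value: String, for key: Int) {
        results[key] = value
    }

    func snapshot() -> [Int: String] {
        results
    }
}

// Each child task does its work locally and only hands the result to the actor,
// so no mutable state is captured across concurrently executing code.
func processAll(keys: [Int]) async -> [Int: String] {
    let store = ResultStore()
    await withTaskGroup(of: Void.self) { group in
        for key in keys {
            group.addTask {
                let value = "processed \(key)"  // stand-in for the real work
                await store.add(value, for: key)
            }
        }
    }
    return await store.snapshot()
}
```

Whether this maps onto the original problem depends on the data structure, but it at least compiles without the captured-var diagnostic.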
Awesome to hear that! Still, we would definitely want to see the attachments API on iOS as well. Our use case is placing labels on real-world objects.
I think it's pretty ridiculous that you can't even take a picture using an API. :-/
There is no regional constraint. I am using the AVP in Sweden without any issues, although it's not officially available here. However, you have to get it shipped from the States somehow. And if you wear glasses, you need a US prescription as well (there is an online service for this, though). It cost me $2,000 to get the thing shipped and imported, in addition to the device cost. Plus, you need an Apple Silicon Mac. In total, I spent around $10k for all of this, which is pretty insane.
This is a huge limitation IMHO. We definitely need a way to shoot photos or videos in our app as well.
Anyone found a solution to this? I run into this problem every time I switch git branches while Xcode is open. And after I added agvtool to my build phases, it happens on every build. It is so annoying.
That's awesome. Can this be embedded into a native SwiftUI app? We moved away from Unity for visionOS due to the licensing disaster, but would love to integrate a game engine into our app if possible. Reality Composer Pro is pretty limited.
Another weird behavior: Windows and ornaments that have a glass background effect actually use the environment instead of the immersive space to apply the effect.
Maybe this is related to the fact that my immersive space shows a sphere with inverted normals, i.e. I am standing inside that sphere, looking at an image?
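For context, the sphere is set up roughly like this (a simplified sketch; "Panorama" is a placeholder asset name, and the real scene has more going on):

```swift
import RealityKit
import UIKit

// Simplified sketch of the 360° image sphere viewed from the inside.
func makePanoramaSphere() throws -> ModelEntity {
    let mesh = MeshResource.generateSphere(radius: 1000)

    var material = UnlitMaterial()
    material.color = .init(tint: .white,
                           texture: .init(try TextureResource.load(named: "Panorama")))

    let sphere = ModelEntity(mesh: mesh, materials: [material])
    // Negative x-scale flips the winding so the texture is visible from inside the sphere.
    sphere.scale = SIMD3<Float>(-1, 1, 1)
    return sphere
}
```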
I have the same problem. I registered my component and add it to the entity via code. If I print the component right after adding it, I do get the component, including its values. But when the tap gesture is received, the component is gone. I registered the component on app startup, so that should not be the issue. Any idea why the component gets lost when a tap gesture is performed? This is super confusing.
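For reference, here is a stripped-down version of my setup (the component name and values are placeholders, not the real ones):

```swift
import SwiftUI
import RealityKit

// Placeholder component for illustration; the real one has more fields.
// Registered once at app start via MarkerComponent.registerComponent().
struct MarkerComponent: Component, Codable {
    var id: Int = 0
}

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let entity = ModelEntity(mesh: .generateBox(size: 0.2),
                                     materials: [SimpleMaterial()])
            entity.components.set(InputTargetComponent())
            entity.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
            entity.components.set(MarkerComponent(id: 42))
            // Prints the component with its values here, as expected.
            print(entity.components[MarkerComponent.self] as Any)
            content.add(entity)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Here the component is reported as missing. One thing worth checking:
                    // value.entity may be a child of the entity the component was set on.
                    print(value.entity.components[MarkerComponent.self] as Any)
                }
        )
    }
}
```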
Just noticed this also happens if you add a badge to the tab view. Since the badge is rendered next to the text, the tab also appears slightly wider when the bar is shrunk to show icons only.
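A minimal example of what I mean (tab names are arbitrary):

```swift
import SwiftUI

// Minimal repro: the badged tab stays slightly wider than the others
// once the tab bar shrinks to show icons only.
struct ContentView: View {
    var body: some View {
        TabView {
            Text("Feed")
                .tabItem { Label("Feed", systemImage: "list.bullet") }
                .badge(3)

            Text("Settings")
                .tabItem { Label("Settings", systemImage: "gear") }
        }
    }
}
```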
I assume the OP wants to show the spatial photos you can capture with an iPhone 17+.
Same here. Worked with Xcode 15.3 beta 2, but not with Xcode 15.3.