When using PlaneDetectionProvider in visionOS I seem to have hit a limitation: regardless of where the headset is in the space, planes are only detected if they are (as far as I can tell) less than 5m from the world origin. Mapping a room becomes very tricky as a result, because some walls often fall outside that radius, even if you're standing two feet away from a ten-foot wall. It just won't see it. I've picked my way through the documentation but I cannot see any way to extend this distance. Am I missing something?
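For reference, here's roughly the setup I'm running (a minimal sketch of the detection loop, nothing exotic), in case I'm simply configuring the provider wrong:

```swift
import ARKit

// Roughly the detection loop I'm running. Planes more than ~5m from the
// world origin never show up in these updates, no matter where I stand.
let session = ARKitSession()
let planeData = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

func runPlaneDetection() async throws {
    try await session.run([planeData])

    for await update in planeData.anchorUpdates {
        switch update.event {
        case .added, .updated:
            // originFromAnchorTransform is relative to the world origin,
            // which is where the 5m limit appears to be measured from.
            let transform = update.anchor.originFromAnchorTransform
            print("Plane at \(transform.columns.3)")
        case .removed:
            print("Plane removed: \(update.anchor.id)")
        }
    }
}
```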
I'm trying to understand how Apple handles dragging windows around in an immersive space. 3D gestures seem to be only half of the solution: they're great if you're standing still and want to move the window an exaggerated amount around the environment, but if you start walking while dragging, the amplified gesture sends the entity flying off into the distance. Apple's windows seem to transition quickly from one coordinate system to another depending on whether the user is physically moving. If you drag a window and start walking, the movement suddenly matches your speed; when you stop moving, you can push and pull the windows around again like a superhero. Am I missing something obvious about how to copy this behavior? Hello World, which uses the 3D gesture, has the same problem: you can move the world around, but if you walk with it, it flies off. Are they tracking the head movement and, if it has moved more than a certain amount, using that offset instead? Is there anything out of the box that can do this before I try to hack my own solution?
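The closest I've got to copying it is a sketch like the one below: query the head pose from a WorldTrackingProvider each time the drag updates, and only amplify the part of the motion that isn't explained by the head moving. The amplification factor, the assumption that the provider is already running on an ARKitSession, and the assumption that the dragged entity's parent sits at the world origin are all mine, so treat it as a sketch rather than how Apple actually does it.

```swift
import ARKit
import QuartzCore
import RealityKit
import SwiftUI

// Sketch only: damp the amplified drag by the amount the head has moved.
// Assumes `worldTracking` is already running on an ARKitSession.
let worldTracking = WorldTrackingProvider()

func currentHeadPosition() -> SIMD3<Float>? {
    guard let device = worldTracking.queryDeviceAnchor(
        atTimestamp: CACurrentMediaTime()) else { return nil }
    let column = device.originFromAnchorTransform.columns.3
    return SIMD3<Float>(column.x, column.y, column.z)
}

struct DraggableModel: View {
    @State private var startPosition: SIMD3<Float>?
    @State private var startHeadPosition: SIMD3<Float>?
    let amplification: Float = 3.0   // my guess at an "exaggerated" factor

    var body: some View {
        RealityView { content in
            // ... add the entity to drag ...
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    if startPosition == nil {
                        startPosition = entity.position
                        startHeadPosition = currentHeadPosition()
                    }
                    guard let start = startPosition,
                          let startHead = startHeadPosition else { return }

                    // Hand translation since the drag began, in the entity's
                    // parent space (assumed to sit at the world origin).
                    let current = value.convert(value.location3D,
                                                from: .local,
                                                to: entity.parent!)
                    let startPoint = value.convert(value.startLocation3D,
                                                   from: .local,
                                                   to: entity.parent!)
                    let handDelta = current - startPoint

                    // How far the head has moved since the drag began.
                    let headDelta = (currentHeadPosition() ?? startHead) - startHead

                    // Follow walking 1:1, amplify only the remaining hand motion.
                    entity.position = start
                        + headDelta
                        + (handDelta - headDelta) * amplification
                }
                .onEnded { _ in
                    startPosition = nil
                    startHeadPosition = nil
                }
        )
    }
}
```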
I'm getting reports from users that our app, which relies on the camera's reported field of view, is getting the wrong value for the main camera. The ultrawide and telephoto values are correct. It's reporting 39.3, whereas the iPhone 13 Pro Max reports 67.1.
Is this something to do with the crop value of the 48-megapixel sensor?
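For reference, this is roughly how we read the values (a sketch of the discovery and format dump we use to compare cameras), in case we're reading the wrong property:

```swift
import AVFoundation
import CoreMedia

// Roughly how we read the reported FOV for each back camera. On the affected
// devices the wide-angle value comes back as ~39.3 instead of ~67.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera,
                  .builtInUltraWideCamera,
                  .builtInTelephotoCamera],
    mediaType: .video,
    position: .back
)

for device in discovery.devices {
    // videoFieldOfView is the horizontal field of view in degrees.
    print(device.localizedName,
          "active:", device.activeFormat.videoFieldOfView)

    // Dump every format to see whether a cropped 48MP mode explains it.
    for format in device.formats {
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        print("  \(dims.width)x\(dims.height): \(format.videoFieldOfView)°")
    }
}
```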
When we finish a room-scanning session it would be nice to keep the arSession alive so that we can do fine-tuning without all the overhead of room scanning. But if I call stop() and immediately start a separate world-tracking session, the world origin shifts to the new location and rotation of the device. Is there any way to continue the arSession and keep the same origin?
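The closest I've found is something like the sketch below, assuming the iOS 17 additions (RoomCaptureSession(arSession:) and stop(pauseARSession:)) behave the way I hope: own the ARSession myself, hand it to RoomPlan, then keep running it afterwards without resetting tracking.

```swift
import ARKit
import RoomPlan

// Sketch: keep one ARSession alive across the RoomPlan scan and the
// fine-tuning phase so the world origin never shifts.
final class ScanCoordinator {
    let arSession = ARSession()
    lazy var captureSession = RoomCaptureSession(arSession: arSession)

    func startRoomScan() {
        captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    func finishRoomScanAndContinue() {
        // Don't pause the underlying ARSession when RoomPlan stops.
        captureSession.stop(pauseARSession: false)

        // Keep running the same session for fine-tuning. No .resetTracking
        // or .removeExistingAnchors, so the origin stays where it was.
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal, .vertical]
        arSession.run(config, options: [])
    }
}
```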
I understand that shouldAutorotate has been deprecated in iOS 16 and that forcing orientations using UIDevice has also gone away. But it's not clear to me how we can prevent orientation changes using the new API.
I have managed to force orientation changes using requestGeometryUpdate, but nothing I do seems to stop the device from rotating back again after the forced rotation.
I've tried calling setNeedsUpdateOfSupportedInterfaceOrientations on both the current view controller and the navigation controller, but supportedInterfaceOrientations never seems to be called.
supportedInterfaceOrientations is called on the first load, but it seems to be ignored: regardless of what I return from it, the device rotates to all the orientations allowed in the Info.plist.
What trick am I missing?
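For context, this is the sort of thing I've been trying (a sketch; MyViewController and the landscape-only lock are just my test case):

```swift
import UIKit

class MyViewController: UIViewController {

    // Called on first load, but seemingly ignored after that.
    override var supportedInterfaceOrientations: UIInterfaceOrientationMask {
        .landscapeRight
    }

    func forceLandscape() {
        guard let scene = view.window?.windowScene else { return }

        // Forces the rotation, but the device rotates back again afterwards.
        scene.requestGeometryUpdate(.iOS(interfaceOrientations: .landscapeRight)) { error in
            print("Geometry update failed: \(error)")
        }

        // My understanding is these should make UIKit re-query
        // supportedInterfaceOrientations, but it never seems to be called again.
        setNeedsUpdateOfSupportedInterfaceOrientations()
        navigationController?.setNeedsUpdateOfSupportedInterfaceOrientations()
    }
}
```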
When using the sample code for RoomCaptureView, the user is occasionally given instructions to point the camera at the ceiling or floor to complete the scan, but the RoomCaptureSessionDelegate instructions do not provide the same feedback. Adding an ARCoachingOverlayView and connecting it to the arSession gives some prompts, but again not as many as the RoomCaptureView coaching overlay does. Because of this, I can't give people the feedback they need to capture the walls.
RoomCaptureSession.Instruction includes only:
moveCloseToWall
moveAwayFromWall
slowDown
turnOnLight
normal
lowTexture
Is there a better way to get full instructions?
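For reference, this is how I'm surfacing those instructions at the moment (a sketch; ScanViewController and the message strings are mine):

```swift
import RoomPlan

extension ScanViewController: RoomCaptureSessionDelegate {
    func captureSession(_ session: RoomCaptureSession,
                        didProvide instruction: RoomCaptureSession.Instruction) {
        // These six cases are all the feedback the delegate ever gives me;
        // nothing equivalent to "point the camera at the ceiling/floor".
        let message: String?
        switch instruction {
        case .moveCloseToWall:   message = "Move closer to the wall"
        case .moveAwayFromWall:  message = "Move away from the wall"
        case .slowDown:          message = "Slow down"
        case .turnOnLight:       message = "Turn on more lights"
        case .lowTexture:        message = "Point at a more detailed area"
        case .normal:            message = nil
        @unknown default:        message = nil
        }
        // showCoachingMessage(_:) is my own UI helper.
        showCoachingMessage(message)
    }
}
```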
Windows and doors that are vertically aligned seem to confuse the system. If I have a door in a wall with a window above it, or a lower window and an upper window, it seems to give both the same UUID. That means that when we get updates it only adds one surface, and it randomly allocates the transform of one or the other, so the surface flicks back and forth between the two. Sometimes one of the windows seems to have a transform that covers both. Ideally it would return two surfaces, one for each window or door. It works fine as long as no one tips the device up to the higher window; after that it can be hard to get it to register the lower door or window underneath.
When I iterate through the data returned I can occasionally see multiple windows with the same UUID, which suggests why it's getting confused.
I have tried keeping track of the duplicated UUIDs and adding multiple windows, but the transforms seem to be broken.
Is there a simple way of finding out which wall the door or window is connected to?
I can see that if you serialise the CapturedRoom, the JSON does contain a field called parentIdentifier that links them, but there's no mention of it in the documentation.
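In case it's useful, this is the workaround I've been experimenting with: serialise the CapturedRoom and pull parentIdentifier back out through a minimal Codable shim. The field names are just what I can see in the JSON, so this relies on undocumented structure and could break at any point:

```swift
import Foundation
import RoomPlan

// Minimal shim for the fields I can see in the serialised CapturedRoom.
// parentIdentifier isn't in the documented API, so this may break.
struct RoomJSON: Decodable {
    struct Surface: Decodable {
        let identifier: UUID
        let parentIdentifier: UUID?
    }
    let walls: [Surface]
    let doors: [Surface]
    let windows: [Surface]
}

func wallAssignments(for room: CapturedRoom) throws -> [UUID: [UUID]] {
    // CapturedRoom is Codable, so round-trip it through JSON.
    let data = try JSONEncoder().encode(room)
    let decoded = try JSONDecoder().decode(RoomJSON.self, from: data)

    // Group door/window identifiers by the wall they claim as a parent.
    var byWall: [UUID: [UUID]] = [:]
    for opening in decoded.doors + decoded.windows {
        guard let wall = opening.parentIdentifier else { continue }
        byWall[wall, default: []].append(opening.identifier)
    }
    return byWall
}
```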
I have several different apps that I'd like to share a personal Core Data store between, both on the same device and across devices. I've been experimenting with NSPersistentCloudKitContainer, which seems to work, but it can take a while to sync and it's not clear when it has finished. I have also experimented with app groups, where the SQLite file is placed in the app group container so that changes are immediately available to the other apps on the device.
My question is whether it is better to do one or the other, or both. Will two apps open on the same device accessing the shared SQLite cause conflict issues? Do I even need to sync data with CloudKit if I'm already sharing via an app group? Using NSPersistentCloudKitContainer alone means that each app syncs its own local copy of the data, which doesn't seem very efficient, and again I don't know whether it has finished syncing when I open the other app. I have also seen conflicts when I have both apps open at the same time. Is there a foolproof way to prevent conflicts when one app goes into the background?
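For what it's worth, this is the shared-store setup I've been testing (a sketch; the group and model names are placeholders), with persistent history tracking and remote change notifications turned on so each app can spot the other's writes:

```swift
import CoreData

// Sketch of the shared-store setup I'm testing. "group.com.example.shared"
// and "Model" are placeholders for the real identifiers.
func makeSharedContainer() -> NSPersistentCloudKitContainer {
    let container = NSPersistentCloudKitContainer(name: "Model")

    let groupURL = FileManager.default
        .containerURL(forSecurityApplicationGroupIdentifier: "group.com.example.shared")!
        .appendingPathComponent("Shared.sqlite")

    let description = NSPersistentStoreDescription(url: groupURL)

    // Both of these are needed so each app can see (and merge) changes the
    // other app wrote to the same SQLite file.
    description.setOption(true as NSNumber,
                          forKey: NSPersistentHistoryTrackingKey)
    description.setOption(true as NSNumber,
                          forKey: NSPersistentStoreRemoteChangeNotificationPostOptionKey)

    container.persistentStoreDescriptions = [description]
    container.loadPersistentStores { _, error in
        if let error { fatalError("Store failed to load: \(error)") }
    }
    container.viewContext.automaticallyMergesChangesFromParent = true
    container.viewContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
    return container
}
```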
I wonder if anyone has managed to get NSPersistentCloudKitContainer working with a public database and a pre-seeded SQLite store.
Our app has a few thousand records held in Core Data that I'd like to move to CloudKit, so that when we add new data it gets shared with everyone. Currently the only way to do this is by shipping a new version to the App Store with a new SQLite database embedded inside. NSPersistentCloudKitContainer seemed like the way to go.
But I cannot get it to work with pre-seeded data. It works fine if I install it on a device signed into the same Apple ID that added the original data: the data is there as soon as the app is installed, and new data syncs to the device. But if I install the app on a device with a different Apple ID than the original seeder, it deletes all the data and starts downloading it fresh from the server. That would be fine, except the data is critical to the app, so if it isn't there when the app launches the app crashes, and I can't pause the launch until it has synced because I have no way of knowing when it's ready. Is it possible to pre-seed the app with data without requiring it to download from the server? Or does the data in the SQLite store have to be tied to a particular Apple ID?
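For reference, this is roughly the configuration I'm using for the public store, including the seed copy I do before loading it (a sketch; the container identifier and file names are placeholders, and it assumes the seed was written with rollback journaling so a single file is enough):

```swift
import CoreData

// Sketch of the public-database setup. Identifiers are placeholders.
func makePublicContainer() -> NSPersistentCloudKitContainer {
    let container = NSPersistentCloudKitContainer(name: "Model")

    guard let description = container.persistentStoreDescriptions.first else {
        fatalError("Missing store description")
    }

    // Copy the pre-seeded SQLite from the bundle on first launch, before
    // loading the store, so the data is there before any sync happens.
    if let seedURL = Bundle.main.url(forResource: "Seed", withExtension: "sqlite"),
       let storeURL = description.url,
       !FileManager.default.fileExists(atPath: storeURL.path) {
        try? FileManager.default.copyItem(at: seedURL, to: storeURL)
    }

    let options = NSPersistentCloudKitContainerOptions(
        containerIdentifier: "iCloud.com.example.app")
    options.databaseScope = .public
    description.cloudKitContainerOptions = options
    description.setOption(true as NSNumber, forKey: NSPersistentHistoryTrackingKey)

    container.loadPersistentStores { _, error in
        if let error { fatalError("Store failed to load: \(error)") }
    }
    return container
}
```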