I have the iPhone 12 Pro Max and my wife has the most recent iPad Pro. I use both for testing AR apps, and overall I have found the iPhone 12 Pro Max to be the snappier experience.
I don't know how much of that is due to the difference in CPUs (my iPhone has an A14 while the iPad Pro has an A12Z) vs. the number of pixels each has to push to the screen (iPhone 2778-by-1284, iPad 2388-by-1668) vs. different LiDAR generations (Apple doesn't specify this).
While the iPhone is qualitatively snappier to me, the iPad provides a more immersive experience because of the larger screen. So snappier vs. more immersive.
One warning: there have been pretty steady rumors that Apple will release a new iPad Pro very soon.
Another warning: I haven't found an elegant way to convert the ARMeshGeometry Apple provides in an ARMeshAnchor (things that live in ARKit land) to a MeshResource for a ModelComponent (things that live in RealityKit land). I am hoping Apple provides a richer API for MeshResource in the next release.
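If the newer MeshDescriptor API is available to you, a rough bridge is possible, though it's hardly elegant. Here is an untested sketch that copies the ARMeshGeometry buffers by hand; the function name is mine, and the buffer-layout assumptions are noted in the comments:

```swift
import ARKit
import RealityKit

// Untested sketch: copy an ARMeshGeometry's vertex and index buffers into a
// RealityKit MeshDescriptor. Assumes the vertex format is three packed Floats
// and 4-byte indices, which matches what ARKit documents for mesh anchors.
func meshResource(from geometry: ARMeshGeometry) throws -> MeshResource {
    // Vertex positions live in a Metal buffer; walk it using ARKit's
    // reported offset and stride.
    let vertices = geometry.vertices
    var positions: [SIMD3<Float>] = []
    positions.reserveCapacity(vertices.count)
    let vertexBase = vertices.buffer.contents().advanced(by: vertices.offset)
    for i in 0..<vertices.count {
        let p = vertexBase.advanced(by: i * vertices.stride)
                          .assumingMemoryBound(to: Float.self)
        positions.append(SIMD3<Float>(p[0], p[1], p[2]))
    }

    // Faces are triangles: indexCountPerPrimitive (3) indices per face.
    let faces = geometry.faces
    var indices: [UInt32] = []
    let indexCount = faces.count * faces.indexCountPerPrimitive
    indices.reserveCapacity(indexCount)
    let indexBase = faces.buffer.contents()
    for i in 0..<indexCount {
        indices.append(indexBase.advanced(by: i * faces.bytesPerIndex)
                                .assumingMemoryBound(to: UInt32.self).pointee)
    }

    var descriptor = MeshDescriptor()
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}
```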
Another warning: Turning on occlusion with LiDAR-equipped iOS devices is awesome! You won't want to go back to non-LiDAR life afterwards.
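If you haven't tried it, it's a one-liner on the view (sketch; `arView` stands for your RealityKit ARView):

```swift
// Enable LiDAR-driven occlusion so real-world objects hide virtual content.
arView.environment.sceneUnderstanding.options.insert(.occlusion)
```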
Update:
I checked the Entity's initial rotation (simd_quatf) and the destination rotation used in move(to:relativeTo:duration:timingFunction:), and then I checked the Entity's rotation after the animation completed.
The Y component of the axis after the animation completed has flipped (and the angle has changed accordingly).
Here is what I get "normally" (or at least what I expected): the final angle and axis are the same as the destination I gave to the animation function:
Initial Angle: 171.89, Vector: 0.00, 1.00, 0.00
Destination Angle: 206.26, Vector: 0.00, 1.00, 0.00
Final Angle: 206.26, Vector: 0.00, 1.00, 0.00
Here is what I get just prior to the unexpected behavior:
Initial Angle: 206.26, Vector: 0.00, 1.00, 0.00
Destination Angle: 240.64, Vector: 0.00, 1.00, 0.00
Final Angle: 119.36, Vector: 0.00, -1.00, 0.00
Note the Y component of the axis has flipped, and the angle is 119.36 = 360 − 240.64.
I had not counted on the Y axis flipping. The two representations are equivalent, though: rotating by θ about an axis v is the same rotation as rotating by 360° − θ about −v, so the final orientation is still correct even though its components differ.
I guess the answer is not to assume the axis (and angle) after the animation completes are the same as the axis (and angle) supplied in the destination Transform to move(to:relativeTo:duration:timingFunction:).
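If you do need to check whether the animation landed where you asked, comparing quaternions as rotations (rather than component-by-component) sidesteps the flip. A minimal sketch:

```swift
import simd

// Two unit quaternions encode the same rotation exactly when their dot
// product is ±1; taking the magnitude ignores the q vs. -q (flipped axis,
// complementary angle) ambiguity seen after move(to:) completes.
func sameRotation(_ a: simd_quatf, _ b: simd_quatf,
                  tolerance: Float = 1e-4) -> Bool {
    abs(simd_dot(a.normalized.vector, b.normalized.vector)) > 1 - tolerance
}
```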
Figured it out: the AnchorEntity's anchorIdentifier property holds a UUID that I can use to find the original ARAnchor.
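Roughly like this (sketch; `session` stands for the running ARSession):

```swift
// Map an AnchorEntity back to the ARAnchor it was created from.
if let id = anchorEntity.anchorIdentifier,
   let original = session.currentFrame?.anchors.first(where: { $0.identifier == id }) {
    // `original` is the ARKit-side ARAnchor; use its transform, name, etc.
}
```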
Thanks, maxxfrazer. That got me pointed in the right direction. I also found your MultipeerHelper project, which has helped me understand a little better how the pieces fit together in a local environment.
I don't know if Apple will answer this, but from my experience using a number of devices, I suspect Apple uses a combination of:
Standard wide lens
Ultra wide lens
LiDAR sensor
and merges all this data to understand the world around it and the device's location in that world. The more sensors Apple can combine on a given device, the more effective the tracking, and the better the experience.
A device with a single standard wide lens will do (but you do need to move the device side-to-side to help it triangulate depth). Wide + Ultra wide lens is better. Wide + Ultra wide + LiDAR is pretty sweet.
Follow up:
I think (???) I figured it out!!
When selecting photos in the photo picker, if the photo is from a shared album, the GPS information isn't there. But if I find the original photo, it *does* have the GPS information.
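For anyone hitting the same wall, the GPS data seems to come through only when you can get at the PHAsset itself. A sketch of the lookup (assuming the picker is configured with the shared photo library so assetIdentifier is populated):

```swift
import PhotosUI
import Photos
import CoreLocation

// Sketch: look up the GPS location for a picked photo via its PHAsset.
// Requires PHPickerConfiguration(photoLibrary: .shared()) so that
// result.assetIdentifier is non-nil, plus photo library permission.
func location(for result: PHPickerResult) -> CLLocation? {
    guard let id = result.assetIdentifier else { return nil }
    let assets = PHAsset.fetchAssets(withLocalIdentifiers: [id], options: nil)
    return assets.firstObject?.location
}
```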
Will continue testing.
I've been experiencing the same issue: only one animation is available on the Entity for a USDZ model, despite verifying there are multiple animations in the file. I've seen this on both macOS and iOS.
I've opened an issue FB8272442 in the hope that the squeaky wheel gets the grease.
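For reference, this is how I'm checking (sketch; "toy_robot" is a hypothetical stand-in for my actual asset):

```swift
import RealityKit

// Sketch: a USDZ authored with several animations still reports only one.
let model = try Entity.load(named: "toy_robot")
print(model.availableAnimations.count) // 1, even though the file has more
```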
I have found the initial transform value of the Entity returned has the scale property for x, y, and z set to 0.01. (I can't remember for sure, but the second-level entities' transforms might need to be reset too.) What I did was reset the first entity's transform to the default transform. Try something like

```swift
myUsdzEntity.transform = Transform()
```

and see if that fixes it. If that almost gets you there but not quite, try resetting the second level of entities too:

```swift
myUsdzEntity.transform = Transform()
for child in myUsdzEntity.children {
    child.transform = Transform()
}
```
Thanks. Also, I did not know about the &time=<seconds> argument for a WWDC video URL. I will have to take advantage of that in my notes. Double win!
I was looking for a way to do this within a program, but I could not find anything. For now I just use iOS's built-in screen recorder: https://support.apple.com/en-us/HT207935
FYI: I just ran across the same problem today. When trying to install my applications (one with a network extension and one with an endpoint extension), both showed "Placeholder Developer".
In my case, there were also two applications pending approval. One was Dropbox (which showed the Dropbox name), and the other was my program (showing "Placeholder Developer").
I'll file a bug report too (squeaky wheel).
Just confirming, if someone runs across this discussion trying to solve a similar problem: adding my <teamID>.<endpoint system extension BundleID>.xpc as an entry to the com.apple.security.temporary-exception.mach-lookup.global-name array in my entitlements allowed me to turn the sandbox back on. So I'm happy there. Not looking forward to battling for the exception with the Mac App Store review team, though. 😉
Todd
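P.S. For anyone searching later, the entry in the .entitlements plist looks roughly like this (TEAMID and the bundle ID below are placeholders for your own values):

```xml
<key>com.apple.security.temporary-exception.mach-lookup.global-name</key>
<array>
    <!-- Placeholder: substitute your team ID and extension bundle ID. -->
    <string>TEAMID.com.example.myapp.endpointextension.xpc</string>
</array>
```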
When I used a VM before (Parallels), I was not seeing the raw network packets (which I also use). Back then I was only processing IPv4, so maybe all the traffic was IPv6 at the VM and then bridged to IPv4 before going out on the physical network. One of these days I'll go back and check.
Has anyone processed raw packets via NEFilterPacketProvider in a VM? If so, which VM system were you using?
I look forward to Eskimo's response, but I finally threw in the towel and wrapped my endpoint security extension in a GUI app. That is how I could add the provisioning profile to the system extension. Basically, I started with a network system extension, ripped out the network parts, and replaced them with the endpoint system extension.
A related question I have: how are people distributing these system extensions in enterprises? So far I've needed to have the user (me) confirm that the extension is loaded (for both network and endpoint extensions) and then confirm again when the network extension wrapper app connects to the network extension. Can JAMF or related tools spare the end user from having to go through these steps in an enterprise?
Thanks,
Todd
I got this when it was HTTPS traffic from Apple's WebKit library. If you try Chrome, you will probably have a source port (I haven't verified this). The good news is that when the local port is 0, the flow's .url field is usually populated, so you can pick up interesting things from that.
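In practice I pick it up in handleNewFlow; a trimmed sketch (the subclass name here is hypothetical):

```swift
import NetworkExtension

// Sketch: when the local port is 0 (WebKit HTTPS), fall back to flow.url.
class MyFilterDataProvider: NEFilterDataProvider {
    override func handleNewFlow(_ flow: NEFilterFlow) -> NEFilterNewFlowVerdict {
        if let url = flow.url {
            // The URL is often the only useful identifier for these flows.
            NSLog("Flow URL: %@", url.absoluteString)
        }
        return .allow()
    }
}
```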