Does Apple have any documentation on using Reality Converter to convert FBX to USDZ on an M1 Max?
I'm trying to convert an .fbx file to USDZ with Apple's Reality Converter on an M1 Mac (macOS 12.3 Beta), but everything I've tried so far has failed.
When I try to convert .fbx files on my Intel-based iMac Pro, it succeeds.
Following some advice on these forums, I tried installing all of these packages from Autodesk:
https://www.autodesk.com/developer-network/platform-technologies/fbx-sdk-2020-0
FBX SDK 2020.0.1 Clang
FBX Python SDK Mac
FBX SDK 2020.0.1 Python Mac
FBX Extensions SDK 2020.0.1 Mac
Still no joy.
I have a workaround: I still have my Intel-based iMac Pro. But I'd like to switch over to my M1 Mac for all my development.
Any pointers?
Note: I couldn't get the usdzconvert command-line tool to work on my M1 Mac either; /usr/bin/python isn't there (macOS 12.3 removed the bundled Python 2).
I don't know if this is an issue with Apple's Reality Converter app or Blender (I'm using 3.0 on the Mac), but when I export a model as .obj and import it to Reality Converter, the scale is off by a factor of 100.
That is, the following workflow creates tiny (1/100 scale) entities:
Blender > [.obj] > Reality Converter > [USDZ]
But this workflow is OK:
Blender > [.glb] > Reality Converter > [USDZ]
Two workarounds are:
export as .glb/.gltf instead of .obj
when exporting as .obj, set the scale factor to 100 in Blender's export settings
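A third option I've sketched is to compensate in RealityKit after loading the converted USDZ (a minimal sketch; the asset and anchor names are placeholders):

import RealityKit

// Load the converted USDZ and undo the 1/100 scale in code.
if let terrain = try? ModelEntity.loadModel(named: "terrain") {   // "terrain" is a placeholder asset name
    terrain.scale *= SIMD3<Float>(repeating: 100)                 // compensate for the .obj scale issue
    anchor.addChild(terrain)                                      // anchor: an existing AnchorEntity
}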
Is this a known issue, or am I doing something wrong?
If it is an issue, should I file a bug report?
I am finding some unexpected behavior with lights I've been adding to a RealityKit scene.
For example, I created 14 PointLights, but only 8 appeared to be used to illuminate the scene.
In another example, I created 7 PointLights and 7 SpotLights, and the frame rate dropped quite a bit.
Are lights computationally expensive, causing some adaptive behavior by RealityKit?
Should I be judicious in my use of lights for a scene?
(Note: I set arView.environment.lighting.resource to a skybox with a black image; my goal was to completely control the lighting. I don't know if that added to the computational load.)
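For context, this is roughly how I set up the skybox and one of the point lights (a minimal sketch; names like blackSkybox and lightAnchor are placeholders):

import RealityKit

// lightAnchor is assumed to be an AnchorEntity already added to arView.scene.
// Black skybox so image-based lighting contributes essentially nothing.
if let blackEnvironment = try? EnvironmentResource.load(named: "blackSkybox") {
    arView.environment.lighting.resource = blackEnvironment
}

// One of the point lights added to the scene.
let pointLight = PointLight()
pointLight.light.intensity = 20_000        // lumens
pointLight.light.attenuationRadius = 3.0   // meters
pointLight.position = [0, 1.5, 0]          // meters, relative to lightAnchor
lightAnchor.addChild(pointLight)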
In a previous post I asked if 100,000 polygons is still the recommended size for USDZ Quick Look models on the web. (The answer is yes.)
But I realize my polygons are 4-sided and not planar, so each one has to be broken down into 2 triangles when rendered.
Given that, should I shoot for 50,000 polygons (i.e., 100,000 triangles)?
Or does the 100,000 polygon statistic already assume polygons will be subdivided into triangles?
(The models are generated from digital terrain (GeoTIFF) data, not a 3D modeling tool)
I've recently added some USDZ files to a web page, and I can download and display them fine via AR Quick Look on an iPhone or iPad.
I've noticed full occlusion is active in the AR view.
Over time, the device appears to heat up and the frame rate drops.
Are there any properties I can set in the <a rel="ar" ...> HTML tag to control things like occlusion or autofocus (i.e., turn them off)?
RealityKit has a CollisionFilter to determine which entities can collide with which other ones.
Perchance, is there something similar for OcclusionMaterial?
In effect, I'd like a model with an OcclusionMaterial to "occlude this entity but not that entity".
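For reference, this is the kind of per-entity control CollisionFilter gives for collisions (the group and mask values here are just illustrative); I'm hoping for an equivalent for OcclusionMaterial:

import RealityKit

// entityA is assumed to be a ModelEntity that already has a CollisionComponent.
let groupA = CollisionGroup(rawValue: 1 << 0)
let groupB = CollisionGroup(rawValue: 1 << 1)

// entityA only collides with entities whose collision group is in groupB.
entityA.collision?.filter = CollisionFilter(group: groupA, mask: groupB)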
I've been creating USDA files manually and converting them to USDZ via Apple's usdzconvert tool (version 0.64).
In the file I set the unit size to be 1 meter:
metersPerUnit = 1.0
but the USDZ keeps the unit size at 1 cm.
Apple's Reality Converter does process the metersPerUnit metadata, so that is a viable workaround for me. But sometimes I'd prefer the command-line tool.
Is there an update to the usdzconvert tool? I couldn't find one.
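For reference, this is where I'm setting it, in the layer metadata at the top of the .usda file (minimal excerpt):

#usda 1.0
(
    metersPerUnit = 1.0
)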
I'm looking for documentation/guidance on USDZ and scene model sizes. My focus is on RealityKit-based apps.
I found the 2018 WWDC presentation
Integrating Apps and Content with AR Quick Look
which mentions a rule of thumb for a USDZ model of:
100K polygons
One set of 2048x2048 textures
10 seconds of animations
Are these numbers still recommended in 2021?
Are these numbers just for Quick Look, or do they apply to RealityKit-based apps too?
If a RealityKit scene loads several USDZ models, should the cumulative number of polygons across all models be 100K, or is the 100K number on a per-model basis?
The talk mentioned AR Quick Look will dynamically downsample textures for devices with less memory. Does RealityKit do this as well?
If so, can I err on the side of providing a larger texture (e.g., 4096 x 4096) and trust RealityKit to downsample as appropriate for me?
(I am hoping there is some documentation covering questions like this)
During my first external test using TestFlight for an In-App Purchase (iPadOS), the user was:
(1) Prompted for their Apple ID & password
(2) Prompted for their password a second time
(3) (User believes) prompted for their password a third time
Are these multiple prompts for their password expected behavior, or have I done something wrong?
When I create an AnchorEntity like this:
let entityAnchor = AnchorEntity(plane: [.horizontal], classification: [.floor], minimumBounds: [0.2,0.2])
and add a USDZ model to it, I get a nice ground shadow.
But if I create an AnchorEntity using an ARAnchor like this:
let entityAnchor = AnchorEntity(anchor: anchor)
I do not get that nice ground shadow.
Is there a way to get the ground shadow I get from a plane anchor, but with an AnchorEntity where I can specify where it goes or attach it to an ARAnchor?
[Note: for LiDAR devices, I can get a nice shadow using
config.sceneReconstruction = .mesh
arView.environment.sceneUnderstanding.options.insert(.occlusion)
arView.environment.sceneUnderstanding.options.insert(.receivesLighting)
but creating the environment mesh is computationally expensive. I'd like to avoid that if possible.]
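One alternative I've been sketching (I don't know if this is the intended way to get the grounding shadow back) is to attach a shadow-casting DirectionalLight to the ARAnchor-based AnchorEntity:

import RealityKit

// entityAnchor is the AnchorEntity created from the ARAnchor above.
let sun = DirectionalLight()
sun.light.intensity = 5_000
sun.shadow = DirectionalLightComponent.Shadow(maximumDistance: 5, depthBias: 2)
sun.look(at: .zero, from: [0, 2, 1], relativeTo: entityAnchor)   // aim down toward the anchor's origin
entityAnchor.addChild(sun)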
When testing In-App Purchases in Xcode with a .storekit file, I can delete past purchase transactions, so I can re-test the purchase experience.
I've switched to using a Sandbox tester and made purchases. However, I cannot find a way to delete previous purchase transactions made in the sandbox so I can re-run the tests.
Is this possible?
I've been playing with Apple's StoreKit 2 demo code (buying the cars, subscriptions, ...), and sometimes when I purchase a car, one or more of the other buttons visually flip state (e.g., the purchased checkmark changes back to the price).
Leaving the StoreView and returning to it shows the correct state for each of the buttons.
I am using the StoreKit configuration file Products.storekit (selected in the scheme), so I'm testing in Xcode.
I get this in both the simulator and on my actual phone.
The issue is random. The vast majority of the time everything works perfectly.
Is anyone else seeing this issue?
Does anyone know how to address it?
Dev environment:
Xcode 13.0 beta 5 (13A5212g)
macOS 12.0 Beta (21A5534d)
Mac mini (M1, 2020)
During testing of my app, the frames per second (shown either in the Xcode debug navigator or via ARView's .showStatistics debug option) sometimes drops by half and stays down there.
This low FPS will continue even when I kill the app completely and restart.
However, after giving my phone a break, the frame rate returns to 60 fps.
Does ARKit automatically throttle down FPS when the device gets too hot?
If so, is there a signal my program can catch from ARKit or the OS that can tell me this is happening?
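The only program-level signal I've found so far is the general-purpose thermal state from ProcessInfo, not anything ARKit-specific. A minimal sketch of observing it:

import Foundation

// Observe OS-level thermal state changes; .serious/.critical seem to coincide
// with the frame-rate drop described above.
NotificationCenter.default.addObserver(
    forName: ProcessInfo.thermalStateDidChangeNotification,
    object: nil,
    queue: .main
) { _ in
    switch ProcessInfo.processInfo.thermalState {
    case .serious, .critical:
        print("Device is thermally constrained; expect a reduced frame rate")
    default:
        print("Thermal state is nominal or fair")
    }
}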
Does RealityKit have an API to test if a ModelEntity (or its CollisionComponent) is currently visible on the screen?
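The closest stand-in I've come up with is projecting the entity's position into view coordinates, which ignores occlusion and the model's bounds (a rough sketch):

import RealityKit
import UIKit

// Rough visibility check: is the entity's origin inside the view's bounds?
func isRoughlyOnScreen(_ entity: Entity, in arView: ARView) -> Bool {
    guard let screenPoint = arView.project(entity.position(relativeTo: nil)) else {
        return false   // e.g., behind the camera
    }
    return arView.bounds.contains(screenPoint)
}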
In ARKit+RealityKit I do a raycast from the ARView's center, then create an AnchorEntity at the result and add a target ModelEntity (a flattened cube) to the AnchorEntity.
guard let result = session.raycast(query).first else { return }
let newAnchor = AnchorEntity(raycastResult: result)
newAnchor.addChild(placementTargetEntity)
arView.scene.addAnchor(newAnchor)
I repeat this for each frame update via the ARSessionDelegate session(_:didUpdate:), removing the previous AnchorEntity first.
I use this as a target to let the user know where the full model will be placed when they tap the screen.
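Here is a sketch of that per-frame update (properties like arView, placementTargetEntity, and currentTargetAnchor are from my own code; the query construction is shown inline for completeness):

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let center = CGPoint(x: arView.bounds.midX, y: arView.bounds.midY)
    guard let query = arView.makeRaycastQuery(from: center,
                                              allowing: .estimatedPlane,
                                              alignment: .horizontal),
          let result = session.raycast(query).first else { return }

    // Remove the previous target anchor before adding the new one.
    if let previous = currentTargetAnchor {
        arView.scene.removeAnchor(previous)
    }
    let newAnchor = AnchorEntity(raycastResult: result)
    newAnchor.addChild(placementTargetEntity)
    arView.scene.addAnchor(newAnchor)
    currentTargetAnchor = newAnchor
}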
This works fine under iOS 14, but I get strange results with iPadOS 15: two different placements are created on different screen updates, offset and slightly rotated from each other.
Has anyone else had issues with raycast() or creating an AnchorEntity from the result?
Is the use of session(_:didUpdate:) via ARSessionDelegate to update virtual content considered bad style now? (I noticed in the WWDC21 sessions they used a different mechanism to update their virtual content.)
(If any Apple engineers read this, I filed a feedback with sample code and video of the issue at FB9535616)