
Spatial Video in AVPlayerController vs Photos app
Hi,

The Destinations sample code project and the related WWDC talk on spatial video seem to imply that the video player will show 3D stereoscopic video. However, when viewing spatial video in the Photos app (in the simulator and in the marketing material) there's a vignetting, a portal kind of effect. Without access to a device, I'm wondering whether my spatial videos are actually being played as 3D spatial videos in the AVPlayerController, since I'm not seeing that vignetting. My thinking is that the vignetting is a Photos-specific visual effect, but I wanted to double-check that I'm not misunderstanding something about AVPlayerController.

Does anyone know whether spatial videos played through AVPlayerController will appear stereoscopic even though the vignetting isn't there? Has anyone tried the Destinations sample code on a device and can confirm that it plays spatial videos? Thanks!
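For context, this is roughly the setup I mean, as a minimal sketch only: it assumes AVKit's AVPlayerViewController (what I'm loosely calling AVPlayerController above) wrapped for SwiftUI, and a placeholder spatialVideoURL pointing at an MV-HEVC spatial video file. It's not the Destinations project's actual code.

```swift
import SwiftUI
import AVKit

// Minimal sketch: wrap AVPlayerViewController for SwiftUI on visionOS.
// `spatialVideoURL` is a placeholder for an MV-HEVC spatial video file.
struct SpatialVideoPlayer: UIViewControllerRepresentable {
    let spatialVideoURL: URL

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.player = AVPlayer(url: spatialVideoURL)
        controller.player?.play()
        return controller
    }

    func updateUIViewController(_ uiViewController: AVPlayerViewController,
                                context: Context) {}
}
```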
Replies: 2 · Boosts: 1 · Views: 1.2k · Jan ’24
Custom render pass texture maps with LayerRenderer pipeline
Hi,

Re: WWDC2023-10089, I have a question about creating texture maps during pipeline setup.

In traditional MTKView setups it's easy to query the view's size to know what the dimensions of a texture map should be. But after digging through all the documentation on these classes, I don't see any way to find that information for a LayerRenderer. There's the drawable, and querying that, and then maybe getting the info from the default render textures, but I'm trying to set these textures up when I set up the pipelines, so I don't think that will work (the render loop won't have started yet). Secondly, I'm wondering whether foveation means there's even more to consider when creating these kinds of auxiliary render passes.

Basically, for example's sake, imagine you have a working visionOS Metal pipeline, but now you want to add a special render pass to do some effects. Typically you'd create a texture map to store that pass, calculate the work in a fragment shader, etc., and then use another pipeline state to mix that with the default rendering pipeline.

Any help appreciated, thanks!
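The workaround I'm considering looks roughly like the sketch below: instead of allocating the auxiliary texture at pipeline-setup time, create (or re-create) it lazily once a drawable is available, so its size can come from the drawable's color texture. Names like ensureAuxiliaryTexture and the .rgba16Float format are illustrative assumptions on my part, not anything from the session.

```swift
import Metal
import CompositorServices

// Sketch: size an auxiliary render target from the LayerRenderer drawable,
// created lazily because the size isn't known at pipeline-setup time.
var auxiliaryTexture: MTLTexture?  // illustrative name

func ensureAuxiliaryTexture(for drawable: LayerRenderer.Drawable,
                            device: MTLDevice) -> MTLTexture? {
    // The drawable's color texture carries the per-frame render dimensions.
    let color = drawable.colorTextures[0]
    if let existing = auxiliaryTexture,
       existing.width == color.width,
       existing.height == color.height {
        return existing
    }
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rgba16Float,   // assumed format for an effects pass
        width: color.width,
        height: color.height,
        mipmapped: false)
    // For a layered stereo drawable you'd presumably also want
    // descriptor.textureType = .type2DArray with a matching arrayLength.
    descriptor.usage = [.renderTarget, .shaderRead]
    descriptor.storageMode = .private
    auxiliaryTexture = device.makeTexture(descriptor: descriptor)
    return auxiliaryTexture
}
```

That still leaves my foveation question open, though: as far as I understand, a foveated drawable brings a rasterization rate map into play, and this sketch doesn't account for that at all.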
Replies: 1 · Boosts: 0 · Views: 496 · Feb ’24
realitytool fails with killed-9 on any moderately complex Reality Composer Pro project
I'm trying to build a project containing a moderately complex Reality Composer Pro project, but I'm unable to because my Mac mini (2023, 8GB RAM) keeps running out of memory. I'm wondering if there are any known memory leaks in realitytool; the tool is taking up 20-30GB (!) of memory during builds.

I have a Mac Pro for content creation, which is why I didn't go for more RAM on the mini: it was supposed to just be a build machine for Apple Silicon compatibility, since my Pro is Intel. But I'm kind of stuck here.

I have a scene that builds fine, but any time I add a USD with lots of instances or a lot of geometry (in this case a tree asset), I run into the memory issue. I've tried greatly simplifying the model, but even a 2MB USD results in the crash. I'm failing to see how adding a 2MB asset would cause realitytool's memory to balloon so much during builds.

If someone from Apple is willing to look, I can provide the scene, but it's proprietary so I can't just post it publicly here.
Replies: 2 · Boosts: 0 · Views: 521 · Apr ’24
Make subdivision surfaces work in Reality Composer Pro
Information is light on the new subdivision support for USD models in RealityKit, and so far I have been unable to get one of my models to actually subdivide in Reality Composer Pro or Quick Look (or when viewing on Vision Pro). I've exported a few test models from Houdini and verified that they contain 'uniform token subdivisionScheme = "catmullClark"'. I've started with some very lightweight, basic meshes, but when viewed they simply look like polygonal meshes; no subdividing occurs at runtime. Is there a trick to getting them to actually smooth out?
Replies: 5 · Boosts: 0 · Views: 267 · Oct ’24