In the Reality Composer Pro shader graph I don't see a way to select which UV set to use, or to specify custom UVs, when sampling textures. Is there a way to do this?
Reality Composer had some rudimentary animation. I have not been able to find an equivalent in the Pro version. Are there any planned? I'd rather avoid the steep learning curve of the available high end tools for some basic transform animations.
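For basic transform animations there is at least a code-side workaround: RealityKit can interpolate a move for you with move(to:relativeTo:duration:). A minimal sketch, with a placeholder entity, offset, and duration:

import RealityKit

// Slides an entity 0.3 m along X over two seconds.
// The entity, offset, and duration are placeholder values.
func slideEntity(_ entity: Entity) {
    var target = entity.transform                      // start from the current transform
    target.translation += SIMD3<Float>(0.3, 0, 0)      // move 0.3 m to the right

    // RealityKit interpolates the transform; no timeline editor needed.
    entity.move(to: target, relativeTo: entity.parent, duration: 2.0, timingFunction: .easeInOut)
}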
Hi,
I have made a simple textured 3D planet in Reality Composer Pro, and my question is how I can view it in the Vision Pro simulator.
I already have everything installed and the simulator itself works fine, but I don't know how to take what I build in Reality Composer Pro and run it in the simulator.
Any comment will be very much appreciated.
Thank you.
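A minimal sketch of the usual route: add the Reality Composer Pro scene to a RealityView in a visionOS app target and run that target in the simulator. "Scene" and realityKitContentBundle assume the default Xcode visionOS template; your planet scene's name may differ.

import SwiftUI
import RealityKit
import RealityKitContent   // the Swift package that Reality Composer Pro content lives in

struct PlanetView: View {
    var body: some View {
        RealityView { content in
            // "Scene" is the default scene name in the RCP package; use your planet scene's name.
            if let planet = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(planet)
            }
        }
    }
}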
Hello, I am trying to do automation on Apple Vision Pro, I use these capabilities
{
  "platformName": "iOS",
  "appium:platformVersion": "17.0",
  "appium:deviceName": "Apple Vision Pro",
  "appium:automationName": "XCUITest",
  "appium:udid": "EB2BB922-DCAF-44BF-8C15-2EDE677C08DA"
}
but Appium Inspector shows just a black screen and doesn't work properly. There is an issue with WebDriverAgent, so could you help me fix this black-screen issue in Appium Inspector, or suggest what to refactor in the WebDriverAgent project?
Hello,
I am looking for a way to anchor a webview component to the user, so that it follows their line of sight as they move.
I tried using a RealityView with an AnchorEntity, but it raises the error "Presentations are not permitted within volumetric window scene". Can I anchor the window instead?
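A sketch of one approach, assuming the content runs inside an ImmersiveSpace (anchor entities are not available in plain windows, which is what that error points at). The panel here is a plain Text stand-in for the web content, and the attachment id and offset are placeholders:

import SwiftUI
import RealityKit

struct FollowingPanelView: View {
    var body: some View {
        RealityView { content, attachments in
            let headAnchor = AnchorEntity(.head)            // follows the wearer's head
            if let panel = attachments.entity(for: "panel") {
                panel.position = SIMD3<Float>(0, 0, -1)     // one metre in front of the user
                headAnchor.addChild(panel)
            }
            content.add(headAnchor)
        } attachments: {
            Attachment(id: "panel") {
                Text("Web content goes here")               // stand-in for the webview
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}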
I'm loading a model of a room's interior from Reality Composer Pro in a RealityView in my full immersive scene. The model loads fine, but if I pan the simulator camera around, the model follows the camera rather than remaining stationary and letting the camera move around it. How can I keep the model stationary?
I need to create some spheres that have colors, transparency and text. Color and transparency are very easy but adding text seems non-trivial. I am looking for advice on the best approach.
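A sketch of one approach for the text part: generate a text mesh with MeshResource.generateText and parent it to the sphere. The sizes, colors, and font below are placeholder values:

import RealityKit
import UIKit

// A tinted, semi-transparent sphere with a text label floating above it.
func makeLabeledSphere(text: String) -> Entity {
    let sphere = ModelEntity(
        mesh: .generateSphere(radius: 0.1),
        materials: [SimpleMaterial(color: UIColor.blue.withAlphaComponent(0.5), isMetallic: false)]
    )

    let textMesh = MeshResource.generateText(
        text,
        extrusionDepth: 0.002,
        font: .systemFont(ofSize: 0.05)
    )
    let label = ModelEntity(mesh: textMesh,
                            materials: [SimpleMaterial(color: .white, isMetallic: false)])
    label.position = SIMD3<Float>(0, 0.12, 0)   // just above the sphere

    sphere.addChild(label)
    return sphere
}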
Also, aside from the WWDC video and a few short YouTube videos, is there anywhere to find more in-depth training on Shader Graph?
Thanks
I am trying to implement a feature to play video in a full immersive space but am unable to achieve the desired result. When I run the app in the full immersive space, it shows the video in the center of the screen/window. See screenshot below.
Can you please guide me on how to implement playing video in a full immersive space, just like the image below?
Regards,
Yasir Khan
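A sketch of one common approach, not necessarily what the screenshots show: wrap the video in a VideoMaterial and map it onto a large sphere that surrounds the viewer inside an ImmersiveSpace. The asset name "video.mp4" and the radius are placeholders:

import AVFoundation
import RealityKit
import SwiftUI

struct ImmersiveVideoView: View {
    var body: some View {
        RealityView { content in
            guard let url = Bundle.main.url(forResource: "video", withExtension: "mp4") else { return }
            let player = AVPlayer(url: url)

            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 10),            // big enough to surround the viewer
                materials: [VideoMaterial(avPlayer: player)]
            )
            sphere.scale = SIMD3<Float>(x: -1, y: 1, z: 1)    // flip so the material faces inward

            content.add(sphere)
            player.play()
        }
    }
}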
I've updated my project from beta 7 to beta 8. Now, models outside the screen view are being cut off: they become invisible once they leave the 2D screen's width/height. Depth seems OK.
Is there any property like CSS's overflow: hidden, etc., that I can give to the RealityView so I can see what is outside?
Code is something like this.
import SwiftUI
import RealityKit
import RealityKitContent

struct AnimationTest: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Card", in: realityKitContentBundle) {
                content.add(scene)
                // Offset the loaded scene 0.25 m along X.
                scene.transform = Transform(translation: [0.25, 0.0, 0.0])
            }
            ...
I am no longer able to drag images into the scene for Version 1.0 (393.3) of Reality Composer Pro. This used to work in the older versions.
Is it no longer possible to do it? It was really nice for prototyping; I guess I can file a feedback unless I'm doing something wrong. It was a bit of a PITA to drop images as materials on 3D objects in the past... hope that's not the only way.
I got this version with Xcode Version 15.0 beta 8 (15A5229m)
So there's a "grounding shadow" component we can add to any entity in Reality Composer Pro.
In case it matters, my use case is Apple Vision Pro + RealityKit:
I'm wondering, by default I think I'd want to add this to any entity that's a ModelEntity in my scene... right?
Should we just add this component once to the root transform?
Or should we add it to any entity individually if it's a model entity?
Or should we not add this at all? Will RealityKit do it for us?
Or does it also depend if we use a volume or a full space?
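A sketch of the per-entity approach, assuming the component has to be set on each model entity rather than being inherited from the root (I have not confirmed the inheritance behavior):

import RealityKit

// Recursively add a grounding shadow to every ModelEntity in a scene graph.
func addGroundingShadows(to entity: Entity) {
    if entity is ModelEntity {
        entity.components.set(GroundingShadowComponent(castsShadow: true))
    }
    for child in entity.children {
        addGroundingShadows(to: child)
    }
}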
I have an iOS ARKit app I'm trying to convert to visionOS, but not having luck detecting gestures on my Reality Composer model entities the way I could with ARView. All the documentation I could find on gestures in visionOS seems focused on RealityViews.
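For reference, a sketch of the RealityView-based pattern on visionOS: the entity needs an InputTargetComponent plus a CollisionComponent, and the gesture is attached to the view with .targetedToAnyEntity(). "Card", the bundle, and the collision box size are placeholders from the default template:

import SwiftUI
import RealityKit
import RealityKitContent

struct TappableModelView: View {
    var body: some View {
        RealityView { content in
            if let model = try? await Entity(named: "Card", in: realityKitContentBundle) {
                model.components.set(InputTargetComponent())
                // Placeholder collision hull; size it to your model.
                model.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
                content.add(model)
            }
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    print("Tapped entity:", value.entity.name)
                }
        )
    }
}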
Hello!
My app supports iOS and visionOS in a single target. But when I preview the app on an iOS device/simulator, an error occurs:
The RealityKitContent.rkassets is located in my RealityKitContent package that is linked on visionOS.
It seems that Xcode Preview is ignoring the link settings and attempts to build RealityKitContent.rkassets on iOS.
Reproduce Steps:
Clone my demo project at https://github.com/gongzhang/rkassets-preview-issue-demo
Build the app for iOS (success)
Preview ContentView.swift (failed due to RealityKitContent.rkassets issue)
I've been using the macOS Xcode Reality Composer to export interactive .reality files that can be hosted on the web and linked to, triggering Quick Look to open the interactive AR experience.
That works really well.
I've just downloaded the Xcode 15 beta, which ships with the new Reality Composer Pro, and I can't see a way to export .reality files anymore. It seems that it is only for building content that ships inside native iOS etc. apps, rather than content that can be viewed in Quick Look.
Am I missing something, or is it no longer possible to export .reality files?
Thanks.
I'm trying to run this demo, which opened fine a month ago but now shows 2 issues in Xcode: https://developer.apple.com/documentation/visionos/diorama
My goal is to better understand how to use Reality Composer Pro to develop visionOS apps for the Vision Pro.
BillboardSystem.swift:39:58 Value of type 'WorldTrackingProvider' has no member 'queryDeviceAnchor'
Cannot find 'Attachment' in scope. The preview shows as paused.
Any thoughts on where I'm off? Thanks!
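On the first error: the current SDK exposes queryDeviceAnchor(atTimestamp:), and the sample may predate that signature. A sketch of the call, assuming an ARKitSession that is already running the provider:

import ARKit
import QuartzCore

let worldTracking = WorldTrackingProvider()

// Query the headset's current pose; returns nil if tracking isn't ready yet.
func currentDevicePose() -> DeviceAnchor? {
    worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
}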
Hi guys, I've just installed the newest Xcode Version 15.0 (15A240d) and I can see that Reality Composer is missing. It is not appearing in the Xcode -> Open Developer Tool menu. Where/how can I find it? My old Reality Composer project now opens like a text file, and I have no option to open it in Reality Composer like there was in the old Xcode.
I'm kinda stuck with my project, so any help could be useful.
Thanks,
J
I have a usdz model with animation that I can preview in RCP. When I create a new base/example visionOS project in Xcode, it's set up to load the "Scene" and "Immersive" RealityKit content. But my models don't play their animation.
How do I fire off the contained animations in those files?
Is there a code snippet that someone can share that takes into account how the example project is setup?
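A sketch of one way to fire them, assuming the animations are baked into the loaded entity (if they live on a child entity, walk the hierarchy and call the same thing on that child):

import RealityKit

// Start every animation found on the loaded scene.
func playAllAnimations(in scene: Entity) {
    for animation in scene.availableAnimations {
        // Plays each animation once; wrap it with .repeat(...) to loop.
        scene.playAnimation(animation, transitionDuration: 0.3, startsPaused: false)
    }
}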
Does the PBR material setup in RCP support packed RGB channels from a single image?
Does the material graph support splitting the RGB output of the image node into separate channels for a custom material setup?
I'm trying to create a feature in my visionOS app where I show a RealityView and, on a button click, toggle different entities: showing them on one click and hiding them on the next.
Is this possible in visionOS? If so, how can I do it?
All I have done so far is instantiate my scene, which contains a car 3D model, red tyres and blue tyres.
On a button click I'm trying to show the blue tyres instead of the red ones.
Is this possible?
Thank you,
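A sketch of one way to do it: flip isEnabled on named entities from SwiftUI state. "CarScene", "TyresRed", and "TyresBlue" are placeholder names; use the entity names from your Reality Composer Pro scene:

import SwiftUI
import RealityKit
import RealityKitContent

struct CarView: View {
    @State private var showBlueTyres = false
    @State private var carScene: Entity?

    var body: some View {
        VStack {
            RealityView { content in
                if let scene = try? await Entity(named: "CarScene", in: realityKitContentBundle) {
                    carScene = scene
                    content.add(scene)
                }
            } update: { _ in
                // Hide one set of tyres and show the other whenever the state changes.
                carScene?.findEntity(named: "TyresRed")?.isEnabled = !showBlueTyres
                carScene?.findEntity(named: "TyresBlue")?.isEnabled = showBlueTyres
            }

            Toggle("Blue tyres", isOn: $showBlueTyres)
                .toggleStyle(.button)
        }
    }
}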