How can I overlay an image onto a 3D model in RealityKit, in code, so that it doesn't stretch across the entire object but keeps its own width and height that I can change?
The one solution I have found is to cut out a part of the object and map the image onto the entire cutout area, but then I can't change the image's width or height or place it anywhere on the model. How can I overlay a 2D image on a 3D model without stretching the photo over the whole object?
If this is possible, please give an example of how to do it in code. I could not find anything about this on the Internet, although other engines such as Blender or Unity can do it; if I'm not mistaken, they use decals for this.
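To make the question concrete, here is the kind of workaround I have been experimenting with: a small textured plane added as a child of the model, floating just above its surface. This is only a sketch under my own assumptions (the texture name "photo", the plane size, and the offset are placeholders), not a real decal system.

import RealityKit

// Sketch: approximate a decal with a small textured plane hovering just above the model.
// "photo", the 0.1 m plane size, and the 1 mm offset are placeholder values.
func addImageOverlay(to model: Entity) async throws {
    let texture = try await TextureResource(named: "photo")

    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))

    // The width and height control the image's own size, independent of the model.
    let overlay = ModelEntity(mesh: .generatePlane(width: 0.1, height: 0.1),
                              materials: [material])

    // Offset slightly along the surface normal to avoid z-fighting with the model.
    overlay.position = [0, 0, 0.001]
    model.addChild(overlay)
}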
AR / VR
Discuss augmented reality and virtual reality app capabilities.
Posts under AR / VR tag
Hello, I was wondering how I can initialize an ImageAnchoringSource using
https://developer.apple.com/documentation/realitykit/anchoringcomponent/imageanchoringsource/init(_:)
When I construct one from a URL, it doesn't seem to be tracked, and I see the following when I debug-print the component:
▿ 0 : AnchoringComponent
▿ target : Target
▿ referenceImage : 1 element
▿ from : ImageAnchoringSource
▿ url : Optional<URL>
▿ some : file:///var/mobile/Containers/Data/Application/D1126EA0-A1D7-468F-A40C-8578B7F5BDDF/Library/Caches/CodeCache/0E457AA7-2195-48B9-9DD4-58CEB9397F69.png
- _url : file:///var/mobile/Containers/Data/Application/D1126EA0-A1D7-468F-A40C-8578B7F5BDDF/Library/Caches/CodeCache/0E457AA7-2195-48B9-9DD4-58CEB9397F69.png
- _parseInfo : nil
- _baseParseInfo : nil
- name : nil
- group : nil
▿ trackingMode : TrackingMode
- trackingMode : 2
Is there a specific format for the parseInfo?
When I use the same image to make an image anchoring source by group and name in AR Resources, it is tracked.
Thank you!
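For reference, this is roughly how I'm building the two variants (a sketch; the image URL, "AR Resources", and "myImage" stand in for my real values):

import RealityKit

// Sketch of both constructions; imageURL, "AR Resources", and "myImage" are placeholders.
func makeImageAnchors(imageURL: URL) -> (fromURL: AnchorEntity, fromGroup: AnchorEntity) {
    // Built from a file URL: this one does not appear to be tracked.
    let urlSource = AnchoringComponent.ImageAnchoringSource(imageURL)
    let urlAnchor = AnchorEntity(.referenceImage(from: urlSource))

    // Built from the AR Resources group: this one is tracked as expected.
    let groupSource = AnchoringComponent.ImageAnchoringSource(group: "AR Resources", name: "myImage")
    let groupAnchor = AnchorEntity(.referenceImage(from: groupSource))

    return (urlAnchor, groupAnchor)
}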
Does visionOS 2 still prompt the user with a permission alert when a full immersive space is presented?
In visionOS 1, the first time an app presented an immersive space, the user was prompted with an alert to grant permission. openImmersiveSpace would return an error code if the user opted not to grant permission. In visionOS 1, it was important to handle this case correctly.
In visionOS 1, the Settings > Developer menu had an option to reset the user's immersive space permission prompt state so developers could test this interaction flow.
In visionOS 2, I no longer see the full immersive space permissions alert. I can't remember if I saw it once, the first time visionOS 2.0 beta was installed, or if I never saw it at all. The Settings > Developer menu no longer has an option to reset the permission prompting state. I can't find any way to test the interaction flow in my app to make sure that it will work correctly for users.
Does visionOS 2 no longer ask for full immersive space permission at all? I can't find this change documented anywhere.
If visionOS 2 does prompt the user for permission, is there any way to reproduce and test this interaction flow so I can make sure my app handles it correctly?
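For context, this is the kind of handling I'm trying to exercise (a sketch; the space id "Immersive" is a placeholder for my own):

import SwiftUI

struct EnterImmersiveButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter Immersive Space") {
            Task {
                // "Immersive" is a placeholder for my ImmersiveSpace's id.
                switch await openImmersiveSpace(id: "Immersive") {
                case .opened:
                    // The space is now showing.
                    break
                case .userCancelled:
                    // The user declined permission: the path I want to test.
                    break
                case .error:
                    // The system failed to open the space.
                    break
                @unknown default:
                    break
                }
            }
        }
    }
}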
Thanks for taking the time to answer this question.
I am using Model3D to display an RCP scene/model in my UI.
How can I get to the entities so I can set material properties to adjust the appearance?
I looked at interfaces for Model3D and ResolvedModel3D and could not find a way to get access to the RCP scene or RealityKit entity.
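For reference, my view is essentially this (a sketch; "Scene" is the placeholder name of my RCP scene):

import SwiftUI
import RealityKit
import RealityKitContent

struct SceneModelView: View {
    var body: some View {
        // "Scene" is a placeholder for the name of my Reality Composer Pro scene.
        Model3D(named: "Scene", bundle: realityKitContentBundle) { model in
            model
                .resizable()
                .aspectRatio(contentMode: .fit)
        } placeholder: {
            ProgressView()
        }
    }
}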
We can use the Create ML app to build an object tracking model in Xcode 16, but is it possible to use the CreateML framework as well?
I haven't found any documentation for Create ML object tracking yet; the latest documentation I can find is for Xcode 15.
https://developer.apple.com/documentation/CreateML?changes=latest_minor
Really appreciate the new object tracking feature, thank you Apple team.
Hey everyone, this is my first post here in the Apple forum.
I need your help to better understand RealityKit and file exports, so let me explain.
I'm trying to create a little 3D object editor, and it seems to work pretty well using RealityViews and managing materials on the Entity.
I'm currently working with all the beta APIs, and I would like to export my entity to a .usdz or an .obj file.
I've found a method that lets me create a .reality file:
let path = FileManager.default.urls(for: .documentDirectory,
                                    in: .userDomainMask)[0]
    .appendingPathComponent("model.reality")
try await self.appState.parentEntity.write(to: path)
but now I don't know how to convert it into a .usdz or an .obj file, or any other standard 3D format.
Do you have any idea how I could do this?
Thank you so much!
Have a nice day ^^
I have had an app on the App Store for many years that lets users post text into clouds in augmented reality. Last week, abruptly, upon installing the app on the iPhone, the screen started going totally dark and a series of barely comprehensible logs appeared, of this kind:
ARSCNCompositor <0x300ad0e00>: ARSCNCompositor (0, 0) initialization failed. Matting is not set up properly.
many times, then
RWorldTrackingTechnique <0x106235180>: Unable to update pose [PredictorFailure] for timestamp 870.392108
ARWorldTrackingTechnique <0x106235180>: Unable to predict pose [1] for timestamp 870.392108
again several times and then:
ARWorldTrackingTechnique <0x106235180>: SLAM error callback: Error Domain=Slam Error Code=7 "Non fatal error occurred due to significant drop in a IMU data" UserInfo={NSDescription=Non fatal error occurred due to significant drop in a IMU data, NSLocalizedFailureReason=SlamEngineNodeGroup Failure: IMU issue: gyro data stream verification failed [Significant data drop]. Failed on timestamp: 870.413247, Last known timestamp: 865.350198, Delta: 5.063049, System timestamp: 870.415781, Delta between system and frame: 0.002534. }
and then again the pose issues several times.
I hoped the new beta version would solve the issue, but it did not. Unfortunately, I don't know whether this depends on the beta version or on something else, given that the app cannot be installed in the simulator on the Mac.
Here is my code:
let portal = Entity()
portal.components[ModelComponent.self] = .init(
    mesh: .generatePlane(width: Float(size.width),
                         height: Float(size.height),
                         cornerRadius: 0.02),
    materials: [PortalMaterial()]
)
portal.components[PortalComponent.self] = .init(target: world)
portal.components[PortalComponent.self]?.clippingPlane = .init(position: SIMD3(x: 0, y: 0, z: 0),
                                                               normal: SIMD3(x: 0, y: 0, z: 0))
portal.components.set(HoverEffectComponent())
I added a RealityView to multiple HStacks and implemented the portal effect. I found that the portal effect causes rendering-order confusion on some devices, as shown in the figure.
I am using ARKit to create an augmented reality application in Unity. After adding a reference object, the app tracks the object in front of the camera slowly and inaccurately, and the screen does not update quickly.
How can I track objects more quickly?
Has anybody tried the hand tracking provider in 2.0? I'm getting updates at the advertised 11 ms interval, but they are duplicates. Here's a print of the timestamps. This is problematic for me because I track the last 5 positions for a calculation and expect them to be unique. I can't find docs on this anywhere.
I understand it's not truly 90 unique updates a second but rather predicted poses; however, I expected the updates to include those predicted poses.
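Here's how I'm working around it for now: a sketch that skips back-to-back duplicates by comparing the wrist transform of consecutive updates (right hand only, which is an assumption of my setup; the rolling-buffer logic is elided):

import ARKit

// Sketch: drop consecutive duplicate hand updates by comparing wrist transforms.
// Assumes only the right hand matters for my calculation.
func collectRightHandUpdates(from handTracking: HandTrackingProvider) async {
    var lastTransform: simd_float4x4?

    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        guard anchor.chirality == .right else { continue }

        let transform = anchor.originFromAnchorTransform
        if let last = lastTransform, last == transform {
            continue // Identical to the previous sample, so skip it.
        }
        lastTransform = transform

        // ...append transform to the last-5-positions buffer used for my calculation...
    }
}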
Hello, I'm getting this error whenever I try to run any new project.
Thanks
Zipzy Games
To load the Reality Composer Pro scene that contains Object Tracking, I tried the following code:
RealityView { content in
    if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(model)
    }
}
Obviously, this isn't enough on its own; some configuration needs to be added to enable Object Tracking for the RealityView. What do we need to add?
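For what it's worth, the only extra setup I've come across so far is running an ARKitSession with an ObjectTrackingProvider alongside the RealityView. I'm not sure this is what the RCP anchor actually needs, so treat this as a sketch (the "MyObject.referenceobject" file name is a placeholder):

import ARKit
import Foundation

// Sketch: start ARKit object tracking for a bundled reference object.
// "MyObject.referenceobject" is a placeholder; error handling is minimal.
let session = ARKitSession()

func startObjectTracking() async throws {
    guard let url = Bundle.main.url(forResource: "MyObject",
                                    withExtension: "referenceobject") else { return }

    let referenceObject = try await ReferenceObject(from: url)
    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])

    guard ObjectTrackingProvider.isSupported else { return }
    try await session.run([provider])
}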
I downloaded Xcode 16 and updated my macOS to 15, but I keep getting this error when trying to build the game in the simulator or on a device:
[xrsimulator] Exception thrown: The operation couldn’t be completed. (realitytool.RKAssetsCompiler.RKAssetsCompilerError error 3.)
What is the current recommendation for creating high-quality 3D content?
The context is a hobbyist, specialised CAD app for macOS (with an iPadOS companion) that is mostly 2D but also offers a 3D visualization option (currently OpenGL).
Somewhere down the line there might be an AR view, but at the moment, certainly for macOS, it's purely generated 3D visualization, all rendered content.
So, starting with a rewrite of the 3D visualization in 2024 targeting macOS Sequoia/iPadOS 18, is RealityKit the suggested way forward?
Cheers,
Jay
Hey guys! I have a question for my project. I want my 3D character with a PBR shader to receive IBL only from my HDRI map and not receive any lighting from the surrounding environment when viewed on Apple Vision Pro. Any tips?
Thank you in advance!
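For context, this is the image-based-lighting setup I've been experimenting with (a sketch; "MyHDRI" is a placeholder for my environment resource, and I don't yet know whether this alone suppresses the surrounding environment lighting):

import RealityKit

// Sketch: light an entity with a specific environment resource via IBL.
// "MyHDRI" is a placeholder for an EnvironmentResource in the app bundle.
func applyImageBasedLight(to character: Entity) async throws {
    let environment = try await EnvironmentResource(named: "MyHDRI")

    // Attach the light source to the character itself...
    character.components.set(ImageBasedLightComponent(source: .single(environment)))

    // ...and make the character receive that image-based light.
    character.components.set(ImageBasedLightReceiverComponent(imageBasedLight: character))
}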
Has anyone measured the brightness of the Vision Pro display?
It seems to be dimmer than I expected.
Also, is there any way to set the brightness of the Vision Pro to maximum from a script?
Many thanks!
Hello,
I'm trying to download a native spatial video for a software program I'm putting together, where people can upload spatial videos from the web and deploy them inside a native visionOS app that shows a breadth of different file formats.
Hello guys, I have a virtual environment that contains a mesh. I want the mesh to be mirrored onto a glass surface very close by.
I can't just duplicate it, because the reflection varies depending on the position you are looking at it from.
Is there a way to mirror a mesh via reflections? It shouldn't reflect real-world objects, just the virtual mesh.
Thank you guys
How is it possible to add a schema for AR to a USD file using the Python tools (or any other way)?
Following the instructions in: https://developer.apple.com/documentation/arkit/arkit_in_ios/usdz_schemas_for_ar/actions_and_triggers/preliminary_behavior
The steps are to have the following declaration:
class Preliminary_Behavior "Preliminary_Behavior" (
    inherits = </Typed>
)
and a usd file
#usda 1.0

def Preliminary_Behavior "TapAndFlip"
{
    rel triggers = [ <Tap> ]
    rel actions = [ <Entry> ]

    def Preliminary_Trigger "Tap" ( inherits = </TapGestureTrigger> )
    {
        rel affectedObjects = [ </Cube> ]
    }

    def Preliminary_Action "Entry" ( inherits = </GroupAction> )
    {
        uniform token type = "parallel"
        rel actions = [ <Flip> ]
    }

    def Preliminary_Action "Flip" ( inherits = </EmphasizeAction> )
    {
        rel affectedObjects = [ </Cube> ]
        uniform token motionType = "flip"
    }
}

def Cube "Cube" { }
How do these parts fit together? I saved the usda file, but it didn't have any interactions. Obviously, I have to add that class declaration, but how do I do that? Is this all done in an AR Xcode project, or can I do it with the Python tools (I would prefer something very lightweight)?
Using Unreal Engine 5.4 for Apple Vision Pro.
Creating a fully immersive VR Experience.
When deploying a VR Application, can you use deferred rendering for the Apple Vision Pro?
Or do you need to use forward shading, like for mobile devices?
My goal would be to use deferred rendering, because of the much better shader options and quality.
And I hope that the Apple Vision Pro, with its integrated CPU and GPU, can handle deferred rendering like a MacBook or a powerful gaming PC or workstation.
I couldn't find any information on that. I have mainly been developing VR applications for Quest, but would love to create apps for the Apple Vision Pro.
But I would need to know whether deferred rendering will work when deploying VR apps from Unreal Engine to the Apple Vision Pro.
Thanks a lot for any more information on that topic,
appreciate it!
all the best,
Bernhard