I would like to pass in a cube map to a custom material shader in RealityKit 2 so I can achieve effects like custom reflections and iridescence.
I am aware of material.custom.texture and material.custom.value as talked about here:
https://developer.apple.com/forums/thread/682632
But those only allow for a 2D texture and a simd_float4 (a vector of 4 float values).
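For reference, here is a minimal sketch of that existing path (the function name is mine, not from the thread); it shows the two custom slots CustomMaterial exposes, neither of which accepts a cube map:

import RealityKit

// Sketch: CustomMaterial's custom slots accept a 2D texture and a SIMD4<Float>.
func setCustomInputs(on material: inout CustomMaterial, texture: TextureResource) {
    material.custom.texture = .init(texture)          // 2D texture only; no cube map slot
    material.custom.value = SIMD4<Float>(1, 0, 0, 0)  // four float parameters
}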
I am also aware of how to pass in animatable custom parameters, as discussed here:
https://developer.apple.com/forums/thread/682632
But I would like to be able to sample a cube map instead of a 2D texture. Any help is much appreciated.
I would like to extract depth data for a given point in ARSession.currentFrame.smoothedSceneDepth.
Optimally this would end up looking something like:
ARView.depth(at point: CGPoint)
With the point being in UIKit coordinates just like the points passed to the raycasting methods.
I ultimately want to use this depth data to convert a 2D normalized landmark from a Vision image request into a 3D world-space coordinate in the scene; the only thing I lack is accurate depth data for the given 2D point.
What I have available is:
The normalized landmark from the Vision request.
The ability to convert that landmark to AVFoundation coordinates.
The ability to convert it to screen-space/display coordinates.
When the depth data is provided correctly, I can combine the 2D position in UIKit/screen-space coordinates with the depth (in meters) to produce an accurate 3D world position using ARView.ray(through:).
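As a rough illustration of that combination step (a sketch only; the function name is mine, and it assumes the depth value can be treated as the distance along the ray):

import RealityKit
import UIKit

// Sketch: walk along the ray through the screen point by the measured depth.
// Assumes the depth value can be used as a distance along the ray, in meters.
func worldPosition(at screenPoint: CGPoint, depth: Float, in arView: ARView) -> SIMD3<Float>? {
    guard let ray = arView.ray(through: screenPoint) else { return nil }
    // ray.direction is normalized, so scaling by depth gives the offset
    // from the ray origin to the surface.
    return ray.origin + ray.direction * depth
}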
What I have not been able to figure out is how to get this depth value for this coordinate on screen.
I can index the pixel buffer like this:
extension CVPixelBuffer {
    // Reads the Float32 value at a point given in normalized coordinates
    // (0...1 on both axes) of this buffer's own coordinate space.
    func value(for point: CGPoint) -> Float32 {
        assert(CVPixelBufferGetPixelFormatType(self) == kCVPixelFormatType_DepthFloat32)
        CVPixelBufferLockBaseAddress(self, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(self, .readOnly) }
        let width = CVPixelBufferGetWidth(self)
        let height = CVPixelBufferGetHeight(self)
        // Something potentially going wrong here: the point must already be
        // normalized to the depth map's own (landscape-oriented) coordinate space.
        let pixelX = min(max(Int(CGFloat(width) * point.x), 0), width - 1)
        let pixelY = min(max(Int(CGFloat(height) * point.y), 0), height - 1)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(self)
        let baseAddress = CVPixelBufferGetBaseAddress(self)!
        let rowData = baseAddress + pixelY * bytesPerRow
        let distanceAtXYPoint = rowData.assumingMemoryBound(to: Float32.self)[pixelX]
        return distanceAtXYPoint
    }
}
I then try to use this method like so:
guard let depthMap = (currentFrame.smoothedSceneDepth ?? currentFrame.sceneDepth)?.depthMap else { return nil }
//The depth at this coordinate, in meters.
let depthValue = depthMap.value(for: myGivenPoint)
The frame semantics [.smoothedSceneDepth, .sceneDepth] have been set properly on my ARConfiguration. The depth data is available.
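For completeness, the configuration is set up roughly like this (a sketch, not my exact code):

import ARKit
import RealityKit

// Sketch: run a world-tracking session with both depth frame semantics enabled.
func runDepthConfiguration(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics([.smoothedSceneDepth, .sceneDepth]) {
        configuration.frameSemantics.insert([.smoothedSceneDepth, .sceneDepth])
    }
    arView.session.run(configuration)
}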
If I hard-code the pixel coordinates like so:
let pixelX: Int = width / 2
let pixelY: Int = height / 2
I get the correct depth value for the center of the screen.
I have only been testing in portrait mode.
But I do not know how to index the depth data for any given point.
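For reference, a sketch of the kind of conversion I am asking about (a hypothetical helper, not verified code; it assumes the depth map shares the captured image's orientation and normalized coordinate space, and that inverting ARFrame.displayTransform(for:viewportSize:) is the right way to go from view coordinates back to image coordinates):

import ARKit
import RealityKit
import UIKit

// Hypothetical helper: convert a UIKit point in the view into normalized coordinates
// of the captured image, which the depth map is assumed to share at a lower resolution.
func normalizedImageCoordinate(for viewPoint: CGPoint,
                               in arView: ARView,
                               frame: ARFrame,
                               orientation: UIInterfaceOrientation) -> CGPoint {
    let viewportSize = arView.bounds.size
    // displayTransform maps normalized image coordinates to normalized view coordinates,
    // so invert it to go from the view back toward the image.
    let viewToImage = frame.displayTransform(for: orientation, viewportSize: viewportSize).inverted()
    let normalizedViewPoint = CGPoint(x: viewPoint.x / viewportSize.width,
                                      y: viewPoint.y / viewportSize.height)
    return normalizedViewPoint.applying(viewToImage)
}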
In the WWDC session "Enhance your spatial computing app with RealityKit," we see how to create a portal effect with RealityKit. In the "Encounter Dinosaurs" experience on Vision Pro there is a similar portal, except that portal allows entities to stick out of it. Using the provided example code, I have been unable to replicate this effect: anything that sticks out of the portal gets clipped.
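For reference, this is the kind of basic portal setup from the sample code that I am working from (a simplified sketch, not the sample verbatim); with this setup, content is clipped at the portal plane:

import RealityKit

// Simplified sketch: a world entity hosts the portal's interior content,
// and a plane with a PortalMaterial renders a view into that world.
func makePortal(showing world: Entity) -> Entity {
    world.components.set(WorldComponent())

    let portalPlane = Entity()
    portalPlane.components.set(ModelComponent(
        mesh: .generatePlane(width: 1.0, height: 1.0, cornerRadius: 0.5),
        materials: [PortalMaterial()]
    ))
    portalPlane.components.set(PortalComponent(target: world))
    return portalPlane
}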
How do I get entities to stick out of the portal in a way similar to the "Encounter Dinosaurs" experience?
I am familiar with the old way of using OcclusionMaterial to create portals, but if the camera gets between the OcclusionMaterial and the entity (for example, by walking behind the portal), that approach breaks the effect. I was unable to break the effect in the "Encounter Dinosaurs" experience this way.
If it helps at all: I have noticed that if you look very closely at the edge of the portal, the rocks do not stick out the way the dinosaurs do; the rocks get clipped. The dinosaurs are therefore being rendered differently somehow.