Hello,
in this project https://developer.apple.com/documentation/arkit/content_anchors/tracking_and_visualizing_faces there is some sample code that describes how to map the camera feed to an object with SceneKit and a shader modifier.
I would like to know if there is an easy way to achieve the same thing with a CustomMaterial and RealityKit 2.
Specifically, I'm interested in the best way to pass the background of the RealityKit environment as a texture to the custom shader.
In SceneKit this was really easy, as one could just do the following:
material.diffuse.contents = sceneView.scene.background.contents
As the texture input for CustomMaterial requires a TextureResource, I would probably need a way to create a CGImage from the background or camera feed on the fly.
What I've tried so far is accessing the captured image from the camera feed and creating a CGImage from the pixel buffer like so:
guard
    let frame = arView.session.currentFrame,
    // Convert the camera's pixel buffer into a CGImage (see the extension below).
    let cameraFeedTexture = CGImage.create(pixelBuffer: frame.capturedImage),
    // Generate a TextureResource from the CGImage.
    let textureResource = try? TextureResource.generate(from: cameraFeedTexture,
                                                        withName: "cameraFeedTexture",
                                                        options: .init(semantic: .color))
else {
    return
}

// Assign the texture to the material's custom slot.
customMaterial.custom.texture = .init(textureResource)
import VideoToolbox

extension CGImage {
    /// Creates a CGImage from a CVPixelBuffer via VideoToolbox.
    public static func create(pixelBuffer: CVPixelBuffer) -> CGImage? {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
        return cgImage
    }
}
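For context, I'm driving this from the session delegate, so the conversion runs on every frame (ViewController, modelEntity and updateCameraFeedTexture are my own names, the latter just wrapping the guard code above):

extension ViewController: ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Regenerates the CGImage and TextureResource for the new frame.
        updateCameraFeedTexture(frame: frame)
        // CustomMaterial is a value type, so the entity needs the updated copy.
        modelEntity.model?.materials = [customMaterial]
    }
}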
This works, but it seems wasteful and is also quite slow, as a new CGImage and TextureResource are created on every frame. Is there any other way to accomplish this efficiently, or would I need to go the post-processing route?
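To clarify what I mean by the post-processing route, this is my understanding of the RealityKit 2 render callbacks (the blit here is just the minimal passthrough):

arView.renderCallbacks.postProcess = { context in
    // context.sourceColorTexture contains the composited frame
    // (camera feed plus rendered content) as a MTLTexture.
    if let blit = context.commandBuffer.makeBlitCommandEncoder() {
        blit.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
        blit.endEncoding()
    }
}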
In the sample code the displayTransform for the view is also being passed to the shader, as an SCNMatrix4. CustomMaterial's custom.value only accepts a SIMD4<Float> though. Is there another way to pass in the matrix?
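The only workaround I've come up with is packing the transform's non-zero components into the one available slot. This is a sketch that assumes a fixed portrait orientation, where the a and d components of the affine transform happen to be zero:

let transform = frame.displayTransform(for: .portrait, viewportSize: arView.bounds.size)
// Portrait-only assumption: a == d == 0, so four floats are enough.
customMaterial.custom.value = SIMD4<Float>(Float(transform.b), Float(transform.c),
                                           Float(transform.tx), Float(transform.ty))
// The affine transform would then be reconstructed from these four values
// inside the surface shader.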
Another idea I had was to create a CustomMaterial from an OcclusionMaterial, which already seems to contain information about the camera feed, but so far I've had no luck with that.
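This is roughly what I attempted there, using the initializer that takes an existing material as its base (mySurfaceShader is a placeholder for my own shader function):

guard let device = MTLCreateSystemDefaultDevice(),
      let library = device.makeDefaultLibrary() else { return }

let surfaceShader = CustomMaterial.SurfaceShader(named: "mySurfaceShader", in: library)

// Use the OcclusionMaterial as the base, hoping to inherit its camera feed access.
let customMaterial = try? CustomMaterial(from: OcclusionMaterial(),
                                         surfaceShader: surfaceShader,
                                         geometryModifier: nil)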
Thanks for the support!