I agree a new device capability key would be very useful (filed FB10140735). For example, imagine a customer paying for an app only to discover it requires a LiDAR-equipped device.
As far as I can tell, RoomPlan visualizes the lines with RealityKit. Instead of using RoomCaptureView, you can run a RoomCaptureSession yourself and add an SCNNode for each of the CapturedRoom's completed edge surfaces. One tricky aspect: because RoomPlan owns the underlying ARSession instance, you have to assign the RoomCaptureSession's arSession to your ARSCNView inside the captureSession(_:didStartWith:) delegate method.
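Here's a minimal sketch of that wiring. The box geometry and the "wall" node name are placeholders of my own; I'm only visualizing walls here, but the same pattern applies to the other surface arrays:

import UIKit
import ARKit
import SceneKit
import RoomPlan

final class LineScanViewController: UIViewController, RoomCaptureSessionDelegate {
    let sceneView = ARSCNView()
    let captureSession = RoomCaptureSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        captureSession.delegate = self
        captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    // RoomPlan owns the ARSession, so hand it to the ARSCNView once capture starts.
    func captureSession(_ session: RoomCaptureSession, didStartWith configuration: RoomCaptureSession.Configuration) {
        sceneView.session = session.arSession
    }

    func captureSession(_ session: RoomCaptureSession, didUpdate room: CapturedRoom) {
        // Replace previously added wall nodes with the latest results.
        sceneView.scene.rootNode.childNodes
            .filter { $0.name == "wall" }
            .forEach { $0.removeFromParentNode() }
        for wall in room.walls {
            let box = SCNBox(width: CGFloat(wall.dimensions.x),
                             height: CGFloat(wall.dimensions.y),
                             length: 0.01,
                             chamferRadius: 0)
            let node = SCNNode(geometry: box)
            node.name = "wall"
            node.simdTransform = wall.transform
            sceneView.scene.rootNode.addChildNode(node)
        }
    }
}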
The "new lighting" in AR Quick Look changes the image based light source. Similar adjustments can be made to SceneKit and RealityKit with lightingEnvironment and lighting, respectively.
Converting YCbCr to RGB with Metal is only needed if you intend to render the captured image with Metal (there's a sample project demonstrating how). The simplest way to obtain a PNG is with Core Image:
// `frame` is the ARFrame you received from your session (e.g. in session(_:didUpdate:)).
// The file can be accessed in Finder if you set UIFileSharingEnabled to YES in your
// Info.plist.
guard let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first else { return }
// If you want to save multiple PNG images, it's best practice to reuse the same CIContext.
let context = CIContext()
let image = CIImage(cvPixelBuffer: frame.capturedImage)
// Fall back to device RGB if the pixel buffer doesn't carry color space information.
let colorSpace = image.colorSpace ?? CGColorSpaceCreateDeviceRGB()
if let png = context.pngRepresentation(of: image, format: .BGRA8, colorSpace: colorSpace) {
    try? png.write(to: documentsURL.appending(component: "captured-image.png"))
}
RoomPlan owns the underlying ARKit session and corresponding configuration. The configuration only enables scene depth, not scene reconstruction.
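So while a RoomCaptureSession is running you can still read per-frame depth from its ARSession, but you won't receive any ARMeshAnchors. A quick way to verify (a sketch, assuming a running RoomCaptureSession):

import ARKit
import RoomPlan

func inspect(_ captureSession: RoomCaptureSession) {
    guard let frame = captureSession.arSession.currentFrame else { return }
    // Scene depth is enabled, so sceneDepth should be populated.
    if let depth = frame.sceneDepth {
        print("Depth map:", CVPixelBufferGetWidth(depth.depthMap), "x", CVPixelBufferGetHeight(depth.depthMap))
    }
    // Scene reconstruction is not enabled, so no mesh anchors appear.
    print("Mesh anchors:", frame.anchors.filter { $0 is ARMeshAnchor }.count) // expected: 0
}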