Context: I have a .nonAR RealityKit scene with a grounding plane. The plane uses a PhysicallyBasedMaterial and loads/applies the textures when the scene starts.
Problem: I'm getting wildly different visual results depending on whether I load from a folder in my app bundle or from an asset catalog. And when loading from an asset catalog, my frame rate drops by 50% or more.
Question: Should I be using a catalog for RealityKit textures or should I just add them to my project directly? If I should be using an asset catalog, what settings should I apply to the textures for proper rendering?
Rendering samples:
Loading from files I added directly to the project, which gives the most correct-looking render ➡️
Loading from an asset catalog with different compression settings ➡️
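For reference, the bundle-file loading path that gives the correct render can be sketched roughly like this. The texture names ("GroundBaseColor", "GroundNormal") and the choice of slots are assumptions for illustration, not the actual assets:

```swift
import RealityKit

// Minimal sketch of loading grounding-plane textures into a
// PhysicallyBasedMaterial. Texture names are made up for this example.
func makeGroundMaterial() throws -> PhysicallyBasedMaterial {
    var material = PhysicallyBasedMaterial()

    // Base color is color (sRGB) data.
    let baseColor = try TextureResource.load(named: "GroundBaseColor")
    material.baseColor = .init(texture: .init(baseColor))

    // Normal maps are non-color data; treating them as color data
    // (e.g. via catalog compression/gamma settings) is a common cause
    // of renders that look wildly different between loading paths.
    let normal = try TextureResource.load(named: "GroundNormal")
    material.normal = .init(texture: .init(normal))

    return material
}
```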
Problem
I'm trying to attach masks to PhotogrammetrySamples, but when I run a request on the samples, the app crashes with Thread 5: signal SIGABRT and my console says:
2022-01-31 13:38:13.575333-0800 HelloPhotogrammetry[5538:258947] [HelloPhotogrammetry] Data ingestion is complete. Beginning processing...
libc++abi: terminating with uncaught exception of type std::__1::bad_function_call: std::exception
terminating with uncaught exception of type std::__1::bad_function_call: std::exception
The app works fine if I run it without attaching the masks, or if I turn off object masking in the configuration. I'm at a bit of a loss for why it is breaking.
Process
On my iPhone, I've modified the sample capture app. After a capture session, I run a VNGeneratePersonSegmentationRequest(). I save the resulting buffer to disk using
context.pngRepresentation(of: CIImage(cvPixelBuffer: maskPixelBuffer), format: .L8, colorSpace: CGColorSpace(name: CGColorSpace.linearGray)!, options: [:])
Then, I AirDrop my files over to my laptop and run a modified version of the HelloPhotogrammetry sample app. I prepare an array of PhotogrammetrySamples with the image, depth, gravity, and mask data.
I create a PhotogrammetrySession using the samples array and have object masking enabled in my configuration.
When I process my request on the session, it looks like the data is ingested just fine, but breaks with some bad function call.
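The on-device segmentation step described above can be sketched roughly like this; the helper name and the .accurate quality level are assumptions:

```swift
import Vision
import CoreVideo

// Rough sketch of generating a person-segmentation mask for one capture.
// Returns the mask as a one-component 8-bit pixel buffer, matching the
// format later handed to PhotogrammetrySample.objectMask.
func personMask(for pixelBuffer: CVPixelBuffer) throws -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .accurate        // assumption; .balanced also works
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
    return request.results?.first?.pixelBuffer
}
```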
Related code
Here is how I construct my PhotogrammetrySample sequence, shortened for brevity:
private func makeSequenceFromStructuredFolder(folder: URL) -> [PhotogrammetrySample] {
    // Look in the sample folder, prepare arrays of capture data
    // For each capture set:
        // Load color image, depth image, and mask as CIImages
        // Load and decode gravity data

        // Convert all CIImages into CVPixelBuffers
        // kCVPixelFormatType_32BGRA, <CGColorSpace 0x108e04f80> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; Display P3)
        _image = pixelBufferFromCIImage(image, context: context, pixelFormat: kCVPixelFormatType_32BGRA, colorSpace: image.colorSpace!)

        // kCVPixelFormatType_DepthFloat32, <CGColorSpace 0x100d1b000> (kCGColorSpaceICCBased; kCGColorSpaceModelMonochrome; Linear Gray)
        _depth = pixelBufferFromCIImage(depthImage, context: context, pixelFormat: kCVPixelFormatType_DepthFloat32, colorSpace: depthImage.colorSpace!)

        // kCVPixelFormatType_OneComponent8, <CGColorSpace 0x100d1b000> (kCGColorSpaceICCBased; kCGColorSpaceModelMonochrome; Linear Gray)
        _mask = pixelBufferFromCIImage(mask, context: context, pixelFormat: kCVPixelFormatType_OneComponent8, colorSpace: CGColorSpace(name: CGColorSpace.linearGray)!)

        // Prepare sample
        var sample = PhotogrammetrySample(id: index, image: _image!)
        if let _gravity = _gravity {
            sample.gravity = _gravity
        }
        if let _depth = _depth {
            sample.depthDataMap = _depth
        }
        if let _mask = _mask {
            sample.objectMask = _mask
        }

        // Append the sample
        sampleSequence.append(sample)
    }
    return sampleSequence
}
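The session setup described above looks roughly like this; the folder URL, output URL, and detail level are placeholders:

```swift
import RealityKit

// Sketch of creating the session from the sample sequence with object
// masking enabled, then kicking off a model-file request.
let samples = makeSequenceFromStructuredFolder(folder: captureFolderURL)

var configuration = PhotogrammetrySession.Configuration()
configuration.isObjectMaskingEnabled = true

let session = try PhotogrammetrySession(input: samples,
                                        configuration: configuration)
try session.process(requests: [
    .modelFile(url: outputURL, detail: .medium)
])
```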
And here is how I convert my CIImages into CVPixelBuffers.
func pixelBufferFromCIImage(_ image: CIImage, context: CIContext, pixelFormat: OSType, colorSpace: CGColorSpace) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: true,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: true] as CFDictionary
    let width = Int(image.extent.width)
    let height = Int(image.extent.height)
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     width,
                                     height,
                                     pixelFormat,
                                     attrs,
                                     &pixelBuffer)
    switch status {
    case kCVReturnInvalidPixelFormat:
        print("status == kCVReturnInvalidPixelFormat")
    case kCVReturnInvalidSize:
        print("status == kCVReturnInvalidSize")
    case kCVReturnPixelBufferNotMetalCompatible:
        print("status == kCVReturnPixelBufferNotMetalCompatible")
    case kCVReturnSuccess:
        print("status == kCVReturnSuccess")
    default:
        print("status is other")
    }
    guard status == kCVReturnSuccess else {
        return nil
    }
    context.render(image, to: pixelBuffer!, bounds: image.extent, colorSpace: colorSpace)
    return pixelBuffer
}
Other attempted steps that ultimately failed
Scaled the mask to the same size as the color image using CIFilter.lanczosScaleTransform().
Created a binary mask using CIFilter.colorThreshold().
Rendered an intermediary image to be extra sure the right pixel format was being used for the mask.
Checked all image extents and made sure the color image and mask were the same size and rotation.
Read all the documentation and looked for similar questions.
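For context, the Lanczos rescale attempt looked roughly like this; the helper name and the scale arithmetic are reconstructed for illustration, not the original code:

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Sketch of resizing the mask to the color image's dimensions with
// Lanczos resampling. `scale` sets the vertical factor and
// `aspectRatio` applies the extra horizontal factor.
func scaledMask(_ mask: CIImage, toMatch image: CIImage) -> CIImage? {
    let filter = CIFilter.lanczosScaleTransform()
    filter.inputImage = mask
    filter.scale = Float(image.extent.height / mask.extent.height)
    filter.aspectRatio = Float((image.extent.width / image.extent.height) /
                               (mask.extent.width / mask.extent.height))
    return filter.outputImage
}
```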
I appreciate any help!
I'm using a geometry modifier to displace vertices well beyond their original positions. When viewing the displaced mesh in an .ar RealityKit view, the mesh disappears whenever its pre-modified coordinates fall outside the viewing frustum. This mesh is usually the only one in the scene, so it's jarring for the user to walk up to the mesh they see, only for it to vanish.
Is there a way to disable viewport culling in RealityKit or in my geometry modifier? Or, is it possible to set the culling to happen after the geometry modifier has displaced the mesh?
(I looked at this answer, but it looked like that person was building custom geometry in a Metal view.)
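For context, the setup I'm describing is the standard CustomMaterial geometry-modifier wiring, roughly like this; the Metal function name "displaceVertices" is an assumption:

```swift
import RealityKit
import Metal

// Sketch of attaching a displacing geometry modifier to an entity's
// materials. Culling appears to happen against the mesh's original
// bounds, before the modifier displaces vertices on the GPU.
guard let device = MTLCreateSystemDefaultDevice(),
      let library = device.makeDefaultLibrary() else { fatalError() }

let geometryModifier = CustomMaterial.GeometryModifier(named: "displaceVertices",
                                                       in: library)
entity.model!.materials = entity.model!.materials.map { baseMaterial in
    try! CustomMaterial(from: baseMaterial, geometryModifier: geometryModifier)
}
```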
I have a model that uses a video material as the surface shader and I need to also use a geometry modifier on the material.
This seemed promising (adapted from https://developer.apple.com/wwdc21/10075, around 5m 50s).
// Did the setup for the video and AVPlayer, eventually leading me to
let videoMaterial = VideoMaterial(avPlayer: avPlayer)

// Assign the material to the entity
entity.model!.materials = [videoMaterial]

// The part shown in WWDC: the library and geometry modifier were set up
// earlier, so now try to map the video material to a new custom material
entity.model!.materials = entity.model!.materials.map { baseMaterial in
    try! CustomMaterial(from: baseMaterial, geometryModifier: geometryModifier)
}
But I get the following error:
Thread 1: Fatal error: 'try!' expression unexpectedly raised an error: RealityFoundation.CustomMaterialError.defaultSurfaceShaderForMaterialNotFound
How can I apply a geometry modifier to a VideoMaterial? Or, if I can't do that, is there an easy way to route the AVPlayer video data into the baseColor of CustomMaterial?
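One possible workaround I'm considering, sketched below under heavy assumptions: skip VideoMaterial entirely, pull frames from the player with AVPlayerItemVideoOutput, and push each frame into the custom material's texture slot. Regenerating a TextureResource per frame is expensive, so this is only a proof of concept; the `output` and `context` setup is assumed to happen elsewhere.

```swift
import RealityKit
import AVFoundation
import CoreImage
import QuartzCore

// Sketch: called once per display frame to copy the current video frame
// into the entity's CustomMaterial texture slot.
func updateVideoTexture(on entity: ModelEntity,
                        output: AVPlayerItemVideoOutput,
                        context: CIContext) {
    let time = output.itemTime(forHostTime: CACurrentMediaTime())
    guard output.hasNewPixelBuffer(forItemTime: time),
          let buffer = output.copyPixelBuffer(forItemTime: time,
                                              itemTimeForDisplay: nil)
    else { return }

    let ciImage = CIImage(cvPixelBuffer: buffer)
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent),
          let texture = try? TextureResource.generate(from: cgImage,
                                                      options: .init(semantic: .color))
    else { return }

    // Swap the new frame into each custom material's texture slot.
    entity.model?.materials = entity.model!.materials.map { material in
        guard var custom = material as? CustomMaterial else { return material }
        custom.custom.texture = .init(texture)
        return custom
    }
}
```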