How did this progress? Just seeing this many years later, and it still seems like SceneKit has a key limitation. Is finding an offramp to MetalKit a typical path?
Will the steps in the first response cause my current version of Xcode to stop working? If so, how can I get the old version back?
Any updates on how to transcode or encode these files? I have stereo frames/video and want to encode them for viewing in Apple spatial video players.
@KevinTho any luck here? What did you learn or discover?
Thanks in advance for sharing.
On my phone now. Apologies for the terse response.
1: 3x Remap nodes to interpolate each of R, G, and B.
2: A Combine3 node to combine R, G, and B back into RGB.
I hope this helps. A Swift sketch of loading the resulting material follows below.
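For hooking that material up in code, here is a minimal loading sketch, assuming the graph was built in the app's Reality Composer Pro package. The material name, file name, and the realityKitContentBundle accessor are placeholders from the standard template, not verified against your project:

import RealityKit
import RealityKitContent // assumption: the standard Reality Composer Pro content package module

// Minimal sketch: load a ShaderGraph material named "RGBRemapMaterial" (placeholder name)
// from Scene.usda in the RCP package and assign it to a model entity.
func applyRemapMaterial(to model: ModelEntity) async throws {
    let material = try await ShaderGraphMaterial(named: "/Root/RGBRemapMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)
    model.model?.materials = [material]
}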
You may be able to use a VideoMaterial with a .mov file.
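A minimal sketch of that route, assuming the .mov ships in the app bundle (the file name is a placeholder):

import AVFoundation
import RealityKit

// Minimal sketch: play a bundled movie on a plane via VideoMaterial.
// "stereo.mov" is an assumed asset name.
func makeVideoPlane() -> ModelEntity? {
    guard let url = Bundle.main.url(forResource: "stereo", withExtension: "mov") else { return nil }
    let player = AVPlayer(url: url)
    let plane = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                            materials: [VideoMaterial(avPlayer: player)])
    player.play()
    return plane
}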
Hi Joe,
It's involved, and I have not verified that I'm using all the best APIs. I made an effort to ensure that I did not make extra buffer copies. Your implementation may have a different optimal route depending on your texture source.
But this shows the essence of working with the drawable queue.
func drawNextTexture(pixelBuffer: CVPixelBuffer) {
    guard let textureResource = textureResource else { return }
    guard let drawableQueue = drawableQueue else { return }
    guard let scalePipelineState = scalePipelineState else { return }
    guard let scalePipelineDescriptor = scalePipelineDescriptor else { return }
    guard let commandQueue = commandQueue else { return }
    guard let textureCache = textureCache else { return }

    let srcWidth = CVPixelBufferGetWidth(pixelBuffer)
    let srcHeight = CVPixelBufferGetHeight(pixelBuffer)

    autoreleasepool {
        var drawableTry: TextureResource.Drawable?
        do {
            drawableTry = try drawableQueue.nextDrawable() // may stall for up to 1 second.
            guard drawableTry != nil else {
                return // no frame needed
            }
        } catch {
            print("Exception obtaining drawable: \(error)")
            return
        }
        guard let drawable = drawableTry else { return }

        guard let commandBuffer = commandQueue.makeCommandBuffer() else {
            return
        }

        var cvMetalTextureTry: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                  textureCache,
                                                  pixelBuffer,
                                                  nil,
                                                  .bgra8Unorm_srgb, // linear color; todo try srgb
                                                  srcWidth,
                                                  srcHeight,
                                                  0,
                                                  &cvMetalTextureTry)
        guard let cvMetalTexture = cvMetalTextureTry,
              let sourceTexture = CVMetalTextureGetTexture(cvMetalTexture) else {
            return
        }

        // Check if the sizes match
        if srcWidth == textureResource.width && srcHeight == textureResource.height {
            // Sizes match, use a blit command encoder to copy the data to the drawable's texture
            if let blitEncoder = commandBuffer.makeBlitCommandEncoder() {
                blitEncoder.copy(from: sourceTexture,
                                 sourceSlice: 0,
                                 sourceLevel: 0,
                                 sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
                                 sourceSize: MTLSize(width: srcWidth, height: srcHeight, depth: 1),
                                 to: drawable.texture,
                                 destinationSlice: 0,
                                 destinationLevel: 0,
                                 destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
                blitEncoder.endEncoding()
            }
        } else {
            // Sizes do not match, need to scale the source texture to fit the destination texture
            let renderPassDescriptor = MTLRenderPassDescriptor()
            renderPassDescriptor.colorAttachments[0].texture = drawable.texture
            renderPassDescriptor.colorAttachments[0].loadAction = .clear
            renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 1) // Clear to opaque black
            renderPassDescriptor.colorAttachments[0].storeAction = .store

            if let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) {
                renderEncoder.setRenderPipelineState(scalePipelineState)
                renderEncoder.setVertexBuffer(scaleVertexBuffer, offset: 0, index: 0)
                renderEncoder.setVertexBuffer(scaleTexCoordBuffer, offset: 0, index: 1)
                renderEncoder.setFragmentTexture(sourceTexture, index: 0)
                renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
                renderEncoder.endEncoding()
            }
        }

        commandBuffer.present(drawable)
        commandBuffer.commit()
    }
}
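For context, the drawableQueue, textureResource, textureCache, and commandQueue properties used above were created roughly like this. Treat it as a sketch, not verified as the best route; the pixel format, size, and the "placeholder" texture asset name are assumptions:

import CoreVideo
import Metal
import RealityKit

// Rough setup sketch for the objects drawNextTexture(pixelBuffer:) relies on.
func makeDrawableQueueResources(device: MTLDevice,
                                width: Int,
                                height: Int) throws -> (TextureResource, TextureResource.DrawableQueue, CVMetalTextureCache?) {
    let descriptor = TextureResource.DrawableQueue.Descriptor(pixelFormat: .bgra8Unorm_srgb,
                                                              width: width,
                                                              height: height,
                                                              usage: [.renderTarget, .shaderRead],
                                                              mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)

    // Start from any texture; its contents are replaced by the drawable queue's frames.
    let texture = try TextureResource.load(named: "placeholder") // assumed bundled image asset
    texture.replace(withDrawables: queue)

    var cache: CVMetalTextureCache?
    CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &cache)

    return (texture, queue, cache)
}

Store the returned values in the properties used above; the command queue is just device.makeCommandQueue(), and the texture resource is what you hand to your RealityKit material.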
Good luck.
Any solution here?
Any updates here?
I would like a way to affect stereo layers in a Metal shader on a SwiftUI view.
Thank you for the prompt response.
I made this diagram. It shows the Component being defined in my SDK and shipped in source format. The customer would then copy the component source into their app's RCP sources directory.
I will test this out soon.
Two Questions:
First, is there a precedent for some SDK files to be shipped as source? (Yes, I guess this is like sample code.) Are there any recommended practices here to make this feel natural to developers?
Second, can we make a feature request to allow package references, like my framework, to be added to an RCP package and have all the valid public components in the framework added to the RCP components UI?
This would reduce the manual steps of app developers keeping framework components up to date.
More Profit!
Submitted: https://feedbackassistant.apple.com/feedback/13902697
Adding updated diagram to clearly show the BubbleComponent Swift file is copied into the RCP package for the app.
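For anyone following along, the copied file is an ordinary RealityKit component along these lines; the property shown is a made-up example, not the real BubbleComponent fields:

import RealityKit

// Sketch of the kind of component source file that gets copied into the app's
// Reality Composer Pro package. "radius" is a hypothetical example property.
public struct BubbleComponent: Component, Codable {
    public var radius: Float = 0.1

    public init() {}
}

The app still registers it once at launch with BubbleComponent.registerComponent().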
I hope this info is helpful to others in the future.
I was able to solve this using the model sort group component (ModelSortGroupComponent). I can now render spatially augmented 3D lines with my stereoscopic texture content.
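Roughly, the fix looked like this; a sketch only, and the group and order values are just examples:

import RealityKit

// Sketch: put the stereo video plane and the 3D lines in one sort group so the
// lines draw after (in front of) the video. Lower order values render earlier.
func applySortOrder(videoPlane: Entity, lines: Entity) {
    let group = ModelSortGroup(depthPass: nil)
    videoPlane.components.set(ModelSortGroupComponent(group: group, order: 1))
    lines.components.set(ModelSortGroupComponent(group: group, order: 2))
}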
It is worth noting that Apple's MV-HEVC decoder does not currently support an alpha channel.
I ended up making my own video player to get spatial video with alpha.
I did find a solution for this. I generated my USD code, including my ShaderGraph, and then used that USD text to load my Model3D. It works pretty well, and I now have a scrollable list of stereo images that looks really good on Vision Pro.
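The loading side looked roughly like this; a sketch only, with the generated USD string omitted, and in a real app you would write the file once and cache the URL rather than doing it per view:

import RealityKit
import SwiftUI

// Sketch: write generated USD text (ShaderGraph included) to a temporary .usda
// file and hand that URL to Model3D.
struct StereoImageCell: View {
    private let usdURL: URL?

    init(usdText: String) {
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension("usda")
        do {
            try usdText.write(to: url, atomically: true, encoding: .utf8)
            usdURL = url
        } catch {
            usdURL = nil
        }
    }

    var body: some View {
        if let usdURL {
            Model3D(url: usdURL) { model in
                model.resizable().aspectRatio(contentMode: .fit)
            } placeholder: {
                ProgressView()
            }
        }
    }
}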