Any further updates on this? I'm very interested in efficient MV-HEVC encoding for both live and on demand events.
I am not seeing symbols for SwiftUI calls, which makes the perf stack trace difficult to interpret.
Are there symbols available for the related modules that would make this easier to track down in Time Profiler?
An advantage of using the built-in Codable capabilities is not having to write the encoders and decoders.
Happy to do this if we know we can get perf gains, but I would prefer not to do it just to get symbols so we can do iterative experimental testing of different data arrangements.
Symbols would be a very powerful option for investigating.
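For context, this is roughly the boilerplate the synthesized conformance saves us from maintaining. The Price type below is made up purely for illustration: the version on top is all we write today, while the hand-written version underneath is what we would own if we restructured the types.

import Foundation

// Today: the compiler synthesizes init(from:) and encode(to:) for us.
struct Price: Codable {
    var decimal: Double
    var fractional: String
}

// What we would have to write and maintain by hand if we opted out of synthesis.
struct PriceManual: Codable {
    var decimal: Double
    var fractional: String

    private enum CodingKeys: String, CodingKey {
        case decimal, fractional
    }

    init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        decimal = try container.decode(Double.self, forKey: .decimal)
        fractional = try container.decode(String.self, forKey: .fractional)
    }

    func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        try container.encode(decimal, forKey: .decimal)
        try container.encode(fractional, forKey: .fractional)
    }
}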
Shared a picture of the perf test app.
The 'regenerate' button rebuilds the published test structure.
And 'reset stats' clears the values.
As I dive deeper into Swift performance and in-memory representation, I realize that more context may be helpful.
My background is C/C++, so I'm looking to thoroughly understand the underlying behavior.
I see now that my test might be overly simplified compared to the production code. It may be that Swift can do much more optimization of a struct with an 80K array than it can with nested structs that have multiple variable-length arrays in them.
I'll change my test to explore these options.
As an example, here is roughly what the source data looks like in Swift. (Yes, it is a fairly naive generated Codable interpretation of the JSON.)
All feedback and insights appreciated.
import Foundation

struct EventData: Codable {
    var events: [Event]
    var featuredItems: [FeaturedItem]
}

struct Event: Codable {
    var isAvailable: Bool
    var name: String
    var competitionName: String
    var competitionId: String
    var teams: [Team]
    var markets: [Market]
}

struct Team: Codable {
    var name: String
    var teamId: String
    var players: [Player]
}

struct Player: Codable {
    var playerId: String
    var parentId: String?
    var isTeam: Bool
    var lastName: String?
    var firstName: String
    var details: [PlayerDetail]
}

struct PlayerDetail: Codable {
    var position: String?
    var jerseyNumber: String?
}

struct Market: Codable {
    var marketId: String
    var marketTypeId: String
    var name: String
    var isSuspended: Bool
    var selections: [Selection]
}

struct Selection: Codable {
    var selectionId: String
    var handicap: Double
    var competitorId: String
    var name: String
    var odds: Odds
    var competitor: Competitor
}

struct Odds: Codable {
    var decimal: Double
    var numerator: Int
    var denominator: Int
}

struct Competitor: Codable {
    var competitorId: String
    var parentId: String?
    var isTeam: Bool
    var name: String
    var details: [CompetitorDetail]
}

struct CompetitorDetail: Codable {
    var position: String?
    var jerseyNumber: String?
}

struct FeaturedItem: Codable {
    var lastUpdatedAt: String
    var marketId: String
    var marketTypeId: String
    var name: String
    var isSuspended: Bool
    var selections: [FeaturedSelection]
}

struct FeaturedSelection: Codable {
    var selectionId: String
    var handicap: Double
    var competitorId: String
    var name: String
    var odds: Odds
    var competitor: Competitor
}
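For the next round of tests, here is a rough sketch of the two shapes I plan to compare. The type names and the measureDecode helper are made up for illustration, not taken from the production code.

import Foundation

// Shape A: a single struct holding one large flat array (~80K elements).
struct FlatTestData: Codable {
    var values: [Double]
}

// Shape B: nested structs with multiple variable-length arrays, closer to the real feed.
struct NestedChild: Codable {
    var values: [Double]
    var tags: [String]
}

struct NestedTestData: Codable {
    var children: [NestedChild]
}

// A crude timing helper for comparing decode cost of the two shapes.
func measureDecode<T: Codable>(_ type: T.Type, from data: Data) throws -> TimeInterval {
    let start = CFAbsoluteTimeGetCurrent()
    _ = try JSONDecoder().decode(T.self, from: data)
    return CFAbsoluteTimeGetCurrent() - start
}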
I did find a solution for this. I generated my USD code, including my shader graph, and then use that USD text to load my Model3D. It works pretty well, and I now have a scrollable list of stereo images that looks really good on Vision Pro.
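For anyone trying the same approach, here is a minimal sketch of the idea: write the generated USDA text to a temporary file and hand that URL to Model3D. GeneratedStereoCard and usdaText are made-up names for illustration; your generator and view structure will differ.

import SwiftUI
import RealityKit

struct GeneratedStereoCard: View {
    let usdaText: String
    @State private var fileURL: URL?

    var body: some View {
        Group {
            if let fileURL {
                Model3D(url: fileURL)
            } else {
                ProgressView()
            }
        }
        .task {
            // Write the generated USDA text to a temporary file once, then load it.
            let url = FileManager.default.temporaryDirectory
                .appendingPathComponent(UUID().uuidString)
                .appendingPathExtension("usda")
            try? usdaText.write(to: url, atomically: true, encoding: .utf8)
            fileURL = url
        }
    }
}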
It is worth noting that the Apple MV-HEVC decoder does not currently support ALPHA.
I ended up making my own video player to get spatial video with alpha.
I was able to solve this using the model sort order component. I can now render spatial, augmented 3D lines together with my stereoscopic texture content.
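In case it helps, here is a rough sketch of the sort-order setup, assuming two entities that are already attached to the scene; the entity names and the helper function are made up.

import RealityKit

// Entities in the same ModelSortGroup draw in ascending 'order'.
func applyDrawOrder(stereoImageEntity: Entity, lineEntity: Entity) {
    let group = ModelSortGroup()
    stereoImageEntity.components.set(ModelSortGroupComponent(group: group, order: 0))
    lineEntity.components.set(ModelSortGroupComponent(group: group, order: 1)) // drawn after the image
}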
Adding updated diagram to clearly show the BubbleComponent Swift file is copied into the RCP package for the app.
I hope this info is helpful to others in the future.
Submitted: https://feedbackassistant.apple.com/feedback/13902697
Thank you for the prompt response.
I made this diagram. It shows the Component being defined in my SDK and shipped in source format. The customer would then copy the component source into their app's RCP sources directory.
I will test this out soon.
Two Questions:
First, is there a precedent for some SDK files to be shipped as source? (Yes, I guess this is like sample code.) Any recommended practices here to make this feel natural to developers?
Second, can we make a feature request to allow package references, like my framework, to be added to an RCP package and have all the valid public components in the framework added to the RCP components UI?
This would reduce the manual steps of app developers keeping framework components up to date.
More Profit!
Any updates here?
Would like a way to affect stereo layers in a Metal shader on a SwiftUI view.
Any solution here?
Hi Joe,
It's involved, and I have not verified I'm using all the best APIs. I made an effort to ensure that I did not make extra buffer copies. Your implementation may have a different optimal route depending on your texture source.
But this shows the essence of working with the drawable queue.
func drawNextTexture(pixelBuffer: CVPixelBuffer) {
    guard let textureResource = textureResource else { return }
    guard let drawableQueue = drawableQueue else { return }
    guard let scalePipelineState = scalePipelineState else { return }
    guard let scalePipelineDescriptor = scalePipelineDescriptor else { return }
    guard let commandQueue = commandQueue else { return }
    guard let textureCache = textureCache else { return }

    let srcWidth = CVPixelBufferGetWidth(pixelBuffer)
    let srcHeight = CVPixelBufferGetHeight(pixelBuffer)

    autoreleasepool {
        var drawableTry: TextureResource.Drawable?
        do {
            drawableTry = try drawableQueue.nextDrawable() // may stall for up to 1 second.
            guard drawableTry != nil else {
                return // no frame needed
            }
        } catch {
            print("Exception obtaining drawable: \(error)")
            return
        }
        guard let drawable = drawableTry else { return }
        guard let commandBuffer = commandQueue.makeCommandBuffer() else {
            return
        }

        var cvMetalTextureTry: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                  textureCache,
                                                  pixelBuffer,
                                                  nil,
                                                  .bgra8Unorm_srgb, // linear color; todo try srgb
                                                  srcWidth,
                                                  srcHeight,
                                                  0,
                                                  &cvMetalTextureTry)
        guard let cvMetalTexture = cvMetalTextureTry,
              let sourceTexture = CVMetalTextureGetTexture(cvMetalTexture) else {
            return
        }

        // Check if the sizes match
        if srcWidth == textureResource.width && srcHeight == textureResource.height {
            // Sizes match, use a blit command encoder to copy the data to the drawable's texture
            if let blitEncoder = commandBuffer.makeBlitCommandEncoder() {
                blitEncoder.copy(from: sourceTexture,
                                 sourceSlice: 0,
                                 sourceLevel: 0,
                                 sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
                                 sourceSize: MTLSize(width: srcWidth, height: srcHeight, depth: 1),
                                 to: drawable.texture,
                                 destinationSlice: 0,
                                 destinationLevel: 0,
                                 destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
                blitEncoder.endEncoding()
            }
        } else {
            // Sizes do not match, need to scale the source texture to fit the destination texture
            let renderPassDescriptor = MTLRenderPassDescriptor()
            renderPassDescriptor.colorAttachments[0].texture = drawable.texture
            renderPassDescriptor.colorAttachments[0].loadAction = .clear
            renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 1) // Clear to opaque black
            renderPassDescriptor.colorAttachments[0].storeAction = .store

            if let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) {
                renderEncoder.setRenderPipelineState(scalePipelineState)
                renderEncoder.setVertexBuffer(scaleVertexBuffer, offset: 0, index: 0)
                renderEncoder.setVertexBuffer(scaleTexCoordBuffer, offset: 0, index: 1)
                renderEncoder.setFragmentTexture(sourceTexture, index: 0)
                renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
                renderEncoder.endEncoding()
            }
        }

        commandBuffer.present(drawable)
        commandBuffer.commit()
    }
}
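For completeness, this is roughly the one-time setup the function above relies on, kept as a sketch: makeDrawableQueue is a made-up helper name, and the usage flags and pixel format may need adjusting for your pipeline.

import RealityKit
import Metal

// Create a drawable queue matching the texture's size and route the
// texture's future updates through it.
func makeDrawableQueue(for textureResource: TextureResource) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm_srgb,
        width: textureResource.width,
        height: textureResource.height,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)
    textureResource.replace(withDrawables: queue)
    return queue
}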
Good luck.
You may be able to use a video material with a .mov.
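Roughly, that looks like the sketch below; the clip.mov file name and the plane size are placeholders.

import RealityKit
import AVFoundation

// Build a plane entity that plays a bundled .mov through a VideoMaterial.
func makeVideoPlane() -> ModelEntity? {
    guard let url = Bundle.main.url(forResource: "clip", withExtension: "mov") else { return nil }
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)
    let plane = ModelEntity(mesh: .generatePlane(width: 1.0, height: 0.5625),
                            materials: [material])
    player.play()
    return plane
}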
On my phone now, apologies for the terse response.
1: Three Remap nodes, one to interpolate each of R, G, and B.
2: A Combine3 node to merge R, G, and B back into RGB.
I hope this helps.