Posts

Post not yet marked as solved · 2 Replies · 362 Views
Hi everyone, I need to add a spatial video maker to my app, which was written in Objective-C. I found some reference code in Swift; can you help me convert it to Objective-C?

    let left = CMTaggedBuffer(
        tags: [.stereoView(.leftEye), .videoLayerID(leftEyeLayerIndex)],
        pixelBuffer: leftEyeBuffer)
    let right = CMTaggedBuffer(
        tags: [.stereoView(.rightEye), .videoLayerID(rightEyeLayerIndex)],
        pixelBuffer: rightEyeBuffer)
    let result = adaptor.appendTaggedBuffers(
        [left, right],
        withPresentationTime: leftPresentationTs)
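For reference, here is a rough Objective-C sketch of the same flow using the CoreMedia C API (`CMTagCollectionCreate` / `CMTaggedBufferGroupCreate`, iOS 17+ / macOS 14+). It has not been compiled, so treat the names and signatures as assumptions to check against the headers; `adaptor`, `leftEyeBuffer`, `rightEyeBuffer`, and `leftPresentationTs` are taken from the Swift snippet above.

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>

// Tags for each eye: stereo-view tag plus a video-layer-ID tag.
// Layer indices 0 and 1 are illustrative; use your own layer IDs.
CMTag leftTags[] = {
    kCMTagStereoLeftEye,
    CMTagMakeWithSInt64Value(kCMTagCategory_VideoLayerID, 0)
};
CMTag rightTags[] = {
    kCMTagStereoRightEye,
    CMTagMakeWithSInt64Value(kCMTagCategory_VideoLayerID, 1)
};

CMTagCollectionRef leftCollection = NULL;
CMTagCollectionRef rightCollection = NULL;
CMTagCollectionCreate(kCFAllocatorDefault, leftTags, 2, &leftCollection);
CMTagCollectionCreate(kCFAllocatorDefault, rightTags, 2, &rightCollection);

// Pair each tag collection with its pixel buffer, in matching order.
NSArray *tagCollections = @[(__bridge id)leftCollection,
                            (__bridge id)rightCollection];
NSArray *buffers = @[(__bridge id)leftEyeBuffer,
                     (__bridge id)rightEyeBuffer];

CMTaggedBufferGroupRef group = NULL;
CMTaggedBufferGroupCreate(kCFAllocatorDefault,
                          (__bridge CFArrayRef)tagCollections,
                          (__bridge CFArrayRef)buffers,
                          &group);

// `adaptor` is assumed to be an AVAssetWriterInputTaggedPixelBufferGroupAdaptor.
BOOL ok = [adaptor appendTaggedPixelBufferGroup:group
                           withPresentationTime:leftPresentationTs];

if (group) CFRelease(group);
if (leftCollection) CFRelease(leftCollection);
if (rightCollection) CFRelease(rightCollection);
```

The main difference from Swift is that the tagged-buffer pair becomes one `CMTaggedBufferGroup` whose tag-collection array and buffer array line up index by index.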
Posted by pinkywon.
Post not yet marked as solved · 0 Replies · 408 Views
I am playing with the Vision API's optical flow. I found that the default output of VNGenerateOpticalFlowRequestRevision1 and VNGenerateOpticalFlowRequestRevision2 looks very different. I assumed Revision2 would generate a better (or at least similar) optical flow visualization, but the magnitude in the x and y directions is only ~50% to 70% of Revision1's. Any idea what is going on?
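One thing worth ruling out before comparing magnitudes is how the raw flow buffer is read back: the request exposes an `outputPixelFormat` property, and reading a buffer with the wrong component type (e.g. half-float data as 32-bit float) would skew magnitudes systematically. A hedged Objective-C sketch of running one revision and inspecting the raw values, assuming `previousFrame` and `currentFrame` are CVPixelBufferRefs you already have (names are illustrative, not from the Vision docs):

```objc
#import <Vision/Vision.h>

// Flow is computed from `previousFrame` (the handler's image) to
// `currentFrame` (the targeted image).
VNGenerateOpticalFlowRequest *request =
    [[VNGenerateOpticalFlowRequest alloc] initWithTargetedCVPixelBuffer:currentFrame
                                                                options:@{}];
request.revision = VNGenerateOpticalFlowRequestRevision2; // swap in Revision1 to compare

VNImageRequestHandler *handler =
    [[VNImageRequestHandler alloc] initWithCVPixelBuffer:previousFrame options:@{}];
NSError *error = nil;
[handler performRequests:@[request] error:&error];

VNPixelBufferObservation *obs = request.results.firstObject;
CVPixelBufferRef flow = obs.pixelBuffer; // two components per pixel: dx, dy

// Check the actual pixel format before interpreting the bytes.
OSType fmt = CVPixelBufferGetPixelFormatType(flow);
NSLog(@"flow pixel format: %u", (unsigned)fmt);

CVPixelBufferLockBaseAddress(flow, kCVPixelBufferLock_ReadOnly);
// Only valid if fmt is kCVPixelFormatType_TwoComponent32Float:
float *base = (float *)CVPixelBufferGetBaseAddress(flow);
NSLog(@"first pixel flow: dx=%f dy=%f", base[0], base[1]);
CVPixelBufferUnlockBaseAddress(flow, kCVPixelBufferLock_ReadOnly);
```

Running this once per revision on the same frame pair gives a like-for-like magnitude comparison independent of any visualization step.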
Posted by pinkywon.
Post not yet marked as solved · 0 Replies · 441 Views
I am new to ************ HEVC video encoding. In the streams I get from ************ HEVC compression, via CMVideoFormatDescriptionGetHEVCParameterSetAtIndex I get a VPS, an SPS, and a PPS as the NAL units. But when I compare that HEVC stream with an HEVC video I took with my iPhone XS, the iPhone XS video has 2 PPS headers. From searching the internet, people say some HEVC videos have multiple PPSes, so I wonder: is it possible to set multiple HEVC PPS headers in ************, like the iPhone camera does? Any response is helpful. Thanks in advance!
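For anyone wanting to reproduce the comparison, a short sketch of enumerating the parameter sets attached to a format description and classifying each NAL unit by type. This is a hedged example: `formatDesc` is an assumed `CMVideoFormatDescriptionRef` obtained from the encoded stream's sample buffers, and the code has not been compiled.

```objc
#import <CoreMedia/CoreMedia.h>

// First call with index 0 just to learn how many parameter sets there are.
size_t parameterSetCount = 0;
int nalUnitHeaderLength = 0;
CMVideoFormatDescriptionGetHEVCParameterSetAtIndex(formatDesc, 0,
                                                   NULL, NULL,
                                                   &parameterSetCount,
                                                   &nalUnitHeaderLength);

for (size_t i = 0; i < parameterSetCount; i++) {
    const uint8_t *ps = NULL;
    size_t psSize = 0;
    CMVideoFormatDescriptionGetHEVCParameterSetAtIndex(formatDesc, i,
                                                       &ps, &psSize,
                                                       NULL, NULL);
    // The HEVC NAL unit type lives in bits 1..6 of the first header byte:
    // 32 = VPS, 33 = SPS, 34 = PPS.
    uint8_t nalType = (ps[0] >> 1) & 0x3F;
    NSLog(@"parameter set %zu: NAL type %u, %zu bytes", i, nalType, psSize);
}
```

Counting how many entries report NAL type 34 shows directly whether a given stream carries one PPS or several, which makes the iPhone-camera comparison concrete.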
Posted by pinkywon.