Hello 👋
I'm trying to implement Picture in Picture on iOS with WebRTC, but I'm running into some issues. I started by following this Apple article: https://developer.apple.com/documentation/avkit/adopting_picture_in_picture_for_video_calls
When my app goes to the background, the Picture in Picture view does appear, but nothing is displayed inside it.
Searching online, I found this post on Stack Overflow (https://stackoverflow.com/questions/71419635/how-to-add-picture-in-picture-pip-for-webrtc-video-calls-in-ios-swift). It's interesting, but unfortunately I don't know exactly what I'm supposed to do with it...
Here is my PictureInPictureManager:
import AVFoundation
import UIKit

// A UIView backed by an AVSampleBufferDisplayLayer, used as the content of the PiP window.
final class VideoBufferView: UIView {
    override class var layerClass: AnyClass {
        AVSampleBufferDisplayLayer.self
    }

    var sampleBufferDisplayLayer: AVSampleBufferDisplayLayer {
        layer as! AVSampleBufferDisplayLayer
    }
}
import AVKit

final class PictureInPictureManager: NSObject {
    static let shared: PictureInPictureManager = .init()

    private override init() { }

    private var pipController: AVPictureInPictureController?
    private var bufferView: VideoBufferView = .init()

    func configure(for videoView: UIView) {
        if AVPictureInPictureController.isPictureInPictureSupported() {
            let pipVideoCallViewController: AVPictureInPictureVideoCallViewController = .init()
            pipVideoCallViewController.preferredContentSize = CGSize(width: 108, height: 192)

            // Use the stored bufferView (a first version created a second, local
            // VideoBufferView here, so frames were enqueued on a view that was never
            // on screen) and give it a frame so it isn't zero-sized.
            bufferView.frame = pipVideoCallViewController.view.bounds
            bufferView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
            pipVideoCallViewController.view.addSubview(bufferView)

            let pipContentSource: AVPictureInPictureController.ContentSource = .init(
                activeVideoCallSourceView: videoView,
                contentViewController: pipVideoCallViewController
            )

            pipController = .init(contentSource: pipContentSource)
            pipController?.canStartPictureInPictureAutomaticallyFromInline = true
            pipController?.delegate = self
        } else {
            print("❌ PIP not supported...")
        }
    }
}

extension PictureInPictureManager: AVPictureInPictureControllerDelegate {
    // Delegate methods omitted here; I only use them for logging.
}
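For context, this is roughly how I wire it up from my call view controller (simplified; remoteVideoView is the RTCMTLVideoView that renders the remote track in my app):

override func viewDidLoad() {
    super.viewDidLoad()
    // remoteVideoView is the on-screen video view the PiP window should take over from.
    PictureInPictureManager.shared.configure(for: remoteVideoView)
}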
With this code, the Picture in Picture view still appears empty.
I've read several articles that talk about feeding the layer with sample buffers, but I'm not sure how to do that with WebRTC...
I tried adding these functions to my PictureInPictureManager:
func updateBuffer(with pixelBuffer: CVPixelBuffer) {
    if let sampleBuffer = createSampleBufferFrom(pixelBuffer: pixelBuffer) {
        bufferView.sampleBufferDisplayLayer.enqueue(sampleBuffer)
    } else {
        print("❌ Sample buffer error...")
    }
}
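One thing I found while digging (it may or may not be related): AVSampleBufferDisplayLayer can apparently get into a failed state, after which it silently drops everything you enqueue until it is flushed. So I also tried guarding the enqueue like this:

// Experimental guard before the enqueue: recover the layer if it has failed.
if bufferView.sampleBufferDisplayLayer.status == .failed {
    bufferView.sampleBufferDisplayLayer.flush()
}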
private func createSampleBufferFrom(pixelBuffer: CVPixelBuffer) -> CMSampleBuffer? {
    // An empty CMSampleTimingInfo() carries invalid timestamps, which the display
    // layer seems to reject, so the frame is stamped with the current host time.
    var timingInfo = CMSampleTimingInfo(
        duration: .invalid,
        presentationTimeStamp: CMClockGetTime(CMClockGetHostTimeClock()),
        decodeTimeStamp: .invalid
    )

    // Create a format description for the pixel buffer
    var formatDescription: CMVideoFormatDescription?
    let formatDescriptionError = CMVideoFormatDescriptionCreateForImageBuffer(
        allocator: kCFAllocatorDefault,
        imageBuffer: pixelBuffer,
        formatDescriptionOut: &formatDescription
    )
    guard formatDescriptionError == noErr, let formatDescription = formatDescription else {
        print("❌ Error creating format description: \(formatDescriptionError)")
        return nil
    }

    // Create a sample buffer wrapping the pixel buffer
    var sampleBuffer: CMSampleBuffer?
    let sampleBufferError = CMSampleBufferCreateReadyWithImageBuffer(
        allocator: kCFAllocatorDefault,
        imageBuffer: pixelBuffer,
        formatDescription: formatDescription,
        sampleTiming: &timingInfo,
        sampleBufferOut: &sampleBuffer
    )
    guard sampleBufferError == noErr, let sampleBuffer = sampleBuffer else {
        print("❌ Error creating sample buffer: \(sampleBufferError)")
        return nil
    }

    // Ask the layer to display the frame immediately rather than scheduling it.
    if let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: true),
       CFArrayGetCount(attachments) > 0 {
        let attachment = unsafeBitCast(CFArrayGetValueAtIndex(attachments, 0), to: CFMutableDictionary.self)
        CFDictionarySetValue(
            attachment,
            Unmanaged.passUnretained(kCMSampleAttachmentKey_DisplayImmediately).toOpaque(),
            Unmanaged.passUnretained(kCFBooleanTrue).toOpaque()
        )
    }

    return sampleBuffer
}
But by doing that, I get this error message:
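For completeness, this is how the frames get from WebRTC to updateBuffer(with:). I attach a custom renderer to the remote video track; I wrote it from my reading of the SDK headers, so this part may be wrong too (it only handles CVPixelBuffer-backed frames):

import WebRTC

// Bridge from WebRTC to the PiP manager: forwards each decoded frame's pixel buffer.
final class PipFrameRenderer: NSObject, RTCVideoRenderer {
    func setSize(_ size: CGSize) {
        // Nothing to do; AVSampleBufferDisplayLayer handles scaling on its own.
    }

    func renderFrame(_ frame: RTCVideoFrame?) {
        // Other buffer types (e.g. RTCI420Buffer) would need a conversion first.
        guard let buffer = frame?.buffer as? RTCCVPixelBuffer else { return }
        PictureInPictureManager.shared.updateBuffer(with: buffer.pixelBuffer)
    }
}

// Attached during call setup:
// remoteVideoTrack.add(PipFrameRenderer())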
Any help is welcome! 🙏 Thanks,
Alexandre