How to turn the Portrait effect on and off in the live camera view with AVFoundation in Swift?

I am using AVFoundation for a live camera view. I can get the device from the current video input (of type AVCaptureDeviceInput) like this:

let device = videoInput.device

The device's active format has an isPortraitEffectSupported property. How can I turn the Portrait effect on and off in the live camera view?
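For reference, I check support like this (a minimal sketch; isPortraitEffectSupported and isPortraitEffectActive are available on iOS 15+, and videoInput is my existing AVCaptureDeviceInput):

```swift
import AVFoundation

// Minimal sketch (iOS 15+): both properties are read-only on
// AVCaptureDevice; they report what the system/user has chosen.
func logPortraitEffectState(for videoInput: AVCaptureDeviceInput) {
    let device = videoInput.device
    if device.activeFormat.isPortraitEffectSupported {
        print("Portrait effect supported; currently active: \(device.isPortraitEffectActive)")
    } else {
        print("Portrait effect not supported by the active format")
    }
}
```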

I setup the camera like this:

private var videoInput: AVCaptureDeviceInput!
private let session = AVCaptureSession()
private(set) var isSessionRunning = false
private var renderingEnabled = true
private let videoDataOutput = AVCaptureVideoDataOutput()
private let photoOutput = AVCapturePhotoOutput()
private(set) var cameraPosition: AVCaptureDevice.Position = .front

func configureSession() {
        
        sessionQueue.async { [weak self] in
            
            guard let strongSelf = self else { return }
            
            if strongSelf.setupResult != .success {
                return
            }
            
            let defaultVideoDevice: AVCaptureDevice? = strongSelf.videoDeviceDiscoverySession.devices.first(where: {$0.position == strongSelf.cameraPosition})
            
            guard let videoDevice = defaultVideoDevice else {
                print("Could not find any video device")
                strongSelf.setupResult = .configurationFailed
                return
            }
            
            do {
                
                strongSelf.videoInput = try AVCaptureDeviceInput(device: videoDevice)
                
            } catch {
                print("Could not create video device input: \(error)")
                strongSelf.setupResult = .configurationFailed
                return
            }
            
            strongSelf.session.beginConfiguration()
            
            strongSelf.session.sessionPreset = AVCaptureSession.Preset.photo
            
            
            // Add a video input.
            guard strongSelf.session.canAddInput(strongSelf.videoInput) else {
                print("Could not add video device input to the session")
                strongSelf.setupResult = .configurationFailed
                strongSelf.session.commitConfiguration()
                return
            }
            strongSelf.session.addInput(strongSelf.videoInput)
            
            // Add a video data output
            if strongSelf.session.canAddOutput(strongSelf.videoDataOutput) {
                strongSelf.session.addOutput(strongSelf.videoDataOutput)
                strongSelf.videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
                strongSelf.videoDataOutput.setSampleBufferDelegate(self, queue: strongSelf.dataOutputQueue)
                
            } else {
                print("Could not add video data output to the session")
                strongSelf.setupResult = .configurationFailed
                strongSelf.session.commitConfiguration()
                return
            }
            
            // Add photo output
            if strongSelf.session.canAddOutput(strongSelf.photoOutput) {
                strongSelf.session.addOutput(strongSelf.photoOutput)
                
                strongSelf.photoOutput.isHighResolutionCaptureEnabled = true
                
                
            } else {
                print("Could not add photo output to the session")
                strongSelf.setupResult = .configurationFailed
                strongSelf.session.commitConfiguration()
                return
            }
            
            strongSelf.session.commitConfiguration()
            
        }
    }
    
    func prepareSession(completion: @escaping (SessionSetupResult) -> Void) {
        
        sessionQueue.async { [weak self] in
            guard let strongSelf = self else { return }
            switch strongSelf.setupResult {
            case .success:
                strongSelf.addObservers()
                
                
                if strongSelf.photoOutput.isDepthDataDeliverySupported {
                    strongSelf.photoOutput.isDepthDataDeliveryEnabled = true
                }
                
                if let photoOrientation = AVCaptureVideoOrientation(interfaceOrientation: interfaceOrientation) {
                    if let unwrappedPhotoOutputConnection = strongSelf.photoOutput.connection(with: .video) {
                        unwrappedPhotoOutputConnection.videoOrientation = photoOrientation
                    }
                }
                
                strongSelf.dataOutputQueue.async {
                    strongSelf.renderingEnabled = true
                }
                
                strongSelf.session.startRunning()
                strongSelf.isSessionRunning = strongSelf.session.isRunning
                
                strongSelf.mainQueue.async {
                    strongSelf.previewView.videoPreviewLayer.session = strongSelf.session
                }
                
                completion(strongSelf.setupResult)
            default:
                completion(strongSelf.setupResult)
                
            }
        }
    }

Then I set isPortraitEffectsMatteDeliveryEnabled like this:

func setPortraitEffectActive(_ state: Bool) {
    sessionQueue.async { [weak self] in
        guard let strongSelf = self else { return }
        if strongSelf.photoOutput.isPortraitEffectsMatteDeliverySupported {
            strongSelf.photoOutput.isPortraitEffectsMatteDeliveryEnabled = state
        }
    }
}

However, I don't see the Portrait effect in the live camera view! Any ideas why?

Replies

Hello,

As noted in this video (https://developer.apple.com/videos/play/wwdc2021/10047/?time=1353):

The Portrait effect is always under user control through Control Center only.

In other words, your app has no way to actually activate or deactivate the Portrait effect.
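Although your app can't toggle the effect, it can react to the user toggling it. As a minimal sketch (assuming you hold an AVCaptureDevice; isPortraitEffectActive is KVO-observable on iOS 15+):

```swift
import AVFoundation

final class PortraitEffectObserver {
    private var observation: NSKeyValueObservation?

    // Sketch: isPortraitEffectActive is read-only and changes when
    // the user toggles the Portrait effect in Control Center.
    func startObserving(_ device: AVCaptureDevice) {
        observation = device.observe(\.isPortraitEffectActive,
                                     options: [.initial, .new]) { device, _ in
            print("Portrait effect active: \(device.isPortraitEffectActive)")
        }
    }
}
```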

You said:

Then I set isPortraitEffectsMatteDeliveryEnabled like this

The "Portrait Effects Matte" is distinctly a different feature from the Portrait effect, enabling it delivers a portrait effects matte image to your app when you capture a photo, whereas the Portrait effect is an effect that is applied to the video stream, and then your app is delivered the resulting video stream.


I also noticed the following in your code snippet:

strongSelf.videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]

Though it is unrelated to your question, I strongly recommend that you read over TN3121, to determine if your app truly needs to be using that pixel format.
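For example, if your downstream processing can consume a biplanar 4:2:0 format, a change like this avoids a BGRA conversion (a sketch; whether it suits you depends on what consumes the buffers):

```swift
import AVFoundation

// Sketch: prefer a device-native biplanar YUV format over BGRA when
// possible; it skips a conversion and reduces memory bandwidth.
let videoDataOutput = AVCaptureVideoDataOutput()
videoDataOutput.videoSettings = [
    kCVPixelBufferPixelFormatTypeKey as String:
        Int(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)
]
```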

  • Thanks for mentioning the pixel buffer. I changed that to kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, and I see a tangible decrease in memory usage! Thanks :)
