I am using AVFoundation for a live camera view. I can get my device from the current video input (of type AVCaptureDeviceInput) like this:
let device = videoInput.device
The device's active format has an isPortraitEffectSupported property. How can I turn the Portrait Effect on and off in the live camera view?
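For reference, the relevant flags can be read like this (a minimal sketch assuming the iOS 15 AVCaptureDevice API; isPortraitEffectEnabled and isPortraitEffectActive are read-only state, not switches):
// Minimal sketch (assumes iOS 15+): reading the Portrait Effect state for the current device.
let device = videoInput.device
// Can the current active format produce the Portrait Effect at all?
let isSupported = device.activeFormat.isPortraitEffectSupported
// Has the user enabled the effect for this app (system-wide, via Control Center)?
let isEnabledByUser = AVCaptureDevice.isPortraitEffectEnabled
// Is the effect currently being applied to this device's video stream?
let isActive = device.isPortraitEffectActive
print("Portrait Effect supported: \(isSupported), enabled: \(isEnabledByUser), active: \(isActive)")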
I set up the camera like this:
private var videoInput: AVCaptureDeviceInput!
private let session = AVCaptureSession()
private(set) var isSessionRunning = false
private var renderingEnabled = true
private let videoDataOutput = AVCaptureVideoDataOutput()
private let photoOutput = AVCapturePhotoOutput()
private(set) var cameraPosition: AVCaptureDevice.Position = .front
func configureSession() {
    sessionQueue.async { [weak self] in
        guard let strongSelf = self else { return }
        if strongSelf.setupResult != .success {
            return
        }
        let defaultVideoDevice: AVCaptureDevice? = strongSelf.videoDeviceDiscoverySession.devices.first(where: { $0.position == strongSelf.cameraPosition })
        guard let videoDevice = defaultVideoDevice else {
            print("Could not find any video device")
            strongSelf.setupResult = .configurationFailed
            return
        }
        do {
            strongSelf.videoInput = try AVCaptureDeviceInput(device: videoDevice)
        } catch {
            print("Could not create video device input: \(error)")
            strongSelf.setupResult = .configurationFailed
            return
        }
        strongSelf.session.beginConfiguration()
        strongSelf.session.sessionPreset = AVCaptureSession.Preset.photo
        // Add a video input.
        guard strongSelf.session.canAddInput(strongSelf.videoInput) else {
            print("Could not add video device input to the session")
            strongSelf.setupResult = .configurationFailed
            strongSelf.session.commitConfiguration()
            return
        }
        strongSelf.session.addInput(strongSelf.videoInput)
        // Add a video data output.
        if strongSelf.session.canAddOutput(strongSelf.videoDataOutput) {
            strongSelf.session.addOutput(strongSelf.videoDataOutput)
            strongSelf.videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
            strongSelf.videoDataOutput.setSampleBufferDelegate(self, queue: strongSelf.dataOutputQueue)
        } else {
            print("Could not add video data output to the session")
            strongSelf.setupResult = .configurationFailed
            strongSelf.session.commitConfiguration()
            return
        }
        // Add photo output.
        if strongSelf.session.canAddOutput(strongSelf.photoOutput) {
            strongSelf.session.addOutput(strongSelf.photoOutput)
            strongSelf.photoOutput.isHighResolutionCaptureEnabled = true
        } else {
            print("Could not add photo output to the session")
            strongSelf.setupResult = .configurationFailed
            strongSelf.session.commitConfiguration()
            return
        }
        strongSelf.session.commitConfiguration()
    }
}
func prepareSession(completion: @escaping (SessionSetupResult) -> Void) {
    sessionQueue.async { [weak self] in
        guard let strongSelf = self else { return }
        switch strongSelf.setupResult {
        case .success:
            strongSelf.addObservers()
            if strongSelf.photoOutput.isDepthDataDeliverySupported {
                strongSelf.photoOutput.isDepthDataDeliveryEnabled = true
            }
            if let photoOrientation = AVCaptureVideoOrientation(interfaceOrientation: interfaceOrientation) {
                if let unwrappedPhotoOutputConnection = strongSelf.photoOutput.connection(with: .video) {
                    unwrappedPhotoOutputConnection.videoOrientation = photoOrientation
                }
            }
            strongSelf.dataOutputQueue.async {
                strongSelf.renderingEnabled = true
            }
            strongSelf.session.startRunning()
            strongSelf.isSessionRunning = strongSelf.session.isRunning
            strongSelf.mainQueue.async {
                strongSelf.previewView.videoPreviewLayer.session = strongSelf.session
            }
            completion(strongSelf.setupResult)
        default:
            completion(strongSelf.setupResult)
        }
    }
}
Then I set isPortraitEffectsMatteDeliveryEnabled like this:
func setPortraitAffectActive(_ state: Bool) {
    sessionQueue.async { [weak self] in
        guard let strongSelf = self else { return }
        if strongSelf.photoOutput.isPortraitEffectsMatteDeliverySupported {
            strongSelf.photoOutput.isPortraitEffectsMatteDeliveryEnabled = state
        }
    }
}
However, I don't see any Portrait Effect in the live camera view! Any ideas why?
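For completeness, a minimal sketch of observing whether the effect is currently applied to the device (assuming the iOS 15 KVO-observable isPortraitEffectActive property):
// Sketch (assumes iOS 15+): observe the system Portrait Effect state on the capture device.
private var portraitEffectObservation: NSKeyValueObservation?

func observePortraitEffect(on device: AVCaptureDevice) {
    portraitEffectObservation = device.observe(\.isPortraitEffectActive, options: [.initial, .new]) { device, _ in
        print("Portrait Effect active: \(device.isPortraitEffectActive)")
    }
}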
I am trying to iterate over images in the Photo Library and extract faces using CIDetector. The images are required to keep their original resolution. To do so, I take the following steps:
1- Getting assets given a date interval (usually more than a year)
func loadAssets(from fromDate: Date, to toDate: Date, completion: @escaping ([PHAsset]) -> Void) {
    fetchQueue.async {
        let authStatus = PHPhotoLibrary.authorizationStatus()
        if authStatus == .authorized || authStatus == .limited {
            let options = PHFetchOptions()
            options.predicate = NSPredicate(format: "creationDate >= %@ && creationDate <= %@", fromDate as CVarArg, toDate as CVarArg)
            options.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
            let result: PHFetchResult = PHAsset.fetchAssets(with: .image, options: options)
            var _assets = [PHAsset]()
            result.enumerateObjects { object, count, stop in
                _assets.append(object)
            }
            completion(_assets)
        } else {
            completion([])
        }
    }
}
where:
let fetchQueue = DispatchQueue.global(qos: .background)
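A typical call site for this looks like the following (the date interval here is just a placeholder):
// Hypothetical call site: fetch roughly the last year of photos.
let now = Date()
let oneYearAgo = Calendar.current.date(byAdding: .year, value: -1, to: now) ?? now
loadAssets(from: oneYearAgo, to: now) { assets in
    print("Fetched \(assets.count) assets")
}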
2- Extracting faces
I then extract face images using:
func detectFaces(in image: UIImage, accuracy: String = CIDetectorAccuracyLow, completion: @escaping ([UIImage]) -> Void) {
    faceDetectionQueue.async {
        var faceImages = [UIImage]()
        let outputImageSize: CGFloat = 200.0 / image.scale
        guard let ciImage = CIImage(image: image),
              let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: [CIDetectorAccuracy: accuracy]) else { completion(faceImages); return }
        let faces = faceDetector.features(in: ciImage) // Crash happens here
        let group = DispatchGroup()
        for face in faces {
            group.enter()
            if let face = face as? CIFaceFeature {
                let faceBounds = face.bounds
                let offset: CGFloat = floor(min(faceBounds.width, faceBounds.height) * 0.2)
                let inset = UIEdgeInsets(top: -offset, left: -offset, bottom: -offset, right: -offset)
                let rect = faceBounds.inset(by: inset)
                let croppedFaceImage = ciImage.cropped(to: rect)
                let scaledImage = croppedFaceImage
                    .transformed(by: CGAffineTransform(scaleX: outputImageSize / croppedFaceImage.extent.width,
                                                       y: outputImageSize / croppedFaceImage.extent.height))
                faceImages.append(UIImage(ciImage: scaledImage))
                group.leave()
            } else {
                group.leave()
            }
        }
        group.notify(queue: self.faceDetectionQueue) {
            completion(faceImages)
        }
    }
}
where:
private let faceDetectionQueue = DispatchQueue(label: "face detection queue",
                                               qos: DispatchQoS.background,
                                               attributes: [],
                                               autoreleaseFrequency: DispatchQueue.AutoreleaseFrequency.workItem,
                                               target: nil)
I use the following extension to get the image from assets:
extension PHAsset {
    var image: UIImage {
        autoreleasepool {
            let manager = PHImageManager.default()
            let options = PHImageRequestOptions()
            var thumbnail = UIImage()
            let rect = CGRect(x: 0, y: 0, width: pixelWidth, height: pixelHeight)
            options.isSynchronous = true
            options.deliveryMode = .highQualityFormat
            options.resizeMode = .exact
            options.normalizedCropRect = rect
            options.isNetworkAccessAllowed = true
            manager.requestImage(for: self, targetSize: rect.size, contentMode: .aspectFit, options: options, resultHandler: { (result, info) -> Void in
                if let result = result {
                    thumbnail = result
                } else {
                    thumbnail = UIImage()
                }
            })
            return thumbnail
        }
    }
}
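Putting the pieces together, the processing loop looks roughly like this (a simplified sketch of the call site, not the exact app code):
// Simplified wiring of the steps above: fetch assets, then detect faces in each full-size image.
// oneYearAgo / now are the dates from the earlier sketch.
loadAssets(from: oneYearAgo, to: now) { assets in
    for asset in assets {
        autoreleasepool {
            // The pool covers the synchronous, original-resolution image request.
            let fullSizeImage = asset.image
            self.detectFaces(in: fullSizeImage) { faceImages in
                // Use the cropped face images here.
            }
        }
    }
}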
The code works fine for a few (usually fewer than 50) assets, but for a larger number of images it crashes at:
let faces = faceDetector.features(in: ciImage) // Crash happens here
I get this error:
validateComputeFunctionArguments:858: failed assertion `Compute Function(ciKernelMain): missing sampler binding at index 0 for [0].'
If I reduce the size of the image fed to detectFaces(in:), e.g. to 400 px, I can analyze a few hundred images (usually fewer than 1000), but as I mentioned, using the asset's image at its original size is a requirement. My guess is that it has something to do with a memory issue when I try to extract faces with CIDetector. Any idea what this error is about and how I can fix the issue?
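For reference, one mitigation I could try is creating the CIDetector (and an explicit CIContext) once and reusing it, wrapping each detection in an autoreleasepool; a rough sketch (not confirmed to avoid the assertion):
// Sketch of a possible mitigation (assumption: a single shared CIContext/CIDetector
// reduces Metal and memory pressure; this is not confirmed to fix the assertion).
private let sharedContext = CIContext(options: [.useSoftwareRenderer: false])
private lazy var sharedFaceDetector: CIDetector? = CIDetector(ofType: CIDetectorTypeFace,
                                                              context: self.sharedContext,
                                                              options: [CIDetectorAccuracy: CIDetectorAccuracyLow])

func faceBounds(in image: UIImage) -> [CGRect] {
    return autoreleasepool { () -> [CGRect] in
        guard let ciImage = CIImage(image: image),
              let detector = sharedFaceDetector else { return [] }
        return detector.features(in: ciImage).map { $0.bounds }
    }
}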
I have an AVFoundation-based live camera view. There is a button by which I am calling AVCaptureDevice.showSystemUserInterface(.videoEffects) so that the user can activate the Portrait Effect. I have also opted in by setting "Camera — Opt in for Portrait Effect" to true in info.plist. However, upon tapping on the button I see this screen (The red crossed-off part is the app name):
I am expecting to see something like this:
Do you have any idea why that might be?
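For completeness, the button action is essentially just this call:
// Button action: ask the system to present its video effects UI (iOS 15+).
if #available(iOS 15.0, *) {
    AVCaptureDevice.showSystemUserInterface(.videoEffects)
}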
I have a camera view based on AVFoundation. Any idea how I can switch the Portrait Effect on from iOS's Control Center, like Snapchat does?
I am trying to develop an ARKit iOS app. Is it possible to use Apple Watch to see a live view from ARKit (run from the parent iOS app)?