Easy way to output live

Hey everyone 😊, I am building an app that includes a live camera feed preview. That's all I need to do, alongside identifying the images with Create ML's image classification. I don't need to capture images at all. I've seen some very complicated tutorials; I just want to use a couple of lines of code.

Answered by DTS Engineer in 819778022

Hello @O3DP,

You can use AVCaptureVideoPreviewLayer to display a live video preview of the camera.

You can also access the camera stream samples using an AVCaptureVideoDataOutput, and then process those samples using your image classification model.
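
For example, here is a rough, untested sketch of a sample-buffer delegate that runs a Create ML classifier on each frame via Vision (MyImageClassifier is a placeholder name for the class Xcode generates from your .mlmodel file):

import AVFoundation
import CoreML
import Vision

// A sketch of a delegate that classifies each camera frame with a Create ML model.
// MyImageClassifier is a placeholder for the class generated from your .mlmodel file.
final class FrameClassifier: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    
    // Wrap the Core ML model for use with Vision.
    private lazy var request: VNCoreMLRequest? = {
        guard let model = try? VNCoreMLModel(for: MyImageClassifier(configuration: MLModelConfiguration()).model) else {
            return nil
        }
        return VNCoreMLRequest(model: model) { request, _ in
            // Log the top classification for each frame.
            if let top = request.results?.first as? VNClassificationObservation {
                print("\(top.identifier): \(top.confidence)")
            }
        }
    }()
    
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let request, let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        
        // Run the classification request on the frame's pixel buffer.
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        try? handler.perform([request])
    }
}

The delegate is called on whatever queue you pass to setSampleBufferDelegate(_:queue:), so any UI updates driven by the results need to be dispatched back to the main actor.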

Setting all of this up will require some familiarity with AVCaptureSession, and it will be more than just a couple of lines of code. Take a look at this sample code as a reference.

— Greg

Hello Greg, thank you so much for your time. Could you please help me find just the bare-bones code for the preview? I am a beginner, and I am quite confused. I took a look at this: https://developer.apple.com/documentation/avfoundation/avcam-building-a-camera-app. Using Xcode, I tried to delete the irrelevant code, but I ran into errors. Thank you so much, and have a happy new year.

Hello @O3DP,

Sure, here is minimal preview code. First, the view code:

import SwiftUI
import AVFoundation

/// A SwiftUI wrapper for a UIKit view whose backing layer is the camera preview layer.
struct CameraPreview: UIViewRepresentable {
    
    let session: AVCaptureSession
    
    func makeUIView(context: Context) -> some UIView {
        
        let view = CameraPreviewView(frame: .zero)
        
        // Point the preview layer at the capture session so it displays live video.
        view.previewLayer.session = session
        
        return view
    }
    
    func updateUIView(_ uiView: UIViewType, context: Context) {}
}

final class CameraPreviewView: UIView {
    /// The view's backing layer, typed as a preview layer.
    var previewLayer: AVCaptureVideoPreviewLayer {
        layer as! AVCaptureVideoPreviewLayer
    }
    
    // Make AVCaptureVideoPreviewLayer the class of this view's backing layer.
    override class var layerClass: AnyClass {
        AVCaptureVideoPreviewLayer.self
    }
}

Then, the capture session code:

import SwiftUI
@preconcurrency import AVFoundation

struct ContentView: View {
    
    nonisolated let session = AVCaptureSession()
        
    var body: some View {
        CameraPreview(session: session)
            .task {
                // Configure and start the session off the main actor.
                Task.detached {
                    do {
                        guard let device = AVCaptureDevice.default(for: .video) else {
                            fatalError("No video devices available.")
                        }
                        
                        let input = try AVCaptureDeviceInput(device: device)
                        
                        // Add the camera as an input to the session.
                        session.beginConfiguration()
                        session.addInput(input)
                        session.commitConfiguration()
                        
                        // Start the flow of data from the camera to the preview layer.
                        session.startRunning()
                    } catch {
                        fatalError(error.localizedDescription)
                    }
                }
            }
    }
}

That is about as minimal as it can get :)
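
If you later want to feed frames into your classifier, one possible (untested) next step is to attach an AVCaptureVideoDataOutput inside the same configuration block, assuming you keep a FrameClassifier instance from the earlier sketch alive for the lifetime of the session:

// frameClassifier is assumed to be a stored FrameClassifier instance (see the earlier
// sketch); it has to stay alive for as long as the session is running.
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(frameClassifier, queue: DispatchQueue(label: "camera.frames"))

session.beginConfiguration()
session.addInput(input)
if session.canAddOutput(videoOutput) {
    session.addOutput(videoOutput)
}
session.commitConfiguration()

session.startRunning()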

— Greg
