Hello,
I'm new to coding and trying to figure out a crash I'm getting back from App Review. I have been unable to replicate the crash or symbolicate the logs, but the reviewer hinted at what happened: they "tapped on the scan button" in my app, which opens a camera view that starts a VNRecognizeTextRequest.
My app is 99% SwiftUI, and since I only started learning to code in March 2020, the UIKit elements in the camera view are really tough for me.
If the reviewer is using an older iPad and I have specified the 4K preset, would that crash their device instead of falling back to the native resolution?
```swift
session.sessionPreset = AVCaptureSession.Preset.hd4K3840x2160
```
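Is a guard like this the right fix? This is just a sketch I put together from the AVCaptureSession docs; the 1080p fallback is my own guess at a sensible default, not something from my shipping code:

```swift
// Only request 4K if this device's camera actually supports it;
// otherwise fall back to a preset every device can handle.
if session.canSetSessionPreset(.hd4K3840x2160) {
    session.sessionPreset = .hd4K3840x2160
} else {
    session.sessionPreset = .hd1920x1080 // guessed fallback; the default .high may also be fine
}
```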
Ultimately, I'm trying to learn whether the way I've set up the request or session could explain a crash that happens at App Review but not on any device or simulator I have access to, and how to guard against an older device running iOS 14 whose camera/hardware is somehow causing this crash I can't replicate!
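For example, is this the kind of capability check I should be adding before configuring the device? A sketch based on my reading of the AVCaptureDevice docs (the device lookup mirrors my real code below; the supported-capability check is the new part):

```swift
// Sketch: look up the camera and only apply settings the hardware supports.
guard let videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                for: .video,
                                                position: .unspecified) else {
    print("No camera detected")
    return
}
do {
    try videoDevice.lockForConfiguration()
    // My understanding from the docs is that setting autoFocusRangeRestriction
    // on hardware that doesn't support it raises an exception, so gate it on
    // the capability flag first.
    if videoDevice.isAutoFocusRangeRestrictionSupported {
        videoDevice.autoFocusRangeRestriction = .near
    }
    videoDevice.videoZoomFactor = 1
    videoDevice.unlockForConfiguration()
} catch {
    print("Could not configure the camera: \(error)")
}
```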
Thank you for helping!
Full Code below:
```swift
request = VNRecognizeTextRequest(completionHandler: recognizeTextHandler)

// setup session
let session = AVCaptureSession()
session.beginConfiguration()
let videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .unspecified)
guard videoDevice != nil,
      let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice!),
      session.canAddInput(videoDeviceInput) else {
    print("No camera detected")
    return
}
session.addInput(videoDeviceInput)
session.commitConfiguration()
// session.sessionPreset = AVCaptureSession.Preset.hd4K3840x2160
self.captureSession = session

// setup video output
let videoDataOutput = AVCaptureVideoDataOutput()
videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey: NSNumber(value: kCVPixelFormatType_32BGRA)] as [String: Any]
videoDataOutput.alwaysDiscardsLateVideoFrames = true
let queue = DispatchQueue(label: "com.MyApp.VideoQueue")
videoDataOutput.setSampleBufferDelegate(self, queue: queue)
guard captureSession!.canAddOutput(videoDataOutput) else { fatalError() }
if session.canAddOutput(videoDataOutput) {
    session.addOutput(videoDataOutput)
    videoDataOutput.connection(with: AVMediaType.video)?.preferredVideoStabilizationMode = .standard
} else {
    print("Could not add VDO output")
    return
}
do {
    try videoDevice!.lockForConfiguration()
    videoDevice!.videoZoomFactor = 1
    videoDevice!.autoFocusRangeRestriction = .near
    videoDevice!.unlockForConfiguration()
} catch {
    print("Could not set zoom level due to error: \(error)")
    return
}
videoConnection = videoDataOutput.connection(with: .video)
}

override class var layerClass: AnyClass {
    AVCaptureVideoPreviewLayer.self
}

required init?(coder: NSCoder) {
    fatalError("init(coder:) has not been implemented")
}

var videoPreviewLayer: AVCaptureVideoPreviewLayer {
    return layer as! AVCaptureVideoPreviewLayer
}

override func didMoveToSuperview() {
    super.didMoveToSuperview()
    if nil != self.superview {
        self.videoPreviewLayer.session = self.captureSession
        self.videoPreviewLayer.videoGravity = .resizeAspectFill
        self.captureSession?.startRunning()
    } else {
        self.captureSession?.stopRunning()
    }
}

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if connection.videoOrientation != .portrait {
        connection.videoOrientation = .portrait
        return
    }
    if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let requestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up, options: [:])
        request.recognitionLevel = .accurate
        request.usesCPUOnly = false
        request.usesLanguageCorrection = false
        do {
            try requestHandler.perform([request])
        } catch {
            print(error)
        }
    }
}
}
```