Discuss using the camera on Apple devices.

Posts under Camera tag

182 Posts
Post | Replies | Boosts | Views | Activity

Cannot take picture using requestTakePicture
I am currently updating an application for macOS Sonoma (14.4) which triggers a Canon 60D via USB cable. Unlike what happened back on Mac OS X 10.6, the camera's (ICCameraDevice) description now contains only 2 capabilities:

{
    UUIDString = "00000000-0000-0000-0000-000004A93215";
    autolaunchApplicationPath = "";
    capabilities = (
        ICCameraDeviceCanDeleteOneFile,
        ICCameraDeviceCanAcceptPTPCommands
    );
    class = ICCameraDevice;
    connectionID = 0xffff0001;
    delegate = "<0x600003157ac0>";
    deviceID = 0xffff0001;
    deviceRef = 0xffff0001;
    iconPath = "(null)";
    locationDescription = ICDeviceLocationDescriptionUSB;
    moduleExecutableArchitecture = 0;
    modulePath = "/System/Library/Image Capture/Devices/PTPCamera.app";
    moduleVersion = "1.0";
    name = "Canon EOS 60D";
    persistentIDString = "00000000-0000-0000-0000-000004A93215";
    shared = NO;
    softwareInstallPercentDone = "0.000000";
    transportType = ICTransportTypeUSB;
    type = 0x00000101;
}
timeOffset : 0.000000
hasConfigurableWiFiInterface : N/A
isAccessRestrictedAppleDevice : NO

As you can see, ICCameraDeviceCanTakePicture is not present, so I cannot take a picture with requestTakePicture. Do I need to do anything special to regain these capabilities, as in older versions of macOS? Is my only option to use PTP commands? Thanks!
0
0
521
Mar ’24
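A minimal sketch of the capability check the post above describes, assuming ImageCaptureCore's Swift interface; the Swift type of the `capabilities` elements varies across SDK versions, so the check compares descriptions rather than typed constants, and the helper name is made up.

```swift
import ImageCaptureCore

// Hedged sketch: only call requestTakePicture() when the camera still advertises it;
// otherwise the remaining route is raw PTP commands, as the post suspects.
func takePictureIfSupported(on camera: ICCameraDevice) {
    let canTakePicture = camera.capabilities.contains { "\($0)".contains("CanTakePicture") }
    if canTakePicture {
        camera.requestTakePicture()
    } else {
        // Only ICCameraDeviceCanAcceptPTPCommands is advertised here, so fall back to PTP.
        print("Take-picture capability missing; PTP commands are the fallback")
    }
}
```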
iOS App on Mac fails to detect camera not present
When running an iOS app ("Designed for iPad") on an M1 Mac mini, the UIImagePickerController.isSourceTypeAvailable(.camera) API returns true, leading to a crash (attached) if the camera is selected to upload an image to the app, because my much-loved Mac mini does not have a camera. For the moment I have disabled the camera when the platform is Mac by adding the qualification ProcessInfo().isiOSAppOnMac == false, but this seems like a bug. Or does the crash also happen on Macs with cameras? Other image picker options work fine. Crash log
3
0
903
Mar ’24
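A short sketch of the workaround mentioned in the post above; `isiOSAppOnMac` is the documented ProcessInfo flag, while the helper name is made up.

```swift
import UIKit

// Hedged sketch: treat the camera source as unusable when the iOS app runs on a Mac,
// even though UIImagePickerController reports it as available there.
func cameraSourceIsUsable() -> Bool {
    guard UIImagePickerController.isSourceTypeAvailable(.camera) else { return false }
    if ProcessInfo.processInfo.isiOSAppOnMac { return false } // avoid the reported crash
    return true
}
```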
What do I need to do to take 48MP photos and set the iso and exposure duration successfully?
I want to take 48 MP photos and have the ISO and exposure duration I set actually applied.

Configuration
Set the active AVCaptureDevice.Format to a format whose supportedMaxPhotoDimensions contains the (8064, 6048) size
Set AVCapturePhotoOutput.maxPhotoDimensions to (8064, 6048)
If AVCaptureDevice.isExposureModeSupported(.custom), set AVCaptureDevice.exposureMode = .custom
Call AVCaptureDevice.setExposureModeCustomWithDuration:1/20 ISO:100 completionHandler:handler

Taking a photo
Set AVCapturePhotoSettings.maxPhotoDimensions to (8064, 6048)

The API discussion of setExposureModeCustomWithDuration (https://developer.apple.com/documentation/avfoundation/avcapturedevice/1624646-setexposuremodecustomwithduratio/) says: "To ensure that the receiver's ISO and exposureDuration values are honored while in AVCaptureExposureModeCustom or AVCaptureExposureModeLocked, you must set your AVCapturePhotoSettings.photoQualityPrioritization property to AVCapturePhotoQualityPrioritizationSpeed."

But at the last step, when I set AVCapturePhotoSettings.photoQualityPrioritization = .speed, the photo resolution is (4000, 3000), only 12 MP, not (8000, 6000); the ISO and exposure duration on the photo are the same as what I set. And when I set AVCapturePhotoSettings.photoQualityPrioritization = .balanced / .quality, the photo is (8000, 6000), but the ISO and exposure duration obtained on the photo differ from the ones I set. What do I need to do to take 48 MP photos and set the ISO and exposure duration successfully?
1
0
673
Mar ’24
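A hedged sketch of the configuration steps listed in the post above, assuming an iOS 16+ device whose format list actually contains an 8064x6048 entry; function names are made up, and the `.speed` prioritization follows the documentation quote in the post.

```swift
import AVFoundation

// Hedged sketch of the 48 MP + custom-exposure configuration described above.
func configureFor48MP(device: AVCaptureDevice, photoOutput: AVCapturePhotoOutput) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // Pick a format that supports 8064x6048 stills, then raise the output ceiling.
    if let format = device.formats.first(where: { f in
        f.supportedMaxPhotoDimensions.contains { $0.width == 8064 && $0.height == 6048 }
    }) {
        device.activeFormat = format
        photoOutput.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
    }

    if device.isExposureModeSupported(.custom) {
        device.setExposureModeCustom(duration: CMTime(value: 1, timescale: 20),
                                     iso: 100,
                                     completionHandler: nil)
    }
}

// Per-capture settings as described in the post; .speed is what the docs require
// for the custom ISO/duration to be honored.
func capture48MPSettings() -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings()
    settings.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
    settings.photoQualityPrioritization = .speed
    return settings
}
```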
iOS 17.4 breaks 48 MP capture in certain scenarios
The methods described in https://developer.apple.com/forums/thread/715452?answerId=729571022#729571022 to obtain 48 MP image captures no longer seem to work on iOS 17.4 under certain circumstances. Previously, the following steps were sufficient to get 48 MP capture from AVFoundation:

Configuration
Set the active AVCaptureDevice.Format to a format where supportedMaxPhotoDimensions contains the (8064, 6048) size
Set AVCapturePhotoOutput.maxPhotoDimensions to (8064, 6048)
Set AVCapturePhotoOutput.maxPhotoQualityPrioritization to .quality

Taking a photo
Set AVCapturePhotoSettings.maxPhotoDimensions to (8064, 6048)
Set AVCapturePhotoSettings.photoQualityPrioritization to .quality

As of iOS 17.4, the exact same code that worked through 17.3 no longer works if the session was configured manually (resulting in the .inputPriority session preset) rather than using a session preset (like .high). When configuring the session manually, all the intervening steps work (an active format can be found with the appropriate dimensions, the photo output settings can be set to 8064x6048 successfully, etc.), but the resulting photo is 4032x3024. Again, these same steps worked flawlessly prior to iOS 17.4. Am I missing something? Did iOS 17.4 change the requirements for 48 MP capture, or is this a bug?
3
0
900
Mar ’24
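A hedged repro sketch for the regression described above: the only intended variable is that selecting the device format manually moves the session to the `.inputPriority` preset, which is the path reported to stop producing 48 MP output on iOS 17.4. The format parameter is a placeholder for a format listing 8064x6048.

```swift
import AVFoundation

// Hedged sketch: manual format selection (the .inputPriority path from the report).
func configureManually(session: AVCaptureSession,
                       device: AVCaptureDevice,
                       photoOutput: AVCapturePhotoOutput,
                       fortyEightMPFormat: AVCaptureDevice.Format) throws {
    session.beginConfiguration()
    try device.lockForConfiguration()
    device.activeFormat = fortyEightMPFormat        // placeholder 8064x6048-capable format
    device.unlockForConfiguration()
    photoOutput.maxPhotoDimensions = CMVideoDimensions(width: 8064, height: 6048)
    photoOutput.maxPhotoQualityPrioritization = .quality
    session.commitConfiguration()
    print(session.sessionPreset == .inputPriority)  // true after manual format selection
}
```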
After iPadOS 17.4, AVCaptureMetadataOutput no longer detects QR codes on some devices.
I'm creating an app that uses AVCaptureSession to pass camera input to AVCaptureMetadataOutput and scan QR codes. After updating to iPadOS 17.4, an issue has occurred where the delegate method of AVCaptureMetadataOutputObjectsDelegate is not called on some devices. The following devices are experiencing this issue:

iPad (7th Gen)
iPad (6th Gen)
iPad Pro (10.5)
iPad Pro (12.9 2nd Gen)

This issue has not occurred on any other devices I have, so it may only affect devices with the model number "iPad7,x". I tried running the AVFoundation sample code from the Apple Developer site on the devices above, and the same problem still occurs: https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces Are any additional settings required after iPadOS 17.4? Or is there some problem on the OS side?
9
1
3k
Mar ’24
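A minimal sketch of the scanning setup in question, close to the linked AVCam sample; on the affected iPads the delegate callback reportedly never fires even with a setup like this.

```swift
import AVFoundation

// Hedged sketch: minimal QR scanning pipeline using AVCaptureMetadataOutput.
func makeQRSession(delegate: AVCaptureMetadataOutputObjectsDelegate) throws -> AVCaptureSession {
    let session = AVCaptureSession()
    guard let camera = AVCaptureDevice.default(for: .video) else { return session }
    let input = try AVCaptureDeviceInput(device: camera)
    if session.canAddInput(input) { session.addInput(input) }

    let output = AVCaptureMetadataOutput()
    if session.canAddOutput(output) {
        session.addOutput(output)
        output.setMetadataObjectsDelegate(delegate, queue: .main)
        output.metadataObjectTypes = [.qr]  // set only after the output joins the session
    }
    return session
}
```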
AVCaptureSession preview freezing
I'm currently working on an iPad application that uses a third-party SDK to scan a driver's license and then allows the user to take a picture of themselves. However, when the user is directed to the self-photo view, the AVCaptureSession preview freezes. The app as a whole does not freeze, only the preview. I believe this is an issue with the OS, because this only happens on 9th-generation iPads; all the other iPads work fine. Has anyone else seen this issue? Also, is there any way to see logs from the AVCaptureSession so I can see what is happening? Maybe there is a way I can detect when it freezes and then restart it.
1
0
713
Mar ’24
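There is no AVCaptureSession log stream as such, but the session does post notifications when it is interrupted or hits a runtime error, which is usually the first clue when only the preview freezes. A hedged sketch of observing them:

```swift
import AVFoundation

// Hedged sketch: log session interruptions and runtime errors to see why a preview froze.
func observeSessionProblems(_ session: AVCaptureSession) {
    let center = NotificationCenter.default
    center.addObserver(forName: .AVCaptureSessionWasInterrupted, object: session, queue: .main) { note in
        print("Session interrupted:", note.userInfo?[AVCaptureSessionInterruptionReasonKey] ?? "unknown")
    }
    center.addObserver(forName: .AVCaptureSessionRuntimeError, object: session, queue: .main) { note in
        print("Runtime error:", note.userInfo?[AVCaptureSessionErrorKey] ?? "unknown")
    }
    center.addObserver(forName: .AVCaptureSessionInterruptionEnded, object: session, queue: .main) { _ in
        print("Interruption ended; preview should resume")
    }
}
```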
iPhone webRTC / getUserMedia only uses headset mic with low volume. Can we change mics?
Hi hope all are well! We've been working on a live streaming app and it's going quite well! Just got the aspect ratio locked as desired. Now the audio, its volume is extremely low. It sounds like it's using the headset mic instead of the bottom mic that's used on Facetime or on speakerphone calls. We tried flipping cameras and specifying sample rates, almost every constraint in MediaConstraints - no go! Is there any way to specify this? Thanks in advance!
1
0
631
Mar ’24
Problem iOS camera detect QR code with vCard Containing Space in FullName
Dear Team, I am trying to add a contact from a QR code, but it seems that the built-in QR code reader of the iPhone camera isn't able to correctly decode a full name containing a space in the last name, e.g. Collin A. Al Miller. I have attached all the screenshots for reference. Here are the examples:
1) When I focus the iPhone camera on the QR code, the full name (Collin A. Al Miller) comes back empty. Attached screenshots: a) CameraQRNotWorking b) NotWorkingQRCOde
2) When I remove the blank space and use a comma or a hyphen in the full name, it is recognised and works perfectly. Attached screenshots: a) CameraQRCodeWorking b) workingQRCODE
3) Both full names work perfectly in the Android camera's QR scanner: Collin A. Al-Miller or Collin A, Al Miller. Attached screenshot: AndroidQRCODE
Hope this issue will get resolved in an upcoming release. Kindly provide feedback related to this issue.
Code to generate the vCard:

var str = "BEGIN:VCARD \n" +
    "VERSION:2.1 \n" +
    "FN:\("Collin A. Al Miller") \n" +
    "TITLE:\("") \n"
if options.showPersonalPhone {
    str.append(contentsOf: "item1.TEL;CELL:\("+91987654320") \n")
    str.append(contentsOf: "item1.X-ABLabel:Mobile\n")
}
if options.showWorkPhone {
    str.append(contentsOf: "item2.TEL;WORK;VOICE:\("+91987654320") \n")
    str.append(contentsOf: "item2.X-ABLabel:Work Phone\n")
}
if options.showEmail {
    str.append(contentsOf: "item3.EMAIL;WORK;INTERNET:\("test@gmail.com") \n")
    str.append(contentsOf: "item3.X-ABLabel:Work Email\n")
}
if options.showWebsite {
    str.append(contentsOf: "URL:www.test.com \n")
}
if options.showLocation {
    str.append(contentsOf: "ADR;WORK:;;\("Bangalore") \n")
}
str.append(contentsOf: "END:VCARD")
2
0
463
Mar ’24
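A hedged alternative to hand-building the vCard string in the post above: let the Contacts framework serialize the card, which handles name fields for you. The field values are the placeholders from the post.

```swift
import Contacts

// Hedged sketch: build the contact and let CNContactVCardSerialization emit the vCard.
func makeVCard() throws -> String {
    let contact = CNMutableContact()
    contact.givenName = "Collin A."
    contact.familyName = "Al Miller"
    contact.phoneNumbers = [CNLabeledValue(label: CNLabelPhoneNumberMobile,
                                           value: CNPhoneNumber(stringValue: "+91987654320"))]
    let data = try CNContactVCardSerialization.data(with: [contact])
    return String(data: data, encoding: .utf8) ?? ""
}
```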
How do I disable video stabilization in a AVCaptureSession with AVCapturePhotoOutput added?
I need to capture 4K photos with a 4:3 ratio from the camera. I can do this, but I want to disable video stabilization. I can disable video stabilization using the AVCaptureSessionPresetHigh preset, but AVCaptureSessionPresetHigh gives me a 16:9 photo with the surroundings cropped, and the 16:9 ratio does not meet my needs. When I run the session using the AVCaptureSessionPresetPhoto preset and add AVCapturePhotoOutput, I cannot turn off image stabilization.

self.capturePhotoOutput = AVCapturePhotoOutput.init()
self.captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    if ((captureSession?.canAddOutput(capturePhotoOutput!)) != nil) {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    if let connection = capturePhotoOutput?.connection(with: .video) {
        if connection.isVideoStabilizationSupported {
            connection.preferredVideoStabilizationMode = .off
        }
    }
    DispatchQueue.main.async { [self] in
        self.capturePhotoOutput?.isHighResolutionCaptureEnabled = true
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
}
}

@objc private func handleTakePhoto() {
    let photoSettings = AVCapturePhotoSettings()
    if let photoPreviewType = photoSettings.availablePreviewPhotoPixelFormatTypes.first {
        photoSettings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: photoPreviewType]
        photoSettings.isAutoStillImageStabilizationEnabled = false
        capturePhotoOutput?.capturePhoto(with: photoSettings, delegate: self)
    }
}

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let dataImage = photo.fileDataRepresentation() {
        print(UIImage(data: dataImage)?.size as Any)
        let dataProvider = CGDataProvider(data: dataImage as CFData)
        let cgImageRef: CGImage! = CGImage(jpegDataProviderSource: dataProvider!, decode: nil, shouldInterpolate: true, intent: .defaultIntent)
        let image = UIImage(cgImage: cgImageRef, scale: 1.0, orientation: rotateImage(orientation: currentOrientation))
    } else {
        print("some error here")
    }
}

As a temporary solution, I added only AVCaptureVideoDataOutput to the session without adding AVCapturePhotoOutput, and I can capture in 4:3 format with the captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) function. However, this time I cannot get a 4K image. In short, I need to turn off video stabilization in a session with AVCapturePhotoOutput added.

self.captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
do {
    let input = try AVCaptureDeviceInput(device: self.captureDevice!)
    self.captureSession = AVCaptureSession()
    self.captureSession?.beginConfiguration()
    self.captureSession?.sessionPreset = .photo
    self.captureSession?.addInput(input)
    videoDataOutput = AVCaptureVideoDataOutput()
    videoDataOutput?.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
    videoDataOutput?.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
    if ((captureSession?.canAddOutput(videoDataOutput!)) != nil) {
        captureSession?.addOutput(videoDataOutput!)
    }
    /* If I uncomment these lines, video stabilization is enabled again.
    if ((captureSession?.canAddOutput(capturePhotoOutput!)) != nil) {
        captureSession?.addOutput(capturePhotoOutput!)
    }
    */
    DispatchQueue.main.async { [self] in
        self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession!)
        self.videoPreviewLayer?.videoGravity = .resizeAspectFill
        self.videoPreviewLayer?.connection?.videoOrientation = .portrait
        self.videoPreviewLayer?.frame = self.previewView.layer.frame
        self.previewView.layer.insertSublayer(self.videoPreviewLayer!, at: 0)
    }
    self.captureSession?.commitConfiguration()
    self.captureSession?.startRunning()
}
}

@objc private func handleTakePhoto() {
    takePicture = true
}

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if !takePicture {
        return // we have nothing to do with the image buffer
    }
    // try and get a CVImageBuffer out of the sample buffer
    guard let cvBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let rect = CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(cvBuffer), height: CVPixelBufferGetHeight(cvBuffer))
    let ciImage = CIImage.init(cvImageBuffer: cvBuffer)
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciImage, from: rect)
    guard cgImage != nil else { return }
    let uiImage = UIImage(cgImage: cgImage!)
}
0
0
643
Mar ’24
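A small diagnostic sketch related to the post above: `preferredVideoStabilizationMode` is only a request, so reading back `activeVideoStabilizationMode` once the session is running shows whether `.off` actually took effect on the photo output's connection. The function name is made up.

```swift
import AVFoundation

// Hedged sketch: verify which stabilization mode the connection actually applied.
func logStabilizationMode(of photoOutput: AVCapturePhotoOutput) {
    guard let connection = photoOutput.connection(with: .video),
          connection.isVideoStabilizationSupported else { return }
    connection.preferredVideoStabilizationMode = .off
    // Check again after startRunning(); the active mode can differ from the preference.
    print("Active stabilization mode:", connection.activeVideoStabilizationMode.rawValue)
}
```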
LiDAR camera low FPS problem
When I use LiDAR, AVCaptureDeviceTypeBuiltInLiDARDepthCamera is used. Since AVCaptureDeviceTypeBuiltInLiDARDepthCamera is a device that consists of two cameras, one LiDAR and one YUV, I found that the LiDAR data is 30 fps, which also limits the YUV data to 30 fps. But I really need 240 fps YUV data. Is there a way to use the 30 fps LiDAR together with a 240 fps YUV camera? Any reply would be appreciated.
1
0
632
Feb ’24
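A hedged sketch for inspecting what the LiDAR depth camera's formats actually offer on a given device; depth delivery generally constrains the paired video frame rate, so listing the formats shows whether any high-frame-rate option exists at all. The function name is made up.

```swift
import AVFoundation

// Hedged sketch: list video dimensions, max frame rates, and depth support per format.
func dumpLiDARFormats() {
    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera,
                                               for: .video, position: .back) else { return }
    for format in device.formats {
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        let maxFPS = format.videoSupportedFrameRateRanges.map { $0.maxFrameRate }.max() ?? 0
        let hasDepth = !format.supportedDepthDataFormats.isEmpty
        print("\(dims.width)x\(dims.height) maxFPS=\(maxFPS) depth=\(hasDepth)")
    }
}
```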
Use external camera in background iOS app
Is it possible to develop an iOS app that connects to an external camera through the Lightning or USB-C port and receives its video stream? We need to be able to get this video stream even while the app is in the background or the phone is locked. We could also have the camera connected wirelessly through the Lightning port. Is there an available library or a sample app featuring such functionality? Thanks.
1
0
760
Feb ’24
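A hedged discovery sketch, assuming iOS 17's support for external UVC cameras over USB-C; whether capture can continue in the background or while the device is locked is a separate policy question this sketch does not answer.

```swift
import AVFoundation

// Hedged sketch: enumerate external (UVC) cameras attached over USB-C (iOS 17+).
func findExternalCameras() -> [AVCaptureDevice] {
    let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.external],
                                                     mediaType: .video,
                                                     position: .unspecified)
    for device in discovery.devices {
        print("External camera:", device.localizedName)
    }
    return discovery.devices
}
```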
Use front camera with background application
Hello Community, I am planning an app to correct a specific child's behavior. For this to work, the app needs to run in the background and will be triggered to use the front camera when a pre-defined app is on screen (YouTube, for example). Snapshots will be taken for image processing and then deleted every few seconds. When the specific behavior is found, the app will turn down the device volume (and restore it once "fixed"). The user's photos/data are deleted and nothing is sent, saved, or shared. My main concern is that the app is always in the background and uses the camera frequently. I'm unsure if that is possible/allowed, and if so, how stable it will be. Most importantly, I do not want this behavior to be flagged as suspicious when uploading the app to the store. Hope this is clear. I would appreciate any advice. Thanks, Avi
2
0
660
Feb ’24
for 420v, camera output CVPixelBuffer, Y channel value exceed the range [16, 235]
Platform: iPhone XR. System: iOS 17.3.1. Using the iPhone front camera (normal camera), I configure the data output format to kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange ('420v', video range). I found that Cb and Cr stay inside [16, 240], but Y falls outside the range [16, 235], e.g. 240 or 255. As a result, after converting to RGB, some values may be negative; after clamping r, g, b between 0 and 255 and converting back to YUV, the YUV differs from the original. The maximum difference in the Y channel is about 20. Both processing on the CPU and processing with a Metal shader give this result.

CVPixelBuffer.h:
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v',
/* Bi-Planar Component Y'CbCr 8-bit 4:2:0, video-range (luma=[16,235] chroma=[16,240]).
   baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct */

// ... some code ...
// configure camera data output format
NSDictionary* options = @{
    (__bridge NSString*)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange),
    //(__bridge NSString*)kCVPixelBufferMetalCompatibilityKey : @(YES),
};
[_videoDataOutput setVideoSettings:options];
// ... some code ...

- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferRef pixelBuffer = imageBuffer;
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    uint8_t* yBase  = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    uint8_t* uvBase = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    int imageWidth  = (int)CVPixelBufferGetWidth(pixelBuffer);   // 720
    int imageHeight = (int)CVPixelBufferGetHeight(pixelBuffer);  // 1280
    int y_width   = (int)CVPixelBufferGetWidthOfPlane (pixelBuffer, 0);  // 720
    int y_height  = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);  // 1280
    int uv_width  = (int)CVPixelBufferGetWidthOfPlane (pixelBuffer, 1);  // 360
    int uv_height = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 1);  // 640
    int y_stride  = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    int uv_stride = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1); // 768

    // check Y-plane
    if (TRUE) {
        for (int i = 0; i < imageHeight; i++) {
            for (int j = 0; j < imageWidth; j++) {
                uint8_t nv12pixel = *(yBase + y_stride * i + j);
                if (nv12pixel < 16 || nv12pixel > 235) { // [16, 235]
                    NSLog(@"%s: y plane out of range, coord (x:%d, y:%d), h-coord (x:%d, y:%d) ; nv12 %u",
                          __FUNCTION__, j, i, j/2, i/2, nv12pixel);
                }
            }
        }
    }

    // note: this should be an unlock to balance the lock above
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}
// ... some code ...

How should I deal with this case? Hope to get a reply, thanks.
3
0
874
Feb ’24
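Two hedged workarounds implied by the report above: either clamp luma into the nominal video range before converting to RGB, or request the full-range pixel format so the whole [0, 255] range is expected. Function names are made up.

```swift
import AVFoundation

// Hedged sketch: clamp video-range luma before YUV -> RGB conversion.
func clampVideoRangeLuma(_ y: UInt8) -> UInt8 {
    min(max(y, 16), 235)
}

// Or request full-range data instead, where Y in [0, 255] is legal.
func useFullRange(on output: AVCaptureVideoDataOutput) {
    output.videoSettings = [
        kCVPixelBufferPixelFormatTypeKey as String:
            kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    ]
}
```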
Grayscale picture on iPhone 14 Pro Max with iOS 17.0
■ Detail In a Xamarin iOS app, there is a screen (Screen A) designed for capturing ID photos. We've written code to set the default camera zoom to 2x when opening Screen A, enabling users to take photos by pressing a button. The subsequent screen (Screen B) serves as a preview screen for the photos taken on Screen A. The issue is that photos captured on Screen A are unintentionally displayed in grayscale on Screen B; the correct behavior would be to display them in color. This problem occurs only on an iPhone 14 Pro Max with iOS 17.0; it does not occur on an iPhone 15 Pro with iOS 17.1. Moreover, when the 2x zoom code is not present in the capture settings, photos are displayed in color on Screen B on the iPhone 14 Pro Max with iOS 17.0. If the 2x zoom code is present and the AVCaptureSession's SessionPreset is set to Preset640x480, the photos are also displayed in color on Screen B on that device. Is there a case where the AVCaptureSession's SessionPreset setting on iPhone 14 Pro Max with iOS 17.0 causes an unintentional grayscale conversion when processing images after taking a 2x zoom photo?
■ How to reproduce Use the camera 2x zoom code with AVCaptureSession's SessionPreset set to Preset during capturing on an iPhone 14 Pro Max with iOS 17.0, built with Xcode 15.1's iOS SDK (17.2).
■ Environment We are building the app using Xamarin.iOS in Visual Studio for Mac. During the build process, Xcode 15.1 (iOS SDK 17) is used.
0
0
375
Feb ’24
iPad Pro 12.9 6th Generation, wrong resolution from navigator.mediaDevices.getUserMedia()
Hello, I've been developing a web app for which I need the front camera and need to take a picture at a higher resolution. But I have one issue: when I call navigator.mediaDevices.getUserMedia() in the browser to get the resolution of the camera, it reports 2052 x 2736, yet it's a 12 MP front camera. When I take a picture of myself using the camera app on the iPad, it produces a 12 MP picture. The back camera reports its resolution fine. You can also test this on webcamtests.com to see the reported resolution.
0
0
425
Feb ’24
Recognize Objects (Humans) with Apple Vision Pro Simulator
As the title already suggests: is it possible with the current Apple Vision Pro simulator to recognize objects/humans, as is currently possible on the iPhone? I am not even sure whether there is an API for accessing the cameras of the Vision Pro. My goal is to recognize, for example, a human and attach a 3D object to it, for example a hat. Can this be done?
1
0
909
Mar ’24
Camera intrinsic matrix for single photo capture
Is it possible to get the camera intrinsic matrix for a captured single photo on iOS? I know that one can get the cameraCalibrationData from a AVCapturePhoto, which also contains the intrinsicMatrix. However, this is only provided when using a constituent (i.e. multi-camera) capture device and setting virtualDeviceConstituentPhotoDeliveryEnabledDevices to multiple devices (or enabling isDualCameraDualPhotoDeliveryEnabled on older iOS versions). Then photoOutput(_:didFinishProcessingPhoto:) is called multiple times, delivering one photo for each camera specified. Those then contain the calibration data. As far as I know, there is no way to get the calibration data for a normal, single-camera photo capture. I also found that one can set isCameraIntrinsicMatrixDeliveryEnabled on a capture connection that leads to a AVCaptureVideoDataOutput. The buffers that arrive at the delegate of that output then contain the intrinsic matrix via the kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix metadata. However, this requires adding another output to the capture session, which feels quite wasteful just for getting this piece of metadata. Also, I would somehow need to figure out which buffer was temporarily closest to when the actual photo was taken. Is there a better, simpler way for getting the camera intrinsic matrix for a single photo capture? If not, is there a way to calculate the matrix based on the image's metadata?
0
0
785
Feb ’24
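A hedged sketch of the video-data-output route the post above describes, in case it helps someone weighing the same trade-off: enable intrinsic-matrix delivery on the connection and read the attachment from each sample buffer. The attachment key and connection flags are the documented ones; the surrounding session setup is assumed.

```swift
import AVFoundation
import simd

// Hedged sketch: enable intrinsic-matrix delivery on the video data output's connection.
func enableIntrinsics(on videoOutput: AVCaptureVideoDataOutput) {
    if let connection = videoOutput.connection(with: .video),
       connection.isCameraIntrinsicMatrixDeliverySupported {
        connection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}

// Read the 3x3 intrinsic matrix attached to a sample buffer, if present.
func intrinsicMatrix(from sampleBuffer: CMSampleBuffer) -> matrix_float3x3? {
    guard let data = CMGetAttachment(sampleBuffer,
                                     key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                     attachmentModeOut: nil) as? Data else { return nil }
    var matrix = matrix_float3x3()
    withUnsafeMutableBytes(of: &matrix) { _ = data.copyBytes(to: $0) }
    return matrix
}
```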
How to read a CMVideoDimensions structure with Objective C code?
I am new to Objective-C and relatively new to iOS development. I have an AVCaptureDevice object at hand and would like to print the maximum supported photo dimensions as provided by activeFormat.supportedMaxPhotoDimensions, using Objective-C. I tried the following:

for (NSValue *obj in device.activeFormat.supportedMaxPhotoDimensions) {
    CMVideoDimensions *vd = (__bridge CMVideoDimensions *)obj;
    NSString *s = [NSString stringWithFormat:@"res=%d:%d", vd->width, vd->height];
    // print that string
}

If I run this code, I get: res=314830880:24994. This is way too high, and there is obviously something I am doing wrong, but I don't know what it could be. According to the information I see on the developer forum, I should get something closer to 4000:3000. I can successfully read device.activeFormat.videoFieldOfView and other fields, so I believe my code is sound overall.
1
0
626
Feb ’24
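A hedged note on the snippet above: the `__bridge` cast reinterprets the NSValue pointer itself rather than the wrapped struct, which is where the garbage numbers come from. In Objective-C the struct can be copied out of the NSValue with `getValue:`; in Swift the array bridges to `[CMVideoDimensions]` directly, as in this sketch.

```swift
import AVFoundation

// Hedged sketch (Swift): supportedMaxPhotoDimensions bridges as [CMVideoDimensions].
func printMaxPhotoDimensions(of device: AVCaptureDevice) {
    for dims in device.activeFormat.supportedMaxPhotoDimensions {
        print("res=\(dims.width):\(dims.height)")
    }
}
```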