Discuss using the camera on Apple devices.

Posts under the Camera tag

178 Posts
Post not yet marked as solved
0 Replies
14 Views
Dear all, I have several scenes, each with its own camera at a different position. The scenes are loaded with transitions. If I set the pointOfView in every scene to that scene's camera, the transitions don't work properly: the active scene view jumps to the position of the camera of the scene that is fading in. If I comment the pointOfView out, the transitions work fine, but the following error message appears: Error: camera node already has an authoring node - skip. Does anyone have an idea how to fix this? Many thanks, Ray
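One possible direction, not from the thread: SceneKit's present(_:with:incomingPointOfView:completionHandler:) lets you hand the view the incoming scene's camera node explicitly instead of pre-assigning pointOfView on each scene. A minimal sketch, assuming an SCNView, an SKTransition, and that each scene's camera node is named "camera" (that name is an assumption):

import SceneKit
import SpriteKit

// Sketch: transition to `nextScene` and tell the view which camera node the
// incoming scene should be rendered from, rather than setting pointOfView
// up front on every scene.
func transition(to nextScene: SCNScene, in view: SCNView) {
    let incomingCamera = nextScene.rootNode.childNode(withName: "camera", recursively: true)
    let fade = SKTransition.fade(withDuration: 1.0)
    view.present(nextScene, with: fade, incomingPointOfView: incomingCamera) {
        // Transition finished; the view now renders nextScene from its own camera.
    }
}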
Posted by raycord. Last updated.
Post not yet marked as solved
0 Replies
45 Views
Can the example from "Support external cameras in your iPadOS app" (WWDC23) work on iOS 17.5 on an iPhone 15 Pro? https://developer.apple.com/videos/play/wwdc2023/10106/
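One way to check on the device itself, rather than an answer from the thread: iOS 17 adds the .external device type that the session relies on, so a short discovery-session sketch (assuming iOS 17+ and camera permission already granted) will show whether a particular iPhone enumerates any external cameras at all:

import AVFoundation

// Sketch: list whatever external cameras the current device exposes (iOS 17+).
// If the platform doesn't support external cameras, this simply prints an empty list.
func logExternalCameras() {
    let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.external],
                                                     mediaType: .video,
                                                     position: .unspecified)
    print("External cameras found:", discovery.devices.map(\.localizedName))
}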
Posted by uwe2. Last updated.
Post not yet marked as solved
0 Replies
86 Views
Hi, in my app I have to complete IDV (identity verification) by capturing the user's face and his/her documents. For this, the backend developer provides me a URL from the third-party IDV provider, which I open in a web view. While the camera capture screen is loading in the web view, a Live Broadcast screen pops up out of nowhere. I don't want this Live Broadcast screen, but it opens anyway. The good thing is that my expected camera screen is still open in the background, so I can continue from there. At first I was also a bit confused about how this kind of screen pops up when I didn't code for it, and it took me a little while to figure out how to close it. Ordinary users of my app won't know how to close it. Please check the screenshots I attached. Please help me get rid of this popup. Thank you.
Posted. Last updated.
Post not yet marked as solved
0 Replies
142 Views
Just watched the new product release, and I'm really hoping the new iPad Pro, advertised as the next creative tool for filmmakers and artists, will finally allow RAW captures in the native Camera app or through the AVFoundation API (currently the list of RAW-capable devices returns 0 on the previous iPad Pro). With all these fancy multicam features and camera hardware, I don't think it would take that much to enable ProRAW and Action Mode on the software side of the iPad. Unless their strategy is to make us "shoot on iPhone and edit on iPad" (as implied in their video credits), which has been my workflow with the iPhone 15 and 2022 iPad Pro :( :(
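For anyone wanting to confirm this on their own hardware, a hedged sketch (assuming an AVCapturePhotoOutput named photoOutput that has already been added to a configured, running session) that reports RAW and ProRAW availability:

import AVFoundation

// Sketch: report whether RAW / ProRAW capture is available with the current
// session configuration. The available format lists are only populated after
// the output has been added to a configured AVCaptureSession.
func logRAWSupport(for photoOutput: AVCapturePhotoOutput) {
    print("Apple ProRAW supported on this device:", AVCapturePhotoOutput.isAppleProRAWSupported)
    print("Available RAW pixel formats:", photoOutput.availableRawPhotoPixelFormatTypes)
    print("ProRAW pixel formats:", photoOutput.availableRawPhotoPixelFormatTypes.filter {
        AVCapturePhotoOutput.isAppleProRAWPixelFormat($0)
    })
}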
Posted by megatran. Last updated.
Post not yet marked as solved
1 Reply
220 Views
I've been trying to follow the "Supporting Continuity Camera in Your Mac App" article to implement the "Import from iPhone or iPad" menu for my macOS app. I've been able to replicate most of the article in a test AppKit application, but I cannot do the same in my SwiftUI application. I'm not sure how to get the NSMenuItemImportFromDeviceIdentifier identifier into a SwiftUI Menu, or how to create an NSMenu with an NSMenuItem for a SwiftUI app. I'm also not sure how to handle receiving the image in the SwiftUI environment. Any advice you might have is appreciated. Thanks!
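One bridging approach to sketch, not a confirmed solution: keep the AppKit plumbing from the article in an app delegate attached with @NSApplicationDelegateAdaptor, and insert an NSMenuItem carrying the import-from-device identifier named in the article into the File menu; the responder-side work (validRequestor(forSendType:returnType:) and NSServicesMenuRequestor) still has to be implemented as the article describes. The menu lookup below is an assumption about the app's menu layout:

import SwiftUI
import AppKit

// Sketch: add the Continuity Camera "Import from iPhone or iPad" item to the
// File menu of a SwiftUI macOS app.
final class AppDelegate: NSObject, NSApplicationDelegate {
    func applicationDidFinishLaunching(_ notification: Notification) {
        let importItem = NSMenuItem(title: "Import from iPhone or iPad",
                                    action: nil,
                                    keyEquivalent: "")
        // The AppKit constant NSMenuItemImportFromDeviceIdentifier, as surfaced in Swift.
        importItem.identifier = NSMenuItem.importFromDeviceIdentifier

        // Assumption: the app has a standard File menu with that exact title.
        if let fileMenu = NSApp.mainMenu?.item(withTitle: "File")?.submenu {
            fileMenu.addItem(.separator())
            fileMenu.addItem(importItem)
        }
    }
}

// Attach the delegate in the SwiftUI App struct:
// @NSApplicationDelegateAdaptor(AppDelegate.self) var appDelegate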
Posted. Last updated.
Post not yet marked as solved
0 Replies
167 Views
Hello, I am working on a fairly complex iPhone app that controls the front built-in wide-angle camera. I need to take and display a sequence of photos that covers the whole range of available focus values. Here is how I do it:
- call setExposureModeCustom to set the first lens position
- wait for the completionHandler to be called back
- capture a photo
- do it again for the next lens position, and so on
This works fine, but it takes longer than I expected for the completionHandler to be called back. From what I've seen, the delay scales with the exposure duration. When I set the exposure duration to the max value:
- on the iPhone 14 Pro, it takes about 3 seconds (3 times the max exposure)
- on the iPhone 8, about 1.3 s (4 times the max exposure)
I was expecting a delay of two times the exposure duration: take a photo, throw one away while changing lens position, take the next photo, and so on, but it takes more than that. I also tried changing the ISO instead of the focus position and I get the same kind of delays. I do not think the problem is linked to the way I process the images, because I get the same delay even if I do nothing with the output. Is there something I could do to make things go faster for this use case? Any input would be appreciated. Thanks.
I created a minimal test app to reproduce the issue:

import Foundation
import AVFoundation

class Main: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let dispatchQueue = DispatchQueue(label: "VideoQueue", qos: .userInitiated)
    let session: AVCaptureSession
    let videoDevice: AVCaptureDevice
    var focus: Float = 0

    override init() {
        session = AVCaptureSession()
        session.beginConfiguration()
        session.sessionPreset = .photo
        videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)!
        super.init()
        let videoDeviceInput = try! AVCaptureDeviceInput(device: videoDevice)
        session.addInput(videoDeviceInput)
        let videoDataOutput = AVCaptureVideoDataOutput()
        if session.canAddOutput(videoDataOutput) {
            session.addOutput(videoDataOutput)
            videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
            videoDataOutput.setSampleBufferDelegate(self, queue: dispatchQueue)
        }
        session.commitConfiguration()
        dispatchQueue.async {
            self.startSession()
        }
    }

    func startSession() {
        session.startRunning()
        // lock max exposure duration
        try! videoDevice.lockForConfiguration()
        let exposure = videoDevice.activeFormat.maxExposureDuration.seconds * 0.5
        print("set max exposure", exposure)
        videoDevice.setExposureModeCustom(duration: CMTime(seconds: exposure, preferredTimescale: 1000), iso: videoDevice.activeFormat.minISO) { time in
            print("did set max exposure")
            self.changeFocus()
        }
        videoDevice.unlockForConfiguration()
    }

    func changeFocus() {
        let date = Date.now
        print("set focus", focus)
        try! videoDevice.lockForConfiguration()
        videoDevice.setFocusModeLocked(lensPosition: focus) { time in
            let dt = abs(date.timeIntervalSinceNow)
            print("did set focus - took:", dt, "frames:", dt / self.videoDevice.exposureDuration.seconds)
            self.next()
        }
        videoDevice.unlockForConfiguration()
    }

    func next() {
        focus += 0.02
        if focus > 1 {
            print("done")
            return
        }
        changeFocus()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        print("did receive video frame")
    }
}
Posted by Saturnyn. Last updated.
Post not yet marked as solved
1 Reply
171 Views
I have built a camera application which uses an AVCaptureSession with the AVCaptureDevice set to .builtInDualWideCamera and isVirtualDeviceConstituentPhotoDeliveryEnabled=true to enable delivery of "simultaneous" photos (AVCapturePhoto) for a single capture request. I am using the hd1920x1080 preset, but both the wide and ultra-wide photos are being delivered at the highest possible resolution (4224x2376). I've tried to disable every setting that suggests it should be using that 4K resolution rather than 1080p on the AVCapturePhotoOutput, AVCapturePhotoSettings and AVCaptureDevice, but nothing has worked. Some debugging that I've done:
- When I turn off constituent photo delivery by commenting out the line of code below, I end up getting a single photo delivered at the 1080p resolution, as you'd expect.
  // photoSettings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = captureDevice.constituentDevices
- I tried constituent photo delivery with the .builtInDualCamera and got only 4K results (same as described above).
- I tried using an AVCaptureMultiCamSession with .builtInDualWideCamera and also only got 4K imagery.
- I inspected the resolved settings on photo.resolvedSettings.photoDimensions, and the dimensions suggest the imagery should be 1080p, but when I inspect the UIImage it is always 4K:

guard let imageData = photo.fileDataRepresentation() else { return }
guard let capturedImage = UIImage(data: imageData) else { return }
print("photo.resolvedSettings.photoDimensions", photo.resolvedSettings.photoDimensions) // 1920x1080
print("capturedImage.size", capturedImage.size) // 4224x2376

Any help here would be greatly appreciated, because I've run out of things to try and documentation to follow 🙏
Posted by nanders. Last updated.
Post not yet marked as solved
0 Replies
127 Views
I am implementing pan and zoom features for an app that uses a custom USB camera device, on iPadOS. I am using an update function (shown below) to apply transforms for scale and translation, but they are not working. By re-enabling the animation I can see that the scale transform seems to take effect initially, but then the image animates back to its original scale. This all happens in a fraction of a second, but I can see it. The translation transform seems to have no effect at all. Printing out the value of AVCaptureVideoPreviewLayer.transform before and after does show that my values have been applied.

private func updateTransform() {
#if false
    // Disable default animation.
    CATransaction.begin()
    CATransaction.setDisableActions(true)
    defer { CATransaction.commit() }
#endif
    // Apply the transform.
    logger.debug("\(String(describing: self.videoPreviewLayer.transform))")
    let transform = CATransform3DIdentity
    let translate = CATransform3DTranslate(transform, translationX, translationY, 0)
    let scale = CATransform3DScale(transform, scale, scale, 1)
    videoPreviewLayer.transform = CATransform3DConcat(translate, scale)
    logger.debug("\(String(describing: self.videoPreviewLayer.transform))")
}

My question is this: how can I properly implement pan/zoom for an AVCaptureVideoPreviewLayer? Or even better, if you see a problem with my current approach or understand why the transforms I am applying do not work, please share that information.
Posted. Last updated.
Post not yet marked as solved
2 Replies
202 Views
Hi, in the application's settings, how do I put a button there to allow the use of the camera?
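If the goal is a button inside the app that takes the user to the app's own page in the Settings app (where the Camera permission switch lives once the app has requested camera access), one common approach is sketched below; the view name is an assumption:

import SwiftUI

// Sketch: a button that jumps to this app's page in the Settings app,
// where the user can turn the Camera permission on or off.
struct CameraSettingsButton: View {
    var body: some View {
        Button("Allow Camera Access in Settings") {
            if let url = URL(string: UIApplication.openSettingsURLString) {
                UIApplication.shared.open(url)
            }
        }
    }
}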
Posted by MORADENS. Last updated.
Post not yet marked as solved
1 Reply
220 Views
In this code, I aim to enable users to select an image from their phone gallery and display it with reduced opacity on top of the z-index. The selected image should appear on top of the user's camera feed, allowing them to see both the canvas they are drawing on and the low-opacity image. The app's purpose is to let users trace an image on the canvas while simultaneously seeing the camera feed.

CameraView.swift

import SwiftUI
import AVFoundation

struct CameraView: View {
    let selectedImage: UIImage

    var body: some View {
        ZStack {
            CameraPreview()
            Image(uiImage: selectedImage)
                .resizable()
                .aspectRatio(contentMode: .fill)
                .opacity(0.5) // Adjust the opacity as needed
                .edgesIgnoringSafeArea(.all)
        }
    }
}

struct CameraPreview: UIViewRepresentable {
    func makeUIView(context: Context) -> UIView {
        let cameraPreview = CameraPreviewView()
        return cameraPreview
    }

    func updateUIView(_ uiView: UIView, context: Context) {}
}

class CameraPreviewView: UIView {
    private let captureSession = AVCaptureSession()

    override init(frame: CGRect) {
        super.init(frame: frame)
        setupCamera()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    private func setupCamera() {
        guard let backCamera = AVCaptureDevice.default(for: .video) else {
            print("Unable to access camera")
            return
        }
        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)
                let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                previewLayer.videoGravity = .resizeAspectFill
                previewLayer.frame = bounds
                layer.addSublayer(previewLayer)
                captureSession.startRunning()
            }
        } catch {
            print("Error setting up camera input:", error.localizedDescription)
        }
    }
}

Thanks for your time and help.
Posted by jhems. Last updated.
Post not yet marked as solved
0 Replies
135 Views
How do GPS latitude and longitude get written to .MOV files on iPhone? Are the coordinates recorded as soon as you press record, when you press stop, or at some other time? If you are recording while walking or driving, what location will be affixed to the file? Thank you all so much for your time and help. It appears that this is the key the location gets written to: com.apple.quicktime.location.ISO6709
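The key named above can be read back with AVFoundation, which makes it easy to inspect a few test recordings and see for yourself when the value is stamped. A small sketch, assuming the async metadata-loading APIs (iOS 16+ / macOS 13+):

import AVFoundation

// Sketch: read the com.apple.quicktime.location.ISO6709 value from a movie file.
func printLocation(of movieURL: URL) async throws {
    let asset = AVURLAsset(url: movieURL)
    let metadata = try await asset.load(.metadata)
    let items = AVMetadataItem.metadataItems(
        from: metadata,
        filteredByIdentifier: .quickTimeMetadataLocationISO6709
    )
    if let locationString = try await items.first?.load(.stringValue) {
        print("Location:", locationString) // an ISO 6709 string such as "+12.3456-098.7654/"
    } else {
        print("No location metadata found")
    }
}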
Posted. Last updated.
Post not yet marked as solved
1 Reply
294 Views
Following the update to iOS 17.4.1, our team has observed a recurring issue across all iPhone browsers within our Virtual Try On web application. Specifically, when users switch between products, there's a disruption in camera permissions (changes to not allowed), resulting in a black screen appearing in the canvas where the live camera stream typically displays. We have noted that several users have reported experiencing the same issue. We kindly request your assistance in addressing this matter. Could you please provide guidance on any potential fixes or workarounds for this issue? Additionally, we would appreciate an estimated timeline for when a resolution might be expected. Thank you for your attention to this matter. We look forward to your prompt response and assistance in resolving this issue.
Posted. Last updated.
Post not yet marked as solved
0 Replies
176 Views
I have a camera application which aims to take images as close to simultaneously as possible from the wide and ultra-wide cameras. The AVCaptureMultiCamSession is set up with manual connections. Note: we are not using builtInDualWideCamera with constituent photo delivery enabled, since some features we use are not supported in that mode. At the moment we are manually trying to synchronize frames between the two cameras, but we would like to use AVCaptureDataOutputSynchronizer to improve our results. Is it possible to synchronize the wide and ultra-wide video outputs? All examples and docs that I've found show synchronization of video with depth, metadata, or audio, but not of two video outputs. From my testing, I've found that the dataOutputSynchronizer fires either with the wide video output or with the ultra-wide video output, but never both (at least one is always nil), suggesting that they are not being synchronized.

self.outputSync = AVCaptureDataOutputSynchronizer(dataOutputs: [wideCameraOutput, ultraCameraOutput])
outputSync.setDelegate(self, queue: .main)

...

func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    guard let syncWideData: AVCaptureSynchronizedSampleBufferData = synchronizedDataCollection.synchronizedData(for: self.wideCameraOutput) as? AVCaptureSynchronizedSampleBufferData,
          let syncedUltraData: AVCaptureSynchronizedSampleBufferData = synchronizedDataCollection.synchronizedData(for: self.ultraCameraOutput) as? AVCaptureSynchronizedSampleBufferData
    else {
        return
    }
    // Either syncWideData or syncedUltraData is always nil, so the guard condition never passes.
}
Posted by nanders. Last updated.
Post not yet marked as solved
1 Reply
251 Views
Hello everyone, I am a student working on the final project of my college degree. I don't have an official developer account, since I don't need to put my app on the App Store. In my project I need to use the camera of an iOS device, and I know I need to add NSCameraUsageDescription to Info.plist. However, when I add the description to my Info.plist and build my project, it fails and says: Provisioning profile "iOS Team Provisioning Profile: " doesn't include the NSCameraUsageDescription and NSPhotoLibraryUsageDescription entitlements. I also notice that in the Info.plist file, when I change the property type to entitlements, I cannot find NSCameraUsageDescription when I add a row. What's the problem? Is this because I am not an official developer?
Posted. Last updated.
Post not yet marked as solved
1 Reply
313 Views
I'm not sure if I just missed a recent breaking change, but we are having an issue with the camera in our single-page app on iOS 17.4.1 in Safari. We can open the camera and display it to the user using getUserMedia. However, if the path of the site changes at all (for example, the user clicks a button that opens a side panel, which changes the path in the browser), the camera goes black, even though the video element is still being displayed. I can see in the browser that the camera has stopped, and the user has to re-enable it manually by tapping "Start Using Camera". Any ideas what could be going on here?
Posted by jungles13. Last updated.
Post not yet marked as solved
1 Reply
245 Views
I have the following code:

function load() {
    navigator.mediaDevices.getUserMedia({ video: true })
        .then(function (stream) {
            var videoElement = document.getElementById('video');
            videoElement.srcObject = stream;
        })
        .catch(function (error) {
            console.log('navigator.MediaDevices.getUserMedia error: ', error.message, error.name);
        });
}

<video id="video" playsinline autoplay></video>

The code works fine in desktop browsers and loads the camera on iOS, but I can't seem to get the full iOS camera overlay (zooming and so on); I just get a basic camera stream. Is it possible to stream the camera in a browser with full iOS camera functionality?
Posted. Last updated.
Post not yet marked as solved
0 Replies
182 Views
Why does using CameraPicker require user authorization through a pop-up? Why don't ImagePicker or PhotoPicker require additional pop-up authorizations for accessing the photo library? All of these are implemented using UIImagePickerController, so why does one require a pop-up and the others do not? Additionally, I thought that by configuring the picker I would theoretically not need any permissions. If permissions are still required, wouldn't it make more sense to directly request camera permissions and use the native camera functionality? What, then, are the advantages of using the picker?
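One possible explanation, not confirmed in this thread: the modern photo pickers run out of process, so the app only ever receives the items the user explicitly picks and no library permission prompt is needed, whereas live camera capture runs inside the app's own process and therefore always requires the camera permission prompt. A minimal sketch of the prompt-free library picker in SwiftUI (iOS 16+; the view name is an assumption):

import SwiftUI
import PhotosUI

// Sketch: PhotosPicker runs out of process, so tapping it shows the system
// picker without any photo-library permission dialog.
struct LibraryPickerButton: View {
    @State private var selection: PhotosPickerItem?

    var body: some View {
        PhotosPicker("Choose from Library", selection: $selection, matching: .images)
    }
}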
Posted. Last updated.
Post not yet marked as solved
3 Replies
406 Views
In this demo I load index.html into a WKWebView. When I tap the file button, the camera page is presented and then dismisses quickly.

ViewController.h

@property (nonatomic, strong) WKWebView *wkWebView;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    WKWebViewConfiguration *configuration = [WKWebViewConfiguration new];
    self.wkWebView = [[WKWebView alloc] initWithFrame:CGRectMake(0, 0, 400, 300) configuration:configuration];
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"index" withExtension:@"html"];
    [self.wkWebView loadFileURL:url allowingReadAccessToURL:[[NSBundle mainBundle] bundleURL]];
    self.wkWebView.backgroundColor = [UIColor blueColor];
    [self.view addSubview:self.wkWebView];
}

index.html

<html lang="en">
<head>
    <meta charset="UTF-8">
</head>
<body>
    <div>
        <label style="font-size: 40px;">open camera</label>
        <input type="file" accept="image/*" capture="camera" id="file-input" class="file-input">
    </div>
</body>
</html>
Posted by leozzz. Last updated.
Post not yet marked as solved
0 Replies
199 Views
Hi guys, I'm designing a customized camera based on AVFoundation. I can output Live Photos from the AVCaptureDeviceInput for now. I'd like to take still and Live Photos with different aspect ratios, just like Apple's Camera app does (1:1, 4:3, 16:9). I didn't find any useful info in the docs; any suggestions?
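One hedged direction rather than an official answer: the camera's native still output is 4:3, and a common approach is to capture at 4:3 and crop the still to the desired ratio afterwards (cropping the paired Live Photo movie is more involved and not covered here). A small sketch of a center crop to a target ratio:

import CoreGraphics

// Sketch: center-crop an image to a target width/height ratio, e.g. 1.0 for
// 1:1 or 16.0/9.0 for 16:9, assuming the source is the full 4:3 capture.
func crop(_ image: CGImage, toAspectRatio ratio: CGFloat) -> CGImage? {
    let width = CGFloat(image.width)
    let height = CGFloat(image.height)
    var cropSize = CGSize(width: width, height: width / ratio)
    if cropSize.height > height {
        cropSize = CGSize(width: height * ratio, height: height)
    }
    let origin = CGPoint(x: (width - cropSize.width) / 2,
                         y: (height - cropSize.height) / 2)
    return image.cropping(to: CGRect(origin: origin, size: cropSize))
}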
Posted by ayumizll. Last updated.
Post not yet marked as solved
0 Replies
234 Views
I have built a camera application which uses an AVCaptureSession with the AVCaptureDevice set to .builtInDualWideCamera and isVirtualDeviceConstituentPhotoDeliveryEnabled=true to enable delivery of "simultaneous" photos (AVCapturePhoto) for a single capture request. Our app would ideally have the timestamp difference between the photos in a single capture request be as short as possible, but we don't have a good idea of what the theoretical or practical limits of this timestamp difference are. In my testing on an iPhone 12 Pro, with a frame rate of 33 Hz and the preset set to hd1920x1080, I get a timestamp difference between photos of approximately 0.3 ms, which seems smaller than I would expect, unless the frames are being synchronised incredibly well under the hood. This leaves the following unanswered questions:
1. What sort of ranges of values should we expect to come out of these timestamp differences between photos?
2. What factors influence this?
3. Is there any way to control these values to ensure they are as small as possible? (Will likely be answered by (2))
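For question 1, one way to measure rather than guess (a sketch, not something from the post) is to log the capture timestamps carried by each constituent AVCapturePhoto and report the spread once the request finishes:

import AVFoundation

// Sketch: log the spread of capture timestamps across the constituent photos
// of a single capture request, assuming constituent photo delivery is enabled.
final class PhotoTimestampLogger: NSObject, AVCapturePhotoCaptureDelegate {
    private var timestamps: [CMTime] = []

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        timestamps.append(photo.timestamp)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishCaptureFor resolvedSettings: AVCaptureResolvedPhotoSettings,
                     error: Error?) {
        let seconds = timestamps.map { $0.seconds }
        if let first = seconds.min(), let last = seconds.max() {
            print("Timestamp spread: \((last - first) * 1000) ms")
        }
        timestamps.removeAll()
    }
}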
Posted by nanders. Last updated.