How do GPS latitude and longitude get written to .MOV files on iPhones? Is it programmed to record the coordinates as soon as you press record, when you stop recording, or at some other time?
If you are recording while walking or driving, what location will the GPS affix to the file? I thank you all so much for your time and help.
It appears that this is the metadata key the location gets written to: com.apple.quicktime.location.ISO6709
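For anyone who wants to inspect their own files, a minimal sketch (assuming the iOS 16+ async loading API) that reads that key back from a recorded .MOV:

import AVFoundation

func printLocationMetadata(for url: URL) async throws {
    // The location is stored as a single ISO 6709 string for the whole file,
    // not as a continuous track.
    let asset = AVURLAsset(url: url)
    let metadata = try await asset.load(.metadata)
    let locationItems = AVMetadataItem.metadataItems(
        from: metadata,
        filteredByIdentifier: .quickTimeMetadataLocationISO6709)
    for item in locationItems {
        let value = try await item.load(.stringValue)
        print("ISO 6709 location:", value ?? "nil") // e.g. "+37.3349-122.0090+000.000/"
    }
}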
Camera
Discuss using the camera on Apple devices.
Hi, in the application settings, how do I put a button there to allow the use of the camera?
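If the goal is the Camera switch on the app's own page in the Settings app: iOS adds that switch automatically once the app has requested camera access at least once. A minimal sketch, assuming a UIKit app, of requesting access and deep-linking to Settings if it was denied:

import AVFoundation
import UIKit

func requestCameraOrOpenSettings() {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .notDetermined:
        // First request: iOS shows the permission alert and then lists the app
        // under Settings > Privacy & Security > Camera, with a Camera switch on
        // the app's own Settings page.
        AVCaptureDevice.requestAccess(for: .video) { granted in
            print("Camera access granted:", granted)
        }
    case .denied, .restricted:
        // Already denied: send the user to the app's page in Settings.
        if let url = URL(string: UIApplication.openSettingsURLString) {
            UIApplication.shared.open(url)
        }
    default:
        break // .authorized: nothing to do
    }
}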
In this code, I aim to let users select an image from their photo library and display it at reduced opacity on top of the camera feed (highest z-index). The selected image should appear over the live camera preview so that users can see both the canvas they are drawing on and the low-opacity image. The app's purpose is to let users trace an image on the canvas while simultaneously seeing the camera feed.
CameraView.swift

import SwiftUI
import AVFoundation

struct CameraView: View {
    let selectedImage: UIImage

    var body: some View {
        ZStack {
            CameraPreview()
            Image(uiImage: selectedImage)
                .resizable()
                .aspectRatio(contentMode: .fill)
                .opacity(0.5) // Adjust the opacity as needed
                .edgesIgnoringSafeArea(.all)
        }
    }
}

struct CameraPreview: UIViewRepresentable {
    func makeUIView(context: Context) -> UIView {
        CameraPreviewView()
    }

    func updateUIView(_ uiView: UIView, context: Context) {}
}

class CameraPreviewView: UIView {
    private let captureSession = AVCaptureSession()
    private var previewLayer: AVCaptureVideoPreviewLayer?

    override init(frame: CGRect) {
        super.init(frame: frame)
        setupCamera()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        // Keep the preview layer sized to the view; bounds are typically .zero
        // when setupCamera() runs, so the layer would otherwise stay empty.
        previewLayer?.frame = bounds
    }

    private func setupCamera() {
        guard let backCamera = AVCaptureDevice.default(for: .video) else {
            print("Unable to access camera")
            return
        }
        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)
                let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                previewLayer.videoGravity = .resizeAspectFill
                previewLayer.frame = bounds
                layer.addSublayer(previewLayer)
                self.previewLayer = previewLayer
                // startRunning() blocks, so keep it off the main thread.
                DispatchQueue.global(qos: .userInitiated).async { [weak self] in
                    self?.captureSession.startRunning()
                }
            }
        } catch {
            print("Error setting up camera input:", error.localizedDescription)
        }
    }
}
Thanks for your help and your time.
Following the update to iOS 17.4.1, our team has observed a recurring issue across all iPhone browsers in our Virtual Try On web application. Specifically, when users switch between products, the camera permission is disrupted (it changes to "not allowed"), resulting in a black screen in the canvas where the live camera stream normally displays.
We have noted that several users have reported experiencing the same issue. We kindly request your assistance in addressing this matter. Could you please provide guidance on any potential fixes or workarounds for this issue? Additionally, we would appreciate an estimated timeline for when a resolution might be expected.
Thank you for your attention to this matter. We look forward to your prompt response and assistance in resolving this issue.
I have a camera application which aims to take images as close to simultaneously as possible from the wide and ultra-wide cameras. The AVCaptureMultiCamSession is setup with manual connections. Note: we are not using builtInDualWideCamera with constituent photo delivery enabled since some features we use are not supported in that mode.
At the moment, we are manually trying to synchronize frames between the two cameras, but we would like to use the AVCaptureDataOutputSynchronizer to improve our results.
Is it possible to synchronize the wide and ultra-wide video outputs? All examples and docs that I've found show synchronization with video and depth, metadata, or audio, but not two video outputs.
From my testing, I've found that the dataOutputSynchronizer either fires with the wide video output, or the ultra video output, but never both (at least one is nil), suggesting that they are not being synchronized.
self.outputSync = AVCaptureDataOutputSynchronizer(dataOutputs: [wideCameraOutput, ultraCameraOutput])
outputSync.setDelegate(self, queue: .main)

// ...

func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    guard let syncedWideData = synchronizedDataCollection.synchronizedData(for: self.wideCameraOutput) as? AVCaptureSynchronizedSampleBufferData,
          let syncedUltraData = synchronizedDataCollection.synchronizedData(for: self.ultraCameraOutput) as? AVCaptureSynchronizedSampleBufferData else {
        return
    }
    // Either syncedWideData or syncedUltraData is always nil, so the guard condition never passes.
}
Hello everyone, I am a student working on the final project for my college degree. I do not have a paid developer account since I do not need to put my app on the App Store.
In my project I need to use the camera of an iOS device, and I know I need to add NSCameraUsageDescription to Info.plist. However, when I add the description to my Info.plist and build the project, it fails and says "Provisioning profile "iOS Team Provisioning Profile: " doesn't include the NSCameraUsageDescription and NSPhotoLibraryUsageDescription entitlements."
I also notice that in the Info.plist file, when I change the property list type to entitlements, I cannot find NSCameraUsageDescription when I add a row.
What's the problem? Is this because I am not an official developer?
I've been trying to follow the "Supporting Continuity Camera in Your Mac App" article to implement the "Import from iPhone or iPad" menu in my macOS app. I've been able to replicate most of the article in a test AppKit application, but cannot do the same in my SwiftUI application.
I'm not sure how to get the "NSMenuItemImportFromDeviceIdentifier" identifier into a SwiftUI Menu, or how to create an NSMenu with an NSMenuItem for a SwiftUI app. I'm also not sure how to handle receiving the image in the SwiftUI environment. Any advice you might have is appreciated. Thanks!
I'm not sure if I just missed a recent breaking change, but we are having an issue with the camera in our single-page app on iOS 17.4.1 in Safari. We can open the camera and display it to the user using getUserMedia. However, if the path of the site changes at all (for example, the user clicks a button that opens a side panel, which changes the path in the browser), the camera goes black, even if the video element is still being displayed.
I can see in the browser that the camera has stopped, and the user has to re-enable it manually by tapping "Start Using Camera".
Any ideas what could be going on here?
I have the following code
function load() {
    navigator.mediaDevices.getUserMedia({ video: true })
        .then(function (stream) {
            var videoElement = document.getElementById('video');
            videoElement.srcObject = stream;
        })
        .catch(function (error) {
            console.log('navigator.mediaDevices.getUserMedia error: ', error.message, error.name);
        });
}

<video id="video" playsinline autoplay></video>
While the code works fine in browsers and loads the camera on iOS, I can't seem to get the full iOS camera overlay (such as zooming, etc.); I just get a basic camera stream. Is it possible to stream the camera in a browser with full iOS camera functionality?
**Why does using CameraPicker require user authorization through a pop-up?**
Why don't ImagePicker or PhotoPicker require additional pop-up authorizations for accessing the photo library? All of these are implemented using UIImagePickerController, so why does one require a pop-up and the others do not?
Additionally, I thought that by configuring the picker, I would theoretically not need any permissions. If permissions are still required, wouldn’t it make more sense to directly request camera permissions and utilize the native camera functionality? What then are the advantages of using the picker?
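For reference, a minimal sketch of the permission-free path described above, using PHPickerViewController (it runs out of process and only returns the items the user picks, so no photo-library authorization prompt appears); the function name is just illustrative:

import PhotosUI
import UIKit

func makePhotoPicker(delegate: PHPickerViewControllerDelegate) -> PHPickerViewController {
    // No PHPhotoLibrary access is requested; the picker UI is hosted by the system.
    var config = PHPickerConfiguration()
    config.filter = .images
    config.selectionLimit = 1
    let picker = PHPickerViewController(configuration: config)
    picker.delegate = delegate
    return picker
}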
In my demo, I load index.html into a WKWebView. When I tap the file button, the camera page is presented and then dismisses quickly.
ViewController.m

@interface ViewController ()
@property (nonatomic, strong) WKWebView *wkWebView;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    WKWebViewConfiguration *configuration = [WKWebViewConfiguration new];
    self.wkWebView = [[WKWebView alloc] initWithFrame:CGRectMake(0, 0, 400, 300) configuration:configuration];
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"index" withExtension:@"html"];
    [self.wkWebView loadFileURL:url allowingReadAccessToURL:[[NSBundle mainBundle] bundleURL]];
    self.wkWebView.backgroundColor = [UIColor blueColor];
    [self.view addSubview:self.wkWebView];
}

@end
index.html

<html lang="en">
<head>
    <meta charset="UTF-8">
</head>
<body>
    <div>
        <label style="font-size: 40px;">open camera</label>
        <input type="file" accept="image/*" capture="camera" id="file-input" class="file-input">
    </div>
</body>
</html>
Hi guys,
I'm designing a customized camera based on AVFoundation. I can output Live Photos from AVCaptureDeviceInput for now. I expect to take still and Live Photos with different aspect ratios, just like Apple's Camera app does (1:1, 4:3, 16:9).
I didn't find any useful info in the docs. Any suggestions?
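One possible fallback, offered only as an assumption about how such UIs are commonly built rather than what Apple's Camera app actually does, is to capture at the sensor's native 4:3 and center-crop the result to the selected ratio. A rough sketch, ignoring image orientation for brevity:

import UIKit

func crop(_ image: UIImage, toAspectRatio ratio: CGFloat) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    // Largest centered rect with the requested width:height ratio.
    let targetHeight = min(height, width / ratio)
    let targetWidth = min(width, height * ratio)
    let cropRect = CGRect(x: (width - targetWidth) / 2,
                          y: (height - targetHeight) / 2,
                          width: targetWidth,
                          height: targetHeight)
    guard let cropped = cgImage.cropping(to: cropRect) else { return nil }
    return UIImage(cgImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}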
I have built a camera application which uses a AVCaptureSession with the AVCaptureDevice set to .builtInDualWideCamera and isVirtualDeviceConstituentPhotoDeliveryEnabled=true to enable delivery of "simultaneous" photos (AVCapturePhoto) for a single capture request.
Our app ideally would have the timestamp difference between the photos in a single capture request as short as possible, but we don't have a good idea of what the theoretical or practical limits of this timestamp difference are.
In my testing on an iPhone 12 Pro, with a frame rate of 33Hz and the preset set to hd1920x1080, I get the timestamp difference between photos at approx 0.3ms, which seems smaller than I would expect, unless the frames are being synchronised incredibly well under the hood.
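For reference, a condensed sketch of one way to measure that difference from the per-constituent AVCapturePhoto callbacks (assuming a session already configured with constituent photo delivery enabled on the photo output; the class and method names here are illustrative):

import AVFoundation

final class PairTimer: NSObject, AVCapturePhotoCaptureDelegate {
    private var timestamps: [CMTime] = []

    func capture(with photoOutput: AVCapturePhotoOutput, device: AVCaptureDevice) {
        let settings = AVCapturePhotoSettings()
        // Ask for one photo from each constituent camera (wide + ultra wide).
        settings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = device.constituentDevices
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // The delegate is called once per constituent device for a single request.
        timestamps.append(photo.timestamp)
        if timestamps.count == 2 {
            let delta = CMTimeSubtract(timestamps[1], timestamps[0])
            print("Timestamp difference: \(CMTimeGetSeconds(delta) * 1000) ms")
            timestamps.removeAll()
        }
    }
}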
This leaves the following unanswered questions:
What sort of ranges of values should we expect to come out of these timestamp differences between photos?
What factors influence this?
Is there any way to control these values to ensure they are as small as possible? (Will likely be answered by (2))
After my iPad 6 upgrades from iOS 17.3 to 17.4, the AVCaptureMetadataOutput delegate is not called anymore. I find there is the same problem in a stackoverflow post:
https://stackoverflow.com/questions/78128010/ipados-17-4-avcapturemetadataoutput-delegate-not-called-qrscanner
An Apple webpage said the "QR code scanning" issue is fixed in iPadOS 17.4.1:
If your iPad is unable to scan QR codes after updating to iPadOS 17.4 - Apple Support - https://support.apple.com/en-lamr/118614
That's true, I confirm that on my iPad 6.
But, unfortunately, iPadOS 17.4.1 fixes ONLY QR code scanning! It doesn't fix scanning of other barcode types, like PDF417.
Happening on
iPad (7th Generation)
iPad (6th Generation)
iPad Pro 12.9-inch (2nd Generation)
iPad Pro 10.5-inch
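For context, a minimal sketch of the kind of scanning setup affected (assuming an already-configured AVCaptureSession; on these iPadOS 17.4.x devices the delegate still fires for .qr but not for .pdf417, as described above):

import AVFoundation

func addBarcodeOutput(to session: AVCaptureSession,
                      delegate: AVCaptureMetadataOutputObjectsDelegate) {
    let output = AVCaptureMetadataOutput()
    guard session.canAddOutput(output) else { return }
    session.addOutput(output)
    output.setMetadataObjectsDelegate(delegate, queue: .main)
    // availableMetadataObjectTypes is only populated after the output is added to the session.
    output.metadataObjectTypes = [.qr, .pdf417]
}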
Hey!
I'm trying to open the front camera in my demo app, and from what I read in the Apple docs and forums, if you have configured your Persona you will get that image.
But I'm having some issues with it. This is my code:
import SwiftUI
import AVFoundation

struct ContentView: View {
    @Environment(\.presentationMode) var presentationMode

    var body: some View {
        ZStack {
            VStack {
                Image("logo")
                    .resizable()
                    .frame(width: 337, height: 211)
                    .accessibilityHidden(true)
                Text("My first Vision Pro app.")
                    .multilineTextAlignment(.center)
                    .font(.headline)
                    .frame(width: 340)
                    .padding(.bottom, 10)
                Button {
                    // Add camera functionality here
                } label: {
                    Text("Open Camera")
                        .frame(maxWidth: .infinity)
                }
                .onAppear {
                    requestCameraAccess()
                }
                .onTapGesture {
                    // Check if camera permission is granted
                    if AVCaptureDevice.authorizationStatus(for: .video) == .authorized {
                        openFrontCamera()
                    } else {
                        requestCameraAccess()
                    }
                }
            }
        }
    }

    func requestCameraAccess() {
        AVCaptureDevice.requestAccess(for: .video) { authorized in
            DispatchQueue.main.async {
                if authorized {
                    // Permission granted, open camera if needed
                    openFrontCamera()
                } else {
                    // Handle permission denied case (optional)
                }
            }
        }
    }

    func openFrontCamera() {
    }
}
In the openFrontCamera() function I tried using .devices(), .default(), and other methods like you would use for other Apple devices, but this doesn't work with Vision Pro and I can't find anything that tells me how to open it.
Has anyone been able to work this out?
For our application, we are aiming to have full control over setting and locking the camera exposure settings when taking a video. We’re working with Apple’s AVFoundation framework for a range of devices, but most of the development is focused on the iPad 8 front camera. The manual settings are specific to our use, so we aim to use the custom exposure mode with e.g ISO = 100, exposureDuration = 1/60, and a fixed white balance. The duration, ISO, and white balance are all set in advance of recording, but when we begin we can see that something is still adjusting and compensating for lighting changes.
We then also tried locking the exposure mode after setting the custom values, but there appears to be a delay in this lock taking effect. While tracking the ISO during a recording, we see that the ISO values change in the first second of the recording, leading to oversaturated images, despite our efforts to keep it locked.
This is our attempt to lock the settings using custom mode, which we don’t adjust ourselves during the recording, but it does not work as expected:
func setCameraSettings(newValueISO: Float, newValueDuration: CMTime) {
    do {
        try cameraDevice?.lockForConfiguration()
        cameraDevice?.automaticallyAdjustsVideoHDREnabled = false
        cameraDevice?.setExposureModeCustom(duration: newValueDuration, iso: newValueISO, completionHandler: { [self] _ in
            cameraDevice?.setWhiteBalanceModeLocked(with: cameraDevice!.deviceWhiteBalanceGains)
            if cameraDevice!.isFocusModeSupported(.locked) {
                cameraDevice!.focusMode = .locked
                debugPrint("Focus mode set to locked.")
            }
            cameraDevice?.unlockForConfiguration()
        })
    } catch {
        debugPrint("Error adjusting the exposure")
        cameraDevice?.unlockForConfiguration()
    }
}
We then tried to lock the exposure mode after setting the custom values, but it then changes during the first second of the recording. We also explicitly tried setting exposureTargetBias to 0, but this made no difference.
func setCameraSettings(newValueISO: Float, newValueDuration: CMTime) {
    guard let camera = cameraDevice else { return }
    do {
        if camera.isExposureModeSupported(.custom) {
            do {
                try camera.lockForConfiguration()
                let customExposureBias: Float = 0
                //camera.setExposureTargetBias(customExposureBias, completionHandler: nil)
                if camera.isExposureModeSupported(.custom) {
                    camera.setExposureModeCustom(duration: newValueDuration, iso: newValueISO) { [weak camera] _ in
                        guard let camera = camera else { return }
                        if camera.isExposureModeSupported(.locked) {
                            camera.exposureMode = .locked
                        }
                    }
                }
                camera.unlockForConfiguration()
                print("Exposure settings locked with custom values.")
            } catch {
                print("Failed to lock configuration for capture device: \(error.localizedDescription)")
                camera.unlockForConfiguration()
            }
        } else {
            print("Custom exposure mode is not supported.")
        }
    }
}
We would very much appreciate input on how to keep the manually selected camera settings fixed throughout the video recording.
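For clarity, here is a condensed sketch of one possible sequencing: apply the custom exposure, wait for its completion handler, then lock exposure, and only start recording after that. lockExposureThenRecord and startRecording are placeholder names, and whether this ordering is actually sufficient is part of what we are asking:

import AVFoundation

func lockExposureThenRecord(camera: AVCaptureDevice, iso: Float, duration: CMTime,
                            startRecording: @escaping () -> Void) {
    do {
        try camera.lockForConfiguration()
        camera.setExposureModeCustom(duration: duration, iso: iso) { _ in
            // The completion handler fires once the custom values have been applied.
            do {
                try camera.lockForConfiguration()
                if camera.isExposureModeSupported(.locked) {
                    camera.exposureMode = .locked
                }
                camera.unlockForConfiguration()
            } catch {
                print("Could not lock configuration: \(error)")
            }
            startRecording()
        }
        camera.unlockForConfiguration()
    } catch {
        print("Could not lock configuration: \(error)")
    }
}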
Hey all!
I'm building a camera app using AVFoundation, and I am using the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates. (I cannot use AVCaptureMovieFileOutput because I am doing some processing in between.)
When recording the audio CMSampleBuffers to the AVAssetWriter, I noticed that compared to the stock iOS camera app, they are mono audio, not stereo audio.
I wonder how recording in stereo audio works, are there any guides or documentation available for that?
Is a stereo audio frame still one CMSampleBuffer, or will it be multiple CMSampleBuffers? Do I need to synchronize them? Do I need to set up the AVAssetWriter/AVAssetWriterInput differently?
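One possibly relevant mechanism, offered here only as an assumption, is the stereo polar pattern that the built-in microphone's data sources expose on iOS 14 and later. A minimal sketch of enabling it:

import AVFoundation

func enableStereoCapture(orientation: AVAudioSession.StereoOrientation = .portrait) throws {
    let session = AVAudioSession.sharedInstance()
    // Find the built-in mic and a data source that supports the stereo polar pattern.
    guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }),
          let dataSource = builtInMic.dataSources?.first(where: {
              $0.supportedPolarPatterns?.contains(.stereo) == true
          }) else {
        return // stereo capture not available on this device
    }
    try dataSource.setPreferredPolarPattern(.stereo)
    try builtInMic.setPreferredDataSource(dataSource)
    try session.setPreferredInput(builtInMic)
    try session.setPreferredInputOrientation(orientation)
}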
This is my Audio Session code:
func configureAudioSession(configuration: CameraConfiguration) throws {
    ReactLogger.log(level: .info, message: "Configuring Audio Session...")
    // Prevent iOS from automatically configuring the Audio Session for us
    audioCaptureSession.automaticallyConfiguresApplicationAudioSession = false
    let enableAudio = configuration.audio != .disabled

    // Check microphone permission
    if enableAudio {
        let audioPermissionStatus = AVCaptureDevice.authorizationStatus(for: .audio)
        if audioPermissionStatus != .authorized {
            throw CameraError.permission(.microphone)
        }
    }

    // Remove all current inputs
    for input in audioCaptureSession.inputs {
        audioCaptureSession.removeInput(input)
    }
    audioDeviceInput = nil

    // Audio Input (Microphone)
    if enableAudio {
        ReactLogger.log(level: .info, message: "Adding Audio input...")
        guard let microphone = AVCaptureDevice.default(for: .audio) else {
            throw CameraError.device(.microphoneUnavailable)
        }
        let input = try AVCaptureDeviceInput(device: microphone)
        guard audioCaptureSession.canAddInput(input) else {
            throw CameraError.parameter(.unsupportedInput(inputDescriptor: "audio-input"))
        }
        audioCaptureSession.addInput(input)
        audioDeviceInput = input
    }

    // Remove all current outputs
    for output in audioCaptureSession.outputs {
        audioCaptureSession.removeOutput(output)
    }
    audioOutput = nil

    // Audio Output
    if enableAudio {
        ReactLogger.log(level: .info, message: "Adding Audio Data output...")
        let output = AVCaptureAudioDataOutput()
        guard audioCaptureSession.canAddOutput(output) else {
            throw CameraError.parameter(.unsupportedOutput(outputDescriptor: "audio-output"))
        }
        output.setSampleBufferDelegate(self, queue: CameraQueues.audioQueue)
        audioCaptureSession.addOutput(output)
        audioOutput = output
    }
}
This is how I activate the audio session just before I start recording:
let audioSession = AVAudioSession.sharedInstance()
try audioSession.updateCategory(AVAudioSession.Category.playAndRecord,
                                mode: .videoRecording,
                                options: [.mixWithOthers,
                                          .allowBluetoothA2DP,
                                          .defaultToSpeaker,
                                          .allowAirPlay])

if #available(iOS 14.5, *) {
    // prevents the audio session from being interrupted by a phone call
    try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
}
if #available(iOS 13.0, *) {
    // allow system sounds (notifications, calls, music) to play while recording
    try audioSession.setAllowHapticsAndSystemSoundsDuringRecording(true)
}

audioCaptureSession.startRunning()
And this is how I set up the AVAssetWriter:
let audioSettings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: options.fileType)
let format = audioInput.device.activeFormat.formatDescription
audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings, sourceFormatHint: format)
audioWriter!.expectsMediaDataInRealTime = true
assetWriter.add(audioWriter!)
ReactLogger.log(level: .info, message: "Initialized Audio AssetWriter.")
The rest is trivial - I receive CMSampleBuffers of the audio in my delegate's callback, write them to the audioWriter, and it ends up in the .mov file - but it is not stereo, it's mono.
Is there anything I'm missing here?
I want to make a camera app for capturing spatial video.
I found some apps for capturing spatial video, but I don't know how I can open the dual camera.
Please let me know how I can handle this.
There is a similar post on Stack Overflow, and multiple people have reported this (you will encounter it if you run the app for about 10 minutes). I'm hoping this could get Apple's attention somehow.
After downloading the project code (https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera) and running the Swift sample code on an iPhone 14 Pro, the app crashes intermittently, throwing this error:
Execution of the command buffer was aborted due to an error during execution. Caused GPU Timeout Error (00000002:kIOGPUCommandBufferCallbackErrorTimeout)
Sometimes it will crash within a few seconds, sometimes it can take around 10 minutes.
Has anyone here experienced this crash from the sample code, or using the LiDAR camera?
I have spent a long time trying to solve this issue. I have searched the web high and low and submitted (I think) a report to Apple about it. I am unable to get Xcode to show me the line of code where the crash is happening.
Any help would be greatly appreciated.
I am currently renovating an application for macOS Sonoma (14.4) which triggers a Canon 60D via USB cable. Unlike what happened before on Mac OS X 10.6, the camera (ICCameraDevice) has a description that contains only two capabilities:
{
UUIDString = "00000000-0000-0000-0000-000004A93215";
autolaunchApplicationPath = "";
capabilities = (
ICCameraDeviceCanDeleteOneFile,
ICCameraDeviceCanAcceptPTPCommands
);
class = ICCameraDevice;
connectionID = 0xffff0001;
delegate = "<0x600003157ac0>";
deviceID = 0xffff0001;
deviceRef = 0xffff0001;
iconPath = "(null)";
locationDescription = ICDeviceLocationDescriptionUSB;
moduleExecutableArchitecture = 0;
modulePath = "/System/Library/Image Capture/Devices/PTPCamera.app";
moduleVersion = "1.0";
name = "Canon EOS 60D";
persistentIDString = "00000000-0000-0000-0000-000004A93215";
shared = NO;
softwareInstallPercentDone = "0.000000";
transportType = ICTransportTypeUSB;
type = 0x00000101;
} timeOffset : 0.000000
hasConfigurableWiFiInterface : N/A
isAccessRestrictedAppleDevice : NO
As you can see, ICCameraDeviceCanTakePicture is not present now, and so I cannot take a picture with requestTakePicture.
Do I need to do anything special to regain these capabilities, like in older versions of macOS?
Is my only option to use PTP commands?
Thanks!