Post Content:
Hi everyone,
I’m encountering an issue with how iPhone displays contact information from a vCard QR code in the contact preview. When I scan the QR code with my iPhone camera, the contact preview shows the email address between the name and the contact image, instead of displaying the organization name.
Here’s the structure of the vCard I’m using:
BEGIN:VCARD
VERSION:3.0
FN:Ahmad Rana
N:Rana;Ahmad;;;
ORG:Company 3
TEL;TYPE=voice,msg:+1234567890
EMAIL:a@gmail.com
URL:https://example.com
IMPP:facebook:fb
END:VCARD
What I Expect:
When I scan it with the camera, I expect the organization name to appear between the name and the contact image in the contact preview shown before the contact is created, but I get the email address there instead of the organization name. If only the organization is passed, it displays correctly, but when I also pass an email, the email is displayed in between.
Steps I’ve Taken:
Verified the vCard structure to ensure it follows the standard format.
Reordered the fields in the vCard to prioritize the organization name and job title.
Tested with a simplified vCard containing only the name, organization, and email.
Despite these efforts, the email address continues to be displayed in the contact preview between the name and the contact image, while the organization name is not shown as expected.
Question:
How can I ensure that the organization name is displayed correctly in the contact preview on iPhone when scanning a QR code? Are there specific rules or best practices for field prioritization in vCards that I might be missing?
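For reference, a minimal variant that might be worth testing (the TITLE value below is just a placeholder, and X-ABShowAs is an Apple-specific vCard extension; whether the camera's scan preview honors it is an open question):
BEGIN:VCARD
VERSION:3.0
N:Rana;Ahmad;;;
FN:Ahmad Rana
ORG:Company 3
TITLE:Engineer
X-ABShowAs:COMPANY
TEL;TYPE=voice,msg:+1234567890
EMAIL:a@gmail.com
URL:https://example.com
END:VCARD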
I would appreciate any insights or suggestions on how to resolve this issue.
Thank you!
Camera
Discuss using the camera on Apple devices.
Posts under Camera tag
182 Posts
Hey, I'm building a portrait mode into my camera app but I'm having trouble matching the quality of Apple's native camera implementation. I'm streaming the depth data and applying a CIMaskedVariableBlur to the video stream, which works quite well, but the definition of the object in focus looks quite bad in some scenarios. See the comparison below with Apple's UI + depth data.
What I don't quite understand is how Apple is able to do such a good cutout around my hand, assuming it has similar depth data to what I am receiving. You can see in the depth image that my hand is essentially the same colour as parts of the background, and this shows in the blur preview - but Apple gets around this.
Does anyone have any ideas?
Thanks!
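One guess at what might close the gap (Apple's own pipeline is not documented here and may rely on matting rather than raw depth): refine the depth mask with Vision person segmentation before handing it to CIMaskedVariableBlur. In the sketch below, frame and depthMask are placeholders for the CIImages already in the app, and depthMask is assumed to be white where the blur should be strongest.
import CoreImage
import CoreVideo
import Vision

// A sketch, not Apple's actual pipeline: refine the raw depth-based mask with
// Vision person segmentation before feeding it to CIMaskedVariableBlur.
// `frame` and `depthMask` are placeholders; `depthMask` is assumed to be
// white where the blur should be strongest (i.e. in the background).
func portraitBlur(frame: CIImage, depthMask: CIImage) -> CIImage? {
    // 1. Person segmentation gives a much crisper subject edge than depth alone.
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced
    let handler = VNImageRequestHandler(ciImage: frame, options: [:])
    try? handler.perform([request])
    guard let segmentation = request.results?.first?.pixelBuffer else { return nil }
    let personMask = CIImage(cvPixelBuffer: segmentation)
        .transformed(by: CGAffineTransform(
            scaleX: frame.extent.width / CGFloat(CVPixelBufferGetWidth(segmentation)),
            y: frame.extent.height / CGFloat(CVPixelBufferGetHeight(segmentation))))

    // 2. Zero out the blur over the subject: multiply the depth mask by the
    //    inverted person mask so the person's outline stays sharp.
    let combinedMask = depthMask.applyingFilter(
        "CIMultiplyCompositing",
        parameters: [kCIInputBackgroundImageKey: personMask.applyingFilter("CIColorInvert")])

    // 3. Variable blur driven by the combined mask.
    return frame.applyingFilter(
        "CIMaskedVariableBlur",
        parameters: ["inputMask": combinedMask, kCIInputRadiusKey: 12.0])
}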
After upgrading to the iOS 18 dev beta 1/2, the camera keeps opening automatically by itself, consuming more battery.
I have an app that uses a MultiCamCaptureSession, the devices of which are builtInUltraWideCamera and builtInLiDARDepthCamera cameras. Occasionally when outside I get some frame drops due to discontinuity that end in the media services being reset:
[06-24 11:27:13][CameraSession] Capture session runtime error: related decl 'e' for AVError(_nsError: Error Domain=AVFoundationErrorDomain Code=-11819 "Cannot Complete Action" UserInfo={NSLocalizedDescription=Cannot Complete Action, NSLocalizedRecoverySuggestion=Try again later.})
This runtime error notification is always preceded by 4-5 frame drops:
[06-24 11:27:10][CaptureSession] Dropped frame because Discontinuity
Logging the system temperature shows
[06-24 11:27:10][CaptureSession] Temperature is 'Fair'
I have some inclination that the frame discontinuity is being caused by the whiteBalanceMode of the capture session; perhaps the algorithm requires 5 recent frames to work. I had a similar problem with the LiDAR depth camera, where with filtering enabled exactly 5 frame drops would make the media services reset.
When the whiteBalanceMode is locked I do slightly better with 10 frame drops before the mediaServices are reset.
Is there any logging utility to determine the actual reason? All of these sample buffers come with no info attachment, only the not-so-useful "Dropped frame because Discontinuity." Any ideas for solving this would be helpful as well. Maybe tuning the camera to work better with quickly varying lighting conditions?
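A diagnostic sketch along those lines, assuming the app already has an AVCaptureVideoDataOutputSampleBufferDelegate: dump every attachment that arrives with a dropped frame, and lock white balance at its current gains to test the white-balance theory.
import AVFoundation
import CoreMedia

// Diagnostic only: log whatever attachments accompany a dropped frame.
func captureOutput(_ output: AVCaptureOutput,
                   didDrop sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    let reason = CMGetAttachment(sampleBuffer,
                                 key: kCMSampleBufferAttachmentKey_DroppedFrameReason,
                                 attachmentModeOut: nil)
    let allAttachments = CMCopyDictionaryOfAttachments(allocator: kCFAllocatorDefault,
                                                       target: sampleBuffer,
                                                       attachmentMode: kCMAttachmentMode_ShouldPropagate)
    print("dropped frame, reason:", String(describing: reason))
    print("all attachments:", String(describing: allAttachments))
}

// Freeze the gains the auto white balance algorithm is currently using, to test
// the theory that the discontinuities come from white balance convergence.
func lockCurrentWhiteBalance(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    if device.isWhiteBalanceModeSupported(.locked) {
        device.setWhiteBalanceModeLocked(with: device.deviceWhiteBalanceGains) { _ in }
    }
}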
After upgrading to iOS 18, my camera app is opening automatically without my pressing anything on the phone.
I just followed the video and added the code, but when I switch to spatial video capturing, the videoPreviewLayer shows black.
<<<< FigCaptureSessionRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSessionRemote.m:405) - (err=0)
<<<< FigCaptureSessionRemote >>>> captureSessionRemote_getObjectID signalled err=-16405 (kFigCaptureSessionError_ServerConnectionDied) (Server connection was lost) at FigCaptureSessionRemote.m:405
<<<< FigCaptureSessionRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSessionRemote.m:421) - (err=-16405)
<<<< FigCaptureSessionRemote >>>> Fig assert: "msg" at bail (FigCaptureSessionRemote.m:744) - (err=0)
Did I miss something?
We are developing apps for visionOS and need the following capabilities for a consumer app:
access to the main camera, to let users shoot photos and videos
reading QR codes, to trigger the download of additional content
So I was really happy when I noticed that visionOS 2.0 has these features.
However, I was shocked when I also realized that these capabilities are restricted to enterprise customers only:
https://developer.apple.com/videos/play/wwdc2024/10139/
I think that Apple is shooting itself in the foot with these restrictions. I can understand that privacy is important, but these limitations drastically restrict potential use cases for this platform, even in the consumer space.
IMHO Apple should decide whether they want to target consumers in the first place, or whether they want to go the HoloLens / Magic Leap route and mainly satisfy enterprise customers and their respective devs. With the current setup, Apple risks pushing devs away to other platforms where they have more freedom to create great apps.
How to display the user's own persona in a view
After watching the session video "Build a great Lock Screen camera capture experience", I'm still unclear about the UI.
So do developers need to provide a whole new UI in the extension? Can the main UI not be repurposed?
Hey!
I'm working on a camera app and I've noticed that the .builtInTripleCamera doesn't behave anything like the native app. Tested on iPhone 15 Pro Max and iPhone 12 Pro Max.
The documentation states the following, but that seems quite different from what is happening in the app:
Automatic switching from one camera to another occurs when the zoom factor, light level, and focus position allow.
So, does it automatically switch like the native camera, or do I need to do something?
Custom Camera vs Native Camera
Custom Camera
Native Camera
The code was adapted from Apple's sample project AVCamFilter.
Just download the AVCamFilter and update videoDeviceDiscoverySession:
private let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInTripleCamera],
    mediaType: .video,
    position: .unspecified
)
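A sketch of what might be worth checking or forcing on the virtual device after adding its input, assuming videoDevice is the .builtInTripleCamera device; whether this fully reproduces the native app's switching is an open question.
import AVFoundation

func configureAutoSwitching(for videoDevice: AVCaptureDevice) throws {
    try videoDevice.lockForConfiguration()
    defer { videoDevice.unlockForConfiguration() }

    // Zoom factors at which the virtual device is willing to switch lenses.
    print("switch-over zoom factors:", videoDevice.virtualDeviceSwitchOverVideoZoomFactors)

    // Automatic switching only happens with continuous focus/exposure.
    if videoDevice.isFocusModeSupported(.continuousAutoFocus) {
        videoDevice.focusMode = .continuousAutoFocus
    }
    if videoDevice.isExposureModeSupported(.continuousAutoExposure) {
        videoDevice.exposureMode = .continuousAutoExposure
    }

    // Explicitly request fully automatic constituent switching (iOS 15+).
    if videoDevice.primaryConstituentDeviceSwitchingBehavior != .unsupported {
        videoDevice.setPrimaryConstituentDeviceSwitchingBehavior(.auto,
                                                                 restrictedSwitchingBehaviorConditions: [])
    }
    print("active constituent:", videoDevice.activePrimaryConstituentDevice?.localizedName ?? "none")
}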
I am trying to use the AVCamFilter Apple sample project discussed in this WWDC session to get depth data using the dual camera. The project has built-in features to get depth data from the dual camera.
When the sample project was written, builtInDualWideCamera didn't exist yet, and the project only tries to get builtInDualCamera and builtInWideAngleCamera. When I run the project on my iPad Pro, it doesn't show any of the depth-related UI because the device doesn't have a builtInDualCamera device. So I added builtInDualWideCamera into the videoDeviceDiscoverySession, and it seems to get that device properly, but isDepthDataDeliverySupported is still returning false.
Is there some reason why isDepthDataDeliverySupported is false even though I seem to be using a dual camera device?
I know the device has a builtInLiDARDepthCamera but I wanted to try out the dual camera depth data to see how it performs for shorter distances. I wouldn't have expected the dual camera depth data delivery to be made unavailable on the device just because the LiDAR sensor is already available.
Using iPadOS 17.5.1, iPad Pro 11-inch 4th generation.
The depth feature of this sample app works fine on an iPhone 15 I tested. Also tried on an iPhone 15 Pro and it worked even though that device also has a LiDAR sensor, so the issue is presumably not related to the fact that the iPad Pro has a LiDAR sensor.
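A small sketch that may help narrow this down: list which of the device's formats actually advertise depth, since isDepthDataDeliverySupported depends on the configured device and format. Here device stands for the .builtInDualWideCamera returned by the discovery session.
import AVFoundation

func logDepthCapableFormats(of device: AVCaptureDevice) {
    // Only print formats that offer at least one depth data format.
    for (index, format) in device.formats.enumerated() where !format.supportedDepthDataFormats.isEmpty {
        print("format #\(index):", format.formatDescription,
              "depth formats:", format.supportedDepthDataFormats.map { $0.formatDescription })
    }
    // The format the session is actually using right now.
    print("active format depth formats:", device.activeFormat.supportedDepthDataFormats)
}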
I'm using CIDepthBlurEffect to create a portrait mode effect on a rendered image. The effect is working as expected; however, I want to create the "bokeh ball" effect which is seen in the Photos app. I see that the filter has an "inputShape" input of type NSString, but the documentation does not specify what value this should be.
Any pointers or help are greatly appreciated.
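One way to find out what "inputShape" accepts, as a sketch: ask the filter itself, since CIFilter exposes an attributes dictionary describing each input.
import CoreImage

// Print the filter's own description of its inputs, including "inputShape".
if let filter = CIFilter(name: "CIDepthBlurEffect") {
    print("inputs:", filter.inputKeys)
    if let shapeInfo = filter.attributes["inputShape"] {
        print("inputShape attributes:", shapeInfo)
    }
}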
I made a CameraExtension and installed it via OSSystemExtensionRequest.
I got the success callback. I uninstalled the old version of my CameraExtension and installed the new version.
The "systemextensionsctl list" command shows "[activated enabled]" for my new version.
But no daemon process for my CameraExtension is running. I need to reboot the OS to start the daemon process. This issue is new in macOS Sonoma 14.5; I did not see it on 14.4.x.
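For comparison, a sketch of the replacement flow worth double-checking (the bundle identifier is a placeholder): submit an activation request for the new version and let the delegate handle replacing the old one; a .willCompleteAfterReboot result would explain the daemon only appearing after a restart.
import SystemExtensions

final class ExtensionActivator: NSObject, OSSystemExtensionRequestDelegate {
    func activate() {
        // Placeholder identifier; use the extension's real bundle ID.
        let request = OSSystemExtensionRequest.activationRequest(
            forExtensionWithIdentifier: "com.example.my-camera-extension",
            queue: .main)
        request.delegate = self
        OSSystemExtensionManager.shared.submitRequest(request)
    }

    func request(_ request: OSSystemExtensionRequest,
                 actionForReplacingExtension existing: OSSystemExtensionProperties,
                 withExtension ext: OSSystemExtensionProperties) -> OSSystemExtensionRequest.ReplacementAction {
        // Let the OS replace the old version in place.
        .replace
    }

    func request(_ request: OSSystemExtensionRequest,
                 didFinishWithResult result: OSSystemExtensionRequest.Result) {
        // .completed vs .willCompleteAfterReboot is the interesting distinction here.
        print("activation result:", result)
    }

    func request(_ request: OSSystemExtensionRequest, didFailWithError error: Error) {
        print("activation failed:", error)
    }

    func requestNeedsUserApproval(_ request: OSSystemExtensionRequest) {
        print("waiting for user approval")
    }
}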
Dear all,
I have several scenes, each with its own camera at different positions. The scenes will be loaded with transitions.
If I set the pointOfView in every scene to that scene's camera, the transitions don't work properly. The active scene view switches to the position of the camera of the scene which is fading in.
If I comment the pointOfView out, the transitions work fine, but the following error message appears:
Error: camera node already has an authoring node - skip
Does someone have an idea how to fix this?
Many Thanks,
Ray
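A sketch of one thing worth trying, assuming the scene's camera node can be looked up by name (the name "camera" below is a placeholder): pass the incoming scene's camera to present(_:with:incomingPointOfView:completionHandler:) instead of setting pointOfView on the view up front.
import SceneKit
import SpriteKit

func present(_ nextScene: SCNScene, in scnView: SCNView) {
    // Look up the camera node of the scene that is about to fade in.
    let incomingCamera = nextScene.rootNode.childNode(withName: "camera", recursively: true)
    scnView.present(nextScene,
                    with: SKTransition.fade(withDuration: 1.0),
                    incomingPointOfView: incomingCamera) {
        // After the transition, the view already looks through the new scene's
        // camera, so pointOfView does not need to be reassigned here.
    }
}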
Can the example from
Support external cameras in your iPadOS app
work on iOS 17.5 with an iPhone 15 Pro?
https://developer.apple.com/videos/play/wwdc2023/10106/
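For what it's worth, a sketch of the same discovery call from that session, to check whether an iPhone on iOS 17.5 ever reports an external camera (my understanding is that external cameras are exposed on iPad hardware, so an empty list on iPhone would not be surprising):
import AVFoundation

// Discover external cameras (iOS/iPadOS 17+ device type).
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.external],
    mediaType: .video,
    position: .unspecified
)
print("external cameras:", discovery.devices.map { $0.localizedName })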
Hi
In my app I have to complete IDV (identity verification) by capturing the user's face and his/her documents. For this, the backend developer provides me a URL from the third-party IDV provider, which I open in a webview. While the camera capture screen is loading in the webview, a Live Broadcast screen pops up from nowhere. I don't want this Live Broadcast screen, but somehow it opens anyway. The good thing is that my expected camera screen is still open in the background, so I can continue from there.
The first time, I was also a bit confused about how this kind of screen pops up even though I didn't code for it, and it took me a little while to figure out how to close it.
Ordinary users of my app won't know how to close it. Please check the screenshots I attached. Please help me get rid of this popup.
Thank You
Just watched the new product release, and I'm really hoping the new iPad Pro being advertised as the next creative tool for filmmakers and artists will finally allow RAW captures in the native Camera app or AVFoundation API (currently RAW available devices returns 0 on the previous iPad Pro). With all these fancy multicam camera features and camera hardware, I don't think it really takes that much to enable ProRAW and Action Mode on the software side of the iPad. Unless their strategy is to make us "shoot on iPhone and edit on iPad" (as implied in their video credits) which has been my workflow with the iPhone 15 and 2022 iPad Pro :( :(
Hello,
I am working on a fairly complex iPhone app that controls the front built-in wide angle camera. I need to take and display a sequence of photos that cover the whole range of focus value available.
Here is how I do it :
call setExposureModeCustom to set the first lens position
wait for the completionHandler to be called back
capture a photo
do it again for the next lens position.
etc.
This works fine, but it takes longer than I expected for the completionHandler to be called back.
From what I've seen, the delay scales with the exposure duration.
When I set the exposure duration to the max value:
on the iPhone 14 Pro, it takes about 3 seconds (3 times the max exposure)
on the iPhone 8, it takes about 1.3 s (4 times the max exposure).
I was expecting a delay of two times the exposure duration: take a photo, throw one away while changing lens position, take the next photo, etc. but this takes more than that.
I also tried the same thing with changing the ISO instead of the focus position and I get the same kind of delays. Also, I do not think the problem is linked to the way I process the images because I get the same delay even if I do nothing with the output.
Is there something I could do to make things go faster for this use case?
Any input would be appreciated,
Thanks
I created a minimal testing app to reproduce the issue:
import Foundation
import AVFoundation

class Main: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let dispatchQueue = DispatchQueue(label: "VideoQueue", qos: .userInitiated)
    let session: AVCaptureSession
    let videoDevice: AVCaptureDevice
    var focus: Float = 0

    override init() {
        session = AVCaptureSession()
        session.beginConfiguration()
        session.sessionPreset = .photo
        videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)!
        super.init()
        let videoDeviceInput = try! AVCaptureDeviceInput(device: videoDevice)
        session.addInput(videoDeviceInput)
        let videoDataOutput = AVCaptureVideoDataOutput()
        if session.canAddOutput(videoDataOutput) {
            session.addOutput(videoDataOutput)
            videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
            videoDataOutput.setSampleBufferDelegate(self, queue: dispatchQueue)
        }
        session.commitConfiguration()
        dispatchQueue.async {
            self.startSession()
        }
    }

    func startSession() {
        session.startRunning()
        // lock max exposure duration
        try! videoDevice.lockForConfiguration()
        let exposure = videoDevice.activeFormat.maxExposureDuration.seconds * 0.5
        print("set max exposure", exposure)
        videoDevice.setExposureModeCustom(duration: CMTime(seconds: exposure, preferredTimescale: 1000), iso: videoDevice.activeFormat.minISO) { time in
            print("did set max exposure")
            self.changeFocus()
        }
        videoDevice.unlockForConfiguration()
    }

    func changeFocus() {
        let date = Date.now
        print("set focus", focus)
        try! videoDevice.lockForConfiguration()
        videoDevice.setFocusModeLocked(lensPosition: focus) { time in
            let dt = abs(date.timeIntervalSinceNow)
            print("did set focus - took:", dt, "frames:", dt / self.videoDevice.exposureDuration.seconds)
            self.next()
        }
        videoDevice.unlockForConfiguration()
    }

    func next() {
        focus += 0.02
        if focus > 1 {
            print("done")
            return
        }
        changeFocus()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        print("did receive video frame")
    }
}
I am implementing pan and zoom features for an app using a custom USB camera device, in iPadOS. I am using an update function (shown below) to apply transforms for scale and translation but they are not working. By re-enabling the animation I can see that the scale translation seems to initially take effect but then the image animates back to its original scale. This all happens in a fraction of a second but I can see it. The translation transform seems to have no effect at all. Printing out the value of AVCaptureVideoPreviewLayer.transform before and after does show that my values have been applied.
private func updateTransform() {
    #if false
    // Disable default animation.
    CATransaction.begin()
    CATransaction.setDisableActions(true)
    defer { CATransaction.commit() }
    #endif
    // Apply the transform.
    logger.debug("\(String(describing: self.videoPreviewLayer.transform))")
    let transform = CATransform3DIdentity
    let translate = CATransform3DTranslate(transform, translationX, translationY, 0)
    let scaled = CATransform3DScale(transform, scale, scale, 1)
    videoPreviewLayer.transform = CATransform3DConcat(translate, scaled)
    logger.debug("\(String(describing: self.videoPreviewLayer.transform))")
}
My question is this: how can I properly implement pan/zoom for an AVCaptureVideoPreviewLayer? Or even better, if you see a problem with my current approach or understand why the transforms I am applying do not work, please share that information.
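One alternative that might be worth a try, as a sketch: apply the pan/zoom transform to a plain container view that hosts the preview layer, rather than to AVCaptureVideoPreviewLayer itself, which as described above appears to snap back. Here previewContainer and the scale/translation values are placeholders for the app's own state.
import UIKit

func updateTransform(on previewContainer: UIView,
                     scale: CGFloat,
                     translationX: CGFloat,
                     translationY: CGFloat) {
    // Transform the hosting view; UIView transforms are not reset by the
    // capture pipeline the way the preview layer's transform seems to be.
    previewContainer.transform = CGAffineTransform(translationX: translationX, y: translationY)
        .scaledBy(x: scale, y: scale)
}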
I have built a camera application which uses a AVCaptureSession with the AVCaptureDevice set to .builtInDualWideCamera and isVirtualDeviceConstituentPhotoDeliveryEnabled=true to enable delivery of "simultaneous" photos (AVCapturePhoto) for a single capture request.
I am using the hd1920x1080 preset, but both the wide and ultra-wide photos are being delivered in the highest possible resolution (4224x2376). I've tried to disable any setting that suggests that it should be using that 4k resolution rather than 1080p on the AVCapturePhotoOutput, AVCapturePhotoSettings and AVCaptureDevice, but nothing has worked.
Some debugging that I've done:
When I turn off constituent photo delivery by commenting out the line of code below, I end up getting a single photo delivered with the 1080p resolution, as you'd expect.
// photoSettings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = captureDevice.constituentDevices
I tried the constituent photo delivery with the .builtInDualCamera and got only 4k results (same as described above)
I tried using a AVCaptureMultiCamSession with .builtInDualWideCamera and also only got 4k imagery
I inspected the resolved settings on photo.resolvedSettings.photoDimensions, and the dimensions suggest the imagery should be 1080p, but then when I inspect the UIImage, it is always 4k.
guard let imageData = photo.fileDataRepresentation() else { return }
guard let capturedImage = UIImage(data: imageData) else { return }
print("photo.resolvedSettings.photoDimensions", photo.resolvedSettings.photoDimensions) // 1920x1080
print("capturedImage.size", capturedImage.size) // 4224x2376
--
Any help here would be greatly appreciated, because I've run out of things to try and documentation to follow 🙏
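One more thing that might be worth trying, sketched below under the assumption that the app targets iOS 16 or later: cap the photo resolution explicitly with maxPhotoDimensions instead of relying on the session preset, since constituent delivery appears to ignore the preset here. photoOutput, captureDevice, and photoSettings stand for the existing objects in the app.
import AVFoundation

func capPhotoDimensions(photoOutput: AVCapturePhotoOutput,
                        captureDevice: AVCaptureDevice,
                        photoSettings: AVCapturePhotoSettings) {
    // Dimensions the active format is willing to deliver.
    let supported = captureDevice.activeFormat.supportedMaxPhotoDimensions
    // Pick the smallest supported dimensions at or above 1920x1080.
    if let target = supported
        .filter({ $0.width >= 1920 && $0.height >= 1080 })
        .min(by: { $0.width * $0.height < $1.width * $1.height }) {
        photoOutput.maxPhotoDimensions = target
        photoSettings.maxPhotoDimensions = target
    }
}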