
How to access HDRGainMap from AVCapturePhoto
Hey, I'm building a camera app and I want to use the captured HDRGainMap alongside the photo to do some processing with a CIFilter chain. How can this be done? I can't find any documentation anywhere on this, only on how to access the HDRGainMap from an existing HEIC file, which I have done successfully. For that I'm doing something like the following:

    // Read the HDR gain map auxiliary data from an existing HEIC image source.
    if let gainmap = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeHDRGainMap) {
        let gainDict = gainmap as NSDictionary
        let gainData = gainDict[kCGImageAuxiliaryDataInfoData] as? Data
        let gainDescription = gainDict[kCGImageAuxiliaryDataInfoDataDescription]
        let gainMeta = gainDict[kCGImageAuxiliaryDataInfoMetadata]
    }

However, I'm not sure what the approach is with an AVCapturePhoto output from an AVCaptureDevice. Thanks!
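One idea I've been considering (an untested sketch, and I don't know whether the gain map actually survives this path) is to run the same ImageIO approach over the encoded bytes of the capture, via fileDataRepresentation():

    import AVFoundation
    import ImageIO

    // Untested sketch: treat the encoded AVCapturePhoto as a CGImageSource and
    // pull the gain map out the same way as for a HEIC file on disk.
    func gainMapInfo(from photo: AVCapturePhoto) -> NSDictionary? {
        guard let data = photo.fileDataRepresentation(),
              let source = CGImageSourceCreateWithData(data as CFData, nil),
              let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
                  source, 0, kCGImageAuxiliaryDataTypeHDRGainMap)
        else { return nil }
        return info as NSDictionary
    }

If there's a way to get at the gain map directly from AVCapturePhoto without re-parsing the encoded data, that would obviously be preferable.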
Replies: 2 · Boosts: 0 · Views: 360 · Jan ’25
Slow performance decoding large images with Core Image.
I'm building a camera app that does some post processing after the photo has been taken. With 12MP images the processing is pretty quick, but larger 24MP images are very slow. I created a very simple example to demonstrate the issue, which loads an image and then renders it to JPEG data:

    let context = CIContext()
    let imageUrl = Bundle.main.url(forResource: "12mp", withExtension: "jpg")!
    let data = try! Data(contentsOf: imageUrl)
    let ciImage = CIImage(data: data)!

    let start = CFAbsoluteTimeGetCurrent()
    let jpegData = context.jpegRepresentation(of: ciImage, colorSpace: context.workingColorSpace!)
    print(jpegData?.count ?? 0)
    print("Resize Completed: " + String(CFAbsoluteTimeGetCurrent() - start))

Running this code on an iPhone 16 Pro with different images produces these benchmarks:

    12MP => 0.03s
    24MP => 1.22s
    48MP => 2.98s

I understand that processing time will increase with resolution, but it doesn't seem linear. I have tried setting different CIContext options such as .useSoftwareRenderer: false, but it has made no difference. From profiling the process (the profile I captured was for a 48MP image), it looks like the JPEG decoding is the bottleneck. Is there any way this can be improved?
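One workaround I've been experimenting with (a rough sketch, and I'm not sure it's the right approach) is to do the decode up front with ImageIO, optionally downsampling at the same time, and only hand Core Image an already-decoded CGImage:

    import ImageIO
    import CoreImage

    // Sketch: decode (and optionally downsample) with ImageIO before Core Image
    // ever sees the data. maxPixelSize is just a tuning knob for this example.
    func decodedCIImage(from url: URL, maxPixelSize: Int) -> CIImage? {
        let options: [CFString: Any] = [
            kCGImageSourceCreateThumbnailFromImageAlways: true,
            kCGImageSourceCreateThumbnailWithTransform: true,
            kCGImageSourceShouldCacheImmediately: true,
            kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
        ]
        guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
              let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary)
        else { return nil }
        return CIImage(cgImage: cgImage)
    }

That at least makes the decode cost explicit, but I'd still like to understand why the full-resolution decode inside Core Image scales so badly.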
Replies: 0 · Boosts: 0 · Views: 290 · Dec ’24
Launching an app with Camera Control
I've just received my iPhone 16 Pro to develop some of the Camera Control features. I am trying to set up my app to be launched from a button press, and from my research in the documentation this is only possible if I develop a LockedCameraCaptureExtension. Is this correct? My app is written in React Native, so building an extension would require me to re-create the entire UI in Swift, which just isn't possible with my resources. Ideally I could build a simple extension that requires authentication to open the app, but I'm not sure that will work, given this note in the documentation: "The app extension terminates shortly after launch if it doesn't have an active camera view that uses AVCaptureEventInteraction to handle events from the hardware buttons, or if access to the camera hasn't been requested." This is a bit frustrating for something as simple as opening an app. Thanks, Alex
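For reference, my understanding from the docs (a sketch only — I haven't been able to test this myself, since my UI lives in React Native) is that the extension's capture view would have to attach an AVCaptureEventInteraction along these lines:

    import AVKit
    import UIKit

    // Sketch of what I understand the extension's capture view controller needs:
    // an AVCaptureEventInteraction so presses on the hardware button reach the app.
    final class CaptureViewController: UIViewController {
        override func viewDidLoad() {
            super.viewDidLoad()
            let interaction = AVCaptureEventInteraction { event in
                if event.phase == .began {
                    // Trigger capture here.
                }
            }
            view.addInteraction(interaction)
        }
    }

Having to ship a whole capture UI like this just to get a launch-on-button-press behaviour is the part I'm hoping to avoid.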
Replies: 1 · Boosts: 0 · Views: 670 · Sep ’24
App crashing due to memory pressure on iPhone 13, but works fine on iPhone 12 and iPhone 11
I have a camera app that does some intensive processing. Each photo can require between 300-500MB of memory to process all the CIFilters, depth blur, etc. This has been working fine on my older test devices (iPhone 11 and 12), but I had some crash reports from users and noticed that they were always iPhone 13 / 13 mini users. After purchasing a 13, I can confirm that after taking 2-3 photos sequentially the app crashes due to memory usage.

What I don't understand is that I can take many photos sequentially on the iPhone 11 / 12 and they do not crash. The memory usage is certainly high, but all the images save and the app does not crash (see the memory graph I captured on the iPhone 11). All of these devices have 4GB of RAM, so why should the iPhone 13 not be able to handle it? One option would be to try to reduce the memory usage of the application, but that's a challenge when processing 12MP images. The memory debugger hasn't been very useful either. Any pointers greatly appreciated! Alex
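For context, the per-photo work is roughly shaped like this (a heavily simplified sketch — the real filter chain is much longer, and the single filter below is just a placeholder). Reusing one CIContext and draining an autoreleasepool per photo is what I'm currently trying in order to keep the peak down:

    import CoreImage
    import Foundation

    // Heavily simplified shape of the per-photo processing.
    // One shared CIContext, and an autoreleasepool per photo so intermediate
    // buffers are released before the next capture starts.
    let sharedContext = CIContext()

    func process(_ photoData: Data) -> Data? {
        autoreleasepool {
            guard let input = CIImage(data: photoData) else { return nil }
            // Placeholder for the real CIFilter chain (depth blur etc.).
            let filtered = input.applyingFilter("CIPhotoEffectMono")
            return sharedContext.jpegRepresentation(
                of: filtered,
                colorSpace: CGColorSpaceCreateDeviceRGB()
            )
        }
    }

The idea is that intermediate buffers get released between captures rather than piling up, but I'd still like to understand why the 13 behaves differently from the 11 and 12.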
Replies: 0 · Boosts: 0 · Views: 377 · Sep ’24
CIFilter chain failing to render parts of output
I’ve built an iOS camera app that applies many CIFilters to an image captured by the camera. Some of my users have reported that on occasion the output images have large blank regions, see the example below. Frustratingly, I can’t reproduce this myself! Does anyone know what could be causing it? Is it a memory issue? I haven’t posted the code as there’s a lot to look over and I’m not sure it would help diagnose the problem. Thanks for any pointers.
Replies: 1 · Boosts: 0 · Views: 544 · Aug ’24
Performant alternative to scaling a CIImage / PixelBuffer
Hey, I’m building a camera app where I am applying real-time effects to the view finder. One of those effects is a variable blur, so to improve performance I am scaling down the input image using CIFilter.lanczosScaleTransform(). This works fine and runs at 30 FPS, but when running the Metal profiler I can see that the scaling transforms use a lot of GPU time, almost as much as the variable blur itself. Is there a more efficient way to do this? The simplified chain looks like this (rough code sketch below):

1. Scale down the viewFinder CVPixelBuffer (CIFilter.lanczosScaleTransform)
2. Scale up the depthMap CVPixelBuffer to match the viewFinder size (CIFilter.lanczosScaleTransform)
3. Create CIImages from both CVPixelBuffers
4. Apply the variable depth blur (CIFilter.maskedVariableBlur)
5. Scale the final image up to the Metal view size (CIFilter.lanczosScaleTransform)
6. Render the CIImage to an MTKView using CIRenderDestination

From some research, I wonder if scaling the CVPixelBuffers using the Accelerate framework would be faster? Also, instead of scaling the final image, perhaps I could offload that step to the Metal view? Any pointers greatly appreciated!
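Here's the rough sketch of that chain (the scale factors and blur radius are placeholders, and the final upscale plus the CIRenderDestination render are omitted):

    import CoreImage
    import CoreImage.CIFilterBuiltins
    import CoreVideo

    // Stripped-down version of the chain above. The scale factors and radius
    // are placeholders; the real values depend on buffer and view sizes.
    func blurredFrame(viewFinder: CVPixelBuffer, depthMap: CVPixelBuffer) -> CIImage? {
        // 1. Scale the view finder down to cut the cost of the blur.
        let downscale = CIFilter.lanczosScaleTransform()
        downscale.inputImage = CIImage(cvPixelBuffer: viewFinder)
        downscale.scale = 0.5

        // 2. Scale the depth map up to match the downscaled view finder.
        let upscaleDepth = CIFilter.lanczosScaleTransform()
        upscaleDepth.inputImage = CIImage(cvPixelBuffer: depthMap)
        upscaleDepth.scale = 2.0

        // 3./4. Variable blur, masked by the depth map.
        let blur = CIFilter.maskedVariableBlur()
        blur.inputImage = downscale.outputImage
        blur.mask = upscaleDepth.outputImage
        blur.radius = 10

        // 5./6. The result is then scaled back up and rendered to the MTKView
        // via CIRenderDestination (not shown).
        return blur.outputImage
    }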
Replies: 2 · Boosts: 0 · Views: 792 · Jul ’24
Improving object separation with live depth data
Hey, I'm building a portrait mode into my camera app, but I'm having trouble matching the quality of Apple's native camera implementation. I'm streaming the depth data and applying a CIMaskedVariableBlur to the video stream, which works quite well, but the definition of the object in focus looks quite bad in some scenarios (see the comparison below against Apple's UI and depth data). What I don't quite understand is how Apple is able to do such a good cutout around my hand, assuming it has similar depth data to what I am receiving. You can see in the depth image that my hand is essentially the same colour as parts of the background, and this shows in the blur preview, but Apple gets around this. Does anyone have any ideas? Thanks!
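For reference, this is roughly how I'm building the blur mask from the streamed depth data (simplified; the gamma tweak is just something I've been experimenting with to push the hand and background further apart, and the value is a placeholder):

    import AVFoundation
    import CoreImage

    // Simplified: convert the streamed AVDepthData into a CIImage mask for
    // CIMaskedVariableBlur. The gamma adjustment is an experiment to exaggerate
    // the foreground/background separation; 0.7 is a placeholder value.
    func blurMask(from depthData: AVDepthData) -> CIImage {
        let disparity = depthData.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat16)
        var mask = CIImage(cvPixelBuffer: disparity.depthDataMap)
        mask = mask.applyingFilter("CIGammaAdjust", parameters: ["inputPower": 0.7])
        return mask
    }

Even so, the cutout around the hand is nowhere near as clean as Apple's, which is the part I'm stuck on.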
Replies: 0 · Boosts: 0 · Views: 498 · Jun ’24