
Launching an app with Camera Control
I've just received my iPhone 16 Pro to develop some of the Camera Control features. I am trying to set up my app to be launched from a button press, and from my research in the documentation this is only possible if I develop a LockedCameraCaptureExtension. Is this correct? My app is written in React Native, so building an extension would require me to re-create the entire UI in Swift, which just isn't possible with my resources. Ideally I could build a simple extension that requires authentication to open the app, but I'm not sure that will work, since the documentation states: "The app extension terminates shortly after launch if it doesn't have an active camera view that uses AVCaptureEventInteraction to handle events from the hardware buttons, or if access to the camera hasn't been requested." This is a bit frustrating for something as simple as opening an app. Thanks, Alex
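For reference, my understanding is that the extension needs a live camera view with an AVCaptureEventInteraction attached, roughly like the minimal UIKit sketch below (my own placeholder view controller, not tested inside a LockedCameraCaptureExtension):

```swift
import AVKit
import UIKit

final class CaptureViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // React to presses of the hardware capture buttons (Camera Control / volume).
        let interaction = AVCaptureEventInteraction { event in
            switch event.phase {
            case .began:
                // Button pressed down.
                break
            case .ended:
                // Button released: trigger the actual photo capture here.
                break
            case .cancelled:
                break
            @unknown default:
                break
            }
        }
        view.addInteraction(interaction)
    }
}
```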
Replies: 1 · Boosts: 0 · Views: 512 · Sep ’24
App crashing due to memory pressure on iPhone 13, but works fine on iPhone 12 and iPhone 11
I have a camera app that does some intensive processing. Each photo can require between 300 and 500 MB of memory to process all the CIFilters, depth blur, etc. This has been working fine on my older test devices, an iPhone 11 and 12, but I had some crash reports from users and noticed that they were always iPhone 13 / 13 mini users. After purchasing a 13, I can confirm that after taking 2-3 photos sequentially the app crashes due to memory usage. What I don't understand is that I can take many photos sequentially on the iPhone 11 / 12 and they do not crash. The memory usage is certainly high, but all the images save and the app does not crash. Here's what the memory usage looks like when using the iPhone 11: All the devices have 4 GB of RAM, so why should the iPhone 13 not be able to handle it? One option would be to try to reduce the memory usage of the application, but that's a challenge when processing 12 MP images. Here's what the memory debugger looks like, not very useful! Any pointers greatly appreciated! Alex
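For context, the per-photo processing looks roughly like this (a heavily simplified sketch; the real chain applies many more filters, and the names and values here are placeholders):

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Sketch: one shared CIContext, with intermediate caching disabled
// to try to keep peak memory down between photos.
let context = CIContext(options: [.cacheIntermediates: false])

func process(photo: CIImage, depthMap: CIImage) -> CGImage? {
    // Everything allocated inside the pool is released before the next photo.
    return autoreleasepool {
        let blur = CIFilter.maskedVariableBlur()
        blur.inputImage = photo      // 12 MP capture
        blur.mask = depthMap         // depth-derived blur mask
        blur.radius = 20

        guard let output = blur.outputImage else { return nil }
        return context.createCGImage(output, from: photo.extent)
    }
}
```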
Replies: 0 · Boosts: 0 · Views: 298 · Sep ’24
CIFilter chain failing to render parts of output
I've built an iOS camera app that applies many CIFilters to an image captured by the camera. Some of my users have reported that on occasion the images have large parts that are blank, see below: Frustratingly, I can't reproduce this myself! Does anyone know what could be causing it? Is it a memory issue? I haven't posted the full filter code as there's a lot to look over and I'm not sure it would help diagnose it, but the general shape of the chain is sketched below. Thanks for any pointers.
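The chain follows the usual pattern of feeding each filter's outputImage into the next and rendering once at the end (the filters shown here are stand-ins, not the real chain):

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

let context = CIContext()

// Simplified shape of the chain: each filter's outputImage feeds the next,
// and the final CIImage is rendered once via the shared context.
func process(_ input: CIImage) -> CGImage? {
    let exposure = CIFilter.exposureAdjust()
    exposure.inputImage = input
    exposure.ev = 0.5

    let vibrance = CIFilter.vibrance()
    vibrance.inputImage = exposure.outputImage
    vibrance.amount = 0.8

    guard let output = vibrance.outputImage else { return nil }
    return context.createCGImage(output, from: input.extent)
}
```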
Replies: 1 · Boosts: 0 · Views: 465 · Aug ’24
Performant alternative to scaling a CIImage / PixelBuffer
Hey, I'm building a camera app where I am applying real-time effects to the viewfinder. One of those effects is a variable blur, so to improve performance I am scaling down the input image using CIFilter.lanczosScaleTransform(). This works fine and runs at 30 FPS, but when running the Metal profiler I can see that the scaling transforms use a lot of GPU time, almost as much as the variable blur. Is there a more efficient way to do this? The simplified chain is like this:

1. Scale down the viewFinder CVPixelBuffer (CIFilter.lanczosScaleTransform)
2. Scale up the depthMap CVPixelBuffer to match the viewFinder size (CIFilter.lanczosScaleTransform)
3. Create CIImages from both CVPixelBuffers
4. Apply the variable depth blur (CIFilter.maskedVariableBlur)
5. Scale the final image up to the Metal view size (CIFilter.lanczosScaleTransform)
6. Render the CIImage to an MTKView using CIRenderDestination

From some research, I wonder if scaling the CVPixelBuffers using the Accelerate framework would be faster? Also, instead of scaling the final image, perhaps I could offload this to the Metal view? A rough code sketch of the chain follows below. Any pointers greatly appreciated!
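Here's the stripped-down version in code, in case it helps (scale factors, radius, and names are placeholders, and the CIRenderDestination/MTKView render step is omitted):

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Rough per-frame chain. Error handling omitted for brevity.
func renderFrame(viewFinder: CVPixelBuffer, depthMap: CVPixelBuffer) -> CIImage? {
    let viewFinderImage = CIImage(cvPixelBuffer: viewFinder)
    let depthImage = CIImage(cvPixelBuffer: depthMap)

    // 1. Scale the viewfinder frame down to reduce the cost of the blur.
    let downscale = CIFilter.lanczosScaleTransform()
    downscale.inputImage = viewFinderImage
    downscale.scale = 0.5

    // 2. Scale the depth map up so it matches the downscaled viewfinder size.
    let upscaleDepth = CIFilter.lanczosScaleTransform()
    upscaleDepth.inputImage = depthImage
    upscaleDepth.scale = 2.0

    // 3. Apply the variable blur, masked by the depth map.
    let blur = CIFilter.maskedVariableBlur()
    blur.inputImage = downscale.outputImage
    blur.mask = upscaleDepth.outputImage
    blur.radius = 10

    // 4. Scale the result back up to the MTKView's drawable size
    //    before rendering via CIRenderDestination.
    let upscaleFinal = CIFilter.lanczosScaleTransform()
    upscaleFinal.inputImage = blur.outputImage
    upscaleFinal.scale = 2.0

    return upscaleFinal.outputImage
}
```

For the Accelerate route, I assume the relevant call would be something like vImageScale_ARGB8888 on the raw pixel buffers before creating the CIImages, but I haven't profiled that.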
Replies: 2 · Boosts: 0 · Views: 644 · Jul ’24
Improving object separation with live depth data
Hey, I'm building a portrait mode into my camera app but I'm having trouble with matching the quality of Apples native camera implementation. I'm streaming the depth data and applying a CIMaskedVariableBlur to the video stream which works quite well but the definition of the object in focus looks quite bad in some scenarios. See comparison below with Apples UI + depth data. What I don't quite understand is how Apple is able to do such a good cutout around my hand assuming it has similar depth data to what I am receiving. You can see in the depth image that my hand is essentially the same colour as parts of background, and this shows in the blur preview - but Apple gets around this. Does anyone have any ideas? Thanks!
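For reference, the blur step is roughly this (simplified sketch; depth-map smoothing/normalisation is omitted and the names are placeholders):

```swift
import AVFoundation
import CoreImage
import CoreImage.CIFilterBuiltins

// Applies the portrait-style blur to one video frame using its depth data.
// Assumes the depth map has already been scaled to match the frame size.
func portraitBlur(frame: CVPixelBuffer, depth: AVDepthData) -> CIImage? {
    let frameImage = CIImage(cvPixelBuffer: frame)
    let depthImage = CIImage(cvPixelBuffer: depth.depthDataMap)

    let blur = CIFilter.maskedVariableBlur()
    blur.inputImage = frameImage
    blur.mask = depthImage   // brighter mask values produce a stronger blur
    blur.radius = 15
    return blur.outputImage
}
```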
Replies: 0 · Boosts: 0 · Views: 432 · Jun ’24