I set up my AVCaptureSession for photo capture with depth data. In my AVCapturePhotoCaptureDelegate I receive the AVCapturePhoto that contains the depth data.
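In essence, the setup looks like this (a simplified sketch; the device choice, force-unwraps, and the `photoCaptureDelegate` reference are placeholders):

```swift
import AVFoundation

let session = AVCaptureSession()
session.sessionPreset = .photo

// Assumes a device that supports depth delivery (dual/triple rear camera or TrueDepth).
let device = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back)!
let input = try! AVCaptureDeviceInput(device: device)
session.addInput(input)

let photoOutput = AVCapturePhotoOutput()
session.addOutput(photoOutput)

// Depth delivery must be enabled on the output before it can be
// requested in the per-capture settings.
photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported

session.startRunning()

// Per-capture settings: request depth data and embed it in the photo file.
let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
settings.isDepthDataDeliveryEnabled = true
settings.embedsDepthDataInPhoto = true

photoOutput.capturePhoto(with: settings, delegate: photoCaptureDelegate)
```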
In the delegate callback I call fileDataRepresentation() on the photo and later use a PHAssetCreationRequest to save the image (including the depth data) as a new asset in Photos.
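The save path is roughly the following (error handling trimmed; assumes photo library authorization has already been granted):

```swift
import AVFoundation
import Photos

final class PhotoCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil,
              let data = photo.fileDataRepresentation() else { return }

        // Save the file data (image + embedded depth) as a new Photos asset.
        PHPhotoLibrary.shared().performChanges({
            let request = PHAssetCreationRequest.forAsset()
            request.addResource(with: .photo, data: data, options: nil)
        }, completionHandler: { success, error in
            // success/error handling elided
        })
    }
}
```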
When I load the image and its depth data again later, the depth data appears compressed: I observe heavy quantization of the values.
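This is how I read the depth data back (a sketch; `asset` is the PHAsset saved above, and I try the disparity auxiliary type first, which is what the dual camera typically embeds):

```swift
import Photos
import ImageIO
import AVFoundation

let options = PHImageRequestOptions()
options.isNetworkAccessAllowed = true
options.version = .original  // avoid any edited/rendered derivative

PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { data, _, _, _ in
    guard let data = data,
          let source = CGImageSourceCreateWithData(data as CFData, nil),
          // Depth is stored as an auxiliary image in the file.
          let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypeDisparity) as? [AnyHashable: Any],
          let depthData = try? AVDepthData(fromDictionaryRepresentation: info)
    else { return }

    // depthData.depthDataMap is where the heavy quantization shows up.
    print(depthData.depthDataType)
}
```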
Is there a way to avoid this compression? Do I need to use specific settings or even a different API for exporting the image?