Why is AVDepthData different from what Photoshop reads?

Hello,


I'm trying to read the depth/disparity information from a HEIC file containing a portrait mode photo (taken with the iPhone X back camera), like this:


PHAsset *asset = ...
[asset requestContentEditingInputWithOptions:nil completionHandler:^(PHContentEditingInput * _Nullable contentEditingInput, NSDictionary * _Nonnull info) {
    NSURL *url = contentEditingInput.fullSizeImageURL;
    AVDepthData *depthData = [self depthDataFromImageAtURL:url orientation:(CGImagePropertyOrientation)contentEditingInput.fullSizeImageOrientation];
}];

- (nullable AVDepthData *)depthDataFromImageAtURL:(NSURL *)url orientation:(CGImagePropertyOrientation)orientation {
    AVDepthData *depthData = nil;

    CGImageSourceRef imageSource = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
    if (imageSource) {
        // The Copy function returns a +1 reference; CFBridgingRelease hands ownership to ARC
        NSDictionary *auxDataDictionary = CFBridgingRelease(CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDisparity));
        if (auxDataDictionary) {
            depthData = [AVDepthData depthDataFromDictionaryRepresentation:auxDataDictionary error:nil];
        }

        CFRelease(imageSource);
    }

    return depthData;
}
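In case the auxiliary data type matters, the same helper could also fall back to kCGImageAuxiliaryDataTypeDepth when no disparity dictionary is present — some depth-enabled photos embed depth rather than disparity. A sketch of what that fallback would look like (not verified against this particular photo):

```objc
// Inside the helper, after creating imageSource:
// try disparity first, then fall back to depth.
NSDictionary *auxDataDictionary =
    CFBridgingRelease(CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDisparity));
if (!auxDataDictionary) {
    auxDataDictionary =
        CFBridgingRelease(CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypeDepth));
}
```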

(sorry for the indentation, the code editor above messed up and started to delete some other line when I was trying to edit a line, anyway)


Here is a link to the actual photo: https://canpoyrazoglu.com/misc/uploads/IMG_9682.HEIC


I then save that depth data as an image; here is the result: https://canpoyrazoglu.com/misc/uploads/depth.jpg
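(For reference, turning the depthDataMap into a viewable JPEG can be done with Core Image roughly like this — a sketch of the approach, which may differ from whatever normalization Photoshop applies to its channel:)

```objc
// Wrap the depth/disparity pixel buffer in a CIImage and render it to JPEG data.
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:depthData.depthDataMap];
CIContext *context = [CIContext context];
CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
NSData *jpegData = [context JPEGRepresentationOfImage:ciImage
                                           colorSpace:gray
                                              options:@{}];
CGColorSpaceRelease(gray);
```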


However, if I open the HEIC in Photoshop, it contains a depth map channel; here is a PNG I extracted from it: https://canpoyrazoglu.com/misc/uploads/IMG_9947.PNG


The one from Photoshop is of much higher quality. It's not just disparity vs. depth (i.e. inverted and stretched): if I take the map produced by the code and invert and stretch its levels, the two are still different depth maps, and the one from Photoshop has much higher quality/resolution.
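(To make the resolution difference concrete, the pixel dimensions and format of the decoded map can be read straight off the CVPixelBuffer — a minimal sketch assuming depthData came back from the helper above:)

```objc
// Inspect the actual dimensions and pixel format of the decoded depth data.
CVPixelBufferRef map = depthData.depthDataMap;
size_t width = CVPixelBufferGetWidth(map);
size_t height = CVPixelBufferGetHeight(map);
NSLog(@"depth map is %zu x %zu, pixel format %u",
      width, height, (unsigned)depthData.depthDataType);
```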


It doesn't matter whether the photo is taken with the back or the front camera; the result is the same.


What am I doing wrong and how can Photoshop access a much higher resolution depth map from the same image?


(And one more thing, in case it's related: the iPhone seems to generate JPEGs alongside the HEIFs for portrait photos, and they are automatically regenerated even if I delete them, as long as I don't delete the HEIFs. I can see them in Image Capture when connected over USB (they don't show up as duplicates in the Photo Library), and they have the following naming format (assuming the original is IMG_1234.HEIC): IMG_E1234.JPG.)