The depth map (actually a disparity map) I'm extracting from a picture taken with an iPhone Plus is inverted: objects close to the lens are dark/black (low values), while objects in the distance are white (high values).
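For concreteness, this is the kind of extraction I'm talking about, via ImageIO's auxiliary-data API (a minimal sketch; the helper name and error handling are just placeholders, not my exact pipeline):

```swift
import AVFoundation
import ImageIO

// Minimal sketch: read the auxiliary disparity data embedded in a Plus-camera photo.
// "loadDisparityData" and the URL handling are placeholders, not my exact pipeline.
func loadDisparityData(from url: URL) -> AVDepthData? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let auxInfo = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypeDisparity) as? [AnyHashable: Any]
    else { return nil }
    // The embedded map is disparity (kCGImageAuxiliaryDataTypeDisparity), not depth.
    return try? AVDepthData(fromDictionaryRepresentation: auxInfo)
}
```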
This ordering seems to be the opposite of what Apple showed at WWDC. Is that normal? I can get it "right" (close: high values, far: low values) by simply inverting the whole map, but is it supposed to work that way?
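Or is the expected fix to convert the disparity map to depth via AVDepthData rather than flipping values by hand? Something like this sketch (assuming an AVDepthData loaded as above):

```swift
import AVFoundation

// Sketch: instead of flipping values manually, let AVDepthData convert
// disparity (~1/meters) to depth (meters). Disparity and depth are inversely
// related (close = high disparity, far = low), so the two maps look inverted
// relative to each other.
func depthMap(from data: AVDepthData) -> CVPixelBuffer {
    let isDisparity = data.depthDataType == kCVPixelFormatType_DisparityFloat16 ||
                      data.depthDataType == kCVPixelFormatType_DisparityFloat32
    let depth = isDisparity
        ? data.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        : data
    return depth.depthDataMap
}
```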
I've tested with two different pictures and both gave the same result. I don't have an iPhone Plus at hand to test further; any help is appreciated.