AVCaptureVideoPreviewLayer (AVLayerVideoGravityResizeAspectFill vs. AVLayerVideoGravityResizeAspect)

Let's assume I add an AVCaptureVideoPreviewLayer to an AVCaptureSession that was configured with AVCaptureSessionPresetPhoto and I set the preview layer's videoGravity property to AVLayerVideoGravityResizeAspectFill.


AVCaptureSession *captureSession = <#Get a capture session#>;
assert([captureSession.sessionPreset isEqualToString:AVCaptureSessionPresetPhoto]);
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
UIView *aView = <#The view in which to present the layer#>;
previewLayer.frame = aView.bounds; // Assume you want the preview layer to fill the view.
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[aView.layer addSublayer:previewLayer];


For the purposes of this question, let's also assume I run this code on an iPhone 6s, which has a native display resolution of 750x1334 pixels (in portrait orientation). The rear camera's native photo resolution on the iPhone 6s is 3024x4032 pixels (also in portrait orientation).


Now, it's clear what AVLayerVideoGravityResizeAspect (the default video gravity) does: it preserves the frame's 3:4 aspect ratio, so the original 3024x4032 frame is scaled down (possibly through camera pixel binning/subsampling) to a 750x1000 preview, letterboxed inside the 750x1334 view.


But what does AVLayerVideoGravityResizeAspectFill do?

  1. Does it start with the preview-sized 750x1000 frame, upscale it to roughly 1000x1334, and then crop it to 750x1334? (That would entail a slight loss of image clarity.)
  2. Or does it scale and crop directly from the original 3024x4032 frame down to 750x1334?