How to save a resized version of UIImage assigned from PhotoImageView?

I am saving an image from Photo library to the disk for later retrieval.

I don't want the high resolution image taken from the camera. I only need it as small as 320*320.

I assign the image (`UIImage photo = photoImageView.photo`) and use NSCoder to encode it.

How to do that? Right now the saving takes long time due to this.

Replies

Does PhotoImageView have some internal function to do it, like setting the contentMode to scaleToFill?

Or do we have to do it manually using UIGraphicsGetImageFromCurrentImageContext?

I don't think an image view has anything to help you. It does its scaling on the fly, so it doesn't have an actual reduced-resolution image to give to you.


It's only a few lines of code to do it using UIGraphicsBeginImageContextWithOptions, UIGraphicsGetImageFromCurrentImageContext and UIGraphicsEndImageContext.


Note: Don't use UIGraphicsBeginImageContext for this. You want to specify a scale of 0.0 in the options, so that you get a retina-quality (2x or 3x) image, rather than the 1x image that the default scale of 1.0 would give you.
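A minimal sketch of that approach (the extension and the `resized(to:)` name are my own, not from the thread):

```swift
import UIKit

extension UIImage {
    /// Returns a copy of the image drawn into `size` (in points).
    /// A scale of 0.0 means "use the device's screen scale", so the
    /// result is a 2x or 3x image on retina hardware.
    func resized(to size: CGSize) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
        defer { UIGraphicsEndImageContext() }
        draw(in: CGRect(origin: .zero, size: size))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}

// Usage (hypothetical image view):
// let small = photoImageView.image?.resized(to: CGSize(width: 320, height: 320))
```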

What do the 2x and 3x mean here?

Usually, what an iPhone 7 camera takes is a 4000*3000 photo, but my app only needs a 320*320 photo. Do the 2x and 3x mean a dimension increase?

All iOS devices are currently "retina", meaning that each "logical pixel" (actually called a "point" in Apple terminology) is made up of 2 or 3 physical pixels. Since your image is actually going to be displayed (eventually) using physical pixels, you want a color value for each physical pixel.


So, for example, your reduced image needs to be 320x320 points, because all of the UIKit APIs take points values for image sizes, but you want it to be 640x640 or 960x960 color values, depending on the device. These are called 2x and 3x images, because there are "two times" or "three times" as many pixels as points in each direction.


Using UIGraphicsBeginImageContextWithOptions and specifying scale 0.0 lets UIKit figure out the number of pixels to give you. If you use UIGraphicsBeginImageContext, a scale of 1.0 is assumed, giving you a 1x image (320x320 pixels as well as 320x320 points), and that has to be scaled up for display (there aren't enough pixels), which doesn't look very good.


Welcome to the wonderful world of retina displays!
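One way to see the point/pixel distinction for yourself is to compare an image's `size` (points) with its `scale` (both are standard UIImage properties); the helper function here is my own:

```swift
import UIKit

func describe(_ image: UIImage) {
    // `size` is in points; `scale` is the points-to-pixels factor.
    let pixelWidth = image.size.width * image.scale
    let pixelHeight = image.size.height * image.scale
    print("points: \(image.size), scale: \(image.scale), pixels: \(pixelWidth)x\(pixelHeight)")
    // A 320x320-point image with scale 2.0 reports 640x640 pixels.
}
```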

I see, so retina uses multiple pixels as one point, and we know that for a color image, each pixel uses three bytes (RGB).

So in the retina world, one point could have 6 bytes or 9 bytes, right?

But aren't these RGB values the same, or with little difference?

Is the method below the same as UIGraphicsBeginImageContextWithOptions?

let renderFormat = UIGraphicsImageRendererFormat.default()
renderFormat.opaque = opaque
let renderer = UIGraphicsImageRenderer(size: CGSize(width: width, height: height), format: renderFormat)
newImage = renderer.image { (context) in self.draw(in: CGRect(x: 0, y: 0, width: width, height: height)) }
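For reference, a self-contained version of that snippet as a UIImage extension (the extension wrapper is my own; `opaque`, `width`, and `height` become parameters). Note that `UIGraphicsImageRendererFormat.default()` uses the main screen's scale, much like passing 0.0 to UIGraphicsBeginImageContextWithOptions, so to get a 1x result you would set the format's `scale` explicitly:

```swift
import UIKit

extension UIImage {
    /// Redraws the image at `size` points using UIGraphicsImageRenderer.
    /// With the default format the renderer uses the screen's scale,
    /// analogous to scale 0.0 with UIGraphicsBeginImageContextWithOptions.
    func rendered(at size: CGSize, opaque: Bool) -> UIImage {
        let renderFormat = UIGraphicsImageRendererFormat.default()
        renderFormat.opaque = opaque
        // renderFormat.scale = 1   // uncomment to force a 1x result
        let renderer = UIGraphicsImageRenderer(size: size, format: renderFormat)
        return renderer.image { _ in
            self.draw(in: CGRect(origin: .zero, size: size))
        }
    }
}
```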

Tested all above methods, the final image size on disk for resizing to 320*320 is as below:

1. original image - 11M

2. UIGraphicsImageRenderer - 3.4M

3. UIGraphicsBeginImageContextWithOptions (scale 0.0) - 1.4M

4. UIGraphicsBeginImageContextWithOptions (scale 1.0) - 177K

5. UIGraphicsBeginImageContextWithOptions (scale 2.0) - 671K


(For a 1080 HD target size: scale 1.0 gives 1.8M, scale 2.0 gives 7M.)

Question: what's the difference between item 2 and item 3?

I've never used UIGraphicsImageRenderer, so I can't say what kind of output it is expected to produce.


>> we know that for color image, each pixel uses three bytes (RGB).

>>So in Retina world, one point could have 6 bytes or 9 bytes, right?


It's not clear. The documentation is inconsistent about whether there is an alpha channel. It's also not clear whether the resulting image is HDR or not.


>>But aren't these RGB value the same? or with little difference?


Not the same, and different enough that the human eye can easily tell the difference between 1x and 2x or 3x.


>> what's the difference between item 2 and item3?


Again, I don't know exactly what pixel format is being used. What you ultimately do depends on whether the image is being saved locally for use on the same device only (where you probably want the file's pixel format to match the display), or is part of a database used across devices (where the image is going to get copied/converted to the device screen format at least some of the time).


You might be able to find out at least some of the reasons by interrogating the UIImage object that's created in each case.
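A sketch of that kind of interrogation (all of these are standard UIImage/CGImage properties; the function name is my own):

```swift
import UIKit

func inspect(_ image: UIImage) {
    print("size (points):", image.size)
    print("scale:", image.scale)
    if let cg = image.cgImage {
        print("pixels:", cg.width, "x", cg.height)
        print("bits per component:", cg.bitsPerComponent)
        print("bits per pixel:", cg.bitsPerPixel)  // hints at alpha / pixel format
        print("alpha info:", cg.alphaInfo.rawValue)
    }
}
```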

The problem here is that when the app encodes the UIImage, even to the local disk, it takes a long time, let alone saving to an iCloud environment or a database.

I guess that when encoding a UIImage, the uncompressed data is saved? So when a JPEG is loaded into an image view, is it decompressed into the image property?

For the app, 1080 at 1x is more than enough, because the app's focus is not on images; it is on the data.

You originally said:


>> and use NSCoder to encode it


It's unspecified (AFAIK) how UIImage represents itself when encoded via NSCoder. If you're talking about iCloud or a database, then you almost certainly want to pick a compressed image format. This is a larger design question.
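For example, instead of archiving the UIImage with NSCoder, you could write compressed JPEG data directly (`jpegData(compressionQuality:)` is standard UIImage API on iOS 11+; the function name and file URL here are hypothetical):

```swift
import UIKit

func saveAsJPEG(_ image: UIImage, to url: URL, quality: CGFloat = 0.8) throws {
    // JPEG-encode instead of archiving the raw bitmap; typically far smaller on disk.
    guard let data = image.jpegData(compressionQuality: quality) else {
        throw CocoaError(.fileWriteUnknown)
    }
    try data.write(to: url, options: .atomic)
}
```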

>one point could have 6 bytes or 9 bytes


A point is a standard length, equivalent to 1x1 pixels on a non-retina device, and 2x2 (or 3x3) pixels on a retina device.


2x2 = 4 pixels per point, and 4 pixels x 3 bytes = 12 bytes per point (on a 2x device).



>But aren't these RGB value the same?


See Table 2-1 here:

https://developer.apple.com/library/content/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_context/dq_context.html#//apple_ref/doc/uid/TP30001066-CH203-CJBEAGHH