I've never used UIGraphicsImageRenderer, so I can't say what kind of output it is expected to produce.
>> we know that for color image, each pixel uses three bytes (RGB).
>>So in Retina world, one point could have 6 bytes or 9 bytes, right?
It's not clear. The documentation is inconsistent about whether there is an alpha channel, and it's also unclear whether the resulting image is HDR or not.
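One way to narrow this down is to inspect the renderer's format before drawing anything. A minimal sketch (the actual values will vary by device and OS version):

```swift
import UIKit

// Inspect the default renderer format for the current device.
// `opaque == false` means an alpha channel will be included;
// `preferredRange` hints at whether extended/wide color is used.
let format = UIGraphicsImageRendererFormat.default()
print("scale:", format.scale)                  // pixels per point, e.g. 2.0 or 3.0
print("opaque:", format.opaque)
print("preferredRange:", format.preferredRange.rawValue)
```

You can also construct your own `UIGraphicsImageRendererFormat`, set these properties explicitly, and pass it to `UIGraphicsImageRenderer(size:format:)` if you want deterministic output rather than the device default.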
>>But aren't these RGB value the same? or with little difference?
Not the same, and different enough that the human eye can easily tell the difference between 1x and 2x or 3x.
>> what's the difference between item 2 and item3?
Again, I don't know exactly what pixel format is being used. What you ultimately do depends on whether the image is being saved locally for use on the same device only (where you probably want the file's pixel format to match the display), or is part of a database used across devices (where the image is going to get copied/converted to the device screen format at least some of the time).
You might be able to find out at least some of the answers by interrogating the UIImage object that's created in each case.
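For example, you could render a tiny image and interrogate the backing CGImage to see what pixel format the renderer actually chose. A sketch (the printed values are device-dependent, so I'm not asserting what they'll be):

```swift
import UIKit

// Render a 1-point square, then inspect the CGImage behind it.
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 1, height: 1))
let image = renderer.image { ctx in
    UIColor.red.setFill()
    ctx.fill(CGRect(x: 0, y: 0, width: 1, height: 1))
}
if let cg = image.cgImage {
    print("scale:", image.scale)                    // pixels per point
    print("bitsPerPixel:", cg.bitsPerPixel)         // e.g. 32 for 8-bit RGBA
    print("bitsPerComponent:", cg.bitsPerComponent)
    print("alphaInfo:", cg.alphaInfo.rawValue)      // whether/where alpha is stored
    print("colorSpace:", String(describing: cg.colorSpace?.name))
}
```

`bitsPerPixel` together with `alphaInfo` tells you whether you're looking at 3- or 4-byte pixels, and `colorSpace` tells you whether the image is sRGB or a wide-gamut space like Display P3.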