UIImageView - scaleToFill vs. scaleAspectFill

Hi everyone,


I have an image view which I initialized using the following code:


...
imageView.frame = CGRect(x: width * (20 / width), y: navigationBar.frame.maxY, width: width * (335 / width), height: width * (335 / width))
imageView.contentMode = .scaleToFill
imageView.clipsToBounds = true
imageView.isUserInteractionEnabled = true
...

As you can see, the contentMode is currently set to .scaleToFill. With this setting, most of my images look strange; the scaling seems off somehow. If I change the contentMode to .scaleAspectFill, every image looks fine.


However, my use case requires .scaleToFill. What can I do to make that work? And why does it behave strangely with .scaleToFill?

Replies

Imagine you have an image 50 * 100

And your imageView is 100 * 400.


With scaleToFill, the image will be scaled to 100 * 400 to completely fill the imageView, so it is heavily stretched in height (an El Greco painting!).

With scaleAspectFit, the image will be scaled to 100 * 200 (the maximum that fits while preserving the aspect ratio), looking natural but not filling the view entirely.

With scaleAspectFill and imageView.clipsToBounds = true, the image will not be distorted but will be clipped.


See illustrations here

https://useyourloaf.com/blog/stretching-redrawing-and-positioning-with-contentmode/
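
To make the numbers concrete, here is a small playground-style sketch of the three modes (the 50 * 100 image and 100 * 400 view are just the example sizes above):

import UIKit

// Example sizes from above: a 50 * 100 image inside a 100 * 400 image view.
let imageSize = CGSize(width: 50, height: 100)
let viewSize = CGSize(width: 100, height: 400)

let scaleX = viewSize.width / imageSize.width    // 2.0
let scaleY = viewSize.height / imageSize.height  // 4.0

// .scaleToFill stretches each axis independently: 100 * 400 (distorted).
let toFill = CGSize(width: imageSize.width * scaleX, height: imageSize.height * scaleY)

// .scaleAspectFit uses the smaller factor: 100 * 200 (no distortion, empty space left over).
let fitScale = min(scaleX, scaleY)
let aspectFit = CGSize(width: imageSize.width * fitScale, height: imageSize.height * fitScale)

// .scaleAspectFill uses the larger factor: 200 * 400 (no distortion, the overflow is
// clipped when clipsToBounds = true).
let fillScale = max(scaleX, scaleY)
let aspectFill = CGSize(width: imageSize.width * fillScale, height: imageSize.height * fillScale)

print(toFill, aspectFit, aspectFill)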


However, my use case requires .scaleToFill.

Why?


Then you have a trade-off to make:

- either accept image distortion,

- or change the imageView.frame to be proportional to your image (in my example, change the imageView to 100 * 200, for instance; see the sketch after this list),

- or change the mode to scaleAspectFill.
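
Here is a minimal sketch of that second option, assuming imageView already has its image set (AVMakeRect from AVFoundation does the aspect-ratio math):

import AVFoundation
import UIKit

// Shrink the frame to the largest rect with the image's aspect ratio that
// still fits inside the current frame; after that, scaleToFill cannot distort.
if let image = imageView.image {
    imageView.frame = AVMakeRect(aspectRatio: image.size, insideRect: imageView.frame)
    imageView.contentMode = .scaleToFill
}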

Hi Claude31,


thanks for the reply. I think I got it. Hmm... but now I'm running into another problem.

I want to merge/overlay two images using the following code:

extension UIImage {
   
    static func imageByMergingImages(topImage: UIImage, bottomImage: UIImage, scaleForTop: CGFloat = 1.0) -> UIImage {
        let size = bottomImage.size
        let container = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        UIGraphicsBeginImageContextWithOptions(size, false, 2.0)
        UIGraphicsGetCurrentContext()!.interpolationQuality = .high
        bottomImage.draw(in: container)
        
        let topWidth = size.width / scaleForTop
        let topHeight = size.height / scaleForTop
        let topX = (size.width / 2.0) - (topWidth / 2.0)
        let topY = (size.height / 2.0) - (topHeight / 2.0)
        
        topImage.draw(in: CGRect(x: topX, y: topY, width: topWidth, height: topHeight), blendMode: .normal, alpha: 1.0)
        
        return UIGraphicsGetImageFromCurrentImageContext()!
    }
   
}
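
For context, this is roughly how I call it (the image names here are just placeholders):

// Placeholder images; in the real app these come from my image views.
let bottom = UIImage(named: "background")!
let top = UIImage(named: "overlay")!

// scaleForTop = 2.0 draws the top image at half the bottom image's size, centred.
let merged = UIImage.imageByMergingImages(topImage: top, bottomImage: bottom, scaleForTop: 2.0)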


Whenever the contentMode is set to .scaleAspectFill, the merged image doesn't look right. But if I change it to .scaleToFill, the merged image looks as expected. Any ideas?

I don't see:

- where you set the mode in the extension

- how you change the scale. With the default of 1.0, topImage is drawn over the whole container.


Maybe you have an image resolution problem (different for the two images?)

https://stackoverflow.com/questions/46255728/how-to-combine-two-images


Where do you call UIGraphicsEndImageContext?


If you could post the complete code of the project somewhere or send an email address, that would make analysis easier.
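
For what it's worth, here is a sketch of the same merge written with UIGraphicsImageRenderer, which creates and ends the context for you (I kept your parameter names and the hard-coded scale of 2.0 as assumptions):

import UIKit

extension UIImage {
    // Sketch only: same drawing as imageByMergingImages, but the renderer owns the
    // context, so there is no UIGraphicsEndImageContext to forget.
    static func mergedImage(topImage: UIImage, bottomImage: UIImage, scaleForTop: CGFloat = 1.0) -> UIImage {
        let size = bottomImage.size
        let format = UIGraphicsImageRendererFormat()
        format.scale = 2.0
        let renderer = UIGraphicsImageRenderer(size: size, format: format)
        return renderer.image { _ in
            bottomImage.draw(in: CGRect(origin: .zero, size: size))
            let topSize = CGSize(width: size.width / scaleForTop, height: size.height / scaleForTop)
            let topOrigin = CGPoint(x: (size.width - topSize.width) / 2, y: (size.height - topSize.height) / 2)
            topImage.draw(in: CGRect(origin: topOrigin, size: topSize))
        }
    }
}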

Basically, I have an "original" image view and another image view used for cropping. The original image view's frame is never modified, but the cropImageView's frame is.


ciImage = CIImage(cgImage: (ZImageCropper.cropImage(ofImageView: cropImageView, withinPoints: [
    CGPoint(x: overlay.frame.origin.x, y: overlay.frame.origin.y),  // Start point
    CGPoint(x: overlay.frame.maxX, y: overlay.frame.origin.y),
    CGPoint(x: overlay.frame.maxX, y: overlay.frame.maxY),
    CGPoint(x: overlay.frame.origin.x, y: overlay.frame.maxY)])?.cgImage)!)


I want to merge the image of the original image view and the cropped image view...
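
Roughly, the merge then looks like this (originalImageView is the untouched image view; the conversion from CIImage is just a sketch):

// ciImage is the cropped result from the call above.
let croppedImage = UIImage(ciImage: ciImage)
let merged = UIImage.imageByMergingImages(topImage: croppedImage,
                                          bottomImage: originalImageView.image!,
                                          scaleForTop: 1.0)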


This is the code to crop the image:

public class ZImageCropper {
    public class func cropImage(ofImageView:UIImageView, withinPoints points:[CGPoint]) -> UIImage? {
        
        //Check that at least a start and an end point exist
        if points.count >= 2 {
            let path = UIBezierPath()
            let shapeLayer = CAShapeLayer()
            shapeLayer.fillColor = UIColor.clear.cgColor
            shapeLayer.lineWidth = 2
            var croppedImage:UIImage?
            
            for (index,point) in points.enumerated() {
                
                //Origin
                if index == 0 {
                    path.move(to: point)
                    
                //Endpoint
                } else if index == points.count-1 {
                    path.addLine(to: point)
                    path.close()
                    shapeLayer.path = path.cgPath
                    
                    ofImageView.layer.addSublayer(shapeLayer)
                    shapeLayer.fillColor = UIColor.black.cgColor
                    ofImageView.layer.mask = shapeLayer
                    UIGraphicsBeginImageContextWithOptions(ofImageView.frame.size, false, 1)
                    
                    if let currentContext = UIGraphicsGetCurrentContext() {
                        ofImageView.layer.render(in: currentContext)
                    }
                    
                    let newImage = UIGraphicsGetImageFromCurrentImageContext()

                    UIGraphicsEndImageContext()
                    
                    croppedImage = newImage
                    
                    // Intermediate points
                } else {
                    path.addLine(to: point)
                }
            }
            
            return croppedImage
        } else {
            return nil
        }
    }
}

This still does not show where the mode is set to aspectFill or aspectFit…


Anyway, that is not a Swift issue. You'd better move your question to another part of the forum, such as https://forums.developer.apple.com/community/graphics-and-games/core-image


Good luck.

If I understand this code, you are creating a version of

func cropped(to rect: CGRect) -> CIImage

for a polygon?

I had some issues with func cropped(to rect: CGRect) -> CIImage


The wrong part of the image was cropped.
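
For reference, the plain rectangular crop looks like the sketch below. Note that CIImage coordinates have their origin at the bottom-left (unlike UIKit), which is an easy way to end up cropping the wrong part; the helper name here is just for illustration.

import CoreImage
import UIKit

// Sketch: crop a rect given in UIKit (top-left origin) coordinates.
func cropInUIKitCoordinates(_ image: CIImage, to rect: CGRect) -> CIImage {
    // Flip the y origin, because a CIImage's extent uses a bottom-left origin.
    let ciRect = CGRect(x: rect.origin.x,
                        y: image.extent.height - rect.maxY,
                        width: rect.width,
                        height: rect.height)
    return image.cropped(to: ciRect)
}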

But I don't know what a polygon is 😀

Can anyone help here?