Could solve the issue 🙂 I did the following:

cd ~/.cocoapods/repos
git clone "https://github.com/CocoaPods/Specs" master --depth 1

Then I removed the # in front of the source 'https://github.com/CocoaPods/Specs.git' line in the Podfile and ran pod install.
Hi Claude31,

thanks for the reply. I think I got it. Hmm, but that leads to another problem. I want to merge/overlay two images using the following code:

extension UIImage {
    static func imageByMergingImages(topImage: UIImage, bottomImage: UIImage, scaleForTop: CGFloat = 1.0) -> UIImage {
        let size = bottomImage.size
        let container = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        UIGraphicsBeginImageContextWithOptions(size, false, 2.0)
        // End the image context when leaving the function so it doesn't leak
        defer { UIGraphicsEndImageContext() }
        UIGraphicsGetCurrentContext()!.interpolationQuality = .high
        bottomImage.draw(in: container)
        // Center the top image, scaled down by scaleForTop, over the bottom image
        let topWidth = size.width / scaleForTop
        let topHeight = size.height / scaleForTop
        let topX = (size.width / 2.0) - (topWidth / 2.0)
        let topY = (size.height / 2.0) - (topHeight / 2.0)
        topImage.draw(in: CGRect(x: topX, y: topY, width: topWidth, height: topHeight), blendMode: .normal, alpha: 1.0)
        return UIGraphicsGetImageFromCurrentImageContext()!
    }
}

Whenever my image view's contentMode is set to .scaleAspectFill, the merged image doesn't look right. But if I change it to .scaleToFill, the merged image looks as expected. Any ideas?
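For reference, this is roughly how I call it (a minimal sketch; the image names are placeholders, in the real code the two images come from my image views):

// Placeholder assets standing in for the real image views' images
let bottom = UIImage(named: "original")!
let top = UIImage(named: "cropped")!
// scaleForTop = 2.0 draws the top image at half the bottom image's size, centered
let merged = UIImage.imageByMergingImages(topImage: top, bottomImage: bottom, scaleForTop: 2.0)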
Basically, I have an "original" image view and another image view for cropping the image. The frame of the original image view is not touched at all, but the cropImageView's frame is.

ciImage = CIImage(cgImage: (ZImageCropper.cropImage(ofImageView: cropImageView, withinPoints: [
    CGPoint(x: overlay.frame.origin.x, y: overlay.frame.origin.y), // start point
    CGPoint(x: overlay.frame.maxX, y: overlay.frame.origin.y),
    CGPoint(x: overlay.frame.maxX, y: overlay.frame.maxY),
    CGPoint(x: overlay.frame.origin.x, y: overlay.frame.maxY)])?.cgImage)!)

I want to merge the image of the original image view and the cropped image view... This is the code that crops the image:

public class ZImageCropper {
    public class func cropImage(ofImageView: UIImageView, withinPoints points: [CGPoint]) -> UIImage? {
        // Check that there are at least a start point and an end point
        if points.count >= 2 {
            let path = UIBezierPath()
            let shapeLayer = CAShapeLayer()
            shapeLayer.fillColor = UIColor.clear.cgColor
            shapeLayer.lineWidth = 2
            var croppedImage: UIImage?
            for (index, point) in points.enumerated() {
                if index == 0 {
                    // Origin
                    path.move(to: point)
                } else if index == points.count - 1 {
                    // Endpoint: close the path, mask the image view with it, and snapshot the layer
                    path.addLine(to: point)
                    path.close()
                    shapeLayer.path = path.cgPath
                    ofImageView.layer.addSublayer(shapeLayer)
                    shapeLayer.fillColor = UIColor.black.cgColor
                    ofImageView.layer.mask = shapeLayer
                    UIGraphicsBeginImageContextWithOptions(ofImageView.frame.size, false, 1)
                    if let currentContext = UIGraphicsGetCurrentContext() {
                        ofImageView.layer.render(in: currentContext)
                    }
                    let newImage = UIGraphicsGetImageFromCurrentImageContext()
                    UIGraphicsEndImageContext()
                    croppedImage = newImage
                } else {
                    // Intermediate points
                    path.addLine(to: point)
                }
            }
            return croppedImage
        } else {
            return nil
        }
    }
}
I had some issues with func cropped(to rect: CGRect) -> CIImage; the wrong part of the image was cropped. But I don't know what a polygon is 😀
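One thing I am not sure about: Core Image uses a bottom-left origin, so a UIKit-style rect may crop the "wrong" part unless its y-coordinate is flipped first. A sketch of what I mean (cropRect is a placeholder for the UIKit rect I pass in):

import UIKit

func cropFlipped(_ image: CIImage, to cropRect: CGRect) -> CIImage {
    // Convert the top-left-origin UIKit rect into Core Image's bottom-left-origin space
    let flipped = CGRect(x: cropRect.origin.x,
                         y: image.extent.height - cropRect.maxY,
                         width: cropRect.width,
                         height: cropRect.height)
    return image.cropped(to: flipped)
}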
Can anyone help here?
You are absolutely correct. On an iPhone 8 the screen width is 375 points, while on an iPhone 11 it is 414. I actually don't set the frame relative to the screen size. It would have been correct if I had computed 53 / 375 = 0.1413 and then used that in the frame, e.g. CGRect(x: width * 0.1413, ...), because then the frame is set relative to the screen even on other devices. So what would be the easiest way to make it adjust to the screen size? Something like width * 0.1413 is not really easy to read and understand.
I guess I know what to do. I need to define two constants like

// Screen size of the iPhone 8 (the design reference device)
let IPHONE8_SCREEN_WIDTH: CGFloat = 375.0
let IPHONE8_SCREEN_HEIGHT: CGFloat = 667.0

and then change my code above to:

let accountTypeLabel = UILabel()
accountTypeLabel.frame = CGRect(x: width * (53 / IPHONE8_SCREEN_WIDTH),
                                y: height * (172 / IPHONE8_SCREEN_HEIGHT),
                                width: width * (269 / IPHONE8_SCREEN_WIDTH),
                                height: height * (37 / IPHONE8_SCREEN_HEIGHT))
accountTypeLabel.text = ACCOUNT_TYPE_LABEL
accountTypeLabel.textColor = UIColor.white
accountTypeLabel.backgroundColor = DARK_BLUE_COLOR
accountTypeLabel.textAlignment = .center
let accountTypeLabelLabelHeight = accountTypeLabel.frame.size.height
// Scale the design font size (22 points at the 37-point design label height) with the actual label height
accountTypeLabel.font = UIFont(name: "Kefa", size: accountTypeLabelLabelHeight * (22.0 / 37.0))
This works on any device then, right?
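To make the ratios easier to read, the math could also be wrapped in two small helpers (a sketch; the helper names are my own invention):

import UIKit

let IPHONE8_SCREEN_WIDTH: CGFloat = 375.0
let IPHONE8_SCREEN_HEIGHT: CGFloat = 667.0

// Scale a design value from the iPhone 8 canvas to the current screen
func scaledX(_ design: CGFloat) -> CGFloat {
    return UIScreen.main.bounds.width * (design / IPHONE8_SCREEN_WIDTH)
}

func scaledY(_ design: CGFloat) -> CGFloat {
    return UIScreen.main.bounds.height * (design / IPHONE8_SCREEN_HEIGHT)
}

// The frame from above then reads in plain design coordinates:
// accountTypeLabel.frame = CGRect(x: scaledX(53), y: scaledY(172), width: scaledX(269), height: scaledY(37))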
Hi Claude,

I thought I knew how to deal with frames, but it turned out I didn't. So now I might have found a working solution that gives me Auto Layout-like behavior, but with frames.
"View is probably black because you did not set its background color."

I am setting the background color of the view, but it is still black.

"If you create constraints (NSLayoutConstraint) yourself, you don't need it."

If I remove self.view.translatesAutoresizingMaskIntoConstraints = false, then my button has a wrong position and the background color of the view turns white.

"Why don't you set the constraints in IB? It is much simpler. You can set it proportional, with a multiplier, for line 14."

My view will be much more complex, with subviews etc. I thought doing some layout in IB and the rest programmatically could be confusing. I want to stick to one UI creation approach, and that is the programmatic way.
Hi Claude31,

please have a look at the updated code:

func setupView() {
    // Background color of the view
    self.view.backgroundColor = DARK_BLUE_COLOR
    self.view.translatesAutoresizingMaskIntoConstraints = false

    createdButton.translatesAutoresizingMaskIntoConstraints = false
    self.view.addSubview(createdButton)
    createdButton.leadingAnchor.constraint(equalTo: self.view.leadingAnchor).isActive = true
    createdButton.widthAnchor.constraint(equalTo: self.view.widthAnchor, constant: self.view.frame.width / 3).isActive = true
    createdButton.topAnchor.constraint(equalTo: self.view.topAnchor, constant: 60).isActive = true
    createdButton.heightAnchor.constraint(equalTo: self.view.heightAnchor, constant: 54).isActive = true

    createdButton.setTitle(CREATED_BUTTON_TITLE, for: .normal)
    createdButton.setTitleColor(DARK_BLUE_COLOR, for: .normal)
    createdButton.backgroundColor = UIColor.white
    createdButton.addTarget(self, action: #selector(fetchMyChallenges), for: .touchDown)
    createdButton.layer.borderWidth = 1.5
    createdButton.layer.borderColor = UIColor.white.cgColor
}
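If the intent is a button one third of the view's width and 54 points tall, I believe the width and height constraints would use a multiplier and a fixed constant rather than constant offsets against the view's anchors. A sketch under that assumption (self.view keeps its autoresizing mask):

createdButton.translatesAutoresizingMaskIntoConstraints = false
self.view.addSubview(createdButton)
NSLayoutConstraint.activate([
    createdButton.leadingAnchor.constraint(equalTo: self.view.leadingAnchor),
    // One third of the superview's width (a multiplier, not a constant offset)
    createdButton.widthAnchor.constraint(equalTo: self.view.widthAnchor, multiplier: 1.0 / 3.0),
    createdButton.topAnchor.constraint(equalTo: self.view.topAnchor, constant: 60),
    // A fixed 54-point height
    createdButton.heightAnchor.constraint(equalToConstant: 54)
])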
Hi Claude31,

thanks for the reply. I actually meant the background color of the superview (self.view). BTW: the way I am changing the background color of my button works, as you can see above.
I have it like this:

let DARK_BLUE_COLOR = UIColor(red: 10.0/255.0, green: 120.0/255.0, blue: 147.0/255.0, alpha: 1)

defined as a constant to be used everywhere.
Hello Claude31,

thanks a lot! You were right. The key to the problem was including the codes for all languages that should be supported. I have basically modified my function like this now:

public func notifyAboutSubscription(userObject: User, receiverArray: [String]) {
    var receiverArray = removeChallengeCreatorTokenFromArray(receiverArray: receiverArray)
    notificationTypeService.clearReceiverListForNotificationType(completionHandler: { (clearedReceiverArray) in
        receiverArray = clearedReceiverArray
        let source = self.determineUserType(userObject: userObject)
        OneSignal.postNotification(["contents": ["en": source + FOLLOW_MESSAGE,
                                                 "de": source + " folgt dir jetzt",
                                                 "tr": source + " seni takip ediyor"],
                                    "include_player_ids": receiverArray])
    }, receiverList: receiverArray, notificationType: NotificationType.follow)
}

Depending on the language set on the target device, the correct message is shown. That saved my day. Have a nice weekend!
Hello @OOPer,

thanks for the reply. I can always count on you in this forum :)

I added the code snippet. This is what it logs:
AVAssetExportSessionStatus Optional(Error Domain=AVFoundationErrorDomain Code=-11838 "Operation Stopped" UserInfo={NSLocalizedFailureReason=The operation is not supported for this media., NSLocalizedDescription=Operation Stopped, NSUnderlyingError=0x28376e250 {Error Domain=NSOSStatusErrorDomain Code=-16976 "(null)"}})
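In case it helps with the diagnosis, AVFoundation can also be asked up front whether a preset/asset/output-type combination is supported at all. A sketch (the preset name is a placeholder for the one the session actually uses):

import AVFoundation

// `asset` stands in for the AVAsset handed to the export session
func checkExportCompatibility(of asset: AVAsset) {
    AVAssetExportSession.determineCompatibility(ofExportPreset: AVAssetExportPresetHighestQuality,
                                                with: asset,
                                                outputFileType: .mov) { isCompatible in
        print("Export compatible:", isCompatible)
    }
}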
Hi @OOPer,

yuhuu :) I found the cause and a solution.

All videos longer than 30 seconds were failing. That made me look into the method where I trim the video:
func assetByTrimming(timeOffStart: Double) throws -> AVAsset {
    let duration = CMTime(seconds: timeOffStart, preferredTimescale: 1)
    let timeRange = CMTimeRange(start: CMTime.zero, duration: duration)
    let composition = AVMutableComposition()
    let videoTrack = self.tracks(withMediaType: AVMediaType.video).first
    let size = videoTrack!.naturalSize
    let txf = videoTrack!.preferredTransform

    // Derive the recording orientation from the preferred transform
    var recordType = ""
    if size.width == txf.tx && size.height == txf.ty {
        recordType = "UIInterfaceOrientationLandscapeRight"
    } else if txf.tx == 0 && txf.ty == 0 {
        recordType = "UIInterfaceOrientationLandscapeLeft"
    } else if txf.tx == 0 && txf.ty == size.width {
        recordType = "UIInterfaceOrientationPortraitUpsideDown"
    } else {
        recordType = "UIInterfaceOrientationPortrait"
    }

    do {
        for track in tracks {
            // let compositionTrack = composition.addMutableTrack(withMediaType: track.mediaType, preferredTrackID: track.trackID)
            // try compositionTrack?.insertTimeRange(timeRange, of: track, at: CMTime.zero)
            if let videoCompositionTrack = composition.addMutableTrack(withMediaType: track.mediaType, preferredTrackID: kCMPersistentTrackID_Invalid) {
                try videoCompositionTrack.insertTimeRange(timeRange, of: videoTrack!, at: CMTime.zero)
                // Re-apply a transform matching the detected orientation
                if recordType == "UIInterfaceOrientationPortrait" {
                    let t1 = CGAffineTransform(translationX: videoTrack!.naturalSize.height, y: -(videoTrack!.naturalSize.width - videoTrack!.naturalSize.height) / 2)
                    let t2 = t1.rotated(by: CGFloat(Double.pi / 2))
                    videoCompositionTrack.preferredTransform = t2
                } else if recordType == "UIInterfaceOrientationLandscapeRight" {
                    let t1 = CGAffineTransform(translationX: videoTrack!.naturalSize.height, y: -(videoTrack!.naturalSize.width - videoTrack!.naturalSize.height) / 2)
                    let t2 = t1.rotated(by: -CGFloat(Double.pi))
                    videoCompositionTrack.preferredTransform = t2
                } else if recordType == "UIInterfaceOrientationPortraitUpsideDown" {
                    let t1 = CGAffineTransform(translationX: videoTrack!.naturalSize.height, y: -(videoTrack!.naturalSize.width - videoTrack!.naturalSize.height) / 2)
                    let t2 = t1.rotated(by: -CGFloat(Double.pi / 2))
                    videoCompositionTrack.preferredTransform = t2
                }
            }
        }
    } catch let error {
        throw TrimError("error during composition", underlyingError: error)
    }
    return composition
}
It looks like the output of this method changes the media type, which is then not supported by the export. Instead, to trim the video, I can apply a much simpler solution:
let startTime = CMTime(seconds: Double(0), preferredTimescale: 1000)
let endTime = CMTime(seconds: Double(30), preferredTimescale: 1000)
let timeRange = CMTimeRange(start: startTime, end: endTime)

exportSession.outputURL = destination
exportSession.outputFileType = .mov
exportSession.shouldOptimizeForNetworkUse = true
exportSession.timeRange = timeRange // trim the video here
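For completeness, this is roughly where the snippet sits in the export flow (a sketch; sourceURL and destination are placeholders, and the passthrough preset is my assumption):

import AVFoundation

func trimAndExport(sourceURL: URL, destination: URL) {
    let asset = AVAsset(url: sourceURL)
    // Passthrough re-muxes without re-encoding; the time range does the trimming
    guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetPassthrough) else { return }
    let startTime = CMTime(seconds: Double(0), preferredTimescale: 1000)
    let endTime = CMTime(seconds: Double(30), preferredTimescale: 1000)
    exportSession.outputURL = destination
    exportSession.outputFileType = .mov
    exportSession.shouldOptimizeForNetworkUse = true
    exportSession.timeRange = CMTimeRange(start: startTime, end: endTime)
    exportSession.exportAsynchronously {
        print("AVAssetExportSession status:", exportSession.status.rawValue)
    }
}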
Do you know what exactly might cause my assetByTrimming method to produce this error?
Thank you again for your assistance!