I have an application that allows users to run scripts of their own; these scripts are run by my helper app running in the background. Since El Capitan, users have an issue running scripts that access Reminders, for example. Here is what happens:

```
4/6/16 8:24:29.404 PM tccd[527]: SecTaskLoadEntitlements failed error=3
4/6/16 8:24:29.404 PM tccd[527]: Refusing client without path (pid 54116)
```

The chain is: MyHelper.app > runs a user shell script > the script runs an AppleScript through osascript to get access to Reminders.

Funny enough, if I run the helper app from the terminal using ./MyHelper.app/Contents/MacOS/MyHelper, then it works perfectly.

How can I debug tccd, and understand what's missing in "Refusing client without path"? The PID indicated is the one for the osascript binary that my app launches.

Thanks!

P.S. Sorry for the weird title, I literally spent 10 minutes finding a title that wouldn't be rejected as "invalid characters" by the forum...
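For context, the failing step boils down to the helper spawning osascript. Here is a minimal sketch of that step; the launch code and the Reminders one-liner are hypothetical stand-ins (the actual user scripts vary), only the absolute path to /usr/bin/osascript reflects the real setup:

```swift
// Hypothetical sketch of the failing step: the helper launching osascript.
// The AppleScript one-liner below is an illustrative example, not a real
// user script. tccd refuses the resulting child process (pid 54116 above).
import Foundation

let osascript = Process()
osascript.launchPath = "/usr/bin/osascript"
osascript.arguments = ["-e", "tell application \"Reminders\" to get name of every list"]
osascript.launch()
osascript.waitUntilExit()
```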
I need to merge multiple videos into one track and, at the same time, save the last frame of each source track.
Here is what I'm doing:
```swift
import AVFoundation
import AppKit

let composition = AVMutableComposition()
guard let track = composition.addMutableTrack(withMediaType: .video,
                                              preferredTrackID: kCMPersistentTrackID_Invalid) else {
    return
}
var framesTimeRanges: [NSValue] = []
for i in 0..<self.videos.count {
    guard let assetTrack = self.videos[i].asset.tracks(withMediaType: .video).first else {
        return
    }
    // Place each clip at its offset from the overall start date.
    let trackStart = CMTime(seconds: self.videos[i].date.timeIntervalSince(start),
                            preferredTimescale: 10000)
    try? track.insertTimeRange(assetTrack.timeRange, of: assetTrack, at: trackStart)
    let p = AVPlayerItem(asset: track.asset!)
    let q = AVPlayer(playerItem: p)
    print(track.naturalSize)
    // One requested time per clip.
    framesTimeRanges.append(NSValue(time: CMTime(value: 0, timescale: 10000)))
}
let generator = AVAssetImageGenerator(asset: track.asset!)
generator.requestedTimeToleranceBefore = .zero
generator.requestedTimeToleranceAfter = .zero
var i = self.videos.count - 1
generator.generateCGImagesAsynchronously(forTimes: Array(framesTimeRanges.reversed()),
                                         completionHandler: { time, image, actual, result, error in
    if result == .succeeded {
        self.videos[i].lastFrame = NSImage(cgImage: image!, size: NSSize(width: 1280, height: 960))
    }
    i = i - 1
})
```
The print(track.naturalSize) call outputs (4.0, 3.0); that's where things start to go sideways.
After that, every CGImage I get back in the completion handler is a tiny 4x3 image.
This happens only when I use the composition track; if I instead take each individual asset track and grab its last frame, whether through generateCGImagesAsynchronously or copyCGImage, it works fine. I only thought of calling generateCGImagesAsynchronously once on the composition to improve performance.
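For reference, here is a minimal sketch of that per-asset variant, assuming each self.videos[i].asset is the original AVAsset for the clip (names mirror the code above):

```swift
// Per-asset fallback that produces full-size frames (sketch, assuming
// self.videos[i].asset is the original AVAsset for each clip).
for i in 0..<self.videos.count {
    let asset = self.videos[i].asset
    let generator = AVAssetImageGenerator(asset: asset)
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero
    // Ask for a time just inside the end of the clip to get its last frame.
    let lastTime = CMTimeSubtract(asset.duration, CMTime(value: 1, timescale: 10000))
    if let cgImage = try? generator.copyCGImage(at: lastTime, actualTime: nil) {
        self.videos[i].lastFrame = NSImage(cgImage: cgImage,
                                           size: NSSize(width: cgImage.width, height: cgImage.height))
    }
}
```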
I should add that the composition actually plays fine through its AVPlayerLayer; it just seems that its internal size representation is (4.0, 3.0) for some reason.
How can I generate frames from the composition track with the correct size of the actual video in the track?