As far as I know, background execution with the BackgroundTasks framework is budgeted: if we use it too much, the system will simply stop letting us run, at least for a while.
Is the same true when we call sendMessage from the Watch app to the iOS app? Is there a chance we could overuse it so that the system stops waking up the iOS app, or can I rely on sendMessage and use it as often and as heavily as I need?
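For context, this is the call in question, in a minimal sketch assuming an already-activated WCSession (sendUpdate and the payload shape are illustrative, not our real API):
import WatchConnectivity

func sendUpdate(_ payload: [String: Any]) {
    // sendMessage requires the counterpart app to be reachable.
    guard WCSession.default.isReachable else { return }
    WCSession.default.sendMessage(payload, replyHandler: { reply in
        // The iOS app was running or was woken up, and replied.
    }, errorHandler: { error in
        // Delivery failed; the iOS app may not have been woken.
        print("sendMessage failed: \(error)")
    })
}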
Hi everyone. In our project we have a processing task (BGProcessingTask) that is submitted when the app goes to the background, and the task request looks like this:
import BackgroundTasks

let request = BGProcessingTaskRequest(identifier: taskId)
request.requiresNetworkConnectivity = false
request.requiresExternalPower = true // only run while on a charger
let halfDay: TimeInterval = 12 * 60 * 60
request.earliestBeginDate = .init(timeIntervalSinceNow: halfDay) // no earlier than ~12h from now
try BGTaskScheduler.shared.submit(request)
The purpose of the task is to check whether there are any new photos in the Photo Library and, if there are, process them in various ways. This processing can be heavy and time-consuming.
When a launched background task finishes successfully or expires, we re-submit the same task so that the same check and processing run again later; this way no foreground launch is required to keep the checks going continuously, as sketched below.
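For illustration, a minimal sketch of that re-submission pattern, assuming a scheduleProcessingTask() helper that builds and submits the request above (the helper and processNewPhotos are illustrative names, not our real code):
// Register once, early at launch; the handler runs when the system
// launches the app in the background for this task.
_ = BGTaskScheduler.shared.register(forTaskWithIdentifier: taskId, using: nil) { task in
    guard let task = task as? BGProcessingTask else { return }
    task.expirationHandler = {
        scheduleProcessingTask() // re-submit before the system reclaims our time
        task.setTaskCompleted(success: false)
    }
    processNewPhotos { // check the Photo Library and process any new assets
        scheduleProcessingTask() // queue the next check ~12 hours out
        task.setTaskCompleted(success: true)
    }
}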
My concern is: how do we know if we are accidentally abusing the system? We don’t want to do anything “bad”, because we want the system to keep launching our app indefinitely.
Are there any reports or analytics on background task “health” and on our app’s standing in the background task system?
We use our own analytics events to try to confirm that everything works, but it’s hard to implement them in a way that’s actually useful.
Video is not rendered if videoComposition renderSize is the same as source video track's naturalSize
In the example below there is a simple AVComposition with one video track and an AVVideoComposition consisting of a single passthrough instruction. The source video track has preferredTransform == .identity, so there is no need to modify the layer instruction; passthrough should work.
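Here is a simplified sketch of that setup, assuming a local videoAsset (throwing calls shown with bare try; wrap in do/catch in real code):
import AVFoundation

let composition = AVMutableComposition()
let videoTrack = videoAsset.tracks(withMediaType: .video).first!
let compositionTrack = composition.addMutableTrack(withMediaType: .video,
                                                   preferredTrackID: kCMPersistentTrackID_Invalid)!
try compositionTrack.insertTimeRange(videoTrack.timeRange, of: videoTrack, at: .zero)

// Single passthrough instruction covering the whole composition.
let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
instruction.layerInstructions = [AVMutableVideoCompositionLayerInstruction(assetTrack: compositionTrack)]

let videoComposition = AVMutableVideoComposition()
videoComposition.instructions = [instruction]
videoComposition.frameDuration = CMTime(value: 1, timescale: 30)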
For some reason, it doesn't render in AVPlayer when:
// Render size set to exactly the source track's naturalSize:
let renderSize = videoAsset.tracks(withMediaType: .video).first!.naturalSize
videoComposition.renderSize = renderSize
If I slightly modify the render size, or set some transform on the composition track, everything works (see the sketch after the sample link).
https://www.icloud.com/iclouddrive/0IrVpTkRkb213-T57OkQNn7WA#VideoCompositionProportionalRenderSize
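For reference, a sketch of the workaround (the exact offset is illustrative; any slight change to renderSize seems to do it):
let naturalSize = videoAsset.tracks(withMediaType: .video).first!.naturalSize
videoComposition.renderSize = CGSize(width: naturalSize.width + 1,
                                     height: naturalSize.height + 1) // rendering resumes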
I have a collection view with a UICollectionViewDiffableDataSource whose ItemIdentifierType is the class PHAssetVisualAssetsPickerItem. When I create an NSDiffableDataSourceSnapshot with the same items the dataSource already has and then apply it, the collection view blinks even though nothing changed. It actually blinks no matter what the new snapshot is.
My investigation shows that == (Equatable) and hash(into:) (Hashable) are not called on the items if the items are classes. If I change them to structs, the methods are called, the dataSource understands what to do, and nothing blinks.
https://www.icloud.com/iclouddrive/0vnB3auwp0ShEvPQmpVOAy6fg#ApplySnapshotIndexPaths
Here is the code where you can simply change HashableType to either class or struct and see that the collectionView update looks different when you tap the button.
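For reference, this is the shape of the struct-based identifier that behaves correctly in my test (PickerItem is an illustrative stand-in for PHAssetVisualAssetsPickerItem):
struct PickerItem: Hashable {
    let assetIdentifier: String

    static func == (lhs: PickerItem, rhs: PickerItem) -> Bool {
        lhs.assetIdentifier == rhs.assetIdentifier // called by the diffable data source
    }

    func hash(into hasher: inout Hasher) {
        hasher.combine(assetIdentifier) // also called, unlike with the class version
    }
}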
Hi everyone!
My team and I have had an issue that we haven't been able to fix for a few weeks now.
When we seek forward across the boundary between two videos in an AVComposition, the preview freezes: it gets stuck on the last frame of the first video. It doesn't freeze during simple playback (no seek involved), or if the seek is quick.
Here is screen recording of what's happening:
https://www.icloud.com/iclouddrive/0651H_IA679ENDk9fLANNvwXA#AVCompositionFreezeScreenRecording
It feels like we have tried everything, and nothing helps. When adding the second video, we ask the AVMutableComposition for a compatible track, and it returns the existing track, so we conclude that both assetTracks are compatible.
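Roughly, the insertion looks like this (a simplified sketch; secondAsset and composition are illustrative names, error handling omitted):
let assetTrack = secondAsset.tracks(withMediaType: .video).first!
// mutableTrack(compatibleWith:) returns the existing track here,
// so both videos end up in the same composition track.
let compositionTrack = composition.mutableTrack(compatibleWith: assetTrack)
    ?? composition.addMutableTrack(withMediaType: .video,
                                   preferredTrackID: kCMPersistentTrackID_Invalid)!
try compositionTrack.insertTimeRange(assetTrack.timeRange, of: assetTrack, at: composition.duration)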
All the ranges and durations have been checked many times.
It fails whether or not a videoComposition is set on the playerItem.
My current theory is that even though the composition says the existing compositionTrack is compatible with the second video, for some reason we can't simply insert the second video into it; maybe the transform is incompatible, I don't know.
One more note: if we insert a range of the source videoAssetTrack whose duration is shorter than videoAssetTrack.timeRange.duration, then everything works. Maybe it's some issue with segment time mapping, but everything we tried along those lines failed.
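For example, something along these lines avoids the freeze (illustrative; we just shave a tiny amount off the end of the range):
let fullRange = assetTrack.timeRange
let shortened = CMTimeRange(start: fullRange.start,
                            duration: fullRange.duration - CMTime(value: 1, timescale: 600))
try compositionTrack.insertTimeRange(shortened, of: assetTrack, at: composition.duration)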
I tried to minimize the amount of code needed to demonstrate the issue, so hopefully it's easy to see what I'm talking about. Just seek slowly from the end of video1 to the start of video2 and it will get stuck.
https://www.icloud.com/iclouddrive/0tDFgcQHKsIS7obiCA9yMHAEA#AVCompositionFreezeDemo
Thanks very much in advance, any help would be much appreciated!