My app currently captures video using an AVCaptureSession set with the AVCaptureSessionPreset1920x1080 preset. However, I'd like to update this behavior, such that video can be recorded at a range of different resolutions.
There isn't a preset aligning to each desired resolution, so I thought I'd instead directly set the AVCaptureDeviceFormat. For any desired resolution, I would find the format that is closest without going under the desired resolution, and then crop it down as a post-processing step.
However, what I've observed is that a device can offer a range of available formats at each resolution, with various differing settings. Presumably there is logic within AVCaptureSession that selects a reasonable default based on all these different settings, but since I am applying the format directly, I don't think I have a way to make use of that default logic, and it appears to be undocumented?
Does this mean that the only way to select a format is to implement a comparison function that considers all different values of all different properties on AVCaptureDeviceFormat, and then sort the formats according to this comparator?
If so, what if some new property is added to AVCaptureDeviceFormat in the future? The sort would not take this new property into account, and the function might select a format with some new undesired property.
Are there any guarantees about what types of formats will be supported on a device? For example, can I take for granted that a '420v' format will exist at each resolution? If so, I could filter the formats down to only those with this setting without risking filtering out all of the supported formats.
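To make the question concrete, here is a rough sketch of the kind of selection I have in mind: filter to '420v' and pick the smallest format whose dimensions cover the target. This is only an illustration, and it ignores frame rate ranges, binning, HDR, and anything added in the future, which is exactly my worry:

import AVFoundation

// Sketch: smallest '420v' format whose dimensions are at least the target resolution.
func candidateFormat(for device: AVCaptureDevice, target: CMVideoDimensions) -> AVCaptureDevice.Format? {
    return device.formats
        .filter { format in
            // Assumption: restrict to 420v (bi-planar video-range YCbCr).
            CMFormatDescriptionGetMediaSubType(format.formatDescription)
                == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
        }
        .filter { format in
            let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
            return dims.width >= target.width && dims.height >= target.height
        }
        .min { a, b in
            let da = CMVideoFormatDescriptionGetDimensions(a.formatDescription)
            let db = CMVideoFormatDescriptionGetDimensions(b.formatDescription)
            return da.width * da.height < db.width * db.height
        }
}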
I suspect I may be missing something obvious. Any help would be greatly appreciated!
Since iOS/iPadOS/tvOS 18, we have run into a new problem with streaming of FairPlay-encrypted video. On the affected streams, the audio plays perfectly but the video freezes for periods of a few seconds: it will freeze for 5 seconds or so, be OK for a few seconds, then freeze again.
It is entirely reproducible when all of the following are true:
the video streams were produced by a particular encoder (or with particular settings, we're not sure which)
the video is encrypted
the device is running some variety of iOS 18 (or iPadOS/tvOS 18)
the device is an affected device
Known affected devices are:
Apple TV 4K (2nd generation)
iPad Pro 11" (1st and 2nd generation)
Devices known not to show the problem are:
all other Apple TV models
iPhone 13 Pro and 16 Pro
If we stream the same content unencrypted, it plays perfectly; the encrypted stream also plays fine on, say, tvOS 17.
When the freezing occurs, we can see repeating blocks of lines like the following in the console logs:
default 18:08:46.578582+0000 videocodecd AppleAVD: AppleAVDDecodeFrameResponse(): Frame# 5771 DecodeFrame failed with error 0x0000013c
default 18:08:46.578756+0000 videocodecd AppleAVD: AppleAVDDecodeFrameInternal(): failed - error: 316
default 18:08:46.579018+0000 videocodecd AppleAVD: AppleAVDDecodeFrameInternal(): avdDec - Frame# 5771, DecodeFrame failed with error: 0x13c
default 18:08:46.579169+0000 videocodecd AppleAVD: AppleAVDDisplayCallback(): Asking fig to drop frame # 5771 with err -12909 - internalStatus: 315
There are also some more relevant-looking lines:
default 18:17:39.122019+0000 kernel AppleAVD: avdOutbox0ISR(): FRM DONE (cid: 2.0, fno: 10970, codecT: 1) FAILED!!
default 18:17:39.122155+0000 videocodecd AppleAVD: AppleAVDDisplayCallback(): Asking fig to drop frame # 10970 with err -12909 - internalStatus: 315
default 18:17:39.122221+0000 kernel AppleAVD: ## client[ 2.0] @ frm 10970, errStatus: 0x10
default 18:17:39.122338+0000 kernel AppleAVD: decodeFailIdentify(): VP error bit 4 has EP3B0 error
default 18:17:39.122401+0000 kernel AppleAVD: processHWResponse(): clientID 2.0 frameNumber 10970 error 315, offsetIndex 10, isHwErr 1
So it would seem to me that one of the following must be happening:
When these particular HLS files are encrypted, the data is being corrupted in some way that played back fine on iOS 17 and earlier but no longer does on 18+, or
There's a regression in iOS 18 that means this particular format of video data is corrupted on decryption.
If anyone has seen similar behaviour, or has any ideas how to identify which of the two scenarios it is, please say.
Unfortunately we don't have control of the servers so can't make changes there unless we can identify they are definitely the cause of the problem.
Thanks, Simon.
Capturing more than one display is no longer working with macOS Sequoia.
We have a product that allows users to capture up to 2 displays/screens. Our application uses GStreamer, which in turn is based on AVFoundation.
I found a quick way to replicate the issue by just running 2 captures from separate terminals. Assuming display 1 has device index 0, and display 2 has device index 1, here are the steps:
Install GStreamer with:
brew install gstreamer
Then open 2 terminal windows and launch the following processes:
terminal 1 (device-index:0):
gst-launch-1.0 avfvideosrc -e device-index=0 capture-screen=true ! queue ! videoscale ! video/x-raw,width=640,height=360 ! videoconvert ! osxvideosink
terminal 2 (device-index:1):
gst-launch-1.0 avfvideosrc -e device-index=1 capture-screen=true ! queue ! videoscale ! video/x-raw,width=640,height=360 ! videoconvert ! osxvideosink
The first process that is launched will show the screen, the second process launched will not.
Testing this on macOS Ventura and Sonoma works as expected, showing both screens.
I submitted the same issue on Feedback Assistant: FB15900976
Hello,
To create a test project, I want to understand how the video and audio settings would look for a destination video whose content comes from a source video.
I obtained the output from the source video in the audio and video tracks as follows:
let audioSettings = [
    AVFormatIDKey: kAudioFormatLinearPCM,
    AVSampleRateKey: 44100,
    AVNumberOfChannelsKey: 2
] as [String: Any]
var audioOutput = AVAssetReaderTrackOutput(track: audioTrack!, outputSettings: audioSettings)

// Video
let videoSettings = [
    kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
    kCVPixelBufferWidthKey: videoTrack!.naturalSize.width,
    kCVPixelBufferHeightKey: videoTrack!.naturalSize.height
] as [String: Any]
var videoOutput = AVAssetReaderTrackOutput(track: videoTrack!, outputSettings: videoSettings)
With this, I'm obtaining the CMSampleBuffer using AVAssetReader.copyNextSampleBuffer.
How can I add it to the destination video?
Should I use a while loop, considering I already have the AVAssetWriter set up?
Something like this:
while let buffer = videoOutput.copyNextSampleBuffer() {
    if let imgBuffer = CMSampleBufferGetImageBuffer(buffer) {
        let frame = imgBuffer as CVPixelBuffer
        let time = CMSampleBufferGetPresentationTimeStamp(buffer)
        adaptor.append(frame, withPresentationTime: time)
    }
}
Lastly, regarding the destination video.
How should the AVAssetWriterInput for audio and the pixel buffer of the destination video be set up?
Provide an example, something like:
let audioSettings = […] as [String: Any]
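To be concrete, I imagine the destination side looking roughly like this, but I'm not sure it's right. This is a sketch on my part: the codec, dimensions, and outputURL are placeholders, and videoOutput is the reader output from above with the reader already started.

import AVFoundation

// Destination video input (placeholder settings: H.264 at 1920x1080).
let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
let videoWriterSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080
]
let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoWriterSettings)
videoInput.expectsMediaDataInRealTime = false

// The adaptor feeds CVPixelBuffers to the video input.
let adaptor = AVAssetWriterInputPixelBufferAdaptor(
    assetWriterInput: videoInput,
    sourcePixelBufferAttributes: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])

// Destination audio input (placeholder settings: AAC stereo at 44.1 kHz).
let audioWriterSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44100,
    AVNumberOfChannelsKey: 2
]
let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioWriterSettings)
audioInput.expectsMediaDataInRealTime = false

writer.add(videoInput)
writer.add(audioInput)
writer.startWriting()
writer.startSession(atSourceTime: .zero)

// Pull video sample buffers from the reader output and append their pixel buffers.
while let buffer = videoOutput.copyNextSampleBuffer() {
    while !videoInput.isReadyForMoreMediaData { usleep(10_000) }
    if let pixelBuffer = CMSampleBufferGetImageBuffer(buffer) {
        if !adaptor.append(pixelBuffer, withPresentationTime: CMSampleBufferGetPresentationTimeStamp(buffer)) {
            print("video append failed:", writer.error ?? "unknown")
        }
    }
}
videoInput.markAsFinished()

// Audio sample buffers would be appended with audioInput.append(_:) in the same way, then:
writer.finishWriting { print("finished with status:", writer.status.rawValue) }

Is that the right shape for it?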
Looking forward to your response.
I am talking about AVCaptureVideoDataOutput.recommendedVideoSettings.
I have found that it sometimes returns nil; here are my test results:
hevc .mov with activeColorSpace sRGB
60FPS -> ok
120FPS -> ok
hevc .mov with activeColorSpace displayP3_HLG
60FPS -> nil
120FPS -> nil
h264 .mov
30FPS -> ok
60FPS -> nil
120FPS -> nil
So, if no recommended settings are returned and there is no documentation, how is a developer supposed to use it?
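For reference, this is roughly how I call it, with a fallback of my own when it returns nil (the fallback values are just my guess, not something documented):

import AVFoundation

// videoDataOutput is assumed to be an AVCaptureVideoDataOutput already attached to a configured session.
func writerSettings(for videoDataOutput: AVCaptureVideoDataOutput) -> [String: Any] {
    // Ask for the recommended settings for writing a .mov file.
    if let recommended = videoDataOutput.recommendedVideoSettingsForAssetWriter(writingTo: .mov) {
        return recommended
    }
    // nil comes back in the cases listed above (e.g. displayP3_HLG, or h264 above 30 FPS),
    // so fall back to hand-written settings. These values are a guess on my part.
    return [
        AVVideoCodecKey: AVVideoCodecType.hevc,
        AVVideoWidthKey: 1920,
        AVVideoHeightKey: 1080
    ]
}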
We are processing videos with Core Image filters in our apps, using an AVMutableVideoComposition (for playback/preview and export).
For older devices, we want to limit the resolution at which the video frames are processed for performance and memory reasons. Ideally, we would tell AVFoundation to give us video frames with a defined maximum size into our composition. We thought setting the renderSize property of the composition to the desired size would do that.
However, this only changes the size of output frames, not the size of the source frames that come into the composition's handler block. For example:
let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    let input = request.sourceImage // <- this still has the video's original size
    // ...
})
composition.renderSize = CGSize(width: 1280, height: 720) // for example
So if the user selects a 4K video, our filter chain gets 4K input frames. Sure, we can scale them down inside our pipeline, but this costs resources and especially a lot of memory. It would be way better if AVFoundation could decode the video frames at the desired size before passing them into the composition handler.
Is there a way to tell AVFoundation to load smaller video frames?
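For now, the only workaround we see is scaling the source image down ourselves inside the handler, roughly like this (a sketch assuming a 1280x720 target; the decode still happens at full resolution, which is exactly the cost we'd like to avoid):

let targetSize = CGSize(width: 1280, height: 720) // our desired maximum processing size

let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    var image = request.sourceImage // still the video's original size
    let scale = min(targetSize.width / image.extent.width,
                    targetSize.height / image.extent.height,
                    1)
    if scale < 1 {
        // Downscale before running the filter chain to limit memory use.
        image = image.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
    }
    // ... apply our Core Image filters to `image` here ...
    request.finish(with: image, context: nil)
})
composition.renderSize = targetSize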
Hello,
I'm a deaf-blind programmer.
I'm experiencing memory issues in my app. Essentially, I'm writing a video.
In this output video, I get content from two sources.
The first source is an already recorded video of 18 seconds (just for testing). It will be shown at the beginning of the output video.
The second source is an array with photos and another array with audio buffers from AVSpeechSynthesizer.write(). The photos will be added along with the audio buffers to the output video, right after adding the 18-second video.
So, in the end, the output video should be:
18-second video + array of photos as video images and, for audio, the buffers from AVSpeechSynthesizer.write().
However, my app crashes as soon as I start the first process.
I'm using AVAssetWriter to write the video and AVAssetReader to read the video.
Below, I'll show the code where I get the CMSampleBuffer.
I'd like an example of how to add the 18-second video to the beginning of the output video.
It doesn't need to be a big piece of code.
Here it is:
// Variables
var audioReaderBuffers = [CMSampleBuffer]()
var videoReaderBuffers = [(frame: CVPixelBuffer, time: CMTime)]()
// Get CMSampleBuffer of a video asset
if let videoURL = videoURL {
    let videoAsset = AVAsset(url: videoURL)
    Task {
        let videoAssetTrack = try await videoAsset.loadTracks(withMediaType: .video).first!
        let audioTrack = try await videoAsset.loadTracks(withMediaType: .audio).first!
        let reader = try AVAssetReader(asset: videoAsset)

        let videoSettings = [
            kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey: videoAssetTrack.naturalSize.width,
            kCVPixelBufferHeightKey: videoAssetTrack.naturalSize.height
        ] as [String: Any]
        let readerVideoOutput = AVAssetReaderTrackOutput(track: videoAssetTrack, outputSettings: videoSettings)

        let audioSettings = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVSampleRateKey: 44100,
            AVNumberOfChannelsKey: 2
        ] as [String: Any]
        let readerAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: audioSettings)

        reader.add(readerVideoOutput)
        reader.add(readerAudioOutput)
        reader.startReading()

        // Video CMSampleBuffer
        while let sampleBuffer = readerVideoOutput.copyNextSampleBuffer() {
            autoreleasepool {
                if let imgBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
                    let pixBuf = imgBuffer as CVPixelBuffer
                    let pTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
                    videoReaderBuffers.append((frame: pixBuf, time: pTime))
                }
            }
        }
    }
}
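For clarity, here is the order of operations I have in mind for the output video. This is only a sketch under my assumptions: writer, videoWriterInput, adaptor, and photoPixelBuffers are placeholders for objects set up elsewhere in my project, and videoReaderBuffers is the array filled above.

// Sketch: write the 18-second video first, then the photo frames, offsetting their timestamps.
writer.startWriting()
writer.startSession(atSourceTime: .zero)

// 1. Append the frames read from the 18-second video (collected above).
var lastTime = CMTime.zero
for (frame, time) in videoReaderBuffers {
    while !videoWriterInput.isReadyForMoreMediaData { usleep(10_000) }
    adaptor.append(frame, withPresentationTime: time)
    lastTime = time
}

// 2. Append the photos (already converted to CVPixelBuffers), starting after the video ends.
let frameDuration = CMTime(value: 1, timescale: 30) // assuming 30 fps for the photo segment
var photoTime = lastTime
for photoBuffer in photoPixelBuffers {
    photoTime = CMTimeAdd(photoTime, frameDuration)
    while !videoWriterInput.isReadyForMoreMediaData { usleep(10_000) }
    adaptor.append(photoBuffer, withPresentationTime: photoTime)
}

// 3. The AVSpeechSynthesizer.write() buffers would be appended to the audio input with the same time offset.

Is this the right way to put the 18-second video at the beginning, or is there a better approach?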
I tried configuring the preferredForwardBufferDuration on devices using 4G and Wi-Fi, and in these cases, AVPlayer works correctly according to the configured buffer duration. However, when the device is connected to a 5G network, the configuration value no longer works.
For example, if I set preferredForwardBufferDuration to 30 seconds, AVPlayer preloads with a buffer of over 100 seconds. I’m not sure how to resolve this, as it’s causing issues with my system.
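For reference, the configuration involved is essentially just this (a minimal sketch; streamURL stands in for the HLS URL we play):

import AVFoundation

let item = AVPlayerItem(url: streamURL)
// Ask AVPlayer to keep roughly 30 seconds buffered ahead of the playhead.
item.preferredForwardBufferDuration = 30
let player = AVPlayer(playerItem: item)
// On Wi-Fi and 4G the buffer stays near 30 seconds; on 5G it grows past 100 seconds.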
Hello,
First, some version and software details:
Software: iOS 18.1
Hardware: iPhone 14 Pro Max and later
Xcode: 16.0
Summary: AVAssetReader is not concatenating a video at the beginning of the output video. The output video should contain a scene of me introducing the content, followed by a blue screen with AVSpeechSynthesizer reading out a text that I pasted above the "Generate Video" button.
Details:
Now, let's talk about the app.
Basically, I’m developing an app that generates a video with the following features:
My app will create an output video that is split into an opening scene followed by a fully blue screen.
The opening scene will be taken from a video I choose from my gallery.
I will read the opening video using AVAssetReader as usual.
After the opening scene, I will use the content of a text read by AVSpeechSynthesizer.write().
After the opening scene, the synthesized audio will start playing while the blue screen is displayed.
All of this is already defined in the attached project.
Each project file has a comment at the beginning introducing its content.
How to test:
Write something in the field above the "Generate Video" button. For example, type "Hello, world!"
Then, press the "Library" button and select a video from the gallery, about 30 seconds long.
That’s it. Press the "Generate Video" button.
The result I’ve experienced is a crash or failure to generate the video.
Practical example of what I want to achieve:
Suppose I record a 30-second video where I say, "I’m going to tell you the story of Snow White."
Then, I paste the "Snow White" story into the field above the "Generate Video" button.
The output video should contain me saying, "I’m going to tell you the story of Snow White."
After that, the AVSpeechSynthesizer will read the story I pasted, while displaying a blue screen.
I look forward to a solution.
Thank you very much!
convertToCMSampleBuffer.swift
convertToPixelBuffer.swift
createInputs.swift
createVideo.swift
test.swift
saveVideo.swift
TestApp.swift
editingVideo.swift
sampleReaderProvider.swift
misc.swift
sampleProvider.swift
Looking to output DV video to my JVC SR-VS30 video deck. I used to be able to do this, but with most FireWire stuff being deprecated, I'm not sure how to go about it. I found this old developer sample code that seems to do exactly what I'd like. Surely this could be ported or updated for current macOS?
https://developer.apple.com/library/archive/samplecode/SimpleVideoOut/Introduction/Intro.html#//apple_ref/doc/uid/DTS10000809-Intro-DontLinkElementID_2
We have had the same video player in our app for at least 5 years with few issues, but the iOS 18 update has resulted in videos that our users have downloaded for offline viewing now playing back at 2x speed.
Hey. I am trying to create a present view with a bunch of media (images/videos). Right now I am using a ZStack to render each media item and change its opacity based on the index selected using a ScrollView. The issue seems to be that sometimes videos don't load in the main slide. A slide is created since the video exists, and the player even shows controls, but it doesn't play anything.
Present View Z-Stack
ZStack {
    ForEach(presentation.slides.indices, id: \.self) { index in
        if let media = mediaCacheManager.mediaCache[index] {
            if let player = media as? AVPlayer {
                PlayerView(player: player)
                    .aspectRatio(16/10, contentMode: .fit)
                    .frame(width: UIScreen.main.bounds.width * 0.8)
                    .background(Color.gray.opacity(0.2))
                    .clipShape(RoundedRectangle(cornerRadius: 40))
                    .overlay(
                        RoundedRectangle(cornerRadius: 40)
                            .stroke(Color.gray.opacity(0.5), lineWidth: 1)
                    )
                    .onDisappear {
                        player.pause()
                    }
                    .opacity(appModel.currentSlide == index ? 1 : 0)
            } else if let image = media as? Image {
                image
                    .resizable()
                    .scaledToFit()
                    .frame(width: UIScreen.main.bounds.width * 0.8)
                    .background(Color.gray.opacity(0.2))
                    .clipShape(RoundedRectangle(cornerRadius: 40))
                    .overlay(
                        RoundedRectangle(cornerRadius: 40)
                            .stroke(Color.gray.opacity(0.5), lineWidth: 1)
                    )
                    .padding(.vertical, 10)
                    .opacity(appModel.currentSlide == index ? 1 : 0)
            }
        }
    }
}
The PlayerView
public class PlayerUIView: UIView {
    let playerVC = AVPlayerViewController()
    let gravity: AVLayerVideoGravity
    let manageAudio: Bool

    override init(frame: CGRect) {
        self.gravity = .resizeAspectFill
        self.manageAudio = true
        super.init(frame: frame)
    }

    deinit {
        if manageAudio {
            try? AVAudioSession.sharedInstance().setActive(false)
        }
    }

    init(player: AVPlayer?, gravity: AVLayerVideoGravity, manageAudio: Bool = true) {
        self.gravity = gravity
        self.manageAudio = manageAudio
        super.init(frame: .zero)
        guard let player = player else { return }
        self.playerSetup(player: player)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    public override func layoutSubviews() {
        super.layoutSubviews()
        playerVC.view.frame = bounds
        playerVC.view.backgroundColor = .clear
        playerVC.allowsVideoFrameAnalysis = false
    }

    private func playerSetup(player: AVPlayer) {
        playerVC.updatesNowPlayingInfoCenter = true
        playerVC.player = player
        playerVC.showsPlaybackControls = true
        playerVC.view.backgroundColor = .clear
        playerVC.exitsFullScreenWhenPlaybackEnds = true
        playerVC.videoGravity = gravity
        self.addSubview(playerVC.view)
    }
}
We have a universal iOS/tvOS app that also supports iOS App on Mac.
In our AVPlayer-based video player we support AirPlay with AVRouteDetector and AVRoutePickerView. We play HLS streams.
When we try to AirPlay from an iOS device to an Apple TV or a Mac that has our app installed, it doesn't work. The receiver is marked as active in the route picker UI but the video doesn't show up on the receiver and playback stops.
When our app isn't installed on the receiver device, everything works as expected.
Has anyone encountered the same issue? Any solutions available for this?
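For reference, our AirPlay setup is essentially the standard one; a simplified sketch (containerView and hlsURL are placeholders for our player UI and stream):

import AVKit

// Route detection so we can show/hide the AirPlay button.
let routeDetector = AVRouteDetector()
routeDetector.isRouteDetectionEnabled = true

// The system route picker button.
let routePicker = AVRoutePickerView()
routePicker.prioritizesVideoDevices = true // we play HLS video
containerView.addSubview(routePicker)

// Playback itself is a plain AVPlayer with the HLS URL.
let player = AVPlayer(url: hlsURL)
player.allowsExternalPlayback = true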
The media services used for HLS streaming in an AVPlayer seem to crash if your segments are too large.
Anything over 20Mbps seems to cause a crash. I have tried adjusting the segment length to 1 second also and it didn't help.
I am remuxing Dolby Vision and HDR video and want to avoid transcoding and losing any metadata. However the segments are too large.
Is there a workaround for this? Otherwise it seems AVFoundation is not suited to high bitrate HLS and I should be using MPV or similar.
I am recording video on iOS using ReplayKit and found that after copying data in the processSampleBuffer:withType: callback using memcpy, the data changes. This occurs particularly frequently when the screen content changes rapidly, making it look like the frames are overlapping.
I found that the values starting from byte 672 in the video data on my device often change. Here is the test demo:
- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer withType:(RPSampleBufferType)sampleBufferType {
    switch (sampleBufferType) {
        case RPSampleBufferTypeVideo: {
            CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
            int ret = 0;
            uint8_t *oYData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
            size_t oYSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
            uint8_t *oUVData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
            size_t oUVSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
            if (oYSize <= 672) {
                return;
            }
            uint8_t tempValue = oYData[672];
            uint8_t *tYData = malloc(oYSize);
            memcpy(tYData, oYData, oYSize);
            if (tYData[672] != oYData[672]) {
                NSLog(@"$$$$$$$$$$$$$$$$------ t:%d o:%d temp:%d", tYData[672], oYData[672], tempValue);
            }
            free(tYData);
            CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
            break;
        }
        default: {
            break;
        }
    }
}
Output:
$$$$$$$$$$$$$$$$------ t:110 o:124 temp:110
$$$$$$$$$$$$$$$$------ t:111 o:133 temp:111
$$$$$$$$$$$$$$$$------ t:124 o:138 temp:124
$$$$$$$$$$$$$$$$------ t:133 o:144 temp:133
$$$$$$$$$$$$$$$$------ t:138 o:151 temp:138
$$$$$$$$$$$$$$$$------ t:144 o:156 temp:144
$$$$$$$$$$$$$$$$------ t:151 o:135 temp:151
$$$$$$$$$$$$$$$$------ t:156 o:78 temp:156
$$$$$$$$$$$$$$$$------ t:135 o:76 temp:135
$$$$$$$$$$$$$$$$------ t:78 o:77 temp:78
$$$$$$$$$$$$$$$$------ t:76 o:80 temp:76
$$$$$$$$$$$$$$$$------ t:77 o:80 temp:77
$$$$$$$$$$$$$$$$------ t:80 o:79 temp:80
$$$$$$$$$$$$$$$$------ t:79 o:80 temp:79
I’m using AVFoundation in my iPhone application to encode a video in MP4 format with H.264, which can then be shared or exported.
Do I need to pay a license for using the H.264 format to MPEG LA? Or are these fees already covered by Apple?
I’ve read articles suggesting that Apple covers these fees when encoding is done through its native APIs (or via its dedicated encoding hardware components), but I haven’t found any explicit confirmation of this point in the various documentation or contracts... Did I miss something?
When is playbackBufferEmpty triggered? Why is this property observed as YES from the observer method when returning to the foreground from the background, even if the buffer is not empty?
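For context, the observation is set up roughly like this (a minimal KVO sketch; playerItem stands in for the item currently being played, and the observation is retained elsewhere):

import AVFoundation

let observation = playerItem.observe(\.isPlaybackBufferEmpty, options: [.new]) { item, _ in
    // This fires with true when returning to the foreground,
    // even though the buffer does not appear to be empty.
    print("isPlaybackBufferEmpty:", item.isPlaybackBufferEmpty)
}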
There are significant crash reports coming from iOS 18 users regarding AVKit framework that starts from this line [AVPlayerController _observeValueForKeyPath:oldValue:newValue:] which seems to be coming from iOS internal SDK. There are 2 kinds of crash we found:
UI modification on background thread
From the stack trace it seems like when AVPictureInPictureController is being deallocated and its view is being removed from superview somehow the code is being executed in background thread because there is this line there _AssertAutoLayoutOnAllowedThreadsOnly highlighted before the crash.
But I've checked our code that works with AVPictureInPictureController: in the locations where we deallocate the object, it is always called on the main thread, namely inside viewDidLoad and deinit of a UIViewController subclass. From the logs, it seems like the crash happens when the user tries to open other content while the PIP player is active, so the current PIP instance gets replaced with a new one. My suspicion is that the observation logic inside AVPlayerController could be the hint to this issue; probably something is broken over there, since this issue happens across our app versions on iOS 18 users only.
Unfortunately, I have been unable to reproduce this issue yet; one of my colleagues reproduced it once but hasn't been able to do it again since. The reports keep rising each day, up to 1.3k events in the last 30 days now.
Over release object
This one has fewer reports than the first, but I decided to include it since it might hold relevant information about the first crash, as the starting stack trace is similar. The crash timing also seems similar: we deallocate the existing AVPictureInPictureController and later replace it with a new one. It is likewise found only on iOS 18 and also refers to [AVPlayerController _observeValueForKeyPath:oldValue:newValue:]. I have been unable to reproduce this issue so far as well.
Oh, and both of the issues happened on both iPhone and iPad.
We'd appreciate any advice on what we can do to avoid this in the future, and any hint on why it could happen.
I have reported this issue with bug number: FB15620734
I also attached one sample crash report for each of the crashes here.
non ui thread access.crash
over release.crash
Hello, I will use AVFoundation's AVAssetWriter and AVPlayer for H.264 and H.265 encoding and decoding in my app. It will be used commercially, so I would like to know if I need to pay any licensing fees for H.264 and H.265 encoding and decoding.
When setting the now playing info for playing media in MPNowPlayingInfoCenter we can set artwork. But it seems the Apple API for creating the artwork is crashing on iOS 18 (FB15145734).
On iOS 17 this gave the warning that the completion handler was not run on the main thread.
I've tried to seek help here: https://stackoverflow.com/questions/78989543/swift-data-race-with-appkit-mpmediaitemartwork-function/78990231?noredirect=1#comment139277425_78990231
but it seems that it's not possible to override the completion handler and therefore it's up to Apple to fix this issue.
.task {
    await MainActor.run {
        let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
        var nowPlayingInfo = [String: Any]()
        let image = NSImage(named: "image")!

        // warning: data race detected: @MainActor function at MPMediaItemArtwork/ContentView.swift:22 was not called on the main thread
        nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size, requestHandler: { _ in
            // Not on main thread here!
            return image
        })
        nowPlayingInfoCenter.nowPlayingInfo = nowPlayingInfo
    }
}
I'm wondering if there is an alternative method to set the now playing artwork?