Given an MPMediaItem the user selected from MPMediaPickerController or from MPMusicPlayerController.systemMusicPlayer.nowPlayingItem, is it possible to find out if this song is lossless and if it supports Spatial Audio? Thanks!
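As far as I know, MPMediaItem exposes no public "isLossless" or "supportsSpatialAudio" property. One partial workaround (a sketch, with assumptions: it only works for non-DRM items, and it detects the codec rather than Spatial Audio support) is to open the item's underlying file via assetURL and inspect its audio format:

```swift
import MediaPlayer
import AVFoundation

// Sketch: for non-DRM items, assetURL lets you open the file and check
// whether the audio codec is a lossless one (ALAC or FLAC). Spatial Audio
// support is, to my knowledge, not exposed through any public API here.
func inspectFormat(of item: MPMediaItem) async throws {
    // assetURL is nil for DRM-protected or cloud-only Apple Music items.
    guard let url = item.assetURL else { return }
    let asset = AVURLAsset(url: url)
    guard let track = try await asset.loadTracks(withMediaType: .audio).first else { return }
    for desc in try await track.load(.formatDescriptions) {
        guard let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(desc) else { continue }
        let isLossless = asbd.pointee.mFormatID == kAudioFormatAppleLossless
                      || asbd.pointee.mFormatID == kAudioFormatFLAC
        print("format: \(asbd.pointee.mFormatID), lossless codec: \(isLossless)")
    }
}
```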
Posts under Media tag
61 Posts
I have a Catalyst application that uses (as expected) MPNowPlayingInfoCenter to set the now playing info and MPRemoteCommandCenter to get the media events for play/pause/stop/favorite/etc.
The code is shared across iOS, tvOS, and watchOS, and it works correctly there. It does not seem to work on macOS (the app is compiled as a Catalyst application) on Big Sur (or Monterey, fwiw). The media keys on the keyboard start the Music app, and the music section of Control Center does not show the now playing info (nor do the media controls there send messages to the app).
I seem to remember that it used to work on Catalina (at least the media key part), and it definitely worked in a previous version of the same app that was a plain UIKit one.
Is this a bug (worth a feedback to Apple), or is something wrong on my side? Did I forget some magic capability for macOS? The app is sandboxed and uses the hardened runtime, in case that is significant.
Thank you for any hint!
I know that if you want background audio from AVPlayer you need to detach your AVPlayer from either your AVPlayerViewController or your AVPlayerLayer, in addition to having your AVAudioSession configured correctly.
I have that all squared away and background audio is fine until we introduce AVPictureInPictureController or use the PiP behavior baked into AVPlayerViewController.
If you want PiP to behave as expected when you put your app into the background (by switching to another app or going to the home screen), you can't perform the detachment operation; otherwise the PiP display fails.
On an iPad, if PiP is active and you lock your device, you continue to get background audio playback. However, on an iPhone, if PiP is active and you lock the device, the audio pauses.
If PiP is inactive and you lock the device, the audio will pause and you have to manually tap play on the lock-screen controls. This is the same on both iPad and iPhone.
My questions are:
Is there a way to keep background-audio playback going when PiP is inactive and the device is locked? (iPhone and iPad)
Is there a way to keep background-audio playback going when PiP is active and the device is locked? (iPhone)
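For reference, this is the audio-session setup I'd expect for lock-screen playback (a minimal sketch of the standard configuration; whether it changes the PiP-active iPhone behavior described above is exactly the open question):

```swift
import AVFoundation

// Sketch: the .playback category is what allows audio to continue when the
// screen locks, together with the "Audio, AirPlay, and Picture in Picture"
// background mode capability.
func configurePlaybackSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .moviePlayback)
    try session.setActive(true)
}
```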
CVPixelBuffer.h defines
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v', /* Bi-Planar Component Y'CbCr 8-bit 4:2:0, video-range (luma=[16,235] chroma=[16,240]). baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct */
kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange = 'x420', /* 2 plane YCbCr10 4:2:0, each 10 bits in the MSBs of 16bits, video-range (luma=[64,940] chroma=[64,960]) */
But when I set either of the above formats for the camera output, I find that the output pixel buffer's values exceed those ranges: I can see [0, 255] for 420YpCbCr8BiPlanarVideoRange and [0, 1023] for 420YpCbCr10BiPlanarVideoRange.
Is this a bug, or is something wrong with the output? If it is not, how can I choose the correct matrix to convert the YUV data to RGB?
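One way to answer the matrix question without guessing (a sketch, not an explanation of the out-of-range values): the YCbCr transfer matrix (BT.601 vs. BT.709 vs. BT.2020) is carried in the pixel buffer's attachments, so you can read it off each buffer that the camera delivers:

```swift
import CoreVideo

// Sketch: '420v'/'x420' are video-range by definition, but the matrix used
// to convert to RGB should come from the buffer's own attachments.
func describeColor(of pixelBuffer: CVPixelBuffer) {
    if let matrix = CVBufferGetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey, nil)?
        .takeUnretainedValue() {
        print("YCbCr matrix: \(matrix)")   // e.g. kCVImageBufferYCbCrMatrix_ITU_R_709_2
    }
    print("pixel format: \(CVPixelBufferGetPixelFormatType(pixelBuffer))")
}
```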
I am trying to save an image to the user's photo library using PHPhotoLibrary and set the image file name at the time of saving using the code below. This is working the first time, but if I then try to save the same image again with a different file name, it saves with the same file name as before.
Is there something I need to add to let the system know to save a new version of the image with a new file name?
Thank you
PHPhotoLibrary.shared().performChanges({
    let assetType: PHAssetResourceType = .photo
    let request: PHAssetCreationRequest = .forAsset()
    let createOptions = PHAssetResourceCreationOptions()
    createOptions.originalFilename = fileName
    request.addResource(with: assetType, data: image.jpegData(compressionQuality: 1)!, options: createOptions)
}, completionHandler: { success, error in
    if success, error == nil {
        print("Success saving image")
    } else {
        print("Error saving image: \(error!.localizedDescription)")
    }
})
Hi, I'm trying to add a video to my first iOS app. From the tutorials I've read online, this seemed to be a simple process of creating an AVPlayer, providing a URL to the video file, and using onAppear to start the video playing when the view is shown.
Below is a simplified version of the code I'm using in my app:
struct ContentView: View {
    let avPlayer = AVPlayer(url: Bundle.main.url(forResource: "Intro", withExtension: "mp4")!)

    var body: some View {
        VStack {
            VideoPlayer(player: avPlayer)
                .onAppear {
                    avPlayer.play()
                }
        }
    }
}
When I run this code, the video plays but when it finishes playing I receive the following errors in the Xcode output window:
2023-01-27 11:56:39.530526+1100 TestVideo[29859:2475750] [api] -[CIImage initWithCVPixelBuffer:options:] failed because the buffer is nil.
2023-01-27 11:56:39.676462+1100 TestVideo[29859:2475835] [TextIdentificationService] Text LID dominant failure: lidInconclusive
2023-01-27 11:56:39.676822+1100 TestVideo[29859:2475835] [VisualTranslationService] Visual isTranslatable: NO; not offering translation: lidInconclusive
2023-01-27 11:56:40.569337+1100 TestVideo[29859:2476091] Metal API Validation Enabled
I have googled each of these error messages but have not been able to find any information explaining exactly what they mean or how to eliminate them.
I am using Xcode 14.2 and testing on iOS 16.2.
If anyone could please point me in the right direction of how to understand and eliminate these errors I'd really appreciate it. Thanks!
Is it still possible to author a QuickTime movie with hyperlinks? I'm building a website and I know at one point you could author a QuickTime movie that supported links inside the video, either to other timestamps in the video or to other web pages.
I don't want to use a custom player, I'd prefer to use the system level. I've seen a really amazing example of this on the mobile version of the memory alpha (Star Trek nerds!) website. There is a movie that plays at the top of pages that is fully interactive.
Is that still supported? Is it possible to author that way? I'm not making anything insanely complicated, I just thought it would be a nice way to build a website with tools I'm more comfortable working in.
https://developer.apple.com/documentation/shazamkit/shazamkit_dance_finder_with_managed_session
The song detection is successful; however, with the new APIs I can't get this demo working with SHLibrary, where it is expected to display the RecentDanceRowView.
I wonder if I missed any steps or if SHLibrary is not ready yet.
I am trying to create an album for an app with PhotoKit and store images in it. Is there any way to do this under the NSPhotoLibraryAddUsageDescription permission?
At first glance, using NSPhotoLibraryAddUsageDescription seems to be the best choice for this app, since I will not be loading any images. However, there are two album operations that can only be done under NSPhotoLibraryUsageDescription.
Creating an album
Even though creating an album does not involve reading media, it requires NSPhotoLibraryUsageDescription, which grants the app permission to read the user's media. This is a bit unconvincing.
Saving images in the created album
Before saving, you must check whether the app has already created the album, fetching it by name as a key. This is where NSPhotoLibraryUsageDescription is needed. I understand that NSPhotoLibraryUsageDescription is needed for fetching, but if iOS forbade the creation of albums with the same name and ignored attempts to create an album with an already existing name, this fetching would not be necessary.
In summary, I just want to create an album for my app and store media in it, but in order to do so I need to get permission from the user to read into the photos, which goes against the idea that the permissions I request should be minimal and only what I need. If there is a way to do this under the NSPhotoLibraryAddUsageDescription permission I would like to know, I am new to Swift so sorry if I am wrong.
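To make the dilemma concrete, here is a sketch of the fetch-or-create flow (my reading of the access levels on iOS 14+, not an official ruling: .addOnly covers creating assets, but fetching your album back via PHAssetCollection.fetchAssetCollections appears to require .readWrite, which matches the problem described above):

```swift
import Photos

// Sketch: fetch an album by title, creating it if missing. The fetch step is
// what seems to force .readWrite authorization.
func ensureAlbum(named title: String, completion: @escaping (PHAssetCollection?) -> Void) {
    PHPhotoLibrary.requestAuthorization(for: .readWrite) { status in
        guard status == .authorized || status == .limited else { return completion(nil) }
        let options = PHFetchOptions()
        options.predicate = NSPredicate(format: "title = %@", title)
        if let existing = PHAssetCollection.fetchAssetCollections(
            with: .album, subtype: .any, options: options).firstObject {
            return completion(existing)
        }
        PHPhotoLibrary.shared().performChanges({
            PHAssetCollectionChangeRequest.creationRequestForAssetCollection(withTitle: title)
        }) { _, _ in
            completion(PHAssetCollection.fetchAssetCollections(
                with: .album, subtype: .any, options: options).firstObject)
        }
    }
}
```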
Are there plans to expose the cinematic frames (e.g. disparity) to an AVAsynchronousCIImageFilteringRequest?
I want to use my own lens blur shader on the cinematic frames.
Right now it looks like the cinematic frames are only available in an AVAsynchronousVideoCompositionRequest, like this:
guard let sourceFrame = SourceFrame(request: request, cinematicCompositionInfo: cinematicCompositionInfo) else { return }
let disparity = sourceFrame.disparityBuffer
Can I do the same with the image?
PHAssetChangeRequest.creationRequestForAsset(from: UIImage(data: bytes)!)
If I disable playback controls for an AVPlayer (showsPlaybackControls), some features of MPNowPlayingInfoCenter no longer work (play/pause, skip forward and backward).
I need custom video and audio controls on my AVPlayer in my app, that's why I disabled the iOS playback controls. But I also need the features of the MPNowPlayingInfoCenter. Is there another solution to achieve this?
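A minimal sketch of the standard approach (an assumption about what the poster needs, not a confirmed fix): with the system controls hidden, keep the now playing info updated yourself and register handlers on MPRemoteCommandCenter so the lock screen and Control Center keep working alongside your custom controls:

```swift
import MediaPlayer
import AVFoundation

// Sketch: wire remote commands to an AVPlayer and publish now playing info.
func setUpRemoteCommands(for player: AVPlayer) {
    let center = MPRemoteCommandCenter.shared()
    center.playCommand.addTarget { _ in player.play(); return .success }
    center.pauseCommand.addTarget { _ in player.pause(); return .success }
    center.skipForwardCommand.preferredIntervals = [15]
    center.skipForwardCommand.addTarget { _ in
        player.seek(to: player.currentTime() + CMTime(seconds: 15, preferredTimescale: 600))
        return .success
    }
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: "Title",           // placeholder metadata
        MPNowPlayingInfoPropertyPlaybackRate: 1.0
    ]
}
```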
Hi community
I'm developing an application for macOS and I need to capture the mic audio stream. Currently, using Core Audio in Swift, I'm able to capture the audio stream using IO procs, and I have applied AUVoiceProcessing to prevent echo from the speaker device. I was able to connect the audio unit and perform the echo cancellation.
The problem I'm getting is that when I'm using AUVoiceProcessing, the gain of the two devices gets reduced, which affects the volume of both devices (microphone and speaker).
I have tried to disable the AGC using the property kAUVoiceIOProperty_VoiceProcessingEnableAGC, but the results are the same.
Is there any option to disable the gain reduction, or is there a better approach to get the echo cancellation working?
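For reference, this is how I'd expect the AGC toggle to be set on a VoiceProcessingIO unit (a sketch; the poster reports the gain reduction persists even with this, so it documents the attempt rather than fixing it):

```swift
import AudioToolbox

// Sketch: disable automatic gain control on a kAudioUnitSubType_VoiceProcessingIO unit.
func disableAGC(on voiceUnit: AudioUnit) -> OSStatus {
    var enable: UInt32 = 0
    return AudioUnitSetProperty(voiceUnit,
                                kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                kAudioUnitScope_Global,
                                0,                  // element 0 (global scope)
                                &enable,
                                UInt32(MemoryLayout<UInt32>.size))
}
```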
Hi,
Replay doesn't work on HLS videos (.m3u8) in iOS 16 Safari when you get to the end of a video. It works on .mp4s, .movs, etc.
I have written a GitHub issue on the video.js repo here: https://github.com/videojs/video.js/issues/8345
But I'm starting to think it's the new native iOS 16 player that is causing issues, and not the library itself.
I was playing a PCM file (24 kHz, 16-bit) with AudioQueue, mixed with another sound (192 kHz, 24-bit), which I'll call sound2.
My AVAudioSession is set up with:
category (AVAudioSessionCategoryPlayback),
options (AVAudioSessionCategoryOptionMixWithOthers | AVAudioSessionCategoryOptionDuckOthers)
When playing the PCM, sound2 should be ducked to a lower volume, as configured. BUT there is a complete mute lasting about 0.5 seconds when sound2 is ducked; only after that 0.5 s mute does the ducked sound come through.
This only happens with sound2 at that quality (192 kHz, 24-bit). If sound2's quality is lower, everything is OK.
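The configuration described above, as Swift code (a sketch that reproduces the setup; it does not fix the 0.5 s dropout):

```swift
import AVFoundation

// Sketch: playback category with mixing and ducking of other audio.
func configureDuckingSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, options: [.mixWithOthers, .duckOthers])
    try session.setActive(true)
}
```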
Hi, when I try to make a Memory of an album, it keeps the photos in the order they appear in that album. However, if I change the music, the order of the photos changes. Is this the expected behavior? Could we have it so that it doesn't change the order?
I am aware that HLS is required for most video streaming use cases (watching a movie, TV show, or YouTube video). This is a requirement for all apps. However, I am confused as to whether this would also apply to video chat/video conferencing apps. It would be inefficient to upload compressed video using rtmp/rtp, decompress it, and create HLS segments. Low latency requirements only make this worse. So, is it permissible to use other protocols for video conferencing use cases? Thanks
I'm trying to use the resourceLoader of an AVAsset to progressively supply media data. I'm unable to, because the loading request asks for the full content (requestsAllDataToEndOfResource = true).
class ResourceLoader: NSObject, AVAssetResourceLoaderDelegate {
    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        if let ci = loadingRequest.contentInformationRequest {
            ci.contentType = // public.mpeg-4
            ci.contentLength = // GBs
            ci.isEntireLengthAvailableOnDemand = false
            ci.isByteRangeAccessSupported = true
        }
        if let dr = loadingRequest.dataRequest {
            if dr.requestedLength > 200_000_000 {
                // memory pressure
                // dr.requestsAllDataToEndOfResource is true
            }
        }
        return true
    }
}
I also tried using a fragmented MP4 created with AVAssetWriter, but that didn't work either. Please let me know if it's possible for the AVAssetResourceLoader to not ask for the full content.
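One detail worth noting (a sketch under the assumption the loader can tolerate incremental delivery): even when requestsAllDataToEndOfResource is true, you don't have to hand over the whole resource at once; respond(with:) can be called repeatedly with chunks, with finishLoading() at the end:

```swift
import AVFoundation

// Sketch: serve a loading request in chunks; each respond(with:) appends at
// the current offset. finishLoading(with:) would report a failure instead.
func serve(_ loadingRequest: AVAssetResourceLoadingRequest, from chunks: [Data]) {
    guard let dataRequest = loadingRequest.dataRequest else { return }
    for chunk in chunks {
        dataRequest.respond(with: chunk)
    }
    loadingRequest.finishLoading()
}
```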
I know that I can uniquely identify a PHAsset on a given device using localIdentifier, but if that asset is synced (through iCloud, say) to another device, how do I uniquely identify that asset across multiple devices?
My app allows users to store their images in the standard photo gallery, but I have no way of referring to them when they sync their app profile to another iOS device with my app installed.
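A sketch using PHCloudIdentifier, which I believe is the API intended for exactly this: a stable identifier for an asset across devices sharing the same iCloud Photo Library (the class is iOS 15+; the mappings API shown here is iOS 16+, as I understand it):

```swift
import Photos

// Sketch: convert local identifiers to cloud identifier strings that can be
// stored in the synced app profile and resolved back on another device with
// localIdentifierMappings(for:).
func cloudIdentifiers(for localIdentifiers: [String]) -> [String] {
    let mappings = PHPhotoLibrary.shared()
        .cloudIdentifierMappings(forLocalIdentifiers: localIdentifiers)
    return mappings.compactMap { _, result in
        try? result.get().stringValue
    }
}
```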
The documentation for this API mentions:
The system uses the current representation and avoids transcoding, if possible.
What are the scenarios in which transcoding takes place?
The reason for asking is that we've had a user reaching out saying they selected a video file from their Photos app, which resulted in a decrease in size from ~110MB to 35MB. We find it unlikely it's transcoding-related, but we want to gain more insights into the possible scenarios.
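If the documentation quoted above is the one for PHPickerConfiguration.AssetRepresentationMode.current (my assumption; the post doesn't name the API), the picker side of avoiding transcoding looks like this sketch, which asks the system to hand over the stored file as-is when possible:

```swift
import PhotosUI

// Sketch: prefer the current (stored) representation to avoid transcoding.
// This is a request, not a guarantee; it doesn't explain whether the
// 110 MB -> 35 MB case was in fact transcoding.
func makePicker() -> PHPickerViewController {
    var config = PHPickerConfiguration()
    config.filter = .videos
    config.preferredAssetRepresentationMode = .current
    return PHPickerViewController(configuration: config)
}
```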