Hello,
This is about the Get Catalog Top Charts Genres endpoint:
GET https://api.music.apple.com/v1/catalog/{storefront}/genres
I noticed that for some storefronts, no genres are returned. You can try with the following storefront values (a sample request follows the list):
France (fr)
Poland (pl)
Kyrgyzstan (kg)
Uzbekistan (uz)
Turkmenistan (tm)
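For reference, here is roughly how I am calling it (a minimal sketch in Swift; <DEVELOPER_TOKEN> is a placeholder for a valid Apple Music developer token):

import Foundation

let storefront = "fr" // same result with "pl", "kg", "uz", "tm"
let url = URL(string: "https://api.music.apple.com/v1/catalog/\(storefront)/genres")!
var request = URLRequest(url: url)
request.setValue("Bearer <DEVELOPER_TOKEN>", forHTTPHeaderField: "Authorization")

URLSession.shared.dataTask(with: request) { data, _, _ in
    // For the storefronts listed above, the "data" array in the response comes back empty.
    if let data, let body = String(data: data, encoding: .utf8) {
        print(body)
    }
}.resume()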
Is that a bug or is it on purpose?
Thank you.
I found a phenomenon that can be reproduced consistently.
Suppose I take a photo with the triple-camera system while the scene is moving, or while I move the phone, say horizontally. If I press the shutter when I am aimed at an object, call that moment time T: the frame in the viewfinder at that moment is T0, and the photo that is produced corresponds to roughly T + 100 ms.
If I take the photo with a single-camera device, moving the phone at the same speed and pressing the shutter when aimed at the same object, the photo that is produced corresponds to roughly T + 400 ms.
Let me describe the problem another way.
Suppose a row of cards is laid out horizontally on a table, numbered from left to right: 0, 1, 2, 3, 4, 5, 6...
Aim the camera at the number 0, then pan to the right at a constant speed so that increasing numbers pass through the viewfinder. When the viewfinder is aimed at the number 5, press the shutter.
With the triple camera, the resulting photo shows roughly the number 6; with the single camera, it shows roughly the number 9.
This suggests the triple camera captures the photo sooner after the shutter press. Why is that? Any explanation?
I tried stacking 30 RAW exposures of 1 second each, but the quality is far inferior to a 30-second long exposure in Night mode.
Hi!
I am creating an aumi AUv3 extension and I am trying to achieve simultaneous connections to multiple other AVAudioNodes. I would like to know whether it is possible to route the MIDI to different outputs inside the render process of the AUv3.
I am using connectMIDI(_:to:format:eventListBlock:) to connect the output of the AUv3 to multiple AVAudioNodes. However, when I send MIDI out of the AUv3, it gets sent to all of the audio nodes connected to it. I can't find any documentation on how to route the MIDI to only one of the connected nodes. Is this possible?
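Roughly what the connection code looks like right now (a condensed sketch; the node names are hypothetical stand-ins for the actual nodes in my graph):

import AVFoundation

func wireMIDI(engine: AVAudioEngine, auv3Node: AVAudioUnit,
              samplerA: AVAudioUnitSampler, samplerB: AVAudioUnitSampler) {
    // Each call connects the AUv3's MIDI output to one destination, but every event
    // sent from the render block still reaches both destinations.
    engine.connectMIDI(auv3Node, to: samplerA, format: nil, eventListBlock: nil)
    engine.connectMIDI(auv3Node, to: samplerB, format: nil, eventListBlock: nil)
}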
Hello,
I'm trying to implement deferred photo processing in my photo capture app. Currently, after I take a photo, I pass it through a CIFilter. With deferred photo processing, where would I apply the CIFilter to the resulting photo, given that there is no way for me to know when the system has finished processing it?
If I have to do it in the foreground in my app every time, how do I prevent the scenario where the user takes a photo, heads straight to the Photos app, and sees the image without the filter?
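For context, this is roughly the kind of filtering step I mean (a minimal sketch; the sepia filter and the JPEG output are just placeholders for my actual filter chain):

import CoreImage

func applyFilter(to photoData: Data) -> Data? {
    guard let input = CIImage(data: photoData) else { return nil }

    // Placeholder filter; the real app uses its own CIFilter chain.
    let filter = CIFilter(name: "CISepiaTone")!
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(0.8, forKey: kCIInputIntensityKey)

    guard let output = filter.outputImage else { return nil }

    let context = CIContext()
    let colorSpace = input.colorSpace ?? CGColorSpaceCreateDeviceRGB()
    return context.jpegRepresentation(of: output, colorSpace: colorSpace)
}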
Hi all,
I am working on an app that plays live prompts, plus a voice channel that sometimes becomes active. Right now I am using two different AVAudioSession configurations, so that we only switch to a mic-enabled mode when we actually need input from the mic. These are defined below.
When using just the device hardware, everything works as expected: the modes change and playback continues as needed. However, when using Bluetooth devices such as AirPods, where the switch from A2DP to HFP is needed, I get an AVAudioEngineConfigurationChange notification. In response I tear down the engine and create a new one with the same two player nodes. This does work fine and there are no crashes, except that all the audio I had scheduled on a player node has now been cleared. All the completion blocks registered with .dataPlayedBack fire the second this event happens, leaving me in a state where I have a valid engine setup again but no idea what actually played or was errantly marked as played.
Is this the expected behavior when getting a configuration change notification?
Adding some information below about my audio graph for context.
I disconnect all parts of the graph when I get this event, and set the same graph up on the new engine:
private var inputEngine: AVAudioEngine
private var audioEngine: AVAudioEngine
private let voicePlayerNode: AVAudioPlayerNode
private let promptPlayerNode: AVAudioPlayerNode

audioEngine.attach(voicePlayerNode)
audioEngine.attach(promptPlayerNode)

audioEngine.connect(
    voicePlayerNode,
    to: audioEngine.mainMixerNode,
    format: voiceNodeFormat
)
audioEngine.connect(
    promptPlayerNode,
    to: audioEngine.mainMixerNode,
    format: nil
)
Here is an example of how I am scheduling playback, and where that completion fires even if the buffer didn't actually play:
private func scheduleVoicePlayback(_ id: AudioPlaybackSample.Id, buffer: AVAudioPCMBuffer) async throws {
    guard !voicePlayerQueue.samples.contains(where: { $0 == id }) else {
        return
    }

    seprateQueue.append(buffer)

    if !isVoicePlaying {
        activateAudioSession()
    }

    voicePlayerQueue.samples.append(id)

    if !voicePlayerNode.isPlaying {
        voicePlayerNode.play()
    }

    if let convertedBuffer = buffer.convert(to: voiceNodeFormat) {
        await voicePlayerNode.scheduleBuffer(convertedBuffer, completionCallbackType: .dataPlayedBack)
    } else {
        throw AudioPlaybackError.failedToConvert
    }

    voiceSampleHasBeenPlayed(id)
}
And lastly, my audio session configuration, in case it's useful:
extension AVAudioSession {

    static func setDefaultCategory() {
        do {
            try sharedInstance().setCategory(
                .playback,
                options: [
                    .duckOthers, .interruptSpokenAudioAndMixWithOthers
                ]
            )
        } catch {
            print("Failed to set default category? \(error.localizedDescription)")
        }
    }

    static func setVoiceChatCategory() {
        do {
            try sharedInstance().setCategory(
                .playAndRecord,
                options: [
                    .defaultToSpeaker,
                    .allowBluetooth,
                    .allowBluetoothA2DP,
                    .duckOthers,
                    .interruptSpokenAudioAndMixWithOthers
                ]
            )
        } catch {
            print("Failed to set category? \(error.localizedDescription)")
        }
    }
}
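For completeness, this is roughly how I listen for the configuration change (a condensed sketch; the rebuild closure stands in for my actual teardown and rebuild of the graph):

import AVFoundation

func observeConfigurationChanges(for engine: AVAudioEngine, rebuild: @escaping () -> Void) -> NSObjectProtocol {
    // Fires when the engine's configuration changes, e.g. on the A2DP <-> HFP switch with AirPods.
    return NotificationCenter.default.addObserver(
        forName: .AVAudioEngineConfigurationChange,
        object: engine,
        queue: .main
    ) { _ in
        // By the time this fires, every buffer scheduled on the player nodes has already
        // reported .dataPlayedBack, even though it never reached the output.
        rebuild()
    }
}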
Hello Apple Engineers,
Specific Issue:
I am working on a video recording feature in my SwiftUI app, and I am trying to record 4K60 video in ProRes Log format using the iPhone's internal storage. Here's what I have tried so far:
I am using AVCaptureSession with AVCaptureMovieFileOutput and configuring the session to support 4K resolution and ProRes codec.
The sessionPreset is set to .inputPriority, and the video device is configured with settings such as disabling HDR to prepare for Log.
However, when attempting to record 4K60 ProRes video, I get the error: "Capturing 4k60 with ProRes codec on this device is supported only on external storage device."
This error seems to imply that 4K60 ProRes recording is restricted to external storage devices. But I am trying to record internally on devices such as the iPhone 15 Pro Max, which has native support for ProRes encoding.
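Here is a condensed sketch of the configuration that triggers the error (simplified from my actual code; session setup, error handling, and the format-selection criteria are approximations):

import AVFoundation

func configureProRes(session: AVCaptureSession,
                     device: AVCaptureDevice,
                     movieOutput: AVCaptureMovieFileOutput) throws {
    session.sessionPreset = .inputPriority

    try device.lockForConfiguration()
    // Pick a 4K format that supports 60 fps and Apple Log.
    if let format = device.formats.first(where: { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        let supports60 = format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 60 }
        return dims.width == 3840 && dims.height == 2160
            && supports60
            && format.supportedColorSpaces.contains(.appleLog)
    }) {
        device.activeFormat = format
        device.activeColorSpace = .appleLog
        device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 60)
        device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 60)
    }
    device.unlockForConfiguration()

    // Ask the movie output for ProRes.
    if let connection = movieOutput.connection(with: .video),
       movieOutput.availableVideoCodecTypes.contains(.proRes422) {
        movieOutput.setOutputSettings([AVVideoCodecKey: AVVideoCodecType.proRes422],
                                      for: connection)
    }
    // Recording to a URL on internal storage then fails with:
    // "Capturing 4k60 with ProRes codec on this device is supported only on external storage device."
}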
Here are my questions:
Is it technically possible to record 4K60 ProRes Log video internally on supported iPhones (for example: iPhone 15 Pro Max)?
There are some third-party apps (for example, Blackmagic 👍🏻) that can save 4K60 ProRes Log video to the iPhone's internal storage. If internal saving is supported, what additional configuration of the AVCaptureSession, or what other technique, is needed to get past this limitation?
If anyone has successfully saved 4K60 ProRes Log video to iPhone internal storage, your guidance would be highly appreciated.
Thank you for your help!
When I present the limited library picker with
[[PHPhotoLibrary sharedPhotoLibrary] presentLimitedLibraryPickerFromViewController:self];
the navigation bar background color and the navigation bar text color are both white, which makes the bar unreadable.
Using
[[UINavigationBar appearanceWhenContainedInInstancesOfClasses:@[UIImagePickerController.class]] setTintColor:[UIColor blackColor]];
has no effect.
Hi,
I'm having difficulty figuring out how I can reliably detect whether the macOS system PHPhotoLibrary is available.
If I place the system Photo Library on an external drive and then eject it, other apps such as Photos will tell me on startup that the library isn't available. How can I replicate this behavior?
The only API I can find for this detection on startup is PHPhotoLibrary.shared().unavailabilityReason. However, it always returns nil.
Another strange behavior: if I register a PHPhotoLibraryAvailabilityObserver class on startup while the library is available and then eject the drive, I do get a notification via photoLibraryDidBecomeUnavailable, but directly after the call the app is terminated. This prevents the app from performing any kind of graceful termination. Is this the expected behavior? It would make sense for it to be up to the developer to decide what happens when the library becomes unavailable.
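For reference, this is roughly how I register the observer (a minimal sketch; the class name is just for illustration):

import Photos

final class LibraryAvailabilityWatcher: NSObject, PHPhotoLibraryAvailabilityObserver {

    override init() {
        super.init()
        // Always prints "nil" for me at startup, even when the library volume is gone.
        print("unavailabilityReason:", String(describing: PHPhotoLibrary.shared().unavailabilityReason))
        PHPhotoLibrary.shared().register(self)
    }

    func photoLibraryDidBecomeUnavailable(_ photoLibrary: PHPhotoLibrary) {
        // Called when the drive is ejected, but the app is terminated right after
        // this returns, so there is no chance for a graceful shutdown.
        print("Library became unavailable:", String(describing: photoLibrary.unavailabilityReason))
    }
}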
Thanks
We have developed a simple video player app in Swift for macOS, using the AVFoundation framework. A special feature of this app is the ability to play video backward at speeds like -0.25x, -0.5x, and -1.0x. The MP4 video file is played directly from the local file system; the video codec is H.264 and the audio codec is AAC. The video files are huge, around 10 GB and about 3 hours long.
Playing video in the reverse direction works well on a MacBook Air with an M1 or M2 chip. When we run the same app with the same video on a MacBook Air with an M3 chip, reverse playback is much worse. Playback can stutter badly, especially in the latter part of the video. The same behavior also occurs in Apple's QuickTime Player when playing in reverse at -1x speed. What's even stranger is that at one point in time playback is totally smooth, but after a while it stutters again. For example, this morning reverse playback worked 100% smoothly, then I rebooted the Mac and tried again: the result was stuttering. After the Mac had stayed idle for several hours I tried reverse playback again: smooth performance! My conclusion: M3 playback works fine if the stars in the sky are aligned correctly. :-)
So it's not only our app; QuickTime Player shows exactly the same behavior, and only with the M3 chip. The same symptom appears on another similar M3 Mac, so it can't be a single faulty unit. At the same time, the open-source video player IINA can reverse-play the video just fine on the same Mac.
All the Macs otherwise have an identical configuration: 16 GB RAM and macOS 15.1.1.
Have you experienced the same problem? Any chance to solve this problem?
I really hope that the M4 chip Mac is behaving better here.
I want to use AVAssetWriter to output video files with a constant frame rate (CFR) or variable bitrate (VBR).
As far as I could find by searching, I couldn't figure it out.
How do I implement these settings?
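In case it helps frame the question, here is the kind of configuration I have been experimenting with (a sketch; as far as I understand, AVVideoExpectedSourceFrameRateKey is only an encoder hint and a constant frame rate ultimately depends on the presentation timestamps of the samples I append, so I may be missing something):

import AVFoundation

// Hypothetical output settings for an H.264 AVAssetWriterInput.
let outputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080,
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: 6_000_000,    // target average bitrate in bits per second
        AVVideoExpectedSourceFrameRateKey: 30,  // hint to the encoder about the source frame rate
        AVVideoMaxKeyFrameIntervalKey: 30       // one keyframe per second at 30 fps
    ]
]

let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: outputSettings)
videoInput.expectsMediaDataInRealTime = true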
I use navigator.mediaDevices.getUserMedia to wake up the camera. When I use the constraint {video:{facingMode: {exact: 'user'}}}, I get an "invalid constraint" error.
I tried running the latest CaptureSample, selected local/canonical HDR, added it to the screen output, then stopped the capture. It is supposed to save the file, but no file is added at all.
It works for SDR, just not for HDR. How do I get HDR working?
Hi, I'm working on an app with an infinite scrollable video feed, similar to TikTok or Instagram Reels. I initially thought it would be a good idea to cache videos in the file system, but after reading this post it seems that caching videos on the file system is not recommended: https://forums.developer.apple.com/forums/thread/649810#:~:text=If%20the%20videos%20can%20be%20reasonably%20cached%20in%20RAM%20then%20we%20would%20recommend%20that.%20Regularly%20caching%20video%20to%20disk%20contributes%20to%20NAND%20wear
The reason I am hesitant to cache videos in memory is that this adds up pretty quickly and increases memory pressure for my app.
After seeing how much Documents & Data storage Instagram uses, it's obvious they are caching videos on the file system. So I was wondering: what is the current best practice for caching in these kinds of apps?
In the docs it looks like there is something that allows you to set a background replacement image for the camera in Control Center (like on a Mac).
However, I can't find any documentation on it beyond this reference in the Apple docs: https://developer.apple.com/documentation/avfoundation/avcapturedevice/isbackgroundreplacementactive?language=objc
Does anyone have any advice on enabling backgrounds for the camera system-wide?
There seems to be an issue in iOS 18 / macOS 15 related to image thumbnail generation and/or HEIC.
We are transcoding JPEG images to HEIC when they are loaded into our app (HEIC has a much lower memory footprint when loaded by Core Image, for some reason). We use Image I/O for that:
guard let source = CGImageSourceCreateWithURL(inputURL, nil),
      let destination = CGImageDestinationCreateWithURL(outputURL, UTType.heic.identifier as CFString, 1, nil) else {
    throw <error>
}

let primaryImageIndex = CGImageSourceGetPrimaryImageIndex(source)
CGImageDestinationAddImageFromSource(destination, source, primaryImageIndex, nil)
When we use CGImageDestinationAddImageFromSource, we get the following warnings on the console:
createImage:1445: *** ERROR: bad image size (0 x 0) rb: 0
CGImageSourceCreateThumbnailAtIndex:5195: *** ERROR: CGImageSourceCreateThumbnailAtIndex[0] - 'HJPG' - failed to create thumbnail [-67] {alw:-1, abs: 1 tra:-1 max:4620}
writeImageAtIndex:1025: ⭕️ ERROR: '<app>' is trying to save an opaque image (4620x3466) with 'AlphaPremulLast'. This would unnecessarily increase the file size and will double (!!!) the required memory when decoding the image --> ignoring alpha.
It seems that CGImageDestinationAddImageFromSource is trying to extract/create a thumbnail, which fails somehow.
I re-wrote the last part like this:
guard let primaryImage = CGImageSourceCreateImageAtIndex(source, primaryImageIndex, nil),
      let properties = CGImageSourceCopyPropertiesAtIndex(source, primaryImageIndex, nil) else {
    throw <error>
}

CGImageDestinationAddImage(destination, primaryImage, properties)
This doesn't cause any warnings.
An issue that might be related has been reported here.
I've also heard from others having issues with CGImageSourceCreateThumbnailAtIndex.
I'm developing an app that plays a WAV file through the Lightning headphone adapter. When I connect the adapter, a prompt appears asking whether to select "Headphones" or "Other Device". What does this setting actually do? I've noticed that it affects the maximum amplitude (volume) of the WAV output. Could you explain the precise difference between these two modes?
What projection method does the immersive space use: ERP (equirectangular), fisheye, or cubemap?
We want to achieve the same effect as Apple's immersive videos.
I was trying to set a custom audio output device for generated audio on Mac Catalyst.
I am using the following call:

let status = AudioUnitSetProperty(outputUnit,
                                  kAudioOutputUnitProperty_CurrentDevice,
                                  kAudioUnitScope_Global,
                                  0,
                                  &outputDeviceID,
                                  UInt32(MemoryLayout<AudioDeviceID>.size))

On Mac Catalyst, kAudioOutputUnitProperty_CurrentDevice is rejected and status is -10879 (kAudioUnitErr_InvalidProperty), indicating an error.
STEPS TO REPRODUCE
Set the run destination to macOS and run the program. "AudioUnitSetProperty: 0" should be printed, indicating that it works fine.
Set the run destination to Mac Catalyst and run the program. "Error setting output device: -10879" should be printed, indicating an error.
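For context, here is a self-contained snippet that reproduces the call for me (a sketch only; the actual program may obtain the output unit differently, and here I grab it from an AVAudioEngine's outputNode, with outputDeviceID as a placeholder):

import AVFoundation
import AudioToolbox

// outputDeviceID is the AudioDeviceID of the device I want to route to
// (obtained elsewhere via Core Audio); 0 is just a placeholder here.
var outputDeviceID: AudioDeviceID = 0

let engine = AVAudioEngine()
if let outputUnit = engine.outputNode.audioUnit {
    let status = AudioUnitSetProperty(outputUnit,
                                      kAudioOutputUnitProperty_CurrentDevice,
                                      kAudioUnitScope_Global,
                                      0,
                                      &outputDeviceID,
                                      UInt32(MemoryLayout<AudioDeviceID>.size))
    if status == 0 {
        print("AudioUnitSetProperty: \(status)")            // what I see on macOS
    } else {
        print("Error setting output device: \(status)")     // -10879 under Mac Catalyst
    }
}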
I'm trying to add metadata every second during video capture in the Swift sample app "AVMultiCamPiP": a simple string that changes every second, written by a function triggered by a Timer. I can't get it to work no matter how I arrange it; it always ends up with the error "Cannot create a new metadata adaptor with an asset writer input that has already started writing".
This is the setup section:
// Add a metadata input
let assetWriterMetaDataInput = AVAssetWriterInput(mediaType: .metadata, outputSettings: nil, sourceFormatHint: AVTimedMetadataGroup().copyFormatDescription())
assetWriterMetaDataInput.expectsMediaDataInRealTime = true
assetWriter.add(assetWriterMetaDataInput)
self.assetWriterMetaDataInput = assetWriterMetaDataInput
This is the timed metadata creation which gets triggered every second:
let newNoteMetadataItem = AVMutableMetadataItem()
newNoteMetadataItem.value = "Some string" as (NSCopying & NSObjectProtocol)?

let metadataItemGroup = AVTimedMetadataGroup(
    items: [newNoteMetadataItem],
    timeRange: CMTimeRangeMake(start: CMClockGetTime(CMClockGetHostTimeClock()),
                               duration: CMTime.invalid)
)
movieRecorder?.recordMetaData(meta: metadataItemGroup)
This function is supposed to add the metadata to the track:
func recordMetaData(meta: AVTimedMetadataGroup) {
    guard isRecording,
          let assetWriter = assetWriter,
          assetWriter.status == .writing,
          let input = assetWriterMetaDataInput,
          input.isReadyForMoreMediaData else {
        return
    }

    // A new adaptor is created here on every call, after the writer has already started writing.
    let metadataAdaptor = AVAssetWriterInputMetadataAdaptor(assetWriterInput: input)
    metadataAdaptor.append(meta)
}
I have an older code example in Objective-C that works OK, but it uses AVCaptureMetadataInput's appendTimedMetadataGroup and writes to an identifier called "quickTimeMetadataLocationNote". I'd like to do something similar in the Swift code above ...
All suggestions are appreciated!
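A minimal sketch of what I am considering trying next, based on the wording of the error: create the adaptor once, right after adding the metadata input and before startWriting(), and keep it around (this is my assumption, not something I have confirmed works):

import AVFoundation

final class MetadataRecorder {
    private let assetWriter: AVAssetWriter
    private let metadataInput: AVAssetWriterInput
    private let metadataAdaptor: AVAssetWriterInputMetadataAdaptor

    init(assetWriter: AVAssetWriter) {
        self.assetWriter = assetWriter

        let input = AVAssetWriterInput(mediaType: .metadata,
                                       outputSettings: nil,
                                       sourceFormatHint: AVTimedMetadataGroup().copyFormatDescription())
        input.expectsMediaDataInRealTime = true
        assetWriter.add(input)
        self.metadataInput = input

        // Created once, before the writer starts writing, and reused for every append.
        self.metadataAdaptor = AVAssetWriterInputMetadataAdaptor(assetWriterInput: input)
    }

    func recordMetaData(meta: AVTimedMetadataGroup) {
        guard assetWriter.status == .writing,
              metadataInput.isReadyForMoreMediaData else { return }
        metadataAdaptor.append(meta)
    }
}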