Posts

Using MTAudioProcessingTap with AVPlayer requires ring buffer and format conversion?
What I'm looking to do is load a movie file, have it play back in real time (synchronized with a particular clock), and have the video and audio samples fed to me in the format I want (much like the AVCapture APIs do). When using an AVPlayer + AVPlayerItem, I can create an AVPlayerItemVideoOutput to get the video frames. Great. For audio, though, it requires using an MTAudioProcessingTap added to the player item's audioMix.

What's really odd about MTAudioProcessingTap on a player item is that I'm apparently at the mercy of whatever audio format AVFoundation wants to give me. There's seemingly no guarantee about what the format will look like. Compressed? LPCM? Floating point? Integer? Interleaved? Sample rate? I'm betting it's always at least floating-point LPCM (a canonical/standard format), but what about the sample rate? I have no control over that, and in my situation I need the sample rate to be a specific rate (as well as wanting the audio mixed down to stereo, or split up from mono).

Having no choice over the format is really inconvenient, because it means I have to convert the audio coming out of the tap. The really unfortunate part is that when sample rate conversion is involved, there seems to be a need for an intermediate (small) ring buffer between the tap and the audio converter, because of the potentially fractional ratio of input to output frames in the conversion, and the need to keep the unused input frames from one tap "process" callback around until the next "process" callback where they would be consumed.

Anybody following me? Am I wrong? Is there no simpler way to get the AVPlayerItem's audio fed to me in real time in my specified format? I find it hard to imagine I'm the first to go down this path, but so far I can't find any info from anyone's prior experience.
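For context, a minimal sketch of how a tap ends up on the player item's audioMix (the helper name attachTap is mine, not an AVFoundation API, and it assumes you already have the item's audio AVAssetTrack). The prepare callback is the only place AVFoundation reveals the LPCM format it has chosen:

```swift
import AVFoundation
import MediaToolbox

// Sketch: create a tap, hand it to an AVMutableAudioMix, set the mix on the item.
func attachTap(to item: AVPlayerItem, audioTrack: AVAssetTrack) {
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: nil,
        init: { _, _, _ in },
        finalize: { _ in },
        prepare: { _, _, processingFormat in
            // processingFormat is whatever LPCM layout AVFoundation chose; this is
            // where a converter (and ring buffer) would have to be configured if
            // the format or sample rate isn't the one you actually want.
            _ = processingFormat.pointee
        },
        unprepare: { _ in },
        process: { tap, numberFrames, _, bufferListInOut, numberFramesOut, flagsOut in
            // Pull the source audio in AVFoundation's chosen format.
            _ = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                   flagsOut, nil, numberFramesOut)
        })

    var tapRef: Unmanaged<MTAudioProcessingTap>?
    let status = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                            kMTAudioProcessingTapCreationFlag_PostEffects,
                                            &tapRef)
    guard status == noErr, let tap = tapRef?.takeRetainedValue() else { return }

    let inputParams = AVMutableAudioMixInputParameters(track: audioTrack)
    inputParams.audioTapProcessor = tap
    let audioMix = AVMutableAudioMix()
    audioMix.inputParameters = [inputParams]
    item.audioMix = audioMix
}
```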
Replies: 1 · Boosts: 0 · Views: 1.9k · Activity: Jan ’16
AudioDriverKit Extension for Virtual Devices?
At 3:38-4:00 in the session video, it seems Baek San Chang says that AudioDriverKit will not be allowed to be used for virtual audio devices:

"Keep in mind that the sample code presented is purely for demonstrative purposes and creates a virtual audio driver that is not associated with a hardware device, and so entitlements will not be granted for that kind of use case. For virtual audio drivers, where a device is all that is needed, the audio server plugin driver model should continue to be used."

The mention of sample code is a little confusing. Does he mean the entitlements for hardware access won't be granted for a virtual device? That would seem obvious. But if he means the entitlements for DriverKit extensions (com.apple.developer.driverkit and com.apple.developer.driverkit.allow-any-userclient-access) won't be granted for virtual audio devices, and that this is why AudioServerPlugins should still be used, then that's another story.

Are we allowed to use an AudioDriverKit extension for virtual devices? The benefit of having the extension bundled with the app, rather than requiring an installer, is a significant reason to use an extension if allowed.
Replies: 2 · Boosts: 0 · Views: 1.5k · Activity: Jun ’21
AUAudioUnit – Impossible to handle frame count limitation in render?
Summary: I've created an AUAudioUnit to host some third-party signal processing code and am running into an edge-case limitation: I can only process and supply output audio data (from the internalRenderBlock) if the frame count is an exact multiple of a specific number of frames.

More detail: This third-party code ONLY works with exactly 10 ms of data at a time. For example, with 48 kHz audio it only accepts 480 frames per processing call. If the AUAudioUnit's internalRenderBlock is called with a frame count of 1024, I can use the pullInputBlock to get 480 frames and process them, then another 480 frames and process those, but what should I then do with the remaining 64 frames?

Possible solutions, foiled:

a) There seems to be no way to indicate to the host that I have only consumed 960 frames and will only be supplying 960 frames of output. I thought perhaps the host would notice that the outputData ABL buffers contain fewer frames than the frame count passed into the internalRenderBlock, and advance the timestamp by only that much the next time around, but it does not. So all of the audio must be processed before the block returns, which I can only do if the block is asked to handle an exact multiple of 10 ms of data.

b) I can't buffer up the "remainder" input and process it on the next internalRenderBlock cycle, because all of the output must be provided on return, as discussed in (a).

c) As an alternative, I see no way for the unit to explicitly tell the host how many frames it can process at a time. maximumFramesToRender is the host telling the unit (not the reverse), and either way it's only a maximum, not a minimum as well.

What can I do?
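To make the shape of the problem concrete, here's a minimal sketch of a render block. The names blockSize and processTenMilliseconds are hypothetical stand-ins for the third-party DSP, and copying each chunk to the correct offset in outputData is omitted; the point is only where the leftover frames end up:

```swift
import AudioToolbox
import AVFoundation

// Hypothetical stand-ins: the third-party processor accepts exactly 480 frames
// (10 ms at 48 kHz) per call.
let blockSize: AUAudioFrameCount = 480
func processTenMilliseconds(_ abl: UnsafeMutablePointer<AudioBufferList>) {
    // Third-party DSP would run here on exactly 480 frames.
}

// Sketch of an internalRenderBlock body.
let renderBlock: AUInternalRenderBlock = { actionFlags, timestamp, frameCount,
                                            outputBusNumber, outputData,
                                            realtimeEventListHead, pullInputBlock in
    var framesDone: AUAudioFrameCount = 0
    while frameCount - framesDone >= blockSize {
        var pullFlags: AudioUnitRenderActionFlags = []
        var ts = timestamp.pointee
        ts.mSampleTime += Float64(framesDone)
        // Pull exactly one 10 ms chunk of input and process it.
        _ = pullInputBlock?(&pullFlags, &ts, blockSize, 0, outputData)
        processTenMilliseconds(outputData)
        framesDone += blockSize
    }
    // With frameCount == 1024 this leaves 64 frames that were neither pulled nor
    // processed, and there's no way to tell the host that only 960 frames of
    // output were produced in this render cycle.
    return noErr
}
```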
Replies: 1 · Boosts: 0 · Views: 2.0k · Activity: Jul ’22
WKWebView Payment Request API support in macOS?
When loading the Apple Pay demo page or the WebKit blog's demo page in an out-of-the-box WKWebView, both pages claim the browser does not support Apple Pay. Does this really not work at all on macOS, or is there something environmental that I'm missing?

https://applepaydemo.apple.com
https://webkit.org/blog/8182/introducing-the-payment-request-api-for-apple-pay/
https://webkit.org/blog/9674/new-webkit-features-in-safari-13/

As the last page notes, Apple Pay / Payment Request API support was explicitly added in iOS 13.
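A minimal sketch of one way to probe this from the native side (assuming a plain default WKWebViewConfiguration; the JavaScript expression just mirrors the kind of feature detection the demo pages perform):

```swift
import WebKit

// Load the demo page in a default-configured WKWebView and ask the page whether
// the Payment Request / Apple Pay JS entry points exist.
let webView = WKWebView(frame: .zero, configuration: WKWebViewConfiguration())
webView.load(URLRequest(url: URL(string: "https://applepaydemo.apple.com")!))

// Later, e.g. in webView(_:didFinish:) of a navigation delegate:
webView.evaluateJavaScript(
    "JSON.stringify({ paymentRequest: typeof PaymentRequest, applePay: typeof ApplePaySession })"
) { result, error in
    // On a browser that supports them, both should report "function".
    print(result ?? error ?? "no result")
}
```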
Replies: 0 · Boosts: 0 · Views: 920 · Activity: Aug ’22
Instruments — How to measure large memory copies
What's the best way, in Instruments, to measure the amount of time spent on large memory copies? For a very simple example, when directly calling memcpy? Memory copying does not show up in the Time Profiler; it's not a VM cache miss or zeroing event, so it doesn't show up in the VM-related instruments either; it doesn't (as far as I can tell) show up in the System Trace; and there aren't any other obvious choices.
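For reference, a minimal sketch of the kind of copy in question. Wrapping it in an os_signpost interval (which shows up in the Points of Interest / os_signpost instrument) is just one way to at least bracket the copy in a trace, not necessarily the measurement tool being asked about; the subsystem string is made up:

```swift
import Foundation
import os.signpost

// A large, direct memcpy, bracketed by a signpost interval so the copy at least
// appears as a named region in an Instruments trace.
let log = OSLog(subsystem: "com.example.copytest", category: .pointsOfInterest)
let byteCount = 512 * 1024 * 1024  // 512 MB

let src = UnsafeMutableRawPointer.allocate(byteCount: byteCount, alignment: 16)
let dst = UnsafeMutableRawPointer.allocate(byteCount: byteCount, alignment: 16)
memset(src, 1, byteCount)

let signpostID = OSSignpostID(log: log)
os_signpost(.begin, log: log, name: "big memcpy", signpostID: signpostID)
memcpy(dst, src, byteCount)
os_signpost(.end, log: log, name: "big memcpy", signpostID: signpostID)

src.deallocate()
dst.deallocate()
```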
Replies: 1 · Boosts: 0 · Views: 817 · Activity: Feb ’24
Big ColorSync Bug? macOS Interprets Rec 709 Profile Differently Between 15 and 14
ColorSync throughout the system interprets the Rec. ITU-R BT.709-5 color profile differently on macOS 15 than it did on macOS 14, leading to severe color differences. This is a breaking, problem-causing color change. Here's a comparison of one way the results can appear different.

Steps to Reproduce

1) Open the image above (named here as sRGB_Bars.png) in ColorSync Utility. (This should be an sRGB-profiled image, if the forums don't mess with that.) With Digital Color Meter open and set to "Display in sRGB" [0-255], you can see the gray bars progress left-to-right as 0, 23, 46, 69, 92, 115, etc. Those are the correct reference values.

2) At the bottom of the window in ColorSync Utility, use [Match to Profile] [Display -> Rec. ITU-R BT.709-5], then click the Apply button. You can verify the image now uses the Rec 709 profile with the (i) Get Info button in the toolbar.

3) Use "Save As…" to save the image with a different name.

4) Do the steps above on macOS 14 and macOS 15 to create two images: Rec709_CreatedOnMacOS14 and Rec709_CreatedOnMacOS15.

5) Compare the two images on BOTH operating system versions, and you'll see they are significantly different from each other.

Rec709_CreatedOnMacOS14.png when viewed on macOS 15 has gray values of: [0, 1, 18, 43, 67, ...]
Rec709_CreatedOnMacOS15.png when viewed on macOS 15 has gray values of: [0, 51, 72, 94, 115, ...]

These are MASSIVE differences.

Significance

This is not just a problem that affects ColorSync Utility or Preview, etc. This same gamma interpretation difference affects the Core Video, Core Image, Video Toolbox, etc. pipelines as well. That gets complicated to talk about, and this example is the simplest I can boil it down to.

Conclusion

Significant bug? What's going on?
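Since the Significance section mentions the Core Image / Core Video pipelines, here is a minimal sketch (my own illustration, not part of the steps above) of one way the same sRGB to Rec. 709 matching can be exercised programmatically, so the resulting numbers can be compared across the two macOS versions:

```swift
import CoreGraphics

// Convert one of the reference gray values (46 in sRGB, 0-255) into the
// ITU-R BT.709 color space and print the resulting components scaled to 0-255.
// Running this on macOS 14 vs. macOS 15 should show whether plain CGColor
// matching exhibits the same gamma interpretation difference.
let srgb = CGColorSpace(name: CGColorSpace.sRGB)!
let rec709 = CGColorSpace(name: CGColorSpace.itur_709)!

let v: CGFloat = 46.0 / 255.0
let gray = CGColor(colorSpace: srgb, components: [v, v, v, 1.0])!

if let converted = gray.converted(to: rec709, intent: .relativeColorimetric, options: nil),
   let comps = converted.components {
    print(comps.map { ($0 * 255.0).rounded() })
}
```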
Replies: 2 · Boosts: 0 · Views: 267 · Activity: Oct ’24