Posts

Post not yet marked as solved
1 Reply
453 Views
What's the best way, in Instruments, to measure the amount of time spent on large memory copies? For a very simple example, when directly calling memcpy? Memory copying does not show up in the Time Profiler; it's not a VM cache miss or zeroing event, so it doesn't show up there either; as far as I can tell it doesn't show up in the System Trace; and there aren't any other obvious choices.
Posted by swillits.
Post not yet marked as solved
2 Replies
1.4k Views
At 3:38-4:00 in the session video, Baek San Chang seems to say that AudioDriverKit will not be allowed to be used for virtual audio devices: "Keep in mind that the sample code presented is purely for demonstrative purposes and creates a virtual audio driver that is not associated with a hardware device, and so entitlements will not be granted for that kind of use case. For virtual audio driver, where device is all that is needed, the audio server plugin driver model should continue to be used." The mention of sample code is a little confusing. Does he mean that the entitlements for hardware access won't be granted for a virtual device? That would seem obvious. But if he means that the DriverKit entitlements (com.apple.developer.driverkit and com.apple.developer.driverkit.allow-any-userclient-access) won't be granted for virtual audio devices, and that this is why AudioServerPlugins should still be used, then that's another story. Are we allowed to use an AudioDriverKit extension for virtual devices? Having the extension bundled with the app, rather than requiring an installer, is a significant reason to use an extension if it's allowed.
Posted by swillits.
Post marked as solved
1 Reply
1.8k Views
Summary: I've created an AUAudioUnit to host some third-party signal processing code and am running into an edge-case limitation: I can only process and supply output audio data (from the internalRenderBlock) if it's an exact multiple of a specific number of frames.

More detail: This third-party code ONLY works with exactly 10 ms of data at a time. For example, with 48 kHz audio, it accepts exactly 480 frames on each processing call. If the AUAudioUnit's internalRenderBlock is called with 1024 as the frame count, I can use the pullInputBlock to get 480 frames, process them, pull another 480 frames, and process those, but what should I then do with the remaining 64 frames?

Possible solutions, foiled:

a) There seems to be no way to indicate to the host that I have only consumed 960 frames and will only be supplying 960 frames of output. I thought perhaps the host would observe that the outputData ABL buffers hold fewer frames than the frame count passed into the internalRenderBlock, and would advance the timestamp only by that much the next time around, but it does not. So all of the audio must be processed before the block returns, which I can only do if the block is asked to handle an exact multiple of 10 ms of data.

b) I can't buffer up the "remainder" input and process it on the next internalRenderBlock cycle, because all of the output must be provided on return, as discussed in (a).

c) I see no way for the unit to explicitly indicate to the host how many frames the unit can process at a time. maximumFramesToRender is the host telling the unit (not the reverse), and either way it's only a maximum, not a minimum as well.

What can I do?
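One common workaround for fixed-block DSP inside a variable-block render callback, sketched below in plain C (a hypothetical adapter of my own, not API from this post): accumulate input in a FIFO, process whole 480-frame blocks as they become available, and prime the output FIFO with one block of silence so every render call can return the full requested frame count, at the cost of 10 ms of fixed latency. The memmoves are for clarity; a real-time implementation would use a proper ring buffer.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch: wrap a 480-frame-only processor so it can serve
   arbitrary render sizes by adding one block (10 ms at 48 kHz) of latency. */
#define BLOCK 480
#define FIFO_CAP (BLOCK * 8)

typedef struct {
    float in[FIFO_CAP], out[FIFO_CAP];
    int in_count, out_count;
} BlockAdapter;

static void adapter_init(BlockAdapter *a) {
    memset(a, 0, sizeof *a);
    a->out_count = BLOCK;   /* prime with silence: fixed 10 ms latency */
}

/* Stand-in for the third-party DSP that only accepts 480 frames. */
static void process_block(const float *in, float *out) {
    for (int i = 0; i < BLOCK; i++) out[i] = in[i] * 0.5f;
}

/* Called from the render callback with any frame count. */
static void adapter_render(BlockAdapter *a, const float *input,
                           float *output, int frames) {
    assert(a->in_count + frames <= FIFO_CAP);
    memcpy(a->in + a->in_count, input, frames * sizeof(float));
    a->in_count += frames;

    while (a->in_count >= BLOCK) {          /* drain whole 10 ms blocks */
        process_block(a->in, a->out + a->out_count);
        a->out_count += BLOCK;
        a->in_count -= BLOCK;
        memmove(a->in, a->in + BLOCK, a->in_count * sizeof(float));
    }

    assert(a->out_count >= frames);         /* latency guarantees this */
    memcpy(output, a->out, frames * sizeof(float));
    a->out_count -= frames;
    memmove(a->out, a->out + frames, a->out_count * sizeof(float));
}
```

Because the output FIFO always holds one block more than the carried input remainder, the adapter can satisfy any requested frame count up to the FIFO capacity; the 64 leftover frames from a 1024-frame call simply wait in the input FIFO for the next call.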
Posted by swillits.
Post not yet marked as solved
0 Replies
774 Views
When loading the Apple Pay demo page or the WebKit blog's demo page in an out-of-the-box WKWebView, both pages claim the browser does not support Apple Pay. Does this really not work at all on macOS, or is there something environmental that I'm missing? https://applepaydemo.apple.com https://webkit.org/blog/8182/introducing-the-payment-request-api-for-apple-pay/ https://webkit.org/blog/9674/new-webkit-features-in-safari-13/ As the last page notes, Apple Pay / Payment Request API support was explicitly added in iOS 13.
Posted by swillits.
Post not yet marked as solved
1 Reply
1.7k Views
What I'm looking to do is load a movie file, have it play back in real time (synchronized with a particular clock), and have video and audio samples fed to me in the format I want, much like the AVCapture APIs do. When using an AVPlayer + AVPlayerItem, I can create an AVPlayerItemVideoOutput to get the video frames. Great. For audio, though, it requires using an MTAudioProcessingTap added to the player item's audioMix.

What's really odd about MTAudioProcessingTap on a player item is that I'm apparently at the mercy of whatever audio format AVFoundation wants to give me. There's seemingly no guarantee what the format will look like. Compressed? LPCM? Floating point? Integer? Interleaved? Sample rate? I'm betting it's always at least floating-point LPCM (a canonical/standard format), but what about sample rate? That I have no control over, and in my situation I need the sample rate to be a specific rate (as well as wanting it mixed down to stereo, or split up from mono).

Having no choice over the format is really inconvenient, because it means I have to convert the audio coming out of the tap. The really unfortunate part is that when sample rate conversion is involved, there seems to be a need for an intermediate (small) ring buffer between the tap and the audio converter, because of the potentially fractional ratio of input:output frames in the conversion, and the need to keep the unused input frames from one tap "process" callback around until the next "process" callback where they would be used.

Anybody following me? Am I wrong? Is there no simpler way to get the AVPlayerItem's audio fed to me in real time in my specified format? I find it hard to imagine I'm the first going down this path, but so far I can't find any info from anyone's prior experience.
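To illustrate the fractional-ratio bookkeeping described above, here is a self-contained C sketch (hypothetical names, not MTAudioProcessingTap or AVFoundation API) of why a remainder must be carried from one tap callback to the next during sample rate conversion:

```c
/* Illustrative sketch: with a fractional sample-rate ratio, each tap
   callback's worth of input converts to a non-integer number of output
   frames, so a remainder must be carried between "process" callbacks.
   This is the bookkeeping the intermediate ring buffer absorbs. */
typedef struct {
    long in_rate;   /* e.g. 44100, the format the tap hands us  */
    long out_rate;  /* e.g. 48000, the format we actually want  */
    long carry;     /* leftover, in units of 1/in_rate of an output frame */
} RateTracker;

/* Whole output frames producible from in_frames of new input. */
static long output_frames_for(RateTracker *t, long in_frames) {
    long total = t->carry + in_frames * t->out_rate;
    long out   = total / t->in_rate;   /* whole frames out this callback */
    t->carry   = total % t->in_rate;   /* fractional part carried over   */
    return out;
}
```

With 44100 -> 48000 conversion and 512-frame callbacks, each callback yields 557 or 558 output frames depending on the carried remainder; the frames represented by that remainder are exactly what has to sit in the ring buffer between callbacks.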
Posted by swillits.