Instruments — How to measure large memory copies
What's the best way in Instruments to measure the amount of time spent on large memory copies? As a very simple example, a direct call to memcpy. Memory copying doesn't show up in the Time Profiler; it isn't a VM cache-miss or zeroing event, so it doesn't appear there either; as far as I can tell it doesn't show up in the System Trace; and there aren't any other obvious choices.
1 reply · 0 boosts · 541 views · Feb ’24
WKWebView Payment Request API support in macOS?
When loading the Apple Pay demo page or the WebKit blog's demo page in an out-of-the-box WKWebView, both pages claim the browser does not support Apple Pay. Does this really not work at all on macOS, or is there something in my environment that I'm missing?

https://applepaydemo.apple.com
https://webkit.org/blog/8182/introducing-the-payment-request-api-for-apple-pay/
https://webkit.org/blog/9674/new-webkit-features-in-safari-13/

As the last page notes, Apple Pay / Payment Request API support was explicitly added in iOS 13.
0 replies · 0 boosts · 802 views · Aug ’22
AUAudioUnit – Impossible to handle frame count limitation in render?
Summary: I've created an AUAudioUnit to host some third-party signal processing code and am running into an edge-case limitation: I can only process and supply output audio data (from the internalRenderBlock) if the frame count is an exact multiple of a specific number of frames.

More detail: This third-party code ONLY works with exactly 10 ms of data at a time. For example, with 48 kHz audio, it only accepts 480 frames per processing call. If the AUAudioUnit's internalRenderBlock is called with a frame count of 1024, I can use the pullInputBlock to get 480 frames and process them, then another 480 frames and process those, but what should I then do with the remaining 64 frames?

Possible solutions, foiled:

a) There seems to be no way to indicate to the host that I have consumed only 960 frames and will supply only 960 frames of output. I thought the host might observe that the outputData ABL buffers hold fewer frames than the frame count passed into the internalRenderBlock, and advance the timestamp by only that much the next time around, but it does not. So all of the audio must be processed before the block returns, which I can only do if the block is asked to handle an exact multiple of 10 ms of data.

b) I can't buffer up the "remainder" input and process it on the next internalRenderBlock cycle, because all of the output must be provided on return, as discussed in (a).

c) Alternatively, I see no way for the unit to explicitly tell the host how many frames the unit can process at a time. maximumFramesToRender is the host telling the unit (not the reverse), and in any case it's only a maximum, not also a minimum.

What can I do?
1 reply · 0 boosts · 1.8k views · Jul ’22
Using MTAudioProcessingTap with AVPlayer requires ring buffer and format conversion?
What I'm looking to do is load a movie file, have it play back in real time (synchronized with a particular clock), and have the video and audio samples fed to me in the format I want (a lot like the AVCapture APIs do). With an AVPlayer + AVPlayerItem, I can create an AVPlayerItemVideoOutput to get the video frames. Great. For audio, though, it requires an MTAudioProcessingTap added to the player item's audioMix.

What's really odd about an MTAudioProcessingTap on a player item is that I'm apparently at the mercy of whatever audio format AVFoundation wants to give me. There's seemingly no guarantee what the format will look like. Compressed? LPCM? Floating point? Integer? Interleaved? What sample rate? I'm betting it's always at least floating-point LPCM (a canonical/standard format), but what about the sample rate? I have no control over that, and in my situation I need the sample rate to be a specific value (as well as wanting the audio mixed down to stereo, or split up from mono).

Having no choice of format is really inconvenient, because it means I have to convert the audio coming out of the tap. The really unfortunate part is that when sample-rate conversion is involved, there seems to be a need for an intermediate (small) ring buffer between the tap and the converter, because the ratio of input to output frames in the conversion is potentially fractional, and the unused input frames from one tap "process" callback have to be kept around until the next "process" callback, where they would be used.

Anybody following me? Am I wrong? Is there no simpler way to get the AVPlayerItem's audio fed to me in real time in my specified format? I find it hard to imagine I'm the first to go down this path, but so far I can't find any info from anyone's prior experience.
1 reply · 0 boosts · 1.7k views · Jan ’16