The build setting TREAT_MISSING_BASELINES_AS_TEST_FAILURES = YES should report a failure for every performance test that has no baseline recorded for the current machine. This worked on Catalina / Xcode 12.4, but appears to be broken when running the tests on Big Sur / Xcode 12.5.
Is this known? Is there a workaround?
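For context, this is the kind of test the setting is supposed to flag; a minimal sketch (the class and test names are illustrative):

import XCTest

final class PerformanceTests: XCTestCase {
    // On a machine with no recorded baseline, TREAT_MISSING_BASELINES_AS_TEST_FAILURES = YES
    // should turn this measurement into a test failure instead of a silent pass.
    func testSortPerformance() {
        let input = (0..<10_000).shuffled()
        measure {
            _ = input.sorted()
        }
    }
}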
I observe the following behavior (a minimal sketch of the setup follows the notes below):
Have a Mac connected to the internet via Wi-Fi.
Establish a WebSocket connection with some variation of URLSession.shared.webSocketTask(with:).
After the socket is established, disconnect your Wi-Fi (e.g. by disabling it from the menu bar).
Result:
The session/task delegate is informed about a timeout only after a long delay (observed delays between 60 seconds and 3 minutes).
Expected:
Delegate is informed immediately about the connection error.
Notes:
I tried to listen to all available delegate methods for URLSessionWebSocketTask, URLSessionTask, and URLSession.
I tried many configuration options that seemed like they could be related (such as adjusting timeoutIntervalForRequest on the session or request, the networkServiceType, etc.).
The underlying network framework seems to notice the error immediately, as a log line like this shows up: Connection 1: encountered error(1:53)
Sending ping frames every 10 seconds does not change the situation either (simply no pong is returned until the timeout error occurs).
There does not seem to be an upside to this delayed error reporting, since even if you reconnect the network immediately, the connection still fails with the timeout error after the delay.
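For reference, a minimal sketch of the setup described above (the endpoint URL and ping interval are placeholders; the delegate method is the standard URLSessionTaskDelegate one):

import Foundation

final class SocketObserver: NSObject, URLSessionTaskDelegate {
    // In practice this fires only after the long timeout,
    // not when the Wi-Fi is disabled.
    func urlSession(_ session: URLSession, task: URLSessionTask,
                    didCompleteWithError error: Error?) {
        print("didCompleteWithError: \(String(describing: error))")
    }
}

let observer = SocketObserver()
let session = URLSession(configuration: .default,
                         delegate: observer,
                         delegateQueue: .main)
// Placeholder endpoint; any reachable WebSocket server reproduces the behavior.
let task = session.webSocketTask(with: URL(string: "wss://example.com/socket")!)
task.resume()

// Ping every 10 seconds. After the Wi-Fi drop no pong arrives,
// yet the failure is still only surfaced after the timeout.
Timer.scheduledTimer(withTimeInterval: 10, repeats: true) { _ in
    task.sendPing { error in
        if let error = error { print("ping error: \(error)") }
    }
}
RunLoop.main.run()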
My question:
Might there be a configuration option that I overlooked that gives me the behavior I want?
Or do I misunderstand something fundamental about how this API is supposed to work?
I am trying to build level metering for AVPlayer. I am doing this with an MTAudioProcessingTap that gets passed to an AVAudioMix, which in turn gets passed to the AVPlayerItem. The MTAudioProcessingTap gets created with the kMTAudioProcessingTapCreationFlag_PostEffects flag.

Technical Q&A QA1783 has the following to say about the PreEffects and PostEffects flags:

When you create a "pre-effects" audio tap using the kMTAudioProcessingTapCreationFlag_PreEffects flag, the tap will be called before any effects specified by AVAudioMixInputParameters are applied; when you create a "post-effects" tap by using the kMTAudioProcessingTapCreationFlag_PostEffects flag, the tap will be called after those effects are applied. Currently the only "effect" that AVAudioMixInputParameters supports is a linear volume ramp.

The problem:
When created with the kMTAudioProcessingTapCreationFlag_PostEffects flag, I would expect the samples received by the MTAudioProcessingTap to reflect the volume or audio ramps set on the AVAudioMixInputParameters. For example, if I set the volume to 0, I would expect to get all-zero samples. However, the samples I receive seem to be totally unaffected by the volume or volume ramps.

Am I doing something wrong?

Here is a quick and dirty playground that illustrates the problem. The example sets the volume directly, but I observed the same problem when using audio ramps. Tested on both macOS and iOS:

import Foundation
import PlaygroundSupport
import AVFoundation
import Accelerate
PlaygroundPage.current.needsIndefiniteExecution = true
let assetURL = Bundle.main.url(forResource: "sample", withExtension: "mp3")!
let asset = AVAsset(url: assetURL)
let playerItem = AVPlayerItem(asset: asset)
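// The audio mix carries the per-track volume parameters and taps into the player item.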
var audioMix = AVMutableAudioMix()
// The volume. Set to > 0 to hear something.
let kVolume: Float = 0.0
var parameterArray: [AVAudioMixInputParameters] = []
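// Build one AVAudioMixInputParameters (with its own tap) per audio track.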
for assetTrack in asset.tracks(withMediaType: .audio) {
    let parameters = AVMutableAudioMixInputParameters(track: assetTrack)
    parameters.setVolume(kVolume, at: .zero)
    parameterArray.append(parameters)

    // Omitting most callbacks to keep the sample short:
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: nil,
        init: nil,
        finalize: nil,
        prepare: nil,
        unprepare: nil,
        process: { (tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut) in
            guard MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut, flagsOut, nil, numberFramesOut) == noErr else {
                preconditionFailure()
            }
            // Assume 32-bit float, native-endian samples:
            for (i, buffer) in UnsafeMutableAudioBufferListPointer(bufferListInOut).enumerated() {
                let channelCount = Int(buffer.mNumberChannels)
                let stride = vDSP_Stride(channelCount)
                let sampleCount = Int(buffer.mDataByteSize) / MemoryLayout<Float>.stride
                // Interleaved layout: each channel holds sampleCount / channelCount frames.
                let frameCount = vDSP_Length(sampleCount / channelCount)
                let samples = buffer.mData!.bindMemory(to: Float.self, capacity: sampleCount)
                for j in 0..<channelCount {
                    // Use vDSP_maxmgv to find the maximum magnitude in this channel.
                    var magnitude: Float = 0
                    vDSP_maxmgv(samples + j, stride, &magnitude, frameCount)
                    DispatchQueue.main.async {
                        print("buff: \(i), chan: \(j), max: \(magnitude)")
                    }
                }
            }
        }
    )

    // Create the tap with the post-effects flag; per QA1783 the samples
    // it receives should already have the volume ramp applied.
    var tap: Unmanaged<MTAudioProcessingTap>?
    guard MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks, kMTAudioProcessingTapCreationFlag_PostEffects, &tap) == noErr else {
        preconditionFailure()
    }
    parameters.audioTapProcessor = tap?.takeUnretainedValue()
}
audioMix.inputParameters = parameterArray
playerItem.audioMix = audioMix
let player = AVPlayer(playerItem: playerItem)
player.rate = 1.0
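With kVolume set to 0, a post-effects tap should print maximum magnitudes of 0; in practice the printed values match the unattenuated source material.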