Hi there, I'm having some trouble with AVAudioMixerNode only working when there is a single input, and outputting silence or a very quiet buzzing when more than one input node is connected. My setup has voice processing enabled, the input going to a sink node, and N source nodes feeding the main mixer node, which feeds the output node. In all cases I am connecting nodes in the graph with the same declared format: 48kHz, 1 channel, Float32 PCM.
This is working great for 1 source node, but as soon as I add a second it breaks. I can reproduce this behaviour in the SignalGenerator sample, when the same format is used everywhere. Again, it'll work fine with 1 source node even in this configuration, but add another and there's silence.
Am I doing something wrong with formats here? Is this expected? My understanding was that with voice processing enabled and a mixer node in place, I should be able to use my own format essentially everywhere in my graph.
My SignalGenerator modified repro example follows:
import Foundation
import AVFoundation
// True replicates my real app's behaviour, which is broken.
// You can remove one source node connection
// to make it work even when this is true.
let showBrokenState: Bool = true
// SignalGenerator constants.
let frequency: Float = 440
let amplitude: Float = 0.5
let duration: Float = 5.0
let twoPi = 2 * Float.pi
let sine = { (phase: Float) -> Float in
return sin(phase)
}
let whiteNoise = { (phase: Float) -> Float in
return ((Float(arc4random_uniform(UINT32_MAX)) / Float(UINT32_MAX)) * 2 - 1)
}
// My "application" format.
let format: AVAudioFormat = .init(commonFormat: .pcmFormatFloat32,
sampleRate: 48000,
channels: 1,
interleaved: true)!
// Engine setup.
let engine = AVAudioEngine()
let mainMixer = engine.mainMixerNode
let output = engine.outputNode
try! output.setVoiceProcessingEnabled(true)
let outputFormat = engine.outputNode.inputFormat(forBus: 0)
let sampleRate = Float(format.sampleRate)
let inputFormat = format
var currentPhase: Float = 0
let phaseIncrement = (twoPi / sampleRate) * frequency
let srcNodeOne = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        let value = sine(currentPhase) * amplitude
        currentPhase += phaseIncrement
        if currentPhase >= twoPi {
            currentPhase -= twoPi
        }
        if currentPhase < 0.0 {
            currentPhase += twoPi
        }
        for buffer in ablPointer {
            let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            buf[frame] = value
        }
    }
    return noErr
}
let srcNodeTwo = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        let value = whiteNoise(currentPhase) * amplitude
        currentPhase += phaseIncrement
        if currentPhase >= twoPi {
            currentPhase -= twoPi
        }
        if currentPhase < 0.0 {
            currentPhase += twoPi
        }
        for buffer in ablPointer {
            let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            buf[frame] = value
        }
    }
    return noErr
}
engine.attach(srcNodeOne)
engine.attach(srcNodeTwo)
engine.connect(srcNodeOne, to: mainMixer, format: inputFormat)
engine.connect(srcNodeTwo, to: mainMixer, format: inputFormat)
engine.connect(mainMixer, to: output, format: showBrokenState ? inputFormat : outputFormat)
// Put the input node to a sink just to match the formats and make VP happy.
let sink: AVAudioSinkNode = .init { timestamp, numFrames, data in
.zero
}
engine.attach(sink)
engine.connect(engine.inputNode, to: sink, format: showBrokenState ? inputFormat : outputFormat)
mainMixer.outputVolume = 0.5
try! engine.start()
CFRunLoopRunInMode(.defaultMode, CFTimeInterval(duration), false)
engine.stop()
Hello,
I'm facing an issue with Xcode 15 and iOS 17: it seems impossible to get AVAudioEngine's audio input node to work on the simulator.
inputNode has a 0-channel, 0 kHz input format, and connecting the input node to any other node or installing a tap on it fails systematically.
What we tested:
Everything works fine on iOS simulators <= 16.4, even with Xcode 15.
Nothing works on iOS simulator 17.0 on Xcode 15.
Everything works fine on iOS 17.0 device with Xcode 15.
More details on this here: https://github.com/Fesongs/InputNodeFormat
Any idea on this? Something I'm missing?
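For reference, a minimal sketch of the kind of guard I use so a tap is only installed when the input format is usable (names are generic, not from my real project):
import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.inputFormat(forBus: 0)

// On the iOS 17.0 simulator this reports 0 channels / 0 Hz,
// so installing a tap or connecting the node would fail.
if format.channelCount > 0 && format.sampleRate > 0 {
    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // process microphone buffers here
    }
    try? engine.start()
} else {
    print("Input node reports an unusable format: \(format)")
}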
Thanks for your help!
Tom
PS: I filed a bug on Feedback Assistant, but it usually takes ages to get any answer so I'm also trying here.
Recently I've been trying to play some AV1-encoded streams on my iPhone 15 Pro Max. First, I check for hardware support:
VTIsHardwareDecodeSupported(kCMVideoCodecType_AV1); // YES
Then I need to create a CMFormatDescription in order to pass it into a VTDecompressionSession. I've tried the following:
{
mediaType:'vide'
mediaSubType:'av01'
mediaSpecific: {
codecType: 'av01' dimensions: 394 x 852
}
extensions: {{
CVFieldCount = 1;
CVImageBufferChromaLocationBottomField = Left;
CVImageBufferChromaLocationTopField = Left;
CVPixelAspectRatio = {
HorizontalSpacing = 1;
VerticalSpacing = 1;
};
FullRangeVideo = 0;
}}
}
but VTDecompressionSessionCreate gives me error -8971 (codecExtensionNotFoundErr, I assume).
So it seems to have something to do with the extensions dictionary? I can't find documented anywhere which set of extensions is required for it to work.
VideoToolbox has convenient functions for creating descriptions of AVC and HEVC streams (CMVideoFormatDescriptionCreateFromH264ParameterSets and CMVideoFormatDescriptionCreateFromHEVCParameterSets), but not for AV1.
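In case it helps reproduce, my current working theory (unconfirmed) is that the 'av1C' configuration record has to be supplied through the sample description extension atoms; a rough sketch of what I am trying, where av1CRecord is assumed to be the raw AV1CodecConfigurationBox payload extracted from the stream:
import Foundation
import CoreMedia

// Sketch: build a format description for AV1 by attaching the 'av1C'
// configuration record as a sample description extension atom.
func makeAV1FormatDescription(av1CRecord: Data, width: Int32, height: Int32) -> CMVideoFormatDescription? {
    let atoms: [CFString: Any] = ["av1C" as CFString: av1CRecord]
    let extensions: [CFString: Any] = [
        kCMFormatDescriptionExtension_SampleDescriptionExtensionAtoms: atoms
    ]
    var desc: CMVideoFormatDescription?
    let status = CMVideoFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                                codecType: kCMVideoCodecType_AV1,
                                                width: width,
                                                height: height,
                                                extensions: extensions as CFDictionary,
                                                formatDescriptionOut: &desc)
    return status == noErr ? desc : nil
}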
As of today I am using Xcode 15.0 with the iOS 17.0 SDK.
I'm using iCloud Music Library. I'm using macOS 14.1 (23B74) and iOS 17.1.
I'm using MusicKit to find songs that do not have artwork. On iOS, Song.artwork will be nil for items I know do not have artwork. On macOS, Song.artwork is not nil. However, when the songs are shown in Music.app, they do not have artwork. Is this expected? Alternatively, is there a more correct way to determine that a Song has no artwork?
I have also filed FB13315721.
Thank you for any tips!
WWDC23 Platform State of the Union mentioned that Volume shutter buttons to trigger the camera shutter is coming later this year. This was mentioned at 0:30:15.
Would anyone know when this will be available?
My app uses PHLivePhoto.request to generate live photos, but memory leaks if I use a custom targetSize.
PHLivePhoto.request(withResourceFileURLs: [imageUrl, videoUrl], placeholderImage: nil, targetSize: targetSize, contentMode: .aspectFit) {[weak self] (livePhoto, info) in
Changing targetSize to CGSizeZero resolves the problem.
PHLivePhoto.request(withResourceFileURLs: [imageUrl, videoUrl], placeholderImage: nil, targetSize: CGSizeZero, contentMode: .aspectFit) {[weak self] (livePhoto, info) in
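In case anyone needs a fuller picture, here is a sketch of the workaround as I use it: request at CGSizeZero and let PHLivePhotoView do the scaling instead (previewContainer is a placeholder view of mine):
import UIKit
import PhotosUI

// Workaround sketch: request at full size (no leak observed), then let
// PHLivePhotoView scale to the desired display size.
PHLivePhoto.request(withResourceFileURLs: [imageUrl, videoUrl],
                    placeholderImage: nil,
                    targetSize: CGSizeZero,
                    contentMode: .aspectFit) { [weak self] livePhoto, info in
    guard let self, let livePhoto else { return }
    DispatchQueue.main.async {
        // previewContainer is hypothetical; it is whatever view hosts the preview.
        let livePhotoView = PHLivePhotoView(frame: self.previewContainer.bounds)
        livePhotoView.contentMode = .scaleAspectFit
        livePhotoView.livePhoto = livePhoto
        self.previewContainer.addSubview(livePhotoView)
    }
}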
Hello,
I used kAudioDevicePropertyDeviceIsRunningSomewhere to check if an internal or external microphone is being used.
My code works well for the internal microphone, and for microphones which are connected using a cable.
External microphones connected over Bluetooth do not report their status.
The status request always succeeds, but the property is always reported as inactive.
The main relevant parts of my code:
static inline AudioObjectPropertyAddress
makeGlobalPropertyAddress(AudioObjectPropertySelector selector) {
  AudioObjectPropertyAddress address = {
      selector,
      kAudioObjectPropertyScopeGlobal,
      kAudioObjectPropertyElementMaster,
  };
  return address;
}

static BOOL getBoolProperty(AudioDeviceID deviceID,
                            AudioObjectPropertySelector selector)
{
  AudioObjectPropertyAddress const address = makeGlobalPropertyAddress(selector);
  UInt32 prop;
  UInt32 propSize = sizeof(prop);
  OSStatus const status =
      AudioObjectGetPropertyData(deviceID, &address, 0, NULL, &propSize, &prop);
  if (status != noErr) {
    return 0; // this line never gets executed in my tests. The call above always succeeds, but it always gives back "false" status.
  }
  return static_cast<BOOL>(prop == 1);
}

...

__block BOOL microphoneActive = NO;
iterateThroughAllInputDevices(^(AudioObjectID object, BOOL *stop) {
  if (getBoolProperty(object, kAudioDevicePropertyDeviceIsRunningSomewhere) != 0) {
    microphoneActive = YES;
    *stop = YES;
  }
});
What could cause this and how could it be fixed?
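For comparison, here is a small Swift sketch of the same query made against the current default input device (which should follow the active route, including Bluetooth); I am not sure yet whether the per-device iteration or the property itself is what misses the Bluetooth case:
import CoreAudio

// Sketch: query kAudioDevicePropertyDeviceIsRunningSomewhere on the
// system's current default input device instead of iterating all devices.
func defaultInputDeviceIsRunningSomewhere() -> Bool {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultInputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var deviceID = AudioDeviceID(0)
    var size = UInt32(MemoryLayout<AudioDeviceID>.size)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                     &address, 0, nil, &size, &deviceID) == noErr else { return false }

    address.mSelector = kAudioDevicePropertyDeviceIsRunningSomewhere
    var isRunning = UInt32(0)
    size = UInt32(MemoryLayout<UInt32>.size)
    guard AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &isRunning) == noErr else { return false }
    return isRunning != 0
}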
Thank you for your help in advance!
Are there any plans to give developers access to the 24MP photo capture on the iPhone 15 series? I'm wondering whether third-party apps can support it, or whether it is limited to the built-in Camera app.
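From what I can tell so far (unconfirmed), larger stills are requested through the photo output's maximum photo dimensions rather than a preset; a sketch of what I have been trying, assuming device and photoOutput are already configured:
import AVFoundation

// Sketch: opt in to the largest photo dimensions the active format supports.
// Whether this surfaces 24MP specifically on iPhone 15 is not confirmed.
func configureForMaxResolution(device: AVCaptureDevice, photoOutput: AVCapturePhotoOutput) {
    guard let largest = device.activeFormat.supportedMaxPhotoDimensions.last else { return }
    photoOutput.maxPhotoDimensions = largest

    let settings = AVCapturePhotoSettings()
    settings.maxPhotoDimensions = largest
    // photoOutput.capturePhoto(with: settings, delegate: someDelegate) // delegate is hypothetical
}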
Hi there,
I am building a camera application to be able to capture an image with the wide and ultra wide cameras simultaneously (or as close as possible) with the intrinsics and extrinsics for each camera also delivered.
We are able to achieve this with an AVCaptureMultiCamSession and AVCaptureVideoDataOutput, setting up the .builtInWideAngleCamera and .builtInUltraWideCamera manually. Doing this, we are able to enable the delivery of the intrinsics via the AVCaptureConnection of the cameras. Also, geometric distortion correction is enabled for the ultra camera (by default).
However, we are exploring whether it is possible to move the application over to the .builtInDualWideCamera with AVCapturePhotoOutput and AVCaptureSession, to simplify our application and get access to depth data. We are using the isVirtualDeviceConstituentPhotoDeliveryEnabled=true property to allow for simultaneous capture. Functionally, everything is working fine, except that when isGeometricDistortionCorrectionEnabled is not set to false, photoOutput.isCameraCalibrationDataDeliverySupported returns false.
From this thread and the docs, it appears that we cannot get the intrinsics when isGeometricDistortionCorrectionEnabled=true (only applicable to the ultra wide), unless we use a AVCaptureVideoDataOutput.
Is there any way to get access to the intrinsics for the wide and ultra while enabling geometric distortion correction for the ultra?
guard let captureDevice = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back) else {
throw InitError.error("Could not find builtInDualWideCamera")
}
self.captureDevice = captureDevice
self.videoDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
self.photoOutput = AVCapturePhotoOutput()
self.captureSession = AVCaptureSession()
self.captureSession.beginConfiguration()
captureSession.sessionPreset = AVCaptureSession.Preset.hd1920x1080
captureSession.addInput(self.videoDeviceInput)
captureSession.addOutput(self.photoOutput)
try captureDevice.lockForConfiguration()
captureDevice.isGeometricDistortionCorrectionEnabled = false // <- NB line
captureDevice.unlockForConfiguration()
/// configure photoOutput
guard self.photoOutput.isVirtualDeviceConstituentPhotoDeliverySupported else {
throw InitError.error("Dual photo delivery is not supported")
}
self.photoOutput.isVirtualDeviceConstituentPhotoDeliveryEnabled = true
print("isCameraCalibrationDataDeliverySupported", self.photoOutput.isCameraCalibrationDataDeliverySupported) // false when distortion correction is enabled
let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "sample buffer delegate", attributes: []))
if captureSession.canAddOutput(videoOutput) {
captureSession.addOutput(videoOutput)
}
self.videoPreviewLayer.setSessionWithNoConnection(self.captureSession)
self.videoPreviewLayer.videoGravity = AVLayerVideoGravity.resizeAspect
let cameraVideoPreviewLayerConnection = AVCaptureConnection(inputPort: self.videoDeviceInput.ports.first!, videoPreviewLayer: self.videoPreviewLayer);
self.captureSession.addConnection(cameraVideoPreviewLayerConnection)
self.captureSession.commitConfiguration()
self.captureSession.startRunning()
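For completeness, this is roughly the per-capture settings block we would like to use once the session is running, continuing from the snippet above (the delegate type is ours); calibration delivery is the part we cannot enable while distortion correction is on:
// Sketch of the per-capture settings, reusing captureDevice and photoOutput from above.
func captureConstituentPhotos(delegate: any AVCapturePhotoCaptureDelegate) {
    let settings = AVCapturePhotoSettings()
    // Ask for a photo from each constituent camera of the virtual device.
    settings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = captureDevice.constituentDevices
    if photoOutput.isCameraCalibrationDataDeliverySupported {
        // Currently only reported as supported when geometric distortion correction is disabled.
        settings.isCameraCalibrationDataDeliveryEnabled = true
    }
    photoOutput.capturePhoto(with: settings, delegate: delegate)
}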
I've added a listener block for camera notifications. This works as expected: the listener block is invoked when the camera is activated/deactivated.
However, when I call CMIOObjectRemovePropertyListenerBlock to remove the listener block, though the call succeeds, camera notifications are still delivered to the listener block.
Since in the header file it states this function "Unregisters the given CMIOObjectPropertyListenerBlock from receiving notifications when the given properties change." I'd assume that once called, no more notifications would be delivered?
Sample code:
#import <Foundation/Foundation.h>
#import <CoreMediaIO/CMIOHardware.h>
#import <AVFoundation/AVCaptureDevice.h>
int main(int argc, const char * argv[]) {
AVCaptureDevice* camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
OSStatus status = -1;
CMIOObjectID deviceID = 0;
CMIOObjectPropertyAddress propertyStruct = {0};
propertyStruct.mSelector = kAudioDevicePropertyDeviceIsRunningSomewhere;
propertyStruct.mScope = kAudioObjectPropertyScopeGlobal;
propertyStruct.mElement = kAudioObjectPropertyElementMain;
deviceID = (UInt32)[camera performSelector:NSSelectorFromString(@"connectionID") withObject:nil];
CMIOObjectPropertyListenerBlock listenerBlock = ^(UInt32 inNumberAddresses, const CMIOObjectPropertyAddress addresses[]) {
NSLog(@"Callback: CMIOObjectPropertyListenerBlock invoked");
};
status = CMIOObjectAddPropertyListenerBlock(deviceID, &propertyStruct, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), listenerBlock);
if(noErr != status) {
NSLog(@"ERROR: CMIOObjectAddPropertyListenerBlock() failed with %d", status);
return -1;
}
NSLog(@"Monitoring %@ (uuid: %@ / %x)", camera.localizedName, camera.uniqueID, deviceID);
sleep(10);
status = CMIOObjectRemovePropertyListenerBlock(deviceID, &propertyStruct, dispatch_get_main_queue(), listenerBlock);
if(noErr != status) {
NSLog(@"ERROR: 'AudioObjectRemovePropertyListenerBlock' failed with %d", status);
return -1;
}
NSLog(@"Stopped monitoring %@ (uuid: %@ / %x)", camera.localizedName, camera.uniqueID, deviceID);
sleep(10);
return 0;
}
Compiling and running this code outputs:
Monitoring FaceTime HD Camera (uuid: 3F45E80A-0176-46F7-B185-BB9E2C0E436A / 21)
Callback: CMIOObjectPropertyListenerBlock invoked
Callback: CMIOObjectPropertyListenerBlock invoked
Stopped monitoring FaceTime HD Camera (uuid: 3F45E80A-0176-46F7-B185-BB9E2C0E436A / 21)
Callback: CMIOObjectPropertyListenerBlock invoked
Callback: CMIOObjectPropertyListenerBlock invoked
Note the last two log messages, which show that the CMIOObjectPropertyListenerBlock is still invoked even though CMIOObjectRemovePropertyListenerBlock has successfully been called.
Am I just doing something wrong here? Or is the API broken?
I'm using the AVFoundation Swift APIs to record Video (CMSampleBuffers) and Audio (CMSampleBuffers) to a file using AVAssetWriter.
Initializing the AVAssetWriter happens quite quickly, but calling assetWriter.startWriting() fully blocks the entire application AND ALL THREADS for 3 seconds. This only happens in Debug builds, not in Release.
Since it blocks all threads and only happens in Debug, I'm led to believe that this is an Xcode/Debugger/LLDB hang issue that I'm seeing.
Does anyone experience something similar?
Hereās how I set all of that up: startRecording(...)
And hereās the line that makes it hang for 3+ seconds: assetWriter.startWriting(...)
Hi, I'm trying to play multiple video/audio files with AVPlayer using AVMutableComposition. Each video/audio file can play simultaneously, so I put each one on its own track. I use only local files.
let second = CMTime(seconds: 1, preferredTimescale: 1000)
let duration = CMTimeRange(start: .zero, duration: second)
var currentTime = CMTime.zero
for _ in 0...4 {
    let mutableTrack = composition.addMutableTrack(
        withMediaType: .audio,
        preferredTrackID: kCMPersistentTrackID_Invalid
    )
    try mutableTrack?.insertTimeRange(
        duration,
        of: audioAssetTrack,
        at: currentTime
    )
    currentTime = currentTime + second
}
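For context, composition and audioAssetTrack in the loop above come from a setup roughly like this; a self-contained sketch with assumed names (localAudioURL is a placeholder) using the async track-loading API:
import AVFoundation

// Sketch of the surrounding setup the loop above assumes.
func makeRepeatedAudioPlayer(from localAudioURL: URL) async throws -> AVPlayer {
    let composition = AVMutableComposition()
    let audioAsset = AVURLAsset(url: localAudioURL)
    guard let audioAssetTrack = try await audioAsset.loadTracks(withMediaType: .audio).first else {
        throw NSError(domain: "NoAudioTrack", code: -1)
    }
    let second = CMTime(seconds: 1, preferredTimescale: 1000)
    var currentTime = CMTime.zero
    for _ in 0...4 {
        let mutableTrack = composition.addMutableTrack(withMediaType: .audio,
                                                       preferredTrackID: kCMPersistentTrackID_Invalid)
        try mutableTrack?.insertTimeRange(CMTimeRange(start: .zero, duration: second),
                                          of: audioAssetTrack,
                                          at: currentTime)
        currentTime = currentTime + second
    }
    return AVPlayer(playerItem: AVPlayerItem(asset: composition))
}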
When I add many audio tracks (maybe more than 5), the first part sounds a little different from the original when playback starts. It seems like the front part of the audio is skipped.
But when I add only two tracks, AVPlayer plays the same as the original file.
avPlayer.play()
How can I fix this? Why do tracks that have nothing to play at the start still affect playback? Please let me know.
Hello,
Faced with a really perplexing issue. The primary problem is that sometimes I get depth and video data as expected, but at other times I don't. And sometimes I'll get both data outputs for 4-5 frames and then it'll just stop. The source code I implemented is a modified version of the sample code provided by Apple, and interestingly enough I can't re-create this issue with the Apple sample app. So I'm wondering what I could be doing wrong?
Here's the code for setting up the capture input. preferredDepthResolution is 1280 in my case. I'm running this on an iPad Pro (6th gen) on iOS 17.0.3 (21A360). I encounter this issue on an iPhone 13 Pro as well, on iOS 17.0 (21A329).
private func setupLiDARCaptureInput() throws {
    // Look up the LiDAR camera.
    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back) else {
        throw ConfigurationError.lidarDeviceUnavailable
    }
    guard let format = (device.formats.last { format in
        format.formatDescription.dimensions.width == preferredWidthResolution &&
        format.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange &&
        format.videoSupportedFrameRateRanges.first(where: { $0.maxFrameRate >= 60 }) != nil &&
        !format.isVideoBinned &&
        !format.supportedDepthDataFormats.isEmpty
    }) else {
        throw ConfigurationError.requiredFormatUnavailable
    }
    guard let depthFormat = (format.supportedDepthDataFormats.last { depthFormat in
        depthFormat.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_DepthFloat16
    }) else {
        throw ConfigurationError.requiredFormatUnavailable
    }
    // Begin the device configuration.
    try device.lockForConfiguration()
    // Configure the device and depth formats.
    device.activeFormat = format
    device.activeDepthDataFormat = depthFormat
    let desc = format.formatDescription
    dimensions = CMVideoFormatDescriptionGetDimensions(desc)
    let duration = CMTime(value: 1, timescale: CMTimeScale(60))
    device.activeVideoMinFrameDuration = duration
    device.activeVideoMaxFrameDuration = duration
    // Finish the device configuration.
    device.unlockForConfiguration()
    self.device = device
    print("Selected video format: \(device.activeFormat)")
    print("Selected depth format: \(String(describing: device.activeDepthDataFormat))")
    // Add a device input to the capture session.
    let deviceInput = try AVCaptureDeviceInput(device: device)
    captureSession.addInput(deviceInput)
    guard let audioDevice = AVCaptureDevice.default(for: .audio) else {
        return
    }
    // Configure audio input - always configure audio even if isAudioEnabled is false
    audioDeviceInput = try! AVCaptureDeviceInput(device: audioDevice)
    captureSession.addInput(audioDeviceInput)
    deviceSystemPressureStateObservation = device.observe(
        \.systemPressureState,
        options: .new
    ) { _, change in
        guard let systemPressureState = change.newValue else { return }
        print("system pressure \(systemPressureState.levelAsString()) due to \(systemPressureState.factors)")
    }
}
Here's how I'm setting up the output:
private func setupLiDARCaptureOutputs() {
    // Create an object to output video sample buffers.
    videoDataOutput = AVCaptureVideoDataOutput()
    captureSession.addOutput(videoDataOutput)
    // Create an object to output depth data.
    depthDataOutput = AVCaptureDepthDataOutput()
    depthDataOutput.isFilteringEnabled = false
    captureSession.addOutput(depthDataOutput)
    audioDeviceOutput = AVCaptureAudioDataOutput()
    audioDeviceOutput.setSampleBufferDelegate(self, queue: videoQueue)
    captureSession.addOutput(audioDeviceOutput)
    // Create an object to synchronize the delivery of depth and video data.
    outputVideoSync = AVCaptureDataOutputSynchronizer(dataOutputs: [depthDataOutput, videoDataOutput])
    outputVideoSync.setDelegate(self, queue: videoQueue)
    // Enable camera intrinsics matrix delivery.
    guard let outputConnection = videoDataOutput.connection(with: .video) else { return }
    if outputConnection.isCameraIntrinsicMatrixDeliverySupported {
        outputConnection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}
The top part of my delegate implementation is as follows:
func dataOutputSynchronizer(
    _ synchronizer: AVCaptureDataOutputSynchronizer,
    didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection
) {
    // Retrieve the synchronized depth and sample buffer container objects.
    guard let syncedDepthData = synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
          let syncedVideoData = synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else {
        if synchronizedDataCollection.synchronizedData(for: depthDataOutput) == nil {
            print("no depth data at time \(mach_absolute_time())")
        }
        if synchronizedDataCollection.synchronizedData(for: videoDataOutput) == nil {
            print("no video data at time \(mach_absolute_time())")
        }
        return
    }
    print("received depth data \(mach_absolute_time())")
}
As you can see, I'm console logging whenever depth data is not received. Note that because I'm driving the video frames at 60 fps, it's expected that I'll only receive depth data for every alternate video frame.
Console output is posted as a follow-up comment (because of the character limit). I edited some lines out for brevity. You'll see it started streaming correctly, but after a while it stopped receiving both video and depth outputs (in some runs it works perfectly, and in others I receive no depth data whatsoever). One thing to note: I sometimes run QuickTime screen mirroring to see what the app is displaying (so I'm not sure if that's causing any interference; that said, I don't see any system pressure changes either).
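One thing I am still experimenting with (no conclusive effect yet) is a more defensive variant of the output setup above, guarding the addOutput calls and discarding late frames, in case back-pressure is what stalls the synchronizer:
// Defensive variant of the output setup (same property names as above).
videoDataOutput = AVCaptureVideoDataOutput()
videoDataOutput.alwaysDiscardsLateVideoFrames = true
if captureSession.canAddOutput(videoDataOutput) {
    captureSession.addOutput(videoDataOutput)
}
depthDataOutput = AVCaptureDepthDataOutput()
depthDataOutput.isFilteringEnabled = false
depthDataOutput.alwaysDiscardsLateDepthData = true
if captureSession.canAddOutput(depthDataOutput) {
    captureSession.addOutput(depthDataOutput)
}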
Any help is most appreciated! Thanks.
I generated a CMIO CameraExtension from the Xcode target template and it is running with FaceTime. I guess this kind of extension has a lot of security limitations.
I'd like to run a command like "netstat" in the extension. Is it possible to call Process.run()? I keep getting an error like "The file zsh doesn't exist". The same code with Process.run() works in a macOS app.
I'd like to use DistributedNotificationCenter to send text from the app to the CameraExtension. Is that possible? I do not receive any messages in the CameraExtension.
If there is any other IPC method between a macOS app and a CameraExtension, please let me know.
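One lightweight fallback I am considering is Darwin notifications, which can only signal (no payload); a sketch of what I mean, with a placeholder notification name, and I am not yet sure how the extension sandbox treats it:
import Foundation

// In the macOS app: post a named Darwin notification (signal only, no data).
func notifyExtension() {
    let name = CFNotificationName(rawValue: "com.example.camera.ping" as CFString) // placeholder name
    CFNotificationCenterPostNotification(CFNotificationCenterGetDarwinNotifyCenter(),
                                         name, nil, nil, true)
}

// In the camera extension: observe the same name.
func startObserving() {
    CFNotificationCenterAddObserver(CFNotificationCenterGetDarwinNotifyCenter(),
                                    nil,
                                    { _, _, _, _, _ in
                                        NSLog("Ping received from the app")
                                    },
                                    "com.example.camera.ping" as CFString,
                                    nil,
                                    .deliverImmediately)
}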
Since upgrading to iOS 17, WebRTC playback has problems when going fullscreen: the video element rapidly changes its dimensions while taking the full screen size, and the animation seems very glitchy.
I'm observing this issue with every WebRTC player available, so I think the problem is in mobile Safari.
Is there any way to prevent resizing of video on fullscreen?
We're experimenting with a stream that has a large (10 minutes) clear portion in front of the protected section w/Fairplay.
We're noticing that AVPlayer/Safari trigger calls to fetch the license key even while it's playing the clear part, and once we provide the key, playback fails with:
name = AVPlayerItemFailedToPlayToEndTimeNotification, object = Optional(<AVPlayerItem: 0x281ff2800> I/NMU [No ID]), userInfo = Optional([AnyHashable("AVPlayerItemFailedToPlayToEndTimeErrorKey"): Error Domain=CoreMediaErrorDomain Code=-12894 "(null)"])
- name : "AVPlayerItemFailedToPlayToEndTimeNotification"
- object : <AVPlayerItem: 0x281ff2800> I/NMU [No ID]
▿ userInfo : 1 element
▿ 0 : 2 elements
▿ key : AnyHashable("AVPlayerItemFailedToPlayToEndTimeErrorKey")
- value : "AVPlayerItemFailedToPlayToEndTimeErrorKey"
- value : Error Domain=CoreMediaErrorDomain Code=-12894 "(null)"
It seems like AVPlayer is trying to decrypt the clear portion of the stream...and I'm wondering if it's because we've set up our manifest incorrectly.
Here it is:
#EXTM3U
#EXT-X-VERSION:8
#EXT-X-TARGETDURATION:20
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MAP:URI="clear-asset.mp4",BYTERANGE="885@0"
#EXT-X-DEFINE:NAME="path0",VALUE="clear-asset.mp4"
#EXTINF:9.98458,
#EXT-X-BYTERANGE:81088@885
{$path0}
#EXTINF:19.96916,
#EXT-X-BYTERANGE:159892@81973
{$path0}
#EXTINF:19.96916,
#EXT-X-BYTERANGE:160245@241865
{$path0}
#EXT-X-DISCONTINUITY
#EXT-X-MAP:URI="secure-asset.mp4",BYTERANGE="788@0"
#EXT-X-DEFINE:NAME="path1",VALUE="secure-asset.mp4"
#EXT-X-KEY:METHOD=SAMPLE-AES,URI="skd://guid",KEYFORMAT="com.apple.streamingkeydelivery",KEYFORMATVERSIONS="1"
#EXTINF:19.96916,
#EXT-X-BYTERANGE:159928@5196150
{$path1}
#EXT-X-ENDLIST
Our DJ application Mixxx renders scrolling waveforms at 60 Hz. This looks perfectly smooth on an older 2015 MacBook Pro. However, it looks jittery on a new M1 device with "ProMotion" enabled. Selecting 60 Hz in the display settings fixes the issue.
We are looking for a way to tell macOS that it can expect 60 Hz renderings from Mixxx and must not display them early (at 120 Hz) even if the pictures are ready.
The alternative would be to read out the display settings and ask the user to select 60 Hz.
Is there an API to:
hint to the display driver that we render at 60 Hz
read out the refresh rate settings?
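What we are experimenting with so far (macOS 14 only, so probably not a complete answer) is the CADisplayLink that NSView can now vend, pinned to a 60 Hz preferred frame rate range, plus NSScreen.maximumFramesPerSecond to read the panel's limit; a sketch with placeholder view names:
import AppKit
import QuartzCore

final class WaveformView: NSView {
    private var waveformLink: CADisplayLink?

    override func viewDidMoveToWindow() {
        super.viewDidMoveToWindow()
        guard waveformLink == nil, window != nil else { return }
        // macOS 14+: ask the system to drive us at 60 Hz even on ProMotion panels.
        let link = displayLink(target: self, selector: #selector(step))
        link.preferredFrameRateRange = CAFrameRateRange(minimum: 60, maximum: 60, preferred: 60)
        link.add(to: .main, forMode: .common)
        waveformLink = link
    }

    @objc private func step(_ link: CADisplayLink) {
        needsDisplay = true // render the next waveform column
    }
}

// Reading the refresh-rate limit of the screen the window is on:
// let maxFPS = view.window?.screen?.maximumFramesPerSecond ?? 60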
Hello everyone,
I'm currently facing a challenging issue with my macOS application that involves HEIF image processing. The application uses an OperationQueue to handle HEIF compression tasks. However, I've observed a significant delay in processing when a screen recording is active. This delay doesn't occur under normal circumstances.
Here's a brief overview of the implementation:
The HEIF processing task is encapsulated within an Operation added to an OperationQueue.
The task involves using CIContext for image processing.
When screen recording is initiated, the operation's execution becomes unusually slow or gets delayed extensively.
After some research and community feedback, I learned that screen recording might be affecting the system's resource allocation, particularly impacting tasks that utilize GPU resources, like CIContext operations in my case.
To address this, I tried the following:
Switching to a custom dispatch queue with a .userInitiated QoS.
Using GCD instead of OperationQueue.
Despite these attempts, the issue persists during screen recording. It seems like the screen recording process is given higher priority by macOS, leading to resource reallocation and thus affecting my application's performance.
I'm looking for insights or suggestions on how to handle this scenario more effectively. Specifically, I am interested in:
Understanding how screen recording impacts resource allocation in macOS.
Exploring ways to ensure that my HEIF processing task is not severely impacted by other system processes like screen recording.
Any best practices or alternative approaches for handling image processing tasks that are sensitive to system resource availability.
Here's a snippet of the HEIF processing function for reference:
import CoreImage
struct CommandResult: CustomStringConvertible {
    let output: String
    let error: Process.TerminationReason
    let status: Int32
    var description: String {
        return "error:\(error.rawValue), output:\(output), status:\(status)"
    }
}

func heif(at sourceURL: URL, to destinationURL: URL, as quality: Int = 75) -> CommandResult {
    let compressionQuality = CGFloat(quality) / 100.0
    guard let ciImage = CIImage(contentsOf: sourceURL) else {
        return CommandResult(output: "Load heic image failed \(sourceURL)", error: .exit, status: -1)
    }
    let context = CIContext(options: nil)
    let heifOptions = [kCGImageDestinationLossyCompressionQuality: compressionQuality] as! [CIImageRepresentationOption: Any]
    do {
        try context.writeHEIFRepresentation(of: ciImage,
                                            to: destinationURL,
                                            format: .RGBA8,
                                            colorSpace: ciImage.colorSpace!,
                                            options: heifOptions)
    } catch {
        return CommandResult(output: "Compress and write heic image failed \(sourceURL)", error: .exit, status: -1)
    }
    return CommandResult(output: "Compress and write heic image successfully \(sourceURL)", error: .exit, status: 0)
}
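One change I am testing in parallel (independent of the screen-recording question) is reusing a single CIContext instead of creating one per call, since context creation is itself expensive; a sketch that keeps the same CommandResult type from above:
import CoreImage
import ImageIO

// Shared context; creating a CIContext per image is costly and may amplify
// any GPU contention while screen recording is active.
let sharedCIContext = CIContext(options: nil)

func heifUsingSharedContext(at sourceURL: URL, to destinationURL: URL, as quality: Int = 75) -> CommandResult {
    let compressionQuality = CGFloat(quality) / 100.0
    guard let ciImage = CIImage(contentsOf: sourceURL) else {
        return CommandResult(output: "Load heic image failed \(sourceURL)", error: .exit, status: -1)
    }
    let heifOptions = [CIImageRepresentationOption(rawValue: kCGImageDestinationLossyCompressionQuality as String): compressionQuality]
    do {
        try sharedCIContext.writeHEIFRepresentation(of: ciImage,
                                                    to: destinationURL,
                                                    format: .RGBA8,
                                                    colorSpace: ciImage.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
                                                    options: heifOptions)
    } catch {
        return CommandResult(output: "Compress and write heic image failed \(sourceURL)", error: .exit, status: -1)
    }
    return CommandResult(output: "Compress and write heic image successfully \(sourceURL)", error: .exit, status: 0)
}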
Thank you for your time and any assistance you can provide!
var config = PHPickerConfiguration()
config.filter = PHPickerFilter.images
I want only 'png' files to be displayed when the PHPickerViewController photo list is opened.
I've read this post : https://developer.apple.com/forums/thread/687415
In this post, it is mentioned that filtering image formats by PHPickerConfiguration is not possible (2 years ago).
Is it still not possible? Has issue 71832162 not been resolved?
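My current fallback, in case the picker still cannot filter at this level, is to inspect the selected items' type identifiers after the fact and only load the ones conforming to UTType.png; a sketch of the delegate method:
import PhotosUI
import UniformTypeIdentifiers

// PHPickerViewControllerDelegate sketch: keep only results whose item provider
// registers the PNG type, and load those as files.
func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    picker.dismiss(animated: true)
    for result in results {
        let provider = result.itemProvider
        guard provider.hasItemConformingToTypeIdentifier(UTType.png.identifier) else {
            continue // not a PNG; skipped, since it cannot be hidden in the picker itself
        }
        provider.loadFileRepresentation(forTypeIdentifier: UTType.png.identifier) { url, error in
            guard let url else { return }
            // Copy the file out before this handler returns, e.g. to a temporary location.
            print("Picked PNG at \(url)")
        }
    }
}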
When developing a custom camera for iOS, if AVCaptureSession's sessionPreset is set to AVCaptureSessionPresetPhoto, photos cannot be taken on the iPhone 15 Pro Max, while other devices work normally. With other sessionPreset values, capture works normally. Please help Apple developers determine the cause.
In addition, I initially thought there was a problem with our code, but when I looked at some demos written by others, the same problem occurred when using the AVCaptureSessionPresetPhoto enumeration and running on an iPhone 15 Pro Max.