We are experiencing an issue with our HLS MPEG-TS streams on Apple devices: when the player automatically switches quality, AVPlayer in our iOS app and in Safari jumps back to the start of the stream. This happens even though the stream still reports as live and the seek bar does not change. After testing our streams with the Apple HLS Validator, the only problem that occurred was a "Measured peak bitrate compared to multivariant playlist declared value exceeds error tolerance" error.
On Chrome and in our Android app this playback bug does not happen. Has anyone else experienced similar issues with AVPlayer?
Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.
I am attempting to do batch transcription of audio files exported from Voice Memos, and I am running into an interesting issue. If I only transcribe a single file it works every time, but if I try to batch them, only the last one works and the others fail with "No speech detected." I assumed it must be something about concurrency, so I implemented what I think should remove any chance of transcriptions running in parallel. With a mocked-up unit of work everything looked good, but when I added the transcription back in:
1: It still fails on all but the last file. This happens whether I am processing 10 files or just 2.
2: It no longer processes in order; any file can be the last one that succeeds, and it does not seem to be related to file size. I have had paragraph-sized notes finish last, but also a single short sentence.
I left the mocked processFile() in the code (commented out) for reference.
Any insights would be greatly appreciated.
import Speech
import SwiftUI

struct ContentView: View {
    @State private var processing: Bool = false
    @State private var fileNumber: String?
    @State private var fileName: String?
    @State private var files: [URL] = []

    let locale = Locale(identifier: "en-US")
    let recognizer: SFSpeechRecognizer?

    init() {
        self.recognizer = SFSpeechRecognizer(locale: self.locale)
    }

    var body: some View {
        VStack {
            if files.count > 0 {
                ZStack {
                    ProgressView()
                    Text(fileNumber ?? "-")
                        .bold()
                }
                Text(fileName ?? "-")
            } else {
                Image(systemName: "folder.badge.minus")
                Text("No audio files found")
            }
        }
        .onAppear {
            files = getFiles()
            Task {
                await processFiles()
            }
        }
    }

    private func getFiles() -> [URL] {
        do {
            let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
            let path = documentsURL.appendingPathComponent("Voice Memos").absoluteURL
            let contents = try FileManager.default.contentsOfDirectory(at: path, includingPropertiesForKeys: nil, options: [])
            let files = (contents.filter { $0.pathExtension == "m4a" }).sorted { url1, url2 in
                url1.path < url2.path
            }
            return files
        }
        catch {
            print(error.localizedDescription)
            return []
        }
    }

    private func processFiles() async {
        var fileCount = files.count
        for file in files {
            fileNumber = String(fileCount)
            fileName = file.lastPathComponent
            await processFile(file)
            fileCount -= 1
        }
    }

//    private func processFile(_ url: URL) async {
//        let seconds = Double.random(in: 2.0...10.0)
//        await withCheckedContinuation { continuation in
//            DispatchQueue.main.asyncAfter(deadline: .now() + seconds) {
//                continuation.resume()
//                print("\(url.lastPathComponent) \(seconds)")
//            }
//        }
//    }

    private func processFile(_ url: URL) async {
        let recognitionRequest = SFSpeechURLRecognitionRequest(url: url)
        recognitionRequest.requiresOnDeviceRecognition = false
        recognitionRequest.shouldReportPartialResults = false
        await withCheckedContinuation { continuation in
            recognizer?.recognitionTask(with: recognitionRequest) { (transcriptionResult, error) in
                guard transcriptionResult != nil else {
                    print("\(url.lastPathComponent.uppercased())")
                    print(error?.localizedDescription ?? "")
                    return
                }
                if ((transcriptionResult?.isFinal) == true) {
                    if let finalText: String = transcriptionResult?.bestTranscription.formattedString {
                        print("\(url.lastPathComponent.uppercased())")
                        print(finalText)
                    }
                }
            }
            continuation.resume()
        }
    }
}
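For clarity, the ordering I'm trying to achieve looks like the sketch below, where the continuation resumes only once a final result (or an error) arrives. This is an untested illustration with placeholder naming, not the code currently running:

// Sketch only: resume the continuation inside the recognition callback, so each
// file finishes (with a final result or an error) before the loop moves on.
func transcribeSequentially(_ url: URL, using recognizer: SFSpeechRecognizer?) async {
    let request = SFSpeechURLRecognitionRequest(url: url)
    request.shouldReportPartialResults = false
    await withCheckedContinuation { continuation in
        guard let recognizer else {
            continuation.resume() // nothing to do without a recognizer
            return
        }
        var resumed = false // guard against resuming the continuation twice
        recognizer.recognitionTask(with: request) { result, error in
            if let result, result.isFinal {
                print(result.bestTranscription.formattedString)
            } else if let error {
                print(error.localizedDescription)
            } else {
                return // ignore any intermediate callbacks
            }
            if !resumed {
                resumed = true
                continuation.resume()
            }
        }
    }
}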
Somehow I have a corrupted audio plugin authentication problem. I'm on an Apple silicon M1 Mac, and two audio plugins that were installed and working will no longer authenticate. Both vendors are unable to troubleshoot, and I think the issue is a corrupted low-level file. One product authenticates correctly when I create a new user account, but the other plugin only authenticates on the original user account and not on the newly created one. Reinstalling the plugins and macOS does not fix the issue. Any thoughts?
How can I capture the audio being played by Apple Music on iOS and combine it with an FFT to achieve audio visualization?
I'm very excited about the new MusicLibrary API, but after a couple of days of playing around with it, I have to say that I find the implementation of filtering MusicLibraryRequests a little confusing. MPMediaQuery has a fairly extensive list of predicates that can be applied, including string and persistentID comparisons for artist, album artist, genre, and more. It also lets you filter on an item’s title. MusicLibraryRequests let you filter on the item’s ID, or on its MusicKit Artist and Genre relationships. To me, this seems like it adds an extra step.
With an MPMediaQuery, if I wanted to fetch every album by a given artist, I’d apply an MPMediaPropertyPredicate looking at MPMediaItemPropertyAlbumArtist and compare the string. It was also easy to change the MPMediaPredicateComparison to .contains to match more widely. If I wanted to surface albums by “Aesop Rock” or “Aesop Rock & Blockhead,” I could use that.
In the MusicLibraryRequest implementation, it looks like I need to perform a MusicLibraryRequest<Artist> first in order to get the Artist objects. There’s no filter for the name property, so if I don’t have their IDs, I’ve got to use filter(text:). From there, I can take the results of that request and apply them to my MusicLibraryRequest<Album> using the filter(matching:memberOf) function.
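In code, that two-step flow ends up looking roughly like the sketch below. This is written from memory and untested (inside an async throwing context, with MusicKit imported), and the key path passed to the member-of filter is a guess used for illustration rather than the exact API:

// Rough sketch of the two-step library lookup described above (untested).
var artistRequest = MusicLibraryRequest<Artist>()
artistRequest.filter(text: "Aesop Rock")
let artistResponse = try await artistRequest.response()

var albumRequest = MusicLibraryRequest<Album>()
// The key path here is illustrative; the point is that Artist objects are needed first.
albumRequest.filter(matching: \.artists, memberOf: Array(artistResponse.items))
let albumResponse = try await albumRequest.response()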
I could use filter(text:) on the MusicLibraryRequest<Album>, but that filters across multiple properties (title and artistName?) and is less precise than defining the actual property I want to match against.
I think my ideal version of the MusicLibraryRequest API would offer something like filter(matching:equalTo:) or filter(matching:contains:) that worked off of KeyPaths rather than relationships. That seems more intuitive to me. I’m not saying we need every property from every filterable MPMediaItemProperty key, but I’d love to be able to do it on title, artistName, and other common metadata. That might look something like:
filter(matching: \.title, contains: "Abbey Road")
filter(matching: \.artistName, equalTo: "Between The Buried And Me")
I noticed that filter(text:) is case insensitive, which is awesome, and something I’ve wanted for a long time in MPMediaPropertyPredicate. As a bonus, it would be great if a KeyPath based filter API supported a case sensitivity flag. This is less of a problem when dealing with Apple Music catalog content, but users’ libraries are a harsh environment, and you might have an artist “Between The Buried And Me” and one called “Between the Buried and Me.” It would be great to get albums from both with something like:
filter(matching: \.artistName, equalTo: "Between The Buried And Me", caseSensitive: false)
I've submitted the above as FB10185685. I also submitted another feedback this morning regarding filter(text:) and repeating text as FB10184823.
My last wishlist item for this API (for the time being!) is exposing the MPMediaItemPropertyAlbumPersistentID as an available filter attribute. I know, I know… hear me out. If you take a look at the other thread I made today, you’ll see that due to missing metadata in MusicKit, I still have some use cases where I need to be able to reference an MPMediaItem and might need to fetch its containing MPMediaItemCollection to get at other tracks on the album. It would be nice to seamlessly be able to fetch the MPMediaItemCollection or the library Album using a shared identifier, especially when it comes to being able to play the album in MusicKit’s player rather than Media Player’s.
I've submitted that last bit as FB10185789.
Thanks for bearing with my walls of text today. Keep up the great work!
I'm creating an app that listens to another app's sound; in this use case, screen data is not needed.
However, if I don't call SCStream#addStreamOutput(_, type: .screen, ...), the console shows this error:
[ERROR] _SCStream_RemoteVideoQueueOperationHandlerWithError:701 stream output NOT found. Dropping frame
Currently I'm setting SCStreamConfiguration#minimumFrameInterval to a large value (equivalent to about 0.1 fps) as a workaround, but it would be good if I could completely disable screen capture for best performance.
Is there any way to disable screen capture and only capture an app's audio?
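For reference, a simplified sketch of the kind of setup I mean is below (placeholder names, error handling mostly omitted; output is any object conforming to SCStreamOutput that handles the audio sample buffers):

import Foundation
import CoreMedia
import ScreenCaptureKit

// Simplified sketch: audio capture with the (unwanted) video side kept as cheap as possible.
func startAudioOnlyCapture(output: SCStreamOutput) async throws -> SCStream? {
    let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
    guard let display = content.displays.first else { return nil }

    // A display-wide filter for brevity; a per-app filter would use another SCContentFilter initializer.
    let filter = SCContentFilter(display: display, excludingWindows: [])

    let config = SCStreamConfiguration()
    config.capturesAudio = true
    config.excludesCurrentProcessAudio = true
    // Workaround from above: the video output runs at roughly 0.1 fps.
    config.minimumFrameInterval = CMTime(value: 10, timescale: 1)

    let stream = SCStream(filter: filter, configuration: config, delegate: nil)
    try stream.addStreamOutput(output, type: .audio, sampleHandlerQueue: DispatchQueue.main)
    try await stream.startCapture()
    return stream
}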
I have spent the past 2 weeks diving into CoreAudio and have seemingly run into a wall...
For my initial test, I am simply trying to create an AUGraph for monitoring input levels from a user chosen Audio Input Device (multi-channel in my case).
I was not able to find any way to monitor input levels of a single AUHAL input device - so I decided to create a simple AUGraph for input level monitoring.
Graph looks like:
[AUHAL Input Device] -> [B1] -> [MatrixMixerAU] -> [B2] -> [AUHAL Output Device]
Where B1 is an audio stream consisting of all the input channels available from the input device. The MatrixMixer has metering mode turned on, and level meters are read from each submix of the MatrixMixer using kMatrixMixerParam_PostAveragePower.
B2 is a stereo (2-channel) stream from the MatrixMixerAU to the default audio device; however, since I don't really want to pass audio through to an actual output, I have the volume muted on the MatrixMixerAU output channel. I tried using a GenericOutputAU instead of the default system output, however the GenericOutputAU never seems to pull data from the ringBuffer (the graph renderProc is never called if a GenericOutputAU is used instead of the AUHAL default output device).
I have not been able to get this simple graph to work. I do not see any errors when creating the graph and initializing the graph, and I have verified that the inputProc is being called for filling up the ringBuffer - but when I read the level of the MatrixMixer, the levels are always -758 (silence).
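For context, the meter read boils down to something like the sketch below (a simplified sketch, not the exact project code; matrixMixerUnit and element stand in for the unit and bus in the graph above, and kAudioUnitProperty_MeteringMode is assumed to already be enabled):

import AudioToolbox

// Simplified sketch of reading the matrix mixer's post-average power meter.
// The scope/element choice depends on which submix you want to meter.
func readMeter(from matrixMixerUnit: AudioUnit, element: AudioUnitElement) -> AudioUnitParameterValue? {
    var level: AudioUnitParameterValue = 0
    let status = AudioUnitGetParameter(matrixMixerUnit,
                                       kMatrixMixerParam_PostAveragePower,
                                       kAudioUnitScope_Output,
                                       element,
                                       &level)
    return status == noErr ? level : nil // expected in dB; I only ever see -758 (silence)
}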
I have posted my demo project on GitHub in the hope of finding someone with CoreAudio expertise to help with this problem.
I am willing to move this to DTS Code Level support if there is someone in DTS with CoreAudio experience.
Notes:
My App is not sandboxed in this test
I have tried with and without hardened runtime with Audio Input checked
The multichannel audio device I am using for testing is the Audient iD14 USB-C audio device. It supports 12 input channels and 6 output channels.
All input channels have been tested and are working in Ableton Live and Logic Pro.
Of particular interest is that I can't even get Apple's CAPlayThrough demo to work on my system. I see no errors when creating the graph, but all I hear is silence.
The MatrixMixerTest from the Apple documentation archives does work - but note that that demo does not use audio input devices; it reads audio into the graph from an audio file.
Link to GitHub project page.
Diagram of AUGraph for initial test (code that is on github)
Once I get audio input level metering to work, my plan is to implement something like Phase 2 below - capturing a stereo input stream, mixing it to mono, and sending it to lowpass, bandpass, and highpass AUs - and I will again use the MatrixMixer for monitoring the levels out of each filter.
I have no plans for passthrough audio (sending actual audio out to devices) - I am simply monitoring input levels.
Diagram of ultimate scope - rendering audio levels of a stereo to mono stream after passing through various filters
To get calibration data during video recording, I use AVCaptureDepthDataOutput together with AVCaptureVideoDataOutput, synchronized via AVCaptureDataOutputSynchronizer (using the dataOutputSynchronizer method of CameraController in the example).
The issue is that AVCaptureDepthDataOutput can only be used with .builtInLiDARDepthCamera, meaning it isn’t available for devices without LiDAR.
Is it possible to obtain calibration data during video recording on devices without LiDAR, such as with .builtInWideAngleCamera?
Can lensDistortionLookupTable and lensDistortionCenter be used to undistort the sampleBuffer I receive from AVCaptureVideoDataOutput?
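For what it's worth, the intrinsic matrix (though not the full AVCameraCalibrationData with the distortion tables) can be delivered on an ordinary wide-angle video connection. A minimal sketch, assuming videoDataOutput is already attached to a configured session (requires AVFoundation, CoreMedia, and simd):

// Minimal sketch: request intrinsic-matrix delivery on the video connection.
if let connection = videoDataOutput.connection(with: .video),
   connection.isCameraIntrinsicMatrixDeliverySupported {
    connection.isCameraIntrinsicMatrixDeliveryEnabled = true
}

// Then, per frame, inside captureOutput(_:didOutput:from:):
if let data = CMGetAttachment(sampleBuffer,
                              key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                              attachmentModeOut: nil) as? Data {
    let intrinsics = data.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
    print(intrinsics)
}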
There are different microphones that can be connected via a 3.5 mm jack, via USB, or via Bluetooth; the behavior is the same in each case.
There is code that gets access to the microphone (connected to the 3.5 mm audio jack) and starts an audio capture session; while it runs, the microphone-in-use icon is displayed. The capture of the audio device (microphone) continues for a few seconds, then the session stops and the icon disappears. After a pause of a few seconds, a second attempt is made to access the same microphone and start an audio capture session, and the icon is displayed again. After a few seconds, access to the microphone stops, the audio capture session stops, and the icon disappears.
Next, we perform the same actions, but after the first stop we pull the microphone plug out of the connector and insert it back before starting the second session. In this case, the second access attempt begins and the running program does not return any errors, but the microphone-in-use icon is not displayed - and this is the problem. After the program is quit and restarted, the icon is displayed again.
This problem is only the tip of the iceberg: in practice it means that it is not possible to record sound from the microphone after reconnecting it until the program is restarted.
Is this normal behavior of the AVFoundation framework? Is it possible to somehow make it so that after reconnecting the microphone, access to it occurs correctly and the usage indicator is displayed? What additional actions should the programmer perform in this case? Is there a description of this behavior somewhere in the documentation?
Below is the code to demonstrate the described behavior.
I am also attaching an example of the microphone usage indicator icon.
Computer description: MacBook Pro 13-inch 2020 Intel Core i7 macOS Sequoia 15.1.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

#include <AVFoundation/AVFoundation.h>
#include <Foundation/NSString.h>
#include <Foundation/NSURL.h>

AVCaptureSession* m_captureSession = nullptr;
AVCaptureDeviceInput* m_audioInput = nullptr;
AVCaptureAudioDataOutput* m_audioOutput = nullptr;

std::condition_variable conditionVariable;
std::mutex mutex;
bool responseToAccessRequestReceived = false;

void receiveResponse()
{
    std::lock_guard<std::mutex> lock(mutex);
    responseToAccessRequestReceived = true;
    conditionVariable.notify_one();
}

void waitForResponse()
{
    std::unique_lock<std::mutex> lock(mutex);
    conditionVariable.wait(lock, [] { return responseToAccessRequestReceived; });
}

void requestPermissions()
{
    responseToAccessRequestReceived = false;
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio completionHandler:^(BOOL granted)
    {
        const auto status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeAudio];
        std::cout << "Request completion handler granted: " << (int)granted << ", status: " << status << std::endl;
        receiveResponse();
    }];
    waitForResponse();
}

void timer(int timeSec)
{
    for (auto timeRemaining = timeSec; timeRemaining > 0; --timeRemaining)
    {
        std::cout << "Timer, remaining time: " << timeRemaining << "s" << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

bool updateAudioInput()
{
    [m_captureSession beginConfiguration];
    if (m_audioOutput)
    {
        AVCaptureConnection *lastConnection = [m_audioOutput connectionWithMediaType:AVMediaTypeAudio];
        [m_captureSession removeConnection:lastConnection];
    }
    if (m_audioInput)
    {
        [m_captureSession removeInput:m_audioInput];
        [m_audioInput release];
        m_audioInput = nullptr;
    }
    AVCaptureDevice* audioInputDevice = [AVCaptureDevice deviceWithUniqueID: [NSString stringWithUTF8String: "BuiltInHeadphoneInputDevice"]];
    if (!audioInputDevice)
    {
        std::cout << "Error input audio device creating" << std::endl;
        return false;
    }
    // m_audioInput = [AVCaptureceDeviceInput deviceInputWithDevice:audioInputDevice error:nil];
    // NSError *error = nil;
    NSError *error = [[NSError alloc] init];
    m_audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioInputDevice error:&error];
    if (error)
    {
        const auto code = [error code];
        const auto domain = [error domain];
        const char* domainC = domain ? [domain UTF8String] : nullptr;
        std::cout << code << " " << domainC << std::endl;
    }
    if (m_audioInput && [m_captureSession canAddInput:m_audioInput]) {
        [m_audioInput retain];
        [m_captureSession addInput:m_audioInput];
    }
    else
    {
        std::cout << "Failed to create audio device input" << std::endl;
        return false;
    }
    if (!m_audioOutput)
    {
        m_audioOutput = [[AVCaptureAudioDataOutput alloc] init];
        if (m_audioOutput && [m_captureSession canAddOutput:m_audioOutput])
        {
            [m_captureSession addOutput:m_audioOutput];
        }
        else
        {
            std::cout << "Failed to add audio output" << std::endl;
            return false;
        }
    }
    [m_captureSession commitConfiguration];
    return true;
}

void start()
{
    std::cout << "Starting..." << std::endl;
    const bool updatingResult = updateAudioInput();
    if (!updatingResult)
    {
        std::cout << "Error, while updating audio input" << std::endl;
        return;
    }
    [m_captureSession startRunning];
}

void stop()
{
    std::cout << "Stopping..." << std::endl;
    [m_captureSession stopRunning];
}

int main()
{
    requestPermissions();
    m_captureSession = [[AVCaptureSession alloc] init];
    start();
    timer(5);
    stop();
    timer(10);
    start();
    timer(5);
    stop();
}
My Camera app repeatedly opens even though I am not taking any action to open it when I use my iPhone. Today during a FaceTime call, the Camera app opened while the phone was unlocked without me touching anything. It didn’t end the FaceTime call, but just put the video on pause for the person I was speaking with. I force-closed the Camera app, then it happened again a few minutes later.
This has happened while using Google Maps and other apps as well, while the phone is unlocked.
This also happens while the phone is locked, just sitting on a table. All of a sudden I look over and the screen is active, showing the camera view.
Today this has happened at least 20 times. I need to know how to stop it.
I am on iOS 18.1 and enrolled in iOS 18 Public Beta. There are no pending software updates.
Hello,
I have a problem reading a 2D Data Matrix code with the camera. In the application I use AVFoundation to operate the camera and work with 2D codes, and in the vast majority of cases there is no problem reading them. Nothing special.
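For context, the capture setup is the standard metadata-output approach, roughly like this simplified sketch (placeholder names; session configuration, preview layer, and error handling omitted):

import AVFoundation

// Simplified sketch of a Data Matrix capture setup.
func configureDataMatrixScanning(on session: AVCaptureSession,
                                 delegate: AVCaptureMetadataOutputObjectsDelegate) {
    guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
          let input = try? AVCaptureDeviceInput(device: camera),
          session.canAddInput(input) else { return }
    session.addInput(input)

    let output = AVCaptureMetadataOutput()
    guard session.canAddOutput(output) else { return }
    session.addOutput(output)

    // Restrict to Data Matrix so the session doesn't spend time on other symbologies.
    output.setMetadataObjectsDelegate(delegate, queue: DispatchQueue.main)
    output.metadataObjectTypes = [.dataMatrix]
}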
I originally thought it might be a problem in my code, but I got the same result when I tried with the Camera app integrated in iOS. It seems that only the Live Text API for text recognition worked.
But I am attaching the code with which the camera has a problem, even though it looks perfectly fine at first glance. A classic handheld 2D code reader reads it just fine.
Can someone please explain to me why the camera, which normally reads these codes at the speed of light, sometimes has a problem with the codes?
Thank you
[Personal Information Edited by Moderator]
I created a locked camera capture extension as explained in Apple's documentation.
I'm trying to explore the possibility of using a Bluetooth peripheral from that extension - does anybody know if this is possible?
The CBCentralManagerDelegate reports .unsupported in centralManagerDidUpdateState, even though I have provided all the required permissions in Info.plist.
So I have been using the iOS 18 beta, and first and foremost I'd like to say it's a really great update. I love the timed messages and the car sickness features - that's awesome. My only problem with the update is Screen Time. I have Screen Time enabled on my phone, and since updating to the iOS 18 beta I can't seem to claim more screen time. My Screen Time is set up so that I can ignore the limit once I run out, but in the new update I can press the button all I want and it won't give me more screen time. I imagine this is a minor fix, and I would really appreciate it being fixed soon. Other than that small detail, amazing update - love what you're doing over here at Apple!
I am calling AVSampleBufferDisplayLayer.flush from a background queue, but this seems to occasionally crash the app. I am calling it from the same queue I pass to - (void)requestMediaDataWhenReadyOnQueue:(dispatch_queue_t)queue usingBlock:(void (^)(void))block;.
My question is: is this API thread-safe, or do I need to call flush from the main thread? Or is there another issue that I am not considering? It seems strange to me that this API would trigger an Auto Layout pass.
0 CoreFoundation 0x00000001bb384e38 __exceptionPreprocess + 164
1 libobjc.A.dylib 0x00000001b451b8d8 objc_exception_throw + 59
2 CoreAutoLayout 0x00000001d7e09e84 _AssertAutoLayoutOnAllowedThreadsOnly + 327
3 CoreAutoLayout 0x00000001d7e00e60 -[NSISEngine withBehaviors:performModifications:] + 35
4 UIKitCore 0x00000001be58fd40 -[UIView _postMovedFromSuperview:] + 671
5 UIKitCore 0x00000001bd56dfec -[UIView(Internal) _addSubview:positioned:relativeTo:] + 1903
6 UIKitCore 0x00000001bda57ccc -[_UITextLayoutCanvasView textViewportLayoutController:configureRenderingSurfaceForTextLayoutFragment:] + 455
7 UIFoundation 0x00000001c588bc9c __48-[NSTextViewportLayoutController layoutViewport]_block_invoke_4 + 151
8 UIFoundation 0x00000001c5836b50 __80-[NSTextLayoutManager enumerateViewportElementsFromLocation:options:usingBlock:]_block_invoke + 43
9 UIFoundation 0x00000001c580e158 __83-[NSTextLayoutManager enumerateTextLayoutFragmentsFromLocation:options:usingBlock:]_block_invoke_2 + 535
10 CoreFoundation 0x00000001bb385350 __NSARRAY_IS_CALLING_OUT_TO_A_BLOCK__ + 23
11 CoreFoundation 0x00000001bb3b24dc -[__NSSingleObjectArrayI enumerateObjectsWithOptions:usingBlock:] + 91
12 UIFoundation 0x00000001c580de28 __83-[NSTextLayoutManager enumerateTextLayoutFragmentsFromLocation:options:usingBlock:]_block_invoke + 775
13 UIFoundation 0x00000001c57f7504 -[NSTextLayoutManager enumerateTextLayoutFragmentsFromLocation:options:usingBlock:] + 659
14 UIFoundation 0x00000001c57f7264 -[NSTextLayoutManager enumerateViewportElementsFromLocation:options:usingBlock:] + 99
15 UIFoundation 0x00000001c57f6d7c -[NSTextViewportLayoutController layoutViewport] + 1299
16 UIKitCore 0x00000001bd580a3c +[UIView(Animation) performWithoutAnimation:] + 75
17 UIKitCore 0x00000001bd5582d0 -[_UITextLayoutCanvasView layoutSubviews] + 139
18 UIKitCore 0x00000001bd5544c8 -[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 1979
19 QuartzCore 0x00000001bca277fc CA::Layer::layout_if_needed(CA::Transaction*) + 499
20 QuartzCore 0x00000001bca3aeb0 CA::Layer::layout_and_display_if_needed(CA::Transaction*) + 147
21 QuartzCore 0x00000001bca4c234 CA::Context::commit_transaction(CA::Transaction*, double, double*) + 443
22 QuartzCore 0x00000001bca81630 CA::Transaction::commit() + 651
23 MediaToolbox 0x00000001ca8d0da0 videoQueueRemote_SetProperty + 367
24 AVFCore 0x00000001cad191b4 __63-[AVSampleBufferVideoRenderer _setContentLayerOnFigVideoQueue:]_block_invoke + 179
25 libdispatch.dylib 0x00000001c299cf88 _dispatch_client_callout + 19
26 libdispatch.dylib 0x00000001c29ac574 _dispatch_lane_barrier_sync_invoke_and_complete + 55
27 AVFCore 0x00000001cad190d0 -[AVSampleBufferVideoRenderer _setContentLayerOnFigVideoQueue:] + 167
28 AVFCore 0x00000001cad14674 -[AVSampleBufferVideoRenderer _createVideoQueue:errorStep:] + 195
29 AVFCore 0x00000001cad14ac8 -[AVSampleBufferVideoRenderer createVideoQueue:] + 55
30 AVFCore 0x00000001cad179cc -[AVSampleBufferVideoRenderer flushWithRemovalOfDisplayedImage:completionHandler:] + 439
31 App 0x0000000102370214 -[AVSampleBufferDisplayLayer flush] + 51
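The obvious workaround would be to hop to the main thread for the flush, since the backtrace runs through Auto Layout - a minimal sketch of that (not a confirmed fix; displayLayer stands in for the AVSampleBufferDisplayLayer in question):

// Workaround sketch only: perform the flush on the main thread.
DispatchQueue.main.async {
    displayLayer.flush()
}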
Hello,
I'm faced with a really perplexing issue. The primary problem is that sometimes I get depth and video data as expected, but at other times I don't. And sometimes I'll get both data outputs for 4-5 frames and then it'll just stop. The source code I implemented is a modified version of the sample code provided by Apple, and interestingly enough I can't re-create this issue with the Apple sample app. So I'm wondering what I could be doing wrong?
Here's the code for setting up the capture input. preferredDepthResolution is 1280 in my case. I'm running this on an iPad Pro (6th gen), iOS 17.0.3 (21A360). I encounter the issue on an iPhone 13 Pro as well, on iOS 17.0 (21A329).
private func setupLiDARCaptureInput() throws {
    // Look up the LiDAR camera.
    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back) else {
        throw ConfigurationError.lidarDeviceUnavailable
    }
    guard let format = (device.formats.last { format in
        format.formatDescription.dimensions.width == preferredWidthResolution &&
        format.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange &&
        format.videoSupportedFrameRateRanges.first(where: { $0.maxFrameRate >= 60 }) != nil &&
        !format.isVideoBinned &&
        !format.supportedDepthDataFormats.isEmpty
    }) else {
        throw ConfigurationError.requiredFormatUnavailable
    }
    guard let depthFormat = (format.supportedDepthDataFormats.last { depthFormat in
        depthFormat.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_DepthFloat16
    }) else {
        throw ConfigurationError.requiredFormatUnavailable
    }
    // Begin the device configuration.
    try device.lockForConfiguration()
    // Configure the device and depth formats.
    device.activeFormat = format
    device.activeDepthDataFormat = depthFormat
    let desc = format.formatDescription
    dimensions = CMVideoFormatDescriptionGetDimensions(desc)
    let duration = CMTime(value: 1, timescale: CMTimeScale(60))
    device.activeVideoMinFrameDuration = duration
    device.activeVideoMaxFrameDuration = duration
    // Finish the device configuration.
    device.unlockForConfiguration()
    self.device = device
    print("Selected video format: \(device.activeFormat)")
    print("Selected depth format: \(String(describing: device.activeDepthDataFormat))")
    // Add a device input to the capture session.
    let deviceInput = try AVCaptureDeviceInput(device: device)
    captureSession.addInput(deviceInput)
    guard let audioDevice = AVCaptureDevice.default(for: .audio) else {
        return
    }
    // Configure audio input - always configure audio even if isAudioEnabled is false
    audioDeviceInput = try! AVCaptureDeviceInput(device: audioDevice)
    captureSession.addInput(audioDeviceInput)
    deviceSystemPressureStateObservation = device.observe(
        \.systemPressureState,
        options: .new
    ) { _, change in
        guard let systemPressureState = change.newValue else { return }
        print("system pressure \(systemPressureState.levelAsString()) due to \(systemPressureState.factors)")
    }
}
Here's how I'm setting up the output:
private func setupLiDARCaptureOutputs() {
    // Create an object to output video sample buffers.
    videoDataOutput = AVCaptureVideoDataOutput()
    captureSession.addOutput(videoDataOutput)
    // Create an object to output depth data.
    depthDataOutput = AVCaptureDepthDataOutput()
    depthDataOutput.isFilteringEnabled = false
    captureSession.addOutput(depthDataOutput)
    audioDeviceOutput = AVCaptureAudioDataOutput()
    audioDeviceOutput.setSampleBufferDelegate(self, queue: videoQueue)
    captureSession.addOutput(audioDeviceOutput)
    // Create an object to synchronize the delivery of depth and video data.
    outputVideoSync = AVCaptureDataOutputSynchronizer(dataOutputs: [depthDataOutput, videoDataOutput])
    outputVideoSync.setDelegate(self, queue: videoQueue)
    // Enable camera intrinsics matrix delivery.
    guard let outputConnection = videoDataOutput.connection(with: .video) else { return }
    if outputConnection.isCameraIntrinsicMatrixDeliverySupported {
        outputConnection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}
The top part of my delegate implementation is as follows:
func dataOutputSynchronizer(
    _ synchronizer: AVCaptureDataOutputSynchronizer,
    didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection
) {
    // Retrieve the synchronized depth and sample buffer container objects.
    guard let syncedDepthData = synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
          let syncedVideoData = synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else {
        if synchronizedDataCollection.synchronizedData(for: depthDataOutput) == nil {
            print("no depth data at time \(mach_absolute_time())")
        }
        if synchronizedDataCollection.synchronizedData(for: videoDataOutput) == nil {
            print("no video data at time \(mach_absolute_time())")
        }
        return
    }
    print("received depth data \(mach_absolute_time())")
}
As you can see, I log to the console whenever depth data is not received. Note that because I'm driving the video frames at 60 fps, it's expected that I'll only receive depth data for every other video frame.
Console output is posted as a follow-up comment (because of the character limit). I edited some lines out for brevity. You'll see it started streaming correctly, but after a while it stopped receiving both video and depth outputs (in some other runs it works perfectly, and in others I receive no depth data whatsoever). One thing to note: I sometimes run QuickTime screen mirroring to see what the app is displaying, so I'm not sure if that's causing any interference - that said, I don't see any system pressure changes either.
Any help is most appreciated! Thanks.
I am recording video on iOS using ReplayKit and found that after copying data in the processSampleBuffer:withType: callback using memcpy, the data changes. This occurs particularly frequently when the screen content changes rapidly, making it look like the frames are overlapping.
I found that the values starting from byte 672 in the video data on my device often change. Here is the test demo:
- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer withType:(RPSampleBufferType)sampleBufferType {
    switch (sampleBufferType) {
        case RPSampleBufferTypeVideo: {
            CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
            int ret = 0;
            uint8_t *oYData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
            size_t oYSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
            uint8_t *oUVData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
            size_t oUVSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
            if (oYSize <= 672) {
                return;
            }
            uint8_t tempValue = oYData[672];
            uint8_t *tYData = malloc(oYSize);
            memcpy(tYData, oYData, oYSize);
            if (tYData[672] != oYData[672]) {
                NSLog(@"$$$$$$$$$$$$$$$$------ t:%d o:%d temp:%d", tYData[672], oYData[672], tempValue);
            }
            free(tYData);
            CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
            break;
        }
        default: {
            break;
        }
    }
}
Output:
$$$$$$$$$$$$$$$$------ t:110 o:124 temp:110
$$$$$$$$$$$$$$$$------ t:111 o:133 temp:111
$$$$$$$$$$$$$$$$------ t:124 o:138 temp:124
$$$$$$$$$$$$$$$$------ t:133 o:144 temp:133
$$$$$$$$$$$$$$$$------ t:138 o:151 temp:138
$$$$$$$$$$$$$$$$------ t:144 o:156 temp:144
$$$$$$$$$$$$$$$$------ t:151 o:135 temp:151
$$$$$$$$$$$$$$$$------ t:156 o:78 temp:156
$$$$$$$$$$$$$$$$------ t:135 o:76 temp:135
$$$$$$$$$$$$$$$$------ t:78 o:77 temp:78
$$$$$$$$$$$$$$$$------ t:76 o:80 temp:76
$$$$$$$$$$$$$$$$------ t:77 o:80 temp:77
$$$$$$$$$$$$$$$$------ t:80 o:79 temp:80
$$$$$$$$$$$$$$$$------ t:79 o:80 temp:79
Hello!
I'm trying to create a headphone safety prototype that warns me if I listen to music too loud, based on my headphones' impedance, sensitivity, and a target SPL level. All I need is data on how much power each percentage of volume outputs (I'm assuming the MacBook Pro has a 1-100% volume scale). If anyone has this info, or can direct me to someone who does, that would be great! Also, do I contact Apple Support for things like this? I'm not too sure...
Thanks!!
Hello,
I have a command line application that uses iTunesLibrary to "save" the state of what I have listened to. I have it run every night via a LaunchAgent. You can see the source here: https://github.com/bolsinga/itunes_json
Prior to Sequoia it would run nightly; I'd just have to grant it access to the Music library once, and it would be fine thereafter. However, with Sequoia it requires UI interaction to grant it access every time. This means it no longer runs unattended overnight, defeating its purpose.
I have the console logs of when this happens. You can see it in my issue tracking it here: https://github.com/bolsinga/itunes_json/issues/410
One thing that makes me wonder is that it is a command line application, not a bundle. How do I make a command line application get access to MusicKit / iTunesLibrary, and keep it thereafter? I'd like to get my pre-Sequoia behavior back. I've filed FB15592660 too.
I've granted it access to run in the background, as well as access to my Music library (please see attached screenshots).
AMPLibraryAgent 10:48:29.489944-0700 xpc Connection from framework client invalidated pid:57606 clientname:iTunesLibrary(itunes_json)
AMPLibraryAgent 10:48:29.492763-0700 service Unloading domains(14) for ClientID:iTunesLibrary(itunes_json)-1229 previous open:15 new open:1
itunes_json 10:48:59.980864-0700 connection [0x157f05800] activating connection: mach=true listener=false peer=false name=com.apple.amp.library.framework
tccd 10:48:59.982568-0700 access AUTHREQ_ATTRIBUTION: msgID=1795.214, attribution={accessing={TCCDProcess: identifier=itunes_json, pid=57652, auid=501, euid=501, binary_path=/Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json}, requesting={TCCDProcess: identifier=com.apple.AMPLibraryAgent, pid=1795, auid=501, euid=501, binary_path=/System/Library/PrivateFrameworks/AMPLibrary.framework/Versions/A/Support/AMPLibraryAgent}, },
tccd 10:48:59.982651-0700 access requestor: TCCDProcess: identifier=com.apple.AMPLibraryAgent, pid=1795, auid=501, euid=501, binary_path=/System/Library/PrivateFrameworks/AMPLibrary.framework/Versions/A/Support/AMPLibraryAgent is checking access for accessor TCCDProcess: identifier=itunes_json, pid=57652, auid=501, euid=501, binary_path=/Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json
tccd 10:48:59.995636-0700 access AUTHREQ_SUBJECT: msgID=1795.214, subject=/Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json,
tccd 10:48:59.996283-0700 access -[TCCDAccessIdentity staticCode]: static code for: identifier /Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json, type: 1: 0xc00341b00 at /Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json
tccd 10:49:00.018205-0700 access Failed to match existing code requirement for subject /Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json and service kTCCServiceMediaLibrary
cdhash H"6bc380972f4df49b337a2a05308fb7b98fbe6473" or cdhash H"0708bcaabbfbab8770522050f7e2642d4d864f31"
cdhash H"6bc380972f4df49b337a2a05308fb7b98fbe6473" or cdhash H"0708bcaabbfbab8770522050f7e2642d4d864f31"
tccd 10:49:00.018997-0700 access AUTHREQ_PROMPTING: msgID=1795.214, service=kTCCServiceMediaLibrary, subject=Sub:{/Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json}Resp:{TCCDProcess: identifier=itunes_json, pid=57652, auid=501, euid=501, binary_path=/Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json},
AMPLibraryAgent 10:49:02.489170-0700 xpc ampld> register framework ClientName:iTunesLibrary(itunes_json)
tccd 10:49:02.488189-0700 events Publishing <TCCDEvent: type=Create, service=kTCCServiceMediaLibrary, identifier_type=Path, identifier=/Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json> to 4 subscribers: {
633 = "<TCCDEventSubscriber: token=633, state=Initial, csid=(null)>";
628 = "<TCCDEventSubscriber: token=628, state=Passed, csid=com.apple.chronod>";
464 = "<TCCDEventSubscriber: token=464, state=Passed, csid=com.apple.cloudd>";
513 = "<TCCDEventSubscriber: token=513, state=Passed, csid=com.apple.photolibraryd>";
}
AMPLibraryAgent 10:49:02.490391-0700 xpc ampld> registered framework ClientName:iTunesLibrary(itunes_json) with clientID:1230
itunes_json 10:49:02.792084-0700 connection [0x147e04340] activating connection: mach=true listener=false peer=false name=com.apple.amp.artworkd
itunes_json 10:49:02.801482-0700 <Missing Description> openDatabase 0xe4af30f4493e5ef5 artwork folder Y '<private>'
itunes_json 10:49:02.805087-0700 <Missing Description> openDatabase 0xf2db6e8d7672edc9 artwork folder Y '<private>'
itunes_json 10:49:02.806736-0700 <Missing Description> openDatabase 0xfb2acd898c951851 artwork folder Y '<private>'
itunes_json 10:49:02.813286-0700 <Missing Description> openDatabase 0xf0f4919c5ff0e88 artwork folder Y '<private>'
itunes_json 10:49:09.634928-0700 connection [0x600002b6a0d0] activating connection: mach=true listener=false peer=false name=com.apple.cfprefsd.daemon
itunes_json 10:49:09.635019-0700 connection [0x600002b78000] activating connection: mach=true listener=false peer=false name=com.apple.cfprefsd.agent
AMPLibraryAgent 10:49:12.382878-0700 xpc Connection from framework client invalidated pid:57652 clientname:iTunesLibrary(itunes_json)
AMPLibraryAgent 10:49:12.383474-0700 service Unloading domains(14) for ClientID:iTunesLibrary(itunes_json)-1230 previous open:15 new open:1
itunes_json.log
I use Universal Product Codes (UPCs) in my app to reliably identify albums, after having used albumIDs for a time. AlbumIDs can change over time for no obvious reason (see here for songIDs), so I switched to UPCs since I believed they cannot change. Well, apparently they can.
A few days ago I populated a JSON file with UPCs, including 196871067713. Today, performing a MusicCatalogResourceRequest for that UPC does not return anything. Putting that UPC into an Apple Music album link redirects to https://music.apple.com/de/album/folge-89-im-geistergarten/1683337782?l=en-GB, so I assume the identifier has changed from 196871067713 to 1683337782.
Apple Music can handle that and redirects to the new UPC both in the app and on the website.
But a MusicCatalogResourceRequest cannot do that. I filed a suggestion for this (FB15167146), but I need a solution sooner. Can I somehow detect where the URL is redirecting to? Is there a way MusicCatalogResourceRequest can do this? Performing a MusicCatalogSearchRequest could be an option, but it seems unreliable when using the title as the search term. Other ideas?
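One idea for detecting the redirect target outside of MusicKit is a URLSession delegate that records where the request is redirected instead of following it silently - a minimal sketch (untested; the request URL would be built from the old UPC):

import Foundation

// Minimal sketch: capture the location an Apple Music link redirects to.
final class RedirectSpy: NSObject, URLSessionTaskDelegate {
    var redirectedTo: URL?

    func urlSession(_ session: URLSession, task: URLSessionTask,
                    willPerformHTTPRedirection response: HTTPURLResponse,
                    newRequest request: URLRequest,
                    completionHandler: @escaping (URLRequest?) -> Void) {
        redirectedTo = request.url   // the album page the old UPC now points to
        completionHandler(request)   // keep following the redirect
    }
}

// Usage (hypothetical):
// let spy = RedirectSpy()
// let session = URLSession(configuration: .default, delegate: spy, delegateQueue: nil)
// _ = try await session.data(from: upcURL)
// print(spy.redirectedTo ?? "no redirect observed")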
Thank you
I’m using AVFoundation in my iPhone application to encode a video in MP4 format with H.264, which can then be shared or exported.
Do I need to pay a license fee to MPEG LA for using the H.264 format? Or are these fees already covered by Apple?
I've read articles suggesting that Apple covers these fees when encoding is done through its native APIs (or via its dedicated encoding hardware), but I haven't found any explicit confirmation of this point in the documentation or contracts... Did I miss something?