Hello,
I have a command line application that uses iTunesLibrary to "save" the state of what I have listened to. I have it run every night via a LaunchAgent. You can see the source here: https://github.com/bolsinga/itunes_json
Prior to Sequoia it would run nightly. I'd just have to grant it access to the Music library once, and it would be fine thereafter. However with Sequoia it requires UI interaction to grant it access every time. This makes it no longer run unattended overnight, defeating its purpose.
I have the console logs of when this happens. You can see it in my issue tracking it here: https://github.com/bolsinga/itunes_json/issues/410
One thing that makes me wonder is that it is a command line application, not a bundle. How do I make a command line application get access to MusicKit / iTunesLibrary, and keep it thereafter? I'd like to get my pre-Sequoia behavior back. I've filed FB15592660 too.
I've granted it access to run in the background, as well as access to my Music library (please see attached screenshots).
AMPLibraryAgent 10:48:29.489944-0700 xpc Connection from framework client invalidated pid:57606 clientname:iTunesLibrary(itunes_json)
AMPLibraryAgent 10:48:29.492763-0700 service Unloading domains(14) for ClientID:iTunesLibrary(itunes_json)-1229 previous open:15 new open:1
itunes_json 10:48:59.980864-0700 connection [0x157f05800] activating connection: mach=true listener=false peer=false name=com.apple.amp.library.framework
tccd 10:48:59.982568-0700 access AUTHREQ_ATTRIBUTION: msgID=1795.214, attribution={accessing={TCCDProcess: identifier=itunes_json, pid=57652, auid=501, euid=501, binary_path=/Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json}, requesting={TCCDProcess: identifier=com.apple.AMPLibraryAgent, pid=1795, auid=501, euid=501, binary_path=/System/Library/PrivateFrameworks/AMPLibrary.framework/Versions/A/Support/AMPLibraryAgent}, },
tccd 10:48:59.982651-0700 access requestor: TCCDProcess: identifier=com.apple.AMPLibraryAgent, pid=1795, auid=501, euid=501, binary_path=/System/Library/PrivateFrameworks/AMPLibrary.framework/Versions/A/Support/AMPLibraryAgent is checking access for accessor TCCDProcess: identifier=itunes_json, pid=57652, auid=501, euid=501, binary_path=/Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json
tccd 10:48:59.995636-0700 access AUTHREQ_SUBJECT: msgID=1795.214, subject=/Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json,
tccd 10:48:59.996283-0700 access -[TCCDAccessIdentity staticCode]: static code for: identifier /Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json, type: 1: 0xc00341b00 at /Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json
tccd 10:49:00.018205-0700 access Failed to match existing code requirement for subject /Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json and service kTCCServiceMediaLibrary
cdhash H"6bc380972f4df49b337a2a05308fb7b98fbe6473" or cdhash H"0708bcaabbfbab8770522050f7e2642d4d864f31"
cdhash H"6bc380972f4df49b337a2a05308fb7b98fbe6473" or cdhash H"0708bcaabbfbab8770522050f7e2642d4d864f31"
tccd 10:49:00.018997-0700 access AUTHREQ_PROMPTING: msgID=1795.214, service=kTCCServiceMediaLibrary, subject=Sub:{/Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json}Resp:{TCCDProcess: identifier=itunes_json, pid=57652, auid=501, euid=501, binary_path=/Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json},
AMPLibraryAgent 10:49:02.489170-0700 xpc ampld> register framework ClientName:iTunesLibrary(itunes_json)
tccd 10:49:02.488189-0700 events Publishing <TCCDEvent: type=Create, service=kTCCServiceMediaLibrary, identifier_type=Path, identifier=/Users/bolsinga/Applications/itunes_json/Products/usr/local/bin/itunes_json> to 4 subscribers: {
633 = "<TCCDEventSubscriber: token=633, state=Initial, csid=(null)>";
628 = "<TCCDEventSubscriber: token=628, state=Passed, csid=com.apple.chronod>";
464 = "<TCCDEventSubscriber: token=464, state=Passed, csid=com.apple.cloudd>";
513 = "<TCCDEventSubscriber: token=513, state=Passed, csid=com.apple.photolibraryd>";
}
AMPLibraryAgent 10:49:02.490391-0700 xpc ampld> registered framework ClientName:iTunesLibrary(itunes_json) with clientID:1230
itunes_json 10:49:02.792084-0700 connection [0x147e04340] activating connection: mach=true listener=false peer=false name=com.apple.amp.artworkd
itunes_json 10:49:02.801482-0700 <Missing Description> openDatabase 0xe4af30f4493e5ef5 artwork folder Y '<private>'
itunes_json 10:49:02.805087-0700 <Missing Description> openDatabase 0xf2db6e8d7672edc9 artwork folder Y '<private>'
itunes_json 10:49:02.806736-0700 <Missing Description> openDatabase 0xfb2acd898c951851 artwork folder Y '<private>'
itunes_json 10:49:02.813286-0700 <Missing Description> openDatabase 0xf0f4919c5ff0e88 artwork folder Y '<private>'
itunes_json 10:49:09.634928-0700 connection [0x600002b6a0d0] activating connection: mach=true listener=false peer=false name=com.apple.cfprefsd.daemon
itunes_json 10:49:09.635019-0700 connection [0x600002b78000] activating connection: mach=true listener=false peer=false name=com.apple.cfprefsd.agent
AMPLibraryAgent 10:49:12.382878-0700 xpc Connection from framework client invalidated pid:57652 clientname:iTunesLibrary(itunes_json)
AMPLibraryAgent 10:49:12.383474-0700 service Unloading domains(14) for ClientID:iTunesLibrary(itunes_json)-1230 previous open:15 new open:1
Hello.
We are trying to get the audio volume from the microphone.
We have two questions.
1. Can anyone tell me about AVAudioEngine.InputNode.volume?
We expected AVAudioEngine.InputNode.volume to return 0 in silence and a float value up to 1.0 depending on the input level, but it appears that 1.0 (the default value) is returned at all times.
In which cases does it return 0.5 or 0?
Sample code is below. The microphone itself works correctly.
// instance members
private var engine: AVAudioEngine!
private var node: AVAudioInputNode!

// start method
self.engine = .init()
self.node = engine.inputNode
engine.prepare()
try! engine.start()

// volume getter
print("\(self.node.volume)")
2. What is the best practice for getting the audio volume from the microphone?
Requirements are:
Without AVAudioRecorder (we use the audio for streaming).
It should withstand high-frequency access.
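For reference, here is the tap-based approach we are considering instead (a minimal sketch and an assumption on our side, not confirmed best practice; our understanding is that inputNode.volume is a mixing gain you set rather than a live level meter, so we compute an RMS level from the tapped buffers):
import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    let count = Int(buffer.frameLength)
    // Root-mean-square of the buffer: near 0 in silence, approaching 1.0 when loud.
    var sum: Float = 0
    for i in 0..<count { sum += samples[i] * samples[i] }
    let rms = (sum / Float(max(count, 1))).squareRoot()
    print("level: \(rms)")
}
engine.prepare()
try! engine.start()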
Testing info
device: iPhone XR
OS version: iOS 18
Best Regards.
ColorSync throughout the system interprets the Rec. ITU-R BT.709-5 color profile differently in macOS 15 than it did on macOS 14, leading to severe color differences. This is a breaking, problem-causing color change.
Here's a comparison of one way the results can appear different.
Steps to Reproduce
Open this image above (named here as sRGB_Bars.png) in ColorSync Utility. (This should be an sRGB profiled image if the forums don't mess with that.)
With Digital Color Meter open and set to "Display in sRGB" [0-255], you can see the gray bars progress left-to-right as 0, 23, 46, 69, 92, 115, etc. Those are the correct reference values.
At the bottom of the window in ColorSync use [Match to Profile] [Display -> Rec. ITU-R BT.709-5], then click the Apply Button. You can verify the image now uses the Rec 709 profile with the (i) Get Info button in the toolbar.
Use "Save As…" to save the image with a different name.
Do the steps above on macOS 14 and macOS 15 to create two images: Rec709_CreatedOnMacOS14 and Rec709_CreatedOnMacOS15.
Compare the two images on BOTH operating system versions, and you'll see they are significantly different from each other.
Rec709_CreatedOnMacOS14.png, when viewed on macOS 15, has gray values of:
[0, 1, 18, 43, 67, ...]
Rec709_CreatedOnMacOS15.png, when viewed on macOS 15, has gray values of:
[0, 51, 72, 94, 115, ... ]
These are MASSIVE differences.
Significance
This is not just a problem that affects ColorSync Utility or Preview, etc. The same gamma interpretation difference is affecting the Core Video, Core Image, Video Toolbox, etc. pipelines as well. That gets complicated to talk about, and this example is the simplest I can boil it down to.
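To make the pipeline impact concrete, here is a minimal Core Image check of my own (not part of the original repro; the 115 gray bar and the 1x1 render are only for illustration) that can be run on both macOS 14 and macOS 15 and compared:
import CoreImage
import CoreGraphics

let context = CIContext()
let srgb = CGColorSpace(name: CGColorSpace.sRGB)!
let rec709 = CGColorSpace(name: CGColorSpace.itur_709)!

// One of the reference bars: sRGB gray 115/255.
let gray = CIColor(red: 115.0/255.0, green: 115.0/255.0, blue: 115.0/255.0, colorSpace: srgb)!
let image = CIImage(color: gray).cropped(to: CGRect(x: 0, y: 0, width: 1, height: 1))

var pixel = [UInt8](repeating: 0, count: 4)
context.render(image,
               toBitmap: &pixel,
               rowBytes: 4,
               bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
               format: .RGBA8,
               colorSpace: rec709)     // match into the Rec. 709 destination
print("Rec. 709 RGBA bytes: \(pixel)") // do these bytes differ between macOS 14 and 15?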
Conclusion
Significant bug? What's going on?
I observe significant performance differences when encoding a video in mp4 format (H264). The code I use is standard (using AVAssetWriter, AVAssetWriterInput...).
Here is what I notice when I run the same code on different platforms:
On an iPhone, the video is encoded in 3 seconds (iPhone 13, 14, 15, 16, Pro...).
On a Mac equipped with an M2 Pro, the video is encoded in 50 seconds.
On a Mac equipped with an Intel processor (2.3 GHz Intel Xeon W, 18 cores), the video is encoded in 2 minutes.
The encoding on an iPhone is very fast thanks to hardware acceleration. However, I don't understand why I don't get similar performance with a Mac M2 Pro, which also has a dedicated hardware-acceleration component (the H.264 media engine).
Is hardware acceleration disabled on a Mac?
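One thing I would check first (a sketch of my own, not a confirmed diagnosis): ask VideoToolbox directly whether a hardware H.264 encoder session can be created on the Mac. If this fails, the export is presumably falling back to the software encoder.
import VideoToolbox

var session: VTCompressionSession?
let spec = [kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder as String: true]
let status = VTCompressionSessionCreate(allocator: kCFAllocatorDefault,
                                        width: 1920,
                                        height: 1080,
                                        codecType: kCMVideoCodecType_H264,
                                        encoderSpecification: spec as CFDictionary,
                                        imageBufferAttributes: nil,
                                        compressedDataAllocator: nil,
                                        outputCallback: nil,
                                        refcon: nil,
                                        compressionSessionOut: &session)
// A status of 0 (noErr) means a hardware-only H.264 session could be created for 1080p.
print(status == 0 ? "Hardware H.264 encoder available" : "No hardware H.264 encoder, status \(status)")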
See Configuration Details at the end of this message.
Despite numerous attempts, I have been unable to determine the correct syntax to fetch photo albums from my iPad Pro 13-inch using Xcode and Swift.
All the photo albums were synced to the iPad Pro 13-inch using the latest version of Apple iTunes for Windows from an external Western Digital G-Drive hard drive (no iCloud). All synced albums appear under "From My Mac" on the iPad. I only want to access each album's photo and video count.
See the sample code snippet below. I have tried multiple subtype options and album types without success. Zero albums are always returned, despite there being around 3,900 albums in the iPad Pro photo library. Authorization to the photo library does not appear to be the problem.
import Photos

PHPhotoLibrary.requestAuthorization { status in
    if status == .authorized {
        // Fetch every user album; subtype .any should cover synced albums too.
        let result = PHAssetCollection.fetchAssetCollections(with: .album, subtype: .any, options: nil)
        if result.count == 0 {
            print("No albums found.")
            return
        }
    }
}
Any help or suggestions would be greatly appreciated.
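For reference, this is the kind of enumeration I ultimately want to run (a sketch only; whether .albumSyncedAlbum is the right subtype for the "From My Mac" albums is exactly what I am unsure about):
import Photos

let albums = PHAssetCollection.fetchAssetCollections(with: .album,
                                                     subtype: .albumSyncedAlbum,
                                                     options: nil)
albums.enumerateObjects { collection, _, _ in
    // Count the assets in each album.
    let assets = PHAsset.fetchAssets(in: collection, options: nil)
    print("\(collection.localizedTitle ?? "Untitled"): \(assets.count) items")
}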
Configuration Details
iPad Pro 13-inch (M4) (iPad16,6)
iPadOS = 17.7
iCloud = Turned Off
iPad Pro Photo Library Albums = 3900
iPad Pro Photo Library Photos = 118000
iPad Pro Photo Library Videos = 4800
MacOS = Sonoma 14.6.1
Xcode Version = 16.0
Swift Version = 5.0 (Xcode Default)
Microsoft Windows 10 Pro Version = 22H2
Apple iTunes for Windows = 12.13.4.4
Hi,
I'm working on an integration with the Apple Music Feed, but over the last day, I've been getting a 500 Upstream Service Error on 99% of the API calls I make.
Remarkably, retrying the same endpoint 20-30 times sometimes gives the correct response, but mostly it's a 500 error.
Just to give an example, this:
GET https://api.media.apple.com/v1/feed/album/latest
Returns this generic error:
{
  "errors": [
    {
      "id": "U4ARRA2QDCGYKYRI2IPEJVBTHY",
      "title": "Upstream Service Error",
      "detail": "Call to get metadata for album feed failed",
      "status": "500",
      "code": "50001"
    }
  ]
}
The same goes for other feeds like song, artist, and so on.
Before today, I did get the same error message sometimes, but a few retries would solve the issue.
Any insight on what's happening and/or an ETA on fixing it?
Thank you
Can anyone please guide me on how to use SFCustomLanguageModelData.CustomPronunciation?
I am following the below example from WWDC23
https://wwdcnotes.com/documentation/wwdcnotes/wwdc23-10101-customize-ondevice-speech-recognition/
When using this kind of custom pronunciation, we need the X-SAMPA string of the specific word.
There are tools available on the web to do the same
Word to IPA: https://openl.io/
IPA to X-SAMPA: https://tools.lgm.cl/xsampa.html
But these tools do not seem to produce the same kind of X-SAMPA strings used in the demo. For example, "Winawer" is converted to "w I n aU @r" in the demo, while the online tools give "/wI"nA:w@r/".
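For reference, this is the shape of the WWDC23 sample I am following (a sketch; the locale, identifier, and output path are placeholders of mine, and the phoneme string is the one from the demo):
import Speech

let data = SFCustomLanguageModelData(locale: Locale(identifier: "en_US"),
                                     identifier: "com.example.app",   // placeholder
                                     version: "1.0") {
    // X-SAMPA string as shown in the WWDC23 demo
    SFCustomLanguageModelData.CustomPronunciation(grapheme: "Winawer",
                                                  phonemes: ["w I n aU @r"])
}
// Export the custom language model data for later preparation by the recognizer.
try await data.export(to: URL(fileURLWithPath: "/tmp/CustomLMData.bin"))   // placeholder path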
Hi folks:
I've been creating .reality files in Reality Composer for over a year. Some of the files are up to 500 MB and, prior to the last month, they opened fine as AR-projected experiences on even basic iPhones and iPads. Now, I think since iOS 18, a 64 MB file will open as an AR experience, but files from about 350 MB up don't open: Files just opens a window displaying the name of the file, that it's a .reality file, and the file size, but it no longer opens either an AR or Object display of the .reality scene. Has a new file-size limit been put on .reality files that Files will open, or what else is going on here? I have a client who was about to launch an experience based on a .reality file I can no longer open. Please help.
I set 3 controls on the AVCaptureSession and then remove them all. The number of controls in the session is indeed 0, but the Camera Control button still shows the previous 3 controls. If I only go from 3 -> 2 or 3 -> 1, the controls update correctly; 3 -> 0 does not work, while 0 -> 3 does.
if (self.captureControl.zoom) {
if (self.zoomScaleControl) {
self.zoomScaleControl.enabled = false;
[_session removeControl:self.zoomScaleControl];
}
AVCaptureSlider *zoomSlider = [self.captureControl.zoom fetchCaptureSlider];
[zoomSlider setActionQueue:dispatch_get_main_queue() action:^(float zoomFactor) {
@strongify(self);
if ([self.dataOutputDelegate respondsToSelector:@selector(videoCaptureSession:tryChangeZoomScale:)]) {
[self.dataOutputDelegate videoCaptureSession:self tryChangeZoomScale:zoomFactor];
}
}];
self.zoomScaleControl = zoomSlider;
} else {
if (self.zoomScaleControl) {
self.zoomScaleControl.enabled = false;
[_session removeControl:self.zoomScaleControl];
}
self.zoomScaleControl = nil;
}
if (self.captureControl.exposure) {
if (self.exposureBiasControl) {
self.exposureBiasControl.enabled = false;
[_session removeControl:self.exposureBiasControl];
}
AVCaptureSlider *exposureSlider = [self.captureControl.exposure fetchCaptureSlider];
[exposureSlider setActionQueue:dispatch_get_main_queue() action:^(float bias) {
@strongify(self);
if ([self.dataOutputDelegate respondsToSelector:@selector(videoCaptureSession:tryChangeExposureBias:)]) {
[self.dataOutputDelegate videoCaptureSession:self tryChangeExposureBias:bias];
}
}];
self.exposureBiasControl = exposureSlider;
} else {
if (self.exposureBiasControl) {
self.exposureBiasControl.enabled = false;
[_session removeControl:self.exposureBiasControl];
}
self.exposureBiasControl = nil;
}
if (self.captureControl.len) {
if (self.lenControl) {
self.lenControl.enabled = false;
[_session removeControl:self.lenControl];
}
ORLenCaptureControlCustomModel *len = self.captureControl.len;
AVCaptureIndexPicker *picker = [len fetchCaptureSlider];
[picker setActionQueue:dispatch_get_main_queue() action:^(NSInteger selectedIndex) {
@strongify(self);
if ([self.dataOutputDelegate respondsToSelector:@selector(videoCaptureSession:didChangeLenIndex:datas:)]) {
[self.dataOutputDelegate videoCaptureSession:self didChangeLenIndex:selectedIndex datas:self.captureControl.len.indexDatas];
}
}];
self.lenControl = picker;
} else {
if (self.lenControl) {
self.lenControl.enabled = false;
[_session removeControl:self.lenControl];
}
self.lenControl = nil;
}
if ([_session canAddControl:self.zoomScaleControl]) {
[_session addControl:self.zoomScaleControl];
} else {
self.zoomScaleControl = nil;
}
if ([_session canAddControl:self.lenControl]) {
[_session addControl:self.lenControl];
} else {
self.lenControl = nil;
}
if ([_session canAddControl:self.exposureBiasControl]) {
[_session addControl:self.exposureBiasControl];
} else {
self.exposureBiasControl = nil;
}
if (_session.controlsDelegate == nil) {
[_session setControlsDelegate:self queue:GetCaptureControlQueue()];
}
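A minimal Swift distillation of the 3 -> 0 case (a sketch only, where session is assumed to be the already-configured AVCaptureSession):
session.beginConfiguration()
for control in session.controls {
    session.removeControl(control)
}
session.commitConfiguration()
// Prints 0, yet the Camera Control overlay still shows the previous 3 controls.
print(session.controls.count)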
We captured a spatial video with an iPhone 15 Pro.
When we try to export the video with AVAssetExportSession and AVAssetExportPresetMVHEVC960x960, it always goes to the failed state, and
exportSession.error?.localizedDescription yields an "Operation Stopped" error.
The code implementation is straightforward; other HEVC files export fine. This problem occurs only with the mv-hevc file.
func exportSpatialVideo(videoFilePath: String, outputUrl: URL) {
    let url: URL? = URL(fileURLWithPath: videoFilePath)
    let asset: AVAsset = AVAsset(url: url!)
    print(asset.description)
    print(asset.tracks.first?.mediaType.rawValue ?? "no tracks")
    let preset = "AVAssetExportPresetMVHEVC960x960"
    let exportSession: AVAssetExportSession = AVAssetExportSession(asset: asset, presetName: preset)!
    exportSession.outputURL = outputUrl
    exportSession.shouldOptimizeForNetworkUse = true
    exportSession.outputFileType = AVFileType.mov
    exportSession.exportAsynchronously(completionHandler: {
        switch exportSession.status {
        case .unknown:
            print("Unknown Error")
        case .waiting:
            print("waiting ...")
        case .exporting:
            print("exporting ...")
        case .completed:
            print("completed.")
        case .failed:
            print("failed. \(String(describing: exportSession.error?.localizedDescription))")
        case .cancelled:
            print("cancelled.")
        @unknown default:
            break
        }
    })
}
Is there any solution for this?
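One check I would try first (a sketch, assuming the AVAssetExportPresetMVHEVC960x960 constant rather than the raw string, and that asset is the MV-HEVC asset from above): ask AVFoundation whether the preset is even compatible with this asset and output type before exporting.
import AVFoundation

AVAssetExportSession.determineCompatibility(ofExportPreset: AVAssetExportPresetMVHEVC960x960,
                                            with: asset,
                                            outputFileType: .mov) { compatible in
    print("MVHEVC960x960 compatible: \(compatible)")
}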
We are currently in the process of migrating our application from using ALAssetsLibrary to PHPhotoLibrary to ensure compatibility with the latest versions of iOS. However, we have noticed a discrepancy in the file sizes of images obtained using PHPhotoLibrary compared to those obtained using ALAssetsLibrary.
Specifically, we would like to understand the following points:
1. Reason for File Size Differences:
What are the reasons for the difference in file sizes between images obtained using ALAssetsLibrary and those obtained using PHPhotoLibrary?
Could you provide detailed information on the settings and options in PHPhotoLibrary that affect the size and quality of the images?
2. Optimal Settings:
What are the optimal settings in PHPhotoLibrary to obtain images with the same quality and file size as those obtained using ALAssetsLibrary?
If possible, could you provide code examples or recommended option settings?
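For context, these are the settings I would expect to compare against the ALAssetsLibrary output (a sketch and an assumption on my side, not a confirmed recommendation; asset stands in for a PHAsset from the library):
import Photos

let options = PHImageRequestOptions()
options.version = .original             // unadjusted original, no edits applied
options.deliveryMode = .highQualityFormat
options.resizeMode = .none              // no resizing
options.isNetworkAccessAllowed = true   // allow fetching iCloud originals

PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { data, uti, _, _ in
    print("bytes: \(data?.count ?? 0), type: \(uti ?? "unknown")")
}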
In iOS, when I use AVPlayerViewController to play back a slow motion video, it has a "ramp-up" stage at the start and a "ramp-down" stage at the end, and the video plays at the normal speed (i.e. not slow motion) during these stages.
My question is: are these non-slow-motion stages defined in the video file itself (e.g. some kind of metadata)? Or is it just a playback approach used by AVPlayerViewController?
Thanks!
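In case it helps frame the question, this is how I would check whether the ramp is authored in the asset itself (a sketch, assuming asset is the AVAsset returned for the slow-motion clip, e.g. via PHImageManager's requestAVAsset):
import AVFoundation

func dumpSpeedRamp(for asset: AVAsset) async throws {
    guard let track = try await asset.loadTracks(withMediaType: .video).first else { return }
    for segment in try await track.load(.segments) {
        let source = segment.timeMapping.source.duration.seconds
        let target = segment.timeMapping.target.duration.seconds
        // A ratio below 1 means the media is stretched on the timeline, i.e. plays in slow motion there.
        print("at \(segment.timeMapping.target.start.seconds)s, rate \(source / target)")
    }
}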
I am using PHImageRequestOptions and PHImageManager to load images to my app.
I use version .original with resizeMode .none, and version .original with resizeMode .exact.
Both used to work well, but since iOS 18, version .original with resizeMode .exact doesn't work anymore.
The images are loaded, but they are not shown (only the frames?).
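Here is the failing combination, distilled (a sketch; asset and the target size are placeholders of mine):
import Photos

let options = PHImageRequestOptions()
options.version = .original
options.resizeMode = .exact

PHImageManager.default().requestImage(for: asset,
                                      targetSize: CGSize(width: 300, height: 300),
                                      contentMode: .aspectFill,
                                      options: options) { image, info in
    // Since iOS 18 the callback still fires, but the returned image does not render for me.
    print("image: \(String(describing: image)), degraded: \(String(describing: info?[PHImageResultIsDegradedKey]))")
}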
Anyone knows why?
Thank you for reading.
I'm trying to get the item that's assigned to the currentEntry when playing any song, but it is currently coming up nil while the song is playing. Note that currentEntry returns:
MusicPlayer.Queue.Entry(id: "evn7UntpH::ncK1NN3HS", title: "SONG TITLE")
I'm a bit stumped on the API usage. If the song is playing, how could the queue item be nil?
if queueObserver == nil {
    queueObserver = ApplicationMusicPlayer.shared.queue.objectWillChange
        .sink { [weak self] in
            self?.handleNowPlayingChange()
        }
}

private func handleNowPlayingChange() {
    if let currentItem = ApplicationMusicPlayer.shared.queue.currentEntry {
        if let song = currentItem.item {
            self.currentlyPlayingID = song.id
            self.objectWillChange.send()
            print("Song ID: \(song.id)")
        } else {
            print("NO ITEM: \(currentItem)")
        }
    } else {
        print("No Entries: \(ApplicationMusicPlayer.shared.queue.currentEntry)")
    }
}
The new iPhone 16 supports spatial audio recordings in the camera app when recording videos. Is it possible to also record spatial audio without video, and is it possible for 3rd party developers to do so? If so, how do I need to configure AVAudioSession and/or AVAudioEngine to record spatial audio in my audio recording app on iPhone 16?
I am making an app that can take two videos and then stitch them together on the screen (one video on the top half and the other on the bottom half).
This is achieved with AVMutableComposition, and then I am using AVAssetExportSession to export an mp4 file:
guard let export = AVAssetExportSession(asset: composition, presetName: AVAssetExportPreset1920x1080) else {
return
}
export.exportAsynchronously {
    // ....
}
When the two input videos are around 1GB each, starting the export session immediately increases memory usage by ~2GB, as if it moves the input files into memory immediately (my guess), and at some point my app is killed for using too much memory.
Is there a way to avoid this upfront memory usage and/or avoid getting killed?
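For reference, this is roughly how the composition is built (a simplified sketch of my setup; it assumes both clips are 1920x1080 with equal durations and omits audio and error handling). The export session above then takes the returned composition as its asset, with the video composition assigned to its videoComposition property.
import AVFoundation
import CoreGraphics

func makeStackedComposition(top: AVAsset, bottom: AVAsset) async throws
    -> (AVMutableComposition, AVMutableVideoComposition) {
    let composition = AVMutableComposition()
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = CGSize(width: 1920, height: 2160)
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)

    var layerInstructions: [AVMutableVideoCompositionLayerInstruction] = []
    for (index, asset) in [top, bottom].enumerated() {
        let sourceTrack = try await asset.loadTracks(withMediaType: .video).first!
        let duration = try await asset.load(.duration)
        let track = composition.addMutableTrack(withMediaType: .video,
                                                preferredTrackID: kCMPersistentTrackID_Invalid)!
        try track.insertTimeRange(CMTimeRange(start: .zero, duration: duration),
                                  of: sourceTrack, at: .zero)
        let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
        // Shift the second clip down into the bottom half of the frame.
        instruction.setTransform(CGAffineTransform(translationX: 0, y: CGFloat(index) * 1080), at: .zero)
        layerInstructions.append(instruction)
    }

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
    instruction.layerInstructions = layerInstructions
    videoComposition.instructions = [instruction]
    return (composition, videoComposition)
}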
AVAudioRecorder leaves a completely useless chunk of file if a crash happens while recording.
I need to be able to recover. I'm thinking of streaming the recording to disk. I know that is possible with AVAudioEngine, but I also know that API is a headache that will lead to unexpected crashes unless you're lucky or you're the person who built it.
Does Apple have a recommended strategy for failsafe audio recordings? I'm thinking of chunking recordings using many instances of AVAudioRecorder and then stitching those chunks together.
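This is the stream-to-disk shape I have in mind (a minimal sketch of my own idea, not an Apple-recommended pattern; the file path is a placeholder):
import AVFoundation

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)
let url = FileManager.default.temporaryDirectory.appendingPathComponent("take.caf")
let file = try AVAudioFile(forWriting: url, settings: format.settings)

input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    // Every buffer goes straight to disk, so a crash only loses the final buffer.
    try? file.write(from: buffer)
}
engine.prepare()
try engine.start()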
I'm working on a little light and sound controller in Swift, driving DMX lights and audio.
For the audio portion, I need to play a bunch of looping sounds (long-duration MP3s), and occasionally play sound effects (short-duration sounds, varying formats). I want all of this mixed into selected channels on specific devices. That is, I might have one audio stream going to the left channel, and a completely different one going to the right channel.
What's the right API to do this from Swift? Core Audio? AVPlayer stuff?
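This is the kind of starting point I am imagining with AVAudioEngine (a sketch of my own; true per-device, per-channel routing would still need output channel maps or Core Audio device selection, which this skips, and the file path is a placeholder):
import AVFoundation

let engine = AVAudioEngine()
let leftPlayer = AVAudioPlayerNode()
let rightPlayer = AVAudioPlayerNode()
engine.attach(leftPlayer)
engine.attach(rightPlayer)
engine.connect(leftPlayer, to: engine.mainMixerNode, format: nil)
engine.connect(rightPlayer, to: engine.mainMixerNode, format: nil)
leftPlayer.pan = -1.0   // hard left
rightPlayer.pan = 1.0   // hard right

let loop = try AVAudioFile(forReading: URL(fileURLWithPath: "/path/to/loop.mp3"))
try engine.start()
leftPlayer.scheduleFile(loop, at: nil) { }   // reschedule in the completion handler to loop
leftPlayer.play()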
I'm running into an issue where in some cases, when the AUHostingServiceXPC_arrow process is shut down by Logic, the process is terminated abruptly without calling AP_Close on all of the plugins hosted in the process. In our case, we have filesystem resources we need to clean up, and having stale files around from the last run can cause issues in new sessions, so this leak is having some pretty gnarly effects.
I can reproduce the issue using only Apple sample plugins, and it seems to be triggered by a timeout. If I have two different AU plugins in the session, and I add a 1 second sleep to the destructor of one of the sample plugins, Logic will force terminate the process and the remaining destructors are not called (even for the plugins without the 1 second sleep).
Is there a way to avoid this behavior? Or to safely clean up our plugin even if other plugins in the session take a second to tear down?
I have an app that gets data from Music.app with both the iTunesLibrary and MusicKit.
iTunesLibrary has ITLibArtist.sortName and ITLibAlbum.sortTitle and ITLibAlbum.sortAlbumArtist.
I can’t seem to find an equivalent in MusicKit. How are those properties obtained using MusicKit? Thanks.
FYI, I have filed FB15554956 on this. You may also see my code at https://github.com/bolsinga/itunes_json