I'm trying to capture audio samples from the selected output device on macOS using ScreenCaptureKit.
Thank you
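In case it helps anyone looking at the same problem, here is a minimal sketch of how I understand system-audio capture with ScreenCaptureKit works on macOS 13+. The class name and the "first display" filter are my own choices, and as far as I know ScreenCaptureKit taps the captured content's audio mix rather than an arbitrary output device you pick yourself:

import ScreenCaptureKit
import CoreMedia

final class SystemAudioCapturer: NSObject, SCStreamOutput {
    private var stream: SCStream?

    func start() async throws {
        // Capture everything on the first display; no windows excluded.
        let content = try await SCShareableContent.current
        guard let display = content.displays.first else { return }
        let filter = SCContentFilter(display: display, excludingWindows: [])

        let config = SCStreamConfiguration()
        config.capturesAudio = true      // enable the audio track (macOS 13+)
        config.sampleRate = 48_000
        config.channelCount = 2

        let stream = SCStream(filter: filter, configuration: config, delegate: nil)
        try stream.addStreamOutput(self, type: .audio, sampleHandlerQueue: .global(qos: .userInitiated))
        try await stream.startCapture()
        self.stream = stream             // keep the stream alive while capturing
    }

    // Audio arrives here as CMSampleBuffers on the handler queue.
    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {
        guard type == .audio else { return }
        print("Received \(CMSampleBufferGetNumSamples(sampleBuffer)) audio frames")
    }
}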
Audio
Integrate music and other audio content into your apps.
Posts under Audio tag: 80 posts
Hello all!
I'm setting up a really simple media player in my SwiftUI app.
The code is the following:
import AVFoundation
import MediaPlayer

class AudioPlayerProvider {
    private var player: AVPlayer

    init() {
        self.player = AVPlayer()
        self.player.automaticallyWaitsToMinimizeStalling = false
        self.setupAudioSession()
        self.setupRemoteCommandCenter()
    }

    private func setupAudioSession() {
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
            try AVAudioSession.sharedInstance().setActive(true)
        } catch {
            print("Failed to set up audio session: \(error.localizedDescription)")
        }
    }

    private func setupRemoteCommandCenter() {
        let commandCenter = MPRemoteCommandCenter.shared()
        commandCenter.playCommand.addTarget { [weak self] _ in
            guard let self = self else { return .commandFailed }
            self.play()
            return .success
        }
        commandCenter.pauseCommand.addTarget { [weak self] _ in
            guard let self = self else { return .commandFailed }
            self.pause()
            return .success
        }
    }

    func loadAudio(from urlString: String) {
        guard let url = URL(string: urlString) else { return }
        let asset = AVAsset(url: url)
        let playerItem = AVPlayerItem(asset: asset)
        self.player.pause()
        self.player.replaceCurrentItem(with: playerItem)
        NotificationCenter.default.addObserver(self, selector: #selector(self.streamFinished), name: .AVPlayerItemDidPlayToEndTime, object: self.player.currentItem)
    }

    func setMetadata(title: String, artist: String, duration: Double) {
        var nowPlayingInfo = [
            MPMediaItemPropertyTitle: title,
            MPMediaItemPropertyArtist: artist,
            MPMediaItemPropertyPlaybackDuration: duration,
            MPNowPlayingInfoPropertyPlaybackRate: 1.0,
        ] as [String: Any]
        MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
    }

    @objc
    private func streamFinished() {
        self.player.seek(to: .zero)
        try? AVAudioSession.sharedInstance().setActive(false)
        MPNowPlayingInfoCenter.default().playbackState = .stopped
    }

    func play() {
        MPNowPlayingInfoCenter.default().playbackState = .playing
        self.player.play()
    }

    func pause() {
        MPNowPlayingInfoCenter.default().playbackState = .paused
        self.player.pause()
    }
}
Pretty much textbook.
The code works when called from views. It also shows up on the lock screen / Dynamic Island (when in the background), but here lie the problems:
The play/pause buttons appear neither in the Control Center nor in the Dynamic Island. If I tap where those buttons should be, the command works; the icons just don't appear.
The waveform animation does not animate while playing.
Many audio apps work just fine, so my code must be lacking something, but I don't know what.
What is missing?
Thanks in advance!
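One thing I would double-check (an assumption from reading the snippet, not a confirmed fix): the system controls generally want the elapsed time and playback rate in nowPlayingInfo kept in sync with the player whenever play/pause is toggled, roughly like this sketch against the class above:

private func updateNowPlayingPlayback(rate: Double) {
    var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
    // Tell the system where playback currently is and whether it is running.
    info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = player.currentTime().seconds
    info[MPNowPlayingInfoPropertyPlaybackRate] = rate
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}

func play() {
    player.play()
    updateNowPlayingPlayback(rate: 1.0)
}

func pause() {
    player.pause()
    updateNowPlayingPlayback(rate: 0.0)
}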
Hi,
I would like to stream live audio (HLS) from the watch itself. Our app can start the stream on the paired device, but when the phone is not nearby I want to start streaming on watchOS (just like the Spotify or Apple Music app). I watched the 2019 WWDC video about streaming and also looked at the documentation.
Documentation: https://developer.apple.com/documentation/watchkit/storyboard_support/playing_background_audio
I can present the route controller to select the output, but, for example, after selecting AirPods the stream does not start.
Here is the code (I have enabled the audio background mode):
do {
    try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, policy: .longFormAudio, options: [])
} catch {
    print("no audiosession!")
}

AVAudioSession.sharedInstance().activate(options: []) { success, error in
    dump(success)
    dump(error)
    DispatchQueue.main.async {
        if let streamURL = moduleItem.media?[0] as? String {
            dump(streamURL)
            let asset = AVURLAsset(url: URL(string: streamURL)!, options: nil)
            let item = AVPlayerItem(asset: asset)
            let player = AVQueuePlayer(playerItem: item)
            player.play()
        }
    }
}
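One guess from the snippet (an assumption, not a verified diagnosis): the AVQueuePlayer is a local constant inside the completion handler, so nothing retains it once the closure returns. A sketch that keeps the player alive and only starts playback after the session activates successfully (the class and property names are mine):

import AVFoundation

final class WatchStreamPlayer {
    // Keep a strong reference so the player is not deallocated
    // as soon as the activation callback returns.
    private var player: AVQueuePlayer?

    func startStream(urlString: String) {
        let session = AVAudioSession.sharedInstance()
        try? session.setCategory(.playback, mode: .default, policy: .longFormAudio, options: [])

        session.activate(options: []) { [weak self] success, error in
            guard success, error == nil else {
                print("Session activation failed: \(String(describing: error))")
                return
            }
            DispatchQueue.main.async {
                guard let self, let url = URL(string: urlString) else { return }
                let item = AVPlayerItem(asset: AVURLAsset(url: url))
                let player = AVQueuePlayer(playerItem: item)
                self.player = player
                player.play()
            }
        }
    }
}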
Hi,
So recently I've been itching to get some actual work done and create something that is genuinely useful to me. Mainly, I want to practice my skills and hopefully build up some new ones.
My concept/idea is to create something that resembles spatial audio (or spatialized stereo), but on steroids, for macOS. I want to create something that has much more customizability, but also ease of use. I'd like to make use of the power in products like AirPods Pro and create a more immersive audio experience. However, before I make any serious plans, I'd like to ask for some feedback, really just to consider the feasibility of something like this. I thoroughly enjoy a rich audio experience, but I feel the experience as it is on macOS is quite lackluster, especially compared to iOS, which already has system-wide spatialized stereo available.
I would genuinely appreciate any feedback or support that could be given on this project. Thank you for your time.
I also apologize if this is the wrong usage of this forum/support.
Does anyone have a ready-made script/shortcut like the one shown in the video?
Hi, does anyone know how to capture audio input in visionOS? I tried the sample code from the official examples (https://developer.apple.com/documentation/avfoundation/avcapturesession), but it didn't work.
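Not visionOS-verified, but here is a sketch of an alternative worth trying (an assumption on my part): capturing input with AVAudioEngine's input node instead of AVCaptureSession, after adding NSMicrophoneUsageDescription to Info.plist and getting microphone permission:

import AVFoundation

final class MicCapture {
    private let engine = AVAudioEngine()

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.record, mode: .measurement)
        try session.setActive(true)

        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        // Tap the hardware input; buffers arrive on an audio thread.
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            print("Captured \(buffer.frameLength) frames")
        }
        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}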
Has anyone found a way to retrieve the user token for a Flutter app that uses it to make Apple Music API calls, just to retrieve the user's data and display it in the application?
Something like: when a user opens the app, there is a button that says "Connect with Apple". It takes the user through granting permissions for Apple Music, and that retrieves their user token.
I know there is MusicKit in Swift, but I wonder if there is something like that for Flutter.
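I'm not aware of an official Flutter package, so one pattern is to do the token request natively in Swift and hand the result back to Dart over a platform channel. A sketch of just the native side using StoreKit (the developer token is a placeholder you generate from your MusicKit private key):

import StoreKit

// Native-side sketch: request Apple Music authorization, then the user token.
func fetchAppleMusicUserToken(developerToken: String,
                              completion: @escaping (String?) -> Void) {
    SKCloudServiceController.requestAuthorization { status in
        guard status == .authorized else {
            completion(nil)
            return
        }
        SKCloudServiceController().requestUserToken(forDeveloperToken: developerToken) { token, error in
            if let error { print("User token error: \(error)") }
            completion(token)
        }
    }
}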
https://developer.apple.com/videos/play/wwdc2023/10235/ - In this WWDC session, at 3:19, Apple introduced the "Other audio ducking" feature.
In iOS 17, we can control the amount of "other audio" ducking through AVAudioEngine. Is this also possible with AVAudioSession?
We are using an AVAudioSession for a VOIP call while concurrently attempting to play a video through an AVPlayer. However, the volume of the AVPlayer is considerably low.
Does anyone have any ideas on how to achieve the level of control that AVAudioEngine offers?
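For reference, the engine-based control from the session looks roughly like the sketch below (iOS 17+, and it only applies while voice processing is enabled on the input node). As far as I know, AVAudioSession alone does not expose an equivalent ducking-level knob, so treat that as an assumption to verify:

import AVFoundation

func configureDucking(on engine: AVAudioEngine) throws {
    // The ducking configuration is only honored while voice processing is enabled.
    try engine.inputNode.setVoiceProcessingEnabled(true)

    var ducking = AVAudioVoiceProcessingOtherAudioDuckingConfiguration()
    ducking.enableAdvancedDucking = true
    ducking.duckingLevel = .min   // duck other audio (e.g. the AVPlayer video) as little as possible
    engine.inputNode.voiceProcessingOtherAudioDuckingConfiguration = ducking
}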
I am trying to create an app for a custom Now Playing UI. How can I grab the following:
Song Name
Album Name
Artist Name
Album Art
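If "now playing" here means the item currently playing from the user's Music library (system-wide metadata for arbitrary apps has no public API that I know of), here is a sketch using MediaPlayer; it requires NSAppleMusicUsageDescription in Info.plist and media library authorization:

import MediaPlayer
import UIKit

func currentNowPlayingMetadata() -> (title: String?, album: String?, artist: String?, art: UIImage?) {
    let item = MPMusicPlayerController.systemMusicPlayer.nowPlayingItem
    // Render the artwork at whatever size your UI needs.
    let art = item?.artwork?.image(at: CGSize(width: 300, height: 300))
    return (item?.title, item?.albumTitle, item?.artist, art)
}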
I'm attempting to record from a device's microphone (under iOS) using AVAudioRecorder. The examples are all quite simple, and I'm following the same method. But I'm getting error messages on attempts to record, and the resulting M4A file (after several seconds of recording) is only 552 bytes long and won't load. Here's the recorder usage:
func startRecording()
{
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 22050,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do
    {
        recorder = try AVAudioRecorder(url: tempFileURL(), settings: settings)
        recorder?.delegate = self
        recorder!.record()
        recording = true
    }
    catch
    {
        recording = false
        recordingFinished(success: false)
    }
}
The immediate sign of trouble appears to be the following, in the console. Note the 0 bits per channel and irrelevant 8K sample rate:
AudioQueueObject.cpp:1580 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 8000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 8000 Hz, Int16
A subsequent attempt to load the file into AVAudioPlayer results in:
MP4_BoxParser.cpp:1089 DataSource read failed MP4AudioFile.cpp:4365 MP4Parser_PacketProvider->GetASBD() failed AudioFileObject.cpp:105 OpenFromDataSource failed AudioFileObject.cpp:80 Open failed
But that's not surprising given that it's only 500+ bytes and we had the earlier error. Anybody have an idea here? Every example on the Web shows essentially this exact method.
I've also tried constructing the recorder with
let audioFormat = AVAudioFormat.init(standardFormatWithSampleRate: 44100, channels: 1)
if audioFormat == nil
{
    print("Audio format failed.")
}
else
{
    do
    {
        recorder = try AVAudioRecorder(url: tempFileURL(), format: audioFormat!)
        ...
with mostly the same result. In that case the instantiation error message was the following, which at least mentions the requested sample rate:
AudioQueueObject.cpp:1580 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 44100 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 44100 Hz, Int32
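A guess worth checking (an assumption, since the snippets don't show it): the AVAudioSession may never be switched to a record-capable category and activated, and microphone permission requested, before record() is called; the "0 ch" in the converter log would be consistent with that. A sketch of the setup I'd try first:

import AVFoundation

func prepareSessionForRecording(completion: @escaping (Bool) -> Void) {
    let session = AVAudioSession.sharedInstance()
    do {
        // A record-capable category is required before AVAudioRecorder can reach the mic.
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setActive(true)
    } catch {
        completion(false)
        return
    }
    // Also requires NSMicrophoneUsageDescription in Info.plist.
    session.requestRecordPermission { granted in
        DispatchQueue.main.async { completion(granted) }
    }
}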
I first encountered this issue with my Spotify web app on 6 March 2024: a song will restart and/or jump to another song within the playlist, play for a bit, and occasionally restart a number of times.
I never know when Spotify will restart or jump the song. There were no issues with Apple Music or on my iPhone and Apple Watch; then it started happening today, 21 March 2024.
I tried Googling, but to no avail, and have exhausted all solutions with Spotify's care team (re-installing, clearing the app's and the MacBook's cache, host files, etc., and restarting my devices).
I assume it's now an Apple software issue with macOS Sonoma? Please help, anyone / Apple!
Details:
The Spotify version I have is 1.2.33.1039.g8ddb5918
MacBook Air M2 2022 with macOS Sonoma 14.4
iPhone 14 Pro
I would like to detect audio input device state changes in the system logs. Right now I can detect activation using:
log show --info --predicate "process == 'coreaudiod' && category == 'access'"
But I'm unable to detect deactivation and have no idea which predicate to use.
Some of our installers suddenly became broken for users running the latest version of macOS. I found that the reason is that we install a Core Audio HAL driver, and because I wanted to avoid a system reboot, I relaunched the Core Audio daemon from a pkg post-install script:
sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod
With the OS update, the command fails if the computer has SIP enabled (which is the default):
sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod
Password:
Could not kickstart service "com.apple.audio.coreaudiod": 1: Operation not permitted
It would be super nice if either:
the change could be reverted, OR
people in my situation could learn a workaround for how to hot-plug (and unplug) such a HAL driver.
We have an Angular/Ionic based app that has an audio playback feature. It appears that iPhone (but not iPad) users who upgrade to iOS 17.4 can no longer play audio in our app. iPad users who upgraded to 17.4 don't have the issue.
We use the HTMLAudioElement for audio playback. It appears that in 17.4 it no longer fires the 'canplay' event that we listen for to start our playback. The other data-buffering events like 'loadeddata' are also not being delivered. By changing the logic to listen for the 'loadstart' event, audio playback works, and then the remaining 'canplaythrough' and 'canplay' events are delivered. In other words, I need to start playback before any data-buffering status events are delivered, otherwise they never arrive. I am testing this against an audio delivery server on the same machine and have confirmed that the data is delivered correctly.
Is anyone else experiencing a similar issue on iPhones with iOS 17.4?
I often find that basic actions in MusicKit are incredibly slow compared to Apple's Music app. I've tried different versions, devices, networks, and Apple's sample code throughout the last several years, and it is always the same. Does anyone else have this issue?
We develop virtual instruments for Mac/AU and are trying to get our AU plug-ins and our standalone player to work with audio workgroups.
When the standalone app or Logic Pro is in the foreground and active, all is well and as expected.
However, when the app or Logic Pro is not in focus, all my auxiliary threads run on E-cores, even though they are properly joined to the processing thread's workgroup. This leads to a lot of audible dropouts because deadlines are no longer met.
The processing thread itself stays on a P-core, but it has to wait for the other threads to finish.
How can I opt out of this behaviour? Our users certainly have use cases where they expect the player to run smoothly even though a different app is currently in focus.
I've noticed that audio is locked into position in builds. It sometimes comes out of the left ear, other times the right. I occasionally get it working in both, but the audio isn't moving with head tracking.
I've seen other reports of this issue in the Unity support forums too.
Any ideas on how to fix this? It's being reported in my app reviews as a negative.
Hello everyone,
I'm relatively new to iOS development, and I'm currently working on a Flutter plugin package. I want to use the AVFAudio package to load instrument sounds from an SF2 file into different channels. Specifically, I'd like to load individual instruments from the SF2 file onto separate channels.
However, I've been struggling to find a way to achieve this. Could someone guide me on how to load SF2 instrument sounds into different channels using AVFAudio? I've tried various combinations of parameters (program number, soundbank MSB, and soundbank LSB), but none seem to work.
If anyone has experience with AVFAudio and SF2 files, I'd greatly appreciate your help. Perhaps there's a proven approach or a way to determine the correct values for these parameters? Should I use a soundfont editor to inspect specific values within the SF2 file?
Thank you in advance for any assistance!
Best regards,
Melih
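A sketch of one common approach with AVFAudio (the file name and program numbers are placeholders, and this is an assumption about your setup rather than a verified fix): attach one AVAudioUnitSampler per instrument and load each program from the SF2 with loadSoundBankInstrument, using the standard General MIDI melodic bank values:

import AVFoundation
import AudioToolbox

func makeInstrumentSamplers() throws -> (AVAudioEngine, [AVAudioUnitSampler]) {
    let engine = AVAudioEngine()

    // Hypothetical file name; replace with your bundled SF2.
    guard let sf2URL = Bundle.main.url(forResource: "MySoundFont", withExtension: "sf2") else {
        throw NSError(domain: "SF2", code: -1)
    }

    // One sampler per instrument; program numbers are the GM presets inside the file.
    let programs: [UInt8] = [0, 24, 40]   // e.g. piano, guitar, violin
    var samplers: [AVAudioUnitSampler] = []

    for program in programs {
        let sampler = AVAudioUnitSampler()
        engine.attach(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)
        try sampler.loadSoundBankInstrument(at: sf2URL,
                                            program: program,
                                            bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                            bankLSB: UInt8(kAUSampler_DefaultBankLSB))
        samplers.append(sampler)
    }

    try engine.start()
    return (engine, samplers)
}

Each sampler then behaves like its own channel; for example, samplers[0].startNote(60, withVelocity: 100, onChannel: 0) plays middle C on the first instrument. If a program won't load, a soundfont editor such as Polyphone can show which bank/program values the SF2 actually contains.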
I am working on a radio app. This is my first one, and I have a problem with the lock screen audio card. According to the docs it looks OK, but could you please check why I cannot display the Now Playing card on the lock screen?
Two code samples: 1. Now Playing and 2. the logic for the current song and album art.
1. Now Playing
// Create a dictionary to hold the now playing information
var nowPlayingInfo: [String: Any] = [:]

// Set the title of the current song
nowPlayingInfo[MPMediaItemPropertyTitle] = currentSong

// If album art URL is available, fetch the image asynchronously
if let albumArtUrl = albumArtUrl {
    URLSession.shared.dataTask(with: albumArtUrl) { data, _, error in
        if let data = data, let image = UIImage(data: data) {
            // Create artwork object
            let artwork = MPMediaItemArtwork(boundsSize: image.size) { _ in image }
            // Update now playing info with artwork on the main queue
            DispatchQueue.main.async {
                nowPlayingInfo[MPMediaItemPropertyArtwork] = artwork
                MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
            }
        } else {
            // If there's an error fetching the album art, set now playing info without artwork
            MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
            print("Error retrieving album art data:", error?.localizedDescription ?? "Unknown error")
        }
    }.resume()
} else {
    // If album art URL is not available, set now playing info without artwork
    MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
}
}
2. Current Song, Album Art Logic
    let parts = currentSong.split(separator: "-", maxSplits: 1, omittingEmptySubsequences: true).map { $0.trimmingCharacters(in: .whitespaces) }
    let titleWithExtra = parts.count > 1 ? parts[1] : ""
    let title = titleWithExtra.components(separatedBy: " (").first ?? titleWithExtra
    return title
}

func updateSongInfo() {
    let url = URL(string: "https://live.heartfm.com.tr/listen/heart_fm/currentsong")!
    URLSession.shared.dataTask(with: url) { data, response, error in
        if let data = data, let songString = String(data: data, encoding: .utf8) {
            DispatchQueue.main.async {
                self.currentSong = songString.trimmingCharacters(in: .whitespacesAndNewlines)
                self.updateAlbumArtUrl(song: self.currentSong)
            }
        }
    }.resume()
}

private func updateAlbumArtUrl(song: String) {
    let parts = song.split(separator: "-", maxSplits: 1, omittingEmptySubsequences: true).map { $0.trimmingCharacters(in: .whitespaces) }
    let artist = parts.first ?? ""
    let titleWithExtra = parts.count > 1 ? parts[1] : ""
    let title = titleWithExtra.components(separatedBy: " (").first ?? titleWithExtra
    let artistAndTitle = artist.isEmpty || title.isEmpty ? song : "\(artist) - \(title)"
    let encodedArtistAndTitle = artistAndTitle.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed) ?? artistAndTitle
    albumArtUrl = URL(string: "https://www.heartfm.com.tr/ArtCover/\(encodedArtistAndTitle).jpg")
}
I'm trying to add a USB mic to my Mac mini running the latest Sonoma software, but the audio is full of crackles. Why isn't it clean?