I can't figure out how to get audio from my RealityKitContentBundle to play on Vision Pro...
I have a scene in Reality Composer Pro called "WinterVivarium" which contains a 3D model of a tree, a particle emitter, a ChannelAudio entity, and an audio file (m4a) with 30 minutes of nature sounds.
The 3D model and particle emitter load up just fine on my device, but I'm getting an error when I try to load the audio...
The Swift file is below. When I run the app and this file gets called, it throws the following error:
"Error loading winter vivarium model and/or audio: The operation couldn’t be completed. (RealityKit.__REAsset.LoadError error 2.)"
ChatGPT tells me error code 2 likely means "file not found" but I'm not sure on that one...
Please help!
import SwiftUI
import RealityKit
import RealityKitContent

struct WinterVivarium: View {
    @State private var angle: Angle = .degrees(0)

    var body: some View {
        RealityView { content in
            let audioFilePath = "/Root/back-yard-feb-7am.m4a"
            let audioEntity = Entity()
            do {
                let entity = try await Entity(named: "WinterVivarium", in: realityKitContentBundle)
                content.add(entity)

                let resource = try await AudioFileResource.load(named: audioFilePath,
                                                                from: "WinterVivarium.usda",
                                                                in: RealityKitContent.realityKitContentBundle)
                let audioController = audioEntity.playAudio(resource)
            } catch {
                print("Error loading winter vivarium model and/or audio: \(error.localizedDescription)")
            }
        }
    }
}

#Preview {
    WinterVivarium()
}
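One variation worth sketching here, purely as a guess at the intended setup rather than a confirmed fix: play the resource on the ChannelAudio entity that already exists in the Reality Composer Pro scene instead of on a detached Entity(), and refer to the audio by the name the scene gives it. The entity name "ChannelAudio" and the resource path "/Root/back_yard_feb_7am_m4a" below are assumptions; the real names have to be read off the Reality Composer Pro hierarchy.

RealityView { content in
    do {
        let scene = try await Entity(named: "WinterVivarium", in: realityKitContentBundle)
        content.add(scene)

        // Assumed resource path and entity name -- verify both in Reality Composer Pro.
        let resource = try await AudioFileResource.load(named: "/Root/back_yard_feb_7am_m4a",
                                                        from: "WinterVivarium.usda",
                                                        in: realityKitContentBundle)
        if let speaker = scene.findEntity(named: "ChannelAudio") {
            speaker.playAudio(resource)
        }
    } catch {
        print("Error loading winter vivarium model and/or audio: \(error)")
    }
}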
Audio
Integrate music and other audio content into your apps.
Posts under Audio tag
If a user renames their AirPods to "xxxx", how can I get the original name of the AirPods?
Is there a way for an FXPlug to access the Source audio?
Or do we need to make an AU plugin, apply it to an audio source (either a video or an audio track), and feed the info via shared memory to an FXPlug?
Is there an AU plugin for external processes to "listen" to the audio?
Hi all,
I have created a QuickLook Preview for my custom datatype in my app.
I use SwiftUI wrapped in UIKit for the preview. My issue is that when I try to play audio using AVAudioPlayer, I receive an OSStatus -50 error.
Does anyone know if there are separate permissions I need to request before being able to do this?
Here are the errors I get while trying to set my audio session as active and play on the AVAudioPlayer.
Thanks for your help and advice!
The operation couldn’t be completed. (OSStatus error -50.)
nwi_state: registration failed (9)
connection <connection: 0x100e0b270> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 } : error <dictionary: 0x251524530> { count = 1, transaction: 0, voucher = 0x0, contents =
"XPCErrorDescription" => <string: 0x2515246c8> { length = 18, contents = "Connection invalid" }
}
auto-cancelling <connection: 0x100e0b270> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 }
0x2816bf680 reply: XPC_ERROR_CONNECTION_INVALID
throwing swix::exception: !(is_valid())
AQ_API_V2Impl.cpp:134 AudioQueueNew: <-AudioQueueNew failed -302
rebuilding null connection
0x2816bf680 reply: XPC_ERROR_CONNECTION_INVALID
connection <connection: 0x100822a90> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 } : error <dictionary: 0x251524530> { count = 1, transaction: 0, voucher = 0x0, contents =
"XPCErrorDescription" => <string: 0x2515246c8> { length = 18, contents = "Connection invalid" }
}
throwing swix::exception: !(is_valid())
auto-cancelling <connection: 0x100822a90> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 }
AQ_API_V2Impl.cpp:134 AudioQueueNew: <-AudioQueueNew failed -302
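For reference, a minimal sketch (the file name and helper are illustrative, not taken from the project above) of the sequence being described -- activate the shared AVAudioSession for playback, then play a bundled file with AVAudioPlayer -- which is the part that appears to fail with -50 and the AudioQueue XPC errors when run inside the preview extension:

import AVFoundation

func playPreviewAudio() {
    do {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .default)
        try session.setActive(true)

        // "sample.m4a" is a placeholder; in a real app, keep the player in a
        // property so it is not deallocated while playing.
        guard let url = Bundle.main.url(forResource: "sample", withExtension: "m4a") else { return }
        let player = try AVAudioPlayer(contentsOf: url)
        player.prepareToPlay()
        player.play()
    } catch {
        print("Audio setup failed: \(error)")
    }
}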
There is a CustomPlayer class that uses MTAudioProcessingTap to modify the audio buffer.
Let's say there are instances A and B of the CustomPlayer class.
While A and B are both running, B's MTAudioProcessingTap process callback stops and its finalize callback fires as soon as A finishes its operation and its instance is terminated, even though B still has work left to do.
The same code in the same project does not behave this way on iOS 17.0 or lower: there, when A is terminated, B completes its task without any impact.
What change in iOS 17.1 produces this behavior? I'd appreciate an answer on how to avoid the issue.
let audioMix = AVMutableAudioMix()
var audioMixParameters: [AVMutableAudioMixInputParameters] = []

try composition.tracks(withMediaType: .audio).forEach { track in
    let inputParameter = AVMutableAudioMixInputParameters(track: track)
    inputParameter.trackID = track.trackID

    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: UnsafeMutableRawPointer(
            Unmanaged.passRetained(clientInfo).toOpaque()
        ),
        init: { tap, clientInfo, tapStorageOut in
            tapStorageOut.pointee = clientInfo
        },
        finalize: { tap in
            Unmanaged<ClientInfo>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).release()
        },
        prepare: nil,
        unprepare: nil,
        process: { tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut in
            var timeRange = CMTimeRange.zero
            let status = MTAudioProcessingTapGetSourceAudio(tap,
                                                            numberFrames,
                                                            bufferListInOut,
                                                            flagsOut,
                                                            &timeRange,
                                                            numberFramesOut)
            if noErr == status {
                // ...
            }
        })

    var tap: Unmanaged<MTAudioProcessingTap>?
    let status = MTAudioProcessingTapCreate(kCFAllocatorDefault,
                                            &callbacks,
                                            kMTAudioProcessingTapCreationFlag_PostEffects,
                                            &tap)
    guard noErr == status else {
        return
    }

    inputParameter.audioTapProcessor = tap?.takeUnretainedValue()
    audioMixParameters.append(inputParameter)
    tap?.release()
}

audioMix.inputParameters = audioMixParameters
return audioMix
Each time you listen to music, you are streaming from a server that is frequently powered by coal or gas and rarely by green energy.
As a developer on iOS, I ask Apple to allow downloading audio files into our audio apps. The goal is not to resell the audio or to violate authors' rights.
YouTube already does that.
It is time to find tips and tricks to reduce energy consumption, especially in data broadcasting and the useless streaming of the same song again and again.
Is it possible to change the API in accordance with this reality?
I'm exploring speech-to-command processing for a game, but would like to establish a baseline of voice recognition within it that lets two people in close proximity interact without interfering with each other's voice commands to the system.
(It's for an accessible game idea.)
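As a rough baseline sketch of the voice-command half (assuming the Speech framework; this does nothing to separate the two speakers, which is the open question above):

import AVFoundation
import Speech

final class CommandListener {
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private let audioEngine = AVAudioEngine()
    private var request: SFSpeechAudioBufferRecognitionRequest?
    private var task: SFSpeechRecognitionTask?

    // Assumes SFSpeechRecognizer.requestAuthorization and microphone permission
    // have already been granted.
    func start(onPhrase: @escaping (String) -> Void) throws {
        let request = SFSpeechAudioBufferRecognitionRequest()
        request.shouldReportPartialResults = true
        self.request = request

        let input = audioEngine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        task = recognizer?.recognitionTask(with: request) { result, _ in
            if let text = result?.bestTranscription.formattedString {
                onPhrase(text)   // match against the game's command vocabulary
            }
        }
    }
}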
I have an app that is getting rejected from TestFlight because of this error:
ITMS-90683: Missing purpose string in Info.plist - Your app’s code references one or more APIs that access sensitive user data, or the app has one or more entitlements that permit such access. The Info.plist file for the “TurtleTuner.app” bundle should contain a NSCameraUsageDescription key with a user-facing purpose string explaining clearly and completely why your app needs the data. If you’re using external libraries or SDKs, they may reference APIs that require a purpose string. While your app might not use these APIs, a purpose string is still required. For details, visit: https://developer.apple.com/documentation/uikit/protecting_the_user_s_privacy/requesting_access_to_protected_resources.
The app does not use the camera, only the microphone. I cannot find references to the camera in any of the third party libraries I'm using.
What are some ways to troubleshoot this beyond looking for "camera" in the few dependencies?
For context, this commit allows the app to get through successfully to TestFlight: https://github.com/tsargent/turtle-tuner/commit/67d4a52e62839ad6c2a49848bea9c408d983f17a
While this following commit, which reverts the commit, fails on TestFlight with the mentioned camera permission error: https://github.com/tsargent/turtle-tuner/commit/c95b0b16c4e85d77e625d36b816ed53faa826cf5
I have a question about the Apple Music preview app for Windows 11.
It has a setting called Sound Check.
Is that feature available on the Apple Music web player and the Apple Music Android app?
If not, is that a planned feature for those?
I am working on a design that requires connecting an iOS device to two audio output devices, specifically headphones and a speaker. I want the audio driver to switch output devices without user action. Is this manageable via the iOS SDK?
In a visionOS app, how do you play Spatial Audio, Ambient Audio, and Channel Audio?
How does visionOS play an MP4 audio file as Spatial Audio through SwiftUI or RealityKit?
Note: since I can only test the app in the Simulator, to make sure my Spatial Audio plays correctly in the space, please tell me how to display the location of the Spatial Audio source in the space, and how to remove this view after the test. Thank you!
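A minimal sketch of one way to do this, assuming a file named "nature.mp3" in the app bundle (the filename is illustrative): attach a SpatialAudioComponent to an entity and play an AudioFileResource on it. Swapping in AmbientAudioComponent or ChannelAudioComponent covers the other two modes, and the small sphere makes the source position visible in the Simulator.

import SwiftUI
import RealityKit

struct SpatialAudioDemo: View {
    var body: some View {
        RealityView { content in
            // Visible marker so the source position can be checked in the Simulator;
            // remove this view (or just the sphere) after testing.
            let source = ModelEntity(mesh: .generateSphere(radius: 0.05),
                                     materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
            source.position = [0, 1.5, -1]
            source.components.set(SpatialAudioComponent(gain: -5))
            content.add(source)

            do {
                // "nature.mp3" is a placeholder file assumed to be in the main bundle.
                let resource = try AudioFileResource.load(named: "nature.mp3")
                source.playAudio(resource)
            } catch {
                print("Audio load failed: \(error)")
            }
        }
    }
}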
I have created a recording application, but if the user switches off the device or kills the application, how can I save the ongoing recording at that point?
Developing for iPhone/iPad/Mac.
I have an idea for a music training app, but need to know of supporting libraries for recognizing a musical note's fundamental frequency in close to real time (around 100 ms delay). Accuracy should be within a few cents (hundredths of a semitone).
A search for "music" turned up the Core MIDI library -- fine if I want to take input from MIDI, but I want to be open to audio input too.
And I found MusicKit, which seems to be a programmer's API for digging into
Meta questions:
Should I be using different search terms?
Where are libraries listed?
Who are the established names in third-party libraries?
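On the audio-input side of the real-time requirement above, here is a rough sketch (not a library recommendation) that reads the microphone with AVAudioEngine and estimates a single note's fundamental frequency with naive autocorrelation. The 4096-sample buffer and the 50-2000 Hz search range are illustrative assumptions, and plain autocorrelation alone will not reliably reach a few cents; a production app would want a stronger estimator (e.g. YIN) or interpolation around the peak.

import AVFoundation

final class PitchEstimator {
    private let engine = AVAudioEngine()

    func start(onPitch: @escaping (Double) -> Void) throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        let sampleRate = format.sampleRate

        input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            guard let samples = buffer.floatChannelData?[0] else { return }
            let count = Int(buffer.frameLength)
            if let hz = Self.estimate(samples: samples, count: count, sampleRate: sampleRate) {
                onPitch(hz)
            }
        }
        try engine.start()
    }

    // Pick the lag with the strongest autocorrelation between 50 Hz and 2000 Hz.
    private static func estimate(samples: UnsafePointer<Float>, count: Int, sampleRate: Double) -> Double? {
        let minLag = Int(sampleRate / 2000.0)
        let maxLag = min(Int(sampleRate / 50.0), count - 1)
        guard maxLag > minLag else { return nil }

        var bestLag = 0
        var bestScore: Float = 0
        for lag in minLag...maxLag {
            var score: Float = 0
            for i in 0..<(count - lag) {
                score += samples[i] * samples[i + lag]
            }
            if score > bestScore {
                bestScore = score
                bestLag = lag
            }
        }
        return bestLag > 0 ? sampleRate / Double(bestLag) : nil
    }
}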
Hi, I was trying to design the above UI using the below code with CPListImageRowItem:
func templateApplicationScene(_ templateApplicationScene: CPTemplateApplicationScene,
                              didConnect interfaceController: CPInterfaceController) {
    self.interfaceController = interfaceController

    // Create a list row item with images
    let item = CPListImageRowItem(text: "Category",
                                  images: [UIImage(named: "cover.jpeg")!,
                                           UIImage(named: "cover2.jpeg")!,
                                           UIImage(named: "discover.jpeg")!,
                                           UIImage(named: "thumbnail.jpeg")!])

    // Create a list section
    let section = CPListSection(items: [item])

    // Create a list template with the section
    let listTemplate = CPListTemplate(title: "Your Template Title", sections: [section])

    // Set the template on the interface controller
    interfaceController.setRootTemplate(listTemplate, animated: true)
}
I get only the header and the row of images below it, but there seems to be no way to set detail text under the images.
Can anyone help me out with this?
Hi, I'm trying to play multiple video/audio files with AVPlayer using AVMutableComposition. Each file may play simultaneously, so I put each one in its own track. I use only local files.
let second = CMTime(seconds: 1, preferredTimescale: 1000)
let duration = CMTimeRange(start: .zero, duration: second)
var currentTime = CMTime.zero

for _ in 0...4 {
    let mutableTrack = composition.addMutableTrack(
        withMediaType: .audio,
        preferredTrackID: kCMPersistentTrackID_Invalid
    )
    try mutableTrack?.insertTimeRange(
        duration,
        of: audioAssetTrack,
        at: currentTime
    )
    currentTime = currentTime + second
}
When I add many audio tracks (maybe more than 5), the first part sounds a little different from the original when playback starts. It seems like the front part of the audio is skipped.
But when I add only two tracks, AVPlayer plays the same as the original file.
avPlayer.play()
How can I fix this? Why do tracks that have nothing to play at the start still affect playback? Please let me know.
I was watching a video when I noticed a periodic popping sound. I thought it was a problem with the video, but the issue persists across all other videos and sound tracks. It gets very annoying! Quite disappointed to find this flaw in a brand-new MacBook Pro.
I'm involved in development on an iOS app for home security and alarm systems. Recently there has been a lot of negative feedback from customers about how quiet the notification sounds are since iOS 17. Much of the feedback centers on the inability to control the volume of the notification sounds.
My question is: if our app uses custom notification sounds, are these impacted by the volume changes made in iOS 17? I know previous versions of iOS allow you to control "Ringtone and Alert" volume in settings (with a volume slider). Is this same control still available for custom notification sounds within our app?
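For context, a small sketch of the kind of custom notification sound setup in question (the sound file name is illustrative); the question is whether a sound attached this way follows the iOS 17 changes to the "Ringtone and Alert" volume:

import UserNotifications

func scheduleAlarmNotification() {
    let content = UNMutableNotificationContent()
    content.title = "Alarm triggered"
    content.body = "Motion detected at the front door."
    // "alarm.caf" is a placeholder custom sound bundled with the app.
    content.sound = UNNotificationSound(named: UNNotificationSoundName("alarm.caf"))

    let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 1, repeats: false)
    let request = UNNotificationRequest(identifier: UUID().uuidString,
                                        content: content,
                                        trigger: trigger)
    UNUserNotificationCenter.current().add(request)
}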
I have a need to list all known audio/image file types in a planned app.
What I have so far:
images
.apng
.avi, .avif
.gif
.jpg, .jpeg, .jfif, .pjpeg, .pjp
.png
.svg
.webp
audio
.aif
.cda
.mid, .midi
.mp3
.mpa
.ogg
.wav
.wma
What are the missing ones?
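Rather than maintaining a hard-coded list, one option (a short sketch; the sample extensions are just the ones listed above) is to ask UniformTypeIdentifiers whether a given filename extension conforms to the image or audio types:

import UniformTypeIdentifiers

func classify(fileExtension ext: String) -> String {
    guard let type = UTType(filenameExtension: ext) else { return "unknown" }
    if type.conforms(to: .image) { return "image" }
    if type.conforms(to: .audio) { return "audio" }
    if type.conforms(to: .movie) { return "video" }
    return "other"
}

for ext in ["png", "avif", "gif", "mp3", "wav", "ogg", "avi"] {
    print(ext, classify(fileExtension: ext))
}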
Does anyone have a working example on how to play OGG files with swift?
I've been trying for over a year now. I was able to wrap the C Vorbis library in Swift, and I used it to parse an OGG file successfully. Then I had to use Objective-C++ to fill the PCM buffer, because this method seems to be available only in C++, and that part hangs my app for a good 40 seconds to several minutes depending on the audio file; it then plays for about 2 seconds and crashes.
I can't get the examples on the Vorbis site to work in Objective-C, and I tried every example on GitHub I could find (most of which are for iOS; I want to play the files on the Mac).
I also tried using Cricket Audio framework below.
https://github.com/sjmerel/ck
It has a Swift example and it can play their proprietary soundbank format, but it is also supposed to play OGG, and it simply does nothing when trying to play OGG, as you can see in the posted issue:
https://github.com/sjmerel/ck/issues/3
Right now I believe every player that can play OGGs on the Mac is written in Objective-C or C++.
Anyway, any help or advice is appreciated. The OGG format is very prevalent in the gaming community. I could use Unity, which I believe plays OGGs through the Mono framework, but I really, really want to stay in Swift.