Posts

Post not yet marked as solved · 2 Replies · 1.1k Views
I'm on Xcode 13 GM (13A233) and I'm noticing that it won't stay paused when it hits a breakpoint. It stops, but about a minute later it just resumes. Is there a new default setting that might cause this? I don't see anything relevant in "Behaviors," but maybe I'm missing something. Has anybody else encountered this issue? The only unusual thing about my project is that I am calling Python from it, using PythonKit. But this breakpoint pause issue isn't isolated to code involving PythonKit. I've tried all the usual "fixes"—i.e., clean build, delete derived data, relaunch, etc. The app is just a simple SwiftUI app for macOS. Any thoughts appreciated.
Posted by jbmaxwell.
Post not yet marked as solved · 0 Replies · 339 Views
I've been trying to get some PythonKit code running on Ubuntu and having a terrible time. The trouble comes up when Python tries to load the packages/modules imported by a script: code I'm able to run from Xcode on macOS, calling into PythonKit, doesn't run in Vapor. The odd thing is the error:

    PythonKit/Python.swift:674: Fatal error: 'try!' expression unexpectedly raised an error: Python exception: ('invalid syntax', ('/home/james/swift/vapor/SpliqsAPI/Sources/SpliqsLib/SpliqsML/core/helper_functions.py', 13, 26, 'def get_model_config(args: ModelDataArguments):\n'))

This seems to suggest that Python isn't parsing the module file itself properly—i.e., reporting 'invalid syntax' on a perfectly ordinary function definition, as if the line break at the end were the problem? Any thoughts on what might be going on? I realize this may specifically be a Vapor problem, but it's such a particular error that I thought someone might have an idea.
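For reference, here's a minimal sketch (my own guess, using PythonKit's PythonLibrary API) of how I'd pin the interpreter version before any Python call, on the theory that the server is loading an older libpython that chokes on the 'args: ModelDataArguments' annotation:

    import PythonKit

    // Annotations like 'args: ModelDataArguments' are a syntax error
    // under Python 2, so force PythonKit to load Python 3 before any
    // other Python call is made.
    PythonLibrary.useVersion(3)

    let sys = Python.import("sys")
    print(sys.version)  // confirm which interpreter actually loaded

The PYTHON_LIBRARY environment variable can also point PythonKit at a specific libpython shared library, which might matter under Vapor's process environment.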
Posted by jbmaxwell.
Post not yet marked as solved · 1 Reply · 867 Views
In the WWDC 2011 Session 404 video we're told that AVAudioPlayer should be able to dynamically ignore the priming and remainder frames in AAC audio, which should enable seamless looping of compressed files, but I can't get this to work in practice. I can verify in both Logic Pro and WaveLab that the WAV versions of my files loop perfectly, but in our iOS app they have a clear gap at the loop point. Our encoded files are 448 kbps, 48 kHz AAC in .m4a containers. I've tried exports from WaveLab and afconvert, but both fail to provide seamless looping. For afconvert I used the arguments from the WWDC video: -f "m4af" -d "aac". Any advice? We really need to use compressed files to keep app size down, but we absolutely need them to loop seamlessly (they're ambient background soundscapes).
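For context, the playback side is as simple as it gets. A minimal sketch (the file name is hypothetical):

    import AVFoundation

    // Minimal loop setup: relies on AVAudioPlayer honoring the AAC
    // priming/remainder frames to produce a gapless loop point.
    func startLoop() throws -> AVAudioPlayer {
        let url = Bundle.main.url(forResource: "soundscape", withExtension: "m4a")!
        let player = try AVAudioPlayer(contentsOf: url)
        player.numberOfLoops = -1  // loop indefinitely
        player.prepareToPlay()
        player.play()
        return player  // caller keeps a strong reference so playback continues
    }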
Posted by jbmaxwell.
Post not yet marked as solved · 3 Replies · 1.3k Views
I was trying to debug an exc_bad_access problem in my AudioToolbox-based app and decided to enable Address Sanitizer. Interestingly, it found a more general problem: a stack buffer overflow in a function that writes user events to a MusicTrack (i.e., MusicTrackNewUserEvent). There's nothing very special about my function:

    func addUserEventToSequence(event: Event, sequence: MusicSequence, track: MusicTrack) {
        var tempEvent = event
        _ = withEventOfType(for: &tempEvent, body: { eventUserData in
            MusicTrackNewUserEvent(track, event.timestamp, eventUserData)
        })
    }

where withEventOfType is just:

    func withEventOfType<T>(for eventPtr: UnsafePointer<Event>, body: (_ data: UnsafePointer<MusicEventUserData>) throws -> T) rethrows -> T {
        let dataLength = eventPtr.pointee.length
        return try eventPtr.withMemoryRebound(to: MusicEventUserData.self, capacity: 8 + Int(dataLength), { eventAsBytes in
            return try body(eventAsBytes)
        })
    }

...and Event is a custom struct holding the music data required by our app. I've always thought user events were intended for arbitrary data, but I realize now that I've never been very conscious of the data length when using them. There is a length property, of course, and I do set it when I create the event:

    event = Event(length: UInt32(MemoryLayout<Event>.size), typeID: 3, trackID: UInt32(0), pitch: UInt8(0), velocity: UInt8(0), channel: UInt8(0), timestamp: beat, duration: 0, barBeat: nil)

(I realize that data looks strange—this is just a running beat count that I'm using for other purposes. Here I could use a different type, but this is just one specific example; the problem is more general.) Presumably I'm misunderstanding something here... (??) Any thoughts appreciated.
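In case it helps frame the question, here's a hedged sketch of what I suspect the safe version looks like (my guess, not code from the app): allocating a buffer big enough for the MusicEventUserData header plus the payload, rather than rebinding the stack-allocated Event in place:

    import AudioToolbox

    // Sketch: build the user event in a heap buffer sized to the
    // MusicEventUserData header (a UInt32 length) plus the payload, so
    // MusicTrackNewUserEvent never reads past a smaller stack value.
    // Assumes a non-empty payload.
    func addUserEvent(payload: [UInt8],
                      to track: MusicTrack,
                      at timestamp: MusicTimeStamp) -> OSStatus {
        let byteCount = MemoryLayout<UInt32>.size + payload.count
        let raw = UnsafeMutableRawPointer.allocate(
            byteCount: byteCount,
            alignment: MemoryLayout<MusicEventUserData>.alignment)
        defer { raw.deallocate() }

        raw.storeBytes(of: UInt32(payload.count), as: UInt32.self)
        payload.withUnsafeBytes { src in
            raw.advanced(by: MemoryLayout<UInt32>.size)
                .copyMemory(from: src.baseAddress!, byteCount: payload.count)
        }

        let eventPtr = raw.bindMemory(to: MusicEventUserData.self, capacity: 1)
        return MusicTrackNewUserEvent(track, timestamp, eventPtr)
    }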
Posted by jbmaxwell.
Post not yet marked as solved · 1 Reply · 401 Views
We have an audio app that works with AirPods until the app goes to the background. More specifically, it's fine if we go to the background with audio still playing, but if we stop or pause playback and then go to the background, on returning from the background we get no audio from the AirPods. The app receives a .categoryChange (route change) notification, which does list the AirPods as the input/output device. With standard headphones and the built-in speaker everything is fine. Is there a simple solution? I've been looking at CBCentralManager, but that seems wrong since it's specifically for BLE... Any help greatly appreciated.
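In case it clarifies what we've tried, a sketch of the workaround I'm currently considering (this assumes the session just needs re-activating on return to the foreground, which I haven't verified):

    import AVFoundation
    import UIKit

    // Re-activate the audio session when returning to the foreground,
    // in case the route to the AirPods was torn down while the app was
    // paused in the background.
    NotificationCenter.default.addObserver(
        forName: UIApplication.willEnterForegroundNotification,
        object: nil,
        queue: .main
    ) { _ in
        do {
            try AVAudioSession.sharedInstance().setActive(true)
        } catch {
            print("Failed to reactivate audio session: \(error)")
        }
    }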
Posted by jbmaxwell.
Post not yet marked as solved · 0 Replies · 299 Views
I'm getting error -10852 (kAudioToolboxErr_InvalidPlayerState) from MusicTrackSetDestMIDIEndpoint, but it's really not clear what the error actually means. Does anyone happen to know the conditions that will lead to that error? There are two possibilities I'm wondering about in particular: 1) if the sequence is playing when I try to set it, and 2) if sending the same endpoint ref as it already had will trigger it. Any help greatly appreciated.
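For possibility (1), the workaround I'd try is a stop/set/restart sequence; a sketch (my guess, not verified against the error):

    import AudioToolbox
    import CoreMIDI

    // Stop the player before changing the track's destination endpoint,
    // then resume if it was playing when we started.
    func setEndpoint(_ endpoint: MIDIEndpointRef,
                     on track: MusicTrack,
                     player: MusicPlayer) -> OSStatus {
        var wasPlaying = DarwinBoolean(false)
        MusicPlayerIsPlaying(player, &wasPlaying)
        if wasPlaying.boolValue { MusicPlayerStop(player) }
        let status = MusicTrackSetDestMIDIEndpoint(track, endpoint)
        if wasPlaying.boolValue { MusicPlayerStart(player) }
        return status
    }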
Posted by jbmaxwell.
Post not yet marked as solved · 0 Replies · 710 Views
I have an mlmodel based on pytorch-pretrained-BERT, exported via ONNX to CoreML. That process was pretty smooth, so now I'm trying to do some (very) basic testing—i.e., just to make some kind of prediction and get a rough idea of what performance problems we might encounter. However, when I try to run a prediction, I get the following error:

    [espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Invalid state": Cannot squeeze a dimension whose value is not 1: shape[1]=128 stat
    2020-02-16 11:36:05.959261-0800 Spliqs[6725:2140794] [coreml] Error computing NN outputs -5

Is this error indicating a problem with the model itself (i.e., from model conversion), or is there something in Swift/CoreML-land that I'm doing wrong? My prediction function looks like this:

    public func prediction(withInput input: String) -> MLMultiArray? {
        var predictions: MLMultiArray? = nil
        if let drummer = drummerBertMLModel {
            var ids = tokenizer.tokenizeToIds(text: input)
            while ids.count < 128 {
                ids.append(1)
            }
            let segMask = Array<Int>(repeating: 0, count: ids.count)
            let inputMLArray = MLMultiArray.from(ids, dims: 2)
            let segMaskMLArray = MLMultiArray.from(segMask, dims: 2)
            let modelInput = spliqs_bert_fp16Input(input_1: inputMLArray, input_3: segMaskMLArray)
            var modelOutput: spliqs_bert_fp16Output? = nil
            do {
                modelOutput = try drummer.prediction(input: modelInput)
            } catch {
                print("Error running prediction on drummer: \(error)")
            }
            if let modelOutput = modelOutput {
                predictions = modelOutput._1139
            }
        }
        return predictions
    }

I'm not trying to do anything with this at this stage, just getting it running. I used pytorch-pretrained-BERT because I was able to find a ground-up pretraining example. But I have since noticed that Huggingface released a "from scratch" training option (just a couple of days ago), so I'm happy to move over to that if the general consensus is that my current approach is likely to be a dead end. Any thoughts appreciated. J.
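One diagnostic I can think of (a sketch; 'model' here stands in for the MLModel underlying drummerBertMLModel) is to dump the compiled model's expected input shapes and compare them against the (1, 128) arrays I'm passing in:

    import CoreML

    // Print each input's expected multi-array shape, to check whether
    // the converted model really wants a (1, 128) tensor or whether the
    // ONNX conversion baked in something different.
    for (name, desc) in model.modelDescription.inputDescriptionsByName {
        print(name, desc.multiArrayConstraint?.shape ?? "not a multiarray")
    }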
Posted by jbmaxwell.