Stack buffer overflow in MusicTrackNewUserEvent

I was trying to debug an EXC_BAD_ACCESS problem in my AudioToolbox-based app and decided to enable Address Sanitizer. Interestingly, it found a more general problem: a stack buffer overflow in a function that writes user events to a MusicTrack (i.e., MusicTrackNewUserEvent). There's nothing very special about my function:


func addUserEventToSequence(event: Event, sequence: MusicSequence, track: MusicTrack) {
    var tempEvent = event
    _ = withEventOfType(for: &tempEvent, body: { eventUserData in
        MusicTrackNewUserEvent(track, event.timestamp, eventUserData)
    })
}


where withEventOfType is just:


func withEventOfType<T>(for eventPtr: UnsafePointer<Event>, body: (_ data: UnsafePointer<MusicEventUserData>) throws -> T) rethrows -> T {
    let dataLength = eventPtr.pointee.length
    return try eventPtr.withMemoryRebound(to: MusicEventUserData.self, capacity: 8 + Int(dataLength), { eventAsBytes in
        return try body(eventAsBytes)
    })
}


...and "Event" is a custom struct for holding music data required by our app. I've always thought user events were intended to be used for arbitrary data, but I realize now that I've never been very conscious of the data length when using them. There is a length property, of course, and I do set it when I create the event.


let event = Event(length: UInt32(MemoryLayout<Event>.size), typeID: 3, trackID: UInt32(0), pitch: UInt8(0), velocity: UInt8(0), channel: UInt8(0), timestamp: beat, duration: 0, barBeat: nil)

(I realize that data looks strange—this is just a running beat count that I'm using for other purposes. Here I could use a different type, but this is just one specific example; the problem is more general.)


Presumably I'm misunderstanding something here... (??)


Any thoughts appreciated.

Replies

Okay, the more I search for a solution to this, the more confused I am. I guess this has to do with .data being (UInt8) (a single byte, not an array), rather than [UInt8], in Swift. So there isn't really any memory allocated for my user data beyond that one byte. But I don't understand how one is supposed to do anything with variable-length data, like MIDIMetaEvent "trackName" text, for example... ??? Confused!
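To make that concrete: the struct that Swift imports is just the UInt32 length plus a single placeholder byte for data (the C flexible-array idiom), so there is no room in the struct itself for a payload. A quick check (the exact size depends on padding, but the payload offset should be right after the length field):

```swift
import AudioToolbox

// MusicEventUserData as imported into Swift: a UInt32 `length` followed by
// a one-byte `data` member. The whole struct is only a few bytes; any real
// payload has to live in memory we allocate ourselves.
print(MemoryLayout<MusicEventUserData>.size)                 // header + 1 placeholder byte (+ padding)
print(MemoryLayout<MusicEventUserData>.offset(of: \.data)!)  // where the payload starts
```

This is why rebinding a pointer to a single stack-allocated Event (or writing past data on a stack MusicEventUserData) trips Address Sanitizer: the bytes beyond the struct were never ours.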


[EDIT: I should clarify that I recall reading many posts in the past indicating that the accepted way of sequencing any kind of custom data is by using MusicEventUserData. But maybe that was always assuming C, not Swift?]


[UPDATE: As an example of this confusion, this code:


        let eventDataBuffer = ByteBackpacker.pack(event)     // convert to [UInt8]
        var midiData = MusicEventUserData()
        midiData.length = UInt32(eventDataBuffer.count)
        withUnsafeMutablePointer(to: &midiData.data, { pointer in
            for i in 0 ..< eventDataBuffer.count {
                print("Can write byte \(i)...")
                pointer[i] = eventDataBuffer[i]
            }
        })


gives me a stack-buffer-overflow error as soon as i == 4. Since my Event contains an (essential) Float64 duration property, this obviously can't work.]

I'm resurrecting this old question because I'm (finally) looping back around to this problem. I have a very rare crash during playback that I've been trying to debug, and noticed that I can't run with Address Sanitizer enabled because my app hits this overflow when creating user events with MusicTrackNewUserEvent (which I do at launch), so it stops there before I can even start playing anything. But what is the procedure for properly creating user events that we can respond to in callbacks? I can imagine defining a "thin" struct for the 4-member [UInt8], but then how do I get the user event's timestamp? In my original code, the user event wraps our custom Event struct, which includes the timestamp and duration, but as I mention above, that requires Float64 (not UInt8).

What we basically need is a timestamped User Event for signalling certain types of sequenced events during playback. We use that timestamp directly, so I either have to encode the time in the User Event or figure out how to get the time. If anybody has an example of how this is actually supposed to work it would be much appreciated.
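For what it's worth, the callback registered with MusicSequenceSetUserCallback is handed the event's own beat position as a parameter, so the time may not need to be encoded in the payload at all. A sketch (the payload decoding here is illustrative, not from any particular post):

```swift
import AudioToolbox

var sequence: MusicSequence?
NewMusicSequence(&sequence)

// The callback receives the user event's own MusicTimeStamp (eventTime)
// alongside a pointer to its MusicEventUserData.
let status = MusicSequenceSetUserCallback(sequence!, { _, _, _, eventTime, eventData, _, _ in
    let length = Int(eventData.pointee.length)
    // Read the payload bytes starting at the `data` member.
    let dataOffset = MemoryLayout<MusicEventUserData>.offset(of: \.data)!
    let payload = [UInt8](UnsafeRawBufferPointer(
        start: UnsafeRawPointer(eventData) + dataOffset, count: length))
    print("user event at beat \(eventTime): \(payload.count) bytes")
}, nil)
```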

Okay, I see I can do this much more simply. I've rewritten what I had based on Gene De Lisa's post on the AudioKit GitHub:

https://github.com/AudioKit/AudioKit/issues/1393
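For anyone who lands here later, the gist of that approach (as I understand it) is to allocate the header-plus-payload buffer yourself instead of rebinding a Swift struct. A sketch under that assumption; the function name and byte-packing are mine, and it relies on MusicTrackNewUserEvent copying the event data into the track, as documented:

```swift
import AudioToolbox

// Build a MusicEventUserData large enough for `bytes`, copy the payload in
// right after the header, and add it to the track. MusicTrackNewUserEvent
// copies the event, so the scratch buffer can be freed afterwards.
func addUserEvent(bytes: [UInt8], at timestamp: MusicTimeStamp,
                  to track: MusicTrack) -> OSStatus {
    let dataOffset = MemoryLayout<MusicEventUserData>.offset(of: \.data)!
    let byteCount = max(MemoryLayout<MusicEventUserData>.size,
                        dataOffset + bytes.count)
    let raw = UnsafeMutableRawPointer.allocate(
        byteCount: byteCount,
        alignment: MemoryLayout<MusicEventUserData>.alignment)
    defer { raw.deallocate() }

    let eventPtr = raw.bindMemory(to: MusicEventUserData.self, capacity: 1)
    eventPtr.pointee.length = UInt32(bytes.count)
    bytes.withUnsafeBytes { src in
        if let base = src.baseAddress {
            raw.advanced(by: dataOffset).copyMemory(
                from: base, byteCount: bytes.count)
        }
    }
    return MusicTrackNewUserEvent(track, timestamp, eventPtr)
}

// Usage: pack whatever you need (here, just raw UTF-8) and add it at a beat.
// A fixed-size struct like the original Event could instead be flattened
// with withUnsafeBytes(of:) or ByteBackpacker, as earlier in the thread.
var sequence: MusicSequence?
NewMusicSequence(&sequence)
var track: MusicTrack?
MusicSequenceNewTrack(sequence!, &track)
let status = addUserEvent(bytes: [UInt8]("hello".utf8), at: 1.0, to: track!)
```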