Posts

Post not yet marked as solved
1 Replies
893 Views
I am using the MusicSequenceFileCreate method to generate a MIDI file from a beat-based MusicSequence. On iOS 16.0.2 devices, the file that is created has a Sysex MIDI message added (not by me) at time 0:

f0 2a 11 67 40 40 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ff ff ff ff 00 00 00 00 00 00 00 00 00 00 f7

Sysex messages are manufacturer dependent, so a file containing this message can't be read by apps like NanoStudio, Ableton, or Zenbeats. It can be read by GarageBand. My app's deployment target is iOS 13.0. Has anybody else run into this issue? Thanks
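For reference, here is a minimal sketch of the kind of code I am talking about (the function name and note values are illustrative, not my exact code): a beat-based sequence written out with MusicSequenceFileCreate.

import AudioToolbox

func writeSequence(to url: URL) {
    var maybeSequence: MusicSequence?
    NewMusicSequence(&maybeSequence)
    guard let sequence = maybeSequence else { return }

    var maybeTrack: MusicTrack?
    MusicSequenceNewTrack(sequence, &maybeTrack)
    guard let track = maybeTrack else { return }

    // One quarter note (A4) at beat 0; values are illustrative only.
    var note = MIDINoteMessage(channel: 0, note: 69, velocity: 96,
                               releaseVelocity: 0, duration: 1.0)
    MusicTrackNewMIDINoteEvent(track, 0.0, &note)

    // 480 ticks per quarter note is a common resolution.
    MusicSequenceFileCreate(sequence, url as CFURL, .midiType, .eraseFile, 480)
}

Nothing along these lines adds a Sysex event, which is what makes the extra message at time 0 surprising.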
Posted by fermor. Last updated.
Post not yet marked as solved
2 Replies
1.2k Views
In Interface Builder, the constraints for a UI element (say, a button) don't change whether I make its alignment proportional to the Safe Area or proportional to the Superview. I set one button's horizontal alignment to be proportional to the Safe Area and another button's horizontal alignment to be proportional to the Superview, yet both buttons end up horizontally aligned with each other. I would have expected the button aligned to the Safe Area to be shifted to the right, since the Safe Area's leading edge is inset from the Superview's. I'm probably missing something but can't quite understand what is going on here.

The odd part is that heights and widths proportional to the Safe Area are honored, so the size of UI elements does change depending on whether they are proportional to the Safe Area or to the Superview. So when you lay out something with Safe Area proportional heights and widths, and also use Safe Area proportional horizontal and vertical placements, UI elements don't line up on iPhones with a notch. They roughly line up on devices like iPads and iPhones without a notch, where the Safe Area is very close to the Superview's area.
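For what it's worth, here is a programmatic sketch of what I understand the proportional horizontal placement to amount to, expressed as a multiplier-based constraint against the Safe Area versus the Superview (assumed to live in a view controller's viewDidLoad; it is only meant to illustrate the comparison, not the exact constraints Interface Builder generates):

let safeAreaButton = UIButton(type: .system)
let superviewButton = UIButton(type: .system)
for button in [safeAreaButton, superviewButton] {
    button.translatesAutoresizingMaskIntoConstraints = false
    view.addSubview(button)
}

// Center of the first button at 0.5x of the Safe Area's trailing edge.
NSLayoutConstraint(item: safeAreaButton, attribute: .centerX, relatedBy: .equal,
                   toItem: view.safeAreaLayoutGuide, attribute: .trailing,
                   multiplier: 0.5, constant: 0).isActive = true

// Center of the second button at 0.5x of the Superview's trailing edge.
NSLayoutConstraint(item: superviewButton, attribute: .centerX, relatedBy: .equal,
                   toItem: view, attribute: .trailing,
                   multiplier: 0.5, constant: 0).isActive = true

Whether the two centers coincide then depends on how far the Safe Area's edges are inset from the Superview's on a particular device and orientation.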
Posted by fermor. Last updated.
Post not yet marked as solved
0 Replies
943 Views
When rendering a scene using environment lighting and the physically based lighting model, I need an object to reflect another object. As I understand it, with this type of rendering, reflections come only from the environment lighting and nothing else. As a workaround I was intending to use a light probe placed between the object to be reflected and the reflecting object. My scene is built programmatically, not through an Xcode scene file. From Apple's WWDC 2016 presentation on SceneKit I gathered that light probes can be updated programmatically through the updateProbes method of the SCNRenderer class. I have the following code, where I am trying to initialize a light probe using the updateProbes method:

let sceneView = SCNView(frame: self.view.frame)
self.view.addSubview(sceneView)

let scene = SCNScene()
sceneView.scene = scene

let lightProbeNode = SCNNode()
let lightProbe = SCNLight()
lightProbeNode.light = lightProbe
lightProbe.type = .probe
scene.rootNode.addChildNode(lightProbeNode)

var initLightProbe = true

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    if initLightProbe {
        initLightProbe = false
        let scnRenderer = SCNRenderer(device: sceneView.device, options: nil)
        scnRenderer.scene = scene
        scnRenderer.updateProbes([lightProbeNode], atTime: time)
        print("Initializing light probe")
    }
}

I don't seem to get any light from this light probe. My question is simple: can the updateProbes method be used to initialize a light probe? If not, how can you initialize a light probe programmatically?
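For context, this is the kind of environment-lighting setup the scene uses, as a minimal sketch (the HDR asset name is hypothetical); with the physically based lighting model, reflections come from scene.lightingEnvironment unless something like a light probe adds more:

let scene = SCNScene()
scene.lightingEnvironment.contents = UIImage(named: "studio_hdr")   // hypothetical asset
scene.lightingEnvironment.intensity = 1.0

let material = SCNMaterial()
material.lightingModel = .physicallyBased
material.metalness.contents = 1.0
material.roughness.contents = 0.1   // low roughness so environment reflections are visible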
Posted by fermor. Last updated.
Post not yet marked as solved
0 Replies
782 Views
I am trying to understand how timestamping works for an AUv3 MIDI plug-in of type "aumi", where the plug-in sends MIDI events to a host. I cache the MIDIOutputEventBlock and the transportStateBlock properties into _outputEventBlock and _transportStateBlock in the allocateRenderResourcesAndReturnError method and use them in the internalRenderBlock method:

- (AUInternalRenderBlock)internalRenderBlock {
    // Capture in locals to avoid Obj-C member lookups. If "self" is captured in render, we're doing it wrong. See sample code.
    return ^AUAudioUnitStatus(AudioUnitRenderActionFlags *actionFlags,
                              const AudioTimeStamp *timestamp,
                              AVAudioFrameCount frameCount,
                              NSInteger outputBusNumber,
                              AudioBufferList *outputData,
                              const AURenderEvent *realtimeEventListHead,
                              AURenderPullInputBlock pullInputBlock) {
        // Transport state
        if (_transportStateBlock) {
            AUHostTransportStateFlags transportStateFlags;
            _transportStateBlock(&transportStateFlags, nil, nil, nil);

            if (transportStateFlags & AUHostTransportStateMoving) {
                if (!playedOnce) {
                    // Note on
                    unsigned char dataOn[] = {0x90, 69, 96};
                    _outputEventBlock(timestamp->mSampleTime, 0, 3, dataOn);
                    playedOnce = YES;

                    // Note off, 96000 samples (2 seconds at 48 kHz) later
                    unsigned char dataOff[] = {0x80, 69, 0};
                    _outputEventBlock(timestamp->mSampleTime + 96000, 0, 3, dataOff);
                }
            } else {
                playedOnce = NO;
            }
        }

        return noErr;
    };
}

What this code is meant to do is play the A4 note in a synthesizer at the host for 2 seconds (the sampling rate is 48 kHz). What I get instead is a click sound. Experimenting some, I have tried delaying the start of the note-on MIDI event by adding an offset to the AUEventSampleTime passed to _outputEventBlock, but the click still sounds as soon as the play button is pressed on the host. Now, if I instead generate the note-off MIDI event when the transport state flags indicate the state is "not moving", the note plays as soon as the play button is pressed and stops when the pause button is pressed, which would be the correct behavior. This tells me that my understanding of the AUEventSampleTime parameter of the MIDIOutputEventBlock is flawed and that it cannot be used to schedule MIDI events for the host by adding offsets to it. I see that there is another property, scheduleMIDIEventBlock, and I tried using it instead, but then no sound is played at all. Any clarification of how this all works would be greatly appreciated.
Posted by fermor. Last updated.
Post not yet marked as solved
1 Replies
888 Views
I am using the AUv3 template that Xcode creates to implement a MIDI AUv3 plug-in of type "aumi". For the plug-in to be able to send MIDI to a host, it needs access to the MIDIOutputEventBlock provided by the host. I have done some research and found that this is done by caching the MIDIOutputEventBlock in the allocateRenderResourcesAndReturnError method: _midiOut = self.MIDIOutputEventBlock; and then using _midiOut in the internalRenderBlock method. The first problem is that the generated template doesn't have an allocateRenderResourcesAndReturnError method; there is only an allocateRenderResources method. When I put that code in this method I get a compile error that basically says the property is not found on an object of type xxxDSPKernelAdapter. I've seen in other examples (like Gene de Lisa's "Audio Units (AUv3) MIDI extension", a wonderful tutorial, by the way!) that the initial template from a couple of years ago was very different from what I have now, and that MIDIOutputEventBlock is actually defined in the AUAudioUnit.h header file, but in those examples self is also a different class. I am very new to working with Objective-C, C++ and Swift in the same project, so I know my understanding of how this all works is minimal and very shallow. Any insight anybody could provide on this would be greatly appreciated.
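For reference, this is the kind of caching I mean, as a minimal Swift sketch (class name hypothetical), assuming the audio unit is an AUAudioUnit subclass rather than the template's DSPKernelAdapter wrapper, since midiOutputEventBlock is declared on AUAudioUnit itself:

import AudioToolbox

class MIDIOutAudioUnit: AUAudioUnit {
    private var cachedMIDIOutput: AUMIDIOutputEventBlock?

    override func allocateRenderResources() throws {
        try super.allocateRenderResources()
        // midiOutputEventBlock is a property of AUAudioUnit, which is why it
        // isn't visible on the generated xxxDSPKernelAdapter object.
        cachedMIDIOutput = self.midiOutputEventBlock
    }

    override func deallocateRenderResources() {
        cachedMIDIOutput = nil
        super.deallocateRenderResources()
    }
}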
Posted by fermor. Last updated.
Post not yet marked as solved
0 Replies
432 Views
I can use NSMutableAttributedString to generate subscripts and superscripts in a string for a UILabel. But what if I want the subscript and the superscript to be vertically aligned, stacked at the same horizontal position? Is there a way to do this with NSMutableAttributedString?
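One approach I have been considering (a sketch only; the offsets and kerning are guesses that would need tuning for the actual font) is to give the two runs different baselineOffset values and use a negative kern on the subscript so the superscript is drawn back over it, stacked in the same column:

let baseFont = UIFont.systemFont(ofSize: 24)
let smallFont = UIFont.systemFont(ofSize: 12)

let result = NSMutableAttributedString(string: "X", attributes: [.font: baseFont])
result.append(NSAttributedString(string: "2", attributes: [
    .font: smallFont,
    .baselineOffset: -4,    // subscript, below the baseline
    .kern: -8               // pull the next run back over this one (tune per font)
]))
result.append(NSAttributedString(string: "3", attributes: [
    .font: smallFont,
    .baselineOffset: 14     // superscript, above the baseline
]))
label.attributedText = result   // label is the UILabel in question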
Posted by fermor. Last updated.
Post marked as solved
2 Replies
2.7k Views
I am generating audio with an AVAudioEngine. I then install a tap on the engine's mainMixerNode output, which provides an AVAudioPCMBuffer that is written into an MPEG-4 AAC AVAudioFile. The input audio nodes to the engine are AVAudioUnitSampler nodes. The issue I have is that the audio in the resulting .m4a file plays back slower than what you hear on the device output itself (speakers, headphones). This is the code I am implementing:

// Audio format
let audioFormat = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 2)

// Engine
var engine = AVAudioEngine()

// Two AVAudioNodes are hooked up to the AVAudioEngine
engine.connect(myAVAudioNode0, to: engine.mainMixerNode, format: audioFormat)
engine.connect(myAVAudioNode1, to: engine.mainMixerNode, format: audioFormat)

// Function to write audio to a file
func writeAudioToFile() {
    // File to write
    let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
    let audioURL = documentsDirectory.appendingPathComponent("share.m4a")

    // Format parameters
    let sampleRate = Int(audioFormat!.sampleRate)
    let channels = Int(audioFormat!.channelCount)

    // Audio file settings
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: Int(audioFormat!.sampleRate),
        AVNumberOfChannelsKey: Int(audioFormat!.channelCount),
        AVEncoderAudioQualityKey: AVAudioQuality.max.rawValue
    ]

    // Audio file
    var audioFile = AVAudioFile()
    do {
        audioFile = try AVAudioFile(forWriting: audioURL, settings: settings, commonFormat: .pcmFormatFloat32, interleaved: false)
    } catch {
        print("Failed to open audio file for writing: \(error.localizedDescription)")
    }

    // Install tap on the main mixer, write into a buffer, then write the buffer into the AAC file
    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 8192, format: nil, block: { (pcmBuffer, when) in
        do {
            try audioFile.write(from: pcmBuffer)
        } catch {
            print("Failed to write audio file: \(error.localizedDescription)")
        }
    })
}
Posted by fermor. Last updated.
Post marked as solved
1 Replies
807 Views
I am trying to implement MIDI Out for an app. The app will generate a MIDI sequence that can be directed to another app (a synthesizer, or an app like AUM) to be played. I noticed that certain functions in the CoreMIDI framework, like MIDIReceived and MIDISend, have been deprecated, and I couldn't find the new functions that replace them. The CoreMIDI documentation is very sparse and lacking. Does anybody know of the new functions that replace these?
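The closest candidates I can see are the event-list variants, though I am not certain they are the intended replacements: MIDIReceivedEventList for a virtual source and MIDISendEventList for an output port, with the data packed as Universal MIDI Packet words. A hedged sketch, assuming iOS 14 or later and the MIDI 1.0 protocol:

import CoreMIDI

func sendNoteOn(from virtualSource: MIDIEndpointRef) {
    var eventList = MIDIEventList()
    var packet = MIDIEventListInit(&eventList, ._1_0)

    // One UMP word: MIDI 1.0 channel voice message, note on, channel 0,
    // note 69 (A4), velocity 96.
    let noteOn: UInt32 = 0x2090_4560
    packet = MIDIEventListAdd(&eventList, MemoryLayout<MIDIEventList>.size,
                              packet, 0, 1, [noteOn])

    MIDIReceivedEventList(virtualSource, &eventList)
}

MIDISendEventList appears to take the output port and a specific destination in the same way.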
Posted by fermor. Last updated.
Post marked as solved
1 Replies
1.1k Views
I am setting up an app to send MIDI data to another app using the CoreMIDI framework. I will not be using the AudioKit framework in this app.

var destRef = MIDIEndpointRef()
destRef = MIDIGetDestination(destIndex)

// Create Client
var midiClientRef = MIDIClientRef()
MIDIClientCreate("Source App" as CFString, nil, nil, &midiClientRef)

// Create MIDI Source Endpoint Ref
var virtualSrcEndpointRef = MIDIEndpointRef()
MIDISourceCreate(midiClientRef, "Source App Endpoint" as CFString, &virtualSrcEndpointRef)

// Create MIDI Output port
var outputPortRef = MIDIPortRef()
MIDIOutputPortCreate(midiClientRef, "Source App Output Port" as CFString, &outputPortRef)

After that I use the MIDIReceived function to send MIDI packets to the source endpoint. This works, but the issue is that if several destination apps are open, the MIDI gets played in all of them. This makes sense, because there isn't an explicit connection between the client's output port and the destination endpoint. In the opposite case, when you create a destination endpoint and are receiving MIDI, there is a function called MIDIPortConnectSource which establishes a connection from a source to a client's input port. I cannot find an equivalent MIDIPortConnectDestination in CoreMIDI's MIDI Services API. How does one make that direct connection? Again, I will not be using AudioKit in this app.
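The closest workaround I can think of is to skip the virtual source for targeted playback and use MIDISend, which takes the output port together with a single destination endpoint, so only that destination receives the packets (the virtual source plus MIDIReceived route is inherently a broadcast). A hedged sketch, reusing destRef and outputPortRef from above:

var packetList = MIDIPacketList()
var packet = MIDIPacketListInit(&packetList)

let noteOn: [UInt8] = [0x90, 69, 96]   // note on, A4, velocity 96
packet = MIDIPacketListAdd(&packetList, MemoryLayout<MIDIPacketList>.size,
                           packet, 0, noteOn.count, noteOn)

// Sends only to the one destination looked up with MIDIGetDestination.
MIDISend(outputPortRef, destRef, &packetList)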
Posted by fermor. Last updated.
Post marked as solved
1 Replies
881 Views
I am trying to use AVAudioRecorder to record the output of the mainMixerNode of an AVAudioEngine instance and save it to an MPEG-4 AAC file. From what I have been reading, the default input to AVAudioRecorder is the microphone. I have everything set up so I can record to a file, but how can I change the AVAudioRecorder input to be the mainMixerNode output?
Posted by fermor. Last updated.
Post not yet marked as solved
0 Replies
638 Views
I have been successful at issuing score challenges in Game Center by using the challengeComposeController method of GKScore. The GKScore object has a context property which holds a specific seed used to start a specific game when the challenged player accepts the challenge. My question is: when the challenged player presses the Play Now button on the Challenges screen of the Game Center view controller, how can the game's view controller know that the player accepted a challenge, and which challenge was accepted, once the GKGameCenterViewController is dismissed? I know there is a GKLocalPlayerListener protocol with various methods for managing challenges, but it isn't well documented when these methods fire or how they should be used.
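In case it helps, this is the listener route I am experimenting with, as a minimal sketch (the helper name is hypothetical), assuming player(_:wantsToPlay:) is the callback that fires when the challenged player taps Play Now:

import GameKit

class ChallengeHandler: NSObject, GKLocalPlayerListener {
    func startListening() {
        GKLocalPlayer.local.register(self)
    }

    // Called when the local player chooses to play a challenge from the Game Center UI.
    func player(_ player: GKPlayer, wantsToPlay challenge: GKChallenge) {
        if let scoreChallenge = challenge as? GKScoreChallenge,
           let seed = scoreChallenge.score?.context {
            startGame(withSeed: seed)   // hypothetical helper that uses the stored seed
        }
    }

    private func startGame(withSeed seed: UInt64) {
        // Game-specific setup would go here.
    }
}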
Posted by fermor. Last updated.
Post marked as solved
1 Replies
1.2k Views
We are trying to generate an SKTexture (texture) from the rendering of an SKNode (node) in an SKScene presented in an SKView (view):

let origin = CGPoint(x: 0, y: 0)
let rect = CGRect(origin: origin, size: CGSize(width: 100, height: 100))
let texture = view?.texture(from: node, crop: rect)

The resulting texture is always the same regardless of the origin value. Has anybody run into this issue before?
Posted by fermor. Last updated.