Hi,
If values are PUBLISHED rapidly, then ALL of them are present in the Combine sink, but SOME are absent from the async loop. Why the difference?
For example, in the code below, tapping repeatedly 4 times gives the output:
INPUT 24, INPUT 9, INPUT 31, INPUT 45, SINK 24, SINK 9, LOOP 24, SINK 31, SINK 45, LOOP 31.
import SwiftUI
import Combine
import PlaygroundSupport

var subject = PassthroughSubject<Int, Never>()

struct ContentView: View {
    @State var bag = [AnyCancellable]()
    @State var a = [String]()

    var body: some View {
        Text("TAP A FEW TIMES RAPIDLY")
            .frame(width: 160, height: 160)
            .onTapGesture {
                Task {
                    let anyInt = Int.random(in: 1..<100)
                    print("INPUT \(anyInt)")
                    try await Task.sleep(nanoseconds: 3_000_000_000)
                    subject.send(anyInt)
                }
            }
            .task {
                for await anyInt in subject.values {
                    print(" LOOP \(anyInt)")
                }
            }
            .onAppear {
                subject.sink { anyInt in
                    print(" SINK \(anyInt)")
                }.store(in: &bag)
            }
    }
}

PlaygroundPage.current.setLiveView(ContentView())
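One variant I am curious about, though I have not verified that it is the intended fix, is putting Publishers.Buffer in front of .values, i.e. replacing the .task above with:

.task {
    // my assumption: buffer values that arrive while the loop body is busy
    for await anyInt in subject
        .buffer(size: 100, prefetch: .keepFull, whenFull: .dropOldest)
        .values {
        print(" LOOP \(anyInt)")
    }
}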
Thank you.
Using the SignalGenerator example from WWDC with the following command
./SignalGenerator -signal square -duration 3 -output ./uncompressed.wav
I can generate an audio file, but it is not playable, and its Get Info shows:
Duration: 00:00
Audio channels: Mono
Sample rate: 44,1 kHz
Bits per sample: 32
Although for macOS Preview the duration is zero, this file is playable in VLC and convertible to other formats, so its content is OK. To get more information about the format of an output file, I changed the example to print outputFormatSettings:
["AVLinearPCMBitDepthKey": 32, "AVLinearPCMIsBigEndianKey": 0, "AVSampleRateKey": 44100, "AVFormatIDKey": 1819304813, "AVLinearPCMIsFloatKey": 1, "AVNumberOfChannelsKey": 1, "AVLinearPCMIsNonInterleaved": 1]
I don't know how to interpret the number in "AVFormatIDKey": 1819304813. The documentation says:
let AVFormatIDKey: String -
For information about the possible values for this key, see Audio Format Identifiers
Having read this, I still don't know the relation between AVFormatIDKey and the listed Audio Format Identifiers.
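For what it's worth, this is how I would try to decode that integer as a four-character code; it is only a sketch of my own, not something taken from the sample:

import Foundation

// Sketch: interpret an AVFormatIDKey value (an AudioFormatID, i.e. a FourCharCode)
// as its four-character ASCII representation.
func fourCharCode(from value: UInt32) -> String {
    let bytes = [24, 16, 8, 0].map { UInt8((value >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? "\(value)"
}

print(fourCharCode(from: 1819304813)) // "lpcm"

Even so, I'm not sure the character string alone is the right way to relate the key to the Audio Format Identifiers list.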
If I knew which file format to expect, it might help me guess why the duration of the generated files is always 0. Can you help me with both questions? Thanks.
For a SwiftUI view, I can SET its size and position with modifiers, or GET its frame by being clever with a GeometryReader. It is not symmetrical, and IMO it violates Ockham's razor.
Do you know a single modifier, from Apple or not, with which the caller can set the frame programmatically, BUT ALSO, when the view is auto-positioned and auto-sized, the binding is set to the correct CGRect?
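For reference, this is the kind of GeometryReader trick I mean for the GET half; it is only a sketch, and FrameReporter / reportFrame(into:) are names I made up:

import SwiftUI

// Sketch: report a view's resolved frame (in global coordinates) into a binding.
// It only captures the frame on appearance, which is part of the asymmetry
// that bothers me.
struct FrameReporter: ViewModifier {
    @Binding var frame: CGRect

    func body(content: Content) -> some View {
        content.background(
            GeometryReader { proxy in
                Color.clear.onAppear { frame = proxy.frame(in: .global) }
            }
        )
    }
}

extension View {
    func reportFrame(into frame: Binding<CGRect>) -> some View {
        modifier(FrameReporter(frame: frame))
    }
}

What I am after is one modifier that both sets the frame and keeps such a binding up to date.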
The author of Creating a Movie with an Image and Audio on iOS proposes this way of re-checking for asset writer readiness:
// lines 52-55
while !adaptor.assetWriterInput.isReadyForMoreMediaData { usleep(10) }
adaptor.append(buffer, withPresentationTime: startFrameTime)
Is this the canonical way of querying AVFoundation's objects, or is there a better way than a sleep-and-try-again loop? The only remotely related post I found is How to check if AVAssetWriter has finished writing the last frame, and it is four years old with no answer.
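For comparison, this is roughly the push-style pattern I would hope is canonical. It is only my sketch around AVAssetWriterInput.requestMediaDataWhenReady(on:using:); adaptor is the pixel buffer adaptor from the article, and nextFrame() is a hypothetical helper of mine:

import AVFoundation

// Sketch: let the input call back when it can accept data, instead of
// polling isReadyForMoreMediaData in a sleep loop.
// nextFrame() is hypothetical: it returns the next (buffer, time) pair, or nil.
let writerQueue = DispatchQueue(label: "assetwriter.queue")
adaptor.assetWriterInput.requestMediaDataWhenReady(on: writerQueue) {
    while adaptor.assetWriterInput.isReadyForMoreMediaData {
        guard let (buffer, time) = nextFrame() else {
            adaptor.assetWriterInput.markAsFinished()
            return
        }
        adaptor.append(buffer, withPresentationTime: time)
    }
}

Is this the intended pattern, or is the polling loop from the article just as acceptable?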
Let's imagine an HStack full of views. You put your finger down on the leftmost view and lift it up on the rightmost one. What modifier should the views in between use just to be notified when the finger slides over them? With a mouse it would be onHover, but is there anything for a finger?
Without a modifier, the result can be achieved with this code:
import SwiftUI
import PlaygroundSupport

//
// How to refactor this ugly code
// with a modifier similar to onHover(perform:)?
//
struct ContentView: View {
    @State var isHovered = Array(repeating: false, count: 8)
    @State private var location = CGPoint.zero

    var body: some View {
        HStack {
            ForEach(0..<8) { i in
                GeometryReader { g in
                    Rectangle()
                        .fill(isHovered[i] ? .orange : .gray)
                        .onChange(of: location) { newValue in
                            isHovered[i] = g.frame(
                                in: .named("keyboardSpace")
                            ).contains(newValue)
                        }
                }
            }
        }
        .gesture(
            DragGesture()
                .onChanged { gesture in
                    location = gesture.location
                }
                .onEnded { _ in
                    isHovered = Array(repeating: false, count: 8)
                }
        )
        .frame(width: 500, height: 500)
        .coordinateSpace(name: "keyboardSpace")
    }
}

PlaygroundPage.current.setLiveView(ContentView())
In Swift Playgrounds on macOS, when we create a new file in .documentDirectory given by FileManager, we can always search for the results in e.g. ~/Library/Developer/Xcode/DerivedData/[project-specific]/Build/Products/Debug .
I guess this location may change anytime.
I know that Xcode is not Matlab, but is there a folder / a shortcut / any place in the IDE that includes the resulting files, or even presents a live preview of created files (.csv) for easy inspection?
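For context, this is all I do at the moment: print the resolved URL and paste it into Finder's "Go to Folder…"; results.csv is just a made-up example file:

import Foundation

// Print where .documentDirectory actually resolves for this run.
let docs = FileManager.default.urls(for: .documentDirectory,
                                    in: .userDomainMask)[0]
print(docs.path)

// The kind of file I would like to inspect directly from the IDE.
let csv = docs.appendingPathComponent("results.csv")
try? "x,y\n1,2\n".write(to: csv, atomically: true, encoding: .utf8)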
The new syntax would be \CALayer.position. Of course, a simple cut-and-paste into CAKeyframeAnimation(keyPath: #keyPath(CALayer.position)) won't work. What will?
The first few lines of this code generate audio noise of arbitrary length. What would be the equivalent for generating uncompressed video noise (analog TV static)?
import AVFoundation

let srcNode = AVAudioSourceNode { _, _, frameCount, bufferList in
    for frame in 0..<Int(frameCount) {
        let buf: UnsafeMutableBufferPointer<Float> =
            UnsafeMutableBufferPointer(bufferList.pointee.mBuffers)
        buf[frame] = Float.random(in: -1...1)
    }
    return noErr
}

let engine = AVAudioEngine()
let output = engine.outputNode
let format = output.inputFormat(forBus: 0)

engine.attach(srcNode)
engine.connect(srcNode, to: output, format: format)

try? engine.start()
CFRunLoopRunInMode(.defaultMode, CFTimeInterval(5.0), false)
engine.stop()
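For the video side, this is roughly what I imagine doing per frame, although I don't know whether it is the idiomatic route; the one-component pixel format and the makeNoiseFrame name are my own choices:

import CoreVideo

// Sketch: fill a one-channel 8-bit CVPixelBuffer with random values,
// one buffer per frame of "static".
func makeNoiseFrame(width: Int, height: Int) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                        kCVPixelFormatType_OneComponent8, nil, &pixelBuffer)
    guard let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return nil }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    let pixels = base.assumingMemoryBound(to: UInt8.self)
    for row in 0..<height {
        for col in 0..<width {
            pixels[row * bytesPerRow + col] = UInt8.random(in: 0...255)
        }
    }
    return buffer
}

What I am missing is the video analogue of AVAudioSourceNode: something that pulls such frames from me in real time and displays them.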
For AVAudioPlayer, there is a corresponding AVAudioRecorder. For AVMIDIPlayer, I found nothing for recording from the system's active MIDI input device.
Can I record MIDI events from the system's active MIDI input device without resorting to low-level CoreMIDI?
After configuring AVAudioUnitSampler with just a few lines of code,
import AVFoundation

var engine = AVAudioEngine()
let unit = AVAudioUnitSampler()

engine.attach(unit)
engine.connect(unit, to: engine.outputNode,
               format: engine.outputNode.outputFormat(forBus: 0))
try! unit.loadInstrument(at: sndurl) // url to .sf2 file
try! engine.start()
I could send midi events programmatically.
// feeding AVAudioUnitMIDIInstrument with midi data
let range = (0..<100)
let midiStart = range.map { _ in UInt8.random(in: 70...90) }
let midiStop = [0] + midiStart
let times = range.map { _ in TimeInterval.random(in: 0...100) * 0.3 }

for i in range {
    DispatchQueue.main.asyncAfter(deadline: .now() + TimeInterval(times[i])) {
        unit.stopNote(midiStop[i], onChannel: 1)
        unit.startNote(midiStart[i], withVelocity: 127, onChannel: 1)
    }
}
But instead, I need the MIDI events to come from a MIDI instrument, and to tap into them for recording.
SecCopyErrorMessageString returns a string explaining the meaning of a security result code and its declaration is
func SecCopyErrorMessageString(
_ status: OSStatus,
_ reserved: UnsafeMutableRawPointer?
) -> CFString?
with typealias OSStatus = Int32
Given an arbitrary OSStatus (for example, for kAudioFormatUnsupportedDataFormatError it is 1718449215), is there something that returns the description as a string?
The idea would be analogous to:
let x: Int32 = 1718449215
if let errMsg = SecCopyErrorMessageString(x, nil) as String? {
    print(errMsg)
}
But this is not a security result code, and the output is just "OSStatus 1718449215"; what I expect is a "string explaining the meaning".
If SKView had a presentScene(withIdentifier identifier: String) -> SKScene?, we wouldn't be forced to re-instantiate the same scenes for every call to SKView's presentScene(_:), or else keep a collection of strong references outside. Scene presentation could also be optimised by the framework. Is it optimised in any way right now?
Let's imagine a SpriteKit scene with hundreds of SKNodes, recreated over and over again. Or better, let's imagine dealing with UITableViewCells for UITableView manually, if there were no auto-recycling mechanism.
Can you see any resemblance, and what is your opinion about it?
Below I have included steps for Java from https://docs.oracle.com/javase/8/docs/technotes/guides/sound/programmer_guide/chapter11.html as an example.
Can you help me with a similar instruction for Swift and iOS? How do I feed MusicSequenceFileCreate(_:_:_:_:_:) with the MIDIEventList coming from MIDIInputPortCreateWithProtocol(_:_:_:_:_:), without parsing the incoming MIDI events to recreate the messages, one by one, from scraped raw data? I feel really embarrassed asking this question, because IMHO this should be documented, easy to find, and easy to do.
I will be filling audio and video buffers with randomly distributed data for each frame in real time. Initializing these arrays with Floats inside a basic for loop somehow seems naive. Are there any optimised methods for this task in iOS libraries? I was looking for a data-science oriented framework from Apple and did not find one, but maybe Accelerate, Metal, or CoreML are good candidates to research? Is my thinking correct, and if so, can you guide me?
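To make the question concrete, this is the kind of thing I mean by avoiding the naive loop: a sketch with Accelerate, where the byte-to-float scaling is my own choice:

import Foundation
import Accelerate

// Sketch: fill a byte buffer with random data in one call, then convert and
// rescale to Float samples in [-1, 1] with vDSP instead of a per-element loop.
let count = 44_100
var randomBytes = [UInt8](repeating: 0, count: count)
arc4random_buf(&randomBytes, count)

var converted = [Float](repeating: 0, count: count)
vDSP_vfltu8(randomBytes, 1, &converted, 1, vDSP_Length(count)) // 0...255 as Float

var scale: Float = 2.0 / 255.0
var offset: Float = -1.0
var samples = [Float](repeating: 0, count: count)
vDSP_vsmsa(converted, 1, &scale, &offset, &samples, 1, vDSP_Length(count)) // now in -1...1

Is this the right direction, or would Metal / CoreML be better suited for per-frame generation?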
Let's say I have a hundred big files to upload to a server, using requests with a short-lived authorisation token.
NSURLSession background upload describes a problem with adding too many requests, so the first question is: should we manage uploads manually, by which I mean queuing, postponing, and retrying? Will we then fall into the same traps, only in a different way?
What should we do if tasks picked up by the system have an outdated token and therefore fail? How do we update a token: is there a delegate method (preferably pre-iOS 13 compatible) in which I can get a fresh token and modify a request header?
Is there any iOS-specific design pattern or contract (Apple's list of server requirements) that would allow uploads to be resumable?
How do I spread incoming messages (A, B, ...) evenly, one every second, marking them with the most recent timestamps (1, 2, ...)?
[diagram: output on the gray stripe, input above it]
This requires buffering messages (e.g. B) when necessary, and omitting timer ticks (e.g. 4) when there are no messages to consume, current or buffered.
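For the record, this is the closest I have come myself: a sketch, with names of my own, that pulls from a buffered AsyncStream and emits at most one message per second, stamping each message when it is emitted:

import Foundation

// Sketch: AsyncStream buffers messages that arrive too quickly; the loop
// emits at most one per second and stamps it with the emission time.
// When nothing is buffered, nothing is emitted (no empty ticks).
func spread(_ input: AsyncStream<String>,
            emit: @escaping (String, Date) -> Void) -> Task<Void, Never> {
    Task {
        for await message in input {
            emit(message, Date()) // "most recent" timestamp
            try? await Task.sleep(nanoseconds: 1_000_000_000)
        }
    }
}

I am not sure this matches the tick semantics in the picture exactly, hence the question.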