If increasing the sampling rate isn't possible, is the only option for "continuous" drawing to repeatedly add a Bézier curve from point n-2 to n-1 that takes n-3 and n into account?
Are there other (easy) options to interpolate between locations n-2 and n-1, not only visually but with all intermediate points known and stored? Think more bitmap, less vector.
Below are my code and a current "uneven" result.
import SwiftUI

struct ContentView: View {
    @State var points: [CGPoint] = []

    var body: some View {
        Canvas { context, size in
            // Draw a marker at every sampled drag location.
            for point in points {
                context.draw(Image(systemName: "circle"), at: point)
            }
        }
        .gesture(
            DragGesture().onChanged { value in
                points += [value.location]
            }
        )
    }
}
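One bitmap-friendly option is plain linear interpolation: on every drag change, insert evenly spaced points between the previous and the current location, so every intermediate point is known and stored rather than merely drawn. A minimal sketch (the helper name and spacing parameter are mine):

import CoreGraphics

// Evenly spaced points from a to b (excluding a itself), so the
// stored array contains every intermediate location, not only the
// locations the gesture happened to sample.
func interpolated(from a: CGPoint, to b: CGPoint, spacing: CGFloat = 2) -> [CGPoint] {
    let dx = b.x - a.x, dy = b.y - a.y
    let distance = hypot(dx, dy)
    guard distance > spacing else { return [b] }
    let steps = Int(distance / spacing)
    return (1...steps).map { i in
        let t = CGFloat(i) / CGFloat(steps)
        return CGPoint(x: a.x + dx * t, y: a.y + dy * t)
    }
}

In onChanged, points += [value.location] would then become points += interpolated(from: points.last ?? value.location, to: value.location).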
UIViews can be wrapped and used in SwiftUI. But guess what other visual elements can't be wrapped? Sprites! Do you think wrapping SKNode in a similar way even makes sense, performance considerations included?
For a start, it could bring back the physics introduced in UIKit and lacking in SwiftUI, with even more options... Have you ever thought about it, and what were your thoughts?
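For what it's worth, the closest existing bridge I know of is iOS 14's SpriteView, which hosts a whole SKScene in SwiftUI rather than individual SKNodes. A minimal sketch of getting SpriteKit physics into a SwiftUI hierarchy that way:

import SwiftUI
import SpriteKit

struct PhysicsDemo: View {
    // A tiny scene: a box dropping under SpriteKit physics.
    var scene: SKScene {
        let scene = SKScene(size: CGSize(width: 300, height: 300))
        scene.physicsBody = SKPhysicsBody(edgeLoopFrom: CGRect(origin: .zero, size: scene.size))
        let box = SKSpriteNode(color: .red, size: CGSize(width: 40, height: 40))
        box.position = CGPoint(x: 150, y: 250)
        box.physicsBody = SKPhysicsBody(rectangleOf: box.size)
        scene.addChild(box)
        return scene
    }

    var body: some View {
        SpriteView(scene: scene)
            .frame(width: 300, height: 300)
    }
}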
Since WWDC'20 it has been easy to test and deploy Lambda functions to AWS.
But can Xcode similarly facilitate the workflow for cloud providers other than Amazon? There are open-source solutions, like https://openwhisk.apache.org, that also support Swift. If not directly, then can we adapt/extend the Lambda testing built into Xcode with some format-converting scripts used for deployment?
In Simulator, I can choose to send keyboard input to the device, or to get input from a microphone. Can I have I/O from other USB devices connected to a Mac?
I am interested in receiving, in the Simulator, events from a MIDI keyboard connected to the Mac (and supported by both macOS and iOS). There are many more possible use cases, since with DriverKit iPads can now support many more external devices.
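A quick probe I would try first, assuming CoreMIDI is passed through to the Simulator at all: enumerate the MIDI sources a Simulator build can see.

import CoreMIDI

// Lists every MIDI source visible to the process; run in a
// Simulator build with a MIDI keyboard attached to the Mac.
let sourceCount = MIDIGetNumberOfSources()
print("Visible MIDI sources: \(sourceCount)")
for i in 0..<sourceCount {
    let source = MIDIGetSource(i)
    var name: Unmanaged<CFString>?
    if MIDIObjectGetStringProperty(source, kMIDIPropertyDisplayName, &name) == noErr,
       let name = name?.takeRetainedValue() {
        print("\(i): \(name)")
    }
}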
Repeating a protocol conformance in a subclass of a conforming class results in a redundancy error. My problem is that, because of how protocol witness tables work, repeating the conformance in a subclass seems not to be redundant at all. Intuitively, in Example3 below, repeating the parent class's conformance in the child class could restore the route from the protocol extension's implementation back to the child's implementation. Where am I wrong?
protocol Prot {
    func f()
    func g()
}

extension Prot {
    func f() { print("protocol extension's implementation") }
    func g() { f() }
}

class Parent: Prot {
}

// Directly implementing the protocol routes to the child's implementation.
class Example1: Prot {
    func f() { print("child's implementation") }
}

// Indirectly implementing the protocol routes to the protocol extension's implementation.
class Example2: Parent {
    func f() { print("child's implementation") }
}

// "Redundant conformance of 'Example3' to protocol 'Prot'" error,
// instead of restoring the route to the child's implementation.
class Example3: Parent, Prot {
    func f() { print("child's implementation") }
}

Example1().g() // child's implementation
Example2().g() // protocol extension's implementation
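The usual workaround I'm aware of: give the parent class its own f(), so the witness table captures a class method that subclasses can override through ordinary dynamic dispatch (the names below are mine):

class Parent2: Prot {
    // Declared in the class itself, so the witness table points at a
    // dynamically dispatched method instead of the extension's f().
    func f() { print("parent's implementation") }
}

class Example4: Parent2 {
    override func f() { print("child's implementation") }
}

Example4().g() // child's implementation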
When an organisation adds an app to an account for the first time, it may choose a developer name different from its legal name.
Can I expect my previously chosen developer name to stay exactly as it was after I update my legal name?
I've heard that for developers who choose a different name, when/if the legal entity name has to be changed at a later stage, the App Store Connect company name is changed to the new legal entity name accordingly. But are the terms "App Store Connect company name" and "developer name" interchangeable? Do they refer to the same thing?
If any change to a legal name means irreversibly losing the originally accepted developer name, doesn't that contradict Apple's statement that a developer name cannot be edited or updated later?
How do I spread incoming messages (A, B, ...) evenly, one every second, marking them with the most recent timestamps (1, 2, ...)?
(Marble diagram omitted: input messages above a gray stripe, evenly spaced output on the stripe.)
This requires buffering messages (e.g. B) when necessary, and omitting timer ticks (e.g. 4) when there are no messages to consume, current or buffered.
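A minimal sketch of one way to do this in Combine (buffer size and names are placeholders): flatMap(maxPublishers: .max(1)) applies backpressure, so messages queue in the buffer while one is in flight, and nothing at all is emitted while the buffer is empty, which is what effectively skips tick 4.

import Combine
import Foundation

let messages = PassthroughSubject<String, Never>()

let evenlySpaced = messages
    .buffer(size: 100, prefetch: .keepFull, strategy: .dropOldest)
    .flatMap(maxPublishers: .max(1)) { message in
        // Holding each message for a second spaces the output; the
        // next message is requested only after this one fires.
        Just(message).delay(for: .seconds(1), scheduler: DispatchQueue.main)
    }

let cancellable = evenlySpaced.sink { message in
    print("\(message) at \(Date())") // Date() here is the emission time
}

messages.send("A")
messages.send("B")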
Let's say I have a hundred big files to upload to a server, using requests with a short-lived authorisation token.
NSURLSession background upload describes a problem with adding too many requests, so the first question is: should we manage uploads manually, by which I mean queuing, postponing, and retrying? Will we then fall into the same traps, only in a different way?
What should we do if tasks picked up by the system carry an outdated token and therefore fail? How do we update a token: is there a delegate method (preferably pre-iOS 13 compatible) in which I can get a fresh token and modify a request header?
Is there any iOS-specific design pattern or contract (Apple's list of server requirements) that would allow uploads to be resumable?
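A minimal sketch of the setup in question, with hypothetical identifier and endpoint parameters. Note that a background task's request, including its Authorization header, is fixed once the task is created, which is exactly why an expiring token is a problem:

import Foundation

final class Uploader: NSObject, URLSessionTaskDelegate {
    lazy var session: URLSession = {
        let config = URLSessionConfiguration.background(withIdentifier: "com.example.uploads")
        return URLSession(configuration: config, delegate: self, delegateQueue: nil)
    }()

    func enqueueUpload(of fileURL: URL, to endpoint: URL, token: String) {
        var request = URLRequest(url: endpoint)
        request.httpMethod = "PUT"
        request.setValue("Bearer \(token)", forHTTPHeaderField: "Authorization")
        session.uploadTask(with: request, fromFile: fileURL).resume()
    }

    func urlSession(_ session: URLSession, task: URLSessionTask, didCompleteWithError error: Error?) {
        // On an auth failure, the only option I see is fetching a fresh
        // token and re-creating the task; a created task's header
        // cannot be edited afterwards.
    }
}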
I will be filling audio and video buffers with randomly distributed data for each frame in real time. Initializing these arrays with Floats inside a basic for loop seems naive. Are there optimised methods for this task in iOS libraries? I was looking for a data-science oriented framework from Apple and did not find one, but maybe Accelerate, Metal, or CoreML are good candidates to research? Is my thinking correct, and if so, can you guide me?
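One Accelerate-based sketch, assuming uniform noise is acceptable: take raw random bits once with arc4random_buf, then let vDSP handle the conversion and scaling in two vectorized passes instead of a per-element Swift loop.

import Accelerate

// Fills a Float array with uniform random values in -1...1.
func randomFloats(count: Int) -> [Float] {
    var raw = [UInt32](repeating: 0, count: count)
    arc4random_buf(&raw, count * MemoryLayout<UInt32>.stride)

    var floats = [Float](repeating: 0, count: count)
    vDSP_vfltu32(raw, 1, &floats, 1, vDSP_Length(count)) // UInt32 -> Float

    // One fused multiply-add maps 0...UInt32.max onto -1...1.
    var scale = Float(2.0 / Double(UInt32.max))
    var offset: Float = -1
    var noise = [Float](repeating: 0, count: count)
    vDSP_vsmsa(floats, 1, &scale, &offset, &noise, 1, vDSP_Length(count))
    return noise
}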
Below I have included the steps for Java from https://docs.oracle.com/javase/8/docs/technotes/guides/sound/programmer_guide/chapter11.html as an example.
Can you help me with similar instructions for Swift and iOS? How do I feed MusicSequenceFileCreate(_:_:_:_:_:) with the MIDIEventList from MIDIInputPortCreateWithProtocol(_:_:_:_:_:) without parsing incoming MIDI events to recreate messages, one by one, from scraped raw data? I feel really embarrassed asking this question, because IMHO this should be documented, easy to find, and easy to do.
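For reference, the writing half on its own is short; the gap is getting from a MIDIEventList into MusicTrack events without unpacking each message by hand. A minimal sketch of the writing side, with a hypothetical output path:

import AudioToolbox

var sequence: MusicSequence?
NewMusicSequence(&sequence)

var track: MusicTrack?
MusicSequenceNewTrack(sequence!, &track)

// One hand-built note event; this per-message reconstruction is
// exactly the step I would like to avoid for incoming MIDI.
var note = MIDINoteMessage(channel: 0, note: 60, velocity: 100, releaseVelocity: 0, duration: 1)
MusicTrackNewMIDINoteEvent(track!, 0, &note)

let url = URL(fileURLWithPath: "/tmp/out.mid") as CFURL
MusicSequenceFileCreate(sequence!, url, .midiType, .eraseFile, 0) // 0 = default resolution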
If SKView had a presentScene(withIdentifier identifier: String) -> SKScene? method, we wouldn't be forced to reinstantiate the same scenes for every presentScene(_:) call, or else keep a collection of strong references outside. And scene presentation could be optimised by the framework. Is it optimised in any way now?
Let's imagine a SpriteKit scene with hundreds of SKNodes, recreated over and over again. Or better, let's imagine dealing with UITableViewCells for UITableViews manually, if there were no auto-recycling mechanism.
Can you see the resemblance? What is your opinion about it?
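The manual workaround described above, sketched with names of my own choosing, is essentially a dictionary of strong references living outside SKView:

import SpriteKit

final class SceneCache {
    private var scenes: [String: SKScene] = [:]

    // Returns the cached scene for id, building it once on first use.
    func scene(withIdentifier id: String, make: () -> SKScene) -> SKScene {
        if let cached = scenes[id] { return cached }
        let scene = make()
        scenes[id] = scene
        return scene
    }
}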
SecCopyErrorMessageString returns a string explaining the meaning of a security result code. Its declaration is
func SecCopyErrorMessageString(
_ status: OSStatus,
_ reserved: UnsafeMutableRawPointer?
) -> CFString?
with typealias OSStatus = Int32
Given an arbitrary OSStatus (for example, for kAudioFormatUnsupportedDataFormatError it is 1718449215), is there something that gets the description as a string?
The idea would be analogous to:
let x: Int32 = 1718449215
if let errMsg = SecCopyErrorMessageString(x, nil) as? String {
    print(errMsg)
}
This is not a security result code, and the output is just "OSStatus 1718449215"; what I expect is a "string explaining the meaning".
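For Audio Toolbox errors specifically, many OSStatus values are four-character codes, so decoding the bytes often yields something readable where SecCopyErrorMessageString does not (a workaround, not a general OSStatus-to-description API):

// Interprets an OSStatus as a FourCC if all four bytes are printable ASCII.
func fourCharCode(from status: OSStatus) -> String? {
    let n = UInt32(bitPattern: status)
    let bytes = [UInt8((n >> 24) & 0xFF), UInt8((n >> 16) & 0xFF),
                 UInt8((n >> 8) & 0xFF), UInt8(n & 0xFF)]
    guard bytes.allSatisfy({ (32...126).contains($0) }) else { return nil }
    return String(bytes: bytes, encoding: .ascii)
}

print(fourCharCode(from: 1718449215) ?? "not a FourCC") // "fmt?"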
For AVAudioPlayer, there is a corresponding AVAudioRecorder. For AVMIDIPlayer, I found nothing for recording from the system's active MIDI input device.
Can I record MIDI events from the system's active MIDI input device, without resorting to low-level CoreMIDI?
After configuring AVAudioUnitSampler with just a few lines of code,
import AVFoundation

var engine = AVAudioEngine()
let unit = AVAudioUnitSampler()
engine.attach(unit)
engine.connect(unit, to: engine.outputNode, format: engine.outputNode.outputFormat(forBus: 0))
try! unit.loadInstrument(at: sndurl) // sndurl: URL of a .sf2 file
try! engine.start()
I could send MIDI events programmatically.
// Feeding the AVAudioUnitMIDIInstrument with random MIDI data.
let range = 0..<100
let midiStart = range.map { _ in UInt8.random(in: 70...90) }
let midiStop = [0] + midiStart // each note stops the previously started one
let times = range.map { _ in TimeInterval.random(in: 0...100) * 0.3 }
for i in range {
    DispatchQueue.main.asyncAfter(deadline: .now() + times[i]) {
        unit.stopNote(midiStop[i], onChannel: 1)
        unit.startNote(midiStart[i], withVelocity: 127, onChannel: 1)
    }
}
But instead, I need to send MIDI events from a MIDI instrument, and tap them for recording.
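A sketch of what I mean, assuming the unsafeSequence()/words() accessors added to CoreMIDI in iOS 14 and continuing with the unit configured above; it forwards note on/off words from the first connected source to the sampler:

import CoreMIDI

var client = MIDIClientRef()
MIDIClientCreateWithBlock("client" as CFString, &client, nil)

var port = MIDIPortRef()
MIDIInputPortCreateWithProtocol(client, "in" as CFString, ._1_0, &port) { eventList, _ in
    for packet in eventList.unsafeSequence() {
        for word in packet.pointee.words() {
            // MIDI 1.0 channel-voice words: status, data1, data2.
            let status = UInt8((word >> 16) & 0xFF)
            let note = UInt8((word >> 8) & 0x7F)
            let velocity = UInt8(word & 0x7F)
            switch status & 0xF0 {
            case 0x90 where velocity > 0:
                unit.startNote(note, withVelocity: velocity, onChannel: 0)
            case 0x80, 0x90:
                unit.stopNote(note, onChannel: 0)
            default:
                break
            }
        }
    }
}

if MIDIGetNumberOfSources() > 0 {
    MIDIPortConnectSource(port, MIDIGetSource(0), nil)
}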
The first few lines of this code generate audio noise of arbitrary length. What would be the equivalent for generating uncompressed video noise (analog TV static)?
import AVFoundation

let srcNode = AVAudioSourceNode { _, _, frameCount, bufferList in
    // Fill the first channel's buffer with uniform random samples.
    let buf: UnsafeMutableBufferPointer<Float> =
        UnsafeMutableBufferPointer(bufferList.pointee.mBuffers)
    for frame in 0..<Int(frameCount) {
        buf[frame] = Float.random(in: -1...1)
    }
    return noErr
}

let engine = AVAudioEngine()
let output = engine.outputNode
let format = output.inputFormat(forBus: 0)
engine.attach(srcNode)
engine.connect(srcNode, to: output, format: format)
try? engine.start()

CFRunLoopRunInMode(.defaultMode, CFTimeInterval(5.0), false)
engine.stop()
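For the video side, one hedged sketch: fill a grayscale pixel buffer with random bytes and wrap it in a CGImage, producing one frame of static per call (a CADisplayLink or timer would drive the animation):

import Foundation
import CoreGraphics

// One frame of 8-bit grayscale static.
func noiseFrame(width: Int, height: Int) -> CGImage? {
    var pixels = [UInt8](repeating: 0, count: width * height)
    arc4random_buf(&pixels, pixels.count) // random luminance per pixel
    guard let provider = CGDataProvider(data: Data(pixels) as CFData) else { return nil }
    return CGImage(width: width, height: height,
                   bitsPerComponent: 8, bitsPerPixel: 8, bytesPerRow: width,
                   space: CGColorSpaceCreateDeviceGray(),
                   bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
                   provider: provider, decode: nil,
                   shouldInterpolate: false, intent: .defaultIntent)
}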
The new key-path syntax would be \CALayer.position. Of course, simply pasting it in place of #keyPath(CALayer.position) in CAKeyframeAnimation(keyPath:) won't work. What will?
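One option I've seen, using the underscored-but-public _kvcKeyPathString property that exposes a Swift key path's Objective-C KVC string:

import UIKit

if let kvcString = (\CALayer.position)._kvcKeyPathString {
    let animation = CAKeyframeAnimation(keyPath: kvcString) // "position"
    animation.values = [CGPoint(x: 0, y: 0), CGPoint(x: 100, y: 100)].map { NSValue(cgPoint: $0) }
    animation.duration = 1
}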