I've noticed that enabling voice processing on AVAudioInputNode changes the node's format - most noticeably the channel count.
let inputNode = avEngine.inputNode
print("Format #1: \(inputNode.outputFormat(forBus: 0))")
// Format #1: <AVAudioFormat 0x600002bb4be0: 1 ch, 44100 Hz, Float32>
try! inputNode.setVoiceProcessingEnabled(true)
print("Format #2: \(inputNode.outputFormat(forBus: 0))")
// Format #2: <AVAudioFormat 0x600002b18f50: 3 ch, 44100 Hz, Float32, deinterleaved>
Is this expected? How can I interpret these channels?
My input device is an aggregate device where each channel comes from a different microphone. I then record each channel to a separate file.
But when voice processing changes the channel layout, I can no longer rely on that mapping.
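For context, here's a simplified sketch of how I split the tap buffer into per-channel files (the buffer size, the copy loop, and the file handling are illustrative, not my exact code):
import AVFoundation

let inputFormat = inputNode.outputFormat(forBus: 0)
let channelCount = Int(inputFormat.channelCount)
let monoFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                               sampleRate: inputFormat.sampleRate,
                               channels: 1,
                               interleaved: false)!

inputNode.installTap(onBus: 0, bufferSize: 4096, format: inputFormat) { buffer, _ in
    guard let source = buffer.floatChannelData else { return }
    for channel in 0..<channelCount {
        // Copy one deinterleaved channel into a mono buffer...
        let mono = AVAudioPCMBuffer(pcmFormat: monoFormat, frameCapacity: buffer.frameLength)!
        mono.frameLength = buffer.frameLength
        for frame in 0..<Int(buffer.frameLength) {
            mono.floatChannelData![0][frame] = source[channel][frame]
        }
        // ...and append it to the AVAudioFile that corresponds to this microphone.
    }
}
This per-channel mapping is exactly what breaks once voice processing changes the channel count.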
SwiftUI Previews are broken for me when one of the imported Swift packages has a conditional dependency scoped to another platform.
Steps to reproduce:
Create an Xcode project with two targets: one for macOS, another for iOS.
Add a Swift package that has a conditional dependency - e.g. it depends on another package, but only on iOS.
Example:
targets: [
    .target(name: "Components",
            dependencies: [
                .productItem(name: "FloatingPanel", package: "FloatingPanel", condition: .when(platforms: [.iOS])),
            ]),
]
Try running a SwiftUI preview on macOS. It won't work.
The error I get is "No such module 'UIKit'".
It looks like Xcode is trying to build the FloatingPanel dependency even though its condition specifies the iOS platform.
Is there any way to fix this?
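For reference, this is the kind of guarded usage I have in mind inside the Components target (a sketch; ComponentsView and its contents are purely illustrative):
// Components/ComponentsView.swift
#if canImport(FloatingPanel)   // only present when the iOS-only dependency is actually built
import FloatingPanel
#endif
import SwiftUI

struct ComponentsView: View {
    #if os(iOS)
    var body: some View {
        // iOS-only path that is free to use FloatingPanel types.
        Text("iOS with FloatingPanel")
    }
    #else
    var body: some View {
        Text("macOS fallback")
    }
    #endif
}
Even so, the macOS preview still seems to build FloatingPanel itself, which is where the UIKit error comes from.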
I've noticed a weird issue where NSFetchedResultsController returns two instances of the same object in its fetchedObjects.
func controllerDidChangeContent(_ controller: NSFetchedResultsController<NSFetchRequestResult>) {
    items = (controller.fetchedObjects ?? []) as! [Item]
    print(items)
    // Prints:
    // - 0 : <Item: 0x600000d40e10> (entity: Item; id: 0x600002e6b2e0 <x-coredata:///Item/tE1646DB0-C3C2-4AE1-BC32-6B10934F292C2>; ....
    // - 1 : <Item: 0x600000d40e10> (entity: Item; id: 0x600002e6b2e0 <x-coredata:///Item/tE1646DB0-C3C2-4AE1-BC32-6B10934F292C2>; ...
    print(items[0] == items[1]) // true !!!
}
The issue occurs after adding a new item and saving both contexts.
func weirdTest() {
    print("1. Adding.")
    let item = Item(context: viewContext)
    item.id = UUID()
    viewContext.processPendingChanges()

    print("2. Saving.")
    viewContext.performAndWait {
        try! viewContext.save()
    }
    rootCtx.performAndWait {
        try! rootCtx.save()
    }
}
Here's my Core Data Stack:
View Context (main thread) --> Background Context --> Persistent Store Coordinator
And here's how I configure my contexts:
lazy var rootCtx: NSManagedObjectContext = {
    let rootCtx = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
    rootCtx.persistentStoreCoordinator = coordinator
    rootCtx.automaticallyMergesChangesFromParent = true
    return rootCtx
}()

lazy var viewContext: NSManagedObjectContext = {
    let ctx = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
    ctx.parent = rootCtx
    ctx.automaticallyMergesChangesFromParent = true
    return ctx
}()
Does anyone have an idea what's going on here? :<
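The only workaround I can think of (assuming the duplicate comes from the item's temporary object ID getting replaced by a permanent one after the nested saves) is to obtain permanent IDs up front. A sketch, not verified to address the root cause:
let item = Item(context: viewContext)
item.id = UUID()
// Ask for a permanent ID before the nested saves, so the object's identity doesn't change later.
try? viewContext.obtainPermanentIDs(for: [item])

viewContext.performAndWait {
    try! viewContext.save()
}
rootCtx.performAndWait {
    try! rootCtx.save()
}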
I have a CALayer with many sublayers. Those sublayers have multiple CABasicAnimations added to them.
Now, I'd like to render the whole layer subtree into a UIImage at a specific point in the animation timeline. How could I achieve that?
The only thing I found is the CALayer.render(in:) method, but the docs say it ignores any animations added to the render tree :<
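The only idea I have so far (unverified) is to pause the layer's timing and snapshot the presentation tree; a rough sketch:
// Sketch: freeze the layer tree at `time` (seconds in the layer's timeline) and snapshot it.
func snapshot(of layer: CALayer, at time: CFTimeInterval, size: CGSize) -> UIImage {
    layer.speed = 0          // pause every animation in the subtree
    layer.timeOffset = time  // jump to the requested point in time

    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { context in
        // render(in:) draws the model layer, so render the presentation copy,
        // which reflects the paused animation state.
        (layer.presentation() ?? layer).render(in: context.cgContext)
    }
}
I have no idea whether this is reliable for every kind of animation, hence the question.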
I'm trying to add an animated CALayer over my video and export it with AVAssetExportSession.
I'm animating the layer using a CABasicAnimation targeting a custom property of mine.
However, it seems that func draw(in ctx: CGContext) is never called for my custom layer during the export, and no animation is played.
I found out that animating standard properties like borderWidth works fine, but custom properties are ignored.
Can someone help with that?
func export(standard: Bool) {
    print("Exporting...")
    let composition = AVMutableComposition()
    //composition.naturalSize = CGSize(width: 300, height: 300)

    // Video track
    let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                                 preferredTrackID: CMPersistentTrackID(1))!
    let _videoAssetURL = Bundle.main.url(forResource: "emptyVideo", withExtension: "mov")!
    let _emptyVideoAsset = AVURLAsset(url: _videoAssetURL)
    let _emptyVideoTrack = _emptyVideoAsset.tracks(withMediaType: .video)[0]
    try! videoTrack.insertTimeRange(CMTimeRange(start: .zero, duration: _emptyVideoAsset.duration),
                                    of: _emptyVideoTrack, at: .zero)

    // Root Layer
    let rootLayer = CALayer()
    rootLayer.frame = CGRect(origin: .zero, size: composition.naturalSize)

    // Video layer
    let video = CALayer()
    video.frame = CGRect(origin: .zero, size: composition.naturalSize)
    rootLayer.addSublayer(video)

    // Animated layer
    let animLayer = CustomLayer()
    animLayer.progress = 0.0
    animLayer.frame = CGRect(origin: .zero, size: composition.naturalSize)
    rootLayer.addSublayer(animLayer)
    animLayer.borderColor = UIColor.green.cgColor
    animLayer.borderWidth = 0.0

    let key = standard ? "borderWidth" : "progress"
    let anim = CABasicAnimation(keyPath: key)
    anim.fromValue = 0.0
    anim.toValue = 50.0
    anim.duration = 6.0
    anim.beginTime = AVCoreAnimationBeginTimeAtZero
    anim.isRemovedOnCompletion = false
    animLayer.add(anim, forKey: nil)

    // Video Composition
    let videoComposition = AVMutableVideoComposition(propertiesOf: composition)
    videoComposition.renderSize = composition.naturalSize
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)

    // Animation tool
    let animTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: video,
                                                       in: rootLayer)
    videoComposition.animationTool = animTool

    // Video instruction > Basic
    let videoInstruction = AVMutableVideoCompositionInstruction()
    videoInstruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
    videoComposition.instructions = [videoInstruction]

    // Video-instruction > Layer instructions
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
    videoInstruction.layerInstructions = [layerInstruction]

    // Session
    let exportSession = AVAssetExportSession(asset: composition,
                                             presetName: AVAssetExportPresetHighestQuality)!
    exportSession.videoComposition = videoComposition
    exportSession.shouldOptimizeForNetworkUse = true
    var url = FileManager.default.temporaryDirectory.appendingPathComponent("\(arc4random()).mov")
    url = URL(fileURLWithPath: url.path)
    exportSession.outputURL = url
    exportSession.outputFileType = .mov
    _session = exportSession

    exportSession.exportAsynchronously {
        if let error = exportSession.error {
            print("Fail. \(error)")
        } else {
            print("Ok")
            print(url)
            DispatchQueue.main.async {
                let vc = AVPlayerViewController()
                vc.player = AVPlayer(url: url)
                self.present(vc, animated: true) {
                    vc.player?.play()
                }
            }
        }
    }
}
CustomLayer:
class CustomLayer: CALayer {
    @NSManaged var progress: CGFloat

    override init() {
        super.init()
    }

    override init(layer: Any) {
        let l = layer as! CustomLayer
        super.init(layer: layer)
        print("Copy. \(progress) \(l.progress)")
        self.progress = l.progress
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
    }

    override class func needsDisplay(forKey key: String) -> Bool {
        let needsDisplayKeys = ["progress"]
        if needsDisplayKeys.contains(key) {
            return true
        }
        return super.needsDisplay(forKey: key)
    }

    override func display() {
        print("Display. \(progress) | \(presentation()?.progress)")
        super.display()
    }

    override func draw(in ctx: CGContext) {
        // Save / restore ctx
        ctx.saveGState()
        defer { ctx.restoreGState() }

        print("Draw. \(progress)")
        ctx.move(to: .zero)
        ctx.addLine(to: CGPoint(x: bounds.size.width * progress,
                                y: bounds.size.height * progress))
        ctx.setStrokeColor(UIColor.red.cgColor)
        ctx.setLineWidth(40)
        ctx.strokePath()
    }
}
Here's a full sample project if someone is interested:
https://www.dropbox.com/s/evkm60wkeb2xrzh/BrokenAnimation.zip?dl=0
There are many folks on the web who have successfully loaded AppKit frameworks into Catalyst apps using plugin bundles.
However, I couldn't find any information on whether it's feasible the other way around.
I want to include an iOS framework built with Mac Catalyst in a native AppKit app. Is it possible? Any tips on how this could be achieved?
I have a SwiftUI view which updates my NSManagedObject subclass object after finishing dragging.
I've noticed that NSFetchedResultsController does not report updates at the end of the run loop during which the change occurred. It takes a few moments, or a save(), for the change to be noticed.
To debug it, I've swizzled the processPendingChanges method of NSManagedObjectContext and logged when it's called.
To my surprise, I've noticed it's not always called at the end of the run loop.
What am I missing here? Why is processPendingChanges() not called? Should I call it manually after every change?
For reference: I'm testing on macOS in an AppKit app. The NSManagedObjectContext is created by NSPersistentDocument.
Here's what my view's code looks like:
// `myItem` is my subclass of NSManagedObject
Item()
    .gesture(
        DragGesture(minimumDistance: 0, coordinateSpace: CoordinateSpace.compositionGrid)
            .onChanged({ _ in
                // ...
            })
            .onEnded({ (dragInfo) in
                // :-(
                // This change is not always noticed by NSFetchedResultsController
                //
                myItem.someProperty = dragInfo.location
            })
    )
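For now, the only workaround I can think of is forcing the coalescing myself right after the change (a sketch; I'm not sure this is the intended approach):
.onEnded({ (dragInfo) in
    myItem.someProperty = dragInfo.location
    // Force the context to process the change now, so NSFetchedResultsController sees it immediately.
    myItem.managedObjectContext?.processPendingChanges()
})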
SwiftUI's promise is to call a View's body only when needed, to avoid invalidating views whose state has not changed.
However, there are some cases when this promise is not kept and the View is updated even though its state has not changed.
Example:
struct InsideView: View {
    @Binding var value: Int
    // …
}
Looking at that view, we’d expect that its body is called when the value changes. However, this is not always true and it depends on how that binding is passed to the view.
When the view is created this way, everything works as expected and InsideView is not updated when value hasn’t changed.
@State private var value: Int = 0
InsideView(value: $value)
In the example below, InsideView will be incorrectly updated even when value has not changed. It will be updated whenever its container is updated too.
var customBinding: Binding<Int> {
    Binding<Int> { 100 } set: { _ in }
}
InsideView(value: customBinding)
Can anyone explain this and say whether it's expected? Is there any way to avoid this behaviour, which can ultimately lead to performance issues?
Here's a sample project if anyone wants to play with it:
import SwiftUI

struct ContentView: View {
    @State private var tab = 0
    @State private var count = 0
    @State private var someValue: Int = 100

    var customBinding: Binding<Int> {
        Binding<Int> { 100 } set: { _ in }
    }

    var body: some View {
        VStack {
            Picker("Tab", selection: $tab) {
                Text("@Binding from @State").tag(0)
                Text("Custom @Binding").tag(1)
            }
            .pickerStyle(SegmentedPickerStyle())

            VStack(spacing: 10) {
                if tab == 0 {
                    Text("When you tap a button, a view below should not be updated. That's a desired behaviour.")
                    InsideView(value: $someValue)
                } else if tab == 1 {
                    Text("When you tap a button, a view below will be updated (its background color will be set to random value to indicate this). This is unexpected because the view State has not changed.")
                    InsideView(value: customBinding)
                }
            }
            .frame(width: 250, height: 150)

            Button("Tap! Count: \(count)") {
                count += 1
            }
        }
        .frame(width: 300, height: 350)
        .padding()
    }
}

struct InsideView: View {
    @Binding var value: Int

    var body: some View {
        print("[⚠️] InsideView body.")
        return VStack {
            Text("I'm a child view. My body should be called only once.")
                .multilineTextAlignment(.center)
            Text("Value: \(value)")
        }
        .background(Color.random)
    }
}

extension ShapeStyle where Self == Color {
    static var random: Color {
        Color(
            red: .random(in: 0...1),
            green: .random(in: 0...1),
            blue: .random(in: 0...1)
        )
    }
}
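One mitigation I can think of (not verified to be the right approach) is making the child view Equatable and wrapping it in .equatable(), so SwiftUI compares values before calling body:
struct EquatableInsideView: View, Equatable {
    @Binding var value: Int

    // Compare only the wrapped value, not the binding itself.
    static func == (lhs: Self, rhs: Self) -> Bool {
        lhs.value == rhs.value
    }

    var body: some View {
        Text("Value: \(value)")
    }
}

// Usage: InsideView(value: customBinding) becomes
// EquatableInsideView(value: customBinding).equatable()
I'd still like to know whether the extra body calls are expected in the first place.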
I'm trying to change the device of the inputNode of AVAudioEngine.
To do so, I'm calling setDeviceID on its auAudioUnit. Although this call doesn't fail, something goes wrong with the output busses.
When I ask for their format, it shows 0 Hz and 0 channels, and the app crashes when I try to connect the node to the mainMixerNode.
Can anyone explain what's wrong with this code?
avEngine = AVAudioEngine()
print(avEngine.inputNode.auAudioUnit.inputBusses[0].format)
// <AVAudioFormat 0x1404b06e0: 2 ch, 44100 Hz, Float32, non-inter>
print(avEngine.inputNode.auAudioUnit.outputBusses[0].format)
// <AVAudioFormat 0x1404b0a60: 2 ch, 44100 Hz, Float32, inter>
// Now, let's change a device from headphone's mic to built-in mic.
try! avEngine.inputNode.auAudioUnit.setDeviceID(inputDevice.deviceID)
print(avEngine.inputNode.auAudioUnit.inputBusses[0].format)
// <AVAudioFormat 0x1404add50: 2 ch, 44100 Hz, Float32, non-inter>
print(avEngine.inputNode.auAudioUnit.outputBusses[0].format)
// <AVAudioFormat 0x1404adff0: 0 ch, 0 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved>
// !!!
// Interestingly, 'inputNode' shows a different format than `auAudioUnit`
print(avEngine.inputNode.inputFormat(forBus: 0))
// <AVAudioFormat 0x1404af480: 1 ch, 44100 Hz, Float32>
print(avEngine.inputNode.outputFormat(forBus: 0))
// <AVAudioFormat 0x1404ade30: 1 ch, 44100 Hz, Float32>
Edit:
Further debugging reveals another puzzling thing.
avEngine.inputNode.auAudioUnit == avEngine.outputNode.auAudioUnit // this is true ?!
inputNode and outputNode share the same AUAudioUnit, and its deviceID is set to the speakers by default. It's so confusing to me... why would the inputNode's device be a speaker?
CATiledLayer doesn't work as expected on macOS.
It's choosing to render too few tiles.
In my example, my view is 300px wide. I set the layer's transform scale to 4x.
In that case, CATiledLayer renders only two tiles: 50px wide at 2x scale, and stretches them.
Interestingly, when I run similar code on iOS, it works correctly - it renders 3 tiles, 25px wide at 4x scale.
Is it a bug or am I missing something here?
My code below:
class WaveformView: NSView {
		var scale: CGFloat = 1.0 {
				didSet {
						layer?.transform = CATransform3DScale(CATransform3DIdentity, scale, 1.0, 1.0)
						layer?.setNeedsDisplay(bounds)
				}
		}
		private var tiledLayer: CATiledLayer { layer as! CATiledLayer }
		override init(frame frameRect: NSRect) {
				super.init(frame: frameRect)
				wantsLayer = true
				tiledLayer.levelsOfDetail = 8
				tiledLayer.levelsOfDetailBias = 8
				tiledLayer.tileSize = CGSize(width: 100.0, height: .infinity)
				tiledLayer.contentsScale = 1.0
		}
		required init?(coder: NSCoder) {
				fatalError("init(coder:) has not been implemented")
		}
		override func draw(_ dirtyRect: NSRect) {
				let nsContext = NSGraphicsContext.current!
				let cgContext = nsContext.cgContext
				cgContext.saveGState()
				let scaleX: CGFloat = cgContext.ctm.a
				NSColor.red.setStroke()
				NSBezierPath(rect: dirtyRect)
						.stroke()
				let fontSize: CGFloat = 12.0
				let attr = [
						NSAttributedString.Key.font: NSFont.systemFont(ofSize: fontSize)
					]
				let str = "S: \(scaleX)\n\(dirtyRect.width)" as NSString
				str.draw(at: NSPoint(x: dirtyRect.minX, y: dirtyRect.midY), withAttributes: attr)
				nsContext.cgContext.restoreGState()
		}
		override func makeBackingLayer() -> CALayer {
				return CATiledLayer()
		}
}
The behavior of reading frames through GeometryReader is confusing on macOS.
Apparently, when you read a frame in the local or a named coordinate space, the returned frame is in the "SwiftUI coordinate system", where the (0,0) point is in the upper-left corner.
However, when you read a frame in the global space, the returned frame is in the native macOS coordinate system, where (0,0) is in the bottom-left corner.
Is this behavior documented anywhere, or is it a bug?
I would expect SwiftUI to always return frames in the same way on all platforms.
I'm trying to figure out if I'm missing something here.
My sample code:
struct ContentView: View {
var body: some View {
		ZStack(alignment: .bottom) {
				Color.blue
						.frame(width: 100, height: 150)
				Color.red
						.frame(width: 20, height: 60)
						.background(
								GeometryReader { geo -> Color in
										let g = geo.frame(in: .global)
										let s = geo.frame(in: .named("stack"))
										print("Global: \(g) | Stack: \(s)")
										return Color.purple
								}
						)
				.padding(.bottom, 5)
		}
		.padding(40)
		.coordinateSpace(name: "stack")
		.background(Color.pink)
		}
}
Output:
Global: (80.0, 45.0, 20.0, 60.0) | Stack: (80.0, 125.0, 20.0, 60.0)
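The only workaround I can think of is flipping the Y coordinate of the global frame myself, which feels fragile (a sketch; containerHeight stands for whatever the height of the global space is, e.g. the window's content height):
// Sketch: convert a bottom-left-origin (AppKit-style) rect into a top-left-origin rect.
func flippedToTopLeft(_ rect: CGRect, containerHeight: CGFloat) -> CGRect {
    CGRect(x: rect.minX,
           y: containerHeight - rect.maxY,
           width: rect.width,
           height: rect.height)
}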
Hi folks,
is there an easy way to use a Core Data-backed document with a SwiftUI DocumentGroup?
My app currently uses NSPersistentDocument / UIManagedDocument, and I'm struggling to migrate it to the new API.
Any tips appreciated :)
I'm using the Speech framework to transcribe a very long audio file (1h+) and I want to present partial results along the way.
What I've noticed is that SFSpeechRecognizer is processing audio in batches.
The delivered SFTranscriptionSegments have their timestamp set to 0.0 most of the time, but the timestamps seem to be set to meaningful values at the end of a "batch". Once a batch is done, the next reported partial results no longer contain those segments; delivery starts over with partial results from the next batch.
Note that everything I'm describing here applies while SFSpeechRecognitionResult has isFinal set to false.
I found zero mentions of this in the docs.
What's problematic for me is that segment timestamps in each batch are relative to the batch itself, not to the entire audio file. Because of that, it's impossible to determine a segment's absolute timestamp, since we don't know the absolute timestamp of the batch.
Is there any Apple engineer here that could shed some light on that behavior? Is there any way to get a meaningful segment timestamp from partial results callbacks?
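For reference, this is roughly how I read the partial results (simplified; recognizer is an SFSpeechRecognizer and audioFileURL points to the 1h+ file):
let request = SFSpeechURLRecognitionRequest(url: audioFileURL)
request.shouldReportPartialResults = true

recognizer.recognitionTask(with: request) { result, error in
    guard let result = result, !result.isFinal else { return }
    for segment in result.bestTranscription.segments {
        // timestamp is 0.0 for most segments, and when it isn't,
        // it appears to be relative to the current "batch", not to the whole file.
        print(segment.substring, segment.timestamp, segment.duration)
    }
}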
I'm implementing a 'ruler' view similar to what you'd find in Sketch, Photoshop, etc. Basically, a ruler at the top that shows the current view size and updates as you zoom in/out. In my example, it draws about ~250 rectangles.
I wanted to build it with SwiftUI, but I'm running into performance issues.
When I update the view's scale with a slider, the FPS drops noticeably.
(Testing on macOS Catalina 10.15.7, Xcode 12.0.1, MacBook Air.)
I wonder if I've hit the limits of SwiftUI and should switch to Metal, or am I missing an optimization here?
Note that I did add the .drawingGroup() modifier, but it doesn't seem to help in any way.
Here's a sample app to download: Github - https://github.com/Moriquendi/swiftui-performance-tests
Here's the code for the "ruler" view:
struct Timeline: View {
		let scale: Double
		private let minLongTickWidth = 30.0
		let LARGE_TICKS_COUNT = 50
		var SMALL_TICKS_COUNT = 5
		var body: some View {
				HStack(alignment: .bottom, spacing: 0) {
						ForEach(ticks, id: \.self) { time in
								HStack(alignment: .bottom, spacing: 0) {
										LongTick(text: "X")
												.frame(width: smallTickWidth, alignment: .leading)
										ForEach(1..<SMALL_TICKS_COUNT, id: \.self) { time in
												SmallTick()
														.frame(width: smallTickWidth, alignment: .leading)
										}
								}
								.frame(width: longTickWidth, alignment: .leading)
						}
				}
				.background(Color(NSColor.black))
				.drawingGroup()
		}
		var oneLongTickDurationInMs: Double {
				let pointsForOneMilisecond = scale / 1000
				var msJump = 1
				var oneLongTickDurationInMs = 1.0
				while true {
						let longTickIntervalWidth = oneLongTickDurationInMs * pointsForOneMilisecond
						if longTickIntervalWidth >= minLongTickWidth {
								break
						}
						oneLongTickDurationInMs += Double(msJump)
						switch oneLongTickDurationInMs {
						case 0..<10: msJump = 1
						case 10..<100: msJump = 10
						case 100..<1000: msJump = 100
						case 1000..<10000: msJump = 1000
						default: msJump = 10000
						}
				}
				return oneLongTickDurationInMs
		}
		var longTickWidth: CGFloat {
				CGFloat(oneLongTickDurationInMs / 1000 * scale)
		}
		var ticks: [Double] {
				let oneLongTickDurationInMs = self.oneLongTickDurationInMs
				let tickTimesInMs = (0...LARGE_TICKS_COUNT).map { Double($0) * oneLongTickDurationInMs }
				return tickTimesInMs
		}
		var smallTickWidth: CGFloat {
				longTickWidth / CGFloat(SMALL_TICKS_COUNT)
		}
}
struct SmallTick: View {
		var body: some View {
				Rectangle()
						.fill(Color.blue)
						.frame(width: 1)
						.frame(maxHeight: 8)
		}
}
struct LongTick: View {
		let text: String
		var body: some View {
				Rectangle()
						.fill(Color.red)
						.frame(width: 1)
						.frame(maxHeight: .infinity)
						.overlay(
								Text(text)
										.font(.system(size: 12))
										.fixedSize()
										.offset(x: 3, y: 0)
								,
								alignment: .topLeading
						)
		}
}
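One alternative I'm considering is collapsing all the ticks into a single Path, so SwiftUI only manages one view instead of ~250 (a rough sketch; the tick spacing here is illustrative and ignores the long/short tick distinction):
struct TimelinePath: View {
		let scale: Double

		var body: some View {
				GeometryReader { geo in
						Path { path in
								// Draw every tick as one path instead of one view per tick.
								let tickSpacing = max(scale / 1000, 4)
								var x: CGFloat = 0
								while x < geo.size.width {
										path.move(to: CGPoint(x: x, y: geo.size.height))
										path.addLine(to: CGPoint(x: x, y: geo.size.height - 8))
										x += CGFloat(tickSpacing)
								}
						}
						.stroke(Color.blue, lineWidth: 1)
				}
		}
}
I don't know yet whether this keeps up at 60 FPS either, hence the question about SwiftUI's limits.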
It seems to me that NSItemProvider doesn't work well in the latest Xcode 12 (macOS 10.15.6).
No matter what file types I try to drop, I can never load them.
The error I'm getting:
Error Domain=NSItemProviderErrorDomain Code=-1000 "Cannot load representation of type public.audio" UserInfo={NSLocalizedDescription=Cannot load representation of type public.audio}
And here's my code:
// My SwiftUI View
Color.red
		.onDrop(of: ["public.audio"], delegate: self)
Drop delegate:
func performDrop(info: DropInfo) -> Bool {
    let provider = info.itemProviders(for: ["public.audio"])[0]
    provider.loadFileRepresentation(forTypeIdentifier: "public.audio") { (url, error) in
        guard let url = url, error == nil else {
            print(error!.localizedDescription)
            return
        }
        ...
    }
    return true
}
I've tried this code with different type identifiers: audio, image, etc. All failed.
Does anyone know what the issue is?
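An alternative I might try is the lower-level loadItem API with the file-URL type (a sketch; it requires registering "public.file-url" in onDrop, and I haven't verified that it avoids the error):
let provider = info.itemProviders(for: ["public.file-url"])[0]
provider.loadItem(forTypeIdentifier: "public.file-url", options: nil) { item, error in
    guard error == nil,
          let data = item as? Data,
          let url = URL(dataRepresentation: data, relativeTo: nil) else {
        print(error?.localizedDescription ?? "Could not decode the dropped file URL")
        return
    }
    print("Dropped file:", url)
}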