Posts

Post not yet marked as solved · 1 Reply · 2.2k Views
I've noticed that enabling voice processing on AVAudioInputNode changes the node's format - most noticeably the channel count.

let inputNode = avEngine.inputNode

print("Format #1: \(inputNode.outputFormat(forBus: 0))")
// Format #1: <AVAudioFormat 0x600002bb4be0:  1 ch,  44100 Hz, Float32>

try! inputNode.setVoiceProcessingEnabled(true)

print("Format #2: \(inputNode.outputFormat(forBus: 0))")
// Format #2: <AVAudioFormat 0x600002b18f50:  3 ch,  44100 Hz, Float32, deinterleaved>

Is this expected? How should I interpret these channels? My input device is an aggregate device where each channel comes from a different microphone. I record each channel to a separate file, but once voice processing changes the channel layout, I can no longer rely on it.
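For context, here's a reduced sketch of the per-channel tap I rely on. The tap format is re-queried after toggling voice processing, and writeChannel(_:index:) in the comment stands in for my real per-channel file writer (a hypothetical name):

import AVFoundation

let engine = AVAudioEngine()
let inputNode = engine.inputNode

do {
    try inputNode.setVoiceProcessingEnabled(true)
} catch {
    print("Could not enable voice processing:", error)
}

// Re-query the format *after* toggling voice processing, since the bus format may change.
let tapFormat = inputNode.outputFormat(forBus: 0)

inputNode.installTap(onBus: 0, bufferSize: 4096, format: tapFormat) { buffer, _ in
    // buffer.format.channelCount reflects whatever layout voice processing produced.
    guard let channels = buffer.floatChannelData else { return }
    for channel in 0..<Int(buffer.format.channelCount) {
        let samples = UnsafeBufferPointer(start: channels[channel],
                                          count: Int(buffer.frameLength))
        // writeChannel(samples, index: channel) // hypothetical per-channel writer
        _ = samples
    }
}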
Posted by smialek.
Post not yet marked as solved · 0 Replies · 1k Views
SwiftUI Previews are broken for me when one of the imported Swift Packages has a conditional dependency on another platform. Steps to reproduce:

1. Create an Xcode project with 2 targets: one for macOS, another for iOS.
2. Add a Swift Package that has a conditional dependency - e.g. it depends on another package, but only on iOS. Example:

targets: [
    .target(
        name: "Components",
        dependencies: [
            .product(name: "FloatingPanel", package: "FloatingPanel", condition: .when(platforms: [.iOS])),
        ]
    ),
]

3. Try running a SwiftUI preview on macOS. It won't work.

The error I get is "no such module UIKit". It looks like Xcode is trying to build the FloatingPanel dependency even though its condition specifies the iOS platform. Is there any way to fix this?
Posted by smialek.
Post not yet marked as solved · 0 Replies · 2.4k Views
The behavior of reading frames through GeometryReader is confusing on macOS. Apparently, when you read a frame in the local or a named coordinate space, the returned frame is in the "SwiftUI coordinate system", where the (0, 0) point is in the upper-left corner. However, when you read a frame in the global space, the returned frame is in the "native macOS system", where (0, 0) is in the bottom-left corner. Is this behavior documented anywhere, or is it a bug? I would expect SwiftUI to always return frames the same way on all platforms. I'm trying to figure out if I'm missing something here. My sample code:

struct ContentView: View {
    var body: some View {
        ZStack(alignment: .bottom) {
            Color.blue
                .frame(width: 100, height: 150)
            Color.red
                .frame(width: 20, height: 60)
                .background(
                    GeometryReader { geo -> Color in
                        let g = geo.frame(in: .global)
                        let s = geo.frame(in: .named("stack"))
                        print("Global: \(g) | Stack: \(s)")
                        return Color.purple
                    }
                )
                .padding(.bottom, 5)
        }
        .padding(40)
        .coordinateSpace(name: "stack")
        .background(Color.pink)
    }
}

Output:

Global: (80.0, 45.0, 20.0, 60.0) | Stack: (80.0, 125.0, 20.0, 60.0)
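The workaround I'm leaning towards is normalizing the global frame myself. This is only a sketch, and it assumes you can supply the height of the flipped container (the containerHeight parameter is my own addition, not something SwiftUI provides):

import SwiftUI

// Normalize an AppKit-style (bottom-left origin) global frame
// to the top-left-origin convention used by local/named spaces.
// `containerHeight` is assumed to be the height of the window's content area.
func flippedToTopLeft(_ frame: CGRect, containerHeight: CGFloat) -> CGRect {
    CGRect(x: frame.origin.x,
           y: containerHeight - frame.maxY,
           width: frame.width,
           height: frame.height)
}

With the output above, if the window content were exactly this 180x230-point view, flipping (80.0, 45.0, 20.0, 60.0) against a height of 230 gives y = 230 - (45 + 60) = 125, which matches the named-space value.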
Posted by smialek.
Post not yet marked as solved · 0 Replies · 1.6k Views
I'm trying to change the device of the inputNode of AVAudioEngine. To do so, I'm calling setDeviceID on its auAudioUnit. Although this call doesn't fail, something goes wrong with the output busses: when I ask for their format, it shows 0 Hz and 0 channels, and the app crashes when I try to connect the node to the mainMixerNode. Can anyone explain what's wrong with this code?

avEngine = AVAudioEngine()

print(avEngine.inputNode.auAudioUnit.inputBusses[0].format)
// <AVAudioFormat 0x1404b06e0: 2 ch, 44100 Hz, Float32, non-inter>
print(avEngine.inputNode.auAudioUnit.outputBusses[0].format)
// <AVAudioFormat 0x1404b0a60: 2 ch, 44100 Hz, Float32, inter>

// Now, let's change the device from the headphones' mic to the built-in mic.
try! avEngine.inputNode.auAudioUnit.setDeviceID(inputDevice.deviceID)

print(avEngine.inputNode.auAudioUnit.inputBusses[0].format)
// <AVAudioFormat 0x1404add50: 2 ch, 44100 Hz, Float32, non-inter>
print(avEngine.inputNode.auAudioUnit.outputBusses[0].format)
// <AVAudioFormat 0x1404adff0: 0 ch, 0 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved>  // !!!

// Interestingly, 'inputNode' shows a different format than 'auAudioUnit'.
print(avEngine.inputNode.inputFormat(forBus: 0))
// <AVAudioFormat 0x1404af480: 1 ch, 44100 Hz, Float32>
print(avEngine.inputNode.outputFormat(forBus: 0))
// <AVAudioFormat 0x1404ade30: 1 ch, 44100 Hz, Float32>

Edit: Further debugging reveals another puzzling thing:

avEngine.inputNode.auAudioUnit == avEngine.outputNode.auAudioUnit // this is true ?!

inputNode and outputNode share the same AUAudioUnit, and its deviceID is set to the speakers by default. It's so confusing to me... why would the inputNode's device be a speaker?
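For completeness, the other route I know of is to set the capture device on the Core Audio unit that backs inputNode instead of going through auAudioUnit. This is only a sketch (newDeviceID is assumed to be a valid AudioDeviceID for the built-in mic), and I don't know whether it avoids the 0 Hz format problem:

import AVFoundation
import AudioToolbox
import CoreAudio

// Sketch: point the I/O unit that backs inputNode at a specific capture device.
func setInputDevice(_ newDeviceID: AudioDeviceID, on engine: AVAudioEngine) -> OSStatus {
    guard let audioUnit = engine.inputNode.audioUnit else { return kAudioUnitErr_Uninitialized }
    var deviceID = newDeviceID
    return AudioUnitSetProperty(audioUnit,
                                kAudioOutputUnitProperty_CurrentDevice,
                                kAudioUnitScope_Global,
                                0,
                                &deviceID,
                                UInt32(MemoryLayout<AudioDeviceID>.size))
}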
Posted by smialek.
Post not yet marked as solved · 2 Replies · 1.6k Views
SwiftUI's promise is to call a View's body only when needed, in order to avoid invalidating views whose state has not changed. However, there are cases where this promise is not kept and a view is updated even though its state has not changed. Example:

struct InsideView: View {
    @Binding var value: Int
    // ...
}

Looking at that view, we'd expect its body to be called only when value changes. However, this is not always true - it depends on how the binding is passed to the view.

When the view is created this way, everything works as expected and InsideView is not updated when value hasn't changed:

@State private var value: Int = 0

InsideView(value: $value)

In the example below, InsideView will be incorrectly updated even when value has not changed - it is updated whenever its container is updated:

var customBinding: Binding<Int> {
    Binding<Int> { 100 } set: { _ in }
}

InsideView(value: customBinding)

Can anyone explain this and say whether it's expected? Is there any way to avoid this behaviour, which can ultimately lead to performance issues?

Here's a sample project if anyone wants to play with it:

import SwiftUI

struct ContentView: View {
    @State private var tab = 0
    @State private var count = 0
    @State private var someValue: Int = 100

    var customBinding: Binding<Int> {
        Binding<Int> { 100 } set: { _ in }
    }

    var body: some View {
        VStack {
            Picker("Tab", selection: $tab) {
                Text("@Binding from @State").tag(0)
                Text("Custom @Binding").tag(1)
            }
            .pickerStyle(SegmentedPickerStyle())

            VStack(spacing: 10) {
                if tab == 0 {
                    Text("When you tap a button, a view below should not be updated. That's a desired behaviour.")
                    InsideView(value: $someValue)
                } else if tab == 1 {
                    Text("When you tap a button, a view below will be updated (its background color will be set to random value to indicate this). This is unexpected because the view State has not changed.")
                    InsideView(value: customBinding)
                }
            }
            .frame(width: 250, height: 150)

            Button("Tap! Count: \(count)") {
                count += 1
            }
        }
        .frame(width: 300, height: 350)
        .padding()
    }
}

struct InsideView: View {
    @Binding var value: Int

    var body: some View {
        print("[⚠️] InsideView body.")
        return VStack {
            Text("I'm a child view. My body should be called only once.")
                .multilineTextAlignment(.center)
            Text("Value: \(value)")
        }
        .background(Color.random)
    }
}

extension ShapeStyle where Self == Color {
    static var random: Color {
        Color(
            red: .random(in: 0...1),
            green: .random(in: 0...1),
            blue: .random(in: 0...1)
        )
    }
}
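One mitigation that comes to mind (it doesn't explain the behaviour, and I haven't profiled it) is opting into explicit equality checks so SwiftUI can skip body when the wrapped value is unchanged - a sketch with a modified InsideView:

struct EquatableInsideView: View, Equatable {
    @Binding var value: Int

    // Tell SwiftUI explicitly what "unchanged" means for this view.
    static func == (lhs: EquatableInsideView, rhs: EquatableInsideView) -> Bool {
        lhs.value == rhs.value
    }

    var body: some View {
        Text("Value: \(value)")
    }
}

// Usage: opt into the Equatable comparison at the call site.
// EquatableInsideView(value: customBinding).equatable()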
Posted by smialek.
Post not yet marked as solved · 0 Replies · 782 Views
I'm noticing a weird issue where NSFetchedResultsController returns the same object twice in its fetchedObjects.

func controllerDidChangeContent(_ controller: NSFetchedResultsController<NSFetchRequestResult>) {
    items = (controller.fetchedObjects ?? []) as! [Item]
    print(items)
    // Prints:
    // - 0 : <Item: 0x600000d40e10> (entity: Item; id: 0x600002e6b2e0 <x-coredata:///Item/tE1646DB0-C3C2-4AE1-BC32-6B10934F292C2>; ...
    // - 1 : <Item: 0x600000d40e10> (entity: Item; id: 0x600002e6b2e0 <x-coredata:///Item/tE1646DB0-C3C2-4AE1-BC32-6B10934F292C2>; ...

    print(items[0] == items[1]) // true !!!
}

The issue occurs after adding a new item and saving both contexts.

func weirdTest() {
    print("1. Adding.")
    let item = Item(context: viewContext)
    item.id = UUID()
    viewContext.processPendingChanges()

    print("2. Saving.")
    viewContext.performAndWait {
        try! viewContext.save()
    }
    rootCtx.performAndWait {
        try! rootCtx.save()
    }
}

Here's my Core Data stack:

View Context (main thread) --> Background Context --> Persistent Store Coordinator

And here's how I configure my contexts:

lazy var rootCtx: NSManagedObjectContext = {
    let rootCtx = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
    rootCtx.persistentStoreCoordinator = coordinator
    rootCtx.automaticallyMergesChangesFromParent = true
    return rootCtx
}()

lazy var viewContext: NSManagedObjectContext = {
    let ctx = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
    ctx.parent = rootCtx
    ctx.automaticallyMergesChangesFromParent = true
    return ctx
}()

Does anyone have any idea what's going on here? :<
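The only band-aid I can think of is deduplicating by objectID before using the array - a sketch against the delegate method above, not a fix for whatever is actually going wrong in the stack:

func controllerDidChangeContent(_ controller: NSFetchedResultsController<NSFetchRequestResult>) {
    let fetched = (controller.fetchedObjects ?? []) as! [Item]

    // Work around the duplicate entries by keeping the first occurrence of each objectID.
    var seen = Set<NSManagedObjectID>()
    items = fetched.filter { seen.insert($0.objectID).inserted }
}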
Posted by smialek.
Post not yet marked as solved · 0 Replies · 911 Views
I have a CALayer with many sublayers, and those sublayers have multiple CABasicAnimations added to them. I'd like to render the whole layer subtree into a UIImage at a specific point in the animation timeline. How could I achieve that? The only thing I found is the CALayer.render(in:) method, but the docs say that it ignores Core Animation animations :<
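The direction I've been considering (just a sketch, not something I've verified, and snapshot(of:at:) is my own helper name) is to freeze the subtree's clock at the requested time and render whatever the presentation tree reports. This relies on the layer being attached to a live layer tree so that presentation() is available, and it may well hit the same render(in:) limitation the docs mention:

import UIKit

// Sketch: park the layer's local time at `time` and draw the frozen presentation tree.
func snapshot(of layer: CALayer, at time: CFTimeInterval) -> UIImage {
    layer.speed = 0            // pause the subtree's clock
    layer.timeOffset = time    // park it at the requested animation time

    let renderer = UIGraphicsImageRenderer(bounds: layer.bounds)
    return renderer.image { context in
        (layer.presentation() ?? layer).render(in: context.cgContext)
    }
}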
Posted by smialek.
Post not yet marked as solved · 0 Replies · 1.1k Views
I'm trying to add an animated CALayer over my video and export it with AVAssetExportSession. I'm animating the layer with a CABasicAnimation keyed to a custom property. However, it seems that func draw(in ctx: CGContext) is never called for my custom layer during the export, and no animation is played. I found out that animating standard properties like borderWidth works fine, but custom properties are ignored. Can someone help with that?

func export(standard: Bool) {
    print("Exporting...")
    let composition = AVMutableComposition()
    //composition.naturalSize = CGSize(width: 300, height: 300)

    // Video track
    let videoTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: CMPersistentTrackID(1))!
    let _videoAssetURL = Bundle.main.url(forResource: "emptyVideo", withExtension: "mov")!
    let _emptyVideoAsset = AVURLAsset(url: _videoAssetURL)
    let _emptyVideoTrack = _emptyVideoAsset.tracks(withMediaType: .video)[0]
    try! videoTrack.insertTimeRange(CMTimeRange(start: .zero, duration: _emptyVideoAsset.duration),
                                    of: _emptyVideoTrack,
                                    at: .zero)

    // Root layer
    let rootLayer = CALayer()
    rootLayer.frame = CGRect(origin: .zero, size: composition.naturalSize)

    // Video layer
    let video = CALayer()
    video.frame = CGRect(origin: .zero, size: composition.naturalSize)
    rootLayer.addSublayer(video)

    // Animated layer
    let animLayer = CustomLayer()
    animLayer.progress = 0.0
    animLayer.frame = CGRect(origin: .zero, size: composition.naturalSize)
    rootLayer.addSublayer(animLayer)
    animLayer.borderColor = UIColor.green.cgColor
    animLayer.borderWidth = 0.0

    let key = standard ? "borderWidth" : "progress"
    let anim = CABasicAnimation(keyPath: key)
    anim.fromValue = 0.0
    anim.toValue = 50.0
    anim.duration = 6.0
    anim.beginTime = AVCoreAnimationBeginTimeAtZero
    anim.isRemovedOnCompletion = false
    animLayer.add(anim, forKey: nil)

    // Video composition
    let videoComposition = AVMutableVideoComposition(propertiesOf: composition)
    videoComposition.renderSize = composition.naturalSize
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)

    // Animation tool
    let animTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: video, in: rootLayer)
    videoComposition.animationTool = animTool

    // Video instruction > Basic
    let videoInstruction = AVMutableVideoCompositionInstruction()
    videoInstruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
    videoComposition.instructions = [videoInstruction]

    // Video instruction > Layer instructions
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
    videoInstruction.layerInstructions = [layerInstruction]

    // Session
    let exportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)!
    exportSession.videoComposition = videoComposition
    exportSession.shouldOptimizeForNetworkUse = true
    var url = FileManager.default.temporaryDirectory.appendingPathComponent("\(arc4random()).mov")
    url = URL(fileURLWithPath: url.path)
    exportSession.outputURL = url
    exportSession.outputFileType = .mov
    _session = exportSession

    exportSession.exportAsynchronously {
        if let error = exportSession.error {
            print("Fail. \(error)")
        } else {
            print("Ok")
            print(url)
            DispatchQueue.main.async {
                let vc = AVPlayerViewController()
                vc.player = AVPlayer(url: url)
                self.present(vc, animated: true) {
                    vc.player?.play()
                }
            }
        }
    }
}

CustomLayer:

class CustomLayer: CALayer {
    @NSManaged var progress: CGFloat

    override init() {
        super.init()
    }

    override init(layer: Any) {
        let l = layer as! CustomLayer
        super.init(layer: layer)
        print("Copy. \(progress) \(l.progress)")
        self.progress = l.progress
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
    }

    override class func needsDisplay(forKey key: String) -> Bool {
        let needsDisplayKeys = ["progress"]
        if needsDisplayKeys.contains(key) {
            return true
        }
        return super.needsDisplay(forKey: key)
    }

    override func display() {
        print("Display. \(progress) | \(presentation()?.progress)")
        super.display()
    }

    override func draw(in ctx: CGContext) {
        // Save / restore ctx
        ctx.saveGState()
        defer { ctx.restoreGState() }

        print("Draw. \(progress)")
        ctx.move(to: .zero)
        ctx.addLine(to: CGPoint(x: bounds.size.width * progress, y: bounds.size.height * progress))
        ctx.setStrokeColor(UIColor.red.cgColor)
        ctx.setLineWidth(40)
        ctx.strokePath()
    }
}

Here's a full sample project if someone is interested: https://www.dropbox.com/s/evkm60wkeb2xrzh/BrokenAnimation.zip?dl=0
Posted by smialek.
Post not yet marked as solved · 1 Reply · 1.1k Views
There are many folks on the web who have successfully loaded AppKit frameworks into Catalyst apps using plugin bundles. However, I couldn't find any information on whether the opposite direction is feasible: I want to include an iOS framework built with Mac Catalyst in a native AppKit app. Is this possible? Any tips on how it could be achieved?
Posted by smialek.
Post not yet marked as solved · 1 Reply · 962 Views
I have a SwiftUI view which updates my NSManagedObject subclass after a drag finishes. I've noticed that NSFetchedResultsController does not report the update at the end of the run loop during which the change occurred - it takes a few moments, or a save(), for the change to be noticed. To debug it, I swizzled the processPendingChanges method of NSManagedObjectContext and logged when it's called. To my surprise, it's not always called at the end of the run loop. What am I missing here? Why is processPendingChanges() not called? Should I call it manually myself after every change?

For reference: I'm testing on macOS in an AppKit app, and the NSManagedObjectContext is created by NSPersistentDocument. Here's what my view's code looks like:

// `myItem` is my subclass of NSManagedObject
Item()
    .gesture(
        DragGesture(minimumDistance: 0, coordinateSpace: CoordinateSpace.compositionGrid)
            .onChanged({ _ in
                // ...
            })
            .onEnded({ (dragInfo) in
                // :-(
                // This change is not always noticed by NSFetchedResultsController.
                myItem.someProperty = dragInfo.location
            })
    )
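The workaround I'm tempted to use is flushing the change explicitly right after the mutation instead of waiting for the run loop - a sketch against the gesture above (same myItem and someProperty as in the snippet):

.onEnded { dragInfo in
    myItem.someProperty = dragInfo.location

    // Explicitly flush the change so NSFetchedResultsController
    // hears about it now instead of "a few moments" later.
    myItem.managedObjectContext?.processPendingChanges()
}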
Posted by smialek.
Post not yet marked as solved · 2 Replies · 1.4k Views
CATiledLayer doesn't work as expected on macOS: it chooses to render too few tiles. In my example, my view is 300 px wide and I set the layer's transform scale to 4x. In that case, CATiledLayer renders only two tiles, 50 px wide at 2x scale, and stretches them. Interestingly, when I run similar code on iOS, it works correctly - it renders 3 tiles, 25 px wide at 4x scale. Is this a bug, or am I missing something here? My code below:

class WaveformView: NSView {
    var scale: CGFloat = 1.0 {
        didSet {
            layer?.transform = CATransform3DScale(CATransform3DIdentity, scale, 1.0, 1.0)
            layer?.setNeedsDisplay(bounds)
        }
    }

    private var tiledLayer: CATiledLayer { layer as! CATiledLayer }

    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        wantsLayer = true
        tiledLayer.levelsOfDetail = 8
        tiledLayer.levelsOfDetailBias = 8
        tiledLayer.tileSize = CGSize(width: 100.0, height: .infinity)
        tiledLayer.contentsScale = 1.0
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func draw(_ dirtyRect: NSRect) {
        let nsContext = NSGraphicsContext.current!
        let cgContext = nsContext.cgContext
        cgContext.saveGState()

        let scaleX: CGFloat = cgContext.ctm.a

        NSColor.red.setStroke()
        NSBezierPath(rect: dirtyRect)
            .stroke()

        let fontSize: CGFloat = 12.0
        let attr = [
            NSAttributedString.Key.font: NSFont.systemFont(ofSize: fontSize)
        ]
        let str = "S: \(scaleX)\n\(dirtyRect.width)" as NSString
        str.draw(at: NSPoint(x: dirtyRect.minX, y: dirtyRect.midY), withAttributes: attr)

        nsContext.cgContext.restoreGState()
    }

    override func makeBackingLayer() -> CALayer {
        return CATiledLayer()
    }
}
Posted by smialek.
Post not yet marked as solved · 1 Reply · 987 Views
I'm using the Speech framework to transcribe a very long audio file (1h+) and I want to present partial results along the way. What I've noticed is that SFSpeechRecognizer processes audio in batches. The delivered SFTranscriptionSegments have their timestamp set to 0.0 most of the time, but the timestamps seem to be set to meaningful values at the end of a "batch". Once that happens, the next reported partial results no longer contain those segments; partial results start being delivered for the next batch. Note that everything I'm describing here is for results where SFSpeechRecognitionResult has isFinal set to false. I found zero mentions of this in the docs.

What's problematic for me is that the segment timestamps in each batch are relative to the batch itself, not the entire audio file. Because of that, it's impossible to determine a segment's absolute timestamp, since we don't know the absolute timestamp of the batch. Is there an Apple engineer here who could shed some light on this behavior? Is there any way to get meaningful segment timestamps from partial-results callbacks?
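For reference, this is the kind of bookkeeping I'm experimenting with to recover absolute times. It's purely a heuristic built on the assumed batch behaviour described above, so treat it as a sketch:

import Speech

// Remember how much audio the previous "batches" covered and offset the
// segments of the current partial result by that amount. Assumes the non-zero
// timestamps that appear at the end of a batch are relative to that batch.
final class SegmentTimeTracker {
    private var batchOffset: TimeInterval = 0

    func absoluteSegments(from result: SFSpeechRecognitionResult) -> [(text: String, start: TimeInterval)] {
        let segments = result.bestTranscription.segments
        let mapped = segments.map { (text: $0.substring, start: batchOffset + $0.timestamp) }

        // Heuristic: when timestamps become meaningful (non-zero), treat it as
        // the end of a batch and advance the running offset.
        if let last = segments.last, last.timestamp > 0 {
            batchOffset += last.timestamp + last.duration
        }
        return mapped
    }
}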
Posted by smialek.
Post not yet marked as solved · 1 Reply · 2.2k Views
I'm implementing a 'ruler' view similar to what you would find in Sketch / Photoshop / etc. - basically a ruler at the top which shows you the current view size and updates as you zoom in/out. In my example, it draws about ~250 rectangles. I wanted to do it with SwiftUI, but I'm running into performance issues: when I update the view's scale with a slider, FPS drops noticeably. (Testing on macOS Catalina 10.15.7, Xcode 12.0.1, MacBook Air.) I wonder if I've hit the limits of SwiftUI and should switch to Metal, or am I missing some optimization here? Note that I did add the .drawingGroup() modifier, but it doesn't seem to help in any way.

Here's a sample app to download: Github - https://github.com/Moriquendi/swiftui-performance-tests

Here's the code for the "ruler" view:

struct Timeline: View {
    let scale: Double

    private let minLongTickWidth = 30.0
    let LARGE_TICKS_COUNT = 50
    var SMALL_TICKS_COUNT = 5

    var body: some View {
        HStack(alignment: .bottom, spacing: 0) {
            ForEach(ticks, id: \.self) { time in
                HStack(alignment: .bottom, spacing: 0) {
                    LongTick(text: "X")
                        .frame(width: smallTickWidth, alignment: .leading)
                    ForEach(1..<SMALL_TICKS_COUNT, id: \.self) { time in
                        SmallTick()
                            .frame(width: smallTickWidth, alignment: .leading)
                    }
                }
                .frame(width: longTickWidth, alignment: .leading)
            }
        }
        .background(Color(NSColor.black))
        .drawingGroup()
    }

    var oneLongTickDurationInMs: Double {
        let pointsForOneMilisecond = scale / 1000
        var msJump = 1
        var oneLongTickDurationInMs = 1.0
        while true {
            let longTickIntervalWidth = oneLongTickDurationInMs * pointsForOneMilisecond
            if longTickIntervalWidth >= minLongTickWidth {
                break
            }
            oneLongTickDurationInMs += Double(msJump)
            switch oneLongTickDurationInMs {
            case 0..<10: msJump = 1
            case 10..<100: msJump = 10
            case 100..<1000: msJump = 100
            case 1000..<10000: msJump = 1000
            default: msJump = 10000
            }
        }
        return oneLongTickDurationInMs
    }

    var longTickWidth: CGFloat {
        CGFloat(oneLongTickDurationInMs / 1000 * scale)
    }

    var ticks: [Double] {
        let oneLongTickDurationInMs = self.oneLongTickDurationInMs
        let tickTimesInMs = (0...LARGE_TICKS_COUNT).map { Double($0) * oneLongTickDurationInMs }
        return tickTimesInMs
    }

    var smallTickWidth: CGFloat {
        longTickWidth / CGFloat(SMALL_TICKS_COUNT)
    }
}

struct SmallTick: View {
    var body: some View {
        Rectangle()
            .fill(Color.blue)
            .frame(width: 1)
            .frame(maxHeight: 8)
    }
}

struct LongTick: View {
    let text: String

    var body: some View {
        Rectangle()
            .fill(Color.red)
            .frame(width: 1)
            .frame(maxHeight: .infinity)
            .overlay(
                Text(text)
                    .font(.system(size: 12))
                    .fixedSize()
                    .offset(x: 3, y: 0),
                alignment: .topLeading
            )
    }
}
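If it turns out the per-view overhead is the problem, the direction I'm considering is collapsing all ticks into a single Shape so the whole ruler becomes one drawing primitive instead of ~250 views - only a sketch (labels omitted), not a measured fix:

import SwiftUI

// Draw every tick as one Path inside a single Shape.
struct TicksShape: Shape {
    let tickSpacing: CGFloat      // width of one small tick interval, in points
    let ticksPerLongTick: Int     // how many small ticks between long ticks

    func path(in rect: CGRect) -> Path {
        var path = Path()
        var x: CGFloat = 0
        var index = 0
        while x <= rect.width {
            let isLong = index % ticksPerLongTick == 0
            let height: CGFloat = isLong ? rect.height : 8
            path.addRect(CGRect(x: x, y: rect.height - height, width: 1, height: height))
            x += tickSpacing
            index += 1
        }
        return path
    }
}

// Usage sketch, reusing the metrics computed in Timeline:
// TicksShape(tickSpacing: smallTickWidth, ticksPerLongTick: SMALL_TICKS_COUNT)
//     .fill(Color.red)
//     .frame(height: 30)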
Posted by smialek.
Post not yet marked as solved · 1 Reply · 1.7k Views
It seems to me that NSItemProvider doesn't work well in the latest Xcode 12 (macOS 10.15.6). No matter what file types I try to drop, I can never load them. The error I'm getting:

Error Domain=NSItemProviderErrorDomain Code=-1000 "Cannot load representation of type public.audio" UserInfo={NSLocalizedDescription=Cannot load representation of type public.audio}

And here's my code:

// My SwiftUI View
Color.red
    .onDrop(of: ["public.audio"], delegate: self)

Drop delegate:

func performDrop(info: DropInfo) -> Bool {
    let provider = info.itemProviders(for: ["public.audio"])[0]
    provider.loadFileRepresentation(forTypeIdentifier: "public.audio") { (url, error) in
        guard let url = url, error == nil else {
            print(error!.localizedDescription)
            return
        }
        // ...
    }
    // ...
}

I've tried this code with different type identifiers: audio, image, etc. All failed. Does anyone know what the issue is?
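For what it's worth, the alternative route I'm going to try is asking for public.file-url and rebuilding the URL from the returned data. A sketch only - it assumes the drop is also registered for "public.file-url" in onDrop(of:), and I haven't verified it on this Xcode/macOS combination:

func performDrop(info: DropInfo) -> Bool {
    guard let provider = info.itemProviders(for: ["public.file-url"]).first else { return false }

    // Ask for the file URL type instead of the content type,
    // then reconstruct the URL from the returned data.
    provider.loadItem(forTypeIdentifier: "public.file-url", options: nil) { item, error in
        guard error == nil,
              let data = item as? Data,
              let url = URL(dataRepresentation: data, relativeTo: nil) else { return }
        print("Dropped file:", url)
    }
    return true
}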
Posted by smialek.