
Custom SwiftUI view with a two-way binding
Hi folks,

I'm struggling to implement a custom view that takes a Binding as an argument and supports two-way updates of that value. Basically, I'm implementing a custom slider and want its initializer to look like this:

MySlider(value: <binding>)

What I'm struggling with:
1. How do I subscribe to external updates of the binding's value so that I can update the view's state?
2. Is there any nice way to bind a Binding to an @State property?

Here's my current implementation so far, which is not perfect:

struct MySlider: View {
    @Binding var selection: Float?
    @State private var selectedValue: Float?

    init(selection: Binding<Float?>) {
        self._selection = selection
        // https://stackoverflow.com/a/58137096
        _selectedValue = State(wrappedValue: selection.wrappedValue)
    }

    var body: some View {
        HStack(spacing: 3) {
            ForEach(someValues) { v in
                Item(value: v, isSelected: v == self.selection)
                    .onTapGesture {
                        // No idea how to do this another way, so I have to set it twice
                        self.selection = v
                        self.selectedValue = v
                    }
            }
        }
    }
}
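A minimal sketch (not the only approach) of how the duplicated state could be avoided: treat the Binding itself as the single source of truth and drop the @State copy, so external updates flow in automatically and taps write straight back out. MySliderSketch, its placeholder values, and the circle items are hypothetical stand-ins for the real slider.

import SwiftUI

struct MySliderSketch: View {
    @Binding var selection: Float?

    // Placeholder tick values; the real slider would compute these.
    private let someValues: [Float] = [0, 0.25, 0.5, 0.75, 1]

    var body: some View {
        HStack(spacing: 3) {
            ForEach(someValues, id: \.self) { v in
                Circle()
                    .fill(v == selection ? Color.blue : Color.gray)
                    .frame(width: 20, height: 20)
                    .onTapGesture {
                        // Writes back through the binding; there is no second copy to keep in sync.
                        selection = v
                    }
            }
        }
    }
}

// Usage: MySliderSketch(selection: $someOptionalFloatState)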
Replies: 0 · Boosts: 0 · Views: 884 · Feb ’20
Tracking Transparency - clarification on when asking for permission is not required
We're building a modern analytics tool for mobile apps. I personally applaud the iOS 14 privacy changes, but at the same time I'm a little worried about the opt-in system for tracking, as early surveys suggest most users won't opt in. We'd love to build a tool that both respects privacy in Apple's spirit and gives developers a broad perspective on their business performance. I'd love to hear more clarification on the cases where we don't have to ask for permission for "tracking". Here are a few gray-area cases.

Case 1
Do we need to ask for "tracking permission" if:
1) the user's activity data is uploaded to a 3rd-party analytics tool,
2) users can be identified with an email and/or identifiers,
3) BUT the data is only used for the developer's business analytics and is NOT SOLD OR USED for advertising?

In that case, the 3rd party is solely providing a product to persist and analyze the data on the developer's behalf, and it's not selling that data to anyone.

Case 2
Do we need to ask for "tracking permission" if:
1) the user's activity data is uploaded to a 3rd-party analytics tool,
2) BUT we're only using anonymous identifiers generated on-device that can't be linked to the device?

For example, we generate a new UUID on first app launch and use it to "identify" the user. No ID for Vendor is used.
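For the cases where the prompt is required, the request itself is the standard AppTrackingTransparency call. A minimal sketch for reference (the fallback behavior is just an assumption about how an analytics SDK might react to each status):

import AppTrackingTransparency

func requestTrackingIfNeeded() {
    // iOS 14+. Tracking identifiers should only be read when the status is .authorized.
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // OK to use cross-app identifiers for tracking here.
            break
        case .denied, .restricted, .notDetermined:
            // Fall back to non-tracking analytics.
            break
        @unknown default:
            break
        }
    }
}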
Replies: 0 · Boosts: 0 · Views: 844 · Jul ’20
NSItemProvider always fails when loading a dropped file
It seems to me that NSItemProvider doesn't work well in the latest Xcode 12 (macOS 10.15.6). No matter what file types I try to drop, I can never load them. The error I'm getting:

Error Domain=NSItemProviderErrorDomain Code=-1000 "Cannot load representation of type public.audio" UserInfo={NSLocalizedDescription=Cannot load representation of type public.audio}

And here's my code:

// My SwiftUI view
Color.red
    .onDrop(of: ["public.audio"], delegate: self)

Drop delegate:

func performDrop(info: DropInfo) -> Bool {
    let provider = info.itemProviders(for: ["public.audio"])[0]
    provider.loadFileRepresentation(forTypeIdentifier: "public.audio") { (url, error) in
        guard let url = url, error == nil else {
            print(error!.localizedDescription)
            return
        }
        ...
    }
    return true
}

I've tried this code with different type identifiers: audio, image, etc. All failed. Does anyone know what the issue is?
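One workaround sometimes used for Finder drops on macOS, sketched under the assumption that the drop source delivers a file URL: register for "public.file-url" and decode the URL from the item data instead of asking for a file representation. This is not a confirmed fix for the error above, just an alternative path worth trying.

import SwiftUI

struct DropTargetSketch: View {
    var body: some View {
        Color.red
            .onDrop(of: ["public.file-url"], isTargeted: nil) { providers in
                guard let provider = providers.first else { return false }
                provider.loadItem(forTypeIdentifier: "public.file-url", options: nil) { item, error in
                    // Finder hands the file URL over as Data.
                    if let data = item as? Data,
                       let url = URL(dataRepresentation: data, relativeTo: nil) {
                        print("Dropped file:", url.path)
                    } else if let error = error {
                        print("Drop failed:", error.localizedDescription)
                    }
                }
                return true
            }
    }
}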
Replies: 1 · Boosts: 0 · Views: 2k · Sep ’20
SwiftUI Performance - FPS drops when view is updated on every scroll action
I'm implementing a 'ruler' view similar to what you would find in Sketch / Photoshop / etc.: a ruler at the top that shows the current view size and updates as you zoom in/out. In my example, it draws about ~250 rectangles. I wanted to do it with SwiftUI, but I'm running into performance issues. When I update the view's scale with a slider, FPS drops noticeably. (Testing on macOS Catalina 10.15.7, Xcode 12.0.1, MacBook Air.) I wonder if I've hit the limits of SwiftUI and should switch to Metal, or am I missing some optimization here? Note that I did add the .drawingGroup() modifier, but it doesn't seem to help in any way.

Here's a sample app to download: GitHub - https://github.com/Moriquendi/swiftui-performance-tests

Here's the code for the "ruler" view:

struct Timeline: View {
    let scale: Double

    private let minLongTickWidth = 30.0
    let LARGE_TICKS_COUNT = 50
    var SMALL_TICKS_COUNT = 5

    var body: some View {
        HStack(alignment: .bottom, spacing: 0) {
            ForEach(ticks, id: \.self) { time in
                HStack(alignment: .bottom, spacing: 0) {
                    LongTick(text: "X")
                        .frame(width: smallTickWidth, alignment: .leading)
                    ForEach(1..<SMALL_TICKS_COUNT, id: \.self) { time in
                        SmallTick()
                            .frame(width: smallTickWidth, alignment: .leading)
                    }
                }
                .frame(width: longTickWidth, alignment: .leading)
            }
        }
        .background(Color(NSColor.black))
        .drawingGroup()
    }

    var oneLongTickDurationInMs: Double {
        let pointsForOneMilisecond = scale / 1000
        var msJump = 1
        var oneLongTickDurationInMs = 1.0
        while true {
            let longTickIntervalWidth = oneLongTickDurationInMs * pointsForOneMilisecond
            if longTickIntervalWidth >= minLongTickWidth {
                break
            }
            oneLongTickDurationInMs += Double(msJump)
            switch oneLongTickDurationInMs {
            case 0..<10: msJump = 1
            case 10..<100: msJump = 10
            case 100..<1000: msJump = 100
            case 1000..<10000: msJump = 1000
            default: msJump = 10000
            }
        }
        return oneLongTickDurationInMs
    }

    var longTickWidth: CGFloat {
        CGFloat(oneLongTickDurationInMs / 1000 * scale)
    }

    var ticks: [Double] {
        let oneLongTickDurationInMs = self.oneLongTickDurationInMs
        let tickTimesInMs = (0...LARGE_TICKS_COUNT).map { Double($0) * oneLongTickDurationInMs }
        return tickTimesInMs
    }

    var smallTickWidth: CGFloat {
        longTickWidth / CGFloat(SMALL_TICKS_COUNT)
    }
}

struct SmallTick: View {
    var body: some View {
        Rectangle()
            .fill(Color.blue)
            .frame(width: 1)
            .frame(maxHeight: 8)
    }
}

struct LongTick: View {
    let text: String

    var body: some View {
        Rectangle()
            .fill(Color.red)
            .frame(width: 1)
            .frame(maxHeight: .infinity)
            .overlay(
                Text(text)
                    .font(.system(size: 12))
                    .fixedSize()
                    .offset(x: 3, y: 0),
                alignment: .topLeading
            )
    }
}
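One optimization worth trying before switching to Metal: collapse the ~250 tick views into a single Shape whose path draws every tick, so a scale change re-renders one view instead of hundreds. A minimal sketch under that assumption; the spacing math here is deliberately simplified and the real oneLongTickDurationInMs logic would plug in where noted.

import SwiftUI

// Hypothetical sketch: all ticks drawn by one Path, keeping the view tree tiny.
struct TicksShape: Shape {
    var scale: Double

    // Lets SwiftUI animate scale changes without rebuilding views.
    var animatableData: Double {
        get { scale }
        set { scale = newValue }
    }

    func path(in rect: CGRect) -> Path {
        var path = Path()
        // Simplified spacing; the real code would reuse oneLongTickDurationInMs etc.
        let longSpacing = max(30.0, 30.0 * scale)
        let smallSpacing = longSpacing / 5
        var x = 0.0
        while x < Double(rect.width) {
            let isLong = x.truncatingRemainder(dividingBy: longSpacing) < smallSpacing / 2
            let height: Double = isLong ? Double(rect.height) : 8
            path.addRect(CGRect(x: x, y: Double(rect.height) - height, width: 1, height: height))
            x += smallSpacing
        }
        return path
    }
}

// Usage: Timeline's body becomes a single view.
// TicksShape(scale: scale).fill(Color.red).background(Color.black)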
Replies: 1 · Boosts: 0 · Views: 2.5k · Oct ’20
Undocumented behavior of delivering partial results in batches.
I'm using the Speech framework to transcribe a very long audio file (1h+), and I want to present partial results along the way. What I've noticed is that SFSpeechRecognizer processes the audio in batches. The delivered SFTranscriptionSegments have their timestamp set to 0.0 most of the time, but the timestamps seem to be set to meaningful values at the end of a "batch". When a batch is done, the next reported partial results no longer contain those segments; it starts delivering partial results from the next batch. Note that everything I'm describing here applies when SFSpeechRecognitionResult has isFinal set to false. I found zero mention of this in the docs.

What's problematic for me is that segment timestamps in each batch are relative to the batch itself, not to the entire audio file. Because of that, it's impossible to determine a segment's absolute timestamp, since we don't know the absolute timestamp of the batch. Is there any Apple engineer here who could shed some light on this behavior? Is there any way to get a meaningful segment timestamp from the partial-results callbacks?
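For reference, a minimal sketch of the setup that exhibits this, logging each segment's timestamp on every partial callback so the batch-relative reset is visible (authorization handling omitted; the file URL is a placeholder):

import Speech

func transcribe(fileURL: URL) -> SFSpeechRecognitionTask? {
    guard let recognizer = SFSpeechRecognizer() else { return nil }
    let request = SFSpeechURLRecognitionRequest(url: fileURL)
    request.shouldReportPartialResults = true

    return recognizer.recognitionTask(with: request) { result, _ in
        guard let result = result else { return }
        print("isFinal: \(result.isFinal)")
        for segment in result.bestTranscription.segments {
            // timestamp is usually 0.0 mid-batch and appears batch-relative once populated.
            print("  \(segment.substring) @ \(segment.timestamp)s (+\(segment.duration)s)")
        }
    }
}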
Replies: 1 · Boosts: 1 · Views: 1.1k · Nov ’20
SwiftUI - Global coordinate space is flipped on macOS
The behavior of reading frames through GeometryReader is confusing on macOS. Apparently, when you read a frame in the local or a named coordinate space, the returned frame is in the "SwiftUI coordinate system", where the (0,0) point is in the upper-left corner. However, when you read a frame in the global space, the returned frame is in the "native macOS system", where (0,0) is in the bottom-left corner. Is this behavior documented anywhere, or is it a bug? I would expect SwiftUI to always return frames the same way on all platforms. I'm trying to figure out if I'm missing something here.

My sample code:

struct ContentView: View {
    var body: some View {
        ZStack(alignment: .bottom) {
            Color.blue
                .frame(width: 100, height: 150)
            Color.red
                .frame(width: 20, height: 60)
                .background(
                    GeometryReader { geo -> Color in
                        let g = geo.frame(in: .global)
                        let s = geo.frame(in: .named("stack"))
                        print("Global: \(g) | Stack: \(s)")
                        return Color.purple
                    }
                )
                .padding(.bottom, 5)
        }
        .padding(40)
        .coordinateSpace(name: "stack")
        .background(Color.pink)
    }
}

Output:
Global: (80.0, 45.0, 20.0, 60.0) | Stack: (80.0, 125.0, 20.0, 60.0)
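If consistent top-left-origin frames are needed across platforms, one workaround consistent with the output above is to avoid .global and attach a named coordinate space as high in the hierarchy as practical. A sketch, assuming a wrapper view named "root" (the frames are then relative to that ancestor rather than the window, but the origin stays top-left on every platform):

import SwiftUI

struct RootContainer: View {
    var body: some View {
        ContentView()
            .coordinateSpace(name: "root")
    }
}

// Inside any descendant:
// GeometryReader { geo -> Color in
//     let frame = geo.frame(in: .named("root"))   // top-left origin on macOS too
//     print(frame)
//     return Color.clear
// }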
Replies: 1 · Boosts: 0 · Views: 2.6k · Nov ’20
CATiledLayer renders too few tiles on macOS
CATiledLayer doesn't work as expected on macOS: it chooses to render too few tiles. In my example, my view is 300px wide and I set the layer's transform scale to 4x. In that case, CATiledLayer renders only two tiles, 50px wide at 2x scale, and stretches them. Interestingly, when I run similar code on iOS, it works correctly: it renders 3 tiles, 25px wide at 4x scale. Is it a bug, or am I missing something here? My code below:

class WaveformView: NSView {
    var scale: CGFloat = 1.0 {
        didSet {
            layer?.transform = CATransform3DScale(CATransform3DIdentity, scale, 1.0, 1.0)
            layer?.setNeedsDisplay(bounds)
        }
    }

    private var tiledLayer: CATiledLayer { layer as! CATiledLayer }

    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        wantsLayer = true
        tiledLayer.levelsOfDetail = 8
        tiledLayer.levelsOfDetailBias = 8
        tiledLayer.tileSize = CGSize(width: 100.0, height: .infinity)
        tiledLayer.contentsScale = 1.0
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func draw(_ dirtyRect: NSRect) {
        let nsContext = NSGraphicsContext.current!
        let cgContext = nsContext.cgContext
        cgContext.saveGState()

        let scaleX: CGFloat = cgContext.ctm.a

        NSColor.red.setStroke()
        NSBezierPath(rect: dirtyRect)
            .stroke()

        let fontSize: CGFloat = 12.0
        let attr = [
            NSAttributedString.Key.font: NSFont.systemFont(ofSize: fontSize)
        ]
        let str = "S: \(scaleX)\n\(dirtyRect.width)" as NSString
        str.draw(at: NSPoint(x: dirtyRect.minX, y: dirtyRect.midY), withAttributes: attr)

        nsContext.cgContext.restoreGState()
    }

    override func makeBackingLayer() -> CALayer {
        return CATiledLayer()
    }
}
Replies: 2 · Boosts: 0 · Views: 1.5k · Dec ’20
AVAudioEngine - output format has 0 channels after changing the device of the inputNode's auAudioUnit
I'm trying to change the device of AVAudioEngine's inputNode. To do so, I'm calling setDeviceID on its auAudioUnit. Although this call doesn't fail, something goes wrong with the output busses: when I ask for their format, it reports 0 Hz and 0 channels, and the app crashes when I try to connect the node to the mainMixerNode. Can anyone explain what's wrong with this code?

avEngine = AVAudioEngine()

print(avEngine.inputNode.auAudioUnit.inputBusses[0].format)
// <AVAudioFormat 0x1404b06e0: 2 ch, 44100 Hz, Float32, non-inter>
print(avEngine.inputNode.auAudioUnit.outputBusses[0].format)
// <AVAudioFormat 0x1404b0a60: 2 ch, 44100 Hz, Float32, inter>

// Now, let's change the device from the headphones' mic to the built-in mic.
try! avEngine.inputNode.auAudioUnit.setDeviceID(inputDevice.deviceID)

print(avEngine.inputNode.auAudioUnit.inputBusses[0].format)
// <AVAudioFormat 0x1404add50: 2 ch, 44100 Hz, Float32, non-inter>
print(avEngine.inputNode.auAudioUnit.outputBusses[0].format)
// <AVAudioFormat 0x1404adff0: 0 ch, 0 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved>  // !!!

// Interestingly, 'inputNode' shows a different format than `auAudioUnit`.
print(avEngine.inputNode.inputFormat(forBus: 0))
// <AVAudioFormat 0x1404af480: 1 ch, 44100 Hz, Float32>
print(avEngine.inputNode.outputFormat(forBus: 0))
// <AVAudioFormat 0x1404ade30: 1 ch, 44100 Hz, Float32>

Edit: Further debugging reveals another puzzling thing:

avEngine.inputNode.auAudioUnit == avEngine.outputNode.auAudioUnit // this is true ?!

inputNode and outputNode share the same AUAudioUnit, and its deviceID is by default set to the speakers. It's so confusing to me... why would the inputNode's device be a speaker?
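One workaround worth trying, sketched here as an assumption rather than a confirmed fix: leave the engine's shared AUAudioUnit alone and change the system default input device via Core Audio instead, since AVAudioEngine's inputNode follows the default input. Note this changes the default for the whole system, which may not suit every app.

import CoreAudio

func setDefaultInputDevice(_ deviceID: AudioDeviceID) -> OSStatus {
    var deviceID = deviceID
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultInputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMaster
    )
    // Writes the new default input device on the system audio object.
    return AudioObjectSetPropertyData(
        AudioObjectID(kAudioObjectSystemObject),
        &address,
        0,
        nil,
        UInt32(MemoryLayout<AudioDeviceID>.size),
        &deviceID
    )
}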
Replies: 1 · Boosts: 0 · Views: 1.8k · Jun ’21
View body is called although its @Binding has not changed
SwiftUI's promise is to call a View's body only when needed, so that views whose state has not changed are not invalidated. However, there are cases where this promise is not kept and the view is updated even though its state has not changed.

Example:

struct InsideView: View {
    @Binding var value: Int
    // …
}

Looking at that view, we'd expect its body to be called when value changes. However, this is not always true; it depends on how that binding is passed to the view.

When the view is created this way, everything works as expected, and InsideView is not updated when value hasn't changed:

@State private var value: Int = 0
InsideView(value: $value)

In the example below, InsideView will be incorrectly updated even when value has not changed. It will be updated whenever its container is updated too:

var customBinding: Binding<Int> {
    Binding<Int> { 100 } set: { _ in }
}
InsideView(value: customBinding)

Can anyone explain this and say whether it's expected? Is there any way to avoid this behaviour, which can ultimately lead to performance issues?

Here's a sample project if anyone wants to play with it:

import SwiftUI

struct ContentView: View {
    @State private var tab = 0
    @State private var count = 0
    @State private var someValue: Int = 100

    var customBinding: Binding<Int> {
        Binding<Int> { 100 } set: { _ in }
    }

    var body: some View {
        VStack {
            Picker("Tab", selection: $tab) {
                Text("@Binding from @State").tag(0)
                Text("Custom @Binding").tag(1)
            }
            .pickerStyle(SegmentedPickerStyle())

            VStack(spacing: 10) {
                if tab == 0 {
                    Text("When you tap a button, a view below should not be updated. That's a desired behaviour.")
                    InsideView(value: $someValue)
                } else if tab == 1 {
                    Text("When you tap a button, a view below will be updated (its background color will be set to random value to indicate this). This is unexpected because the view State has not changed.")
                    InsideView(value: customBinding)
                }
            }
            .frame(width: 250, height: 150)

            Button("Tap! Count: \(count)") {
                count += 1
            }
        }
        .frame(width: 300, height: 350)
        .padding()
    }
}

struct InsideView: View {
    @Binding var value: Int

    var body: some View {
        print("[⚠️] InsideView body.")
        return VStack {
            Text("I'm a child view. My body should be called only once.")
                .multilineTextAlignment(.center)
            Text("Value: \(value)")
        }
        .background(Color.random)
    }
}

extension ShapeStyle where Self == Color {
    static var random: Color {
        Color(
            red: .random(in: 0...1),
            green: .random(in: 0...1),
            blue: .random(in: 0...1)
        )
    }
}
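One possible mitigation, sketched here as a workaround rather than an explanation of the behavior: pass the plain value plus a setter closure instead of a Binding, and make the child Equatable on the value only, so SwiftUI can skip the body when the data is unchanged. InsideValueView is a hypothetical replacement for InsideView.

import SwiftUI

struct InsideValueView: View, Equatable {
    let value: Int
    let setValue: (Int) -> Void

    // Only the data participates in the diff; the closure is ignored.
    static func == (lhs: Self, rhs: Self) -> Bool {
        lhs.value == rhs.value
    }

    var body: some View {
        Text("Value: \(value)")
            .onTapGesture { setValue(value + 1) }
    }
}

// Usage from the container:
// InsideValueView(value: someValue, setValue: { someValue = $0 })
//     .equatable()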
Replies: 2 · Boosts: 0 · Views: 1.8k · Aug ’21
NSManagedObjectContext's processPendingChanges() is not called at the end of the RunLoop
I have a SwiftUI view that updates my NSManagedObject subclass object after dragging finishes. I've noticed that NSFetchedResultsController is not reporting updates at the end of the run loop during which the change occurred. It takes a few moments, or a save(), for the change to be noticed.

To debug it, I swizzled the processPendingChanges method of NSManagedObjectContext and logged when it's called. To my surprise, I noticed it's not always called at the end of the run loop.

What am I missing here? Why is processPendingChanges() not called? Should I call it manually on my own after every change?

For reference: I'm testing on macOS in an AppKit app. The NSManagedObjectContext is created by NSPersistentDocument.

Here's how my view's code looks:

// `myItem` is my subclass of NSManagedObject
Item()
    .gesture(
        DragGesture(minimumDistance: 0, coordinateSpace: CoordinateSpace.compositionGrid)
            .onChanged({ _ in
                // ...
            })
            .onEnded({ (dragInfo) in
                // :-(
                // This change is not always noticed by NSFetchedResultsController
                myItem.someProperty = dragInfo.location
            })
    )
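A sketch of the manual workaround mentioned above: push pending changes explicitly right after the mutation so NSFetchedResultsController is notified without waiting for a save. Whether this should be necessary at all is exactly the open question; the names mirror the snippet above.

Item()
    .gesture(
        DragGesture(minimumDistance: 0, coordinateSpace: CoordinateSpace.compositionGrid)
            .onEnded { dragInfo in
                myItem.someProperty = dragInfo.location
                // Explicitly coalesce and broadcast the pending change now.
                myItem.managedObjectContext?.processPendingChanges()
            }
    )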
Replies: 1 · Boosts: 0 · Views: 1.1k · Sep ’21
AVVideoCompositionCoreAnimationTool is not animating custom properties during export with AVAssetExportSession
I'm trying to add an animated CALayer over my video and export it with AVAssetExportSession. I'm animating the layer using a CABasicAnimation set on my custom property. However, it seems that func draw(in ctx: CGContext) is never called during an export for my custom layer, and no animation is played. I found out that animating standard properties like borderWidth works fine, but custom properties are ignored. Can someone help with that?

func export(standard: Bool) {
    print("Exporting...")

    let composition = AVMutableComposition()
    //composition.naturalSize = CGSize(width: 300, height: 300)

    // Video track
    let videoTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: CMPersistentTrackID(1))!
    let _videoAssetURL = Bundle.main.url(forResource: "emptyVideo", withExtension: "mov")!
    let _emptyVideoAsset = AVURLAsset(url: _videoAssetURL)
    let _emptyVideoTrack = _emptyVideoAsset.tracks(withMediaType: .video)[0]
    try! videoTrack.insertTimeRange(CMTimeRange(start: .zero, duration: _emptyVideoAsset.duration),
                                    of: _emptyVideoTrack,
                                    at: .zero)

    // Root layer
    let rootLayer = CALayer()
    rootLayer.frame = CGRect(origin: .zero, size: composition.naturalSize)

    // Video layer
    let video = CALayer()
    video.frame = CGRect(origin: .zero, size: composition.naturalSize)
    rootLayer.addSublayer(video)

    // Animated layer
    let animLayer = CustomLayer()
    animLayer.progress = 0.0
    animLayer.frame = CGRect(origin: .zero, size: composition.naturalSize)
    rootLayer.addSublayer(animLayer)
    animLayer.borderColor = UIColor.green.cgColor
    animLayer.borderWidth = 0.0

    let key = standard ? "borderWidth" : "progress"
    let anim = CABasicAnimation(keyPath: key)
    anim.fromValue = 0.0
    anim.toValue = 50.0
    anim.duration = 6.0
    anim.beginTime = AVCoreAnimationBeginTimeAtZero
    anim.isRemovedOnCompletion = false
    animLayer.add(anim, forKey: nil)

    // Video composition
    let videoComposition = AVMutableVideoComposition(propertiesOf: composition)
    videoComposition.renderSize = composition.naturalSize
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)

    // Animation tool
    let animTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: video, in: rootLayer)
    videoComposition.animationTool = animTool

    // Video instruction > Basic
    let videoInstruction = AVMutableVideoCompositionInstruction()
    videoInstruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
    videoComposition.instructions = [videoInstruction]

    // Video instruction > Layer instructions
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
    videoInstruction.layerInstructions = [layerInstruction]

    // Session
    let exportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)!
    exportSession.videoComposition = videoComposition
    exportSession.shouldOptimizeForNetworkUse = true
    var url = FileManager.default.temporaryDirectory.appendingPathComponent("\(arc4random()).mov")
    url = URL(fileURLWithPath: url.path)
    exportSession.outputURL = url
    exportSession.outputFileType = .mov
    _session = exportSession

    exportSession.exportAsynchronously {
        if let error = exportSession.error {
            print("Fail. \(error)")
        } else {
            print("Ok")
            print(url)
            DispatchQueue.main.async {
                let vc = AVPlayerViewController()
                vc.player = AVPlayer(url: url)
                self.present(vc, animated: true) {
                    vc.player?.play()
                }
            }
        }
    }
}

CustomLayer:

class CustomLayer: CALayer {
    @NSManaged var progress: CGFloat

    override init() {
        super.init()
    }

    override init(layer: Any) {
        let l = layer as! CustomLayer
        super.init(layer: layer)
        print("Copy. \(progress) \(l.progress)")
        self.progress = l.progress
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
    }

    override class func needsDisplay(forKey key: String) -> Bool {
        let needsDisplayKeys = ["progress"]
        if needsDisplayKeys.contains(key) {
            return true
        }
        return super.needsDisplay(forKey: key)
    }

    override func display() {
        print("Display. \(progress) | \(presentation()?.progress)")
        super.display()
    }

    override func draw(in ctx: CGContext) {
        // Save / restore ctx
        ctx.saveGState()
        defer { ctx.restoreGState() }

        print("Draw. \(progress)")
        ctx.move(to: .zero)
        ctx.addLine(to: CGPoint(x: bounds.size.width * progress, y: bounds.size.height * progress))
        ctx.setStrokeColor(UIColor.red.cgColor)
        ctx.setLineWidth(40)
        ctx.strokePath()
    }
}

Here's a full sample project if someone is interested: https://www.dropbox.com/s/evkm60wkeb2xrzh/BrokenAnimation.zip?dl=0
Replies: 1 · Boosts: 0 · Views: 1.3k · Oct ’21
NSFetchedResultsController returns duplicates after merging contexts
I'm noticing a weird issue where NSFetchedResultsController returns two instances of the same object in its fetchedObjects.

func controllerDidChangeContent(_ controller: NSFetchedResultsController<NSFetchRequestResult>) {
    items = (controller.fetchedObjects ?? []) as! [Item]
    print(items)
    // Prints:
    // - 0 : <Item: 0x600000d40e10> (entity: Item; id: 0x600002e6b2e0 <x-coredata:///Item/tE1646DB0-C3C2-4AE1-BC32-6B10934F292C2>; ...
    // - 1 : <Item: 0x600000d40e10> (entity: Item; id: 0x600002e6b2e0 <x-coredata:///Item/tE1646DB0-C3C2-4AE1-BC32-6B10934F292C2>; ...
    print(items[0] == items[1]) // true !!!
}

The issue occurs after adding a new item and saving both contexts:

func weirdTest() {
    print("1. Adding.")
    let item = Item(context: viewContext)
    item.id = UUID()
    viewContext.processPendingChanges()

    print("2. Saving.")
    viewContext.performAndWait {
        try! viewContext.save()
    }
    rootCtx.performAndWait {
        try! rootCtx.save()
    }
}

Here's my Core Data stack:

View Context (main thread) --> Background Context --> Persistent Store Coordinator

And here's how I configure my contexts:

lazy var rootCtx: NSManagedObjectContext = {
    let rootCtx = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
    rootCtx.persistentStoreCoordinator = coordinator
    rootCtx.automaticallyMergesChangesFromParent = true
    return rootCtx
}()

lazy var viewContext: NSManagedObjectContext = {
    let ctx = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
    ctx.parent = rootCtx
    ctx.automaticallyMergesChangesFromParent = true
    return ctx
}()

Does anyone have any idea what's going on here? :<
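The printed objectID is still a temporary one (the "/t..." form). One experiment worth trying, sketched as an assumption rather than a confirmed fix: ask the view context for permanent IDs before saving, so the inserted object and the copy merged back from the parent can't show up under two different IDs.

func addItemWithPermanentID() {
    let item = Item(context: viewContext)
    item.id = UUID()

    // Promote the temporary ID before any save/merge happens.
    try! viewContext.obtainPermanentIDs(for: [item])

    viewContext.performAndWait { try! viewContext.save() }
    rootCtx.performAndWait { try! rootCtx.save() }
}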
Replies: 0 · Boosts: 0 · Views: 868 · Nov ’21