Posts

Post not yet marked as solved · 3 Replies · 193 Views
In the 'Discussion' section of the current documentation for Swift's DispatchQueue, it says: "If the target queue is a concurrent queue, the blocks run in parallel and must therefore be reentrant-safe."

However, unlike dispatch_apply (on which this API is built), this method provides no direct means of specifying a target queue, so this callout is somewhat more confusing than it ought to be. IMO, it's important to highlight the reentrancy considerations that apply in most (all?) cases, but the implicit reference to the implementation details should be removed or clarified.

Feedback filed as: FB13708750
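For reference, here's a minimal sketch of the kind of call site that callout governs, assuming the method under discussion is DispatchQueue.concurrentPerform(iterations:execute:) (the Swift cover for dispatch_apply). Since the work closure may be invoked on multiple threads simultaneously, it should only touch disjoint state per iteration:

```swift
import Dispatch

// Hypothetical example: squaring numbers in parallel. Each invocation of the
// closure writes only to its own index, which keeps the work reentrant-safe;
// mutating shared state here without synchronization would be a data race.
let input = Array(1...1_000)
var output = [Int](repeating: 0, count: input.count)

output.withUnsafeMutableBufferPointer { buffer in
    DispatchQueue.concurrentPerform(iterations: input.count) { i in
        buffer[i] = input[i] * input[i]  // disjoint writes only
    }
}
```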
Posted by jamie_sq.
Post not yet marked as solved · 2 Replies · 266 Views
Per my understanding of the DispatchQueue docs, and various WWDC videos on the matter, if one creates a queue in the following manner:

```swift
let q = DispatchQueue(
    label: "my-q",
    qos: .utility,
    target: .global(qos: .userInteractive)
)
```

then one should expect work items submitted via async() to effectively run at userInteractive QoS, as the target queue should provide a 'floor' on the effective QoS value (assuming no additional rules are in play, e.g. higher-priority items have been enqueued, submitted work items enforce QoS, etc.). In practice, however, this particular formulation does not appear to function that way, and the 'resolved' QoS value seems to be utility, contrary to what the potentially relevant documentation suggests. This behavior appears to be inconsistent with other permutations of queue construction, which makes it even more surprising.

Here's some sample code I was experimenting with to check the behavior of queues created in various ways that I would expect to function analogously (with regard to the derived QoS value for the threads executing their work items):

```swift
import Dispatch
import Foundation

func test_qos_permutations() {
    // q1: explicit utility QoS, targeting a global userInitiated queue at init
    let utilTargetingGlobalUIQ = DispatchQueue(
        label: "qos:util tgt:globalUI",
        qos: .utility,
        target: .global(qos: .userInitiated)
    )

    // Intermediate queue with unspecified QoS, targeting a global userInitiated queue
    let customUITargetQ = DispatchQueue(
        label: "custom tgt, qos: unspec, tgt:globalUI",
        target: .global(qos: .userInitiated)
    )

    // q2: explicit utility QoS, targeting the custom serial queue above
    let utilTargetingCustomSerialUIQ = DispatchQueue(
        label: "qos:util tgt:customSerialUI",
        qos: .utility,
        target: customUITargetQ
    )

    // q3: explicit utility QoS, with the global userInitiated target assigned
    // after initialization via setTarget(queue:)
    let utilDelayedTargetingGlobalUIQ = DispatchQueue(
        label: "qos:util tgt:globalUI-delayed",
        qos: .utility,
        attributes: .initiallyInactive
    )
    utilDelayedTargetingGlobalUIQ.setTarget(queue: .global(qos: .userInitiated))
    utilDelayedTargetingGlobalUIQ.activate()

    let queues = [
        utilTargetingGlobalUIQ,
        utilTargetingCustomSerialUIQ,
        utilDelayedTargetingGlobalUIQ,
    ]
    for q in queues {
        q.async {
            Thread.current.name = q.label
            let threadQos = qos_class_self()
            print("""
                q: \(q.label)
                  orig qosClass: \(q.qos.qosClass)
                  thread qosClass: \(DispatchQoS.QoSClass(rawValue: threadQos)!)
                """)
        }
    }
}
```

Running this, I get the following output:

```
q: qos:util tgt:customSerialUI
  orig qosClass: utility
  thread qosClass: userInitiated
q: qos:util tgt:globalUI-delayed
  orig qosClass: utility
  thread qosClass: userInitiated
q: qos:util tgt:globalUI
  orig qosClass: utility
  thread qosClass: utility
```

This test suggests that constructing a queue with an explicit qos parameter and targeting a global queue of nominally 'higher' QoS does not result in a queue that runs its items at the target's QoS. Perhaps most surprising is that if the target queue is set after the queue is initialized, you do get the expected 'QoS floor' behavior.

Is this behavior expected, or possibly a bug?
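For what it's worth, pending an answer here, the delayed-assignment behavior observed above suggests a workaround. This is a hypothetical helper built purely on that observation; it assumes the setTarget(queue:) path reliably produces the expected floor:

```swift
import Dispatch

// Hypothetical helper based on the test results above: deferring the target
// assignment via setTarget(queue:) appears to yield the expected QoS floor,
// whereas passing the target to the initializer alongside an explicit qos
// does not. Assumes the delayed-assignment behavior is reliable.
func makeQueue(label: String,
               qos: DispatchQoS,
               floor target: DispatchQueue) -> DispatchQueue {
    let q = DispatchQueue(label: label, qos: qos, attributes: .initiallyInactive)
    q.setTarget(queue: target)
    q.activate()
    return q
}

// Usage: items should run with a userInitiated floor despite the utility qos.
let q = makeQueue(label: "my-q", qos: .utility, floor: .global(qos: .userInitiated))
```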
Posted by jamie_sq.
Post marked as solved · 3 Replies · 517 Views
I was wondering if there is a way, while debugging, to observe the 'QoS boosting' behavior that is implemented in various places to provide priority-inversion avoidance. The pthread_override_qos_class_start/end_np header comments specifically say that overrides aren't reflected in the qos_class_self() and pthread_get_qos_class_np() return values. As far as I can tell, the 'CPU Report' UI in Xcode also does not reflect this information (perhaps for the reason the header comments call out).

Is there a direct mechanism to observe this behavior? Presumably a heuristic empirical test could be done to compare throughput of a queue that should have its priority boosted against one that should not, but I would prefer a less opaque means of verification if possible. Thanks in advance!
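Here's a minimal sketch of the heuristic timing comparison mentioned above. This is purely an assumption about observable side effects, not an official inspection API, and it only produces a meaningful difference when the CPUs are busy enough for utility work to actually be deprioritized:

```swift
import Dispatch
import Foundation

// Heuristic probe (an assumption, not an official API): under CPU contention,
// a utility-queue work item that a userInteractive thread is sync()-blocked on
// should finish faster than one nobody waits on, because sync() donates the
// caller's priority to the queue.
func probeQoSBoosting() {
    let worker = DispatchQueue(label: "probe.worker", qos: .utility)

    func busyWork() -> Double {
        let start = DispatchTime.now()
        var acc = 0.0
        for i in 1...2_000_000 { acc += Double(i).squareRoot() }
        _ = acc  // keep the loop from being optimized away
        return Double(DispatchTime.now().uptimeNanoseconds - start.uptimeNanoseconds) / 1e9
    }

    // Unboosted baseline: async + semaphore. Semaphores carry no ownership
    // information, so the waiting thread's QoS is not donated to the queue.
    let baseline = DispatchSemaphore(value: 0)
    var asyncSeconds = 0.0
    worker.async { asyncSeconds = busyWork(); baseline.signal() }
    baseline.wait()

    // Potentially boosted: a userInteractive thread blocks in worker.sync(),
    // which should lend its priority to the queue for the duration.
    let done = DispatchSemaphore(value: 0)
    var syncSeconds = 0.0
    DispatchQueue.global(qos: .userInteractive).async {
        worker.sync { syncSeconds = busyWork() }
        done.signal()
    }
    done.wait()

    print("unboosted: \(asyncSeconds)s, potentially boosted: \(syncSeconds)s")
}
```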
Posted by jamie_sq.
Post not yet marked as solved · 1 Reply · 422 Views
I've seen evidence that UIViewController has logic to prevent its dealloc/deinit methods from running on a background thread. What seems to occur is that, if the last strong reference to a UIViewController is zeroed off the main thread, the VC is logically marked as 'deallocating', but the actual invocation of dealloc/deinit is enqueued to the main queue.

During the window between the beginning and end of this asynchronous deallocation, some strange issues can occur. In particular, if the deallocating VC is a parent view controller, its children can still access it via their parent property. Despite this property being marked as weak, a non-nil parent VC reference will be returned. If you then attempt to store a weak reference to the parent, you get an immediate runtime crash to the effect of:

Cannot form weak reference to instance (0x1234) of class <some-UIVC-subclass>...

Surprisingly, if you load the reference via the objc runtime's objc_loadWeak method, you get nil, but no crash. Unsurprisingly, if a strong reference to the deallocating parent is stored and persists past its dealloc invocation, you'll generally end up with a segmentation violation when the reference is accessed.

I imagine the UIViewController source is quite complex and there are probably good reasons to try to ensure deallocation only ever occurs on the main thread, but it seems surprising that simply passing view controller variables across threads can expose unsafe references like this. Is this behavior expected? Assuming not, I've preemptively filed feedback FB13478946 regarding this issue.

Attached is some sample code that can reliably reproduce the unexpected behavior: UIKitAsyncDeallocCrashTests.swift
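For anyone who wants to poke at this without the attachment, here's a condensed sketch of the repro described above (a hypothetical harness, distinct from the attached UIKitAsyncDeallocCrashTests.swift): release the last strong reference off-main, then probe the child's parent during the async-dealloc window.

```swift
import UIKit

// Hypothetical repro sketch; call from the main thread. Deliberately crashes
// if the described async-dealloc window is hit.
func reproduceAsyncDeallocWindow() {
    let child = UIViewController()
    var parent: UIViewController? = UIViewController()
    parent?.addChild(child)
    child.didMove(toParent: parent)

    DispatchQueue.global(qos: .userInitiated).async {
        // Zero the last strong reference off-main; per the behavior described
        // above, UIKit marks the instance as deallocating and defers the
        // actual dealloc/deinit to the main queue.
        parent = nil

        // During that window, `child.parent` (declared weak) can still return
        // a non-nil reference to the deallocating instance...
        if let p = child.parent {
            // ...and attempting to store it into a weak variable crashes with
            // "Cannot form weak reference to instance (0x...) of class ..."
            weak var dangling = p
            _ = dangling
        }
    }
}
```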
Posted by jamie_sq.
Post not yet marked as solved · 7 Replies · 2.1k Views
In iOS 16.1 there are two memory-access issues that have surfaced related to NSFetchedResultsController:

1. If your sectionNameKeyPath returns an ordering that doesn't match that of the fetch request's first sort descriptor, the internal NSError instance created to warn you of this fact gets over-released and may crash the application.

2. If your sectionNameKeyPath is a computed property implemented in Objective-C, the fetched results controller may crash due to over-releasing the returned strings (it's possible this also happens in Swift, though I was unable to reproduce the behavior there).

I've filed two feedbacks (FB11652942 & FB11653996) regarding these issues, but also wanted to raise them here in case that may help expedite their resolution.

The offending method appears to be -[NSFetchedResultsController _computeSectionInfo:error:], which, from inspecting the disassembly, now contains some references to objc_autoreleasePoolPop that were previously absent.

Regarding issue 2 specifically: it's unclear what the ownership model for the value returned from sectionNameKeyPath is intended to be when that key path produces a computed value. Does the fetched results controller take ownership of it, or does the framework assume that the corresponding managed object is the owner?

-Jamie
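For illustration, here's a minimal sketch (with hypothetical entity and attribute names) of the kind of configuration that can trigger issue 1, where the section ordering disagrees with the first sort descriptor:

```swift
import CoreData

// Hypothetical setup: the fetch is sorted by 'createdAt', but sections are
// derived from an unrelated 'category' attribute, so the section names need
// not appear in sorted order. On iOS 16.1, performFetch() on data arranged
// this way is where the over-released warning NSError was observed (FB11652942).
func makeMismatchedFRC(context: NSManagedObjectContext) -> NSFetchedResultsController<NSManagedObject> {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Item")
    request.sortDescriptors = [NSSortDescriptor(key: "createdAt", ascending: true)]
    return NSFetchedResultsController(
        fetchRequest: request,
        managedObjectContext: context,
        sectionNameKeyPath: "category",  // ordering mismatch vs. createdAt
        cacheName: nil
    )
}
```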
Posted by jamie_sq.