Why use a serial queue over NSLock or os_unfair_lock?

I have the following two thread-safe wrapper implementations for a boolean:

1 - Using NSLock

import Foundation

class ThreadSafeBool {
    private let lock = NSLock()
    private var wrappedValue: Bool

    var value: Bool {
        get {
            lock.lock()
            defer { lock.unlock() }

            return wrappedValue
        }
        set {
            lock.lock()
            defer { lock.unlock() }

            wrappedValue = newValue
        }
    }

    init(_ initialValue: Bool) {
        wrappedValue = initialValue
    }
}

2 - Using DispatchQueue and sync

import Dispatch

class ThreadSafeBoolQueue {
    private let queue = DispatchQueue(label: "my.queue")
    private var wrappedValue: Bool

    var value: Bool {
        get {
            self.queue.sync { return wrappedValue }
        }
        set {
            self.queue.sync { wrappedValue = newValue }
        }
    }

    init(_ initialValue: Bool) {
        wrappedValue = initialValue
    }
}
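
3 - Using os_unfair_lock (sketch)

For reference, an os_unfair_lock variant could look something like the sketch below (not necessarily the exact code behind my numbers); the lock is heap-allocated because taking & of a Swift stored property does not give a pointer that is guaranteed to stay valid:

import os

class ThreadSafeBoolUnfair {
    // Heap-allocated so the lock has a stable address for the C API.
    private let unfairLock: UnsafeMutablePointer<os_unfair_lock>
    private var wrappedValue: Bool

    var value: Bool {
        get {
            os_unfair_lock_lock(unfairLock)
            defer { os_unfair_lock_unlock(unfairLock) }

            return wrappedValue
        }
        set {
            os_unfair_lock_lock(unfairLock)
            defer { os_unfair_lock_unlock(unfairLock) }

            wrappedValue = newValue
        }
    }

    init(_ initialValue: Bool) {
        let pointer = UnsafeMutablePointer<os_unfair_lock>.allocate(capacity: 1)
        pointer.initialize(to: os_unfair_lock())
        unfairLock = pointer
        wrappedValue = initialValue
    }

    deinit {
        unfairLock.deinitialize(count: 1)
        unfairLock.deallocate()
    }
}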

Even though NSLock is much faster than the sync queue approach, os_unfair_lock is even faster.

Could someone please let me know why lots of examples, including Apple presentations, prefer the second locking approach?

PS: Please keep in mind that the classes are just examples; the main question is why a queue is preferred over NSLock/os_unfair_lock.

Thank you very much

Accepted Reply

Could someone please let me know why lots of examples, including Apple presentations, prefer the second locking approach?

It’s mostly for historical reasons. When Dispatch was first introduced folks went ‘Dispatch happy’ and started using it for everything. This wasn’t helped by Apple’s documentation not being clear about this issue.

If you’re curious about Apple’s current thinking on this topic — and there’s a lot of emphasis on unfair lock! — watch WWDC 2017 Session 706 Modernizing Grand Central Dispatch Usage.

Personally I tend to use NSLock and will continue to do so until I’m able to rely on OSAllocatedUnfairLock being available. Using os_unfair_lock from Swift is more hassle than it’s worth in most cases [1].

Share and Enjoy

Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"

[1] Because you need to manually manage memory due to the issue discussed in The Peril of the Ampersand.
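
For comparison, here is a minimal sketch of the boolean wrapper using OSAllocatedUnfairLock (this assumes you can require an OS version where it’s available):

import os

// Assumed availability: OSAllocatedUnfairLock ships with macOS 13 / iOS 16.
@available(iOS 16.0, macOS 13.0, tvOS 16.0, watchOS 9.0, *)
final class ThreadSafeBoolAllocated {
    // The lock owns the Bool as its protected state, so there is no
    // manual pointer management at all.
    private let storage: OSAllocatedUnfairLock<Bool>

    var value: Bool {
        get { storage.withLock { $0 } }
        set { storage.withLock { $0 = newValue } }
    }

    init(_ initialValue: Bool) {
        storage = OSAllocatedUnfairLock(initialState: initialValue)
    }
}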

  • FYI I filed FB11967856 requesting that the OS requirement for OSAllocatedUnfairLock be lowered. It doesn’t look like it needs to be constrained to the latest OS versions.

    (Also, FYI I filed FB11968310 for a documentation bug regarding the OS requirement for NSLocking.withLock.)

  • Would ThreadSafeBoolQueue cause a crash or deadlock when it's used on the main queue and the system is so busy that it cannot spawn a new thread?

  • @eskimo, would it be enough to do something like this?

        final class UnfairLock {
            let unfairLock: UnsafeMutablePointer<os_unfair_lock>

            init() {
                unfairLock = UnsafeMutablePointer.allocate(capacity: 1)
                unfairLock.initialize(to: os_unfair_lock())
            }

            deinit {
                unfairLock.deinitialize(count: 1)
                unfairLock.deallocate()
            }

            func lock() {
                os_unfair_lock_lock(unfairLock)
            }

            ....


Replies

Hi, thanks for your answer and the video recommendation.

Yes, I have the same impression that people are using GCD for everything. The sad part is that Apple didn't stop the trend, or at least come out with some videos, maybe something like "GCD in depth", to make devs aware of the pros and cons.

One result of this is that more and more forums have accepted answers along the lines of "Never use NSLock, always use queue.sync for any synchronization", without any explanation.

I've read that os_unfair_lock is faster than pthread_mutex/NSLock, but as you mention it is a little problematic in Swift. In the past I've used pthread_mutex, and std::mutex in C++, and they performed OK; plus they are available across the UNIX-like world.

Regarding the video, I think it is interesting. Please correct me if I'm wrong, but I think the queue "tree" approach is a response to the overuse of dispatch queues, with too many threads being created or blocked. They also emphasize trying to have queues per subsystem rather than per class.
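
If I understood the idea correctly, "queues per subsystem" looks roughly like the sketch below (the labels are made up for illustration): a few finer-grained queues all target one serial queue, so the whole subsystem shares a single serialization context instead of every class spawning its own queue.

import Dispatch

// One serial queue for the whole (hypothetical) networking subsystem…
let networkingQueue = DispatchQueue(label: "com.example.networking")

// …and the finer-grained queues in that subsystem target it, so their
// work items all funnel onto the same underlying serial context.
let connectionQueue = DispatchQueue(label: "com.example.networking.connection",
                                    target: networkingQueue)
let parsingQueue = DispatchQueue(label: "com.example.networking.parsing",
                                 target: networkingQueue)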

Have a nice day :)