8 Replies
      Latest reply on Nov 26, 2018 5:46 AM by mwp
      mwp Level 1 (10 points)

        The following code allocates a bunch of big structs in dispatch queues, and it crashes consistently for me when using DispatchQueue.concurrentPerform or NSArray.enumerateObjects, but not when just executing on a background queue.

         

        Is there some queue-specific stack-size limit that can be adjusted?

         

         

        import Dispatch
        import Foundation
        import XCTest
        
        class QueueMemoryAllocCrashTest : XCTestCase {
        
            /// Eight is enough
            struct OctoThing<T> {
                let t1, t2, t3, t4, t5, t6, t7, t8: T
            }
        
            /// A 32K block of memory
            struct MemoryChunk {
                let chunk: OctoThing<OctoThing<OctoThing<OctoThing<Double>>>>? = nil // 32,768 bytes
            }
        
            /// This function does nothing but waste stack space (491,520 bytes to be specific)
            func wasteMemory() {
                // any fewer than 15 of these and the test will pass
                let _ = MemoryChunk()
                let _ = MemoryChunk()
                let _ = MemoryChunk()
                let _ = MemoryChunk()
                let _ = MemoryChunk()
                let _ = MemoryChunk()
                let _ = MemoryChunk()
                let _ = MemoryChunk()
                let _ = MemoryChunk()
                let _ = MemoryChunk()
                let _ = MemoryChunk()
                let _ = MemoryChunk()
                let _ = MemoryChunk()
                let _ = MemoryChunk()
                let _ = MemoryChunk()
            }
        
            func testWasteOnQueue() {
                // this passes without any problems
                DispatchQueue.global(qos: .userInteractive).sync(execute: {
                    wasteMemory()
                    wasteMemory()
                    wasteMemory()
                })
            }
        
        
            func testWasteWithConcurrentPerform() {
                // this crashes with 2 iterations or more with an EXC_BAD_ACCESS
                DispatchQueue.concurrentPerform(iterations: 2, execute: { _ in
                    wasteMemory()
                })
            }
        
            func testWasteWithEnumerateObjects() {
                // this crashes with 17 iterations or more with an EXC_BAD_ACCESS
                (Array(1...17) as NSArray).enumerateObjects(options: [.concurrent]) { _, _, _ in
                    wasteMemory()
                }
            }
        
        }
        • Re: Consistent crash when allocating large memory in DispatchQueue
          Ken Thomases Level 4 (705 points)

          Yes, you're exceeding the default stack size limit for secondary threads.  The main thread's default stack size is 8MB, but secondary threads are typically created with a 512KB stack limit.  https://developer.apple.com/library/archive/qa/qa1419/_index.html

           

          testWasteOnQueue() doesn't have the problem because synchronous dispatch usually runs on the current thread.  The others crash beyond a certain number of iterations because that's, apparently, the point at which they bother to shunt work to a secondary thread.

           

          You should either use heap-allocated memory instead of putting that much data on the stack, or use your own threads instead of GCD, since you can configure those with larger stack sizes.
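          The second option can be sketched roughly as follows (my own illustration, not code from this thread; the 16 MB figure and the `runWithLargeStack` helper name are invented for the example):

```swift
import Dispatch
import Foundation

// A hedged sketch: run stack-hungry work on a Thread whose stack size
// we choose, then block the caller until the work finishes.
// stackSize must be set before start() and should be a multiple of the
// 4 KB page size; 16 MB here is an arbitrary example value.
func runWithLargeStack(stackSize: Int = 16 * 1024 * 1024,
                       _ work: @escaping () -> Void) {
    let done = DispatchSemaphore(value: 0)
    let thread = Thread {
        work()
        done.signal()
    }
    thread.stackSize = stackSize
    thread.start()
    done.wait()
}

// Example: this closure gets ~16 MB of stack instead of the ~512 KB
// that a GCD worker thread would provide.
var result = 0
runWithLargeStack { result = 1 + 1 }
```

Waiting on a semaphore keeps the example simple; in real code you would more likely hand the thread a completion callback than block the caller.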

            • Re: Consistent crash when allocating large memory in DispatchQueue
              mwp Level 1 (10 points)

              Thanks! I wonder why the default stack size is so small, especially with the potential for rich Swift value-typed models taking up a lot more stack space. From your response and the docs, it looks like there isn’t any way to provide a hint to GCD to allocate more stack space, even with a custom queue. That’s a pity. I guess I’ll have to use good old-fashioned NSThreads.

                • Re: Consistent crash when allocating large memory in DispatchQueue
                  eskimo Apple Staff (10,445 points)

                  I guess I’ll have to use good old fashioned NSThreads

                  That’s a fine option.  It’s a common misconception that Dispatch is a complete replacement for traditional threading APIs.  That’s not true, something confirmed by the fact that we’ve not deprecated any of the traditional threading APIs (in fact, if you’re working in Swift you’ll notice that we added a new one, Thread, which you’ll probably want to use in preference to NSThread).  There are times when using a thread is absolutely the right thing to do.

                  especially with the potential for rich Swift value-typed models taking up a lot more stack space

                  Most Swift value types don’t use a lot of stack space.  Rather, they use copy-on-write, that is, a small stack-allocated structure combined with a (potentially) large, heap-allocated backing buffer.  There are two reasons for this:

                  • It reduces the stack impact.

                  • It improves the runtime performance because it makes copies much faster (you only need to copy the small structure and increment the reference count on the backing buffer).
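                  This is easy to check directly (my own illustration, not from the original reply): an Array’s inline footprint is just a pointer to its heap-allocated buffer, no matter how many elements it holds.

```swift
// Illustration: the built-in collections keep only a pointer-sized
// header inline; the elements live in a heap-allocated buffer.
let small: [Int] = [1]
let large = Array(repeating: 0, count: 1_000_000)
print(MemoryLayout.size(ofValue: small))  // 8 on 64-bit platforms
print(MemoryLayout.size(ofValue: large))  // also 8: same header size
```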

                  Your OctoThing generic is a cunning way to get Swift to use a lot of stack space.  I’ve not seen that trick before (-:  It’s also noteworthy that you had to employ that trick in your example because all the built-in value types — things like Array, String and Data — do not use a lot of stack space.

                  Share and Enjoy

                  Quinn “The Eskimo!”
                  Apple Developer Relations, Developer Technical Support, Core OS/Hardware
                  let myEmail = "eskimo" + "1" + "@apple.com"

                  • Re: Consistent crash when allocating large memory in DispatchQueue
                    john daniel Level 3 (300 points)

                    Why are you using stack space for this anyway? The heap is going to outperform the stack for data, even in highly efficient languages like C++.

                      • Re: Consistent crash when allocating large memory in DispatchQueue
                        mwp Level 1 (10 points)

                        My data model uses value types because I need the copy-on-write behavior and the automatic Codable/Equatable implementations. If I could allocate structs and enums on the heap to get around this issue, I would.

                          • Re: Consistent crash when allocating large memory in DispatchQueue
                            eskimo Apple Staff (10,445 points)

                            My data model uses value types because I need the copy-on-write behavior …

                            Unfortunately you don’t get copy-on-write for free when creating custom types.  If you want that, you’ll have to build it yourself.  This is perfectly feasible, the mechanisms are well documented, but it’s hard to offer concrete advice without knowing more about your actual data structure.  Care to go into more details?
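                            The hand-rolled copy-on-write mechanism Quinn alludes to looks roughly like this (a minimal sketch with invented `BigValue`/`Storage` names, assuming an array-backed payload):

```swift
// A class holds the bulk of the data on the heap.
final class Storage {
    var values: [Double]
    init(values: [Double]) { self.values = values }
}

// The struct keeps value semantics, but only a reference lives inline;
// the buffer is cloned just before a *shared* instance is mutated.
struct BigValue {
    private var storage = Storage(values: Array(repeating: 0, count: 4096))

    subscript(i: Int) -> Double {
        get { storage.values[i] }
        set {
            // Clone the buffer only if another BigValue also holds it.
            if !isKnownUniquelyReferenced(&storage) {
                storage = Storage(values: storage.values)
            }
            storage.values[i] = newValue
        }
    }
}

// Value semantics preserved: mutating the copy leaves the original alone.
let a = BigValue()
var b = a
b[0] = 1
```

`isKnownUniquelyReferenced(_:)` is the standard-library primitive that makes this pattern work; it is the same mechanism the built-in collections use internally.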

                            Share and Enjoy

                            Quinn “The Eskimo!”
                            Apple Developer Relations, Developer Technical Support, Core OS/Hardware
                            let myEmail = "eskimo" + "1" + "@apple.com"

                              • Re: Consistent crash when allocating large memory in DispatchQueue
                                mwp Level 1 (10 points)

                                Regardless of whether the compiler will optimize my structs with COW behavior or not, I still need the value semantics for my application (plus, the automatic Codable/Equatable/Hashable implementations for structs are very valuable). The model itself is a large combination of many-valued enums holding structs with numerous properties, all of which represent the essential complexity of the application. Reducing the size of the model isn't feasible, and converting some or all of it into reference types would be a major undertaking. The one possible memory-size reduction option, which is to change the enums to be indirect, has yielded some stack-size savings, but not enough to let us work with the model on a dispatch queue without crashing.
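                                The indirect-enum savings mentioned above can be seen with toy types (my own illustration; `Payload`, `InlineNode`, and `BoxedNode` are invented names):

```swift
// Marking an enum `indirect` stores its associated values behind a
// single heap reference, shrinking the enum's inline (stack) footprint.
struct Payload { let a, b, c, d: Double }  // 4 x 8 = 32 bytes

enum InlineNode { case leaf(Payload, Payload, Payload, Payload) }
indirect enum BoxedNode { case leaf(Payload, Payload, Payload, Payload) }

print(MemoryLayout<InlineNode>.size)  // 128: payloads stored inline
print(MemoryLayout<BoxedNode>.size)   // 8: just a pointer (64-bit)
```

The trade-off is an extra allocation and pointer chase per boxed value, which is presumably why it only bought "some" savings here rather than solving the problem outright.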

                                 

                                Anyway, everything works just fine on the main thread.

                                 

                                If I could change the per-queue stack size, or if I could ask the compiler to allocate the structs on the heap instead of the stack, the problem would be avoided. But since neither is possible, I think that manually creating Thread instances with a large stack size will be our best option. Thanks for the help!