
MTLDebugRenderCommandEncoder Error from Swift Charts
I've started using Swift Charts and since then get random crashes with the error:

    Thread 467: hit program assert

The console outputs the following at the time of the crash:

    -[MTLDebugRenderCommandEncoder setVertexBufferOffset:atIndex:]:1758: failed assertion `Set Vertex Buffer Offset Validation index(0) must have an existing buffer.`

I'm not using Metal directly, but this seems to be related to Swift Charts. I cannot work out the source of the issue from the stack trace, and the debugger shows the crash in libsystem_kernel.dylib, so it does not tie back to my code. I'm looking for ideas about where to start trying to find the source of the issue.

    0   libsystem_kernel.dylib    0x9764     __pthread_kill + 8
    1   libsystem_pthread.dylib   0x6c28     (Missing UUID 1f30fb9abdf932dba7098417666a7e45)
    2   libsystem_c.dylib         0x76ae8    abort + 180
    3   libsystem_c.dylib         0x75e44    __assert_rtn + 270
    4   Metal                     0x1426c4   MTLReportFailure.cold.1 + 46
    5   Metal                     0x11f22c   MTLReportFailure + 464
    6   Metal                     0x11552c   _MTLMessageContextEnd + 876
    7   MetalTools                0x95350    -[MTLDebugRenderCommandEncoder setVertexBufferOffset:atIndex:] + 272
    8   RenderBox                 0xa5e18    RB::RenderQueue::encode(RB::RenderQueue::EncoderState&) + 1804
    9   RenderBox                 0x7d5fc    RB::RenderFrame::encode(RB::RenderFrame::EncoderData&, RB::RenderQueue&) + 432
    10  RenderBox                 0x7d928    RB::RenderFrame::flush_pass(RB::RenderPass&, bool)::$_4::__invoke(void*) + 48
    11  libdispatch.dylib         0x4400     (Missing UUID 9897030f75d3374b8787322d3d72e096)
    12  libdispatch.dylib         0xba88     (Missing UUID 9897030f75d3374b8787322d3d72e096)
    13  libdispatch.dylib         0xc5f8     (Missing UUID 9897030f75d3374b8787322d3d72e096)
    14  libdispatch.dylib         0x17244    (Missing UUID 9897030f75d3374b8787322d3d72e096)
    15  libsystem_pthread.dylib   0x3074     (Missing UUID 1f30fb9abdf932dba7098417666a7e45)
    16  libsystem_pthread.dylib   0x1d94     (Missing UUID 1f30fb9abdf932dba7098417666a7e45)
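For reference, the charts involved are ordinary Swift Charts views along the lines of the sketch below; the type and data here are hypothetical placeholders, not the actual code from the crashing app, but they exercise the same RenderBox/Metal render path that appears in the stack trace.

    import SwiftUI
    import Charts

    // Hypothetical sample type and chart, standing in for the real app's charts.
    struct Sample: Identifiable {
        let id = UUID()
        let time: Date
        let value: Double
    }

    struct SampleChartView: View {
        let samples: [Sample]

        var body: some View {
            // Swift Charts draws through SwiftUI's Metal-backed pipeline,
            // which is where the validation assertion fires.
            Chart(samples) { sample in
                LineMark(
                    x: .value("Time", sample.time),
                    y: .value("Value", sample.value)
                )
            }
        }
    }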
1 reply · 0 boosts · 731 views · Nov ’23
TaskGroup lockup with more than 7 tasks
Platform: macOS 12.4, MacBook Pro 2.3GHz Quad-Core i5, 16GB RAM

I'm trying to read an OSLog archive concurrently because it is big and I don't need any of the data in order. Because the return from .getEntries is a sequence, the most efficient approach seems to be to iterate through it. Iterating the sequence from start to finish can take a long time, so I thought I could split it up into days and concurrently process each day. I'm using a task group to do this, and it works as long as the number of tasks is less than 8. When it works, I do get the result faster, but not a lot faster. I guess there is some overhead, but actually it seems that my log is dominated by the processing of one of the days. Ideally I want more concurrent tasks so I can break each day up into smaller blocks. But as soon as I try to create 8 or more tasks, I get a lockup with the following error posted to the console:

    enable_updates_common timed out waiting for updates to reenable

Here are my tests. First, a pure iterative approach with no tasks. Completion of this routine on my quad-core i5 takes 229s.

    import Foundation
    import OSLog

    // Iterative baseline: walk the last 5 days of the log in a single pass.
    func scanLogarchiveIterative(url: URL) async {
        do {
            let timer = Date()

            let logStore = try OSLogStore(url: url)
            let last5days = logStore.position(timeIntervalSinceEnd: -3600*24*5)
            let filteredEntries = try logStore.getEntries(at: last5days)

            var processedEntries: [String] = []

            for entry in filteredEntries {
                processedEntries.append(entry.composedMessage)
            }

            print("Completed iterative scan in: ", timer.timeIntervalSinceNow)
        } catch {
        }
    }

Next is a concurrent approach using a TaskGroup which creates 5 child tasks, one per day. Completion takes 181s. Faster, but the last day dominates, so there is not a huge benefit: most of the time is taken by the single task processing that day.

    // Concurrent version: one child task per day, each with its own OSLogStore.
    func scanLogarchiveConcurrent(url: URL) async {
        do {
            let timer = Date()

            var processedEntries: [String] = []

            try await withThrowingTaskGroup(of: [String].self) { group in
                let timestep = 3600*24
                for logSectionStartPosition in stride(from: 0, to: -3600*24*5, by: -1*timestep) {
                    group.addTask {
                        let logStore = try OSLogStore(url: url)
                        let filteredEntries = try logStore.getEntries(at: logStore.position(timeIntervalSinceEnd: TimeInterval(logSectionStartPosition)))
                        var processedEntriesConcurrent: [String] = []
                        // KVC hack to read the end-of-section date from the OSLogPosition.
                        let endDate = logStore.position(timeIntervalSinceEnd: TimeInterval(logSectionStartPosition + timestep)).value(forKey: "date") as? Date
                        for entry in filteredEntries {
                            if entry.date > (endDate ?? Date()) {
                                break
                            }
                            processedEntriesConcurrent.append(entry.composedMessage)
                        }
                        return processedEntriesConcurrent
                    }
                }

                for try await processedEntriesConcurrent in group {
                    print("received task completion")
                    processedEntries.append(contentsOf: processedEntriesConcurrent)
                }
            }

            print("Completed concurrent scan in: ", timer.timeIntervalSinceNow)
        } catch {
        }
    }

If I split this further to concurrently process half days, the app locks up. The console periodically prints:

    enable_updates_common timed out waiting for updates to reenable

If I pause the debugger, there seems to be a wait on a semaphore, which must be internal to the concurrency framework.
    // Same as above but with half-day blocks: 10 child tasks, which locks up.
    func scanLogarchiveConcurrentManyTasks(url: URL) async {
        do {
            let timer = Date()

            var processedEntries: [String] = []

            try await withThrowingTaskGroup(of: [String].self) { group in
                let timestep = 3600*12
                for logSectionStartPosition in stride(from: 0, to: -3600*24*5, by: -1*timestep) {
                    group.addTask {
                        let logStore = try OSLogStore(url: url)
                        let filteredEntries = try logStore.getEntries(at: logStore.position(timeIntervalSinceEnd: TimeInterval(logSectionStartPosition)))
                        var processedEntriesConcurrent: [String] = []
                        let endDate = logStore.position(timeIntervalSinceEnd: TimeInterval(logSectionStartPosition + timestep)).value(forKey: "date") as? Date
                        for entry in filteredEntries {
                            if entry.date > (endDate ?? Date()) {
                                break
                            }
                            processedEntriesConcurrent.append(entry.composedMessage)
                        }
                        return processedEntriesConcurrent
                    }
                }

                for try await processedEntriesConcurrent in group {
                    print("received task completion")
                    processedEntries.append(contentsOf: processedEntriesConcurrent)
                }
            }

            print("Completed concurrent scan in: ", timer.timeIntervalSinceNow)
        } catch {
        }
    }

I read that it may be possible to get more insight into concurrency issues by setting the environment variable LIBDISPATCH_COOPERATIVE_POOL_STRICT=1. This stops the lockup, but only because each task then runs sequentially, so there is no longer any benefit from concurrency.

I cannot see where to go next, apart from accepting the linear processing time. It also feels like doing any concurrency (even with fewer than 8 tasks) is risky, as there is no documentation to suggest that 8 is a limit. Could it be that the sequence from OSLog's .getEntries is not suitable for concurrent access and shouldn't be used this way? Again, I don't see any documentation to suggest this is the case.

Finally, the processing of each entry is so light that there is little benefit to offloading just the processing to other tasks. The time taken seems to be dominated purely by iterating the sequence. In reality I do use a predicate in .getEntries, which helps a bit, but it's not enough, and concurrency would still be valuable if I could process 1-hour blocks concurrently.
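One pattern worth trying, assuming the lockup comes from saturating Swift Concurrency's cooperative thread pool (whose width matches the core count, i.e. 8 on typical hardware): cap the number of in-flight child tasks and only add the next block as each one finishes. The helper below is a hypothetical sketch, not code from the original post, and whether it avoids the enable_updates_common timeout depends on what OSLogStore does internally.

    import Foundation

    // Sketch: run `jobs` with at most `maxConcurrent` child tasks in flight,
    // starting the next pending job each time one completes.
    func runBounded<T: Sendable>(
        maxConcurrent: Int,
        jobs: [@Sendable () async throws -> T]
    ) async throws -> [T] {
        var results: [T] = []
        try await withThrowingTaskGroup(of: T.self) { group in
            var pending = jobs.makeIterator()

            // Prime the group with the first `maxConcurrent` jobs.
            for _ in 0..<maxConcurrent {
                if let job = pending.next() {
                    group.addTask { try await job() }
                }
            }

            // As each task finishes, collect its result and start another job.
            while let result = try await group.next() {
                results.append(result)
                if let job = pending.next() {
                    group.addTask { try await job() }
                }
            }
        }
        return results
    }

With this, the half-day (or 1-hour) scans could be wrapped as jobs and capped at something like ProcessInfo.processInfo.activeProcessorCount - 1, keeping at least one cooperative thread free.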
2 replies · 0 boosts · 2.2k views · Jun ’22
Should Hotspot Helper give up-to-date RSSI values for unmanaged networks?
I am using Hotspot Helper to connect to certain networks which I have control over. In my app I display the currently connected network's information and show an indication of the signal strength. I can call supportedNetworkInterfaces() to get the NEHotspotNetwork object representing the Wi-Fi interface.

When the currently connected Wi-Fi network is one that I am managing, I get live updates of the signal strength. When I am not managing the current network, I get an RSSI upon connection, but every call after that seems to report a stale RSSI from connection time; i.e. the RSSI is being reported but never updates.

Should I be able to receive an up-to-date RSSI for the currently connected Wi-Fi whether it is managed or not? It seems strange that a value is returned but is not current. I cannot find any obvious bugs, so I want to check what Hotspot Helper should do in this case before going any further. Any tips for why this might happen would be appreciated.
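For reference, the polling described above looks roughly like the sketch below (assuming an app that already holds the Hotspot Helper entitlement; the timer interval and logging are illustrative, not the exact code in question).

    import Foundation
    import NetworkExtension

    // Poll the Wi-Fi interface's signal strength once per second.
    // supportedNetworkInterfaces() returns the interface as an NEHotspotNetwork;
    // signalStrength is normalized to 0.0...1.0.
    func startSignalStrengthPolling() -> Timer {
        return Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
            guard let interfaces = NEHotspotHelper.supportedNetworkInterfaces(),
                  let network = interfaces.first as? NEHotspotNetwork else {
                return
            }
            // On unmanaged networks this value appears stuck at the
            // connection-time reading rather than updating.
            print("SSID: \(network.ssid), signal: \(network.signalStrength)")
        }
    }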
4 replies · 0 boosts · 1.6k views · Jul ’16