I see keys: current, downloaded, and notDownloaded, but no variables for progress. The spinning beachball will have to do.
Thanks for letting me know that URLSession was a dead end though.
Turns out, I was declaring new instances of a structure inside each loop. Rookie mistake. I've since replaced most of my DispatchQueue calls with Task { } and Sendable functions.
Works great!
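For anyone following along, here is a minimal sketch of the refactor described above; `ModelPoint` and `compute` are placeholder names, not the actual types from my project:

```swift
import Foundation

// One structure instance created outside the loop (the original bug was
// allocating a new one every pass), with the work launched in a Task
// instead of a DispatchQueue.
struct ModelPoint: Sendable {
    var value = 0.0
}

func compute(points: inout [ModelPoint]) {
    var p = ModelPoint()          // created once, not once per iteration
    for i in points.indices {
        p.value = Double(i) * 2.0
        points[i] = p
    }
}

// Usage: run the loop off the main thread with structured concurrency.
// Task.detached {
//     var pts = [ModelPoint](repeating: .init(), count: 1_000)
//     compute(points: &pts)
// }
```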
Just to be clear, the DispatchQueue call morphs to:
DispatchQueue.main.async { self.loop_count += Int(pow(Double(loopSet.count), 6.0)) }
It is not continuous, but it need not be.
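A hedged alternative using the same names from the snippet above: the identical increment expressed with main-actor isolation instead of DispatchQueue.main.async. The exponent 6 is taken straight from the original line; the class name is mine.

```swift
import Foundation

@MainActor final class LoopCounter {
    var loop_count = 0
    func record(loopSetCount: Int) {
        // Same arithmetic as the DispatchQueue.main.async version.
        loop_count += Int(pow(Double(loopSetCount), 6.0))
    }
}

// From background work:
// Task { await counter.record(loopSetCount: loopSet.count) }
```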
The alerts/warnings have returned. I added -DACCELERATE_NEW_LAPACK under "Build Settings" → "Apple Clang - Custom Compiler Flags" for both Debug and Release, but the warnings persist. Nevertheless, the build succeeds and LAPACK generates data.
FYI: "man sysctlbyname" does deliver a man page, but "which sysctlbyname" gives "sysctlbyname not found", and "mdfind sysctlbyname" returns nothing. (It's a C library function, not a command-line tool, which is why `which` can't find it.)
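Since sysctlbyname(3) is a libc function rather than a shell command, it has to be called from code. A small Swift wrapper for reading a string-valued key might look like this (the key name in the usage comment is just an example):

```swift
import Foundation

// Two-call pattern: first ask for the buffer size, then fetch the value.
func sysctlString(_ name: String) -> String? {
    var size = 0
    guard sysctlbyname(name, nil, &size, nil, 0) == 0 else { return nil }
    var buffer = [CChar](repeating: 0, count: size)
    guard sysctlbyname(name, &buffer, &size, nil, 0) == 0 else { return nil }
    return String(cString: buffer)
}

// Usage (macOS): sysctlString("hw.model") returns the machine model string.
```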
OK, that's all the unique code. If you cut and paste into a new project of your making, you can run it and get the same errors. Thanks.
NO. The runtime actually aborts at aCoder.encode(e_Two, forKey: CodingKeys.e_Two.rawValue) during a save-to-file function.
The runtime aborts when it hits a classTwo.init() at self.e_Two = e_Two ?? [class...
Nope, my class has optional arrays of unknown count.
Actually, since the sizes of the structures ARE known when they are populated (they are class instances with optional variables), I could use that full size. It may be inefficient disk-wise, but it's the best option so far.
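For reference, a minimal sketch of how an optional array can survive an NSCoder round trip without aborting: encode only when present, and tolerate absence on decode. The class and property names here are assumptions modeled on the identifiers quoted earlier in the thread, not my actual code:

```swift
import Foundation

final class ClassTwo: NSObject, NSSecureCoding {
    static var supportsSecureCoding: Bool { true }

    var e_Two: [Double]?   // optional array of unknown count

    init(e_Two: [Double]? = nil) { self.e_Two = e_Two }

    func encode(with coder: NSCoder) {
        // Skip the key entirely when the array is nil.
        if let e_Two { coder.encode(e_Two, forKey: "e_Two") }
    }

    required init?(coder: NSCoder) {
        // decodeObject returns nil for an absent key; no crash.
        e_Two = coder.decodeObject(of: [NSArray.self, NSNumber.self],
                                   forKey: "e_Two") as? [Double]
    }
}
```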
Thanks, but I was looking for faster computation of heavily nested for-loop modeling. My research shows GPUs are not so useful here.
The action of a button { inside the curly brackets } seems to act like a DispatchQueue.async call.
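To illustrate that observation: in SwiftUI the action closure itself runs on the main actor, so long work is usually pushed into a Task so the UI stays responsive. A hedged sketch (view and label names are mine):

```swift
import SwiftUI

struct ComputeButton: View {
    @State private var running = false

    var body: some View {
        Button("Run model") {
            // This closure runs on the main actor; spawn detached work
            // so heavy loops don't block the interface.
            running = true
            Task.detached(priority: .userInitiated) {
                // ... heavy nested-loop computation here ...
                await MainActor.run { running = false }
            }
        }
        .disabled(running)
    }
}
```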
I assume ProgressView() is still a work-in-progress at Apple. Their current example code is clearly non-real-world, i.e., useless.
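For what it's worth, a minimal ProgressView that actually updates from ongoing work looks roughly like this; the total, step size, and sleep interval are illustrative placeholders, not values from my model:

```swift
import SwiftUI

struct LoopProgress: View {
    @State private var done = 0.0
    let total = 1_000_000.0

    var body: some View {
        ProgressView(value: done, total: total)
            .task {
                // Simulated work: in practice, update `done` from the
                // real loop on the main actor at a coarse interval.
                for i in stride(from: 0.0, through: total, by: total / 100) {
                    done = i
                    try? await Task.sleep(nanoseconds: 10_000_000)
                }
            }
    }
}
```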
Fred Weasley: "Blimey, what a waste of parchment"
I didn't mean to abandon this thread; it's just that I've tested the code and discovered the nested for-loops execute over 1.5E14 times and would require over 650,000 days to compute. A progress indicator is moot. I'm looking into whether GPU efforts would even help. It's mostly an issue of getting the results to memory and then to storage: since the memory required is tens if not hundreds of GB, I/O speed dictates, not computational speed. So I'm also rethinking compromises. In the meantime, Claud31 did answer my question in a few different ways, so I'll close the thread.
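As a sanity check on the two figures quoted above (what one "iteration" costs is whatever the innermost loop body does):

```swift
// 1.5E14 iterations over 650,000 days implies roughly 2,700 per second.
let iterations = 1.5e14
let seconds = 650_000.0 * 86_400.0   // days converted to seconds
print(iterations / seconds)          // ≈ 2.67e3 iterations/s
```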