Posts

Post not yet marked as solved
2 Replies
521 Views
My app allows the user to select different Stable Diffusion models, and I noticed a very strange issue concerning memory management. When using the StableDiffusionPipeline (https://github.com/apple/ml-stable-diffusion) with cpu+gpu, around 1.5 GB of memory is not properly released after generateImages is called and the pipeline is released. When generating more images with a new StableDiffusionPipeline object, that memory is reused and stays stable at around 1.5 GB after inference completes. Everything, especially the MLModels, is released properly; my guess is that MLModel creates some persistent cache.

Here is the problem: when a different MLModel is used afterwards, another 1.5 GB is not released and stays resident. With a third model, this totals 4.5 GB of unreleased, persistent memory. At first I thought this was a bug in the StableDiffusionPipeline, but I was able to reproduce the behavior in a very minimal Objective-C sample without ARC:

    MLArrayBatchProvider *batchProvider =
        [[MLArrayBatchProvider alloc] initWithFeatureProviderArray:@[<VALID FEATURE PROVIDER>]];
    MLModelConfiguration *config = [[MLModelConfiguration alloc] init];
    config.computeUnits = MLComputeUnitsCPUAndGPU;
    MLModel *model = [[MLModel modelWithContentsOfURL:[NSURL fileURLWithPath:<VALID PATH TO .mlmodelc SD 1.5 FILE>]
                                        configuration:config
                                                error:&error] retain];
    id<MLBatchProvider> returnProvider = [model predictionsFromBatch:batchProvider error:&error];
    [model release];
    [config release];
    [batchProvider release];

After running this minimal code, 1.5 GB of persistent memory remains that is never released during the lifetime of the app. This only happens on macOS 14(.1) Sonoma and on iOS 17(.1), not on macOS 13 Ventura. On Ventura, everything works as expected and the memory is released once predictionsFromBatch: is done and the model is released.
Some observations:
- This only happens with cpu+gpu, not with cpu+ane (since that memory is allocated out of process) and not with cpu-only.
- It does not matter which Stable Diffusion model is used; I tried custom SD-derived models as well as the Apple-provided SD 1.5 models.
- I reproduced the issue on a MacBook Pro 16" M1 Max with macOS 14.1, an iPhone 12 mini with iOS 17.0.3, and an iPad Pro M2 with iPadOS 17.1.
- The memory that "leaks" consists mostly of huge malloc blocks of 100-500 MB in size, or of IOSurfaces.
- This memory is allocated during predictionsFromBatch:, not while loading the model.
- Loading and unloading a model does not leak memory; only when predictionsFromBatch: is called is the huge memory chunk allocated, and it is never freed during the lifetime of the app.

Does anybody have any clue what is going on? I highly suspect that I am missing something crucial, but my colleagues and I have looked everywhere trying to find a way to release this leaked/cached memory.
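One mitigation we have been considering (a sketch only, built on the unconfirmed assumption that the resident memory is a per-model cache that gets reused across instances of the same model): keep one MLModel alive per compiled model URL for the lifetime of the app, so the ~1.5 GB is paid at most once per model instead of accumulating across reloads. ModelCache and modelForURL:error: are hypothetical names, and the sample follows the non-ARC conventions of the snippet above.

```objc
#import <CoreML/CoreML.h>

// Hypothetical non-ARC cache that hands out one shared MLModel per compiled
// model URL. Assumption: the unreleased memory is tied to model identity, so
// reusing the same MLModel instance avoids allocating another resident chunk.
@interface ModelCache : NSObject
+ (instancetype)sharedCache;
- (MLModel *)modelForURL:(NSURL *)url error:(NSError **)error;
@end

@implementation ModelCache {
    NSMutableDictionary<NSURL *, MLModel *> *_models;
}

+ (instancetype)sharedCache {
    static ModelCache *cache;
    static dispatch_once_t once;
    dispatch_once(&once, ^{ cache = [[ModelCache alloc] init]; });
    return cache;
}

- (instancetype)init {
    if ((self = [super init])) {
        _models = [[NSMutableDictionary alloc] init];
    }
    return self;
}

- (void)dealloc {
    [_models release];
    [super dealloc];
}

- (MLModel *)modelForURL:(NSURL *)url error:(NSError **)error {
    MLModel *model = _models[url];
    if (!model) {
        MLModelConfiguration *config = [[[MLModelConfiguration alloc] init] autorelease];
        config.computeUnits = MLComputeUnitsCPUAndGPU;
        model = [MLModel modelWithContentsOfURL:url configuration:config error:error];
        if (model) {
            _models[url] = model;  // retained by the dictionary
        }
    }
    return model;
}
@end
```

This does not free the memory either, of course; it only bounds the growth to the set of models the user actually touches, which may or may not be acceptable depending on how many models an app offers.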
Posted by MendelK.
Post not yet marked as solved
7 Replies
3.8k Views
Hi! I am the lead developer of MacFamilyTree, and we received a very strange rejection when submitting a bugfix release. It seems we are no longer allowed to use "Mac" as the app name prefix:

"Your app uses Mac in the app name in a manner that is not consistent with Apple's trademark guidelines. In regards to the 5.2.5 rejection, your app uses Mac in the app name in a manner that is not consistent with Apple's trademark guidelines. Indicating Mac compatibility in the app name is not necessary for the Mac App Store. It would be appropriate to remove the term “Mac” from the app’s name before resubmitting for review."

It is really odd, since Apple reviewed and approved an update of MacFamilyTree just 5 days prior to this rejection. Additionally, Apple is currently promoting MacFamilyTree big time on the Mac App Store front page. Looking at the trademark guidelines for the Mac trademark, we are in full compliance with them. I see numerous apps and companies using the "Mac" prefix on the Mac or iOS App Store. Has anybody had a similar experience recently? Is this just an overzealous app reviewer, or some kind of new guideline that is not yet published?
Posted by MendelK.
Post not yet marked as solved
2 Replies
1.4k Views
Hi! Anyone experiencing the same? When downloading a large zone from the private database with a large number of changes, adds, and deletes (say 20k records: 15k deletes and 5k changes), cloudd crashes reproducibly along the way. This only happens when fetchAllChanges = YES. When it is set to NO, or when using the older CKFetchRecordChangesOperation, everything works as expected.

Here are some stacks where cloudd is crashing:

Thread 5 Crashed:: Dispatch queue: com.apple.cloudkit.fetchAllZoneChanges.callback.0x7fa2cde15450
0   com.apple.cloudkit.CloudKitDaemon   0x00007fffd2420004 CKDPQueryRetrieveRequestReadFrom + 506
1   libdispatch.dylib                   0x00007fffdcb41128 _dispatch_client_callout + 8
2   libdispatch.dylib                   0x00007fffdcb578e8 _dispatch_queue_serial_drain + 209
3   libdispatch.dylib                   0x00007fffdcb49d41 _dispatch_queue_invoke + 1046
4   libdispatch.dylib                   0x00007fffdcb579d2 _dispatch_queue_serial_drain + 443
5   libdispatch.dylib                   0x00007fffdcb49d41 _dispatch_queue_invoke + 1046
6   libdispatch.dylib                   0x00007fffdcb579d2 _dispatch_queue_serial_drain + 443
7   libdispatch.dylib                   0x00007fffdcb49d41 _dispatch_queue_invoke + 1046
8   libdispatch.dylib                   0x00007fffdcb42ee0 _dispatch_root_queue_drain + 476
9   libdispatch.dylib                   0x00007fffdcb42cb7 _dispatch_worker_thread3 + 99
10  libsystem_pthread.dylib             0x00007fffdcd8d746 _pthread_wqthread + 1299
11  libsystem_pthread.dylib             0x00007fffdcd8d221 start_wqthread + 13

Thread 5 Crashed:: Dispatch queue: com.apple.cloudkit.fetchAllZoneChanges.callback.0x7fbcb1575f20
0   libobjc.A.dylib                     0x00007fffdc2938a4 objc_loadWeakRetained + 166
1   com.apple.cloudkit.CloudKitDaemon   0x00007fffd24292b4 __82-[CKDFetchRecordZoneChangesOperation _handleRecordChange:perRequestSchedulerInfo:]_block_invoke_2 + 47
2   libdispatch.dylib                   0x00007fffdcb56680 _dispatch_block_async_invoke_and_release + 75
3   libdispatch.dylib                   0x00007fffdcb41128 _dispatch_client_callout + 8
4   libdispatch.dylib                   0x00007fffdcb578e8 _dispatch_queue_serial_drain + 209
5   libdispatch.dylib                   0x00007fffdcb49d41 _dispatch_queue_invoke + 1046
6   libdispatch.dylib                   0x00007fffdcb579d2 _dispatch_queue_serial_drain + 443
7   libdispatch.dylib                   0x00007fffdcb49d41 _dispatch_queue_invoke + 1046
8   libdispatch.dylib                   0x00007fffdcb579d2 _dispatch_queue_serial_drain + 443
9   libdispatch.dylib                   0x00007fffdcb49d41 _dispatch_queue_invoke + 1046
10  libdispatch.dylib                   0x00007fffdcb42ee0 _dispatch_root_queue_drain + 476
11  libdispatch.dylib                   0x00007fffdcb42cb7 _dispatch_worker_thread3 + 99
12  libsystem_pthread.dylib             0x00007fffdcd8d746 _pthread_wqthread + 1299
13  libsystem_pthread.dylib             0x00007fffdcd8d221 start_wqthread + 13

Thread 5 Crashed:: Dispatch queue: com.apple.cloudkit.fetchAllZoneChanges.callback.0x7faca4525630
0   com.apple.cloudkit.CloudKitDaemon   0x00007fff88fc0004 -[CKDPCSCacheRecordFetchOperation _decryptPCS] + 6466
1   libdispatch.dylib                   0x00007fff936e7128 _dispatch_client_callout + 8
2   libdispatch.dylib                   0x00007fff936fd8e8 _dispatch_queue_serial_drain + 209
3   libdispatch.dylib                   0x00007fff936efd41 _dispatch_queue_invoke + 1046
4   libdispatch.dylib                   0x00007fff936fd9d2 _dispatch_queue_serial_drain + 443
5   libdispatch.dylib                   0x00007fff936efd41 _dispatch_queue_invoke + 1046
6   libdispatch.dylib                   0x00007fff936fd9d2 _dispatch_queue_serial_drain + 443
7   libdispatch.dylib                   0x00007fff936efd41 _dispatch_queue_invoke + 1046
8   libdispatch.dylib                   0x00007fff936e8ee0 _dispatch_root_queue_drain + 476
9   libdispatch.dylib                   0x00007fff936e8cb7 _dispatch_worker_thread3 + 99
10  libsystem_pthread.dylib             0x00007fff93933746 _pthread_wqthread + 1299
11  libsystem_pthread.dylib             0x00007fff93933221 start_wqthread + 13

Already filed a bug report: #28461661.
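The workaround mentioned above (fetchAllChanges = NO) can be sketched like this: drive CKFetchRecordZoneChangesOperation one batch at a time and recurse while moreComing is YES. This is an illustrative ARC sketch, not our production code; fetchChangesInZone:sinceToken:database: is a hypothetical method name, and token persistence and error handling are left as stubs.

```objc
#import <CloudKit/CloudKit.h>

// Fetch one batch of zone changes, then continue manually while the server
// reports more. With fetchAllChanges = NO the operation returns after a
// single batch instead of looping internally, which sidesteps the cloudd
// crash for us.
- (void)fetchChangesInZone:(CKRecordZoneID *)zoneID
                sinceToken:(CKServerChangeToken *)token
                  database:(CKDatabase *)database {
    CKFetchRecordZoneChangesOptions *options = [[CKFetchRecordZoneChangesOptions alloc] init];
    options.previousServerChangeToken = token;

    CKFetchRecordZoneChangesOperation *operation =
        [[CKFetchRecordZoneChangesOperation alloc] initWithRecordZoneIDs:@[zoneID]
                                                   optionsByRecordZoneID:@{zoneID: options}];
    operation.fetchAllChanges = NO;  // one batch per operation

    operation.recordChangedBlock = ^(CKRecord *record) {
        // Persist the changed record locally.
    };
    operation.recordWithIDWasDeletedBlock = ^(CKRecordID *recordID, NSString *recordType) {
        // Remove the deleted record locally.
    };
    operation.recordZoneFetchCompletionBlock = ^(CKRecordZoneID *fetchedZoneID,
                                                 CKServerChangeToken *newToken,
                                                 NSData *clientChangeTokenData,
                                                 BOOL moreComing,
                                                 NSError *error) {
        if (error) {
            // Handle or retry; do not advance the saved change token.
            return;
        }
        // Save newToken, then fetch the next batch if the server has more.
        if (moreComing) {
            [self fetchChangesInZone:fetchedZoneID sinceToken:newToken database:database];
        }
    };
    [database addOperation:operation];
}
```

The trade-off is one operation round trip per batch, but for us that was far preferable to a reproducible daemon crash mid-sync.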
Posted by MendelK.
Post not yet marked as solved
16 Replies
10k Views
Hi, we encountered an issue saving a CKShare that uses a larger parent hierarchy of more than 5,000 records (see https://forums.developer.apple.com/thread/64194). Sadly, we just received an answer from the (very helpful and responsive) Apple Developer Technical Support that CKShare is currently only designed to share a few hundred records. The hard limit seems to be 5,000 records when creating a CKShare; the recommendation is 200.

Be sure to design your apps with these limits in mind when you are planning to use CKShare. It would be great if anyone with a real-world use case for an app could file a feature request (http://bugreport.apple.com). Mine is #28548349.
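Given those limits, a defensive check before creating the share is worth considering. The sketch below counts the records hanging off the prospective root and refuses to share past the hard limit; recordsInHierarchyOfRecord: is a hypothetical helper that would walk your own local model graph, and the error domain/code are placeholders.

```objc
#import <CloudKit/CloudKit.h>

// Limits as reported by Apple DTS (see above): ~5,000 records is the hard
// limit when creating a CKShare, 200 is the recommendation.
static const NSUInteger kCKShareRecommendedLimit = 200;
static const NSUInteger kCKShareHardLimit = 5000;

- (CKShare *)shareForRootRecord:(CKRecord *)rootRecord error:(NSError **)error {
    // Hypothetical helper: counts rootRecord plus all records whose parent
    // chain leads to it, using the app's local store.
    NSUInteger count = [self recordsInHierarchyOfRecord:rootRecord];

    if (count > kCKShareHardLimit) {
        // Saving this share will fail server-side; surface an error instead.
        if (error) {
            *error = [NSError errorWithDomain:@"MyAppErrorDomain"
                                         code:1
                                     userInfo:@{NSLocalizedDescriptionKey:
                                                    @"This hierarchy is too large to share."}];
        }
        return nil;
    }
    if (count > kCKShareRecommendedLimit) {
        NSLog(@"Warning: sharing %lu records; the recommended maximum is %lu.",
              (unsigned long)count, (unsigned long)kCKShareRecommendedLimit);
    }
    return [[CKShare alloc] initWithRootRecord:rootRecord];
}
```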
Posted by MendelK.