After upgrading to Xcode 16 RC, in an old Objective-C project, I put the following code directly in the view controller set up from the AppDelegate:
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    UIButton *b = [[UIButton alloc] initWithFrame:CGRectMake(100, 100, 44, 44)];
    [b setTitle:@"title" forState:UIControlStateNormal];
    [self.view addSubview:b];
    [b addTarget:self action:@selector(onB:) forControlEvents:UIControlEventTouchUpInside];
}

- (IBAction)onB:(id)sender {
    PHPickerConfiguration *config = [[PHPickerConfiguration alloc] initWithPhotoLibrary:PHPhotoLibrary.sharedPhotoLibrary];
    config.preferredAssetRepresentationMode = PHPickerConfigurationAssetRepresentationModeCurrent;
    config.selectionLimit = 1;
    config.filter = nil;
    PHPickerViewController *picker = [[PHPickerViewController alloc] initWithConfiguration:config];
    picker.modalPresentationStyle = UIModalPresentationFullScreen;
    picker.delegate = self;
    [self presentViewController:picker animated:YES completion:nil];
}

- (void)picker:(PHPickerViewController *)picker didFinishPicking:(NSArray<PHPickerResult *> *)results {
}
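For reference, the delegate callback is left empty on purpose in the repro; a complete handler would normally dismiss the picker and load the result, roughly like this (a sketch, not part of the original repro):

```objc
- (void)picker:(PHPickerViewController *)picker didFinishPicking:(NSArray<PHPickerResult *> *)results {
    [picker dismissViewControllerAnimated:YES completion:nil];
    NSItemProvider *provider = results.firstObject.itemProvider;
    if ([provider canLoadObjectOfClass:UIImage.class]) {
        [provider loadObjectOfClass:UIImage.class completionHandler:^(id<NSItemProviderReading> object, NSError *error) {
            dispatch_async(dispatch_get_main_queue(), ^{
                // Use the picked image (object is a UIImage) on the main queue.
            });
        }];
    }
}
```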
Environment: Simulator iPhone 15 Pro (iOS 18)
Before this version (iOS 17.4), tapping the button presented the system photo picker normally (its top edge stayed within the safe area guide). Now the picker's top edge aligns with the top of the window, and tapping a photo cell gets no response.
If I create a new target and use the same code, the photo picker page does not have this problem.
Therefore, I suspect the old project's .xcodeproj, Info.plist, build settings, or build phases may lack some default configuration key required by the new version (the project was created years ago, on iOS 13 or earlier), but I cannot confirm the actual cause.
On iOS 18.0 there are these additional console messages:
objc[79039]: Class UIAccessibilityLoaderWebShared is implemented in both /Library/Developer/CoreSimulator/Volumes/iOS_22A3351/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 18.0.simruntime/Contents/Resources/RuntimeRoot/System/Library/AccessibilityBundles/WebCore.axbundle/WebCore (0x198028328) and /Library/Developer/CoreSimulator/Volumes/iOS_22A3351/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 18.0.simruntime/Contents/Resources/RuntimeRoot/System/Library/AccessibilityBundles/WebKit.axbundle/WebKit (0x1980fc398). One of the two will be used. Which one is undefined.
AX Safe category class 'SLHighlightDisambiguationPillViewAccessibility' was not found!
Has anyone encountered the same issue as me?
In the iOS app I'm developing, I've noticed that since upgrading to iOS 17 (Xcode 15.1), crashes of this type occur frequently. The crashes are random and can't be reliably reproduced. Below is a typical crash report:
CrashReporter Key: fd24cf14a51d73ebfc1852cccb1b8d50822b247c
Hardware Model: iPhone11,2
Process: MyApp [89057]
Path: /private/var/containers/Bundle/Application/06B982E0-B818-48A9-B2D1-F28999EC3BC0/MyApp.app/MyApp
Identifier: com.company.MyApp
Version: 2.0.0 (72)
Code Type: ARM-64 (Native)
Role: Foreground
Parent Process: launchd [1]
Coalition: com.company.MyApp [4659]
Date/Time: 2024-03-24 14:50:46.3982 +0800
Launch Time: 2024-03-24 14:38:38.1438 +0800
OS Version: iPhone OS 17.3.1 (21D61)
Release Type: User
Baseband Version: 6.00.00
Report Version: 104
Exception Type: EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x000000018c44e838
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [89057]
Triggered by Thread: 0
Kernel Triage:
VM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter
VM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter
VM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter
VM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter
VM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter
Thread 0 name: Dispatch queue: com.apple.main-thread
Thread 0 Crashed:
0 libobjc.A.dylib 0x18c44e838 object_getClass + 48
1 Foundation 0x1930807b4 _NSKeyValueObservationInfoGetObservances + 264
2 Foundation 0x19307fc7c NSKeyValueWillChangeWithPerThreadPendingNotifications + 232
3 QuartzCore 0x19572f14c CAAnimation_setter(CAAnimation*, unsigned int, _CAValueType, void const*) + 128
4 QuartzCore 0x19574a6b4 -[CAAnimation setBeginTime:] + 52
5 QuartzCore 0x1957485b4 CA::Layer::commit_animations(CA::Transaction*, double (*)(CA::Layer*, double, void*), void (*)(CA::Layer*, CA::Render::Animation*, void*), void (*)(CA::Layer*, __CFString const*, void*), CA::Render::TimingList* (*)(CA::Layer*, void*), void*) + 740
6 QuartzCore 0x195700bf0 invocation function for block in CA::Context::commit_transaction(CA::Transaction*, double, double*) + 148
7 QuartzCore 0x195700af8 CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 368
8-14 QuartzCore 0x195700a84 CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 252
15 QuartzCore 0x195745248 CA::Context::commit_transaction(CA::Transaction*, double, double*) + 11192
16 QuartzCore 0x19573bb80 CA::Transaction::commit() + 648
17 QuartzCore 0x19573b828 CA::Transaction::flush_as_runloop_observer(bool) + 88
18 CoreFoundation 0x1940ff7bc __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 36
19 CoreFoundation 0x1940fe1c4 __CFRunLoopDoObservers + 548
20 CoreFoundation 0x1940fd8e0 __CFRunLoopRun + 1028
21 CoreFoundation 0x1940fd3f8 CFRunLoopRunSpecific + 608
22 GraphicsServices 0x1d768b4f8 GSEventRunModal + 164
23 UIKitCore 0x1965238a0 -[UIApplication _run] + 888
24 UIKitCore 0x196522edc UIApplicationMain + 340
25 MyApp 0x102c1f014 main + 140
26 dyld 0x1b6e52dcc start + 2240
Looking up the Exception Type and Termination Reason, I found that Apple's documentation says EXC_BREAKPOINT (SIGTRAP) with SIGNAL 5 Trace/BPT trap: 5 can be caused by the Swift runtime's intentional crashing mechanism, mainly:
If you use the ! operator to force unwrap an optional value that’s nil, or if you force a type downcast that fails with the as! operator, the Swift runtime catches these errors and intentionally crashes the app.
For details, see the link: https://developer.apple.com/documentation/xcode/addressing-crashes-from-swift-runtime-errors
My project mixes Objective-C and Swift. The crash usually occurs after a button tap triggers a change in UIView properties, mostly layout-related. None of the Swift code in the project touches this kind of UI, so I speculate it might not be a Swift runtime error, but I'm unsure what else could cause the crash above.
A common denominator in all the similar crash reports is that they occur on the main thread inside system framework calls, and all show multiple instances of
VM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter
with CA::Layer::commit_if_needed always on the stack. I noticed many of the crashes relate to CALayer property setters internally driving CAAnimation, so I added:
typedef void (^VoidBlock)(void);

@implementation CALayer (Animation)
/// Run a block with implicit animations disabled, to prevent crashes.
+ (void)disableAnimation:(VoidBlock)block {
    [CATransaction begin];
    [CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];
    block();
    [CATransaction commit];
}
@end
Calling [CALayer disableAnimation:^{ view.layer.someProperty = someValue; }] to disable animations has prevented some crashes, but I'm powerless in situations like the stack trace above, where every frame is inside system frameworks.
I've also noticed other similar crash issues on forums:
EXC_BREAKPOINT - libobjc.A.dylib object_getClass Crash on the main thread.
The author experienced this issue after iOS 16 and iOS 17, with very similar stack information to mine.
I suspect other potential causes might include:
Whether it's related to KVO in UI code not being correctly released.
Whether it involves calls to GPU resources from other threads. I've rewritten most of the code to ensure no GPU-related image operations occur on other threads during CoreAnimation runtime.
Whether it's related to high memory usage and peak virtual memory. My app is related to image processing, and opening 4K photos for processing typically consumes more than 500MB of memory.
If you've encountered similar situations or can help identify potential causes, please advise. Many thanks!
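On the first suspicion (unbalanced KVO): a minimal, hypothetical sketch (MyObservingView and the "position" key path are illustrative, not from my project) of the pairing that, when broken, can crash much later inside _NSKeyValueObservationInfoGetObservances during a CA commit:

```objc
#import <UIKit/UIKit.h>

@interface MyObservingView : UIView
@end

@implementation MyObservingView
- (instancetype)initWithFrame:(CGRect)frame {
    if (self = [super initWithFrame:frame]) {
        // Every addObserver:... must be balanced by removeObserver:... before
        // either side deallocates; a stale observance crashes later, deep
        // inside KVO's change notification while animations are committed.
        [self.layer addObserver:self forKeyPath:@"position" options:NSKeyValueObservingOptionNew context:NULL];
    }
    return self;
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    // React to the change here.
}

- (void)dealloc {
    [self.layer removeObserver:self forKeyPath:@"position"];
}
@end
```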
In my iOS project, there is an infrequent crash related to a virtual memory problem. Therefore, I plan to use UI tests combined with Product > Perform Action > Profile "TestCaseName" to run Game Performance-style testing. This lets the automated test run continuously until the profile stops recording on a crash, so I can observe the program's state at the time of the crash.
However, I have found that running UI tests under Profile is highly unstable. The UITestCase often terminates unexpectedly during execution, leading to failed tests (while Instruments keeps recording). Sometimes the app is terminated immediately after startup. Use of sleep() in the test code seems to easily cause these interruptions, which do not occur during normal UI testing.
I am wondering if anyone has experience running UI tests under Profile and whether they have encountered the issues I described.
Working Environment:
Xcode 14.3.1, iPhone device on iOS 17.2
I compared several options for getting auxiliary images from a CIImage.
These options leak an AVSemanticSegmentationMatte (visible in the Debug Memory Graph):
CIImage.init(data: data, options: [.auxiliarySemanticSegmentationSkinMatte: true])
CIImage.init(data: data, options: [.auxiliarySemanticSegmentationHairMatte: true])
CIImage.init(data: data, options: [.auxiliarySemanticSegmentationTeethMatte: true])
The other options, .auxiliaryDisparity and .auxiliaryPortraitEffectsMatte, leak neither AVDepthData nor AVPortraitEffectsMatte.
I really love Quartz Composer from Apple, which is quite an old app, not updated for years. It works well on my mid-2015 MacBook Pro, but not on a new M1 iMac. Does anyone know how to run this great app on my new machine? Thank you!
I customize the captured portrait mode photo's depthData by using the fileDataRepresentationWithCustomizer method, replacing the depth data with my edited version.
- (nullable AVDepthData *)replacementDepthDataForPhoto:(AVCapturePhoto *)photo{
return myEditedDepthData;
}
The weird thing is that the NSData returned by fileDataRepresentationWithCustomizer loses the portrait matte.
CGImageSourceRef imageSource = CGImageSourceCreateWithData((CFDataRef)nsData, NULL);
CFDictionaryRef dicRef = CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource, 0, kCGImageAuxiliaryDataTypePortraitEffectsMatte);
dicRef is always NULL after I use fileDataRepresentationWithCustomizer instead of fileDataRepresentation.
Does anyone know what I am doing wrong?
I want to record the TrueDepth or Dual camera's depth data output while recording video. I have already managed to get the AVCaptureDepthDataOutput object and display it in real time, but I also need the depth to be recorded as an individual AVMediaTypeVideo or AVMediaTypeMetadata track in the movie, and to read it back for post-processing.
Instead of AVCaptureMovieFileOutput, I use an AVAssetWriter and an AVAssetWriterInputPixelBufferAdaptor to append pixel buffers. I tried appending the streaming depth as a normal AVAssetWriterInput with AVVideoCodecTypeH264, but it failed.
Is it possible to append depth data buffers the same way as video data, or is there another way of doing it?
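Not a confirmed solution, just a sketch of the direction I would try: AVDepthData exposes its map as a CVPixelBufferRef, which could be fed through a dedicated writer input and adaptor (depthInput, depthAdaptor, and the method name here are hypothetical):

```objc
// Sketch, assuming a separate AVAssetWriterInput/adaptor pair carries depth.
// Depth maps are kCVPixelFormatType_DepthFloat16/32, which H.264 cannot encode
// directly; converting to a format the writer accepts (e.g. an 8-bit
// grayscale buffer, lossy) is the usual workaround.
- (void)appendDepth:(AVDepthData *)depthData at:(CMTime)timestamp {
    CVPixelBufferRef depthMap = depthData.depthDataMap; // the raw depth map
    if (depthInput.isReadyForMoreMediaData) {
        [depthAdaptor appendPixelBuffer:depthMap withPresentationTime:timestamp];
    }
}
```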
I want to capture video with effects while saving video data with detected face info for post face beauty process in my project. Because AVCaptureVideoDataOutput doesn't work well with AVCaptureMovieFileOutput, I choose to use AVCaptureVideoDataOutput + AVAssetWriter, writing face metadata to assetWriterInput with AVAssetWriterInputMetadataAdaptor, reading back with [AVAssetReaderOutputMetadataAdaptor nextTimedMetadataGroup].
This is how I tried in writing process,
### setup metadata format for input
NSArray *specifications = @[@{
    (__bridge NSString *)kCMMetadataFormatDescriptionMetadataSpecificationKey_Identifier : @"mdta/com.apple.quicktime.detected-face",
    (__bridge NSString *)kCMMetadataFormatDescriptionMetadataSpecificationKey_DataType : @"com.apple.quicktime.detected-face",
}];
CMMetadataFormatDescriptionRef metadataFormatDescription = NULL;
CMMetadataFormatDescriptionCreateWithMetadataSpecifications(kCFAllocatorDefault, kCMMetadataFormatType_Boxed, (__bridge CFArrayRef)specifications, &metadataFormatDescription);
AVAssetWriterInput * assetWriterMetadataInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeMetadata outputSettings:nil sourceFormatHint:metadataFormatDescription];
AVAssetWriterInputMetadataAdaptor *assetWriterMetadataAdaptor = [[AVAssetWriterInputMetadataAdaptor alloc]initWithAssetWriterInput:assetWriterMetadataInput];
[assetWriter addInput:assetWriterMetadataInput];
...
### write face metadata when detected using AVCaptureMetadataOutput
- (void)appendMetadataFaceObjectItems:(NSArray<AVMetadataFaceObject *> *)faces frameTime:(CMTime)frameTime {
    if (assetWriter.status == AVAssetWriterStatusWriting) {
        NSMutableArray<AVMutableMetadataItem *> *metadataItems = [NSMutableArray<AVMutableMetadataItem *> new];
        for (AVMetadataFaceObject *face in faces) {
            AVMutableMetadataItem *metadataItem = [AVMutableMetadataItem metadataItem];
            metadataItem.identifier = AVMetadataIdentifierQuickTimeMetadataDetectedFace;
            metadataItem.dataType = @"com.apple.quicktime.detected-face";
            metadataItem.keySpace = @"mdta";
            metadataItem.key = @"com.apple.quicktime.detected-face";
            metadataItem.value = face;
            // other time/duration setup
            [metadataItems addObject:metadataItem];
        }
        AVMutableTimedMetadataGroup *group = [[AVMutableTimedMetadataGroup alloc] initWithItems:metadataItems timeRange:CMTimeRangeMake(frameTime, CMTimeMake(20, 600))];
        BOOL success = [assetWriterMetadataAdaptor appendTimedMetadataGroup:group];
    }
}
the app crashes on appendTimedMetadataGroup, saying:
Cannot write to file timed metadata group: Metadata value is an instance of AVMetadataItem, but format description does not properly describe face data
I thought the format description I set up was paired with what I write at append time; am I missing some other detail?
In my project, I pack two normal maps into one large (4096x4096) texture (R,G channels for one map, B,A for the other).
If I use [UIImage imageNamed:] to load the image from the asset bundle, no matter how I set the image's compression (lossless or other options), the resulting UIImage always has artifacts in each color channel (each channel picks up colors from the others; the default lossless option is the best of them, but still not acceptable). Because I use the pixels for normal map animation, these artifacts are noticeable.
If I move the image from the asset bundle into the project folder and load it with [[UIImage alloc] initWithContentsOfFile:], the artifacts go away because no compression is applied, but I lose the benefit of image caching.
So, is there any workaround to load the image without compression while still getting caching?
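One direction (a sketch, not an asset-catalog fix; CachedRawImage is a hypothetical helper) is to keep the texture as a loose PNG in the bundle and add the caching yourself with NSCache:

```objc
#import <UIKit/UIKit.h>

// Sketch: load a bundle PNG without asset-catalog processing and cache it
// manually. NSCache evicts entries under memory pressure, similar to the
// cache behind +[UIImage imageNamed:].
static NSCache<NSString *, UIImage *> *sRawImageCache;

UIImage *CachedRawImage(NSString *name) {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{ sRawImageCache = [NSCache new]; });
    UIImage *cached = [sRawImageCache objectForKey:name];
    if (cached != nil) return cached;
    NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
    UIImage *image = (path != nil) ? [[UIImage alloc] initWithContentsOfFile:path] : nil;
    if (image != nil) [sRawImageCache setObject:image forKey:name];
    return image;
}
```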
My project loads several images using [UIImage imageNamed:] when a button is pressed. It runs smoothly under iOS 12, but when I switch the deployment target to iOS 13, the action takes several seconds to complete, which is unacceptable (even loading a 1px image becomes slower).
After some research, using the same .jpg image in the default .xcassets and the same code, I found that since iOS 13 the default implementation of [UIImage imageNamed:] may take different call stacks.
the faster way
[UIImage imageNamed:inBundle:withConfiguration:]
[_UIAssetManager newAssetNamed:fromBundle:]
_UIImageCollectImagePathsForPath
___UIImageCollectImagePathsForPath_block_invoke
___UIImageCollectImagePathsForPath_block_invoke_2.168
___UIImageCollectImagePathsForPath_block_invoke.155
[NSFileManager fileExistsAtPath:]
the time consuming way
[UIImage imageNamed:inBundle:withConfiguration:]
[_UIAssetManager imageNamed:configuration:]
[_UIAssetManager imageNamed:configuration:cachingOptions:attachCatalogImage:]
__78-[_UIAssetManager imageNamed:configuration:cachingOptions:attachCatalogImage:]_block_invoke
[_UIAssetManager _lookUpObjectForTraitCollection:withAccessorWithAppearanceName:]
[UITraitCollection _enumerateThemeAppearanceNamesForLookup:]
__82-[_UIAssetManager _lookUpObjectForTraitCollection:withAccessorWithAppearanceName:]_block_invoke
__78-[_UIAssetManager imageNamed:configuration:cachingOptions:attachCatalogImage:]_block_invoke_2
[CUICatalog namedLookupWithName:scaleFactor:deviceIdiom:deviceSubtype:displayGamut:layoutDirection:sizeClassHorizontal:sizeClassVertical:appearanceName:]
[CUICatalog _namedLookupWithName:scaleFactor:deviceIdiom:deviceSubtype:displayGamut:layoutDirection:sizeClassHorizontal:sizeClassVertical:appearanceName:]
[CUICatalog _resolvedRenditionKeyFromThemeRef:withBaseKey:scaleFactor:deviceIdiom:deviceSubtype:displayGamut:layoutDirection:sizeClassHorizontal:sizeClassVertical:memoryClass:graphicsClass:graphicsFallBackOrder:deviceSubtypeFallBackOrder:adjustRenditionKeyWithBlock:]
[CUICatalog _private_resolvedRenditionKeyFromThemeRef:withBaseKey:scaleFactor:deviceIdiom:deviceSubtype:displayGamut:layoutDirection:sizeClassHorizontal:sizeClassVertical:memoryClass:graphicsClass:graphicsFallBackOrder:deviceSubtypeFallBackOrder:localizationIdentifier:adjustRenditionKeyWithBlock:]
[CUIStructuredThemeStore _canGetRenditionWithKey:isFPO:lookForSubstitutions:]
[CUIStructuredThemeStore assetExistsForKey:]
[CUICommonAssetStorage assetExistsForKeyData:length:]
_os_unfair_lock_lock_slow
__ulock_wait
In the latter path, __ulock_wait is one of the heaviest frames.
I made some attempts, like dispatching imageNamed: onto a global queue, but nothing helped. I have no clue how to make it take the faster path.
I have written a very simple test app using DeepLabV3 from the Apple website to perform face segmentation on an image.
In Instruments I found that when the prediction is done, the MLModel object is not released: a VM: IOAccelerator allocation of about 50 MB remains in memory.
Stack Trace
IOAccelResourceCreate
[MLModel modelWithContentsOfURL:error:]
@nonobjc MLModel.__allocating_init(contentsOf:)
DeepLabV3FP16.__allocating_init()
......
The original MLModel and DeepLabV3FP16 objects are already released, but the VM allocation is still there.
How can I solve the memory leak?
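One thing I would try (a sketch, not a confirmed fix; modelURL is a placeholder) is scoping the model load and prediction in an explicit autorelease pool, so Core ML's temporary objects are drained as soon as the work finishes:

```objc
// Sketch: an explicit pool can release autoreleased Core ML intermediates
// (and sometimes their IOAccelerator-backed buffers) right after the work
// ends, instead of at the end of the current run-loop pass.
@autoreleasepool {
    NSError *error = nil;
    MLModel *model = [MLModel modelWithContentsOfURL:modelURL error:&error];
    if (model != nil) {
        // Run the prediction here and copy out only the results you keep.
    }
}
// Everything autoreleased inside the pool has been drained at this point.
```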
To test a billing-issue scenario in the StoreKitTest environment, I set the Time Rate to 1 second = 1 day for a 1-year subscription, then selected SKErrorUnknown under Fail Transactions. But the subscription kept auto-renewing, adding a new transaction to the original transaction every 7 minutes.
It seems that Fail Transactions only applies to user-initiated transactions.
Is there a way to manually stop the auto renew in StoreKit Test environment, or sandbox?
When developing in the sandbox environment, I get the receipt URL from [[NSBundle mainBundle] appStoreReceiptURL] and POST it in an HTTP request to the sandbox server for validation.
But as StoreKitTest is a local test environment, this approach no longer works; I always receive a response with error 21002.
As mentioned in https://developer.apple.com/documentation/xcode/setting_up_storekit_testing_in_xcode?language=objc :
Be sure your code uses the correct certificate in all environments. Add the following conditional compilation block to select the test certificate for testing, and the Apple root certificate otherwise.
#if DEBUG
let certificate = "StoreKitTestCertificate"
#else
let certificate = "AppleIncRootCertificate"
#endif
I know I should install the StoreKitTestCertificate.cer file on my test device, but how do I use the code above in my project, and how do I retrieve the receipt locally? Is there any sample code available?
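For what it's worth, a hedged Objective-C equivalent of the documentation's Swift snippet, plus reading the receipt bytes locally (StoreKitTest receipts are signed with the local test certificate, so they have to be verified on-device rather than by the sandbox endpoint); the verification step itself is only outlined:

```objc
#import <Foundation/Foundation.h>

// Pick the certificate name per build configuration, mirroring the Swift snippet.
#if DEBUG
static NSString * const kReceiptCertificateName = @"StoreKitTestCertificate";
#else
static NSString * const kReceiptCertificateName = @"AppleIncRootCertificate";
#endif

NSData *LocalReceiptData(void) {
    // The receipt file lives inside the app bundle in all environments.
    NSURL *receiptURL = [[NSBundle mainBundle] appStoreReceiptURL];
    return receiptURL ? [NSData dataWithContentsOfURL:receiptURL] : nil;
}

NSData *ReceiptSigningCertificateData(void) {
    // The .cer file must be added to the app target (the test certificate is
    // exported from Xcode's StoreKit configuration editor).
    NSString *path = [[NSBundle mainBundle] pathForResource:kReceiptCertificateName ofType:@"cer"];
    return path ? [NSData dataWithContentsOfFile:path] : nil;
}
// Verify the PKCS#7 receipt signature against the certificate data locally
// (e.g. with Security.framework or OpenSSL) instead of calling the sandbox server.
```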