Kevin, thank you for your response. I believe we are making progress.
For clarity, let me call the ARM machine the client system and the x86 machine the server system.
I am deleting a directory tree on the server system from a Java application running on the client system. Java uses basic system calls (rmdir and unlink) to delete items.
I put a breakpoint on the exception handler and discovered an interesting situation.
The failure on directory deletion is "Directory not empty". That should not happen, because
before attempting to delete the directory, my program had already deleted its contents.
When I examine the directory that I could not delete (D) in Terminal on the server system, it is indeed not empty.
It contains an empty subdirectory (S), which my program previously "deleted".
A few seconds later, directory S disappeared (as viewed in Terminal on the server system)!
It appears that there is a race condition. The operation to delete S apparently
succeeded, but did not take effect immediately. The operation to delete D
somehow overtook the previous operation and failed as a result.
From Terminal on the client system, S appears to exist but
trying to list its contents fails with the fts_read error. I get the same error if I open
a new Terminal window and navigate to D and try to list S.
If I unmount the volume and reconnect, I see the same bad state in Terminal.
Listing D shows S.
Listing S gets the fts_read error.
Is this a bug or am I doing something wrong?
Is there a reliable way to work around this problem?
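To make the second question concrete: the kind of workaround I have in mind is simply retrying the failed deletion after a short delay. A rough sketch of the idea as shell commands run against the mounted volume (the path and the retry count are made up; my actual code would do the equivalent in Java):
for i in 1 2 3 4 5; do
  rmdir "/Volumes/Server/path/to/D" && break
  sleep 1
done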
I can confirm that the problem I was having has been fixed in Xcode 16.1 and Command Line Tools 16.1. I was confused before because although Xcode 16.1 was installed, I was actually using Command Line Tools 16.0.
You can tell whether the library is correctly built by checking that otool -l shows a hard link to the Quartz framework and no link to the QuickLookUI framework.
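For example, with a library named libtest.dylib (the name is just a placeholder), the relevant load commands can be pulled out with:
otool -l libtest.dylib | grep -B 3 -E 'Quartz|QuickLookUI'
The cmd line shown for the Quartz match should read LC_LOAD_DYLIB, and QuickLookUI should not appear at all.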
A very crude and heavy-handed workaround:
Add a weak link to QuickLookUI as I described previously.
Run the executable with DYLD_FORCE_FLAT_NAMESPACE=1.
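For example (MyTool is just a placeholder for whatever executable loads the library):
DYLD_FORCE_FLAT_NAMESPACE=1 ./MyTool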
The essence of the problem is that my library, compiled with recent releases of Xcode, identifies QLPreviewView as belonging to the namespace associated with QuickLookUI, but on macOS 11 and earlier it is in the namespace associated with Quartz, so it is not found using the default two-level namespace.
Alas, I spoke too soon.
Although adding the weak link to QuickLookUI allows the library to load on macOS 11, it does not link properly. The code fails trying to create a QLPreviewView.
I am using command line tools to build a dynamic library that uses QLPreviewView and targets macOS 10.10.
The API predates 10.10 (in the Quartz framework) so there should be no problem. However, even after installing Xcode 16.1, the
library contains a hard link to the QuickLookUI framework, causing it to fail on macOS releases
prior to macOS 12. I assume this hard link is created because in the current SDK Quartz imports QuickLookUI. I presume that the link should be a weak link.
A simple test program:
#import <Quartz/Quartz.h>
NSView *test()
{
NSRect bounds = NSMakeRect(0, 0, 1, 1);
QLPreviewView *preview = [[QLPreviewView alloc] initWithFrame:bounds style:QLPreviewViewStyleCompact];
return preview;
}
The build script:
cc -target x86_64-apple-macos10.10 -dynamiclib -ObjC -framework Quartz test.m
The relevant output from otool:
Load command 9
cmd LC_LOAD_DYLIB
cmdsize 88
name /System/Library/Frameworks/Quartz.framework/Versions/A/Quartz (offset 24)
time stamp 2 Wed Dec 31 16:00:02 1969
current version 1.0.0
compatibility version 1.0.0
Load command 11
cmd LC_LOAD_DYLIB
cmdsize 96
name /System/Library/Frameworks/QuickLookUI.framework/Versions/A/QuickLookUI (offset 24)
time stamp 2 Wed Dec 31 16:00:02 1969
current version 0.0.0
compatibility version 1.0.0
I was able to generate a weak link as follows:
cc -target x86_64-apple-macos10.10 -dynamiclib -ObjC -weak_framework QuickLookUI -framework Quartz test.m
I have confirmed that this fix produces a library that loads on macOS 11.
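As a sanity check, the QuickLookUI load command in the resulting library (a.out here, since the build command above does not pass -o) should now read LC_LOAD_WEAK_DYLIB rather than LC_LOAD_DYLIB:
otool -l a.out | grep -B 3 QuickLookUI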
In general, you shouldn’t ship the .tbd file in your final product.
I'm not sure what you mean by final product. In this case, my final product is a framework, and the immediate "customers" for that product are developers.
The developers would be building things that use the framework (compile time role) and/or including it in applications (run time role).
So, if I combine various things you have written, it sounds like a reasonable choice is for me to imitate what the current Xcode does when it builds a framework.
That means creating a V4 .tbd file with no UUIDs.
Yes?
I like that answer!
My explanation is a bit complicated. As you may know, the JavaNativeFoundation framework does not run natively on arm64. So, I am trying to figure out how to compile its open-source code in a way that can be used on all architectures and macOS releases back to 10.10. I found that the only way to compile a framework to support 10.10 was to use an old version of Xcode (13.4.1 worked). When I build the framework using that Xcode, it creates a .tbd that includes UUIDs. Instead of using an old Xcode, I thought I would create the package using individual command line tools (such as tapi) from the current Xcode. I figured that if I was unable to generate equivalent files, I must be doing something wrong. So, I'm happy to hear that the lack of UUIDs in the .tbd is not a problem.
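For reference, this is the kind of tapi invocation I mean (the framework path is illustrative, and I am not certain of all the defaults):
xcrun tapi stubify JavaNativeFoundation.framework/Versions/A/JavaNativeFoundation
As far as I can tell, stubify writes a .tbd file next to the binary it is given.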
Now, you may be thinking that I should not need a JNF framework that runs on 10.10, as 10.10 already has a JNF. That seems reasonable, assuming that the dynamic linker ignores frameworks on the path that don't support the current OS and arch.
However, someday I may wish to replace JNF with my own, much smaller framework, and I want that framework to run on 10.10 and later. So, my questions are:
Do I need a .tbd file in a framework to run on some releases of macOS?
If so, which version of the .tbd file should I use for maximum compatibility with older macOS releases?
Thank you for your assistance!
The culprit turns out to be Finder! Yes, if a directory is nested below a directory that Finder displays with the Calculate All Sizes view option enabled, Finder will cache the directory's cumulative size in a .DS_Store file in that directory and will update the cached size in response to FSEvents. The upshot: if a user uses the Calculate All Sizes option, standard Unix commands such as /bin/rm -rf may fail intermittently.
FB10023961
Deleting a file (any file) may generate an FSEvent that triggers a background process to examine the directory or directory tree. If the background process somehow creates .DS_Store files, that would explain the behavior. Spotlight indexing and Time Machine backup come to mind as possible culprits.
Unexplained .DS_Store files are still being created. I restarted, did a find, and found 80 new .DS_Store files in the tree that I have been looking at.
Could mdsync or backupd do this?
It now looks like deleting a .DS_Store file is unreliable.
This is what I did:
find . -name .DS_Store -print ; find . -name .DS_Store -exec rm {} \; ; find . -name .DS_Store -print
The first print statement listed 16 files.
The second print statement listed 8 files.
That means 8 files were deleted and 8 were not.
Hmm... I repeated the deletion step and 4 of the 8 remaining files were deleted.
It seems that every other file is deleted???
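Next time I will run the check in a form that is easier to compare, using find's built-in -delete instead of spawning rm for each file (the behavior should otherwise be the same):
find . -name .DS_Store | wc -l
find . -name .DS_Store -delete
find . -name .DS_Store | wc -l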
I have started tracking the memory usage of the WindowServer process. In a freshly restarted system with no desktop applications other than Activity Monitor, the memory usage (as reported by Activity Monitor) is about 130 MB. Right now it is 775 MB, which is getting close to the territory where things start to hang. As applications are started and exited, the memory usage goes up and down. However, I have observed (using a script that logs the memory usage every minute) that when the system exits display sleep there can be an "instantaneous" jump in the memory usage of 100 MB or more. This additional memory usage is never reverted. By the way, I am using an Intel based iMac.
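For reference, a minimal sketch of such a logging loop (the log file name is arbitrary; ps reports the resident set size in KB, which is not exactly the figure Activity Monitor displays, but it follows the same trend):
while true; do
  echo "$(date '+%Y-%m-%d %H:%M') $(ps -o rss= -p "$(pgrep -x WindowServer)") KB" >> windowserver-memory.log
  sleep 60
done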
The result of not using the suspect application is that the problem did not go away, although it took longer for the problem to occur (about 5 days instead of the usual 1 or 2 days). So, the application is not the cause of the problem, but it may perform more of the operations that trigger the problem.
Did the problem of non-Apple applications losing Full Disk Access get fixed in the 11.4 release?
I just noticed that an application of mine that previously had Full Disk Access somehow lost it, and I'm not aware of installing the beta on this particular system.
The application got an error trying to access a file in a subdirectory under ~/Library/Autosave Information.
If the application did not have an entry in the Full Disk Access list of applications, would a dialog have been displayed?
That would be better than my application showing its own dialog that attempts to explain the problem and the remedy.