We have some HTML content embedded in our app. It lives in two top-level folders that have been localized, so they sit at Resources/en.lproj/HelpContent and Resources/fr.lproj/HelpContent (for example). Similarly, we have a bunch of localized .storyboard files. Xcode 8 builds this just fine. Xcode 9b6 complains:

duplicate output file '/Users/me/Library/Developer/Xcode/DerivedData/MyApp-fdskkibuvbsubudkmqtgsjxmyqme/Build/Products/Debug-iphoneos/MyApp.app/HelpContent' on task: CpResource /Users/me/Projects/Clients/MyCompany/repo/mp_vision/iOS/Controller/MyApp/Resources/fr.lproj/HelpContent /Users/me/Library/Developer/Xcode/DerivedData/MyApp-fdskkibuvbsubudkmqtgsjxmyqme/Build/Products/Debug-iphoneos/MyApp.app/HelpContent (in target 'MyApp')

unable to build node: '/Users/me/Library/Developer/Xcode/DerivedData/MyApp-fdskkibuvbsubudkmqtgsjxmyqme/Build/Products/Debug-iphoneos/MyApp.app/HelpContent' (node is produced by multiple commands; e.g., 'b1132708c20f997e752faabcee01d49c82a500833121dcd848c3954357d9f7b1: CpResource /Users/me/Projects/Clients/MyCompany/repo/mp_vision/iOS/Controller/MyApp/Resources/en.lproj/HelpContent /Users/me/Library/Developer/Xcode/DerivedData/MyApp-fdskkibuvbsubudkmqtgsjxmyqme/Build/Products/Debug-iphoneos/MyApp.app/HelpContent' and 'b1132708c20f997e752faabcee01d49c82a500833121dcd848c3954357d9f7b1: CpResource /Users/me/Projects/Clients/MyCompany/repo/mp_vision/iOS/Controller/MyApp/Resources/fr.lproj/HelpContent /Users/me/Library/Developer/Xcode/DerivedData/MyApp-fdskkibuvbsubudkmqtgsjxmyqme/Build/Products/Debug-iphoneos/MyApp.app/HelpContent')

The Files & Groups list shows this correctly (e.g. "HelpContent" with "HelpContent (English)" and "HelpContent (French)" as sub-folders), and each folder only shows up once in the Copy Bundle Resources phase. But the build stops almost immediately on this error.
We build some C++ code that uses TensorFlow Lite on multiple platforms, and I'm trying to link it into our iOS build. The team that builds it ships individual shared libraries with one architecture per file (one for macOS, one for iOS arm64, and one for iOS x86). I combined the two iOS files into a universal shared library using lipo, and I was able to link and run in the Simulator. Unfortunately, I can't exercise that code path in the Simulator (for other reasons), so I built and ran on a device. When I try to run on a device, I get:

dyld: Library not loaded: @rpath/libtensorflowlite.so
  Referenced from: /private/var/containers/Bundle/Application/E3E3B0E9-B36E-4ADB-9FFC-CA2D6D2A8AC8/MyApp.app/MyApp
  Reason: no suitable image found. Did find:
  /private/var/containers/Bundle/Application/E3E3B0E9-B36E-4ADB-9FFC-CA2D6D2A8AC8/MyApp.app/Frameworks/libtensorflowlite.so: no matching architecture in universal wrapper

I know the App Store doesn't like x86 bits in universal shared libraries, so I used lipo in an Xcode build phase to remove the x86 slice, and now my file looks like this:

$ file /Users/rmann/Library/Developer/Xcode/DerivedData/MyApp-fdskkibuvbsubudkmqtgsjxmyqme/Build/Products/Debug-iphoneos/MyApp.app/Frameworks/libtensorflowlite.so
/Users/rmann/Library/Developer/Xcode/DerivedData/MyApp-fdskkibuvbsubudkmqtgsjxmyqme/Build/Products/Debug-iphoneos/MyApp.app/Frameworks/libtensorflowlite.so: Mach-O universal binary with 1 architecture: [arm64:Mach-O 64-bit dynamically linked shared library arm64]
/Users/rmann/Library/Developer/Xcode/DerivedData/MyApp-fdskkibuvbsubudkmqtgsjxmyqme/Build/Products/Debug-iphoneos/MyApp.app/Frameworks/libtensorflowlite.so (for architecture arm64): Mach-O 64-bit dynamically linked shared library arm64

But dyld still doesn't see it. I do something similar with another library we get from a third party, and everything works fine. How can I inspect this further to determine what the issue is? Thanks.
I can run my app in the Simulator just fine, but when I try to run it on a device, it chokes with:

dyld: Library not loaded: @rpath/MyF.framework/MyF
  Referenced from: /private/var/containers/Bundle/Application/48637C68-CE05-4B9A-BC3B-EC85295707B8/My.app/My
  Reason: no suitable image found. Did find:
  /private/var/containers/Bundle/Application/48637C68-CE05-4B9A-BC3B-EC85295707B8/My.app/Frameworks/MyF.framework/MyF

The framework is set to Embed & Sign, but the device doesn't like the signature. What am I doing wrong? The CodeSign step in the Xcode build seems to go off without any errors.
We've been trying to submit an app that links against a bare dylib (that is, one that's not part of a framework); it sits by itself in the .app/Frameworks/ folder. But App Validation rejects it with:

Couldn't find platform family for "libopencv_world.4.1.1.dylib". Coulnd't find CFBundleSupportedPlatforms in the Info.plist or LC_VERSION_MIN in the Mach-O for path "/Users/…/Library/Developer/Xcode/Archives/2020-02-18/App Debug 2020-02-18, 04.14 .xcarchive/Products/Applications/App.app/Frameworks/libopencv_world.4.1.1.dylib".

The dylib is built for deployment on iOS 12.0 using the iOS 13.2 SDK, and it contains an LC_BUILD_VERSION load command:

$ otool -l libopencv_world.4.1.1.dylib | grep -A4 VERSION
cmd LC_BUILD_VERSION
cmdsize 32
platform 2
sdk 13.2
minos 12.0

(Prior to iOS 12.0 deployment targets, the linker emits an LC_VERSION_MIN_IPHONEOS load command instead.) It seems like App Validation is erroneously rejecting the binary; is there something I can do to address this?
I'd like to make a macOS app that composites one or more video camera streams into an output stream that another app can then use as a capture device for an AVCaptureSession. Is this possible? Looking through the AVFoundation and Core Media docs, I don't see anything obvious. I realize I may need to create a driver; that's fine, but I don't really know where to start.
I'm developing a custom keyboard, and I would like certain data to sync across the user's devices. In the containing app, that's straightforward, and I already have my keyboard and containing app sharing data. But if the user makes a change on one device while the custom keyboard is displayed on another device, I don't see any straightforward way of letting that custom keyboard know so it can update its display accordingly.

I think I can have the container app get updated, and then possibly inform the custom keyboard via the filesystem, but this is exceedingly clunky. Since CloudKit subscriptions require responding to push notifications, I'm not sure there's a better way. Is there?
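For context, this is roughly the clunky arrangement I mean (the App Group identifier and key are placeholders): the container app writes the synced data into the shared group container, and the keyboard re-reads it whenever it comes on screen, since nothing pushes into a running keyboard extension.

import UIKit

// Shared App Group identifier (placeholder).
let groupID = "group.com.example.myapp"

// Container app side: after receiving updated data (e.g. from CloudKit),
// write it where the keyboard extension can see it.
func publishToKeyboard(_ phrases: [String]) {
    UserDefaults(suiteName: groupID)?.set(phrases, forKey: "phrases")
}

// Keyboard extension side: re-read the shared data whenever the keyboard
// appears. It only notices a change the next time it comes on screen
// (or if it polls), which is exactly the limitation I'm asking about.
class KeyboardViewController: UIInputViewController {
    var phrases: [String] = []

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        phrases = UserDefaults(suiteName: groupID)?.stringArray(forKey: "phrases") ?? []
        // ...refresh the key/phrase views here...
    }
}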
I’ve been having a heckuva time getting macOS (Catalina) to let my app open the associated -wal and -shm files. Googling for answers, it seems macOS should already know about these, and if not, I can declare them with NSIsRelatedItemType additions, which I've done in the Info.plist below. But that didn't seem to work for me. (If I put the main SQLite file inside the app's container, SQLite can open the associated files just fine.) Is there something more I need to do?
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CFBundleDevelopmentRegion</key>
    <string>$(DEVELOPMENT_LANGUAGE)</string>
    <key>CFBundleDocumentTypes</key>
    <array>
        <dict>
            <key>CFBundleTypeName</key>
            <string>SQLiteDocument</string>
            <key>CFBundleTypeRole</key>
            <string>Editor</string>
            <key>LSHandlerRank</key>
            <string>Default</string>
            <key>LSItemContentTypes</key>
            <array>
                <string>org.sqlite.sqlite3</string>
            </array>
            <key>NSDocumentClass</key>
            <string>$(PRODUCT_MODULE_NAME).Document</string>
        </dict>
        <dict>
            <key>CFBundleTypeExtensions</key>
            <array>
                <string>sqlite-shm</string>
                <string>sqlite-wal</string>
                <string>sqlite-journal</string>
            </array>
            <key>CFBundleTypeName</key>
            <string>Support Type</string>
            <key>CFBundleTypeRole</key>
            <string>Editor</string>
            <key>NSIsRelatedItemType</key>
            <true/>
        </dict>
    </array>
    <key>CFBundleExecutable</key>
    <string>$(EXECUTABLE_NAME)</string>
    <key>CFBundleIconFile</key>
    <string></string>
    <key>CFBundleIdentifier</key>
    <string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
    <key>CFBundleInfoDictionaryVersion</key>
    <string>6.0</string>
    <key>CFBundleName</key>
    <string>$(PRODUCT_NAME)</string>
    <key>CFBundlePackageType</key>
    <string>$(PRODUCT_BUNDLE_PACKAGE_TYPE)</string>
    <key>CFBundleShortVersionString</key>
    <string>1.0</string>
    <key>CFBundleVersion</key>
    <string>1</string>
    <key>LSMinimumSystemVersion</key>
    <string>$(MACOSX_DEPLOYMENT_TARGET)</string>
    <key>NSMainStoryboardFile</key>
    <string>Main</string>
    <key>NSPrincipalClass</key>
    <string>NSApplication</string>
    <key>UTExportedTypeDeclarations</key>
    <array>
        <dict>
            <key>UTTypeConformsTo</key>
            <array>
                <string>public.database</string>
                <string>public.data</string>
            </array>
            <key>UTTypeDescription</key>
            <string>SQLite3</string>
            <key>UTTypeIcons</key>
            <dict/>
            <key>UTTypeIdentifier</key>
            <string>org.sqlite.sqlite3</string>
            <key>UTTypeTagSpecification</key>
            <dict>
                <key>public.filename-extension</key>
                <array>
                    <string>sqlite</string>
                </array>
            </dict>
        </dict>
    </array>
    <key>UTImportedTypeDeclarations</key>
    <array>
        <dict>
            <key>UTTypeConformsTo</key>
            <array>
                <string>public.database</string>
                <string>public.data</string>
            </array>
            <key>UTTypeDescription</key>
            <string>SQLite3</string>
            <key>UTTypeIcons</key>
            <dict/>
            <key>UTTypeIdentifier</key>
            <string>org.sqlite.sqlite3</string>
            <key>UTTypeTagSpecification</key>
            <dict>
                <key>public.filename-extension</key>
                <array>
                    <string>sqlite</string>
                </array>
            </dict>
        </dict>
    </array>
</dict>
</plist>
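In case it matters, here's the kind of related-item presenter I've been experimenting with alongside that plist (the class and function names are just for illustration). The idea, as I understand it, is to present the -wal/-shm file with the user-chosen database as the primary item and coordinate access before SQLite touches it:

import Foundation

// A minimal related-item presenter: the -wal (or -shm) file is presented
// with the main database, which the user actually opened, as its primary item.
class RelatedFilePresenter: NSObject, NSFilePresenter {
    let presentedItemURL: URL?          // e.g. MyData.sqlite-wal
    let primaryPresentedItemURL: URL?   // the MyData.sqlite the user opened
    let presentedItemOperationQueue = OperationQueue.main

    init(relatedURL: URL, primaryURL: URL) {
        presentedItemURL = relatedURL
        primaryPresentedItemURL = primaryURL
        super.init()
    }
}

// Usage sketch: coordinate access to the -wal file before opening the database.
func coordinateWALAccess(for databaseURL: URL) {
    let walURL = URL(fileURLWithPath: databaseURL.path + "-wal")
    let presenter = RelatedFilePresenter(relatedURL: walURL, primaryURL: databaseURL)
    NSFileCoordinator.addFilePresenter(presenter)
    let coordinator = NSFileCoordinator(filePresenter: presenter)
    coordinator.coordinate(readingItemAt: walURL, options: [], error: nil) { url in
        // Open the SQLite database here, while access to the related file is coordinated.
        print("coordinated access to \(url.path)")
    }
    NSFileCoordinator.removeFilePresenter(presenter)
}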
I'm developing a little USB device for use with macOS, and the name includes the non-ASCII character ū:
LATIN SMALL LETTER U WITH MACRON
Unicode: U+016B, UTF-8: C5 AB
My source file is UTF-8 encoded, but as I understand it, USB uses UTF-16LE encoding for all its strings.
GCC (which I'm using to compile the code for the device) doesn't implement the \u Unicode code point escape, so I tried "productname\xc5\xab", which causes USB Prober to report the Product String as "productname\u016b".
Is that just USB Prober not properly rendering the string? Or am I still not encoding it correctly?
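To sanity-check my understanding of the two encodings, here's a quick Swift scratch snippet (not device code): the UTF-8 bytes for ū are C5 AB, while the UTF-16LE encoding is 6B 01, so a descriptor built from those \x escapes would contain UTF-8 bytes rather than UTF-16LE code units.

import Foundation

let name = "productname\u{016B}"   // "productnameū"

// UTF-8: the ū comes out as C5 AB (which is what "\xc5\xab" produces).
print(name.utf8.map { String(format: "%02X", $0) }.joined(separator: " "))

// UTF-16LE: the ū comes out as 6B 01, which is what a USB string
// descriptor expects (after the bLength/bDescriptorType header).
let utf16le = [UInt8](name.data(using: .utf16LittleEndian)!)
print(utf16le.map { String(format: "%02X", $0) }.joined(separator: " "))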
For years I've poked at a little personal project, an electronic schematic capture app. It's basically a specialized version of something like Illustrator or Omnigraffle, in that you create graphical objects from primitives, instantiate them onto the canvas, and connect them with polylines.
I'm very new to SwiftUI, but I'm wondering if it makes sense to build a new custom view to handle drawing this canvas as a "native" SwiftUI view. I know it's possible to wrap NSViews in SwiftUI, but if SwiftUI can handle it, I'd like to just reimplement it.
There are a number of requirements that complicate things:
The view lives inside a scroll view (or at least, its bounds usually extend beyond the window).
The view contains custom graphics and text.
Some graphical elements span large portions of the canvas (e.g. the polylines connecting components).
The number of individual elements can be quite high (performance concerns); quadtrees are often used to help with this.
The view zooms.
Marquee selection is needed.
Mouse down, drag, and up change the model in significant and varied ways.
Hovering can change the appearance of some items.
Can SwiftUI handle all this? I tried to find an example or documentation, but haven't had much luck. Almost everything is iOS-focused, so precise and nuanced mouse handling is rarely covered.
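For concreteness, this is the general shape of what I'm imagining, assuming a recent enough SwiftUI that Canvas and the gesture modifiers are available; the model types here are placeholders I made up for the sketch.

import SwiftUI

// Placeholder model: a wire is just a polyline in canvas coordinates.
struct Wire: Identifiable {
    let id = UUID()
    var points: [CGPoint]
}

struct SchematicCanvas: View {
    @State private var wires: [Wire] = []
    @State private var zoom: CGFloat = 1
    @State private var hoveredWireID: Wire.ID?   // set by hover hit-testing (not shown)

    var body: some View {
        ScrollView([.horizontal, .vertical]) {
            Canvas { context, _ in
                context.scaleBy(x: zoom, y: zoom)
                for wire in wires {
                    var path = Path()
                    path.addLines(wire.points)
                    let color: Color = (wire.id == hoveredWireID) ? .orange : .primary
                    context.stroke(path, with: .color(color), lineWidth: 1 / zoom)
                }
            }
            .frame(width: 5000 * zoom, height: 5000 * zoom)   // canvas larger than the window
            .gesture(
                DragGesture(minimumDistance: 0)
                    .onChanged { value in
                        // Hit-test value.location against the model and mutate it here
                        // (marquee selection, dragging instances, routing wires, …).
                        _ = value.location
                    }
            )
        }
    }
}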
I have some code calling this method:
mutating
func
readOffset() -> UInt64
{
    let offset: UInt64
    debugLog("readOffset")
    switch (self.formatVersion)
    {
    case .v42:
        let offset32: UInt32 = self.reader.get()
        offset = UInt64(offset32)
    case .v43:
        offset = self.reader.get()
    }
    return offset
}
If I put a breakpoint on the switch statement, Xcode never stops there, and if the debugLog() call is commented out, I can't even step into the function at the call site; it just runs to the next breakpoint in my code, wherever that happens to be.
If I put the breakpoint on debugLog(), it stops at the breakpoint.
If I put breakpoints at the self.reader.get() calls, it stops at those breakpoints AND I can step into it.
This is a unit test targeting macOS, and optimization is -Onone.
Xcode 12.4 (12D4e) on Catalina 10.15.7 (19H524).
I’m writing an app that, among other things, displays very large images (e.g. 106,694 x 53,347 pixels). These are GeoTIFF images, in this case containing digital elevation data for a whole planet. I will eventually need to be able to draw polygons on the displayed image.
There was a time when one would use CATiledLayer, but I wonder what is best today. I started this app in Swift/Cocoa, but I'm toying with the idea of starting over in SwiftUI (my biggest hesitation is that I have yet to upgrade to Big Sur).
The image data I have is in strips, with an integral number of image rows per strip. Strips are not guaranteed to be contiguous in the file. Pixel formats vary, but in the motivating use case they're 16 bits per pixel, with the values signifying meters. As a first approximation, I can simply display these values as a 16 bpp grayscale image.
Is setting up a Core Image pipeline the right thing to do? As I understand it, that should give me some automatic memory management, right?
I’m hoping to find out the best approach before I spend a lot of time going down the wrong path.
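To make the question concrete, here's the direction I've been considering, assuming Core Image is the right layer. Reading a strip out of the TIFF isn't shown, and the function and its parameters are just illustrative:

import CoreImage
import CoreGraphics

// Build a CIImage for one strip of 16-bit elevation data that has
// already been read out of the GeoTIFF.
func makeStripImage(stripData: Data, width: Int, rows: Int) -> CIImage {
    // Note: .L16 is unsigned; signed elevation values would need to be
    // biased first, which is part of what I'm unsure about.
    return CIImage(
        bitmapData: stripData,
        bytesPerRow: width * MemoryLayout<UInt16>.size,
        size: CGSize(width: width, height: rows),
        format: .L16,                              // 16-bit single-channel
        colorSpace: CGColorSpaceCreateDeviceGray()
    )
}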
Is there documentation describing the semantics of a Metal CIKernel function?
I have image data where each pixel is a signed 16-bit integer. I need to convert that into any number of color values, starting with a simple shift from signed to unsigned (e.g. the data in one image ranges from about -8,000 to +20,000, and I want to simply add 8,000 to each pixel's value).
I've got a basic filter working, but it treats the pixel values as floating point, I think. I've tried using both sample_t and sample_h types in my kernel, and simple arithmetic:
extern "C"
coreimage::sample_h
heightShader(coreimage::sample_h inS, coreimage::destination inDest)
{
    coreimage::sample_h r = inS + 0.1;
    return r;
}
This has an effect, but I don't really know what's in inS. Is it a vector of four float16? What are the minimum and maximum values? They seem to be clamped to 1.0 (and perhaps -1.0). Well, I’ve told CI that my input image is CIFormat.L16, which is 16-bit luminance, so I imagine it's interpreting the bits as unsigned? Anyway, where is this documented, if anywhere (the correspondence between input image pixel format and the actual values that get passed to a filter kernel)?
Is there a type that lets me work on the integer values? This document (https://developer.apple.com/metal/MetalCIKLReference6.pdf) implies that I can only work with floating-point values, but it doesn't tell me how they're mapped.
Any help would be appreciated. Thanks.
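For reference, here's roughly how I'm applying the kernel on the Swift side, in case the issue is there rather than in the Metal function (the metallib resource name is a placeholder):

import CoreImage
import Foundation

// Load the compiled CI kernel and apply it to an input CIImage.
// Assumes heightShader was compiled into a CI-enabled metallib named
// "Kernels.ci.metallib" in the app bundle.
func shiftedHeights(from input: CIImage) throws -> CIImage? {
    let url = Bundle.main.url(forResource: "Kernels.ci", withExtension: "metallib")!
    let data = try Data(contentsOf: url)
    let kernel = try CIColorKernel(functionName: "heightShader", fromMetalLibraryData: data)
    // For a color kernel, only the input sampler(s) go in arguments;
    // Core Image supplies the destination.
    return kernel.apply(extent: input.extent, arguments: [input])
}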
I've got a macOS SwiftUI app that displays an image, currently using Image. I need to display information about the pixel under the mouse pointer (its position, color, etc.) in some text fields at the bottom of the window.
I can't find an appropriate event handler to attach to Image. Traditionally I would have used mouseEntered, mouseExited, and mouseMoved. These are available for an NSHostingView, but that's for wrapping native views.
I found onHover(), which takes the place of mouseEntered and mouseExited, but it doesn't work for me (the perform closure is never called), and in any case it doesn't provide movement or position.
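The fallback I'm considering is wrapping an AppKit view with a tracking area, roughly like this (just a sketch, and exactly the kind of bridging I was hoping to avoid):

import SwiftUI
import AppKit

// An AppKit view that forwards mouse-moved locations (in its own coordinates)
// to a callback, for use as an overlay on the SwiftUI Image.
struct MouseTrackingView: NSViewRepresentable {
    var onMove: (CGPoint) -> Void

    func makeNSView(context: Context) -> TrackingNSView {
        let view = TrackingNSView()
        view.onMove = onMove
        return view
    }

    func updateNSView(_ nsView: TrackingNSView, context: Context) {
        nsView.onMove = onMove
    }

    class TrackingNSView: NSView {
        var onMove: ((CGPoint) -> Void)?

        override func updateTrackingAreas() {
            super.updateTrackingAreas()
            trackingAreas.forEach { removeTrackingArea($0) }
            addTrackingArea(NSTrackingArea(
                rect: bounds,
                options: [.mouseMoved, .mouseEnteredAndExited, .activeInKeyWindow, .inVisibleRect],
                owner: self,
                userInfo: nil))
        }

        override func mouseMoved(with event: NSEvent) {
            onMove?(convert(event.locationInWindow, from: nil))
        }
    }
}

// Usage: Image("map").overlay(MouseTrackingView { location in /* update the info fields */ })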
In my macOS SwiftUI app I have a list of "layers" on the left. Clicking on a layer focuses it on the right for acting upon. Each row has a little eye icon that's used to toggle the visibility of that layer in the view to the right. I'd like to be able to click on that eye button without selecting or activating the layer. Is that possible?
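My rows currently look roughly like this (simplified, and the model types are made up for the sketch); clicking the eye also changes the List selection, which is what I want to avoid:

import SwiftUI

struct Layer: Identifiable {
    let id = UUID()
    var name: String
    var isVisible = true
}

struct LayerList: View {
    @Binding var layers: [Layer]
    @Binding var selection: Layer.ID?

    var body: some View {
        List(selection: $selection) {
            ForEach($layers) { $layer in
                HStack {
                    Text(layer.name)
                    Spacer()
                    Button {
                        layer.isVisible.toggle()   // this click also selects the row
                    } label: {
                        Image(systemName: layer.isVisible ? "eye" : "eye.slash")
                    }
                    .buttonStyle(PlainButtonStyle())
                }
            }
        }
    }
}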
As of 11.3, DocumentGroup defaults to showing an Open panel on launch. From the release notes: "DocumentGroup apps now show an Open panel on launch, even when iCloud isn’t in use. (66446310)" Apparently it was considered a bug before that it didn't.
Thing is, I don't like this behavior and don't want it, especially while I'm working on my app. I want to automatically create a new document. Is there any way to set that?
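For reference, my scene is just the standard DocumentGroup setup (the document type here is a minimal placeholder); what I'm after is launching straight into a fresh untitled document instead of the Open panel:

import SwiftUI
import UniformTypeIdentifiers

// Minimal placeholder document so the scene below is self-contained.
struct MyDocument: FileDocument {
    static var readableContentTypes: [UTType] { [.plainText] }
    var text = ""

    init() {}

    init(configuration: ReadConfiguration) throws {
        text = String(decoding: configuration.file.regularFileContents ?? Data(), as: UTF8.self)
    }

    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        FileWrapper(regularFileWithContents: Data(text.utf8))
    }
}

@main
struct MyDocApp: App {
    var body: some Scene {
        DocumentGroup(newDocument: MyDocument()) { file in
            // I'd like launch to land here with a new untitled MyDocument,
            // rather than showing the Open panel first.
            TextEditor(text: file.$document.text)
        }
    }
}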