I am trying to deploy our first Core ML model through the dashboard at https://ml.developer.apple.com/. As soon as I open the site, I get a message that reads "Your request is invalid.".
I click "+" to add a model collection, enter a collection ID, a description, and a single model ID, and then tap "Create". I get a message that reads "One of the fields was invalid.".
I have tried changing the IDs and the description, but nothing makes it work.
I have been trying this for hours. Could you please guide me on how to make it work?
I am trying to set the description of an image. The metadata tag needed to hold the description is of type .alternateText.
I create the child tag with the new description value:
let childTag = CGImageMetadataTagCreate(identifier.section.namespace, identifier.section.prefix, "[x-default]" as CFString, .string, value as CFTypeRef)
I then set the description tag like this:
let parentTag = CGImageMetadataTagCreate(identifier.section.namespace, identifier.section.prefix, identifier.tagKey, .alternateText, [childTag] as CFTypeRef)
let result = CGImageMetadataSetTagWithPath(metadata, nil, identifier.tagKey, parentTag)
However, when I write the image file, I get a runtime error and the operation fails:
XMP Error: AltText array items must have an xml:lang qualifier
So, I create the qualifier tag like this:
let qualifierTag = CGImageMetadataTagCreate("http://www.w3.org/XML/1998/namespace" as CFString, "xml" as CFString, "lang" as CFString, .string, "x-default" as CFTypeRef)
But I have not found a way to associate this qualifier tag with the child tag that holds the description value.
What is the way to do it?
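One workaround worth trying (an assumption based on the documented purpose of the API, not a verified fix) is to avoid building the AltText array by hand and instead let ImageIO create the XMP structure itself via CGImageMetadataSetValueMatchingImageProperty, which maps an Image I/O property to its XMP equivalent:

```swift
import ImageIO

// Hedged sketch: ask ImageIO to set the XMP value that corresponds to the
// TIFF ImageDescription property. The mapped dc:description entry should
// carry the required xml:lang qualifier automatically, since ImageIO builds
// the AltText structure itself. This is an assumption, not a confirmed fix.
func setDescription(_ value: String, on metadata: CGMutableImageMetadata) -> Bool {
    return CGImageMetadataSetValueMatchingImageProperty(
        metadata,
        kCGImagePropertyTIFFDictionary,
        kCGImagePropertyTIFFImageDescription,
        value as CFString
    )
}
```

If the resulting XMP is what you need, this sidesteps the qualifier problem entirely; inspect the written file's metadata to confirm the xml:lang qualifier is present.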
I want to find out which AVCaptureDevice.Format values are compatible with which AVVideoCodecType, so that I can enable or disable possible configurations in the app's UI. As of today, for example, 1080p @ 240 fps is not compatible with H.264, nor is 4K @ 60 fps. I only know this because Apple's Camera settings explain it in the choice between "High Efficiency" and "Most Compatible".
What is the sanctioned way to determine whether a certain video format can be used with a certain codec? The approach I am thinking of is a bit convoluted. To find the video codecs compatible with a video format, I need to:
- Create and configure an AVCaptureSession.
- Add an AVCaptureVideoDataOutput to the session.
- Set the AVCaptureDeviceInput's device active format.
- Query the AVCaptureVideoDataOutput's availableVideoCodecTypesForAssetWriter(writingTo:) to obtain the supported codecs (H.264, HEVC, ...).
On the iPhone 11, the wide-angle camera alone supports 17 420f video formats. Do I need to loop through all of them (set the active format, then query) to obtain the supported codecs for each, or is there a better way? Thanks!
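The brute-force enumeration described above can be sketched as follows. This is a hypothetical sketch, not a sanctioned approach: it assumes a configured device with camera access and only uses documented AVFoundation calls (lockForConfiguration, activeFormat, availableVideoCodecTypesForAssetWriter(writingTo:)).

```swift
import AVFoundation

// Hedged sketch: for each format of the given capture device, set it as the
// active format and ask the video data output which codecs an AVAssetWriter
// could use for a QuickTime (.mov) file. Must run on-device with camera access.
func codecSupport(for device: AVCaptureDevice) throws -> [(AVCaptureDevice.Format, [AVVideoCodecType])] {
    let session = AVCaptureSession()
    let input = try AVCaptureDeviceInput(device: device)
    let output = AVCaptureVideoDataOutput()

    session.beginConfiguration()
    session.addInput(input)
    session.addOutput(output)
    session.commitConfiguration()

    var results: [(AVCaptureDevice.Format, [AVVideoCodecType])] = []
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    for format in device.formats {
        device.activeFormat = format
        // Codecs available to an asset writer for this active format.
        let codecs = output.availableVideoCodecTypesForAssetWriter(writingTo: .mov)
        results.append((format, codecs))
    }
    return results
}
```

Note that switching the active format 17 times at launch has a cost, so if a cheaper per-format query exists, it would clearly be preferable to this loop.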
Until beta 5, creating an NSDiffableDataSourceSnapshot worked for me like this:

let snapshot = NSDiffableDataSourceSnapshot<Section, Photo>()

In beta 6 and 7, it crashes when it reaches that point with a "Thread 1: signal SIGABRT" message. Currently, Section and Photo are defined like so:

enum Section {
    case main
}

struct Photo: Hashable {
    let identifier: String

    static func == (lhs: Photo, rhs: Photo) -> Bool {
        return lhs.identifier == rhs.identifier
    }
}

Any hints on how to resolve this?
When will Catalyst gain SwiftUI support? Will it be available before the September OS releases?
I would like to know the sanctioned way to create the equivalent of a UISplitViewController on iPad using SwiftUI, i.e. what Xcode's templates call a "Master-Detail" app. Thank you.
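One approach (a sketch, with illustrative names, based on SwiftUI as shipped in iOS 13) is a NavigationView containing a list of NavigationLinks plus a placeholder detail view; on iPad this adapts to a split-view-style layout similar to UISplitViewController:

```swift
import SwiftUI

// Hedged sketch of a master-detail layout in SwiftUI (iOS 13 era APIs).
// All view and item names here are illustrative.
struct MasterDetailView: View {
    let items = ["First", "Second", "Third"]

    var body: some View {
        NavigationView {
            // Master column: a list whose rows push a detail view.
            List(items, id: \.self) { item in
                NavigationLink(destination: Text("Detail for \(item)")) {
                    Text(item)
                }
            }
            .navigationBarTitle("Items")

            // Placeholder shown in the detail pane before a selection is made.
            Text("Select an item")
        }
    }
}
```

On iPhone the same code collapses to a navigation stack, which matches how UISplitViewController behaves in compact width.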