
Reply to Connecting xcode with real my visionpro
That shouldn't be a problem. I'm in a similar situation, and managed to get it set up just fine. What steps did you take to get the device working in Xcode? The following steps should get you started:

Vision Pro: Navigate to Settings > General > Remote Devices and remove any computer listed.

Xcode: Navigate to Window > Devices and Simulators. Find the Vision Pro in the list on the left-hand side of this window. Select it and hit pair.

Vision Pro: Enter the code provided by Xcode into your Vision Pro when prompted.

When you attempt to compile and run, you'll likely get a message about needing to enable developer mode.

Vision Pro: Navigate to Settings > Privacy & Security. Find and enable "Developer Mode". Restart the Vision Pro.

Bear in mind that using a different Apple ID will mean you can't share your Mac screen to the Vision Pro, but everything else should work fine.
Feb ’24
Reply to Fixed aspect ratio in VisionOS view
Hi @alexvznk, This can't be solved with SwiftUI alone, but it can be achieved by setting the resizing restriction preference on the UIWindowScene using requestGeometryUpdate(_:errorHandler:). For example, this app will have a default size of 1920x1080, and resizing will be restricted to the 16:9 aspect ratio.

@main
struct TestApp: App {
    var body: some Scene {
        WindowGroup {
            Color.red
                .onAppear {
                    guard let windowScene = UIApplication.shared.connectedScenes.first as? UIWindowScene else { return }
                    windowScene.requestGeometryUpdate(.Vision(resizingRestrictions: UIWindowScene.ResizingRestrictions.uniform))
                }
        }
        .defaultSize(width: 1920, height: 1080)
    }
}

You can see this API being used in the Happy Beam sample code from WWDC 23.
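If you want to know when the system rejects a geometry request, you can use the variant that takes an error handler. A minimal sketch, assuming the same scene setup as above:

windowScene.requestGeometryUpdate(
    .Vision(resizingRestrictions: .uniform)
) { error in
    // Called only if the system cannot apply the requested geometry.
    print("Geometry update failed: \(error.localizedDescription)")
}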
Feb ’24
Reply to Share multiple Transferables with ShareLink
Ah, thank you @aronskaya. That works. It was a bit of a challenge to work out exactly what the compiler wanted, but for reference, something like this works:

/// A transferable that generates the image to be shared asynchronously. A preview
/// message, image and icon are provided here so they can be used to populate the
/// sharing UI, but they should not be generated asynchronously.
///
struct MyTransferable: Transferable {

    let renderer: Renderer

    var message: String {
        // TODO: Provide a real message string here.
        //
        return "Render"
    }

    var previewImage: Image {
        // TODO: Provide a real preview image here.
        //
        return Image("previewImage")
    }

    var previewIcon: Image {
        // TODO: Provide a real preview icon here.
        //
        return Image("previewIcon")
    }

    static var transferRepresentation: some TransferRepresentation {
        DataRepresentation(exportedContentType: .png) { transferable in
            return try await transferable.renderer.render()
        }
    }
}

/// A view for testing the sharing link.
///
struct MyView: View {

    private var transferables: [MyTransferable] {
        [MyTransferable(renderer: Renderer()), MyTransferable(renderer: Renderer())]
    }

    var body: some View {
        ShareLink<[MyTransferable], Image, Image, DefaultShareLinkLabel>(items: transferables) { transferable in
            SharePreview(transferable.message, image: transferable.previewImage, icon: transferable.previewIcon)
        }
    }
}

Thanks again
Feb ’24
Reply to No feedback during App processing
There's usually no feedback until you're either rejected or accepted. Upon approval, you will receive an email to the addresses associated with your developer account with the subject "Your submission was accepted". When you're rejected, or App Store review encounters an issue, you'll usually receive an email with the subject "We noticed an issue with your submission". Then you need to visit App Store Connect and read the messages to work out what the problem is, but they're usually not that descriptive. Generally though, once you throw it over the wall to Apple, it's a waiting game until you hear back.
Jan ’24
Reply to Array of UIImages
As you can see from the error Xcode/Playgrounds is providing, the problem here is that your function getElements() is declared to return a UIImage, yet you are returning a String. Additionally, the initialiser UIImage(named:) expects a string, but you're passing in an image. This can be fixed in one of two ways:

1 - Rewrite the getElements() function to actually return a UIImage. Of course, this will likely have to be an optional, as randomElement() returns an optional element to handle cases where the array is empty.

func getElements() -> UIImage? {
    let elements = ["rectangle", "circle", "triangle"]
    guard let randomElement = elements.randomElement() else { return nil }
    return UIImage(named: randomElement)
}

And then at the point of usage, you can just do something like:

cell.contents = getElements()?.cgImage

2 - Alternatively, you could update your function getElements() to return a string. This is basically the same, but it moves the logic to test the optional outside of the getElements() function.

func getElements() -> String? {
    let elements = ["rectangle", "circle", "triangle"]
    return elements.randomElement()
}

And then at the point of usage, you have to test the optional and use it to initialise the image:

if let element = getElements() {
    cell.contents = UIImage(named: element)?.cgImage
}

Personally, I prefer option one, as it removes the burden of checking the nil string from the point of usage. But it all depends on what you're trying to achieve. I would also perhaps consider renaming the function to something like randomImage(), as getElements() is perhaps a little unclear, but that's just personal preference. Hope that helps.
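For reference, a minimal sketch of option one with the suggested rename applied (the shape image names are just the placeholders from the example above):

/// Returns a random shape image from the asset catalog, or nil if none is available.
func randomImage() -> UIImage? {
    let elements = ["rectangle", "circle", "triangle"]
    guard let randomElement = elements.randomElement() else { return nil }
    return UIImage(named: randomElement)
}

// At the point of usage:
cell.contents = randomImage()?.cgImage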
Jan ’24
Reply to VisionOS Destination question
The level of immersion is controlled by the immersionStyle view modifier applied to the ImmersiveSpace. The supported levels of immersion are mixed, progressive and full. More information on the supported immersion styles can be found here: https://developer.apple.com/documentation/swiftui/immersionstyle. In short, the mixed immersion style means you can show your content in the real world; the progressive immersion style allows a background to be placed to occlude parts of the real world, with the user turning the Digital Crown to control the level of immersion; and finally the full immersion style uses your content to fully occlude the real world. As you can imagine, the progressive mode used by the Destination Video app means that the video partially occludes the world, but doesn't cover the full 360 degrees around the user by default. You can change this behaviour by opening the DestinationVideo.swift file in the project (where the @main entry point is defined), and replacing the following code:

.immersionStyle(selection: .constant(.progressive), in: .progressive)

with...

.immersionStyle(selection: .constant(.full), in: .full)

However, unless you need to enforce full immersion, for example for the purposes of a game, it is perhaps better to support progressive immersion and allow the user to decide by turning the Digital Crown to control the level of immersion. For more information, I'd suggest the WWDC 23 session "Getting started with building apps for spatial computing".
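If you'd rather allow the style to change at runtime instead of hard-coding it with .constant, you can bind the selection to state. A minimal sketch, where ImmersiveView and the "player" identifier are placeholders for your own space content:

@main
struct TestApp: App {
    // The currently selected immersion style. Starting in progressive lets the
    // user adjust the level of immersion with the Digital Crown.
    @State private var immersionStyle: ImmersionStyle = .progressive

    var body: some Scene {
        ImmersiveSpace(id: "player") {
            ImmersiveView()
        }
        .immersionStyle(selection: $immersionStyle, in: .progressive, .full)
    }
}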
Jan ’24
Reply to How to use render 3D models inside a USDZ into a ModelEntity
Loading as an Entity instead of a ModelEntity will stop the model being collapsed into a single layer, but beware: Entity.load is blocking, and you'd be much better off using the Entity(named:in:) async throws initialiser, as it's asynchronous. For example:

RealityView { content in
    do {
        let chessPiecesEntity = try await Entity(named: "ChessPiecesModel")

        guard let chessPiecesModelEntity = chessPiecesEntity as? ModelEntity else {
            throw Some.error
        }

        // Prepare the model entity if you need to set materials.

        content.add(chessPiecesEntity)
    } catch {
        print(error.localizedDescription)
    }
}
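Note that when the USDZ keeps its hierarchy, the individual meshes usually live on child entities rather than the root. A minimal sketch of finding a named child and changing its material; the entity name "Pawn" and the SimpleMaterial are assumptions for illustration:

RealityView { content in
    do {
        let chessPiecesEntity = try await Entity(named: "ChessPiecesModel")

        // Look up a child entity by the name it was given in the USDZ / Reality Composer Pro.
        if let pawn = chessPiecesEntity.findEntity(named: "Pawn") as? ModelEntity {
            pawn.model?.materials = [SimpleMaterial(color: .red, isMetallic: false)]
        }

        content.add(chessPiecesEntity)
    } catch {
        print(error.localizedDescription)
    }
}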
Jan ’24
Reply to displacement map specification
I am curious why Reality Composer Pro is exporting a displacement map, as my understanding was that RealityKit doesn't support displacement maps for PBR textures. Was this for a surface shader? If you're asking about EXR, this is the OpenEXR file format: https://openexr.com. It is essentially a high-dynamic range image format stored in a linear colorspace. For example, it can store values greater than the 0.0...1.0 range, which is not usually possible with low-dynamic range formats such as JPEG. Additionally, unlike image formats that store in a non-linear colorspace such as Adobe RGB, a linear colorspace means that the numerical intensity of a pixel corresponds proportionally to its perceived intensity; in other words, a value of 0.5 is half as intense/bright as a value of 1.0, and a quarter as intense/bright as a value of 2.0...etc. This is not the case with the non-linear colorspaces often found in formats such as JPEG. EXR is often used as the image format for USD because they are both open source formats that originated in the visual effects community. You should be able to open EXR files with either Xcode or the Preview app on macOS.
Jan ’24
Reply to Developer cable for Vision Pro?
Also interested in this. Experience with the device so far has required a cable to connect to a Mac, but the devices we've ordered don't appear to include the cable. I'm really hoping it isn't wireless debugging only, because despite being improved in Xcode 15, wireless debugging is still extremely slow and a painful experience. Something as big as a Vision Pro headset should really have an option to connect a cable - it's not a watch.
Jan ’24
Reply to How turn my function into async pattern
I assume you cannot modify the API entirely? If you have full control, you can simply do something like:

func doWork(_ someValue: Int) async {
    // Long time of work
}

If you still want to maintain the completion handler API, but wrap it with an async version, you should take a look at continuations. There's an article on hackingwithswift.com that might help: https://www.hackingwithswift.com/quick-start/concurrency/how-to-use-continuations-to-convert-completion-handlers-into-async-functions. Using this approach, your existing function could be wrapped by a new async version quite easily.

func doWork(_ someValue: Int, completionHandler: @escaping () -> Void) {
    let q = DispatchQueue(label: "MyLabel")
    q.async {
        // Long time of work
        completionHandler()
    }
}

func doWork(_ someValue: Int) async {
    await withCheckedContinuation { continuation in
        // Call the existing completion handler API.
        doWork(someValue) {
            // Resume the continuation to exit the async function.
            continuation.resume()
        }
    }
}

Care should be taken to ensure you always call the completion handler in the original API, otherwise you could end up in a situation where your continuation never resumes. You'll usually know this has happened if you see something like this in the console: SWIFT TASK CONTINUATION MISUSE: doWork(_:) leaked its continuation!. Continuations can also return values, and even throw errors if needed.

If you're curious about other functions, you can often use Xcode to refactor a function to be asynchronous, or generate an async wrapper (although mileage may vary depending on the function's complexity). You can do this by selecting the function name, right-clicking and choosing "Refactor", and then picking either "Convert Function to Async" or "Add Async Wrapper". The code generated from "Add Async Wrapper" is as follows, which isn't far from the example above:

@available(*, renamed: "doWork(_:)")
func doWork(_ someValue: Int, completionHandler: @escaping () -> Void) {
    let q = DispatchQueue(label: "MyLabel")
    q.async {
        // Long time of work
        completionHandler()
    }
}

func doWork(_ someValue: Int) async {
    return await withCheckedContinuation { continuation in
        doWork(someValue) {
            continuation.resume(returning: ())
        }
    }
}

Hope that helps. -Matt
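As a follow-up to the point about returning values and throwing errors: a minimal sketch of wrapping a hypothetical completion handler that delivers a Result, using withCheckedThrowingContinuation. The fetchValue(completionHandler:) API here is assumed purely for illustration.

import Foundation

// Existing callback-based API (assumed for illustration).
func fetchValue(completionHandler: @escaping (Result<Int, Error>) -> Void) {
    DispatchQueue(label: "MyLabel").async {
        completionHandler(.success(42))
    }
}

// Async wrapper that returns the value or rethrows the error.
func fetchValue() async throws -> Int {
    try await withCheckedThrowingContinuation { continuation in
        fetchValue { result in
            // resume(with:) accepts a Result directly.
            continuation.resume(with: result)
        }
    }
}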
Jan ’24
Reply to Save images in SwiftData
No, there isn't, I'm afraid. You have to convert to Data. I would caution against using loadTransferable, as the Transferable type is intended for short-term storage (copy and paste, drag and drop...etc), and there is no guarantee how that API may change in the future. If you're storing data for extended periods, you should try and ensure consistency of the kind of data stored.

In our app, we have a generic SwiftData entity for storing an image. Any other entity that wants to store an image can maintain a relationship to one of these entities. It saves us having to add logic everywhere we want to store images. The generic image entity takes an NSImage, UIImage or CGImage, grabs the data in PNG format and stores it in the model. We then have extensions on those types to initialise them directly from the SwiftData entity. It makes things a little cleaner, but it's essentially the same thing, especially considering we store images in multiple places in our model.

An extension on NSImage to get the PNG data:

extension NSImage {

    /// Returns the PNG data for the `NSImage` as a Data object.
    ///
    /// - Returns: A data object containing the PNG data for the image, or nil
    ///   in the event of failure.
    ///
    public func pngData() -> Data? {
        guard let cgImage = self.cgImage(forProposedRect: nil, context: nil, hints: nil) else { return nil }
        let bitmapRepresentation = NSBitmapImageRep(cgImage: cgImage)
        return bitmapRepresentation.representation(using: .png, properties: [:])
    }
}

The basic data model for our image. We store a type and some data, which is the PNG data:

@Model
final class ImageModel {

    /// The type of the image.
    ///
    /// We use different images for different things, so storing an image type
    /// lets us differentiate use.
    ///
    var type: ImageType = ImageType.unknown

    /// The image data, stored as a PNG.
    ///
    /// It is tagged with externalStorage to allow the large binary data to be stored
    /// externally.
    ///
    @Attribute(.externalStorage) var pngData: Data? = nil

    /// Initialize the image model.
    ///
    /// - Parameters:
    ///   - type: The type of image the image model represents.
    ///   - pngData: The image data in png format.
    ///
    init(type: ImageType, pngData: Data) {
        self.type = type
        self.pngData = pngData
    }

#if canImport(AppKit)

    /// Initialize the image model from an `NSImage`.
    ///
    /// - Parameters:
    ///   - type: The type of the image the image model represents.
    ///   - image: The `NSImage` to store in the image model.
    ///
    convenience init(type: ImageType, image: NSImage) throws {
        guard let pngData = image.pngData() else {
            throw GenericError.failed("Unable to get PNG data for image")
        }
        self.init(type: type, pngData: pngData)
    }

#elseif canImport(UIKit)

    /// Initialize the image model from a `UIImage`.
    ///
    /// - Parameters:
    ///   - type: The type of the image the image model represents.
    ///   - image: The `UIImage` to store in the image model.
    ///
    convenience init(type: ImageType, image: UIImage) throws {
        guard let pngData = image.pngData() else {
            throw GenericError.failed("Unable to get PNG data for image")
        }
        self.init(type: type, pngData: pngData)
    }

#endif
}

Given an ImageModel, this initialises a UIImage or NSImage from the data stored in the model:

#if canImport(UIKit)

import UIKit

extension UIImage {

    /// Initialize a new `UIImage` using data from an `ImageModel`.
    ///
    /// - Parameters:
    ///   - model: The image model to load the image from.
    ///
    convenience init?(loadingDataFrom model: ImageModel) {
        guard let data = model.pngData, data.isEmpty == false else { return nil }
        self.init(data: data)
    }
}

#elseif canImport(AppKit)

import AppKit

extension NSImage {

    /// Initialize a new `NSImage` using data from an `ImageModel`.
    ///
    /// - Parameters:
    ///   - model: The image model to load the image from.
    ///
    convenience init?(loadingDataFrom model: ImageModel) {
        guard let data = model.pngData, data.isEmpty == false else { return nil }
        self.init(data: data)
    }
}

#endif
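For completeness, a short usage sketch under the same assumptions: ImageType and the ImageModel entity come from the post above, while the .thumbnail case and the ModelContext you insert into are hypothetical placeholders for your own app.

import SwiftData
import UIKit

// Storing a UIImage in SwiftData via the generic image entity.
func store(_ image: UIImage, in modelContext: ModelContext) throws {
    let model = try ImageModel(type: .thumbnail, image: image)
    modelContext.insert(model)
}

// Reading it back later.
func image(from model: ImageModel) -> UIImage? {
    UIImage(loadingDataFrom: model)
}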
Jan ’24