
Reply to How/where to encode MV-HEVC stereo video?
Also curious about this. The visionOS (beta 3) SDK does include a sample Spatial Photo in the Photos app, but I've not come across any sample MV-HEVC videos or tools to convert existing media into MV-HEVC. It's worth noting that there are some AVAssetExportSession presets that support MV-HEVC (such as AVAssetExportPresetMVHEVC1440x1440), but I've not been able to get them to work with any media; the export just throws an error off the bat.
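For anyone who wants to experiment, here's a rough sketch of how you might try that preset with AVAssetExportSession (my own, untested beyond what's described above, so expect the same errors; the source and output URLs are placeholders);

import AVFoundation

// Rough sketch: attempt an MV-HEVC export using the preset mentioned above.
// Whether it succeeds depends on the source asset and the SDK build.
func exportMVHEVC(from sourceURL: URL, to outputURL: URL) async throws {
    let asset = AVURLAsset(url: sourceURL)

    // Check whether the MV-HEVC preset is even compatible with this asset first.
    let isCompatible = await AVAssetExportSession.compatibility(
        ofExportPreset: AVAssetExportPresetMVHEVC1440x1440,
        with: asset,
        outputFileType: .mov
    )
    guard isCompatible,
          let session = AVAssetExportSession(
            asset: asset,
            presetName: AVAssetExportPresetMVHEVC1440x1440
          ) else {
        throw CocoaError(.featureUnsupported)
    }

    session.outputURL = outputURL
    session.outputFileType = .mov
    await session.export()
    if let error = session.error { throw error }
}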
Sep ’23
Reply to How to show immersive view as background of already built iOS App ?
Generally speaking, you'll first want to make sure that you've added Apple Vision as a supported destination in your project's General settings. It wasn't 100% clear whether you were building for "Apple Vision (Designed for iPad)" or "Apple Vision (visionOS)". The former just runs your now-visionOS-compatible app in a 2D window, whereas the latter lets you add the necessary customizations for visionOS and create a more immersive 3D experience, which is what you'll want.

I think the answer to your question depends on how your app is displaying its window. Ideally, you'd want to use an ImmersiveSpace that embeds your main window, then set your 360-degree video as the background of the ImmersiveSpace. Assuming you're using the SwiftUI app lifecycle, it might help to see what your @main entry point looks like. For the sake of brevity, let's presume that your entry point is something like this, where BuildMeditationView() is the SwiftUI view you showed in your screenshot;

import SwiftUI

@main
struct MyMeditationApp: App {
    var body: some Scene {
        WindowGroup {
            BuildMeditationView()
        }
    }
}

You could modify this so that your app launches an ImmersiveSpace as its default scene on visionOS (note that ImmersiveSpace is a Scene, so it sits alongside your WindowGroup rather than inside it);

import SwiftUI

@main
struct MyMeditationApp: App {
    var body: some Scene {
        #if os(visionOS)
        ImmersiveSpace(id: "BuildMeditationSpace") {
            ImmersiveMeditationBuilder()
        }
        .immersionStyle(selection: .constant(.full), in: .full)
        #else
        WindowGroup {
            BuildMeditationView()
        }
        #endif
    }
}

You can learn more about ImmersiveSpace in the developer docs, which go into detail on how to modify your app's Info.plist if you want the app to open an ImmersiveSpace at launch, as well as what the different immersion styles are.

You could then create a new SwiftUI view, ImmersiveMeditationBuilder, that serves as the content of your ImmersiveSpace, and embed your original view into it. There are a few different ways to do this; one of my preferred ways is to add your original SwiftUI view as an attachment to a RealityView;

import SwiftUI
import RealityKit
import AVFoundation

struct ImmersiveMeditationBuilder: View {
    var body: some View {
        ZStack {
            RealityView { content, attachments in
                // Set up your video background file and player
                let asset = AVURLAsset(url: Bundle.main.url(forResource: "myVideoBackgroundFile", withExtension: "mp4")!)
                let playerItem = AVPlayerItem(asset: asset)
                let player = AVPlayer()

                // Attach the video material to a large sphere that will serve as the skybox.
                let entity = Entity()
                entity.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1000),
                    materials: [VideoMaterial(avPlayer: player)]
                ))

                // Ensure the texture image points inward at the viewer.
                entity.scale *= .init(x: -1, y: 1, z: 1)

                // Add the background to the main content.
                content.add(entity)

                // Add the BuildMeditationView attachment
                if let attachment = attachments.entity(for: "myMeditationBuilderView") {
                    content.add(attachment)
                }

                // Play the video background
                player.replaceCurrentItem(with: playerItem)
                player.play()
            } update: { content, attachments in
                // Update if necessary when the view is reloaded
            } attachments: {
                Attachment(id: "myMeditationBuilderView") {
                    BuildMeditationView()
                }
            }
        }
    }
}

This assumes you want the view you showed in your screenshot to be the main view at your app's launch. If not, you could also launch your ImmersiveSpace based on some user action, which is detailed in the Presenting Windows and Spaces developer article.
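For completeness, here's a small sketch (my own, not from the article) of that alternative: opening the ImmersiveSpace from a user action via the openImmersiveSpace environment action, where "BuildMeditationSpace" matches the ImmersiveSpace id used above;

import SwiftUI

// Sketch: a button somewhere in your 2D window that opens the immersive space on demand.
struct StartMeditationButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Begin Meditation") {
            Task {
                switch await openImmersiveSpace(id: "BuildMeditationSpace") {
                case .opened:
                    break
                default:
                    print("Immersive space could not be opened.")
                }
            }
        }
    }
}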
Aug ’23
Reply to Unable to run the sample code
The sample code appears to have been updated (https://developer.apple.com/documentation/RealityKit/guided-capture-sample) and is now building successfully for me on iOS 17 Beta 8/Xcode 15 Beta 8. I've only tested briefly, but I was able to successfully capture photos and create a 3D model on-device again.
Aug ’23
Reply to VisionOS used outdoors? Is 2D video streaming standard with AVP?
The YouTube video, at the timestamp you provided, appears to show someone playing back a video captured on an Apple Vision Pro using the device's 3D video capture functionality. As mentioned in the Create a Great Spatial Playback Experience and Deliver Video Content for Spatial Experiences WWDC sessions, you can play back 2D or 3D video using an AVPlayerViewController, or with a RealityKit entity that has a VideoPlayerComponent. Traditional 2D video can likely be played back in the same formats that AVPlayerViewController currently supports, while video that contains depth uses the MV-HEVC format. Apple Vision Pro does not give developers direct access to the cameras on the device. While you can consume and watch livestreams on the device using the video playback mechanisms in those WWDC sessions, livestreaming what Apple Vision Pro sees is not possible.
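As a quick illustration of the RealityKit route (my own sketch, not from those sessions; the file name is a placeholder), you can wrap an AVPlayer in a VideoPlayerComponent and attach it to an entity;

import AVFoundation
import RealityKit

// Sketch: create an entity that plays back a (2D or MV-HEVC) video via VideoPlayerComponent.
// "myVideo.mov" is a placeholder for whatever bundled or downloaded file you have.
func makeVideoPlaybackEntity() -> Entity? {
    guard let url = Bundle.main.url(forResource: "myVideo", withExtension: "mov") else { return nil }
    let player = AVPlayer(url: url)

    let entity = Entity()
    entity.components.set(VideoPlayerComponent(avPlayer: player))

    player.play()
    return entity
}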
Jun ’23
Reply to Immersion space becoming inactive on stepping outside of the "system-defined boundary"
I believe this was asked during the WWDC 2023 Slack sessions, and it was mentioned that the app will automatically change scene phases if the system determines that the user is walking around, as that could pose a safety hazard. While it's tough to say without testing in the simulator or on a physical device, you'll likely want to make sure your experience handles that phase change gracefully. But I do not think a "walk around" experience is directly supported, due to those safety concerns.
Jun ’23
Reply to Utilizing Point Cloud data from `ObjectCaptureSession` in WWDC23
My understanding from the WWDC session Meet Object Capture for iOS, as well as from talking to some of the Object Capture engineers during the WWDC 2023 Labs, is that the point cloud data captured during an ObjectCaptureSession is embedded in the HEIC files and automatically parsed during a macOS PhotogrammetrySession. So as long as you provide the input (images) directory to the PhotogrammetrySession when you instantiate it, alongside the optional checkpoint folder, the macOS PhotogrammetrySession should automatically take advantage of the point cloud data for improved reconstruction times and performance.

For what it's worth, in the Taking Pictures for 3D Object Capture sample code, the depth data intended to be provided to the PhotogrammetrySession isn't embedded in the HEIC file, but saved as a separate depth mask in a TIFF file. Granted, HEIC files are capable of including depth information, so it may be that PhotogrammetrySession can take a HEIC file with embedded depth data (and, newly this year, embedded point cloud data); I just wanted to make that distinction. At least in that sample code, each capture includes a HEIC file, a TIFF depth file, and a TXT gravity file. I've been passing an entire folder containing all of those files into the PhotogrammetrySession, and it seems to leverage all of the provided data accordingly.

To your latter question, while I'm not entirely sure that the PhotogrammetrySession.PointCloud array returned from a PhotogrammetrySession.Request is actually the same thing that ObjectCapturePointCloudView uses, my guess would be yes. It seems that ObjectCapturePointCloudView is a convenience SwiftUI view that beautifully renders the underlying point cloud data, which would match the point cloud data returned in the PhotogrammetrySession.Request.
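If it helps, here's roughly what the macOS side looks like in my setup (a sketch; the folder and output URLs are placeholders), passing both the images folder and the checkpoint folder into the PhotogrammetrySession;

import RealityKit

// Sketch: reconstruct a model on macOS from an ObjectCaptureSession images folder,
// reusing the checkpoint folder produced on-device. All URLs are placeholders.
func reconstructModel(imagesFolder: URL, checkpointFolder: URL, outputURL: URL) async throws {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.checkpointDirectory = checkpointFolder

    let session = try PhotogrammetrySession(input: imagesFolder, configuration: configuration)
    try session.process(requests: [.modelFile(url: outputURL)])

    for try await output in session.outputs {
        switch output {
        case .requestComplete(let request, let result):
            print("Completed \(request): \(result)")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        case .processingComplete:
            print("All requests finished.")
        default:
            break
        }
    }
}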
Jun ’23
Reply to How to set priority for App Clip Experiences
I'm not an expert on this, but is there any reason to have three App Clip Experiences set up for what look like very similar URLs? I could understand having a unique App Clip Experience set up for https://example.com and https://example.com/subpath, but I'm not sure why you'd need an additional App Clip Experience for https://example.com/subpath?type=x (unless the x parameter represents a unique business/location/event, so that an invocation URL like https://example.com/subpath?type=123 should get a custom App Clip Experience, while something like https://example.com/subpath?type=456 should fall back to https://example.com/subpath).

In the Configuring the launch experience of your App Clip docs, specifically in the Take advantage of URL prefix matching section, the suggestion is to register as few URLs as possible so that you can take advantage of prefix matching. Depending on your needs, I'd probably suggest registering just one App Clip Experience at https://example.com; when you invoke your App Clip from a QR code, the App Clip receives the full invocation URL (in your AppDelegate, SceneDelegate, or App as an NSUserActivity), and you can then parse the URL's path and query parameters to customize the experience.

In my opinion, I'd;

1. Register one App Clip Experience as https://example.com and set up the AASA file accordingly.
2. Build out your QR codes/App Clip Codes with the full URL you need (https://example.com/subpath?type=123, for example).
3. Handle the proper experience in-app when you receive the NSUserActivity, parsing the URL for its path and query parameters and branching accordingly (see the sketch below).

If that doesn't work for your use case, note that the article says, "The system then chooses the App Clip experience with the URL that has the most specific matching prefix." In theory, the matching should occur in the order you have listed in your post, since the subpath with the parameters is more specific than the subpath alone, which in turn is more specific than the base URL.

Also worth considering: if you're using something like https://example.com/subpath?type=x for a unique business or other entity that you want a custom App Clip card for, you might set up a subdomain like somebusiness.example.com that redirects to https://example.com/subpath?type=x (either server-side or handled in-app), so you don't have to register three very similar URLs and can separate them more cleanly.
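To make step 3 concrete, here's a rough sketch (my own, with a hypothetical ContentView) of receiving the invocation URL in a SwiftUI-lifecycle App Clip and pulling out the type query parameter;

import SwiftUI

// Hypothetical placeholder view that varies based on the parsed "type" parameter.
struct ContentView: View {
    let experienceType: String?

    var body: some View {
        Text(experienceType.map { "Type: \($0)" } ?? "Default experience")
    }
}

@main
struct MyAppClip: App {
    @State private var experienceType: String?

    var body: some Scene {
        WindowGroup {
            ContentView(experienceType: experienceType)
                .onContinueUserActivity(NSUserActivityTypeBrowsingWeb) { activity in
                    // The full invocation URL, e.g. https://example.com/subpath?type=123
                    guard let url = activity.webpageURL,
                          let components = URLComponents(url: url, resolvingAgainstBaseURL: true)
                    else { return }
                    experienceType = components.queryItems?.first(where: { $0.name == "type" })?.value
                }
        }
    }
}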
Apr ’23
Reply to How to fetch an image in the Live Activity (ActivityKit)
As far as I can tell, the Live Activity itself is not capable of performing networking tasks, so AsyncImage is not likely to be suitable. Since a Live Activity is more aligned with how a Widget works, the approach of sharing images between a host/parent app and a Widget Extension seems to be a viable way to display images on the Lock Screen/Dynamic Island with ActivityKit.

Data can be shared between a host/parent app and a Widget Extension by taking advantage of App Groups (you can find this under the Signing & Capabilities tab of your target's settings in Xcode). In your host/parent target, enable App Groups and choose one of your existing App Groups (or create a new one). Do the same for your Widget Extension target, thereby giving the host/parent app and the Widget Extension a shared App Group container they can use to exchange data.

Then, in your host/parent app, you can add or modify a method that pre-fetches an image. For example, something like;

private func downloadImage(from url: URL) async throws -> URL? {
    guard var destination = FileManager.default.containerURL(
        forSecurityApplicationGroupIdentifier: "group.com.example.liveactivityappgroup")
    else { return nil }
    destination = destination.appendingPathComponent(url.lastPathComponent)

    guard !FileManager.default.fileExists(atPath: destination.path()) else {
        print("No need to download \(url.lastPathComponent) as it already exists.")
        return destination
    }

    let (source, _) = try await URLSession.shared.download(from: url)
    try FileManager.default.moveItem(at: source, to: destination)
    print("Done downloading \(url.lastPathComponent)!")
    return destination
}

In my example here, I take in a URL (say, of an image on the internet), check that the App Group container exists for this target, append the file name to the App Group container's URL, and either return that URL if the image has already been downloaded, or download the image and move it into place.

Once you have your image downloaded and stored in the App Group container, your Widget Extension/Live Activity can take advantage of it. For example, in the ActivityAttributes (or ActivityAttributes.ContentState) you have defined for your Live Activity, you could have a var imageName: String that gets set when you create/update your Live Activity. Since that value is passed to the Widget Extension via the context, you can do the reverse in your Widget Extension/Live Activity and render the image. For example, in your Widget Extension/Live Activity's code, you could add something like;

if let imageContainer = FileManager.default.containerURL(
    forSecurityApplicationGroupIdentifier: "group.com.example.liveactivityappgroup")?
    .appendingPathComponent(context.state.imageName),
   let uiImage = UIImage(contentsOfFile: imageContainer.path()) {
    Image(uiImage: uiImage)
        .resizable()
        .aspectRatio(contentMode: .fill)
        .frame(width: 50, height: 50)
}

This checks that the Widget Extension can access the App Group container, appends the image name passed in the context, creates a UIImage, and, if successful, renders it using SwiftUI's Image.
Oct ’22
Reply to Changing the live activity without push notification
This presumably must be possible in some form, as the Timer app demonstrates it natively on iOS 16.1. The timer shows a countdown on the Lock Screen and Dynamic Island, and updates itself once the countdown hits zero. It seems unlikely that this is happening through background tasks, as those are not guaranteed to run at a specified time, and the behavior continues to work with Airplane Mode on, so it is not networking/remote-push based. My thoughts thus far are either to implement an internal timer (via Combine, SwiftUI, or UIKit) that checks whether a certain condition is met (e.g., it fires once a second and checks whether a date has been reached or a threshold passed) and then initiates the Live Activity update, or that the Apple Timer app is using some sort of internal/private entitlement that third-party developers do not have access to. Most of the example use cases for Live Activities do seem to be activities that update from the network (e.g., sports scores, rideshare updates, food delivery updates), where updates wouldn't be driven by the system and exact timing is less relevant. Hopeful that someone from Apple can comment more, as there seems to be no way to update a Live Activity when a condition is met or at a guaranteed time.
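For what it's worth, here's the shape of the "internal timer" idea I mentioned, using the current ActivityKit APIs (a sketch with a hypothetical CountdownAttributes type; it only works while the host app is actually running, which is part of why I'm hoping for a better answer);

import ActivityKit
import Foundation

// Hypothetical attributes for a countdown-style Live Activity.
struct CountdownAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable {
        var endDate: Date
        var isFinished: Bool
    }
}

// Sketch: poll once a second from the host app and push a final update
// once the countdown passes its end date. Only runs while the app is alive.
final class CountdownUpdater {
    private var timer: Timer?

    func watch(_ activity: Activity<CountdownAttributes>) {
        timer = Timer.scheduledTimer(withTimeInterval: 1, repeats: true) { [weak self] _ in
            let state = activity.contentState
            guard Date() >= state.endDate, !state.isFinished else { return }
            self?.timer?.invalidate()

            Task {
                var finished = state
                finished.isFinished = true
                await activity.update(using: finished)
            }
        }
    }
}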
Sep ’22