Is the hardware requirement any AMD GPU with more than 4 GB of VRAM? Or are there specific requirements for AMD GPUs, such as needing to be an AMD Radeon card?
If anyone does have this up and running, could you advise on the steps necessary to get the binary to build on Xcode 13 Beta 2? I tried removing the Combine output publisher, replacing it with the async approach shown in the WWDC21-10076 session, and removing the SampleOverlap parameter, to no avail.
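For anyone attempting the same migration, the general Combine-to-async conversion looks roughly like the sketch below. This is only a sketch: resultsPublisher is a hypothetical stand-in for whatever output publisher the sample actually exposes.

import Combine

// Hypothetical stand-in for the sample's Combine output publisher.
let resultsPublisher = PassthroughSubject<String, Error>()

// The async approach: every Combine publisher exposes an AsyncSequence
// via its .values property (available from the Xcode 13 SDKs onward).
func consumeResults() async {
    do {
        for try await result in resultsPublisher.values {
            print("result: \(result)")
        }
    } catch {
        print("publisher failed: \(error)")
    }
}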
This also resolved the issue for me. Worth noting that the folder @seamount advises going to may differ depending on your Xcode version (or OS version, or some other variable). The error shown in the canvas should indicate the folder path (most of the path matched @seamount's suggestion, with some slight differences on my end). As noted by others, my issue appeared after updating to Xcode 13.2.1 from Xcode 13.2 (perhaps worth mentioning that my Xcode 13.2 was installed via the Developer Portal; I deleted it, then installed 13.2.1 from the App Store). Either way, @seamount's suggestion worked, thank you very much!
Thanks, @Polyphonic! Great catch; sorry for that typo. Your approach is a great way to think about tackling a scenario like this, and it might be much simpler than what I was proposing, depending on the desired user experience.
@jlv From the WWDC 2023 session "Create a great spatial playback experience" (as can be found at this timestamp), it appears that VideoMaterial, as used in @PatrickDevX's code sample, supports RealityKit rendering, which in turn should support 3D/stereoscopic video encoded in MV-HEVC.
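For reference, the basic VideoMaterial pattern looks something like the sketch below (the file URL and plane dimensions are hypothetical; the relevant point is that VideoMaterial is simply backed by an AVPlayer):

import AVFoundation
import RealityKit

// Hypothetical URL to an MV-HEVC (stereoscopic) video file.
let videoURL = URL(fileURLWithPath: "/path/to/spatialVideo.mov")

// VideoMaterial is backed by an AVPlayer; RealityKit handles the rendering.
let player = AVPlayer(url: videoURL)
let videoMaterial = VideoMaterial(avPlayer: player)

// Apply the material to an entity (a plane here) and start playback.
let screen = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9))
screen.model?.materials = [videoMaterial]
player.play()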
@itsK My guess is that this code sample is intended to be embedded in a RealityView, which is where the content property comes from. It would likely be used in a SwiftUI view like:
import SwiftUI
import RealityKit

struct MyVideoView: View {
    var body: some View {
        RealityView { content in
            // @PatrickDevX's code here
        }
    }
}
(The formatting came out weird for this comment, sorry, but the code should work).
Appreciate your reply and innovation here! I am specifically looking for a local way to convert existing stereoscopic video files, leveraging an iOS, macOS, or visionOS API rather than a cloud-based service. The referenced WWDC session "Deliver video content for spatial experiences" provides great technical detail on what the expected files look like, but I haven't figured out how to tie it all together to do the conversion with something like AVAssetExportSession or an AVAssetWriter.
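In case it is useful as a starting point, the writer-side configuration for MV-HEVC appears to hinge on the VideoToolbox multiview compression keys plus AVAssetWriterInputTaggedPixelBufferGroupAdaptor (macOS 14/iOS 17/visionOS 1). The sketch below assumes those APIs; the dimensions and layer/view IDs are illustrative only, and the per-frame read/tag/append loop is described in comments rather than spelled out.

import AVFoundation
import VideoToolbox

// Sketch: configure an AVAssetWriterInput to encode two-layer MV-HEVC.
// The dimensions and layer/view IDs below are illustrative, not required values.
let outputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.hevc,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080,
    AVVideoCompressionPropertiesKey: [
        // Two video layers: 0 = base layer (left eye), 1 = right eye.
        kVTCompressionPropertyKey_MVHEVCVideoLayerIDs as String: [0, 1],
        kVTCompressionPropertyKey_MVHEVCViewIDs as String: [0, 1],
        kVTCompressionPropertyKey_MVHEVCLeftAndRightViewIDs as String: [0, 1],
        kVTCompressionPropertyKey_HasLeftStereoEyeView as String: true,
        kVTCompressionPropertyKey_HasRightStereoEyeView as String: true,
    ],
]

let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: outputSettings)

// Tagged pixel buffer groups carry the paired left/right frames per time stamp.
let adaptor = AVAssetWriterInputTaggedPixelBufferGroupAdaptor(
    assetWriterInput: writerInput,
    sourcePixelBufferAttributes: nil)

// From here, the general flow would be: read the source left/right frames
// (e.g. via AVAssetReader), tag each CVPixelBuffer with its video layer ID
// and stereo eye, group the pair into a CMTaggedBufferGroup, and append the
// group through the adaptor at each presentation time.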