Has there been an adjustment in the maximum number of DrawableQueues that can be swapped in for textures in visionOS 2? Or in the total amount of RAM a scene is allowed to use?
I have been having a difficult time getting more than one DrawableQueue to appear, even though the same setup worked fine in visionOS 1.x.
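For reference, this is roughly how I create and swap in each queue; the pixel format and size here are just placeholders for what I actually use:

import RealityKit
import Metal

// Rough sketch of how I create and attach each queue (format and size are placeholders).
func makeDrawableQueue(for texture: TextureResource) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: 1024,
        height: 1024,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)
    // Swap the queue in for the existing texture resource.
    texture.replace(withDrawables: queue)
    return queue
}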
When using .mov video files for creating video materials in RealityKit, they display correctly on my modelEntity. However, when I tried using a video file in the .mp4 format, I only get a solid black material. Does AVKit support playing .mp4 video files on visionOS 1.2?
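For context, this is roughly how I build the material in both cases; "clip" is a placeholder resource name, and the only difference between the two cases is the extension I pass in:

import AVFoundation
import RealityKit

// Rough sketch of how I create the video material (resource name is a placeholder).
func makeVideoMaterial(named name: String, withExtension ext: String) -> VideoMaterial? {
    guard let url = Bundle.main.url(forResource: name, withExtension: ext) else { return nil }
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)
    player.play()
    return material
}

// makeVideoMaterial(named: "clip", withExtension: "mov") displays fine;
// makeVideoMaterial(named: "clip", withExtension: "mp4") gives the solid black material.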
Hello, I keep getting this error whenever I try to run any new project.
Thanks
Zipzy Games
X
I followed the WWDC video to learn SharePlay. I understood the initial seat creation, but I couldn't follow some of the later content very well, so I hope you can give me some sample code. The details are as follows:
I have already taken a seat.
struct TeamSelectionTemplate: SpatialTemplate {
    let elements: [any SpatialTemplateElement] = [
        .seat(position: .app.offsetBy(x: 0, z: 4)),
        .seat(position: .app.offsetBy(x: 1, z: 4)),
        .seat(position: .app.offsetBy(x: -1, z: 4)),
        .seat(position: .app.offsetBy(x: 2, z: 4)),
        .seat(position: .app.offsetBy(x: -2, z: 4)),
    ]
}
I hope you can give me a SharePlay button: after pressing it, all users in the FaceTime call should be assigned to the seats defined in TeamSelectionTemplate. Thank you very much.
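For what it's worth, this is as far as I got on my own; TeamActivity here is just a placeholder GroupActivity, and I'm not sure the spatial template configuration is correct:

import GroupActivities
import SwiftUI

// Placeholder activity; the real one would describe the game session.
struct TeamActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Team Selection"
        meta.type = .generic
        return meta
    }
}

struct SharePlayButton: View {
    var body: some View {
        Button("Start SharePlay") {
            Task {
                // Ask the system to start the activity in the current FaceTime call.
                _ = try? await TeamActivity().activate()
            }
        }
        .task {
            // Join incoming sessions and request the custom seating template.
            for await session in TeamActivity.sessions() {
                if let coordinator = await session.systemCoordinator {
                    var config = SystemCoordinator.Configuration()
                    config.spatialTemplatePreference = .custom(TeamSelectionTemplate())
                    coordinator.configuration = config
                }
                session.join()
            }
        }
    }
}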
When I wanted to call the Reality Composer Pro scene containing Object Tracking, I tried the following code:
RealityView { content in
    if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(model)
    }
}
Obviously, this alone is not enough; we need to add some configuration to enable object tracking for the RealityView. What do we need to add?
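For reference, this is what I've been experimenting with alongside the RealityView; the reference object file name is a placeholder, and I'm not sure this is the intended approach:

import ARKit
import RealityKit

// Rough sketch: run an ARKit object-tracking session alongside the RealityView.
// "MyObject" is a placeholder reference object bundled with the app.
func runObjectTracking() async {
    guard let url = Bundle.main.url(forResource: "MyObject", withExtension: "referenceobject"),
          let referenceObject = try? await ReferenceObject(from: url) else { return }

    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
    let session = ARKitSession()
    do {
        try await session.run([provider])
    } catch {
        print("Failed to run object tracking: \(error)")
        return
    }

    // React to the tracked object's pose as it updates.
    for await update in provider.anchorUpdates {
        print("Object anchor update: \(update.anchor.originFromAnchorTransform)")
    }
}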
How do I display the user's own persona in a view?
It's a common system interaction to look at an item in SwiftUI and tap to select it.
I'm confused about how to do the same with ModelEntities.
How do I use gaze to select a ModelEntity for context-based actions? For example, look at the green sphere and tap to pull up a menu, or look in a direction and clap to **** away virtual objects, and so on.
If this is not possible, is there a workaround?
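The closest I've gotten is the look-and-tap path: give each entity a collision component and an input-target component, then handle a SpatialTapGesture targeted to any entity. A rough sketch of what I mean (the green sphere is just a placeholder):

import SwiftUI
import RealityKit

struct TapToSelectView: View {
    var body: some View {
        RealityView { content in
            // Placeholder entity; a green sphere to look at and tap.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                     materials: [SimpleMaterial(color: .green, isMetallic: false)])
            // Both components are needed for the entity to receive gestures.
            sphere.generateCollisionShapes(recursive: true)
            sphere.components.set(InputTargetComponent())
            content.add(sphere)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // value.entity is the entity the user was looking at when they tapped.
                    print("Selected entity: \(value.entity.name)")
                }
        )
    }
}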
Can a TabletopKit game with a variable number of players also have a variable number of AI players, each playing a specific position? For example, a game that can handle 3-7 players, set up for a session where 4 of the players are real and an AI plays 2 other positions? The kit appears to be set up to handle only real (human) players. Thank you!
Does the current version of TabletopKit support having two or more players at the same physical location? In that case, the players would not want to see a FaceTime persona around the table; they should instead see the physical player. Any other remote players would still see the personas of those players, since they are not at that location. There are a couple of issues in this scenario (shared position of the board, players' locations around the table, etc.), but they should be solvable. Thank you!
Is it possible to determine where walls are in a shared space setting? Or does it have to be in immersive mode?
Are there any workarounds for getting the locations of walls in the Shared Space? I want things to be able to latch onto walls.
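For reference, the only approach I know of is ARKit plane detection, which (as far as I understand) only runs in an immersive space; a rough sketch of that path:

import ARKit

// Rough sketch: wall detection via plane detection in an ImmersiveSpace.
// (As far as I know this requires an immersive space, which is exactly my problem.)
func watchForWalls() async {
    let planes = PlaneDetectionProvider(alignments: [.vertical])
    let session = ARKitSession()
    do {
        try await session.run([planes])
    } catch {
        print("Failed to run plane detection: \(error)")
        return
    }

    for await update in planes.anchorUpdates {
        if update.anchor.classification == .wall {
            print("Wall at \(update.anchor.originFromAnchorTransform)")
        }
    }
}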
With quite some excitement I read about visionOS 2's new feature to automatically turn regular 2D photos into spatial photos, using machine learning. It's briefly mentioned in this WWDC video:
https://developer.apple.com/wwdc24/10166
My question is: Can developers use this feature via an API, so we can turn any image into a spatial image, even if it is not in the device photo library?
We would like to download an image from our server, convert it on the Vision Pro on the fly, and display it as a spatial photo.
In the example https://developer.apple.com/documentation/imageio/writing-spatial-photos, we see that for each image encoded with the photo we include the following information:
kCGImagePropertyGroups: [
    kCGImagePropertyGroupIndex: 0,
    kCGImagePropertyGroupType: kCGImagePropertyGroupTypeStereoPair,
    (isLeft ? kCGImagePropertyGroupImageIsLeftImage : kCGImagePropertyGroupImageIsRightImage): true,
    kCGImagePropertyGroupImageDisparityAdjustment: encodedDisparityAdjustment
],
This identifies which image is the left one and which is the right one, along with the group type (a stereo pair).
Now, how do you read those back?
I tried to read them back simply with CGImageSourceCopyPropertiesAtIndex, but that did not work; I just get back "No property groups found."
func tryToReadThose() {
    guard
        let imageData = try? Data(contentsOf: outputImageURL),
        let source = CGImageSourceCreateWithData(imageData as NSData, nil)
    else {
        print("cannot read")
        return
    }
    for i in 0..<CGImageSourceGetCount(source) {
        guard let imageProperties = CGImageSourceCopyPropertiesAtIndex(source, i, nil) as? [String: Any] else {
            print("cannot read options")
            continue
        }
        if let propertyGroups = imageProperties[String(kCGImagePropertyGroups)] as? [Any] {
            // Process the property groups as needed
            print(propertyGroups)
        } else {
            print("No property groups found.")
        }
        //print(imageProperties)
    }
}
I assume CGImageSourceCopyPropertiesAtIndex may expect something as its third (options) parameter, but under "Specifying the Read Options" at https://developer.apple.com/documentation/imageio/cgimagesource I don't see anything related to that.
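One thing I still want to try, purely a guess on my part, is whether the groups are attached at the container level rather than per image (reusing the same source from the function above):

// Just a guess: maybe the groups live in the container-level properties.
if let fileProperties = CGImageSourceCopyProperties(source, nil) as? [String: Any],
   let propertyGroups = fileProperties[String(kCGImagePropertyGroups)] as? [Any] {
    print(propertyGroups)
} else {
    print("No property groups at the container level either.")
}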
I was following Explore object tracking for visionOS to load an object reference, but got this error:
Failed to load reference object from URL: ObjectTrackingProvider.Error(code: referenceObjectLoadingFailed, errorDescription: "The operation couldn’t be completed. (com.apple.arkit error 1101.)", failureReason: "", recoverySuggestion: ""
Here is what I have; I'm not sure if it is a code error or something with the system:
private func loadReferenceObject() {
    Task {
        // Load the reference object
        let refObjURL = Bundle.main.url(forResource: "objectTrackerBox", withExtension: ".referenceobject")
        if let refObjURL = refObjURL {
            do {
                let refObj = try await ReferenceObject(from: refObjURL)
                logMessage = "Reference object loaded successfully: \(refObj)"
                print(logMessage)
            } catch {
                logMessage = "Failed to load reference object from URL: \(error)"
                print(logMessage)
            }
        } else {
            logMessage = "Failed to find the reference object file."
            print(logMessage)
        }
    }
}
I've been trying to get the drag gesture up and running so I can move my 3D model around in my immersive space, but for some reason I am not able to move it. The model shows up in the visionOS 1.0 simulator, but I can't get it to move. I'd love some help with this, and any resources that would be helpful. Here's a snippet of my RealityView code:
import SwiftUI
import RealityKit
import RealityKitContent

struct GearRealityView: View {
    static var modelEntity = Entity()

    var body: some View {
        RealityView { content in
            if let model = try? await Entity(named: "LandingGear", in: realityKitContentBundle) {
                GearRealityView.modelEntity = model
                content.add(model)
            }
        }.gesture(
            DragGesture()
                .targetedToEntity(GearRealityView.modelEntity)
                .onChanged({ value in
                    GearRealityView.modelEntity.position = value.convert(value.location3D, from: .local, to: GearRealityView.modelEntity.parent!)
                })
        )
    }
}
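One thing I haven't tried yet, just a guess on my part, is whether the loaded entity needs collision and input-target components before the gesture can hit it. This is the variant of the RealityView closure I was going to try next (the collision box size is a placeholder):

RealityView { content in
    if let model = try? await Entity(named: "LandingGear", in: realityKitContentBundle) {
        // Guess: the entity needs these two components for gestures to target it.
        model.components.set(InputTargetComponent())
        model.components.set(CollisionComponent(shapes: [.generateBox(size: [0.3, 0.3, 0.3])])) // placeholder size
        GearRealityView.modelEntity = model
        content.add(model)
    }
}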
Hi,
Object Capture's original sample code was released last year, and this year there was a talk about adding area mode to it. The talk links to the old Object Capture code. When can I expect the new version with area mode, and is there anything I can do to help get it published faster?
Thanks!
I downloaded Xcode 16 and updated my macOS to 15, but I keep getting this error when trying to build the game in the simulator or on a device:
[xrsimulator] Exception thrown: The operation couldn’t be completed. (realitytool.RKAssetsCompiler.RKAssetsCompilerError error 3.)
Is there a way to make a SpatialAudio object autoplay its assigned track strictly via settings in Reality Composer Pro, no Swift code involved?
Hey,
In the "Explore object tracking for visionOS" session we explore how a Globe can be tracked, and objects can be anchored to various positions. My question is if the physical Globe is rotated, will the anchored objects also respond to this in real-time?
I would like to overlap a virtual map on top of a physical globe, so when the user rotates the physical globe, the virtual map also seamlessly responds. Is this possible using Object Tracking?
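For context, what I had in mind is roughly the following: the map entity keeps following the object anchor's transform as it updates. This assumes an ObjectTrackingProvider is already running (as shown in the session) and that mapEntity is my overlay entity:

import ARKit
import RealityKit

// Sketch: keep the virtual map glued to the tracked globe's pose.
// Assumes `provider` is an already-running ObjectTrackingProvider and `mapEntity` is my overlay.
func followGlobe(provider: ObjectTrackingProvider, mapEntity: Entity) async {
    for await update in provider.anchorUpdates {
        guard update.anchor.isTracked else { continue }
        // The anchor transform includes the globe's rotation, so the map follows it.
        mapEntity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
    }
}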
Thanks
Are you planning on publishing a complete sample code project related to the Explore object tracking for visionOS session (wwdc2024/10101)?
The animation at 12:50 where the globe opens up was especially impressive. Seeing how that was done while tracking the globe would be very interesting. (I realize that we would have to create our own globe object in order for the code to work.)
I was wondering if anyone had guidance on how to "livestream" MV-HEVC content. More specifically, I have a left- and right-eye view for stereoscopic content (for example, views taken from a stereoscopic video being passed through an AVPlayer). I know, based on sample code, that I can convert the stereoscopic video into an MV-HEVC file using AVAssetWriter. However, how would I take the stereoscopic video and encode it, in real time, to a stream that could then leverage HLS Tools to deliver to clients? Is AVFoundation capable of this directly? Or is there an API within VideoToolbox that can help with this?
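For reference, the direction I've been poking at is VideoToolbox with its MV-HEVC compression properties. This is only a sketch of the session setup (the dimensions and layer/view IDs are guesses on my part), and I still don't know how to get from the encoded frames to an HLS stream:

import VideoToolbox
import CoreMedia

// Sketch: configure a VTCompressionSession for MV-HEVC (stereo, two layers).
// Dimensions and IDs are placeholders; I haven't confirmed this is the intended path for HLS.
func makeStereoEncoder(width: Int32, height: Int32) -> VTCompressionSession? {
    var session: VTCompressionSession?
    VTCompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        width: width,
        height: height,
        codecType: kCMVideoCodecType_HEVC,
        encoderSpecification: nil,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,
        refcon: nil,
        compressionSessionOut: &session)
    guard let session else { return nil }

    // Two layers (left/right eye); view IDs 0 and 1 are placeholders.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MVHEVCVideoLayerIDs, value: [0, 1] as CFArray)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MVHEVCViewIDs, value: [0, 1] as CFArray)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MVHEVCLeftAndRightViewIDs, value: [0, 1] as CFArray)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
    return session
}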