I would like to implement an effect where content pops out of a window into an immersive space, based on the following WWDC video.
The video introduces the new features of visionOS 2.0 by reworking Apple's sample app BOT-anist.
https://developer.apple.com/jp/videos/play/wwdc2024/10153/?time=1252
In the video, it looks like a new ImmersiveSpaceAppModel has been implemented.
However, the key code is not shown anywhere.
The video passes appModel.robot as the from argument to the transform method of RealityViewContent.
BOT-anist appears to have been updated once and can be downloaded from the URL below, but no class such as ImmersiveSpaceAppModel is implemented in that version of the app either.
https://developer.apple.com/documentation/visionos/bot-anist
Has it been updated again since then?
Frankly, I'm not sure whether it is possible to proceed as shown in the WWDC video.
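For reference, a minimal sketch of the call described above, reconstructed from the video's description rather than from actual sample code (the destination coordinate space and how the result is used are my assumptions):

// Inside a RealityView closure (a sketch, not the sample's code). Only the
// transform(from: appModel.robot, ...) call is described in the video; the
// .immersiveSpace destination is my assumption.
let robotTransform = content.transform(from: appModel.robot, to: .immersiveSpace)
// ...then use robotTransform to place the robot at the matching position when
// reparenting it into the immersive space.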
I want a custom hover effect on a sentence, not on a button.
I want a hover effect to appear when the user looks at one sentence among many.
So I looked for references in the Apple videos https://youtu.be/DftRTx1oX6E and https://developer.apple.com/videos/play/wwdc2023/10110/ and in the visionOS documentation.
But I haven't gotten anywhere near the feature I want yet.
Any help would be much appreciated. :)
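A minimal sketch of one way this might work (my own, assuming a plain Text view needs an explicit hover shape to become gaze-targetable; HoverableSentence is a hypothetical name):

import SwiftUI

// A sketch, not a confirmed solution: give one sentence its own hover content
// shape, then apply a system hover effect so it highlights when looked at.
struct HoverableSentence: View {
    let sentence: String

    var body: some View {
        Text(sentence)
            .padding(6)
            .contentShape(.hoverEffect, RoundedRectangle(cornerRadius: 8)) // region the gaze targets
            .hoverEffect(.highlight)                                       // system highlight on look
    }
}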
Hello,
I checked the following documentation:
Vision | Apple Developer Documentation
Discover Swift enhancements in the Vision framework - WWDC24 - Videos - Apple Developer
I saw that the Vision framework is available on visionOS.
So I want to know whether it is possible to use the Vision framework on visionOS to track human and animal body poses, or whether there are limits to using it on visionOS.
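For reference, a minimal sketch of the kind of request in question, using the long-standing VN* API (this should compile wherever Vision is available; whether visionOS provides camera frames to feed it is the separate, open part of the question):

import Vision
import CoreGraphics

// A sketch: run a human body pose request on a single image. The animal variant
// would use VNDetectAnimalBodyPoseRequest in the same way.
func detectHumanBodyPoses(in image: CGImage) throws -> [VNHumanBodyPoseObservation] {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results ?? []
}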
In RealityKit's particle effects, there is a type:
ParticleEmitterComponent.Presets
It can invoke certain preset particle effects, but I would like to learn how to modify these particle emitters (such as adjusting the color, the number of particles, the emission range, and so on).
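A minimal sketch of how a preset can be tweaked before it is applied (the particular property values are illustrative assumptions):

import RealityKit
import UIKit

// A sketch: start from a preset, adjust its main emitter, then set it on an entity.
func addTweakedParticles(to entity: Entity) {
    var particles = ParticleEmitterComponent.Presets.magic
    particles.mainEmitter.birthRate = 500                    // particles spawned per second
    particles.mainEmitter.size = 0.02                        // particle size in meters
    particles.mainEmitter.color = .constant(.single(.red))   // tint the particles
    particles.emitterShapeSize = [0.5, 0.5, 0.5]             // emission region in meters
    entity.components.set(particles)
}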
Dear Apple Engineers,
I am working on a project in visionOS and need to implement a curved surface effect for video playback, where the width of the surface can be dynamically adjusted. Specifically, I want the video to be displayed on a curved surface (similar to a scroll unfolding), and the user should be able to adjust the width of this surface.
I have the following specific questions:
How can I implement a curved surface for video playback and ensure the video content is not stretched or distorted on the surface?
How can I create a dynamic curved surface (such as a bending plane) in RealityKit or visionOS, where the width can be adjusted by the user?
Is it possible to achieve more complex curved surface effects (such as scroll unfolding or bending) using Shaders or other techniques?
Thank you very much for your help!
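For what it's worth, a minimal sketch of one approach to the first two questions (my own construction with arbitrary parameters): build a cylindrical-segment mesh whose texture coordinates span the arc exactly once, so a VideoMaterial maps onto it without stretching, and regenerate the mesh (or animate its arc) when the user adjusts the width.

import RealityKit
import AVFoundation

// A sketch, not a definitive implementation: a cylindrical segment facing the
// viewer, with UVs running 0...1 across the arc so the video is not stretched.
func makeCurvedVideoScreen(radius: Float = 2,
                           arc: Float = .pi / 2,
                           height: Float = 1,
                           segments: Int = 64,
                           player: AVPlayer) throws -> ModelEntity {
    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for i in 0...segments {
        let t = Float(i) / Float(segments)
        let angle = (t - 0.5) * arc              // center the arc in front of the viewer
        let x = radius * sin(angle)
        let z = -radius * cos(angle)
        positions.append([x, -height / 2, z])    // bottom edge vertex
        positions.append([x,  height / 2, z])    // top edge vertex
        uvs.append([t, 0])                       // swap the 0 and 1 here if the video is upside down
        uvs.append([t, 1])
    }
    for i in 0..<segments {
        let base = UInt32(i * 2)
        // Two triangles per quad, wound to face the viewer at the origin;
        // reverse the index order if the surface is invisible from the front.
        indices += [base, base + 2, base + 1,
                    base + 1, base + 2, base + 3]
    }

    var descriptor = MeshDescriptor(name: "curvedScreen")
    descriptor.positions = MeshBuffer(positions)
    descriptor.textureCoordinates = MeshBuffer(uvs)
    descriptor.primitives = .triangles(indices)

    let mesh = try MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh, materials: [VideoMaterial(avPlayer: player)])
}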
I'm experiencing audio issues while developing for visionOS when playing PCM data through AVAudioPlayerNode.
Issue Description:
Occasionally, the speaker produces loud popping sounds or distorted noise
This occurs during PCM audio playback using AVAudioPlayerNode
The issue is intermittent and doesn't happen every time
Technical Details:
Platform: visionOS
Device: Vision Pro / Simulator
Audio Framework: AVFoundation
Audio Node: AVAudioPlayerNode
Audio Format: PCM
I would appreciate any insights on:
Common causes of audio distortion with AVAudioPlayerNode
Recommended best practices for handling PCM playback in visionOS
Potential configuration issues that might cause this behavior
Has anyone encountered similar issues or found solutions? Any guidance would be greatly helpful.
Thank you in advance!
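For context, a minimal sketch of the kind of setup involved (my own; one common source of pops and distortion is a mismatch between the scheduled buffer format and the format used when connecting the player node):

import AVFoundation

// A sketch: connect the player node with the same format as the PCM buffers that
// will be scheduled, so no implicit conversion happens at the connection point.
func startPlayback() throws -> (AVAudioEngine, AVAudioPlayerNode) {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!

    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: format)
    try engine.start()

    let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 4_800)!
    buffer.frameLength = buffer.frameCapacity   // fill buffer.floatChannelData before scheduling
    player.scheduleBuffer(buffer, completionHandler: nil)
    player.play()
    return (engine, player)
}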
I noticed that when I enter the fully immersive view and then click the X button below the window, the immersive space remains active, and the only way to dismiss it is to click the Digital Crown. In other apps (Disney+, for example), closing the main window while in immersive mode also closes the immersive space. I tried applying an onDisappear modifier with dismissImmersiveSpace to the Modules view, but that doesn't appear to do anything.
Any help would be appreciated.
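One workaround sketch (my own, not a confirmed fix): observe the window's scene phase and dismiss the immersive space when the window goes away. ModulesRootView and immersiveSpaceIsShown are hypothetical names.

import SwiftUI

// A sketch: dismiss the immersive space when the main window leaves the foreground.
struct ModulesRootView: View {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @Binding var immersiveSpaceIsShown: Bool

    var body: some View {
        Text("Modules")
            .onChange(of: scenePhase) { _, newPhase in
                guard newPhase == .background, immersiveSpaceIsShown else { return }
                Task {
                    await dismissImmersiveSpace()
                    immersiveSpaceIsShown = false
                }
            }
    }
}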
Hi, I am having some trouble creating "nested" RealityView content using a MapKit attachment.
I am building a visionOS app that has a horizontal MapKit map as an attachment to a RealityView. I want to display 3D pins on that map, so I am using native map annotations, and inside each annotation I create a new RealityView just for the 3D pin. This worked completely fine, until I wanted those RealityViews to interact with each other.
By interaction I mean that I want to group entities from the first, "main" RealityView's content with the 3D pins using ModelSortGroupComponent.
Why do I want this? Making the map circular is not a problem. The problem is that when I move the map with the 3D pins, the pins have their own RealityView space and are bounded only by the volumetric window dimensions. What happens is that the pins float next to the map (shown in the attached image). So I came up with this solution: create a custom "toroid"-like 3D entity model that occludes the pins that go outside the map region. In order to occlude only the pins, I need to use ModelSortGroupComponent to group the "toroid" entity with the 3D pin entities (as described in another forum thread).
To summarize: I need the content of the outer RealityView to interact with the map attachment annotations' RealityView content so they can be grouped. There may of course be another, better way to achieve my overall goal, so I would appreciate any help or guidance.
The image below shows the 3D pins on the circular map. Since the pins' RealityView does not know anything about the other RealityViews, a pin simply overflows and hangs in space until it is cropped by the volumetric window boundary.
Simplified code:
var body: some View {
    let modelSortGroup = ModelSortGroup(depthPass: .prePass)

    RealityView { content, attachments in
        let mainEntity = Entity()
        // My other entities here...

        if let mapAttachment = attachments.entity(for: "mapAttachment") {
            // Edit map properties, position, horizontal layout, etc.
            mainEntity.addChild(mapAttachment)
        }

        // Create the mask "toroid" entity mapMaskEntity (using OcclusionMaterial())
        // and add it to the content.
        mapMaskEntity.components.set(ModelSortGroupComponent(group: modelSortGroup, order: 0))

        // For all pins, somehow also set the group:
        // 3DPinEntity.components.set(ModelSortGroupComponent(group: modelSortGroup, order: 1))

        content.add(mainEntity)
    } attachments: {
        Attachment(id: "mapAttachment") {
            Map {
                ForEach(mapViewModel.clusters, id: \.id) { cluster in
                    Annotation("", coordinate: cluster.coordinate) {
                        MapPin3DView(cluster: cluster)
                    }
                }
            }
            .clipShape(Circle())
        }
    }
}

// MapPin3DView is a map annotation view that contains the 3D pin model and some
// details (an image, etc.); it uses its own RealityView.
struct MapPin3DView: View {
    var body: some View {
        RealityView { content in
            // 3D pin entities...
        }
    }
}
Hey, I have Enterprise Access on the account and have added the passthrough capability and the entitlement on the main project and the "Broadcast Upload" extension, too.
The broadcast works, except that it returns a black screen.
I am attaching some screenshots of the entitlement file below. I have tried searching online to no avail, so any help would be greatly appreciated. I am also attaching the code.
import Foundation
import AVFoundation
import ReplayKit
import UIKit
import CoreImage

class VideoAssetWriter {
    private var isRecording = false
    private var outputStream: OutputStream?

    private func setupConnection() {
        guard outputStream == nil else { return }
        print("setting up connection.")
        let serverIP = macIP
        let port = 12345

        var readStream: Unmanaged<CFReadStream>?
        var writeStream: Unmanaged<CFWriteStream>?
        CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault,
                                           serverIP as CFString,
                                           UInt32(port),
                                           &readStream,
                                           &writeStream)
        guard let writeStream = writeStream?.takeRetainedValue() else {
            print("Failed to create write stream")
            return
        }
        self.outputStream = writeStream as OutputStream
        self.outputStream?.open()
    }

    func startRecording() {
        isRecording = true
    }

    func processVideoSampleBuffer(_ sampleBuffer: CMSampleBuffer) {
        print("Processing Sample 1")
        guard isRecording else { return }
        print("Processing Sample 2")
        sendVideoChunkToServer(sampleBuffer)
    }

    private func sendVideoChunkToServer(_ sampleBuffer: CMSampleBuffer) {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        print("Processing Sample 3")
        let ciImage = CIImage(cvPixelBuffer: imageBuffer)
        let context = CIContext()
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        print("Processing Sample 4")

        let image = UIImage(cgImage: cgImage)
        if let imageData = image.jpegData(compressionQuality: 0.5) {
            guard imageData.count <= 10_000_000 else {
                print("Frame too large: \(imageData.count) bytes")
                return
            }
            if outputStream == nil {
                setupConnection()
            }
            print("sending frame size up connection.")
            // Convert to network byte order (big-endian)
            var frameSize = UInt32(imageData.count).bigEndian
            let sizeData = Data(bytes: &frameSize, count: MemoryLayout<UInt32>.size)
            _ = sizeData.withUnsafeBytes { outputStream?.write($0.baseAddress!.assumingMemoryBound(to: UInt8.self), maxLength: sizeData.count) }
            print("sending image data up connection.")
            // Send frame data
            _ = imageData.withUnsafeBytes { outputStream?.write($0.baseAddress!.assumingMemoryBound(to: UInt8.self), maxLength: imageData.count) }
        }
    }

    func stopRecording() {
        isRecording = false
        outputStream?.close()
        outputStream = nil
    }
}
This is the broadcast picker view wrapper:
// Broadcast Picker View wrapper
struct BroadcastButtonView: UIViewRepresentable {
    func makeUIView(context: Context) -> RPSystemBroadcastPickerView {
        let broadcastPickerView = RPSystemBroadcastPickerView(
            frame: CGRect(x: 0, y: 0, width: 200, height: 200)
        )
        // Make sure this matches your broadcast extension bundle identifier
        broadcastPickerView.preferredExtension = "my-extension-bundle-identifier"
        broadcastPickerView.showsMicrophoneButton = false
        return broadcastPickerView
    }

    func updateUIView(_ uiView: RPSystemBroadcastPickerView, context: Context) {
    }
}
The extension SampleHandler:
override func broadcastPaused() {
    print("paused broadcast")
    // User has requested to pause the broadcast. Samples will stop being delivered.
}

override func broadcastResumed() {
    print("resumed broadcast")
    // User has requested to resume the broadcast. Sample delivery will resume.
}

override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
    print("broadcast received")
    assetWriter?.processVideoSampleBuffer(sampleBuffer)
}
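One detail worth noting in the handler above (a suggestion, not a confirmed cause of the black screen): processSampleBuffer also receives audio buffers, so a version that forwards only video would look like this:

override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
    // App-audio and mic buffers arrive through the same callback; forward video only.
    guard sampleBufferType == .video else { return }
    assetWriter?.processVideoSampleBuffer(sampleBuffer)
}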
Looking forward to any and all help.
(Screenshots attached: the Information Property List for the app, the Information Property List for the extension, and the capabilities.)
In RealityView, physics components only apply to certain kinds of solid bodies. How can I simulate the physical behavior of water and cloth?
entity.components.set(PhysicsBodyComponent())
Hey there, I am working on an app that displays environmental data using PNG color channels to represent data ranges, which gets overlaid on a map. The sampled values aren't what I'm expecting, though. For example, an RGB value of 0x7f0000 (R = 0.5, G = 0, B = 0) is seen as (0.21, 0, 0) in the shader. This basically makes it unusable if I'm trying to show scientific data. I'm half wondering if I am completely misunderstanding how sampling works in RealityKit / Reality Composer Pro. Does anybody have any idea why it works like this?
(Images attached: the actual result with chart labels added in Photoshop, the expected result, and the "Red > 0.1" shader graph.)
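A quick check of the numbers (my own note): the observed value matches what you would get if the PNG were interpreted as sRGB and converted to linear before sampling, since the sRGB transfer function maps 127/255 ≈ 0.498 to about 0.212.

import Foundation

// sRGB-to-linear transfer function; 0x7f (127/255) comes out as roughly 0.212,
// which matches the 0.21 observed in the shader.
func srgbToLinear(_ c: Double) -> Double {
    c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
}

print(srgbToLinear(127.0 / 255.0))   // ≈ 0.212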
We're developing a visionOS application in which we would like to do product recognition (e.g., food items).
We have enterprise entitlements and therefore also main camera access on visionOS. We send the live camera frames to a trained Core ML model and receive 2D coordinates from the model's detection prediction.
Now we would like to create a 3D anchor on the detected items so that it is visible to the user. The 3D anchor will display the class name of the detected item.
How do we transform this 2D coordinate from the model prediction to a 3D anchor?
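A minimal sketch of the math involved (my own, under assumptions): back-project the pixel through the camera intrinsics for the frame that produced the detection, then move the resulting point into world space at an estimated depth (from a raycast, the scene mesh, or a fixed guess). Reading the intrinsics and extrinsics out of the enterprise camera frame is a separate step not shown here.

import simd

// A sketch, not a drop-in implementation. Assumes a pinhole camera model with
// +Z forward; flip the z sign if your camera space uses -Z forward.
func worldPosition(forPixel pixel: SIMD2<Float>,
                   intrinsics: simd_float3x3,
                   cameraToWorld: simd_float4x4,
                   depth: Float) -> SIMD3<Float> {
    let fx = intrinsics[0, 0], fy = intrinsics[1, 1]
    let cx = intrinsics[2, 0], cy = intrinsics[2, 1]
    // Back-project the pixel to a camera-space point at the chosen depth.
    let cameraPoint = SIMD3<Float>((pixel.x - cx) / fx * depth,
                                   (pixel.y - cy) / fy * depth,
                                   depth)
    // Move it into world space; anchor the label entity at this position.
    let world = cameraToWorld * SIMD4<Float>(cameraPoint.x, cameraPoint.y, cameraPoint.z, 1)
    return SIMD3<Float>(world.x, world.y, world.z)
}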
Hi
Hopefully someone can share some ideas on how to accomplish this.
I know we can load models from realityKitContentBundle like
let model = try? await Entity(named: "testModel", in: realityKitContentBundle)
But that loads from the root of RealityKitContent.rkassets; if I have the models in a subfolder, then I have to pass the complete path, like
let model = try? await Entity(named: "/superModels/testModel", in: realityKitContentBundle)
What I want is to be able to search all folders recursively for that file, as I have several subfolders with different models.
Any suggestions?
Thanks in advance.
Guillermo
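One workaround sketch (my own, assuming the subfolder names are known at build time): try each folder in turn until the entity loads; the folder list here is a hypothetical example.

import RealityKit
import RealityKitContent

// A sketch: attempt the root first, then each known subfolder, and return the
// first entity that loads.
func loadEntity(named name: String,
                searching folders: [String] = ["superModels", "otherModels"]) async -> Entity? {
    for folder in [""] + folders {
        let path = folder.isEmpty ? name : "\(folder)/\(name)"
        if let entity = try? await Entity(named: path, in: realityKitContentBundle) {
            return entity
        }
    }
    return nil
}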
This issue has existed since visionOS 1, unless this is how it is supposed to work. As you can see in the screen capture, the shadows from the top box are shown on all three boxes below.
This is a screen capture from Reality Composer Pro, but the same thing happens on the Vision Pro.
Is there any way to stop this behavior and have shadows only on the first object below the one casting them?
[The problem that is occurring]
The game app under development supports iOS, macOS, tvOS, and visionOS.
In the app, the Apple Arcade access point is placed in the main menu.
On visionOS, when the main menu is opened, the Game Center dashboard launches automatically within 1-2 seconds of the menu being displayed.
This happens every time the menu is reopened.
On iOS, macOS, and tvOS, the dashboard appears only after the Game Center access point icon is pressed.
[What we want to solve]
We would like the Game Center dashboard on visionOS to launch only after the access point icon is pressed, as it does on the other operating systems.
Or, if there is a standard implementation method for visionOS, please let us know.
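For reference, a sketch of the standard access-point activation (the project's actual code may differ); on the other platforms this only shows the floating icon, and the dashboard opens when the icon is tapped.

import GameKit

// A sketch: activate the Game Center access point after authentication succeeds.
func showAccessPoint() {
    GKAccessPoint.shared.location = .topLeading
    GKAccessPoint.shared.showHighlights = false
    GKAccessPoint.shared.isActive = true
}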
Hi,
I would like to create an app that has a volumetric window in the middle and two 2D windows on the sides. When I tried that, the 2D windows were positioned slightly below the volumetric window for some reason (image1).
It looks like the base (handle, close button, etc.) of the volumetric window is aligned with the center of the whole 2D window. I would like all the window bases to be aligned (image2). (I can of course do this manually by dragging the window down a bit with my hand, but that's an inconvenience for my use case.)
I tried making the whole volumetric window content taller, but that did not help; the content actually went far above the 2D windows (image3).
I suppose this was a design choice made when the window positioning behavior was created for visionOS.
Am I doing something wrong? Is there a way to achieve what I want, or a better way to customize the position of windows beyond the five predefined positions in defaultWindowPlacement?
Image1 - current:
Image2 - what I want:
Image3 - current, larger content:
Code:
import SwiftUI

@main
struct placementTestApp: App {
    @Environment(\.openWindow) var openWindowAction

    var body: some Scene {
        WindowGroup(id: "volume") {
            VolumetricWindowView()
                .onAppear {
                    openWindowAction(id: "first")
                    openWindowAction(id: "second")
                }
        }
        .windowStyle(.volumetric)
        .volumeWorldAlignment(.gravityAligned)

        WindowGroup(id: "first") {
            NormalWindowView()
        }
        .defaultWindowPlacement { _, context in
            if let mainWindow = context.windows.first(where: { $0.id == "volume" }) {
                WindowPlacement(.leading(mainWindow))
            } else {
                WindowPlacement()
            }
        }

        WindowGroup(id: "second") {
            NormalWindowView()
        }
        .defaultWindowPlacement { _, context in
            if let mainWindow = context.windows.first(where: { $0.id == "volume" }) {
                WindowPlacement(.trailing(mainWindow))
            } else {
                WindowPlacement()
            }
        }
    }
}
When I try to open an immersive space, I get an error like the one below:
HALC_ProxyIOContext::IOWorkLoop: skipping cycle due to overload
Any idea how to solve it?
How can I guide users to set their preferred language in visionOS? At the moment, the behavior seems to differ from that of iOS.
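For comparison, a sketch of the iOS-style approach (my own; whether visionOS handles this deep link the same way is exactly the open question): link to the app's own page in Settings, where the per-app language can be changed.

import SwiftUI
import UIKit

// A sketch: a button that opens this app's page in the Settings app.
struct OpenLanguageSettingsButton: View {
    @Environment(\.openURL) private var openURL

    var body: some View {
        Button("Change App Language") {
            if let url = URL(string: UIApplication.openSettingsURLString) {
                openURL(url)
            }
        }
    }
}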
I have a normal application flow, and at one point I need to open an immersive space to show a 360° view of an image. But currently, whenever I run the application, it starts in the immersive space and my app's normal flow never starts. What is the issue here?
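A sketch of the usual scene structure when the app should start in a window and only enter the immersive space on demand (all names here are mine). If the app still launches straight into the space, the order of scenes and the preferred default scene session role in the target's Info.plist are worth checking.

import SwiftUI
import RealityKit

// A sketch with hypothetical names: the WindowGroup is the normal flow and comes
// first; the ImmersiveSpace is only opened on demand from the button.
@main
struct Image360App: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }

        ImmersiveSpace(id: "image360") {
            Image360View()
        }
    }
}

struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("View 360° Image") {
            Task { await openImmersiveSpace(id: "image360") }
        }
    }
}

struct Image360View: View {
    var body: some View {
        RealityView { content in
            // Add the 360° image sphere entity here.
        }
    }
}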
I'm building a SwiftUI+RealityKit app for visionOS, macOS and iOS. The main UI is a diorama-like 3D scene which is shown in orthographic projection on macOS and as a regular volume on visionOS, with some SwiftUI buttons, labels and controls above and below the RealityView.
Now I want to add UI that is positioned relative to some 3D elements in the RealityView, such as a billboarded name label with a "show details" button over each character.
However, it seems the whole RealityView Attachments API is visionOS only? The types don't even exist on macOS. Why is it visionOS only? And how would I overlay SwiftUI elements over a RealityView using SwiftUI code on macOS if not with attachments?
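A minimal cross-platform sketch of the overlay part (my own): a plain ZStack layers SwiftUI above the RealityView on macOS and iOS, though unlike visionOS attachments this is screen-space UI, so pinning a label to a 3D character would still require projecting the entity's position separately.

import SwiftUI
import RealityKit

// A sketch with hypothetical names: screen-space SwiftUI layered over the RealityView.
struct DioramaView: View {
    let characterName = "Robot"

    var body: some View {
        ZStack(alignment: .topLeading) {
            RealityView { content in
                // Build the diorama scene here.
            }
            Text(characterName)
                .padding(6)
                .background(.thinMaterial, in: Capsule())
        }
    }
}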