Hello,
I want to know who is in charge of integrating a newer ARKit version into Unreal Engine.
We want to use the AR functionality of the Apple Vision Pro (AVP) with Unreal, but the ARKit version it supports is too old.
visionOS
Discuss developing for spatial computing and Apple Vision Pro.
Posts under visionOS tag (200 Posts)
Hi,
We are a team of students working on a project featuring the Vision Pro, and we'd simply like to know whether a third-party app can access the video stream of the front cameras.
From our tests, FaceTime, for example, is able to screen share the entire stream the user is seeing (real world + app windows), but apps such as Discord can only share app windows; the real world is fully black.
Is this a privacy/security restriction, or is it because third-party apps don't yet support the front camera stream?
To give some more context, we'd like to capture a screenshot of an area of the view (real world) with a pinch-and-drag gesture, and then access that screenshot to work on it. How would we be able to access the video stream?
Thanks in advance for your help,
MrCubic
We are working on an app for the Vision Pro that has a high polygon count and lots of high-resolution textures. Everything looks smooth and, in general, very good. The issue is that the moment we turn on Voice Control, even if it is not being used, the visuals at the center start to stutter left to right. Has anyone seen this? It must be a bug; is there any workaround?
Thanks,
Guillermo
I have an app with an Immersive Space view, and it needs a button at the bottom that keeps a fixed place in front of the user's head, like a dashboard in a game. The problem is that when the user gets too close to any 3D object in the view, the object can cover the button and make it inaccessible, which would likely prevent the app from being approved in App Store Connect. I was previously working with SceneKit, where the camera's zNear and zFar decide when to hide a 3D model if it comes too close or gets too far away, and I wonder if there is something like that in RealityView / RealityKit 4 (see the SceneKit sketch below for what I mean).
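To illustrate, this is the SceneKit clipping behavior I'm referring to, as a minimal sketch (not taken from my actual visionOS project):
import SceneKit

// Minimal sketch: SceneKit clips geometry that falls outside the camera's [zNear, zFar] range.
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.camera?.zNear = 0.1   // anything closer than 0.1 m is not rendered
cameraNode.camera?.zFar = 100.0  // anything farther than 100 m is not rendered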
Here is my code; the screenshots follow below.
import SwiftUI
import RealityKit

struct ContentView: View {
    @State var myHead: Entity = {
        // Anchor that follows the user's head, offset slightly down and forward
        let headAnchor = AnchorEntity(.head)
        headAnchor.position = [-0.02, -0.023, -0.24]
        return headAnchor
    }()
    @State var clicked = false

    var body: some View {
        RealityView { content, attachments in
            // Create a 3D box
            let mainBox = ModelEntity(mesh: .generateBox(size: [0.1, 0.1, 0.1]))
            mainBox.position = [0, 1.6, -0.3]
            content.add(mainBox)
            content.add(myHead)

            // Attach the SwiftUI dashboard to the head anchor
            guard let attachmentEntity = attachments.entity(for: "Dashboard") else { return }
            myHead.addChild(attachmentEntity)
        } attachments: {
            // SwiftUI inside the immersive view
            Attachment(id: "Dashboard") {
                VStack {
                    Spacer()
                        .frame(height: 300)
                    Button(action: {
                        goClicked()
                    }) {
                        Text(clicked ? "⏸️" : "▶️")
                            .frame(maxWidth: 48, maxHeight: 48, alignment: .center)
                            .font(.extraLargeTitle)
                    }
                    .buttonStyle(.plain)
                }
            }
        }
    }

    func goClicked() {
        clicked.toggle()
    }
}
I’m developing a visionOS app using EnterpriseKit, and I need access to the main camera for QR code detection. I’m using the ARKit CameraFrameProvider and ARKitSession to capture frames, but I’m encountering this error when trying to start the camera stream:
ar_camera_frame_provider_t: Failed to start camera stream with error: <ar_error_t Error Domain=com.apple.arkit Code=100 "App not authorized.">
Context:
visionOS app using EnterpriseKit for camera access and QR code scanning.
My Info.plist includes necessary permissions like NSCameraUsageDescription and NSWorldSensingUsageDescription.
I’ve added the com.apple.developer.arkit.main-camera-access.allow entitlement as per the official documentation here.
My app is allowed camera access as shown in the logs (Authorization status: [cameraAccess: allowed]), but the camera stream still fails to start with the “App not authorized” error.
I followed Apple’s WWDC 2024 sample code for accessing the main camera in visionOS from this session.
Sample of My Code:
import ARKit
import Vision

class QRCodeScanner: ObservableObject {
    private var arKitSession = ARKitSession()
    private var cameraFrameProvider = CameraFrameProvider()
    private var pixelBuffer: CVPixelBuffer?

    init() {
        Task {
            await requestCameraAccess()
        }
    }

    private func requestCameraAccess() async {
        await arKitSession.queryAuthorization(for: [.cameraAccess])

        do {
            try await arKitSession.run([cameraFrameProvider])
        } catch {
            print("Failed to start ARKit session: \(error)")
            return
        }

        let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
        guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else { return }

        Task {
            for await cameraFrame in cameraFrameUpdates {
                guard let mainCameraSample = cameraFrame.sample(for: .left) else { continue }
                self.pixelBuffer = mainCameraSample.pixelBuffer
                // QR Code detection code here
            }
        }
    }
}
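For context, the detection I plan to run on each captured pixelBuffer looks roughly like the following Vision sketch (detectQRCodes is a hypothetical helper name, not part of the code above):
import Vision

// Hedged sketch: decode QR payloads from a camera frame's pixel buffer using the Vision framework.
func detectQRCodes(in pixelBuffer: CVPixelBuffer) throws -> [String] {
    let request = VNDetectBarcodesRequest()
    request.symbologies = [.qr]  // restrict detection to QR codes
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
    // Each observation carries the decoded payload string, if any
    return (request.results ?? []).compactMap { $0.payloadStringValue }
}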
Things I’ve Tried:
Verified entitlements in both Info.plist and .entitlements files. I have added the com.apple.developer.arkit.main-camera-access.allow entitlement.
Confirmed camera permissions in the privacy settings.
Followed the official documentation and WWDC 2024 sample code.
Checked my provisioning profile to ensure it supports ARKit camera access.
Request:
Has anyone encountered this “App not authorized” error when accessing the main camera via ARKit in visionOS using EnterpriseKit? Are there additional entitlements or provisioning profile configurations I might be missing? Any help would be greatly appreciated! I haven't seen any official examples using the new API for main camera access, and no open-source examples either.
Hello.
I have an application that exports a 3D object with vertex colors to USDC. I'm using an MDLAsset and its export functionality with [asset exportAssetToURL:[NSURL fileURLWithPath:filePath]]. On version 1.3 of the system, everything works correctly, but after updating to version 2.0, the exported object appears white (using the same code).
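For reference, the Swift equivalent of that export call is roughly this minimal sketch (exportToUSDC is just a placeholder helper name; the asset itself is built elsewhere):
import Foundation
import ModelIO

// Hedged sketch: export an already-built MDLAsset to a USDC file.
func exportToUSDC(asset: MDLAsset, filePath: String) -> Bool {
    // Confirm Model I/O can write this extension on the current OS before exporting
    guard MDLAsset.canExportFileExtension("usdc") else { return false }
    return asset.export(to: URL(fileURLWithPath: filePath))
}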
Any suggestions?
Thank you very much.
Hello everyone, I'm a Computer Science student. My supervisor has given me some topics for my final year project, and one of them involves using Vision Pro for facial recognition—specifically, identifying a designated face to display specific information.
As a developer, my understanding of Vision Pro is quite limited. I've done some research online and found that Unity and Xcode are used as development tools. Traditionally, facial recognition is done using OpenCV.
However, I've come across articles stating that Apple does not allow facial recognition for security reasons. I'd like to ask whether that's true. Also, with visionOS 2 featuring object tracking and image tracking, could those methods potentially replace facial recognition?
https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal
I'm following this doc to use Metal in visionOS.
I noticed that the tangents used in the sample are being deprecated:
https://developer.apple.com/documentation/compositorservices/layerrenderer/drawable/view
Will the sample code be updated?
Hi,
I'm facing an issue with my Unity-based app when deploying it to the AVP. Often, after building and running the app on the device, the audio gets muted. I couldn't find any setting that lets me unmute it. The only solution I've found is to reset the device settings, which makes the audio work again.
Here are a few things I’ve noticed:
The sound works fine when I reset my device’s settings.
I haven't changed any sound or audio settings on the device before or after deploying the app.
The issue doesn’t always occur immediately, but when it does, resetting settings seems to be the only fix.
Could there be something in the AVP audio configuration that causes this problem? I’d appreciate any advice or suggestions.
Thanks!
Hi there, I'm trying to test the "Drawing fully immersive content using Metal" sample, but when I select Language: Swift, it still shows Objective-C code in some of the sample code.
Please check and update the Swift code in the document. Thank you.
In iOS, to display a RealityView, you can assign a value to the content.camera property:
content.camera = .virtual
However, how can this be implemented in macOS and tvOS?
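For context, the iOS usage I'm referring to looks roughly like this minimal sketch (VirtualCameraExample is just a placeholder name):
import SwiftUI
import RealityKit

struct VirtualCameraExample: View {
    var body: some View {
        RealityView { content in
            // On iOS, the RealityView content exposes a camera property
            content.camera = .virtual
            content.add(ModelEntity(mesh: .generateBox(size: 0.2)))
        }
    }
}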
Hi,
We've been leveraging App Clips on iOS for a while now to distribute native-app-quality AR experiences (utilising ARKit and RealityKit) with the accessibility of a website.
This has been a crucial differentiator for us and is a core driver for our business.
Since our authoring tools also allow running the same AR experiences on Vision Pro, it would be amazing if they could be triggered by App Clips there as well. We've received this feedback from clients and users multiple times, and since some basic App Clip support already seems to be integrated into the system (e.g., when registering the custom lens inserts), we would immensely appreciate it if this feature could be opened up to third-party developers as well.
Associated feedback ID: FB13348462
Thank you!
After installing macOS Sequoia, Xcode 16, and Reality Composer Pro 2, my Apple Vision Pro projects (which worked perfectly with Xcode 15) started giving me a "Tool terminated by signal 'Segmentation fault: 11'" error while compiling RealityKitContent assets.
This happens only when I try to build the project with .usdz models exported from Blender; when I try with sample models from the Apple website, it builds with no errors. Is there any solution?
We are trying to enable the enterprise API for our developer account, but that option is not showing up. We are referring to the link below from Apple:
https://developer.apple.com/help/account/get-started/apple-developer-enterprise-program-api/
We can't find "Apple Developer Enterprise Program API configuration" under the Integration tab in our developer account.
I have attached a screenshot of our developer account. Please guide us!
Hello,
Would it be possible to use any of the available visionOS environments when I use an app that requires me to be in an immersive space? I'm developing an app where users can start the immersive space experience by pressing a button. In my case, it would be helpful if the user could still choose a visionOS environment using the Digital Crown, but currently, it seems to be unavailable after opening an immersive space.
Thank you very much in advance!
Hi everyone, I want to add a new joint in addition to the joints provided by ARKit. For example, I'd like to extract the positions of the wrist and elbow and then add a new joint between them, in the middle of the arm (see the sketch below). I can't find good documentation that explains ARKit in enough depth. If there is other information I can use, please share it with me. Thanks.
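Roughly, what I have in mind is something like this minimal sketch (midpointJoint is a placeholder name; jointA and jointB stand for two joint positions already expressed in the same coordinate space):
import simd

// Hedged sketch: a synthetic joint placed halfway between two tracked joints.
func midpointJoint(between jointA: SIMD3<Float>, and jointB: SIMD3<Float>) -> SIMD3<Float> {
    // The midpoint of a segment is the average of its endpoints.
    return (jointA + jointB) * 0.5
}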
When opening the Game Center dashboard via the access point, the dashboard appears BEHIND any content in the window that has z-depth (a default window type, not volumetric). The content obscures the dashboard and makes it unusable.
Alerts have the same placement.
The new defaultWindowPlacement would probably suffice, but I don't think there's a way to apply that to the Game Center window.
What to do?
Thanks.
I tried to use the application icon from the sample project https://developer.apple.com/documentation/visionos/diorama, but the three layers of the app icon are not separated when I hover over the icon in the Vision Pro simulator. Could you please advise how to fix this? I am using the latest Xcode, Version 15.4 (15F31d). Thank you.
Does anyone have a fix for this, or is it a bug? I updated to Xcode 16 and the 2.0 simulator yesterday; previously, running my app on 1.2 took just a few seconds to load, but with 2.0 it takes several minutes. Even if I launch any of the small developer apps available in the Vision samples section, they all take at least 5 minutes to run.
How do I fix this? If I run on the device, it still launches in less than 10 seconds, but that's not always convenient for me.
MacBook Pro, M3, 18G, Sonoma 14.6.1
Hi all,
I’m quite new to XR development in general and need some guidance.
I want to create a function that simply tells me if my palm is facing me or not (returning a bool), but I honestly have no idea where to start.
I saw a Reddit post from about 6 months ago that essentially asked for the same thing I need, but the only response was this:
Consider a triangle made up of the wrist, thumb knuckle, and little finger metacarpal (see here for the joints, and note that naming has changed slightly since this WWDC video): the orientation of this triangle (i.e., whether the front or back is visible) seen from the device location should be a very exact indication of whether the user’s palm is showing or not.
While I really like this solution, I genuinely have no idea how to code it, and no further code was provided. I’m not asking for the entire implementation, but rather just enough to get me on the right track.
Here's basically all I have so far (no idea if this is correct or not):
func isPalmFacingDevice(hand: HandSkeleton, devicePosition: SIMD3<Float>) -> Bool {
    // Extract a joint's position (the translation column of its transform).
    // Note: these positions are in the hand anchor's space, so devicePosition must be expressed in the same space.
    func position(of joint: HandSkeleton.JointName) -> SIMD3<Float> {
        let t = hand.joint(joint).anchorFromJointTransform.columns.3
        return SIMD3<Float>(t.x, t.y, t.z)
    }

    // The triangle from the suggested answer: wrist, thumb knuckle, little finger metacarpal
    let wristPos = position(of: .wrist)
    let thumbKnucklePos = position(of: .thumbKnuckle)
    let littleFingerPos = position(of: .littleFingerMetacarpal)

    // Normal of that triangle; the winding order decides which side counts as the palm,
    // so flip the cross-product arguments (or the comparison) if the result is inverted in practice
    let palmNormal = normalize(cross(thumbKnucklePos - wristPos, littleFingerPos - wristPos))

    // Direction from the hand toward the device
    let toDevice = normalize(devicePosition - wristPos)

    // The palm faces the device when its normal points roughly toward the device
    return dot(palmNormal, toDevice) > 0
}