I have found a different approach that works for me. I abandoned setting the content.environment property; instead, I use a sky dome model for the background and ImageBasedLightComponent / ImageBasedLightReceiverComponent to choose the lighting for the cube.
For more information on this approach, see the WWDC session Optimize your 3D assets for spatial computing, and jump to these sections:
15:07 - Sky dome setup
16:03 - Image-based lighting
I did everything programmatically (instead of using Reality Composer Pro), but it pretty much works the same.
Sample code (caveat: I have no idea if this is the preferred approach, but it works for me):
import SwiftUI
import RealityKit
import os.log

struct MyRealityView: View {
    @Binding var useNebulaForLighting: Bool
    @Binding var showNebula: Bool

    @State private var nebulaIbl: ImageBasedLightComponent?
    @State private var indoorIbl: ImageBasedLightComponent?
    @State private var iblEntity: Entity?
    @State private var litCube: Entity?
    @State private var skydome: Entity?

    var body: some View {
        RealityView { content in
            // Create a red cube 1m on a side
            let mesh = MeshResource.generateBox(size: 1.0)
            let simpleMaterial = SimpleMaterial(color: .red, isMetallic: false)
            let model = ModelComponent(
                mesh: mesh,
                materials: [simpleMaterial]
            )
            let redBoxEntity = Entity()
            redBoxEntity.components.set(model)
            content.add(redBoxEntity)
            litCube = redBoxEntity

            // Get hi-res texture to show as background
            let immersion_name = "BlueNebula"
            guard let resource = try? await TextureResource(named: immersion_name) else {
                fatalError("Unable to load texture.")
            }
            var material = UnlitMaterial()
            material.color = .init(texture: .init(resource))

            // Create sky dome sphere
            let sphereMesh = MeshResource.generateSphere(radius: 1000)
            let sphereModelComponent = ModelComponent(mesh: sphereMesh, materials: [material])

            // Create an entity and set its model component
            let sphereEntity = Entity()
            sphereEntity.components.set(sphereModelComponent)

            // Trick/hack to make the texture image point inward to the viewer.
            sphereEntity.scale *= .init(x: -1, y: 1, z: 1)

            // Add sky dome to the scene
            skydome = sphereEntity
            skydome?.isEnabled = showNebula
            content.add(skydome!)

            // Create image-based lighting entity for the scene
            iblEntity = Entity()
            content.add(iblEntity!)

            // Load low-res nebula resource for image-based lighting
            if let environmentResource = try? await EnvironmentResource(named: "BlueNeb2") {
                let iblSource = ImageBasedLightComponent.Source.single(environmentResource)
                nebulaIbl = ImageBasedLightComponent(source: iblSource)
            }

            // Load low-res indoor light resource for image-based lighting
            if let environmentResource = try? await EnvironmentResource(named: "IndoorLights") {
                let iblSource = ImageBasedLightComponent.Source.single(environmentResource)
                indoorIbl = ImageBasedLightComponent(source: iblSource)
            }

            // Set initial settings
            applyModelSettings()
        } update: { content in
            applyModelSettings()
        }
        .realityViewCameraControls(CameraControls.orbit)
    }

    func applyModelSettings() {
        // Set image-based lighting on the cube
        if useNebulaForLighting && litCube != nil && nebulaIbl != nil {
            iblEntity!.components.set(nebulaIbl!)
            let iblrc = ImageBasedLightReceiverComponent(imageBasedLight: iblEntity!)
            litCube?.components.set(iblrc)
        } else if !useNebulaForLighting && litCube != nil && indoorIbl != nil {
            iblEntity!.components.set(indoorIbl!)
            let iblrc = ImageBasedLightReceiverComponent(imageBasedLight: iblEntity!)
            litCube?.components.set(iblrc)
        }

        // Set skydome's status
        skydome?.isEnabled = showNebula
    }
}
I figured it out. I just needed to request permission for Core Location services.
Added this code to one of my objects:
locationManager = CLLocationManager()
locationManager?.delegate = myLocationDelegate
locationManager?.requestWhenInUseAuthorization()
For my app sandbox, I also enabled
Outgoing Connections (Client)
Location
(I'm not certain the second one is needed)
I now get the SSID and BSSID.
Also, the app now shows up in System Settings under Location Services, where I guess the user can turn it on or off.
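For reference, here's a minimal sketch (macOS) of how the pieces fit together; the class name is made up and you'd hang this off your own delegate object:

import CoreLocation
import CoreWLAN

// Minimal sketch: request location authorization, then read the SSID/BSSID via CoreWLAN.
// WiFiInfoProvider is an illustrative name, not from my actual app.
class WiFiInfoProvider: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()

    override init() {
        super.init()
        locationManager.delegate = self
        locationManager.requestWhenInUseAuthorization()
    }

    // Called when the user grants or denies Location Services access.
    func locationManagerDidChangeAuthorization(_ manager: CLLocationManager) {
        print("Location authorization status: \(manager.authorizationStatus.rawValue)")
        // Once authorized, the Wi-Fi interface reports real SSID/BSSID values.
        let interface = CWWiFiClient.shared().interface()
        print("SSID: \(interface?.ssid() ?? "nil"), BSSID: \(interface?.bssid() ?? "nil")")
    }
}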
I've largely solved it following the information from this discussion:
https://forums.developer.apple.com/forums/thread/746728
Side note: When I look at my virtual Mac screen while in immersive mode (a new feature) to check the console messages and press the 'A' button, my app no longer detects the button presses. Does the input focus switch?
I made the code as simple as possible: a single 5cm target square placed at location (0, 0, -1.5).
I then tapped on the target while standing in several locations, and the reported tap location was off along the Z axis by about 0.5m.
Here were several tap locations (the Z value is the third component):
tap location: SIMD3(0.0067811073, 0.019996116, -1.1157947), name: target
tap location: SIMD3(-0.00097223074, 0.019996116, -1.1036792), name: target
tap location: SIMD3(0.0008024718, 0.019995179, -1.1074299), name: target
tap location: SIMD3(-0.009804221, 0.019996116, -1.0694565), name: target
tap location: SIMD3(-0.0037206858, 0.019995492, -1.0778457), name: target
tap location: SIMD3(-0.009298846, 0.019996116, -1.0772702), name: target
Here is the code to set up the RealityView:
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    @StateObject var model = MyModel()

    /// Spatial tap gesture that tells the model the tap location.
    var myTapGesture: some Gesture {
        SpatialTapGesture()
            .targetedToAnyEntity()
            .onEnded { event in
                let location3D = event.convert(event.location3D, from: .global, to: .scene)
                let entity = event.entity
                model.handleTap(location: location3D, entity: entity)
            }
    }

    var body: some View {
        RealityView { content in
            model.setupContentEntity(content: content)
        }
        .gesture(myTapGesture)
    }
}
Here is the model code:
import Foundation
import SwiftUI
import RealityKit
import RealityKitContent
import ARKit
import os.log

@MainActor class MyModel: ObservableObject {
    private var realityViewContent: RealityViewContent?

    /// Capture RealityViewContent and create target
    ///
    /// - Parameter content: container for all RealityView content
    func setupContentEntity(content: RealityViewContent) {
        self.realityViewContent = content
        placeTargetObject()
    }

    /// Place a small red target at position 0, 0, -1.5
    ///
    /// I will look at this position and tap my fingers. The tap location
    /// should be near the same position (0, 0, -1.5)
    func placeTargetObject() {
        guard let realityViewContent else { return }

        let width: Float = 0.05
        let height: Float = 0.02
        let x: Float = 0
        let y: Float = 0
        let z: Float = -1.5

        // Create red target square
        let material = SimpleMaterial(color: .red, isMetallic: false)
        let mesh = MeshResource.generateBox(width: width, height: height, depth: width)
        let target = ModelEntity(mesh: mesh, materials: [material])

        // Add collision and input target components to make it tappable
        let shapeBox = ShapeResource.generateBox(width: width, height: height, depth: width)
        let collision = CollisionComponent(shapes: [shapeBox], isStatic: true)
        target.collision = collision
        target.components.set(InputTargetComponent())

        // Set name, position, and add it to scene
        target.name = "target"
        target.setPosition(SIMD3<Float>(x, y + height / 2, z), relativeTo: nil)
        realityViewContent.add(target)
    }

    /// Respond to the user tapping on an object by printing name of entity and tap location
    ///
    /// - Parameters:
    ///   - location: location of tap gesture
    ///   - entity: entity that was tapped
    func handleTap(location: SIMD3<Float>, entity: Entity) {
        os_log("tap location: \(location), name: \(entity.name, privacy: .public)")
    }
}
Example of the small red target:
No SwiftUI views inside the ImmersiveSpace. I only have Entity and ModelEntity instances created manually with RealityKit.
I'll try some additional experiments. I think I will programmatically place a small object 1.5 meters in front of me (but not make it tappable), look at it, and tap. I assume the gaze will pass through it and hit the floor below, and I can then compare the event's location3D with the object's known position.
I made some progress.
When creating the SceneReconstructionProvider, specify the classifications mode.
let sceneReconstruction = SceneReconstructionProvider(modes: [.classification])
Then the MeshAnchor.Geometry's classifications property is set, and here are some example values:
count: 11342
description: GeometrySource(count: 11342, format: MTLVertexFormat(rawValue: 45))
format: 45
componentsPerVector: 1
offset: 0
stride: 1
So I am guessing the buffer contains a sequence of values that map to the MeshAnchor.MeshClassification raw values. (Now I just need to figure out which MTLVertexFormat case has a raw value of 45. :-)
Edit: uchar is type 45. So, the buffer contains a sequence of unsigned bytes.
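For what it's worth, here's a rough sketch of how I'd walk that buffer, assuming the geometry's classifications GeometrySource uses one unsigned byte per face (format .uchar, stride 1, as in the values above); treat it as unverified:

import ARKit

// Rough sketch: interpret the classification buffer as one UInt8 per face and map each
// byte to a MeshAnchor.MeshClassification case. Adjust if the format or stride differ.
func faceClassifications(for geometry: MeshAnchor.Geometry) -> [MeshAnchor.MeshClassification] {
    guard let source = geometry.classifications else { return [] }
    let base = source.buffer.contents().advanced(by: source.offset)
    return (0..<source.count).map { index in
        let rawValue = base.advanced(by: index * source.stride)
            .assumingMemoryBound(to: UInt8.self).pointee
        return MeshAnchor.MeshClassification(rawValue: Int(rawValue)) ?? .none
    }
}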
Oops. It helps if I read the details. I'm rolling back to Beta 8.
Note: Xcode > Settings > Platforms shows the visionOS simulator is there. I'm going to delete all my Xcode installs and try a fresh install.
I have found a workaround, but I don't know if this is a good design or not.
In "Deployment Info" in Xcode, I set iPhone/iPad Orientation to Portrait only. Now when I rotate the device to the side, the cube doesn't change size.
(Note: I am getting the device's eulerAngles and converting them to a quaternion and applying that to the PerspectiveCamera)
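To give a rough idea of the conversion I mean, here's a sketch; the rotation order and axis signs are assumptions and may need adjusting for your motion source and coordinate setup:

import RealityKit
import simd

// Rough sketch: build a quaternion from device euler angles and apply it to the camera.
// Yaw-pitch-roll order and these axes are one common convention, not necessarily yours.
func applyDeviceRotation(pitch: Float, yaw: Float, roll: Float, to camera: PerspectiveCamera) {
    let qPitch = simd_quatf(angle: pitch, axis: SIMD3<Float>(1, 0, 0))
    let qYaw   = simd_quatf(angle: yaw, axis: SIMD3<Float>(0, 1, 0))
    let qRoll  = simd_quatf(angle: roll, axis: SIMD3<Float>(0, 0, 1))
    camera.orientation = qYaw * qPitch * qRoll
}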
SharePlay with visionOS appears to hide location data of other people from the app. For example, data about the other personas (outside of their immersion status) is not exposed via APIs. I am guessing this is for privacy reasons (?).
I am not sure how Apple handles (or will handle) people in the same physical room. So far, I haven't seen any examples of (or WWDC videos) covering this. I look forward to some clarification and examples.
One possible workaround for people in the same physical room is to anchor the virtual content to an image. Print that image on a piece of paper and place it on the floor or a table. The two participants should see the same virtual content in the same location and same orientation because it is tied to something physical (the printed paper).
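As a rough sketch of that idea using RealityKit's image anchoring target (I've used this on iOS; check current availability and the ARKit image-tracking route on visionOS). The resource group and image names are placeholders for an AR Resource Group in your asset catalog:

import RealityKit

// Rough sketch: anchor shared content to a printed reference image so two people in the
// same room see it in the same physical spot. "AR Resources" / "FloorMarker" are placeholders.
func makeSharedContentAnchor() -> AnchorEntity {
    let imageAnchor = AnchorEntity(.image(group: "AR Resources", name: "FloorMarker"))
    let marker = ModelEntity(
        mesh: .generateBox(size: 0.1),
        materials: [SimpleMaterial(color: .blue, isMetallic: false)]
    )
    imageAnchor.addChild(marker)
    return imageAnchor
}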
Just bumping this up, in part because I ran into this problem again, this time while trying to find the underlying cause of a <> issue in a SwiftUI coordinator.
I started a new project today beginning with "Augmented Reality App" template, and now I get this ARKit error with iOS 16.4 but not iOS 16.2 (or any other iOS minimum deployment). Sometimes I feel like Apple is gaslighting me. :-)
This issue also prevents Xcode from knowing a variable's type, showing <> (see screenshot below), which causes additional problems. (Again, all problems go away when I target a different minimum deployment target)
Here are the "Minimum Deployments" values where I get the error and where I do not:
iOS 16.4 - error
iOS 16.3 - NO error
iOS 16.2 - NO error
I can replicate the error by creating a new iOS project with the template "Augmented Reality App", and then simply add "import ARKit" to the group of imports.
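In other words, the imports end up looking roughly like this; the ARKit line is the only addition to the template:

import SwiftUI
import RealityKit
import ARKit // adding this line triggers the error when the minimum deployment is iOS 16.4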
In this example, with the minimum deployment set to iOS 16.4, Xcode doesn't know what type foo is; it shows <>.
But when I change the minimum deployment target to iOS 16.3, Xcode knows foo is of type PerspectiveCamera.
I have yet to do this (I am working towards it), but there are a number of videos on YouTube showing how to rig Blender models to work with Apple's ARKit (on YouTube's site, search for "iPhone blendshapes" or "ARKit blendshapes"). I've also seen people willing to rig models for a nominal fee to support Apple's blendshapes.
I think Apple's documentation on ARFaceAnchor is a good starting point.
I've also played a bit with Apple's Tracking and Visualizing Faces sample app. The code is a little out of date, but is still a good place to start.
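To make the connection concrete, here's a small illustrative sketch of reading a couple of ARFaceAnchor blendshape coefficients in an ARSCNViewDelegate; these 0.0-1.0 values are what a rigged model's blendshapes get driven by (the class name is just an example):

import ARKit
import SceneKit

// Illustrative sketch: read a few blendshape coefficients from the updated ARFaceAnchor.
// A model rigged with matching blendshapes would have its morph targets driven by these values.
class FaceTrackingDelegate: NSObject, ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
        let eyeBlinkLeft = faceAnchor.blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
        print("jawOpen: \(jawOpen), eyeBlinkLeft: \(eyeBlinkLeft)")
    }
}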
My personal recommendation (as a regular developer; not someone from Apple) would be to start with Apple's WWDC sessions on Spatial Computing.
The iPhone/iPad AR experiences will probably need a different user experience from Apple Vision Pro AR experiences. For example, on iPhone and iPad, because the user's hands are holding the device, they can't do much in the way of interacting with content. Also, Apple recommends AR experiences for iPhone & iPad only last for 1-2 minutes at a time for various reasons. See Apple's 2022 WWDC session Qualities of great AR experiences.
After that, I recommend starting from Apple's oldest sessions (2019, at the bottom of that web page) and working forward in time.
Finally, while Apple Vision Pro is the coolest platform for AR (I desperately want a dev kit), don't ignore the approximately 1 billion people with iPhones and iPads who could run your AR/VR applications if you target those platforms.
Can Apple's Reality Converter convert the .stl file to .usdz? (I don't think the old version could)
On occasion, I've imported models into Blender, exported them as GLTF files, and then used Reality Converter to convert them to USDZ.
One issue I've run into in this process is the unit size. For example, in Reality Converter I will change the units from meter to centimeter, then back to meter, and then export from Reality Converter. For some reason, this solves the problem of a model appearing at 1/100 the size you expect.
I believe this is the expected behavior and consistent with RealityKit on the iPhone when attaching an anchor to a vertical surface.
Attached is an old RealityKit test I did, where I anchored the object to the wall. In this case, the green cube is along the +X axis, the red cube is the +Y axis, and the blue cube is the +Z axis.
If I recall correctly, when I attached things to the ceiling, the +Y axis pointed down. In general, I believe the +Y axis is the normal from the surface.
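If it helps, a rough sketch of that kind of axis check looks like this (colors and sizes are illustrative):

import RealityKit
import UIKit

// Rough sketch: anchor three small cubes to a vertical plane to visualize the anchor's axes.
// Green marks +X, red marks +Y, blue marks +Z (matching the test described above).
func makeAxisMarkerAnchor() -> AnchorEntity {
    let wallAnchor = AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [0.2, 0.2]))
    let offsets: [(SIMD3<Float>, UIColor)] = [
        (SIMD3<Float>(0.2, 0, 0), .green),  // along +X
        (SIMD3<Float>(0, 0.2, 0), .red),    // along +Y (the surface normal, per the note above)
        (SIMD3<Float>(0, 0, 0.2), .blue)    // along +Z
    ]
    for (offset, color) in offsets {
        let cube = ModelEntity(
            mesh: .generateBox(size: 0.05),
            materials: [SimpleMaterial(color: color, isMetallic: false)]
        )
        cube.position = offset
        wallAnchor.addChild(cube)
    }
    return wallAnchor
}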