We are attempting to update the texture on a node. The code below works correctly when we use a color, but it fails when we try to use an image. The image is available in the bundle and displays correctly in other parts of our application. This texture is being applied to both the floor and the wall. Please help us resolve this issue.
for obj in Floor_grp[0].childNodes {
    let node = obj.flattenedClone()
    node.transform = obj.transform
    let imageMaterial = SCNMaterial()
    node.geometry?.materials = [imageMaterial]
    node.geometry?.firstMaterial?.diffuse.contents = UIColor.brown
    obj.removeFromParentNode()
    Floor_grp[0].addChildNode(node)
}
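For reference, this is roughly the image variant we are attempting. It's only a sketch: the asset name "floorTexture" is a placeholder, and the wrap settings are our assumption about how the texture should tile across the floor and wall.

for obj in Floor_grp[0].childNodes {
    let node = obj.flattenedClone()
    node.transform = obj.transform

    let imageMaterial = SCNMaterial()
    // "floorTexture" is a hypothetical asset name; assigning the UIImage itself
    // (rather than a name string) makes a missing asset fail visibly here.
    imageMaterial.diffuse.contents = UIImage(named: "floorTexture")
    // Assumed tiling so the image repeats across large floor/wall geometry.
    imageMaterial.diffuse.wrapS = .repeat
    imageMaterial.diffuse.wrapT = .repeat
    node.geometry?.materials = [imageMaterial]

    obj.removeFromParentNode()
    Floor_grp[0].addChildNode(node)
}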
Hi all,
Up until a couple of days ago I was able to open and run Reality Composer Pro on my Intel-based Mac. I tried to open it again this morning and I now receive the notification "Reality Composer is not supported on this Mac".
I understand that I will eventually need a new computer with Apple silicon but it was nice to be able to start exploring Shader Graphs with my existing computer for now.
Any suggestions? Perhaps I should go back to an earlier version of the Xcode beta - maybe the latest version disabled my ability to run RCP?
I'm running Version 15.1 beta (15C5042i) of Xcode on an Intel i7 MacBook Pro.
Thanks, in advance!
Hello all -
I'm experiencing a shading error when I have two UnlitSurface shaders that use images for color and opacity. When the shaders are applied to two mesh planes, one placed in front of the other, the front shader renders but its plane masks out whatever is behind it instead of letting it show through.
Basically - it looks like the opacity map on the shader in front is creating a 'mask'.
I've attached some images here to help explain.
Has anyone experienced this error, and how can I go about fixing it? Thanks!
How do we author a Reality file like the ones with animations under Examples at https://developer.apple.com/augmented-reality/quick-look/?
For example, "The Hab" : https://developer.apple.com/augmented-reality/quick-look/models/hab/hab_en.reality
Tapping on various buttons in this experience triggers various complex animations. I don't see any way to accomplish this in Reality Composer.
And I don't see any way to export/compile to a "reality file" from within Xcode.
How can I use multiple animations within a single GLTF file?
How can I set up multiple "tap targets" on a single object, where each one triggers a different action?
How do we author something similar? What tools do we use?
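For reference, at runtime RealityKit can get part of the way there with something like the sketch below, but I'm looking for the authoring-tool equivalent. The asset name "robot" is hypothetical.

import RealityKit
import UIKit

// Sketch: load a USDZ, list the animations it contains, and play one of them.
// This is runtime code, not a Reality file authoring workflow.
func loadAndPlay(in arView: ARView) throws {
    let entity = try Entity.load(named: "robot")               // hypothetical asset
    print("Animations in this file:", entity.availableAnimations.map(\.name))

    let anchor = AnchorEntity(world: [0, 0, -1])
    anchor.addChild(entity)
    arView.scene.addAnchor(anchor)

    // Play the first animation baked into the file; other animations could be
    // triggered from separate tap handlers (for example via arView.entity(at:),
    // which needs collision shapes on the tappable entities).
    if let animation = entity.availableAnimations.first {
        entity.playAnimation(animation)
    }
}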
Thanks
I'm developing a 3D scanner that runs on an iPad (6th gen, 12-inch).
Photogrammetry with ObjectCaptureSession was successful, but my other attempts have not been.
I've tried photogrammetry with URL inputs; the pictures come from AVCapturePhoto.
It is strange: if the metadata is not replaced, photogrammetry finishes, but it seems that no depth data or gravity info is used (depth and gravity are separate files). If the metadata is injected, the attempt fails.
This time I tried photogrammetry with a PhotogrammetrySample sequence, and it also failed.
The settings are:
camera: back LiDAR camera,
image format: kCVPixelFormatType_32BGRA (fails with a crash) or HEVC (just fails),
depth format: kCVPixelFormatType_DisparityFloat32 or kCVPixelFormatType_DepthFloat32,
photo settings: isDepthDataDeliveryEnabled = true, isDepthDataFiltered = false, embedded = true
I wonder whether the iPad supports photogrammetry with PhotogrammetrySamples.
I've already tested some sample code provided by Apple:
https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app
https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera
https://developer.apple.com/documentation/realitykit/taking_pictures_for_3d_object_capture
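For reference, this is roughly how I construct each sample from a captured photo. It's only a sketch: the pixel-buffer conversion and the disparity format are my assumptions about what PhotogrammetrySample expects.

import RealityKit
import AVFoundation

// Sketch: build a PhotogrammetrySample from an AVCapturePhoto.
// Assumptions: the capture delivers an uncompressed BGRA pixel buffer, and the
// depth map should be converted to 32-bit disparity before being attached.
func makeSample(id: Int, from photo: AVCapturePhoto) -> PhotogrammetrySample? {
    guard let image = photo.pixelBuffer else { return nil }    // nil for HEVC/JPEG captures
    var sample = PhotogrammetrySample(id: id, image: image)
    if let depth = photo.depthData?.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32) {
        sample.depthDataMap = depth.depthDataMap
    }
    sample.metadata = photo.metadata                           // pass through the capture metadata
    return sample
}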
What should I do to make Photogrammetry successful?
Can AR projects run in the visionOS simulator?
I have configured ARKit and a PlaneDetectionProvider, but after running the code in the simulator, the plane entities are not displayed correctly.
import Foundation
import ARKit
import RealityKit
class PlaneViewModel: ObservableObject {
    var session = ARKitSession()
    let planeData = PlaneDetectionProvider(alignments: [.horizontal])
    var entityMap: [UUID: Entity] = [:]
    var rootEntity = Entity()

    func start() async {
        do {
            if PlaneDetectionProvider.isSupported {
                try await session.run([planeData])
                for await update in planeData.anchorUpdates {
                    if update.anchor.classification == .window { continue }
                    switch update.event {
                    case .added, .updated:
                        updatePlane(update.anchor)
                    case .removed:
                        removePlane(update.anchor)
                    }
                }
            }
        } catch {
            print("ARKit session error \(error)")
        }
    }

    func updatePlane(_ anchor: PlaneAnchor) {
        if entityMap[anchor.id] == nil {
            // Add a new entity to represent this plane.
            let entity = ModelEntity(
                mesh: .generateText(anchor.classification.description)
            )
            entityMap[anchor.id] = entity
            rootEntity.addChild(entity)
        }
        entityMap[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)
    }

    func removePlane(_ anchor: PlaneAnchor) {
        entityMap[anchor.id]?.removeFromParent()
        entityMap.removeValue(forKey: anchor.id)
    }
}
// Wrapped in a View struct (name assumed) so @StateObject is a stored property
// rather than a local declared inside body.
struct PlaneDetectionView: View {
    @StateObject var planeViewModel = PlaneViewModel()

    var body: some View {
        RealityView { content in
            content.add(planeViewModel.rootEntity)
        }
        .task {
            await planeViewModel.start()
        }
    }
}
Hello,
I want to use Apple's PhotogrammetrySession to scan a window. However, ObjectCaptureSession seems to be a monotasker and won't allow capture to occur with anything but a small object on a flat surface.
So, I need to manually feed data into PhotogrammetrySession. But when I do, it focuses way too much on the scene behind the window, sacrificing detail on the window itself.
Is there a way for me to either coax ObjectCaptureSession into capturing an area on the wall, or for me to restrict PhotogrammetrySession's target bounding box manually? How does ObjectCaptureSession communicate the limited bounding box to PhotogrammetrySession?
Thanks,
Sebastian
Greetings,
I've been using RPScreenRecorder to record the screen, getting its buffer and copying it into a different buffer for later use. Since the last big iOS update, it crashes every time I run it. The crash is probably happening because the copy takes place while the first buffer is being overwritten, which is why I'm using a semaphore, but even with that it still crashes, and I don't know why it doesn't work.
I tried making a CIImage from the buffer and then copying that into a new buffer, but it keeps crashing. I added checks so it's not nil, but nothing is working. I even tried creating a new empty buffer and using that, which works properly, so the problem appears only when I combine the RPScreenRecorder buffer with the copy. And the worst part is that everything worked properly before that update. Does anyone know a way I could make this work?
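For reference, this is roughly the copy I'm attempting, written as a standalone function. It's a sketch under my own assumptions: that locking both buffers is the safe way to read while ReplayKit may still be writing, and that the buffer has a single plane; a planar YCbCr buffer would need a per-plane copy using CVPixelBufferGetBaseAddressOfPlane.

import Foundation
import CoreVideo

// Sketch: deep-copy a CVPixelBuffer row by row (single-plane formats only).
func copyPixelBuffer(_ source: CVPixelBuffer) -> CVPixelBuffer? {
    var copyOut: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(source),
                        CVPixelBufferGetHeight(source),
                        CVPixelBufferGetPixelFormatType(source),
                        nil,
                        &copyOut)
    guard let copy = copyOut else { return nil }

    CVPixelBufferLockBaseAddress(source, .readOnly)
    CVPixelBufferLockBaseAddress(copy, [])
    defer {
        CVPixelBufferUnlockBaseAddress(copy, [])
        CVPixelBufferUnlockBaseAddress(source, .readOnly)
    }

    guard let src = CVPixelBufferGetBaseAddress(source),
          let dst = CVPixelBufferGetBaseAddress(copy) else { return nil }

    // Row strides can differ between the two buffers, so copy row by row.
    let srcBytesPerRow = CVPixelBufferGetBytesPerRow(source)
    let dstBytesPerRow = CVPixelBufferGetBytesPerRow(copy)
    for row in 0..<CVPixelBufferGetHeight(source) {
        memcpy(dst + row * dstBytesPerRow, src + row * srcBytesPerRow, min(srcBytesPerRow, dstBytesPerRow))
    }
    return copy
}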
I have an app that shows a screen using ObjectCaptureView, and each time the view appears and disappears the memory increases by around 400-500 MB.
After checking the memory graph I found that it was related to the SwiftUI view, which creates the ARKit view under the hood.
To be sure that the ObjectCaptureView was the source of the memory leak, I commented out only the line in the view that creates the ObjectCaptureView, keeping the rest of the logic that handles the session state, feedback, etc.
I have various .reality files published on a website as part of a learning product, which I deployed Feb. 2023 using the latest Reality Composer at the time.
Users informed me that none of the .reality files will open on iOS 17, which I have confirmed. They still open fine on iOS 16.
On iOS 17 the QuickLook viewer says "Object requires a newer version of iOS."
What gives? Did Apple deprecate .reality files, or are these designed to work on only one version of iOS?
I want to convert this UIKit code to SwiftUI, but I'm having some problems and it doesn't work. Please help me.
/*
See LICENSE folder for this sample’s licensing information.

Abstract:
The sample app's main view controller.
*/
import UIKit
import RealityKit
import ARKit
import Combine
class ViewController: UIViewController, ARSessionDelegate {
    @IBOutlet var arView: ARView!
    var character: BodyTrackedEntity?
    let characterOffset: SIMD3<Float> = [-1.0, 0, 0]
    let characterAnchor = AnchorEntity()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        arView.session.delegate = self
        guard ARBodyTrackingConfiguration.isSupported else {
            fatalError("This feature is only supported on devices with an A12 chip")
        }
        // Run a body-tracking configuration.
        let configuration = ARBodyTrackingConfiguration()
        arView.session.run(configuration)
        arView.scene.addAnchor(characterAnchor)
        var cancellable: AnyCancellable? = nil
        cancellable = Entity.loadBodyTrackedAsync(named: "character/robot").sink(
            receiveCompletion: { completion in
                if case let .failure(error) = completion {
                    print("Error: Unable to load model: \(error.localizedDescription)")
                }
                cancellable?.cancel()
            }, receiveValue: { (character: Entity) in
                if let character = character as? BodyTrackedEntity {
                    character.scale = [1.0, 1.0, 1.0]
                    self.character = character
                    cancellable?.cancel()
                } else {
                    print("Error: Unable to load model as BodyTrackedEntity")
                }
            })
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }
            // Update the position of the character anchor.
            let bodyPosition = simd_make_float3(bodyAnchor.transform.columns.3)
            characterAnchor.position = bodyPosition + characterOffset
            characterAnchor.orientation = Transform(matrix: bodyAnchor.transform).rotation
            if let character = character, character.parent == nil {
                // 1. the body anchor was detected and
                // 2. the character was loaded.
                characterAnchor.addChild(character)
            }
        }
    }
}
Here's the code I wrote in SwiftUI
import SwiftUI
import RealityKit
import ARKit
import Combine
struct ContentView : View {
    var body: some View {
        ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}
struct ARViewContainer: UIViewRepresentable {
    var character: BodyTrackedEntity?
    let characterOffset: SIMD3<Float> = [-1.0, 0, 0]
    let characterAnchor = AnchorEntity()

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        guard ARBodyTrackingConfiguration.isSupported else {
            fatalError("This feature is only supported on devices with an A12 chip")
        }
        let configuration = ARBodyTrackingConfiguration()
        arView.session.run(configuration)
        arView.scene.addAnchor(characterAnchor)
        var cancellable: AnyCancellable? = nil
        cancellable = Entity.loadBodyTrackedAsync(named: "character/robot").sink(
            receiveCompletion: { completion in
                if case let .failure(error) = completion {
                    print("Error: Unable to load model: \(error.localizedDescription)")
                }
                cancellable?.cancel()
            }, receiveValue: { (character: Entity) in
                if let character = character as? BodyTrackedEntity {
                    character.scale = [1.0, 1.0, 1.0]
                    self.character = character
                    cancellable?.cancel()
                } else {
                    print("Error: Unable to load model as BodyTrackedEntity")
                }
            })
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }
            let bodyPosition = simd_make_float3(bodyAnchor.transform.columns.3)
            characterAnchor.position = bodyPosition + characterOffset
            characterAnchor.orientation = Transform(matrix: bodyAnchor.transform).rotation
            if let character = character, character.parent == nil {
                // 1. the body anchor was detected and
                // 2. the character was loaded.
                characterAnchor.addChild(character)
            }
        }
    }
}
#Preview {
ContentView()
}
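A minimal sketch of one way the conversion could go, assuming a Coordinator acts as the ARSessionDelegate so the delegate callback and the loaded character live in a reference type rather than in the value-type view. The asset name "character/robot" comes from the sample above; the struct and class names here are placeholders.

import SwiftUI
import RealityKit
import ARKit
import Combine

struct BodyTrackingContainer: UIViewRepresentable {
    func makeCoordinator() -> Coordinator { Coordinator() }

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        guard ARBodyTrackingConfiguration.isSupported else {
            fatalError("This feature is only supported on devices with an A12 chip")
        }
        // Route ARSession callbacks to the Coordinator instead of the struct.
        arView.session.delegate = context.coordinator
        arView.session.run(ARBodyTrackingConfiguration())
        arView.scene.addAnchor(context.coordinator.characterAnchor)
        context.coordinator.loadCharacter()
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    final class Coordinator: NSObject, ARSessionDelegate {
        let characterAnchor = AnchorEntity()
        let characterOffset: SIMD3<Float> = [-1.0, 0, 0]
        var character: BodyTrackedEntity?
        var cancellable: AnyCancellable?

        func loadCharacter() {
            cancellable = Entity.loadBodyTrackedAsync(named: "character/robot").sink(
                receiveCompletion: { completion in
                    if case let .failure(error) = completion {
                        print("Error: Unable to load model: \(error.localizedDescription)")
                    }
                }, receiveValue: { [weak self] (character: Entity) in
                    if let character = character as? BodyTrackedEntity {
                        character.scale = [1.0, 1.0, 1.0]
                        self?.character = character
                    } else {
                        print("Error: Unable to load model as BodyTrackedEntity")
                    }
                })
        }

        func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
            for anchor in anchors {
                guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }
                // Keep the character anchored to the tracked body, offset to one side.
                characterAnchor.position = simd_make_float3(bodyAnchor.transform.columns.3) + characterOffset
                characterAnchor.orientation = Transform(matrix: bodyAnchor.transform).rotation
                if let character = character, character.parent == nil {
                    characterAnchor.addChild(character)
                }
            }
        }
    }
}

The view would then be used the same way as ARViewContainer above, e.g. BodyTrackingContainer().edgesIgnoringSafeArea(.all).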
In Full immersive mode,
I create a sphere with radius 10 and add a CollisionComponent and an InputTargetComponent to it.
Then I create a 0.2 cube and add the same two components to it.
I also add an attachment.
The code is as follows:
RealityView { content, attachments in
    let meshgenerate = MeshResource.generateSphere(radius: 10)
    let collisionShape = ShapeResource.generateSphere(radius: 10)
    var sp = ModelEntity(mesh: meshgenerate)
    sp.components.set(CollisionComponent(shapes: [collisionShape]))
    sp.components.set(InputTargetComponent())
    sp.transform.scale *= .init(-1, 1, 1)
    sp.name = "sp"
    content.add(sp)

    let ont = ModelEntity(mesh: MeshResource.generateBox(size: 0.2))
    ont.components.set(CollisionComponent(shapes: [ShapeResource.generateBox(size: .init(x: 0.2, y: 0.2, z: 0.2))]))
    ont.components.set(InputTargetComponent())
    ont.name = "ont"
    ont.position = .init(x: 0, y: 0, z: -2)
    content.add(ont)

    if let stack = attachments.entity(for: "aid") {
        stack.name = "sssssss"
        stack.setPosition(.init(x: 0, y: 1.5, z: -1), relativeTo: nil)
        // stack.generateCollisionShapes(recursive: false)
        // stack.components.set(InputTargetComponent())
        content.add(stack)
    }
} attachments: {
    let rostion = Rotation3D(angle: Angle2D(degrees: 30), axis: .x)
    Attachment(id: "aid") {
        Button {
            print("sss", "Button")
        } label: {
            Text("New Color")
                .font(.extraLargeTitle)
                .padding(40)
        }
        .background(.yellow)
    }
}
.gesture(TapGesture().targetedToAnyEntity().onEnded({ value in
    print("sss", "TapGesture", value.entity.name)
    // openwind(id: "main")
}))
Only the sphere can trigger the gesture; the other ModelEntity and the attachment cannot trigger the gesture.
I know the problem is that the other entities are placed inside the sphere, which also has an InputTargetComponent. Without removing the InputTargetComponent from the sphere, how can I make the attachment trigger gestures as well?
Hi -
I've searched all over the docs and might simply be missing something very big. Is raycasting available with the front-facing TrueDepth camera, like on the iPad Pro?
I'm currently working on an application that uses ARFaceTrackingConfiguration, and a simple raycast from the screen center is not yielding results.
That same code in World configuration using the rear camera is producing results.
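For reference, the raycast in question is essentially the sketch below, assuming an ARView; the .estimatedPlane target and .any alignment are just the values I happen to be using.

import ARKit
import RealityKit
import UIKit

// Sketch: raycast from the screen center of an ARView.
func raycastFromCenter(of arView: ARView) -> ARRaycastResult? {
    let center = CGPoint(x: arView.bounds.midX, y: arView.bounds.midY)
    let results = arView.raycast(from: center, allowing: .estimatedPlane, alignment: .any)
    // Empty under ARFaceTrackingConfiguration; returns hits under world tracking.
    return results.first
}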
My understanding, given the examples around bitmojis and face tracking, was that the front camera would have essentially the same depth data as the rear, just with less total distance available.
Thanks for setting me straight! This is a very big deal for this particular project and I'm fearful I missed something in my pre-planning and investigation.
Kane
Hi. I want to make an iOS app where the AR camera can show annotations for places around me, something like ARGeoAnchor, but I don't have any idea how. Can anyone give me some keywords? I can use MapKit to search, but I don't know how to map the results into AR.
Is it possible to render a Safari-based webview in full immersive space, so an app can show web pages there?
Greetings!
I have used Apple's ARKit documentation to create a simple ARKit application that utilizes SceneKit (I tried Metal too).
I am currently unsure how to make use of smoothedSceneDepth (or sceneDepth) to acquire the depth data from the depth map obtained in the view.
Is there any particular method or way that I can access this data for displaying the depth?
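For context, this is the kind of access I have in mind, a sketch assuming the session runs with the .smoothedSceneDepth frame semantics and an ARSessionDelegate receives frame updates:

import ARKit
import CoreVideo

final class DepthReader: NSObject, ARSessionDelegate {
    func configure(_ session: ARSession) {
        let configuration = ARWorldTrackingConfiguration()
        // Only request smoothed scene depth where the device supports it.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.smoothedSceneDepth) {
            configuration.frameSemantics.insert(.smoothedSceneDepth)
        }
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // smoothedSceneDepth is ARDepthData; its depthMap is a Float32 CVPixelBuffer.
        guard let depthMap = frame.smoothedSceneDepth?.depthMap else { return }
        print("Depth map size: \(CVPixelBufferGetWidth(depthMap)) x \(CVPixelBufferGetHeight(depthMap))")
    }
}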
I would be grateful for any input or suggestions.
Thanks in advance
Hello everyone
I'm using the plane detection feature in ARKit.
I get ARPlaneAnchors back from the ARSCNViewDelegate methods renderer(_:didAdd:for:), renderer(_:didUpdate:for:), and renderer(_:didRemove:for:).
Occasionally ARKit clears ARPlaneAnchors through the renderer(_:didRemove:for:) call.
I think that after deleting an ARPlaneAnchor, ARKit will recreate an ARPlaneAnchor in that location.
So is there any relationship between the deleted ARPlaneAnchor and the newly created ARPlaneAnchor?
(Does the identifier, name, or other information reflect that relationship?)
https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/visualizing_and_interacting_with_a_reconstructed_scene
It says that a fourth-generation iPad Pro running iPadOS 13.4 or later works because of the LiDAR scanner. If an iPhone 13 also has LiDAR, would it work too?
struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.debugOptions = .showStatistics // Error:
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}
-[MTLDebugRenderCommandEncoder validateCommonDrawErrors:]:5775: failed assertion `Draw Errors Validation
Vertex Function(vsSdfFont): the offset into the buffer viewConstants that is bound at buffer index 4 must be a multiple of 256 but was set to 61840.
'