Hi all.
I can get a disparity/depth data map from AVDepthData.depthDataMap and use it directly to generate a depth image. I found that in some situations, objects in the depth image cannot be clearly distinguished:
When using disparity data, objects closer than 1 meter can't be clearly distinguished.
When using depth data, objects farther than 1 meter can't be clearly distinguished.
Does anyone know why this happens and how to fix it?
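In case it helps frame the question, here is a minimal sketch of how such a map can be normalized for display (it assumes a single-plane 32-bit float buffer after conversion; the conversion call is standard AVFoundation, everything else is illustrative). Disparity is the reciprocal of depth, so each representation concentrates its precision at the opposite end of the range, which may be part of why detail flattens on one side of roughly 1 meter depending on which map is visualized.

import AVFoundation
import CoreVideo

// Sketch: read AVDepthData into a normalized [0, 1] array for visualization.
// Assumes the buffer is single-plane Float32 after conversion.
func normalizedDepthValues(from depthData: AVDepthData) -> [Float] {
    // Disparity = 1 / depth; convert to a known representation first.
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let buffer = converted.depthDataMap

    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

    let width = CVPixelBufferGetWidth(buffer)
    let height = CVPixelBufferGetHeight(buffer)
    let rowBytes = CVPixelBufferGetBytesPerRow(buffer)
    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return [] }

    var values: [Float] = []
    values.reserveCapacity(width * height)
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width { values.append(row[x]) }
    }

    // Min-max normalize the finite values to 0...1 for display.
    let finite = values.filter { $0.isFinite }
    guard let minValue = finite.min(), let maxValue = finite.max(), maxValue > minValue else {
        return values
    }
    return values.map { ($0 - minValue) / (maxValue - minValue) }
}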
We scan the room using the RoomPlan API, and after the scan we obtain objects rendered in white, with shadows and shading. However, after updating the color of these objects, the shadows and shading are lost.
RoomPlan scan
After Update
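In case it is relevant, here is a hedged sketch of recoloring a scan loaded into RealityKit while keeping a lit material (the roomEntity name is illustrative, and it assumes the exported model is loaded as a ModelEntity). A lit material such as SimpleMaterial keeps responding to scene lighting, whereas swapping in an UnlitMaterial is a common way to lose shading.

import RealityKit
import UIKit

// Hedged sketch: recolor a loaded scan without losing lighting response.
// `roomEntity` is assumed to be the ModelEntity loaded from the exported USDZ.
func recolor(_ roomEntity: ModelEntity, to color: UIColor) {
    // SimpleMaterial is lit, so shading from scene lighting is preserved.
    let litMaterial = SimpleMaterial(color: color, roughness: 0.8, isMetallic: false)
    if var model = roomEntity.model {
        model.materials = model.materials.map { _ in litMaterial }
        roomEntity.model = model
    }
}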
After adding gestures to an EntityModel, when that EntityModel later needs to be removed, its instance in memory cannot be released unless uiView.gestureRecognizers?.removeAll() is executed. However, executing this method also removes the gestures of the other EntityModels in the ARView. Does anyone have a better way to achieve this? (One possible approach is sketched after the example code below.)
Example Code:
struct ContentView : View {
    @State private var isRemoveEntityModel = false

    var body: some View {
        ZStack(alignment: .bottom) {
            ARViewContainer(isRemoveEntityModel: $isRemoveEntityModel)
                .edgesIgnoringSafeArea(.all)

            Button {
                isRemoveEntityModel = true
            } label: {
                Image(systemName: "trash")
                    .font(.system(size: 35))
                    .foregroundStyle(.orange)
            }
        }
    }
}
ARViewContainer:
struct ARViewContainer: UIViewRepresentable {
    @Binding var isRemoveEntityModel: Bool

    let arView = ARView(frame: .zero)

    func makeUIView(context: Context) -> ARView {
        let model = CustomEntityModel()
        model.transform.translation.y = 0.05
        model.generateCollisionShapes(recursive: true)
        arView.installGestures(.all, for: model) // here --> After executing this line of code, it allows the deletion of a custom EntityModel in ARView.scene, but the deinit {} method of the custom EntityModel is not executed.
        let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))
        anchor.children.append(model)
        arView.scene.anchors.append(anchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        if isRemoveEntityModel {
            let customEntityModel = uiView.scene.findEntity(named: "Box_EntityModel")
            uiView.gestureRecognizers?.removeAll() // here --> After executing this line of code, ARView.scene can correctly delete the CustomEntityModel, and the deinit {} method of CustomEntityModel can also be executed properly. However, other CustomEntityModels in ARView.scene lose their Gestures as well.
            customEntityModel?.removeFromParent()
        }
    }
}
CustomEntityModel:
class CustomEntityModel: Entity, HasModel, HasAnchoring, HasCollision {
    required init() {
        super.init()
        let mesh = MeshResource.generateBox(size: 0.1)
        let material = SimpleMaterial(color: .gray, isMetallic: true)
        self.model = ModelComponent(mesh: mesh, materials: [material])
        self.name = "Box_EntityModel"
    }

    deinit {
        print("CustomEntityModel_remove")
    }
}
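One approach, sketched below under the assumption that installGestures(_:for:) returns the recognizers it adds (the iOS API does): keep the returned recognizers per entity and remove only those when that entity is deleted, instead of calling removeAll(). The class and method names here are illustrative.

import RealityKit
import UIKit

// Hedged sketch: track the gesture recognizers installed for each entity so that
// only that entity's recognizers are removed later, leaving other models untouched.
final class GestureBookkeeper {
    private var recognizersByEntity: [Entity.ID: [UIGestureRecognizer]] = [:]

    func installGestures(on arView: ARView, for entity: Entity & HasCollision) {
        let installed = arView.installGestures(.all, for: entity)
        recognizersByEntity[entity.id] = installed.compactMap { $0 as? UIGestureRecognizer }
    }

    func remove(_ entity: Entity, from arView: ARView) {
        // Remove only this entity's recognizers; other entities keep their gestures.
        recognizersByEntity[entity.id]?.forEach { arView.removeGestureRecognizer($0) }
        recognizersByEntity[entity.id] = nil
        entity.removeFromParent()
    }
}

With no recognizer left referencing the removed entity, its deinit should fire, and the remaining CustomEntityModels keep their translation, rotation, and scale gestures.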
Hello Guys,
I am currently stuck on understanding how I can place a 3D Entity from a USDZ file or a Reality Composer Pro project in the middle of a table in a mixed ImmersiveSpace. When I use the
AnchorEntity(.plane(.horizontal, classification: .table, minimumBounds: SIMD2<Float>(0.2, 0.2)))
it just places the entity somewhere on the table, not in the middle, and not in the orientation of the table, so the edges are not aligned.
Has anybody got a clue on how to do this? I would be very thankful for a response.
Thanks
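A hedged sketch of one way to get both the center and the edge alignment (it assumes a full ImmersiveSpace with ARKit world-sensing authorization handled elsewhere; the function name is illustrative): run a PlaneDetectionProvider, pick a plane classified as a table, and use the extent transform, whose axes follow the plane's edges, to position the entity at the plane's center.

import ARKit
import RealityKit

// Hedged sketch: center an entity on a detected table and align it with the
// table's edges. Authorization and choosing among multiple tables are omitted.
@MainActor
func centerOnTable(_ entity: Entity) async {
    let session = ARKitSession()
    let planeDetection = PlaneDetectionProvider(alignments: [.horizontal])
    do {
        try await session.run([planeDetection])
        for await update in planeDetection.anchorUpdates {
            let anchor = update.anchor
            guard case .table = anchor.classification else { continue }
            // The plane's center: anchor origin combined with the extent transform.
            // The extent transform's axes run along the plane's edges, so the entity
            // ends up oriented with the table rather than with the world axes.
            let center = anchor.originFromAnchorTransform * anchor.geometry.extent.anchorFromExtentTransform
            entity.setTransformMatrix(center, relativeTo: nil)
            // A real app would stop or filter updates once it is satisfied.
        }
    } catch {
        print("Plane detection failed: \(error)")
    }
}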
Is there a way to capture video from the front-facing camera (i.e. the selfie camera) on iPhone while using face anchors and left/right eye transforms for AR?
Hope to get support for ARKit in high resolution
Using this API right now:
NSArray<ARVideoFormat *> *supportedVideoFormats = [ARWorldTrackingConfiguration supportedVideoFormats];
Question:
If a high-resolution ARVideoFormat is not included in supportedVideoFormats, is it still supported?
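Not a formal answer, but a small sketch of the usual pattern (standard ARKit API; only the function name is mine): videoFormat is documented to accept only formats listed in supportedVideoFormats for that device, and on devices that support 4K capture, ARKit exposes it through recommendedVideoFormatFor4KResolution (iOS 16+).

import ARKit

// Sketch: choose the highest-resolution video format the device reports.
func highestResolutionWorldTrackingConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()

    // iOS 16+: ARKit recommends a 4K format directly on supported devices.
    if #available(iOS 16.0, *),
       let fourK = ARWorldTrackingConfiguration.recommendedVideoFormatFor4KResolution {
        configuration.videoFormat = fourK
        return configuration
    }

    // Otherwise fall back to the largest format in supportedVideoFormats.
    if let best = ARWorldTrackingConfiguration.supportedVideoFormats.max(by: {
        $0.imageResolution.width * $0.imageResolution.height <
            $1.imageResolution.width * $1.imageResolution.height
    }) {
        configuration.videoFormat = best
    }
    return configuration
}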
Hello everyone,
I'm working on an AR app in which I load a 3D model of a human arm and place it on a QR code (ARImageAnchor). The user can then move the model and change its texture.
Is it possible to draw on this 3D model with my finger?
I have seen videos where models react to a touch. But I don't just want to touch the model, I want to create a small sphere exactly at the point where I touch the model, for example.
I would like to be able to draw a line on the arm. My model has a CollisionShape.
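A hedged sketch of one way to do this with RealityKit's hit testing (the view controller, arView, and armEntity names are illustrative; the hit test relies on the CollisionShape the model already has): convert the tap into a collision hit and parent a small sphere at that point.

import RealityKit
import UIKit

// Hedged sketch: place a small sphere wherever a tap hits the arm model.
final class DrawOnModelController: UIViewController {
    let arView = ARView(frame: .zero)
    var armEntity: Entity?   // the arm model anchored to the ARImageAnchor, set elsewhere

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: arView)
        // Cast a ray against collision shapes under the tap location.
        guard let armEntity,
              let hit = arView.hitTest(point, query: .nearest, mask: .all).first,
              belongsToArm(hit.entity, arm: armEntity) else { return }

        let marker = ModelEntity(mesh: .generateSphere(radius: 0.003),
                                 materials: [SimpleMaterial(color: .red, isMetallic: false)])
        // Convert the world-space hit position into the arm's local space so the
        // marker stays attached when the arm is moved or re-textured.
        marker.position = armEntity.convert(position: hit.position, from: nil)
        armEntity.addChild(marker)
    }

    private func belongsToArm(_ entity: Entity, arm: Entity) -> Bool {
        var current: Entity? = entity
        while let node = current {
            if node === arm { return true }
            current = node.parent
        }
        return false
    }
}

Connecting successive hit points with more spheres or thin cylinders is one way to extend this from single dots to a drawn line.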
Error:
RoomCaptureSession.CaptureError.exceedSceneSizeLimit
Apple Documentation Explanation:
An error that indicates when the scene size grows past the framework’s limitations.
Issue:
This error pops up on my iPhone 14 Pro (128 GB) after a few RoomPlan scans are done. The error shows up even when the room size is small. It occurs immediately after I start the RoomCaptureSession, following relocalization of the previous AR session (in a world-tracking configuration). I am having trouble understanding exactly why this error occurs and how to debug/solve it.
Does anyone have any idea on how to approach to this issue?
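Not a root-cause answer, but a hedged sketch of where this error can at least be caught and recovered from, using the RoomCaptureSessionDelegate callback (the coordinator class name is illustrative, and restarting with a fresh configuration instead of the relocalized AR session is only a workaround idea).

import RoomPlan

// Hedged sketch: detect exceedSceneSizeLimit in the delegate and restart the capture.
final class RoomCaptureCoordinator: NSObject, RoomCaptureSessionDelegate {
    func captureSession(_ session: RoomCaptureSession,
                        didEndWith data: CapturedRoomData, error: Error?) {
        guard let captureError = error as? RoomCaptureSession.CaptureError,
              case .exceedSceneSizeLimit = captureError else { return }
        // Workaround idea: stop and start over with a fresh configuration rather
        // than continuing on top of the relocalized world-tracking session.
        session.stop()
        session.run(configuration: RoomCaptureSession.Configuration())
    }
}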
We are working on a world scale AR app that leverages the device location and heading to place objects in the streets, so that they are correctly and stably anchored to certain locations.
Since the geo-tracking imagery is only available in certain cities and areas, we are trying to figure out how to fall back when geo-tracking becomes unavailable as the device moves away, while still retaining good AR camera accuracy. We might need to come up with an algorithm using the device GPS to line up the ARCamera with our objects.
Question: Does geo-tracking always provide greater than or equal to the accuracy of world tracking, for a GPS outdoor AR experience?
If so, we can simply use the ARGeoTrackingConfiguration for the entire time, and rely on the ARView keeping itself aligned. Otherwise, we need to switch between it and ARWorldTrackingConfiguration when geo-tracking is not available and/or its accuracy is low, then roll our own algorithm to keep the camera aligned.
Thanks.
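For the fallback decision itself, a small sketch using the availability check ARKit already provides (standard API; the function name is mine): query geo-tracking availability at the current location and pick the configuration accordingly, rather than assuming one always wins on accuracy.

import ARKit

// Sketch: choose between geo tracking and world tracking based on availability.
func chooseConfiguration(for session: ARSession) {
    ARGeoTrackingConfiguration.checkAvailability { isAvailable, error in
        let configuration: ARConfiguration
        if isAvailable {
            configuration = ARGeoTrackingConfiguration()
        } else {
            // Outside geo-tracking coverage: fall back to world tracking and keep
            // content aligned with GPS/heading using custom logic.
            configuration = ARWorldTrackingConfiguration()
        }
        DispatchQueue.main.async {
            session.run(configuration)
        }
    }
}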
I have an iOS app that uses (camera) video feed and applies CoreImage filters to simulate a specific real world effect (for educational purposes).
Now I want to make a similar app for visionOS and apply the same CoreImage filters to the content (live view) the user sees while wearing the Apple Vision Pro headset.
Is there a way to do it with current APIs and what would you recommend?
I saw that we cannot get the video feed from the camera(s); is there a way to do it with ARKit and apply the filters somehow using that?
I know visionOS is a young/fresh platform but any help would be great!
Thank you!
How persistent is the storage of the WorldTrackingProvider and its underlying world map reconstruction?
The documentation mentions town-to-town anchor recovery and recovery between sessions, but does that include device restarts and app quits? There are no clues about how persistent it all is.
I'm constructing a RealityView where I'd like to display content in front of the user's face.
When testing, I found that the deviceAnchor I initially get is wrong, so I implemented the following code to wait until the deviceAnchor I get from the worldTrackingProvider has a correct value:
private let arkitSession = ARKitSession()
private let worldTrackingProvider = WorldTrackingProvider()

var body: some View {
    RealityView { content, attachments in
        Task {
            do {
                // init worldTrackingProvider
                try await arkitSession.run([worldTrackingProvider])

                // wait until deviceAnchor returns correct info
                var deviceAnchor: DeviceAnchor?
                // continuously get deviceAnchor and check until it's valid
                while deviceAnchor == nil || !checkDeviceAnchorValid(Transform(matrix: deviceAnchor!.originFromAnchorTransform).translation) {
                    deviceAnchor = worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
                }

                let cameraTransform = Transform(matrix: deviceAnchor!.originFromAnchorTransform)
                // ...codes that update my entity's translation
            } catch {
                print("Error: \(error)")
            }
        }
    }
}

private func checkDeviceAnchorValid(_ translation: SIMD3<Float>) -> Bool {
    // codes that check if the `deviceAnchor` has a valid translation.
}
However, I found that sometimes I can't get out of the while loop defined above. This is not because the rules inside my checkDeviceAnchorValid function are too strict, but because the translation I get from deviceAnchor is always invalid (it is [0, 0, 0] and never changes).
Why is this happening? Is this a known issue? I also wonder whether there is a callback I can receive when the worldTrackingProvider returns a correct deviceAnchor.
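A hedged alternative to the tight loop (only standard API names are used; the polling interval and attempt count are arbitrary): wait for the provider to reach the .running state before querying, yield between polls so nothing is starved, and give up after a bounded number of attempts. Your existing checkDeviceAnchorValid can be passed in as the isValid closure.

import ARKit
import RealityKit
import QuartzCore

// Hedged sketch: poll for a usable DeviceAnchor without spinning.
func waitForValidDeviceAnchor(from provider: WorldTrackingProvider,
                              isValid: (SIMD3<Float>) -> Bool) async -> DeviceAnchor? {
    // Don't query before the provider is actually running.
    while true {
        if case .running = provider.state { break }
        try? await Task.sleep(for: .milliseconds(50))
    }
    for _ in 0..<200 {
        if let anchor = provider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()),
           isValid(Transform(matrix: anchor.originFromAnchorTransform).translation) {
            return anchor
        }
        try? await Task.sleep(for: .milliseconds(50))   // yield between polls instead of spinning
    }
    return nil
}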
Scenario: a building with an old shopfront is to be renewed, and we have a visual of the new concept. Is there an app that can give us the coordinates in line with the plane of the front of the building, so we can map the visual onto it and have it alter perspective as you walk around, as it 'sticks' to the front of the real building? The GIF attached is the visual concept, but showing a historic picture.
I have an app idea that would map an OLD photo onto the front of the same existing building. The underlying work has already been done, see https://lowestoftoldandnow.org/full/strolleast#45
but obviously you would have to accurately record 4 points in 3D space, and the user of the app would also have to take these points (given to them by the app) and map them back onto the real world with the same accuracy. If the photo extended partly onto the next-door building, it would not work.
I am beginning to think that the technology is not there yet :-(
Why are RealityKit's high-level APIs only available on visionOS?
RealityView & Model3D, to name a few.
On other platforms, the only way to deploy RealityKit and/or ARKit currently is by using either UIKit or UIKit's integration with SwiftUI (UIViewRepresentable).
Are these newer APIs coming to other platforms as well?
So I have a RealityView with an Entity (from my bundle) being rendered in it like so:
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let entity = try? await Entity(named: "MyContent", in: realityKitContentBundle) {
                content.add(entity)
            }
        }
    }
}
Is it possible to programmatically transform the entity? Specifically I want to (1) translate it horizontally in space, e.g. 1 m to the right, and (2) rotate it 90°. I've been looking through the docs and haven't found a way to do this, but I fear I'm not too comfortable with Apple's docs quite yet.
Thanks in advance!
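For reference, a short sketch of the two transforms asked about, applied to the loaded entity (assuming +x counts as "to the right" and the rotation should be about the vertical axis; the helper name is illustrative):

import RealityKit
import simd

// Sketch: translate the loaded entity 1 m along +x and rotate it 90 degrees about y.
func placeToTheRightAndRotate(_ entity: Entity) {
    entity.position += SIMD3<Float>(1, 0, 0)                      // 1 m along +x ("to the right")
    entity.orientation *= simd_quatf(angle: .pi / 2,              // 90 degrees
                                     axis: SIMD3<Float>(0, 1, 0)) // around the vertical (y) axis
}

The same effect can be achieved by assigning a whole Transform to entity.transform.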
I do not really know how this works, but hi, I am Philemon.
For a school assignment I need to program an app; I have 2 years for this, and it is aimed at people who are interested in coding. I want to make an iOS app that can make 3D models from pictures (photogrammetry). I know that there are already apps for this, but I want to code it myself. I have a little bit of experience coding C# in Unity, but I really don't know where to start. Can someone help me? I know that Apple has RealityKit, but I want people without a LiDAR scanner to be able to use this too.
So where do I start, and which language do I need to learn?
every comment is welcome!!!
kind regards Philemon
My application uses ARKit to capture faces in real time. There are two occasional crashes during use that I cannot reproduce. The crash stacks are below; they are all system API calls, so I have no clue. Any suggestions on how to fix this? Thank you so much!
Additional information:
BUG IN CLIENT OF LIBPLATFORM: Trying to recursively lock an os_unfair_lock
the first kind:
EXC_BREAKPOINT 0x00000001f6d2d20c
0 libsystem_platform.dylib _os_unfair_lock_recursive_abort + 36
1 libsystem_platform.dylib _os_unfair_lock_lock_slow + 284
2 SceneKit C3DTransactionGetStack + 160
3 SceneKit _commitImplicitTransaction + 36
4 CoreFoundation CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION + 36
5 CoreFoundation __CFRunLoopDoObservers + 548
6 CoreFoundation __CFRunLoopRun + 1028
7 CoreFoundation CFRunLoopRunSpecific + 608
8 Foundation -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 212
9 Foundation -[NSRunLoop(NSRunLoop) run] + 64
10 UIKitCore __66-[UIViewInProcessAnimationManager startAdvancingAnimationManager:]_block_invoke_7 + 108
11 Foundation NSThread__start + 732
12 libsystem_pthread.dylib _pthread_start + 136
13 libsystem_pthread.dylib thread_start + 8
the second kind:
Crashed: com.apple.arkit.ardisplaylink.0x28083bd80
EXC_BREAKPOINT 0x00000001fe43920c
0 libsystem_platform.dylib _os_unfair_lock_recursive_abort + 36
1 libsystem_platform.dylib _os_unfair_lock_lock_slow + 284
2 SceneKit C3DTransactionGetStack + 160
3 SceneKit _commitImplicitTransaction + 36
4 CoreFoundation CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION + 36
5 CoreFoundation __CFRunLoopDoObservers + 548
6 CoreFoundation __CFRunLoopRun + 1028
7 CoreFoundation CFRunLoopRunSpecific + 608
8 CoreFoundation CFRunLoopRun + 64
9 ARKitCore -[ARRunLoop _startThread] + 616
10 Foundation NSThread__start + 732
11 libsystem_pthread.dylib _pthread_start + 136
12 libsystem_pthread.dylib thread_start + 8
I want to have realtime image anchor tracking together with RoomPlan.
But it's frustrating not to see anything that supports this, because it would be useful to have interactive things in the scanned room.
Ideally both should run at the same time, but if that's not possible, how do you align the two tracking spaces when running RoomPlan first and then ARKit image tracking? Sounds like a headache.
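In case it helps, a hedged sketch of one alignment approach (the "RoomMarkers" resource group name is illustrative): RoomCaptureSession exposes its underlying ARSession through its arSession property, so after the scan you can run image detection on that same session without resetting tracking, which keeps both in one coordinate space.

import ARKit
import RoomPlan

// Hedged sketch: continue RoomPlan's own ARSession with image detection so the
// image anchors share the captured room's world origin.
func startImageTracking(after roomSession: RoomCaptureSession) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = ARReferenceImage.referenceImages(
        inGroupNamed: "RoomMarkers", bundle: nil) ?? []
    configuration.maximumNumberOfTrackedImages = 4

    // No .resetTracking option: the world origin is preserved, so the scanned
    // geometry and the new image anchors line up without extra alignment math.
    roomSession.arSession.run(configuration, options: [])
}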