Reply to Exporting Point Cloud as 3D PLY Model
Hi @JeffCloe, forgive me if this gets lengthy, but I'll try to stick to the key points on how I achieved success with this task. @gchiste's help pointed me in all of the right directions, and while I'm certainly an ARKit/point cloud novice, that help taught me quite a bit. To keep the reply as brief as possible, I will assume that you already have the Visualizing a Point Cloud using Scene Depth - https://developer.apple.com/documentation/arkit/visualizing_a_point_cloud_using_scene_depth project downloaded and accessible.

Firstly, you'll need some way to tell the app when you are done "scanning" the environment and want to save the point cloud to a file. For ease, I added a simple UIButton to my ViewController.swift's viewDidLoad method;

// Setup a save button
let button = UIButton(type: .system, primaryAction: UIAction(title: "Save", handler: { (action) in
    self.renderer.savePointsToFile()
}))
button.translatesAutoresizingMaskIntoConstraints = false
self.view.addSubview(button)
NSLayoutConstraint.activate([
    button.centerXAnchor.constraint(equalTo: self.view.centerXAnchor),
    button.centerYAnchor.constraint(equalTo: self.view.centerYAnchor)
])

Naturally, in your Renderer.swift, you'll need to add a new method to handle when the button is tapped. Additionally, you'll likely want to add a variable to your Renderer.swift file, something like var isSavingFile = false, to prevent the Save button from being tapped repeatedly while a save is in progress. More importantly, setting up your savePointsToFile() method in Renderer.swift is where the bulk of the work takes place (note that it can't be private, since the view controller's Save button calls it);

func savePointsToFile() {
    guard !self.isSavingFile else { return }
    self.isSavingFile = true

    // 1
    var fileToWrite = ""
    let headers = ["ply", "format ascii 1.0", "element vertex \(currentPointCount)",
                   "property float x", "property float y", "property float z",
                   "property uchar red", "property uchar green", "property uchar blue",
                   "property uchar alpha", "element face 0",
                   "property list uchar int vertex_indices", "end_header"]
    for header in headers {
        fileToWrite += header
        fileToWrite += "\r\n"
    }

    // 2
    for i in 0..<currentPointCount {

        // 3
        let point = particlesBuffer[i]
        let colors = point.color

        // 4
        let red = colors.x * 255.0
        let green = colors.y * 255.0
        let blue = colors.z * 255.0

        // 5
        let pvValue = "\(point.position.x) \(point.position.y) \(point.position.z) \(Int(red)) \(Int(green)) \(Int(blue)) 255"
        fileToWrite += pvValue
        fileToWrite += "\r\n"
    }

    // 6
    let paths = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
    let documentsDirectory = paths[0]
    let file = documentsDirectory.appendingPathComponent("ply_\(UUID().uuidString).ply")

    do {
        // 7
        try fileToWrite.write(to: file, atomically: true, encoding: String.Encoding.ascii)
        self.isSavingFile = false
    } catch {
        print("Failed to write PLY file", error)
        // Reset the flag so the user can try saving again after a failure
        self.isSavingFile = false
    }
}

Going through the method, I'll try to detail my approach, broken down by notation;

1) A .ply file, as I recently learned, requires a header detailing the file format, the number of vertices, and the format of each point's x, y, and z parameters, as well as its color parameters. In this case, since we are using only points and not a mesh, the header indicates there will be no faces in the model, and it ends by declaring that any face's vertex indices would be listed as integers. As a whole, it's worth mentioning that I am effectively just creating a "text" file, with each line (that isn't the header) being the details for one point, and then saving that file out with a .ply extension.

2) Using currentPointCount, which is already being calculated and incremented through the sample project, I iterate from 0 through the number of collected points.

3) Using the index, I access the relevant point through the particlesBuffer, which provides the point as a ParticleUniforms. This gives me access to the relevant point data, which includes the point's X, Y, and Z position, as well as its RGB color values.

4) I set up the colors as their own item, then multiply the red, green, and blue values by 255.0 to get the relevant RGB color. The color data is saved as a simd_float3, which stores each color value in an X, Y, or Z component (red is X, green is Y, blue is Z).

5) Creating a string with the data formatted as the .ply file expects allows it to be appended to the existing fileToWrite, which already contains our header. After some trial and error, I found this syntax created the best result (in this case, converting the RGB values from Floats to Ints, which truncates them to whole numbers). The last column indicates the alpha value of the point, which I am setting to 255, as each point should be fully visible. The pvValue string is appended to fileToWrite, as is a carriage return/line feed so the next point is added on the subsequent line.

6) Once all of the points have been added to fileToWrite, I set up a file path/file name for where I want to write the file.

7) Finally, the file is written to my desired destination. At this point, you could decide what you want to do with the file, whether that's to provide the user an option to save/share it, upload it somewhere, etc. I set isSavingFile back to false, and that's the setup. Once I grab my saved file (in my case, I present the user a UIActivityViewController to save/share the file) and preview it (I'm using MeshLab on my Mac for preview purposes), I see the rendered point cloud. I've also tried uploading to Sketchfab and it seems to work well.

A few notes;

- My end goal is to save the point cloud as a .usdz file, not necessarily a .ply. @gchiste pointed me in the right direction: create something like an SCNSphere for each point, color its material's diffuse contents with the relevant point cloud color, then set its X, Y, and Z position in the SceneKit view's coordinate space. I did manage to get a SceneKit representation of the point cloud working, but the app crashes when I try to save it out as a .usdz, with no particular indication as to why it's crashing. I filed feedback on this issue.

- The PLY file generated can be quite large, depending on how many points you've gathered. While I'm a novice at PLY and modeling, I believe that writing the PLY file in a different format, such as binary little endian or big endian encoding, could yield a smaller PLY file. I haven't figured that out yet, but I saw an app in the App Store that seems to gather the point clouds/generate a PLY file, and the resulting PLY is in little endian format and much smaller in file size. Just worth mentioning.

- This does not at all account for performance (there may be more efficient ways of doing this), nor does it provide the user any feedback that file writing is taking place, which can be time consuming. If you're planning to use this in a production app, those are just things to consider.
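In case it's useful, here's a rough sketch of how presenting that share sheet might look; treat it as illustrative only, as the function and parameter names are mine and not part of the sample project;

import UIKit

// Present a share sheet for the saved .ply file so the user can save or share it.
// Call this on the main thread from the view controller that owns the UI.
func sharePLYFile(at fileURL: URL, from viewController: UIViewController) {
    let activityController = UIActivityViewController(activityItems: [fileURL],
                                                      applicationActivities: nil)
    // On iPad, the activity controller is presented as a popover and needs an anchor.
    activityController.popoverPresentationController?.sourceView = viewController.view
    viewController.present(activityController, animated: true)
}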
Oct ’20
Reply to RealityKit default project centered in screen?
Would you be able to provide a sample of your code? I just created a new Xcode project, matching your noted configuration (File -> New Project -> Augmented Reality App, choosing RealityKit as the content technology, SwiftUI as the interface, and Swift as the language), and launched on an iPhone 11 Pro Max, as well as an 11" iPad Pro, 2nd Generation. On both devices, the AR view was full-screen and did not show any borders on the top or bottom. For posterity, this is the code Xcode automatically generated for my ContentView;

import SwiftUI
import RealityKit

struct ContentView : View {
    var body: some View {
        return ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {

        let arView = ARView(frame: .zero)

        // Load the "Box" scene from the "Experience" Reality File
        let boxAnchor = try! Experience.loadBox()

        // Add the box anchor to the scene
        arView.scene.anchors.append(boxAnchor)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

}

#if DEBUG
struct ContentView_Previews : PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
#endif
Oct ’20
Reply to SwiftUI -> Scenekit: tap gesture recognizer
There are a handful of changes that would need to be made to accommodate performing hit testing in the manner you've detailed. Here are a few thoughts based on your code sample;

- You are passing a variety of arguments (such as your scene, point of view, and options) to your SceneView UIViewRepresentable, but nowhere in your SceneView do those arguments seem to exist.

- You are also trying to pass the scene's camera node, even though you haven't added the scene to the SCNView yet. This will not work, since it won't be able to check for a camera node until you add the scene to the SCNView.

- A UIViewRepresentable is a struct, not a class. You won't be able to use a UITapGestureRecognizer on a struct, which means you'll need to create a Coordinator, which is a class, to be called when a user taps.

- Perhaps most importantly, your code for handleTap() is not using the existing SCNView, but is creating a new SCNView each time the user taps. Since this new SCNView is empty, with no scenes or nodes added to it, it will never return a successful hit test result, as there is nothing to perform a hit test on.

As an aside, you mentioned using an onTapGesture modifier in your SwiftUI code. The onTapGesture modifier does much of the heavy lifting for setting up a tap gesture, in much the way you've manually instantiated, configured, and handled your tap gesture in SceneView via your UITapGestureRecognizer. The onTapGesture in SwiftUI does all of that instantiation and configuration for you, only requiring you to provide what should actually happen when a user taps. While that would simplify your code, the issue with this is that onTapGesture does not provide you the location of the tap, which would be necessary to perform a hit test. As such, your approach of using a UITapGestureRecognizer makes the most sense.

To overcome these issues, I've tested the below code, which is running successfully and performing a successful hit test (the ship turns green when tapped, then returns to its original material accordingly).
As such, your ContentView would look more like this;

import SwiftUI
import SceneKit

struct ContentView: View {

    var scene = SCNScene(named: "art.scnassets/ship.scn")

    var body: some View {
        SceneView(
            scene: scene!,
            options: []
        )
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}

Subsequently, your SceneView UIViewRepresentable would look more like this;

import SwiftUI
import SceneKit

struct SceneView: UIViewRepresentable {

    var scene: SCNScene
    var options: [Any]

    var view = SCNView()

    func makeUIView(context: Context) -> SCNView {

        // Instantiate the SCNView and set up the scene
        view.scene = scene
        view.pointOfView = scene.rootNode.childNode(withName: "camera", recursively: true)
        view.allowsCameraControl = true

        // Add gesture recognizer
        let tapGesture = UITapGestureRecognizer(target: context.coordinator, action: #selector(context.coordinator.handleTap(_:)))
        view.addGestureRecognizer(tapGesture)

        return view
    }

    func updateUIView(_ view: SCNView, context: Context) {
        //
    }

    func makeCoordinator() -> Coordinator {
        Coordinator(view)
    }

    class Coordinator: NSObject {
        private let view: SCNView

        init(_ view: SCNView) {
            self.view = view
            super.init()
        }

        @objc func handleTap(_ gestureRecognize: UIGestureRecognizer) {
            // Check what nodes are tapped
            let p = gestureRecognize.location(in: view)
            let hitResults = view.hitTest(p, options: [:])

            // Check that we clicked on at least one object
            if hitResults.count > 0 {

                // Retrieve the first clicked object
                let result = hitResults[0]

                // Get the material for the selected geometry element
                let material = result.node.geometry!.materials[(result.geometryIndex)]

                // Highlight it
                SCNTransaction.begin()
                SCNTransaction.animationDuration = 0.5

                // On completion, unhighlight
                SCNTransaction.completionBlock = {
                    SCNTransaction.begin()
                    SCNTransaction.animationDuration = 0.5

                    material.emission.contents = UIColor.black

                    SCNTransaction.commit()
                }

                material.emission.contents = UIColor.green
                SCNTransaction.commit()
            }
        }
    }
}

Also worth noting, the SceneView - https://developer.apple.com/documentation/scenekit/sceneview component of SwiftUI can be configured for displaying a SCNScene, like so;

SceneView(scene: scene,
    options: .allowsCameraControl
)

With that said, the allowsHitTesting - https://developer.apple.com/documentation/swiftui/list/allowshittesting(_:) modifier is not specific to SceneKit, and isn't referring to a hit test as you know it in the SceneKit framework. Your approach of using a UIViewRepresentable gives you the control you need to achieve the task you've presented through your sample code.
Oct ’20
Reply to ARKit Front Facing camera for devices without truedepth camera.
Yes, the ARFaceTrackingConfiguration, which would enable AR experiences on the front-facing camera, is now supported (as of iOS 14/iPadOS 14) on any device that includes both a front-facing camera and an A12 Bionic chip or later, regardless of whether that front-facing camera is a TrueDepth camera. This is indicated on the ARKit Developer - https://developer.apple.com/augmented-reality/arkit/ page, under the Expanded Face Tracking Support section.
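As a small illustrative sketch of gating the experience on that support (the function and view names here are just examples, not from any sample project);

import ARKit

// Only run face tracking if the current device supports it.
func startFaceTracking(on sceneView: ARSCNView) {
    guard ARFaceTrackingConfiguration.isSupported else {
        print("Face tracking is not supported on this device.")
        return
    }
    let configuration = ARFaceTrackingConfiguration()
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}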
Oct ’20
Reply to TrueDepth Camera - Body Tracking
ARBodyTracking is only supported on the rear camera (at the time of this writing, in case anyone stumbles across this later, the current release is iOS/iPadOS 14, which indicates the back camera feed is required for an ARBodyTrackingConfiguration - https://developer.apple.com/documentation/arkit/arbodytrackingconfiguration). If your app supports iOS/iPadOS 14 or higher, you could consider leveraging the Vision framework, which is capable of detecting body poses and joints on both the front and rear cameras. The Detecting Human Body Poses in Images - https://developer.apple.com/documentation/vision/detecting_human_body_poses_in_images documentation details much about this technology, including a list of supported joints. While ARBodyTracking supports recognizing more joints and applying their transforms to a 3D model for a motion capture/puppet-like effect, depending on your use case, the Vision-based body pose recognition could suffice for your needs.
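For illustration, a minimal sketch of running Vision's body pose request on a single image might look like this (the function name and image source are just placeholders);

import Vision

// Run a human body pose request on a CGImage (e.g., a frame from the camera feed).
func detectBodyPose(in image: CGImage) {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    do {
        try handler.perform([request])
        guard let observations = request.results as? [VNHumanBodyPoseObservation] else { return }
        for observation in observations {
            // All detected joints, keyed by joint name, in normalized image coordinates.
            let joints = try observation.recognizedPoints(.all)
            if let leftWrist = joints[.leftWrist], leftWrist.confidence > 0.3 {
                print("Left wrist at \(leftWrist.location)")
            }
        }
    } catch {
        print("Body pose detection failed: \(error)")
    }
}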
Oct ’20
Reply to Looking for a VIEW IN SPACE ARKIT Application or developer
Admittedly, these forums will likely not yield many suggestions for specific app recommendations (they are much more geared towards developers and working with the underlying technologies). That said, there are two things that stand out in my mind that might satisfy your goal.

One thing to consider is that iOS/iPadOS already includes technology to view 3D/AR objects in "space" (rather, the "real world"), by way of the AR Quick Look technology. This technology allows users to open 3D/AR objects in the .usdz or .reality file format without the need for an app at all (those .usdz or .reality files could be provided to a user through AirDrop, Mail, or Messages, or viewed directly from a link to a file in Safari). The Quick Look Gallery - https://developer.apple.com/augmented-reality/quick-look/ has some great samples of files you can view on your iOS/iPadOS device, to get an idea of how easy it is to view AR content this way. AR Quick Look leverages many underlying AR technologies (such as object occlusion, person occlusion, and realistic lighting) that make the content appear truly present in the real-world space.

Adding to that, since you mentioned art in your post, you may want users to be able to view art as it would appear on a wall in their home; this would mean some technology would need to detect a vertical surface, then anchor the 3D/AR art object to that surface, again making it appear as though it is truly there. AR Quick Look has the capability to do this when working with .reality files. I would suggest having a look at Reality Composer, which you can learn more about in the Creating 3D Content with Reality Composer - https://developer.apple.com/documentation/realitykit/creating_3d_content_with_reality_composer documentation. Reality Composer would allow you to create an AR experience that anchors your 3D/AR content to a vertical surface, would allow you to import your 3D/AR art object into Reality Composer to preview it in the real world, and would let you export your experience as a .reality file that could be provided to a user, which they could view on an iOS/iPadOS device without an app. Reality Composer is available from Apple in the App Store for iOS/iPadOS, and is included with Xcode for macOS.
Oct ’20
Reply to ARKit Object Detection
You are correct that the Scanning and Detecting 3D Objects - https://developer.apple.com/documentation/arkit/scanning_and_detecting_3d_objects page offers sample code for a project that, once loaded on your iOS/iPadOS device, can scan and capture objects. Once scanned, the objects are saved to your desired destination in the .arobject file format. The saved .arobject file can then be added to your Xcode project, and you can configure your ARSession to load the .arobject (or multiple AR objects, if you plan to have your app detect more than one object), which will serve as a reference for object detection. That's the high-level overview; the Scanning and Detecting 3D Objects page has great documentation on how to use the sample object-scanning app once it's loaded on your device, and how to add the .arobject into your project and set up the code to make use of the file (there's also a brief sketch of that configuration below).

Alternatively, Reality Composer (available as a download from the App Store for iOS/iPadOS) includes this same scanning functionality; the scanned object can then serve as the anchor to show 3D content in AR once that object is seen by your user. I would venture to say that the sample project you referenced in your post is a bit more manual, but gives you more control over the entire workflow, whereas Reality Composer simplifies the process. Along those lines, if you opt to use the sample project you referenced, you will be adding your 3D content directly into your Xcode project, and you will be responsible for handling the code that adds that 3D content to your scene once your object is recognized. With Reality Composer, you will be adding your 3D content into Reality Composer, which gives you the opportunity to more visually preview what the 3D content looks like in relation to the scanned object. You can learn more about working with Reality Composer in the Creating 3D Content with Reality Composer - https://developer.apple.com/documentation/realitykit/creating_3d_content_with_reality_composer documentation.

If your question was, more directly, asking how to load the code Apple provides onto a device, you will want to download the sample code, launch the .xcodeproj to open Xcode, set the proper signing settings so the app can be signed with your Apple Developer account, and run it on a device. Again, a high-level overview, but I would have a look at the Running Your App in the Simulator or on a Device - https://developer.apple.com/documentation/xcode/running_your_app_in_the_simulator_or_on_a_device documentation (in your case, you'd be running the project on a device, not the Simulator, which is detailed in that documentation; the Simulator does not have support for ARKit).
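As a brief, hypothetical sketch of that configuration step (the resource group name "ScannedObjects" is just an example);

import ARKit

// Load the scanned reference objects from an AR Resource Group in the asset catalog
// and enable object detection on the world tracking configuration.
func runObjectDetection(on sceneView: ARSCNView) {
    guard let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "ScannedObjects",
                                                                    bundle: nil) else {
        fatalError("Missing expected asset catalog resource group.")
    }
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionObjects = referenceObjects
    sceneView.session.run(configuration)

    // When a scanned object is recognized, the session adds an ARObjectAnchor,
    // which you can respond to in ARSCNViewDelegate's renderer(_:didAdd:for:) method.
}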
Oct ’20
Reply to WidgetKit / SwiftUI Text Timer Centering
I asked what I believe to be a similar question - https://developer.apple.com/forums/thread/662642 yesterday, and a helpful community member indicated that adding the .multilineTextAlignment(.center) modifier was successful in centering the text. I tested this inside of an HStack (as well as an HStack embedded in a VStack), and the timer text did appear centered. I am only seeing this issue within a Widget; I don't see similar behavior in a traditional SwiftUI view.
Oct ’20
Reply to To read sensor data from AirPods Pro
@haptic - The CMHeadphoneMotionManager only functions with AirPods Pro (this was mentioned in a WWDC session which I'm having trouble finding, but I do recall it). I am not from Apple and cannot comment as to whether or not the functionality could/would/should exist on other AirPods, but I would recommend starting with the isDeviceMotionAvailable - https://developer.apple.com/documentation/coremotion/cmheadphonemotionmanager/3585093-isdevicemotionavailable check in your app before allowing motion tracking to begin.
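As a rough sketch of what that check might look like in practice (the names here are just for illustration);

import CoreMotion

// Keep a reference to the manager for as long as updates are needed.
let headphoneMotionManager = CMHeadphoneMotionManager()

func startHeadphoneMotionUpdates() {
    guard headphoneMotionManager.isDeviceMotionAvailable else {
        print("Headphone motion is not available with the connected headphones/device.")
        return
    }
    headphoneMotionManager.startDeviceMotionUpdates(to: .main) { motion, error in
        guard let motion = motion else { return }
        // The attitude reflects the wearer's head orientation (roll, pitch, yaw).
        print("Pitch: \(motion.attitude.pitch), Yaw: \(motion.attitude.yaw)")
    }
}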
Oct ’20
Reply to SwiftUI Text Alignment Issue with Date and WidgetKit
Thank you for your reply, @Jax. That was a great piece of advice. I will file feedback on this, as I do not believe the Text alignment in WidgetKit is behaving as expected. That said, your suggestion worked properly. Subsequently, I found I get the ideal result even without setting the HStack's alignment and spacing;

HStack {
    Text(Date().addingTimeInterval(600), style: .offset)
        .multilineTextAlignment(.center)
}

You may have those set for your particular design needs, but I wanted to mention it. This is great, thanks!
Oct ’20
Reply to Dynamic Model
There are a number of possible ways to bring a model into Xcode for use with an ARKit app. Depending on the format your model is in, alongside which content technology you choose to use for your app, you will find different methodologies for the best experience.

In most cases, models are brought into Xcode as .usdz or .dae objects. These can be imported directly into a project by choosing File -> Add Files To [Project], then choosing your desired model and indicating which target you plan to add the model to. Assuming you plan to bundle your model with your project, you can then access it by its URL in the app's bundle. For example, if your model is "person.usdz", you would locate it like so;

let modelURL = Bundle.main.url(forResource: "person", withExtension: "usdz")!

The same methodology would apply if you were to download your model from a server. Using a networking session, like a URLSession, you could download your model locally to the device, locate the URL of the download, and use that to load the model.

If using SceneKit as your content technology, you can create a SCNScene for your model, like so;

let scene = try! SCNScene(url: modelURL, options: [.checkConsistency: true])

Subsequently, you could also load your model as a SCNNode, like so;

let referenceNode = SCNReferenceNode(url: modelURL)
referenceNode?.load()

If using RealityKit as your content technology, the Loading Entities from a File - https://developer.apple.com/documentation/realitykit/entity/stored_entities/loading_entities_from_a_file documentation provides guidance on how to load your model, again, either from the app's bundle or, if necessary, asynchronously, so as to ensure a smooth experience for your user. RealityKit does expect a .usdz model, whereas SceneKit can load a .usdz or .dae. Subsequently, the Model I/O - https://developer.apple.com/documentation/modelio framework has additional guidance on loading models from other file formats in your app.
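As a quick, hypothetical RealityKit sketch (the file name and anchor choice are just examples);

import RealityKit

// Load a bundled .usdz synchronously and place it on a horizontal plane anchor.
func loadModel(into arView: ARView) {
    if let entity = try? Entity.load(named: "person") {
        let anchor = AnchorEntity(plane: .horizontal)
        anchor.addChild(entity)
        arView.scene.anchors.append(anchor)
    }
}

For larger models, the asynchronous loading approach covered in that documentation avoids blocking the main thread while the file loads.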
Sep ’20
Reply to Testing Local Experience for App Clip Not Working
@rynning That did the trick. Quite frankly, I did not know there was even a built-in QR code scanner in Control Center. Presumably, users can use either the Camera app or the Control Center QR code scanner (I'd imagine the Camera app must verify the Associated Domains file, which could explain why it didn't work via the Camera app, but did using your method). Thank you again!
Sep ’20
Reply to How can I get current App clip size?
I am going to point you to the Creating an App Clip with Xcode - https://developer.apple.com/documentation/app_clips/creating_an_app_clip_with_xcode documentation, specifically, the Keep Your App Clip Small in Size section. This section has a great tip for exporting your App Clip from Xcode and seeing the App Clip's uncompressed size. Succinctly, you can export your App Clip by "archiv[ing] the app clip's corresponding app, open the Organizer window, select the archive, and click Distribute App." Then, you can "[e]xport the app clip as an Ad Hoc or Development build with App Thinning and Rebuild from Bitcode enabled." Upon exporting, in the output folder you chose, you will see an App Thinning Size Report.txt file. This file will include the uncompressed size of the App Clip, which you can use as a guide for trying to ensure your App Clip fits under 10MB.
Sep ’20
Reply to Creating a texture for a SCNGeometry using the ARKit camera?
Hi @gchiste, Thank you for your reply! I will file an enhancement request using Feedback Assistant for this topic, requesting an API that would provide such functionality. Additionally, thank you for your guidance regarding "texture mapping for real world objects" as a starting point to learn more about this topic. This is exactly what I was looking for, in terms of understanding the overall topic ahead of me and how to learn about the potential challenges and solutions. Thank you again!
Sep ’20
Reply to Access ARKit from Safari and WebXR
Just a fellow developer and AR enthusiast here, but I would encourage you to submit feedback for Safari - https://www.apple.com/feedback/safari.html directly to Apple. I've been very happy using the USDZ file format with Safari on iOS, iPadOS, and macOS, and while I am not completely familiar with WebXR, I definitely think it's worth letting Apple know that this is something you'd like to see supported in Safari.
Sep ’20