Your code is incredibly well thought-out, creative, and a great example of how to build procedural geometry in RealityKit. Really great work! I'm wondering why it is necessary to build the floor geometry from the individual sizes and vertices of the components of the RoomPlan scan. More specifically, if you build the RoomPlan sample project, the RoomCaptureView experience renders what appears to be a rectangle matching (or slightly extending past) the bounds of the scan as a floor; the shape never changes from a rectangle, nor is it customized to a uniquely shaped room. Based on this, and to avoid having to worry about rotations, it might be easier to build a rectangle as the floor, using the bounds of the RoomPlan model, and add it as a child of the loaded Entity. You could do something like;
let entityBounds = myRoomPlanEntity.visualBounds(relativeTo: nil) // Get the bounds of the RoomPlan scan Entity.
let width = entityBounds.extents.x + 0.025 // Slightly extend the width of the "floor" past the model, adjust to your preference.
let height = Float(0.002) // Set the "height" of the floor, or its thickness, to your preference.
let depth = entityBounds.extents.z + 0.0125 // Set the length/depth of the floor slightly past the model, adjust to your preference.
let boxResource = MeshResource.generateBox(size: SIMD3<Float>(width, height, depth))
let material = SimpleMaterial(color: .white, roughness: 0, isMetallic: true)
let floorEntity = ModelEntity(mesh: boxResource, materials: [material])
let yCenter = (entityBounds.center.y * 100) - 1.0 // Offset the floor slightly from the model, adjust to your preference.
floorEntity.scale = [100.0, 100.0, 100.0] // Scale the model by a factor of 100, as noted in the [release notes](https://developer.apple.com/documentation/ios-ipados-release-notes/ios-ipados-16-release-notes) for working with RoomPlan entities.
floorEntity.position = [entityBounds.center.x * 100, yCenter, entityBounds.center.z * 100]
myRoomPlanEntity.addChild(floorEntity)
Not sure if this works for your needs, but might be a way to avoid having to worry about the rotation offset when building a custom geometry for the floor.
It appears the sampleOverlap property has been removed from PhotogrammetrySession.Configuration. If your machine is updated to Xcode 13 Beta 2 (or likely higher) and macOS Monterey Beta 2 (or likely higher), you should be able to adapt the Creating a Photogrammetry Command-Line App project to adopt the changes and run successfully.
The thread referenced by @azi has detail on how to adapt the sample project to handle the new changes.
Thanks for all of your comments. I’m sorry if the specific line numbers did not align with the sample project. If anyone is still having difficulty adapting the sample project, I hope the linked code might help (it exceeded the character count to post here).
To note, it is imperative that your machine is running macOS Monterey Beta 2. As far as I can tell, the Photogrammetry API changes are included only in Monterey Beta 2 (and likely above). Even though Xcode 13 Beta 2 (and likely above) is aware of those changes, they are only functional on macOS Monterey Beta 2 (and likely above). This is presumably why @rickrl is facing the error detailed; PhotogrammetrySession.Outputs does not exist in macOS Monterey Beta 1.
I've posted my updated main.swift on this Gist. I am not sure if Apple allows linking to outside sources, but hoping that helps. I've tried to comment the code to indicate where changes took place and what those changes were.
I was able to build and run the project using Xcode 13, Beta 2 (13A5155e), after installing macOS Monterey Beta 2 (21A5268h). There were some modifications needed to the Creating a Photogrammetry Command-Line App project, as it appears the PhotogrammetrySession API has had some changes between macOS 12 Beta 1 and Beta 2. I made the following changes;
I removed lines 35-39 in main.swift. It appears that the sampleOverlap property of PhotogrammetrySession.Configuration has been removed. Removing lines 35-39 resolves the HelloPhotogrammetry struct conformance errors.
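As a rough sketch, once sampleOverlap is gone, the configuration construction inside the HelloPhotogrammetry struct ends up looking something like the following (your copy of the sample may be structured differently; sampleOrdering and featureSensitivity are the remaining optional configuration values exposed by the command-line arguments);
private func makeConfigurationFromArguments() -> PhotogrammetrySession.Configuration {
    var configuration = PhotogrammetrySession.Configuration()
    // sampleOverlap no longer exists, so only the remaining optional arguments are applied.
    sampleOrdering.map { configuration.sampleOrdering = $0 }
    featureSensitivity.map { configuration.featureSensitivity = $0 }
    return configuration
}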
It appears the PhotogrammetrySession.output property has been removed, replacing the built-in Combine publisher with an AsyncSequence (session.outputs). This was alluded to in the WWDC21-10076 session, so I replaced the Combine publisher (lines 69-103) in main.swift with the following;
Task.init(priority: .default) {
    do {
        for try await output in session.outputs {
            switch output {
            case .requestProgress(let request, fractionComplete: let fraction):
                handleRequestProgress(request: request, fractionComplete: fraction)
            case .requestComplete(let request, let result):
                handleRequestComplete(request: request, result: result)
            case .requestError(let request, let error):
                print("Request \(String(describing: request)) had an error: \(String(describing: error))")
            case .processingComplete:
                // All requests are done, so you can safely exit.
                print("Processing is complete!")
                Foundation.exit(0)
            case .inputComplete: // Data ingestion has finished.
                print("Data ingestion is complete. Beginning processing...")
            case .invalidSample(let id, let reason):
                print("Invalid Sample! id=\(id) reason=\"\(reason)\"")
            case .skippedSample(let id):
                print("Sample id=\(id) was skipped by processing.")
            case .automaticDownsampling:
                print("Automatic downsampling was applied!")
            default:
                print("Output: unhandled message: \(String(describing: output))")
            }
        }
    }
}
Because of this change, I also removed the subscriptions property from withExtendedLifetime (just below the session output code), which changes the call from withExtendedLifetime((session, subscriptions)) to withExtendedLifetime(session).
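As a rough sketch, that portion of main.swift then reads something like the following (assuming a request value built from the command-line arguments, as in the original sample);
withExtendedLifetime(session) {
    do {
        // Enqueue the request and spin the main run loop; the Task above
        // calls Foundation.exit(0) once .processingComplete arrives.
        try session.process(requests: [request])
        RunLoop.main.run()
    } catch {
        print("Process got error: \(String(describing: error))")
        Foundation.exit(1)
    }
}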
TLDR; Remove the sampleOverlap property from main.swift and change session.output, which was a Combine publisher, to session.outputs, then iterate over the resulting AsyncSequence, as demoed in the WWDC21-10076 session. If you have Xcode 13 Beta 2 and macOS Monterey Beta 2 installed, this should now run as expected.
Are videos with transparency now supported as textures in RealityKit? I do not recall seeing this notated anywhere in the recent WWDC sessions, but perhaps I overlooked this. Thanks!
Adding a comment that I am facing the same issue on a 2013 Intel Mac Pro (which has 64GB of RAM and an AMD FirePro D700 with 6GB of VRAM). Per the comment above, this machine seems to satisfy the requirements from WWDC 2021 session 10076.
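If it helps anyone verify a particular machine, PhotogrammetrySession also exposes a support flag you can query at runtime before constructing a session. A minimal sketch;
import RealityKit

// Indicates whether this Mac meets Object Capture's hardware requirements,
// without needing to attempt to create a full session.
if PhotogrammetrySession.isSupported {
    print("Object Capture is supported on this Mac.")
} else {
    print("Object Capture is not supported on this hardware.")
}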
My personal understanding of the location-based limitation is that ARKit is using mapping data to localize the position of the user, and that mapping data has only been prepared for a certain number of locations. By using what I understand to be the "Look Around" images, which you may be familiar with from Apple Maps, ARKit can locate a user not just by using GPS/location services, but can couple that with the iOS device's sensors (for direction/altitude) and image-based mapping for precise location and distance from a given point. In places that are very dense (great examples being New York City or San Francisco), using image-based mapping data allows for exact positioning and creating AR experiences that interact with the environment.
I think a great example of this is seen in the WWDC 2020 session on ARKit - https://developer.apple.com/wwdc20/10611. The engineer uses the Ferry Building in San Francisco as a point of interest and builds an AR experience using the GPS coordinates of the Ferry Building, as well as the altitude. Without localizing the user's position, it might be tough to pinpoint exactly how far from the Ferry Building the user is; San Francisco is a dense city, and the user could be ten feet from the building, twenty feet from the building, or across the street from the building. By effectively matching what the ARKit camera sees, the localization helps to ensure the experience is consistent across all users.
In short; it seems the list of available cities is growing, but the feature is not available everywhere yet.
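If it helps, ARKit also exposes a runtime check for whether geo tracking (the image-based localization described above) is available at the device's current location. A minimal sketch;
import ARKit

// Asks ARKit whether localization imagery exists for the device's current location.
ARGeoTrackingConfiguration.checkAvailability { isAvailable, error in
    if isAvailable {
        // Safe to run an ARGeoTrackingConfiguration-based session here.
        print("Geo tracking is available at this location.")
    } else {
        // Fall back to a standard world-tracking experience.
        print("Geo tracking unavailable: \(String(describing: error))")
    }
}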
Yes, this sample must be run on a physical device (the project should run on any physical device, but will throw a runtime error almost immediately if that device does not have a supported LiDAR scanner). The Simulator cannot run the project, as it provides neither the necessary camera functionality nor LiDAR/depth functionality.
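If you want to fail more gracefully than the sample's runtime error, you can check for scene-depth support up front. A minimal sketch;
import ARKit

// Devices without the LiDAR scanner do not support the .sceneDepth frame semantic.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    // Safe to run the sample's AR session on this device.
} else {
    // Present an alert or an alternative experience instead of crashing.
}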
Hi @HeoJin,
I certainly could not fancy myself an expert in Metal or working with LiDAR/point clouds in any way, but the help I received in this thread is what got me headed in the right direction toward understanding how to work with the data being gathered and rendered by the LiDAR scanner/Metal.
My suggestion is to begin with the Visualizing a Point Cloud Using Scene Depth - https://developer.apple.com/documentation/arkit/visualizing_a_point_cloud_using_scene_depth sample project that Apple provides, and have a look at the comments in this thread to gather an understanding of where the points are being saved. Namely, this code from @gchiste;
commandBuffer.addCompletedHandler { [self] _ in
    print(particlesBuffer[9].position) // Prints the 10th particle's position
}
If you have a look in Renderer.swift in that referenced sample project, you will find that particlesBuffer is already a variable; it is a buffer containing an array of ParticleUniforms (which holds the position, or rather, the coordinate of each point, the color values of each point, as well as the confidence of each point and an index).
What I ended up doing, per my comment to @JeffCloe, is to iterate over the particlesBuffer "array", using the currentPointCount, which is another variable you will find in Renderer.swift. As an example;
for i in 0..<currentPointCount {
    let point = particlesBuffer[i]
}
Doing that gives you access to each gathered point from the "scan" of the environment. That said, I still have a ways to go in learning more on this topic myself, including improving efficiency, but exploring that particlesBuffer really helped me understand what's happening here.
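For example, building on the loop above, you could pull just the high-confidence positions into a plain Swift array. This is only a sketch, assuming the sample's ParticleUniforms layout of position, color, and confidence;
// Collect only the points the depth pipeline marked as high confidence.
var highConfidencePoints: [SIMD3<Float>] = []
for i in 0..<currentPointCount {
    let point = particlesBuffer[i]
    if point.confidence >= 2 {   // 2 corresponds to ARConfidenceLevel.high
        highConfidencePoints.append(point.position)
    }
}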
Hi @mjbmjbjhjghkj,
Without knowing more about the inner workings of the app you referenced, it would be tough to answer how they are building their scene reconstruction. Keep in mind that there are multiple ways to use the LiDAR scanner on iPad/iPhone to recreate the scanned environment and export it as a 3D model (the two common ways, as provided by Apple's sample code, are creating "point cloud" representations of the environment, and reconstructing a scene using the ARMeshGeometry - https://developer.apple.com/documentation/arkit/armeshgeometry type, as noted in the Visualizing a Point Cloud Using Scene Depth - https://developer.apple.com/documentation/arkit/visualizing_a_point_cloud_using_scene_depth and Visualizing and Interacting with a Reconstructed Scene - https://developer.apple.com/documentation/arkit/world_tracking/visualizing_and_interacting_with_a_reconstructed_scene sample projects, respectively).
From a quick glance, it looks more as though the app you referenced builds a 3D model of the environment by capturing the geometry of the world and converting that geometry (which would be gathered as ARMeshGeometry) to something like SCNGeometry, then using that SCNGeometry to create a SCNNode, adding each SCNNode to a SCNScene to build a "growing" model, and finally exporting it as a 3D object. How the app you referenced creates the environmental texture applied to that 3D model, so it appears like a "real" representation, is not something I am familiar with, though the thread you referenced here on the Developer Forums has some extensive detail and discussion about how to convert ARMeshGeometry to SCNGeometry for this purpose.
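For reference, the general shape of that ARMeshGeometry-to-SCNGeometry conversion discussed in the linked thread looks roughly like the sketch below. This is only a sketch, assuming you are receiving ARMeshAnchor updates from an ARSession with scene reconstruction enabled, and it leaves the texturing problem entirely aside;
import ARKit
import SceneKit

// A rough sketch, not the referenced app's actual implementation: wrap the
// ARMeshAnchor's Metal buffers in SceneKit geometry sources/elements.
func makeSCNGeometry(from meshAnchor: ARMeshAnchor) -> SCNGeometry {
    let mesh = meshAnchor.geometry

    // Vertex positions come straight from the ARGeometrySource's MTLBuffer.
    let vertices = mesh.vertices
    let vertexSource = SCNGeometrySource(buffer: vertices.buffer,
                                         vertexFormat: vertices.format,
                                         semantic: .vertex,
                                         vertexCount: vertices.count,
                                         dataOffset: vertices.offset,
                                         dataStride: vertices.stride)

    // Triangle indices come from the ARGeometryElement's MTLBuffer.
    let faces = mesh.faces
    let faceData = Data(bytes: faces.buffer.contents(), count: faces.buffer.length)
    let element = SCNGeometryElement(data: faceData,
                                     primitiveType: .triangles,
                                     primitiveCount: faces.count,
                                     bytesPerIndex: faces.bytesPerIndex)

    return SCNGeometry(sources: [vertexSource], elements: [element])
}

// Each converted geometry would then become an SCNNode, positioned using the
// anchor's transform, and added to an SCNScene's root node to build up the model.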
Hi @gchiste,
Thank you for your reply! I am not specifying disableGroundingShadows in my render options, though your comment pointed me in a few directions that are uncovering the root of my issue. I ended up downloading a sample .usdz from Apple's Quick Look gallery page, adding it to my app, and found that that model, too, had no shadows. As I looked through my code, I found that I was overcomplicating how I was creating my AnchorEntity, and simplified it to an anchor with horizontal plane anchoring, which brought in shadows for the downloaded model.
Moreover, I can now see that my desired .usdz model does, in fact, have ground shadows. Albeit, they are lighter than I would have expected, and I am analyzing the file in a 3D modeling program to better understand the root cause of the issue. As such, I will follow up with a Technical Support Incident if the issue persists, but I believe your clue regarding the model's anchor and the way I was creating the original AnchorEntity explains the anomaly. Thank you!
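In case it helps others reading later, a minimal sketch of that kind of horizontal plane anchoring looks like the following (arView and the model name are placeholders here, not my actual project code);
import RealityKit

// Horizontal plane anchoring, which restored the automatic grounding shadows.
let anchor = AnchorEntity(plane: .horizontal)
if let model = try? Entity.loadModel(named: "MyModel") { // hypothetical bundled .usdz
    anchor.addChild(model)
}
arView.scene.addAnchor(anchor)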
Hi @cyber_denis,
No, that sample you are providing is not actually downloading a file. When you are trying to load an Entity, as you indicate in your line
let entity = try? Entity.load(contentsOf: fileUrl)
the loading of the file is expected to come from local storage on your device (either by way of an asset being bundled with the app when you compile it, or by way of an asset you have already downloaded and saved locally).
For what you are trying to achieve, you will need to use some methodology of downloading the file from the web, saving it to a location that your app has access to (such as the app's Documents directory), and then referencing that path when trying to load your Entity or AR object. The first example in Apple's Downloading Files from Websites - https://developer.apple.com/documentation/foundation/url_loading_system/downloading_files_from_websites will get you there.
For example, here is an updated version of your ViewController.swift. See notations below;
import UIKit
import ARKit
import QuickLook

class ViewController: UIViewController, QLPreviewControllerDataSource {

    // 1
    var modelURL: URL?

    override func viewDidLoad() {
        super.viewDidLoad()

        // 2
        self.downloadSampleUSDZ()
    }

    @IBAction func startDecoratingButtonPressed(_ sender: Any) {

        // 3
        guard modelURL != nil else { return }
        let previewController = QLPreviewController()
        previewController.dataSource = self
        present(previewController, animated: true, completion: nil)
    }

    // 4
    func downloadSampleUSDZ() {
        let url = URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/drummertoy/toy_drummer.usdz")!
        let downloadTask = URLSession.shared.downloadTask(with: url) { urlOrNil, responseOrNil, errorOrNil in
            guard let fileURL = urlOrNil else { return }
            do {
                let documentsURL = try FileManager.default.url(for: .documentDirectory,
                                                               in: .userDomainMask,
                                                               appropriateFor: nil,
                                                               create: false)
                let savedURL = documentsURL.appendingPathComponent(url.lastPathComponent)
                try FileManager.default.moveItem(at: fileURL, to: savedURL)
                self.modelURL = savedURL
            } catch {
                print("file error: \(error)")
            }
        }
        downloadTask.resume()
    }

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { return 1 }

    func previewController(_ controller: QLPreviewController, previewItemAt index: Int) -> QLPreviewItem {
        // 5
        let previewItem = ARQuickLookPreviewItem(fileAt: self.modelURL!)
        previewItem.canonicalWebPageURL = URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/drummertoy/")
        previewItem.allowsContentScaling = false
        return previewItem
    }
}
1) Add a new variable to your ViewController.swift file, which will hold the local URL of the file, once it's been downloaded and saved to storage.
2) Call a new function (discussed below), to download the file from a web URL.
3) Make sure that we've set the modelURL, thereby confirming the file was downloaded successfully. Otherwise, do not do anything, as we have nothing to preview.
4) Again, this is adapted from the Downloading Files from Websites - https://developer.apple.com/documentation/foundation/url_loading_system/downloading_files_from_websites documentation, but you are establishing the URL of the file you want to download, and configuring a task to download that file. Once the file is downloaded, we save it to the app's Documents directory, using the original file name, and set modelURL to be the path to the local URL of the downloaded file.
5) We set the URL of the ARQuickLookPreviewItem to the local URL of the file, then preview.
This is a very abstract example, which does not take into account things like error handling, keeping the user informed of what is happening (otherwise they are just tapping the button repeatedly until the file is ready), and a litany of other considerations to deliver an ideal user experience (including adding the ability to dynamically provide the web URL of the file(s) you wish to download, as this example is hard-coding the web URL to "toy_drummer.usdz"). However, this code does compile when I test on my device, and I can preview the AR model after it downloads from the web.
Hi @cyber_dennis,
I would venture a guess that the toy_drummer.usdz file isn't actually saved in your bundle. I have created a project, copied your code exactly, added a button to my storyboard, attached it to the startDecoratingButtonPressed() method, downloaded the toy_drummer.usdz from Apple's website, dragged it into my application's files in Xcode, and ran the app. I was able to tap the button and successfully view the AR model.
I may be misreading your question, but it sounded like you are expecting to be able to load the .usdz model directly from a website URL, which is not going to work in this sense. If you have a .usdz model saved somewhere on the web (such as how the toy_drummer.usdz is really saved at https://developer.apple.com/augmented-reality/quick-look/models/drummertoy/toy_drummer.usdz), you would need to use something like a URLSession to download the model to local storage, then replace fileUrl in your code with the path to the downloaded file. Alternatively, if you would prefer to bundle your .usdz models with your app, you could download the .usdz models from the web, drag them into Xcode, and replace fileUrl with the relevant file names. For the former, downloading from the web, see Downloading Files from Websites - https://developer.apple.com/documentation/foundation/url_loading_system/downloading_files_from_websites, which should point you in the right direction.
Hi @Ricards97,
You are indeed correct that the savePointsToFile() method should not be private. That was an error on my part when posting; great catch and my apologies.
With regards to your question as to why the .ply file is not saving; the likely answer is that the file is saving, but to the Documents directory of the app's container (which is used for internal storage and is not something that you have direct access to via the UI). While not totally specific to the point cloud example, there are a few approaches you can take to access the saved .ply;
Saving via iTunes File Sharing/Files App
You can enable both "iTunes File Sharing" and "Supports opening file in place" in your app's Info.plist file. Doing so would serve a two-fold purpose; you would be able to connect your iOS device to your Mac and access your device via Finder. In the "Files" tab, you could navigate to your point cloud app, and you should see the saved .ply files (which could then be copied to another location on your Mac for use). Subsequently, this will also make the .ply file accessible via the Files app on your iOS device, which should make it easier to access the file and use it for your desired purpose.
To do so, add the following entries to your Info.plist, setting the value of each to true;
UIFileSharingEnabled
LSSupportsOpeningDocumentsInPlace
Saving via delegate method
You could also implement a mechanism in your app to call a method once your file is done writing, then present a UI element within your app that would provide you to do something with the saved .ply file (such as share it via iMessage, e-mail it, save it to Files, etc.).
I'd suggest having a look at the documentation on how delegates work, but for a quick approach, try the following. In Renderer.swift, at the end of the file (outside of the Renderer class), add a new protocol;
protocol RendererDelegate: AnyObject {
    func didFinishSaving(path: URL)
}
Also in Renderer.swift, within the Renderer class, where variables are being defined, add a new variable like so;
weak var delegate: RendererDelegate?
Lastly, still in Renderer.swift, you will want to add a line near the end of the savePointsToFile() method, just after the self.isSavingFile = false line. That line would call the delegate and provide the .ply file's URL. Like so;
delegate?.didFinishSaving(path: file)
Over in your ViewController.swift file, you'll need to do a few things; conform ViewController to the RendererDelegate class, add a function to handle when the delegate's didFinishSaving method is called, and call a UI component to allow you to share the .ply file.
At the top of ViewController.swift, where the ViewController class is defined, you will see that the class already conforms to UIViewController and ARSessionDelegate. Add a comma after ARSessionDelegate and conform to RendererDelegate, so that line now looks as so;
final class ViewController: UIViewController, ARSessionDelegate, RendererDelegate {
...
Somewhere within the ViewController class, add a new method to handle when the delegate is called. You likely are receiving an error that ViewController does not conform to RendererDelegate at this point. This step should take care of that error. The method should appear like so;
func didFinishSaving(path: URL) {
    //
}
In the viewDidLoad() method, you will need to inform the Renderer that ViewController is the delegate. Currently, Renderer is being instantiated like so;
renderer = Renderer(session: session, metalDevice: device, renderDestination: view)
Just below this line, add a new line;
renderer.delegate = self
Lastly, in your didFinishSaving() method that we created in ViewController, you can add a UIActivityViewController which will make the method appear like so;
func didFinishSaving(path: URL) {
    DispatchQueue.main.async {
        let ac = UIActivityViewController(activityItems: [path], applicationActivities: nil)
        if let popover = ac.popoverPresentationController {
            popover.sourceView = self.confidenceControl
        }
        self.present(ac, animated: true)
    }
}
This was a lengthy and very verbose explanation of how to add a delegate callback and present a UIActivityViewController to allow you to work with the saved .ply file. To note, if you are running your app on iPad, your UIActivityViewController needs a "source" to present from, by way of a popover. In my example, I am using the confidenceControl element that already exists in the sample app, but if you've modified the UI of that sample app, you may need to use another popover source, such as a button or other UI element.
Hi @gchiste,
Thanks very much for your reply! You are very right; there are many transforms happening and it is certainly a specific use case. I've never used a Technical Support Incident, but I appreciate you commenting on this, as I will certainly take that path and gather the best details possible to provide for help. This seems like a great scenario to make use of such a path for further learning and resolution.
Have a great day!