Will do! I have submitted a Feedback/Bug report regarding this issue. Thanks!
I, too, have this question. I would like to develop an application that will offer anchors within the mapped regions, but I do not live within one of those mapped regions for testing. It would be great to be able to use ARGeoTracking for debugging purposes without localizing, just for the sake of testing code and ensuring it is running normally.
@Deanpankhurst, you presumably could use CoreLocation and ARKit on their own to provide functionality similar to what ARGeoTracking offers: set up a sort of geofence to determine when a user is within a specific geolocation, based on latitude/longitude coordinates and a radius around them, then use an ARWorldTrackingConfiguration to locate a plane and add an AR object.
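As a very rough sketch of that idea (the class name, coordinates, and radius here are purely hypothetical), it could look something like this;
import ARKit
import CoreLocation

// Hypothetical sketch: treat a coordinate plus a radius as a simple geofence,
// and only start plane detection once the user's location falls inside it.
class GeoFencedARController: NSObject, CLLocationManagerDelegate {
    let locationManager = CLLocationManager()
    let session = ARSession()

    // Placeholder point of interest and radius (in meters).
    let pointOfInterest = CLLocation(latitude: 34.0522, longitude: -118.2437)
    let radius: CLLocationDistance = 100

    func start() {
        locationManager.delegate = self
        locationManager.requestWhenInUseAuthorization()
        locationManager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let current = locations.last else { return }
        // Inside the "geofence"? Run world tracking with plane detection.
        if current.distance(from: pointOfInterest) <= radius {
            let configuration = ARWorldTrackingConfiguration()
            configuration.planeDetection = [.horizontal]
            session.run(configuration)
        }
    }
}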
As you stated, you'd never hit the same level of precision as you do with ARGeoTracking, though if you wanted to go further (this is what I was trying before ARGeoTracking was announced), you could go out and photograph the location yourself, train an image recognition (or object detection) model in Create ML, then bring that into your app. That way, you could use the user's geolocation to get a general idea of their location, then perform analysis to determine whether their camera feed sees a point of interest that you've photographed for your model. You could even go further and convert the bounding box around your point of interest to AR coordinates, anchoring your AR object to the detected object.
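As a very rough sketch of that last step (the Vision model, confidence threshold, and anchor name are hypothetical, and the older ARFrame hitTest call shown here could be replaced by the newer raycasting APIs), the detection-to-anchor piece might look like;
import ARKit
import Vision

// Run an object detection model against the current ARFrame and, if the point of
// interest is found, hit-test the center of its bounding box to place an ARAnchor.
func detectPointOfInterest(in frame: ARFrame, session: ARSession, visionModel: VNCoreMLModel) {
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let observation = request.results?
            .compactMap({ $0 as? VNRecognizedObjectObservation })
            .first(where: { $0.confidence > 0.8 }) else { return }

        // Vision bounding boxes are normalized with a lower-left origin;
        // ARFrame.hitTest expects normalized image coordinates with an upper-left origin.
        let box = observation.boundingBox
        let point = CGPoint(x: box.midX, y: 1 - box.midY)

        if let result = frame.hitTest(point, types: [.existingPlaneUsingExtent, .estimatedHorizontalPlane]).first {
            session.add(anchor: ARAnchor(name: "pointOfInterest", transform: result.worldTransform))
        }
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, options: [:])
    try? handler.perform([request])
}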
As stated in the WWDC session, ARGeoTracking will become available in more places throughout the summer.
You likely have already seen it, but these two articles provide a plethora of information on rigging your own model for use with the ARBodyTracking ("motion capture") configuration in ARKit;
https://developer.apple.com/documentation/arkit/rigging_a_model_for_motion_capture
https://developer.apple.com/documentation/arkit/validating_a_model_for_motion_capture
I had attempted this a few months ago myself, going crazy trying to use a rigged 3D model with ARBodyTracking (specifically, I downloaded a rigged 3D model off of a website, brought it into Blender, and tried to rename the joints to match the skeletal structure of the sample robot.usdz model provided in Apple's sample project, referenced in that first link).
I am not versed enough in Blender to offer more there, but I ended up using a trial of Maya 2020: I imported Apple's robot.usdz model, separated the skeleton from the mesh, removed the mesh, imported my own rigged 3D model, deleted its skeleton, then bound my mesh to Apple's skeleton. While I still had some work to do on skin weights to improve the joints, this worked perfectly for testing with ARBodyTracking.
I am not entirely certain what your app is supposed to do (I'm not familiar enough with Amazon's API or what is being scanned to query the ratings), but I was able to generate AR text when testing your app. Specifically, in the configureOCR() function of your ViewController.swift file, I added a small debug snippet, just after you handle the observation from the VNRecognizedTextObservation array;
DispatchQueue.main.async {
	 self.ocrTextView.text = "Blah blah blah"
	 self.showARText()
}
(I added this on line 163, if that helps). Once I ran your app, scanned a document, and saved, the text was properly set to "Blah blah blah" and showARText was called successfully. I did not receive any errors and the text appeared in AR.
My guess is that something in the formatting of your response from the API call is causing an issue. I would try setting the text of the ocrTextView to a default string for debugging purposes, just to prove that your code works, then backtrack and figure out why the URLSession/parsing is causing the error. It may also be worth saving the desired "AR text" to its own string variable, rather than trying to generate the SCNText object from the text property of your subclassed UITextView (I'm not sure why it should matter, but you're adding more possible failure points by having to worry about the OcrTextView initializing properly and dealing with all of the Auto Layout and UI setup, when all you're after is the text).
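For example (the property name is hypothetical, and sceneView is assumed to be your ARSCNView), that decoupling could look like;
import ARKit
import SceneKit

// Keep the recognized text in its own String property, decoupled from the UI.
var recognizedText: String = ""

func showARText() {
    // Build the SCNText from the stored string rather than the text view's text.
    let textGeometry = SCNText(string: recognizedText, extrusionDepth: 1.0)
    textGeometry.font = UIFont.systemFont(ofSize: 10)

    let textNode = SCNNode(geometry: textGeometry)
    textNode.scale = SCNVector3(0.01, 0.01, 0.01) // SCNText is large by default
    textNode.position = SCNVector3(0, 0, -0.5)    // half a meter in front of the world origin
    sceneView.scene.rootNode.addChildNode(textNode)
}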
Is there a particular project or task you are looking to accomplish that you could before, but are unable to now? As far as I understand it, SCNGeometry has not been deprecated in any way.
To your question: no, there is no way to use SCNGeometry in RealityKit, as RealityKit serves as an alternative to SceneKit (with regard to Augmented Reality apps), not as a replacement. There are many cases in which SceneKit proves to be the more suitable choice for Augmented Reality, and other cases where RealityKit is the more suitable choice.
ARMeshGeometry is a component of ARKit, totally independent of SceneKit or RealityKit. ARKit runs the AR experience, whereas SceneKit and RealityKit handle the rendering of 3D content for use in AR. You can use ARMeshGeometry alongside SceneKit, though from your posts, it seems like you are looking for an easy way to take the ARMeshGeometry and create an SCNGeometry from it, which is not something ARKit provides out of the box.
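If converting the mesh into SceneKit geometry is the goal, it can be done manually. As a rough sketch (assuming an ARMeshAnchor from a session with sceneReconstruction enabled, and ignoring normals and classifications), that might look like;
import ARKit
import SceneKit

// Build an SCNGeometry from an ARMeshAnchor's ARMeshGeometry.
func scnGeometry(from meshAnchor: ARMeshAnchor) -> SCNGeometry {
    let meshGeometry = meshAnchor.geometry

    // Vertex positions come as an ARGeometrySource backed by a Metal buffer.
    let vertices = meshGeometry.vertices
    let vertexSource = SCNGeometrySource(buffer: vertices.buffer,
                                         vertexFormat: vertices.format,
                                         semantic: .vertex,
                                         vertexCount: vertices.count,
                                         dataOffset: vertices.offset,
                                         dataStride: vertices.stride)

    // Faces come as an ARGeometryElement (triangles).
    let faces = meshGeometry.faces
    let facesData = Data(bytes: faces.buffer.contents(),
                         count: faces.buffer.length)
    let element = SCNGeometryElement(data: facesData,
                                     primitiveType: .triangles,
                                     primitiveCount: faces.count,
                                     bytesPerIndex: faces.bytesPerIndex)

    return SCNGeometry(sources: [vertexSource], elements: [element])
}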
My understanding may be more rudimentary, but I find the ARMeshGeometry (which includes the classification of the "type" of surface each face of the mesh belongs to) to be incredibly valuable for the types of apps I am looking to build. There are many different use cases for AR; many want to use the LiDAR camera to build 3D representations of the world around them, creating point clouds and such, though for me personally, creating 3D content that interacts with doors, walls, seats, ceilings, etc. is what's useful. If you clarify what you are trying to achieve with ARMeshGeometry and SCNGeometry, perhaps others will have thoughts on how to achieve it.
I'm no expert, but why is CoreML not an option? CoreML is not limited to visual/sound/language/text models, and you can use coremltools to convert models from TensorFlow, PyTorch, Keras, Caffe, LIBSVM, scikit-learn, and XGBoost to a .mlmodel file, which could then leverage on-device encryption.
Presuming that is not an option, I'd think you need to try and identify the particular touch-points where a model could become compromised. For example, if it is bundled with the app at the time in which the app is published to the App Store, would that model be accessible to anyone with access to the device? Subsequently, I'd think you could even go more old-school and consider compressing and encrypting the model when it is bundled with the app/downloaded from a server, then decompress and decrypt the model using either a key hard-coded in the app's code, or some mechanism to confirm the user's identity and approval to decrypt (using CryptoKit or something of the like).
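As a minimal sketch of that last idea (the file naming, key handling, and function are all hypothetical, and a pre-shared key is shown only for illustration), decrypting a bundled model with CryptoKit could look like;
import CryptoKit
import Foundation

// Decrypt a model that was encrypted with AES-GCM and bundled as "<name>.enc".
// In practice, the SymmetricKey should be derived or fetched securely rather
// than hard-coded in the app.
func decryptBundledModel(named name: String, key: SymmetricKey) throws -> Data {
    guard let url = Bundle.main.url(forResource: name, withExtension: "enc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let encrypted = try Data(contentsOf: url)
    let sealedBox = try AES.GCM.SealedBox(combined: encrypted)
    return try AES.GCM.open(sealedBox, using: key)
}
You could then write the decrypted bytes to a temporary file and compile/load the model from that URL (for example, with MLModel.compileModel(at:)).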
Again, I'm no expert on the topic, but I saw that it's been a few days since you posted this, and I'd be curious about a more Apple-approved answer, too.
While this may not prove to be the most robust of answers, I believe this would be a use case for AVAssetWriter. Succinctly, you could hypothetically instantiate an AVAssetWriter instance before your ReplayKit session begins, then call startWriting() once your app calls the startCapture() method, per the sample app you linked to. Once stopCapture() is called in the app, you could stop the AVAssetWriter, which would then save the created file to a defined location.
To address your question directly, the processAppAudioSample(sampleBuffer: CMSampleBuffer) method would be modified to append the sample buffer to the file being created, by way of the AVAssetWriter's input.
In short, you set up your AVAssetWriter, call startWriting() when your ReplayKit session begins, append each sample buffer to the AVAssetWriter's input, then mark the writing as finished once your ReplayKit session ends.
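As a rough sketch of that flow (the class and method names here are my own, and error handling is kept minimal), it might look like;
import AVFoundation
import CoreMedia

// Hypothetical helper that writes audio sample buffers from a ReplayKit capture
// to an .m4a file via AVAssetWriter.
final class AudioCaptureWriter {
    private var writer: AVAssetWriter?
    private var audioInput: AVAssetWriterInput?
    private var sessionStarted = false

    func startWriting(to url: URL) throws {
        let writer = try AVAssetWriter(outputURL: url, fileType: .m4a)
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVNumberOfChannelsKey: 2,
            AVSampleRateKey: 44_100
        ]
        let input = AVAssetWriterInput(mediaType: .audio, outputSettings: settings)
        input.expectsMediaDataInRealTime = true
        writer.add(input)
        writer.startWriting()
        self.writer = writer
        self.audioInput = input
    }

    // Call this from the startCapture handler for audio sample buffers.
    func processAppAudioSample(sampleBuffer: CMSampleBuffer) {
        guard let writer = writer, let input = audioInput, writer.status == .writing else { return }
        if !sessionStarted {
            // The writer's session must begin before the first buffer is appended.
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            sessionStarted = true
        }
        if input.isReadyForMoreMediaData {
            input.append(sampleBuffer)
        }
    }

    func finishWriting(completion: @escaping () -> Void) {
        audioInput?.markAsFinished()
        writer?.finishWriting(completionHandler: completion)
    }
}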
AVAssetWriter is available on iOS, macOS, tvOS, and through Mac Catalyst, and is a part of AVFoundation. Admittedly, AVAssetWriter can be a bit to wrap your head around at first (it has many, many configurable options as you're dealing with file formats, codecs, resolutions, bitrates, etc.), but there are plenty of great tutorials and sample code available online. It took me some time to grasp it, but for what you're after, you should find plenty of samples on how to just create a file by appending the sample buffers, and you can learn more about the configurable options as you need.
As an aside, if you're looking to record computer audio/microphone audio only, you don't need ReplayKit to be involved. I'm not sure of your use case, so ReplayKit may be relevant, but AVFoundation provides tools to capture audio from a built-in/external microphone, or computer audio, without using ReplayKit (ReplayKit does, however, provide a great interface for users and considers user privacy, so it should not be dismissed).
Reality Composer is available from within Xcode. You can access the software by launching Xcode and choosing Xcode -> Open Developer Tool -> Reality Composer. This is true of the current release of Xcode, available in the Mac App Store, as well as Xcode 12 Beta, if you are developing for iOS/iPadOS 14/macOS Big Sur.
There is a limited set of ARKit functionality available on macOS (as compared to iOS/iPadOS). You can determine what is available by viewing the documentation at https://developer.apple.com/documentation/arkit and noting whether each API supports macOS (or macOS by way of Mac Catalyst), and which macOS version is supported.
With that said, you can install Xcode 12 Beta on macOS 10.15.4 (Catalina) or higher, and do not need to jump to macOS 11 (Big Sur) if you are leveraging ARKit technologies that are noted to run on macOS Catalina. While I may be generalizing, many of the newest ARKit 4 features are geared towards iOS 14/iPadOS 14, and in those cases your consideration would be more about whether you want to install a beta of iOS 14/iPadOS 14 on your devices than whether you want to update your Mac to macOS 11 at this time.
Sorry if I misunderstood your question, but I'm still a bit confused. SCNGeometry has not been deprecated in any way, and you can use the mesh data gathered from the LiDAR camera in SceneKit, alongside SCNGeometry. You are not required to use RealityKit to work with the LiDAR camera. The mesh data gathered from the LiDAR camera comes as an ARMeshGeometry, which is independent of RealityKit or SceneKit.
To your point, while I have no more knowledge than any other developer on these forums, I have no reason to suspect SceneKit has been deprecated in any way. There are many cases in which I opt to use SceneKit in AR apps, as no suitable alternative exists in RealityKit. Many Apple sample projects related to ARKit still leverage SceneKit, including projects posted from the latest WWDC 2020, so SceneKit certainly remains a capable choice across the board.
If you're willing to share what it is you are trying to accomplish, perhaps someone here would be able to help. Speaking for myself, I've jumped into RealityKit where prudent for my AR apps, but am still using SceneKit in other cases, and I would imagine that some of the talented developers here could offer tips on getting started with either framework, depending on your needs.
As far as I understand your inquiry, this should be possible to do. There are a few steps and considerations you will need to take, but as a whole, you should be able to download reference images (and some relevant metadata) from a web-based resource, instantiate a set of ARReferenceImages, and use those ARReferenceImages as anchors for your AR session. In my mind, you would need to take these steps;
Begin an ARWorldTrackingConfiguration or ARImageTrackingConfiguration so your users have a responsive experience and see the camera feed immediately.
Create an empty set of ARReferenceImages, which will be used to hold your downloaded images.
Download the necessary images from your web-based resource, using a common URLSession (or any optimal method for downloading images).
Instantiate an ARReferenceImage for each of your downloaded images. Note that you will also need to have awareness of the image's orientation, as well as the image's physical width in the real world (you may want to have some data, such as a JSON file, that contains the URLs of each image, alongside their real-world size, in meters).
Insert each of your new ARReferenceImages into the empty ARReferenceImages set.
Reset your session and configure a new ARWorldTrackingConfiguration or ARImageTrackingConfiguration, setting your newly created ARReferenceImages set to the configuration's detectionImages or trackingImages property, respectively.
Instantiate an AnchorEntity from the ARImageAnchor, which you should receive in a delegate method whenever a tracked image is found.
If I were to build an app trying to achieve this, I'd go step by step.
Set Up A Session For A Responsive Experience
Assuming you already have a RealityKit project going (with an ARView named arView), you could add this in your viewDidLoad method.
let configuration = ARWorldTrackingConfiguration()
arView.session.run(configuration)
Create An Empty Set of ARReferenceImages
You could add this to a local function or as a property accessible to your entire ViewController or class.
var newReferenceImages: Set<ARReferenceImage> = Set<ARReferenceImage>()
Download the Images
You can use an asynchronous URLSession to download your images (and any relevant metadata). This should return each image as a UIImage. As your question focuses more on ARKit/RealityKit, I won't go deep into this part; there are plenty of resources on downloading images online.
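That said, a bare-bones sketch of grabbing a single image (the URL is a placeholder, and error handling is omitted) might look like;
import UIKit

// Minimal sketch: download one image and hand it back as a UIImage.
let imageURL = URL(string: "https://example.com/referenceImage.jpg")!
URLSession.shared.dataTask(with: imageURL) { data, _, _ in
    guard let data = data, let image = UIImage(data: data) else { return }
    // Pass the UIImage (and its metadata) along to the ARReferenceImage step below.
    print("Downloaded image of size:", image.size)
}.resume()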
Instantiate a New ARReferenceImage from Each Downloaded Image
For each downloaded image, you could create a new ARReferenceImage.
let myImage = ARReferenceImage(downloadedImage.cgImage!, orientation: CGImagePropertyOrientation.up, physicalWidth: width)
In your case, you will want to consider how you are acquiring the CGImagePropertyOrientation (whether that is being determined by a function you already have in your app, as your sample in the question shows, setting to .up as a default, or some other methodology). The same with the physical width of the image; you'll want to acquire that from somewhere prior to this step.
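One possibility (the JSON structure and field names here are my own assumptions, not anything ARKit requires) is to ship that metadata alongside the image URLs and decode it into a small Codable type;
import Foundation
import CoreGraphics

// Hypothetical metadata describing each reference image.
struct ReferenceImageMetadata: Codable {
    let imageURL: URL
    let physicalWidth: CGFloat   // real-world width of the printed image, in meters
    let orientation: String      // e.g. "up"; map this to CGImagePropertyOrientation yourself
}

// Decode the JSON payload you downloaded alongside the images.
func parseMetadata(from jsonData: Data) throws -> [ReferenceImageMetadata] {
    try JSONDecoder().decode([ReferenceImageMetadata].self, from: jsonData)
}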
Insert Each ARReferenceImage Into the Empty ARReferenceImages Set
newReferenceImages.insert(myImage)
Reset Session and Re-Configure
Once you have added each ARReferenceImage to the ARReferenceImages set, you can reset your session and apply this set to the configuration. I would recommend a function like this;
func resetSession() {
	 let configuration = ARWorldTrackingConfiguration()
	 configuration.detectionImages = newReferenceImages
	 configuration.maximumNumberOfTrackedImages = 1
	 arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
Your choice of maximumNumberOfTrackedImages should be a number suitable for your app's experience.
Create AnchorEntity from Each ARImageAnchor
Presumably, you will have already set an ARSessionDelegate somewhere in your setup. This should allow your delegate to receive a call each time new anchors are added, provided as the more general ARAnchor type. Therefore, I would use that delegate function like this;
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
	 for anchor in anchors {
			if let myAnchor = anchor as? ARImageAnchor {
				 let imageAnchor = AnchorEntity(anchor: myAnchor)
				 /* Do something with the anchor here if necessary, such as adding an Entity to the anchor. For example;
				 let model = try! Entity.load(named: "myModel")
				 imageAnchor.addChild(model)
				 */
				 arView.scene.addAnchor(imageAnchor)
			}
	 }
}
Note that this example does not take efficiency into account (i.e., loading the model Entity should probably happen somewhere earlier in your app's lifecycle), but as a whole, this should point you in the general direction of each step necessary to download, build, and set your ARReferenceImages.
Accessing the .usdz file is really no different than accessing any other image, video, or data off of the internet. As far as I am aware, there is nothing inherently built into the .usdz or .reality file format that would allow you to restrict their ability to be downloaded.
With that said, while this is unrelated to Apple/ARKit, you may want to consider the protections one would put in place when accessing an image or video that should only be available to a specific set of users. In that case, looking into signed URLs (if that is a feature of your web host/CDN) or URL obfuscation could be prudent. I'd treat the .usdz as nothing more than an image; you can protect it in the same way, since it can be downloaded exactly like an image or video can.
As far as I can tell, the checkAvailability - https://developer.apple.com/documentation/arkit/argeotrackingconfiguration/3571351-checkavailability documentation has an updated list of locales which support ARGeoTracking (at the time of writing, that includes the San Francisco Bay Area, Los Angeles, New York, Chicago, and Miami). It was stated in the Explore ARKit 4 - WWDC 2020 Session - https://developer.apple.com/videos/play/wwdc2020/10611/ that more locales would be added throughout the summer and, presumably, into the future.
From personal experience, I can say that I've had varying success in the locales noted. For example, I've found success outside of Los Angeles, into parts of Orange County, but at other times, failed to have success at random places in Santa Monica. As a whole, I find success in most parts of Los Angeles County, all of Manhattan, most of Brooklyn, etc.
The easiest approach to determining whether ARGeoTracking is supported at a locale is to use the checkAvailability(at:) - https://developer.apple.com/documentation/arkit/argeotrackingconfiguration/3571350-checkavailability method. For example, this is my approach to checking whether ARGeoTracking/localization is supported at a given coordinate (in this case, Times Square in New York City):
let coordinates = CLLocationCoordinate2D(latitude: 40.75921100, longitude: -73.98463800)
ARGeoTrackingConfiguration.checkAvailability(at: coordinates) { (available, error) in
	 if let error = error {
		 print("Error with ARGeoTracking check:", error)
	 }
	 print("Is ARGeoTracking available here?", available)
}
My suggestion would be to gather a list of the coordinates you wish to use within your app, then run the aforementioned check on each coordinate at runtime to determine whether such functionality is supported (this way, you will be prepared to handle cases in which the functionality becomes supported in the future, or gracefully handle locations that are not supported).
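For example (the coordinates below are placeholders), a quick check over a list could look like;
import ARKit
import CoreLocation

// Placeholder coordinates; swap in the locations your app actually uses.
let coordinatesToCheck = [
    CLLocationCoordinate2D(latitude: 40.75921100, longitude: -73.98463800), // Times Square
    CLLocationCoordinate2D(latitude: 34.05220000, longitude: -118.24370000) // Downtown Los Angeles
]

for coordinate in coordinatesToCheck {
    ARGeoTrackingConfiguration.checkAvailability(at: coordinate) { available, _ in
        print("GeoTracking available at \(coordinate.latitude), \(coordinate.longitude)?", available)
    }
}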
Thank you for clarifying that, gchiste. If the session does not have to be reset, does that mean that the ARImageTrackingConfiguration could be run when the app launches (to keep it responsive for the user), and the trackingImages property could be set after the fact? Or would it make more sense to wait to begin the session until all of the reference images have been downloaded/created?
Is there any documentation available that indicates the difference between snapshot and placeholder? While I do see that @pdm noted that snapshot is asynchronous, I was under the impression that snapshot's goal is to provide a quick representation of the widget, as it will be previewed in the widget gallery, and that placeholder is relevant in cases where the widget will be rendered on the home screen before data is available. In effect;
Snapshot - Should provide real data, asynchronously, but should return this data as quickly as possible for rendering in the widget gallery.
Placeholder - Should provide data as quickly as possible, synchronously, which will be (in a future beta) automatically rendered as redacted where relevant to provide a rendered UI.
Timeline - The standard timeline entries can be provided asynchronously, and as they do not need to be provided quickly (necessarily), can gather the relevant data from network resources or the app for optimal experience.
Do I have that right?