Posts

7 Replies
iPhone 7 lacks a TrueDepth camera and is therefore not capable of performing Face ID login. I'm not sure what you are referencing in saying that iOS 13.1 includes Face ID; Face ID has been a feature of iOS on any device with a TrueDepth camera going back to the release of iPhone X in September 2017. iPhone 7 can continue to use Touch ID for authentication, but not Face ID.
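If it helps, an app can check at runtime which biometric a device supports via LocalAuthentication's LAContext and its biometryType property. A minimal sketch (the returned strings are just placeholders for however your app wants to respond):

import LocalAuthentication

func availableBiometry() -> String {
    let context = LAContext()
    var error: NSError?

    // biometryType is only populated after canEvaluatePolicy(_:error:) has been called.
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) else {
        return "No biometric authentication available"
    }

    switch context.biometryType {
    case .faceID:
        return "Face ID"   // TrueDepth devices, iPhone X and later
    case .touchID:
        return "Touch ID"  // e.g. iPhone 7
    default:
        return "None"
    }
}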
1 Reply
Is your book a practical book that "comes to life" when a user points their phone at a page? If so, you would likely want to look at Apple's sample code for "Detecting Images in an AR Experience" (https://developer.apple.com/documentation/arkit/detecting_images_in_an_ar_experience), which shows how to detect an image and augment it with 3D content. In your case, you could provide a reference image for each page of the book that you want users to recognize; ARKit would then scan for those images and, when one is found, add an anchor you can "attach" your 3D content to. The animations themselves, even if they are video files, could be applied in this way. You may also want to consider Reality Composer, which might let you prototype this idea more effectively than relying on code alone. A sketch of the image-detection setup follows below.

To your questions: it's hard to say whether this is something you could or should build using ARKit. ARKit is a framework that does the heavy processing of augmented reality for you; it is not a program for building AR experiences. If you're looking for that, Reality Composer would be your best bet. Additionally, ARKit is for Apple devices only; it has no cross-platform functionality to run on Android devices. You could look into third-party libraries for cross-platform support (a few do exist), but you would lose much of Apple's tight software and hardware integration, and you may provide your users a sub-par experience if you opt not to use ARKit.

To your last question: I agree that the animations would likely be too large to be bundled with the app. You could store them on any server, so long as you have some sort of API that lets your app contact that server and download the media (something you would implement using a URLSession). On-Demand Resources is another technology Apple provides that could prove useful here, but in most scenarios a developer would store the animation resources/videos on a server; the app would then have (or download) a list of the necessary animation files and download each of those files locally for use, all while showing the user a progress bar and informing them of what's happening.
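As a rough sketch of that image-detection setup (the "BookPages" resource group name is a placeholder, and this assumes your page images are stored as an AR Resource Group in the asset catalog, as in Apple's sample):

import UIKit
import ARKit

class PageDetectionViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.delegate = self

        // Load the reference images from an AR Resource Group named "BookPages" (placeholder name).
        guard let pageImages = ARReferenceImage.referenceImages(inGroupNamed: "BookPages", bundle: nil) else {
            fatalError("Missing expected asset catalog resource group.")
        }

        // Ask ARKit to look for those images while tracking the world.
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages = pageImages
        sceneView.session.run(configuration)
    }

    // Called when ARKit adds an anchor for a detected image.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        // Attach your 3D content (or a plane playing a video) to this node;
        // the reference image's name tells you which page was detected.
        print("Detected page: \(imageAnchor.referenceImage.name ?? "unknown")")
    }
}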
1 Reply
Just taking a guess here, but I am pretty sure that your app cannot access anything at the path "/private/var/mobile/Containers/Data/Application/". Since iOS apps live within their own container, they do not have access to the device's file system (and the path you provided also doesn't look like a full path to an image or 3D object scan). Ideally, you would either bundle your 3D reference objects into the app itself (they don't have to live in the asset catalog; they could live in an organized folder among your code) and instantiate the objects programmatically, or, for more dynamic control, host the AR reference objects on a server and have the app download them so the device always has the latest versions saved to storage.

It looks like you're creating the ARReferenceObject properly; you just don't seem to have the right path to the objects. I would suggest saving an object locally alongside your code and loading it with something like:

let completeURL = Bundle.main.url(forResource: "fileName", withExtension: "arobject")

If that approach works, you could expand to storing a number of AR objects in a folder, iterate through the contents of that folder, and create each ARReferenceObject to add to the configuration. Alternatively, you could store the .arobject files on a server, have your app download and save them locally, then build the ARReferenceObjects from the locally saved paths.
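A minimal sketch of that bundled-object approach (the "Sculpture" file name is a placeholder; this assumes the .arobject file is included in your app bundle):

import ARKit

func makeObjectDetectionConfiguration() throws -> ARWorldTrackingConfiguration {
    // Locate the .arobject archive inside the app bundle (placeholder file name).
    guard let objectURL = Bundle.main.url(forResource: "Sculpture", withExtension: "arobject") else {
        throw NSError(domain: "ObjectDetection", code: 1,
                      userInfo: [NSLocalizedDescriptionKey: "Missing .arobject in app bundle"])
    }

    // Build the reference object from the archived scan.
    let referenceObject = try ARReferenceObject(archiveURL: objectURL)

    // Ask ARKit to look for this object during the session.
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionObjects = [referenceObject]
    return configuration
}

// Usage: sceneView.session.run(try makeObjectDetectionConfiguration())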
12 Replies
Has anyone had success bringing a custom-made, rigged 3D model into Xcode for use with ARBodyTracking? I've yet to find any documentation that details the exact skeletal structure this technology requires, though I imagine something in ARKit must "map" the detected human skeleton to a 3D model, so there must be some relevant naming convention.
1 Reply
As far as I understand it, an app shouldn't ship with its core functionality relying on a feature that is limited to certain hardware. You can gracefully inform users when their device doesn't support a particular feature (e.g., show an alert explaining that posture tracking isn't supported on their device), but your app should probably provide other functionality relevant to all users. I'm just a fellow developer, but I have a feeling your app would be rejected if its functionality only works on A12+ devices.
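A small sketch of that graceful fallback, assuming body/posture tracking is the A12-only feature in question (the alert wording and the presenting view controller are placeholders):

import UIKit
import ARKit

func startBodyTrackingIfAvailable(session: ARSession, presentingFrom viewController: UIViewController) {
    // ARBodyTrackingConfiguration is only supported on devices with an A12 chip or later.
    guard ARBodyTrackingConfiguration.isSupported else {
        let alert = UIAlertController(
            title: "Feature Unavailable",
            message: "Posture tracking isn't supported on this device. The rest of the app remains available.",
            preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "OK", style: .default))
        viewController.present(alert, animated: true)
        return
    }

    session.run(ARBodyTrackingConfiguration())
}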
4 Replies
You can access the depth data using ARKit's ARFrame by reading its capturedDepthData property - https://developer.apple.com/documentation/arkit/arframe/2928208-captureddepthdata

This should provide the same AVDepthData information that AVFoundation would, at which point you could use the ARFrame's capturedImage property to perform whatever depth-based processing you deem relevant.
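A brief sketch of reading that property from the session delegate; note that capturedDepthData is only populated on frames captured with the TrueDepth camera (i.e., during face tracking):

import ARKit

class DepthReceiver: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Depth data accompanies only TrueDepth-camera frames; otherwise it's nil.
        guard let depthData = frame.capturedDepthData else { return }

        let depthMap: CVPixelBuffer = depthData.depthDataMap   // per-pixel depth values
        let cameraImage: CVPixelBuffer = frame.capturedImage   // the matching color image
        // ... perform whatever depth-based processing you need on depthMap / cameraImage ...
        _ = (depthMap, cameraImage)
    }
}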