Posts

0 Replies
567 Views
With the availability of tracking AppClipCodeAnchor in ARKit on iOS/iPadOS 14.3+, I'm curious whether there is a way to determine the rotation (or, more specifically, the "angle") at which an App Clip Code is detected. For example, an App Clip Code could appear on a business card that a user has lying flat on a table (therefore at a 0° angle). In another case, an App Clip Code could be printed and mounted on a wall, such as in a museum or a restaurant (therefore at a 90° angle).

Anchoring AR experiences (especially ones built in Reality Composer) to the detected AppClipCodeAnchor results in strange behavior when the App Clip Code is at anything other than 0°: the content appears "tethered" to the real-world App Clip Code, and therefore appears unexpectedly rotated unless I manually transform the rotation of the 3D content.

When I print the details of the AppClipCodeAnchor once it is detected in my ARKit session, I can see that a human-readable descriptor for the "angle" of the detected code is available. However, I can't figure out how to determine this property from the AppClipCodeAnchor's transform. Is there an easy way to rotate 3D content to match the rotation of the scanned App Clip Code?
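For reference, this is the direction I have been experimenting with to derive an angle from the anchor's transform. It is only a sketch, and it assumes the App Clip Code lies in the anchor's local X-Z plane with +Y as its surface normal (the same convention as ARImageAnchor), which I have not been able to confirm:

import ARKit
import simd

// Sketch only: assumes the code lies in the anchor's local X-Z plane
// and that +Y (column 1 of the transform) is the code's surface normal.
func surfaceAngleDegrees(of anchor: ARAppClipCodeAnchor) -> Float {
    let column = anchor.transform.columns.1
    let normal = simd_normalize(SIMD3<Float>(column.x, column.y, column.z))
    let worldUp = SIMD3<Float>(0, 1, 0)
    // ~0° for a code lying flat on a table, ~90° for a code mounted on a wall.
    let dot = max(-1, min(1, simd_dot(normal, worldUp)))
    return acos(dot) * 180 / .pi
}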
3 Replies
1.6k Views
Much of this question is adapted from the idea of building a SCNGeometry from an ARMeshGeometry, as indicated in this very helpful post by @gchiste: https://developer.apple.com/forums/thread/130599?answerId=414671022#414671022

In my app, I am creating a SCNScene with my scanned ARMeshGeometry built as SCNGeometry, and would like to apply a "texture" to the scene, replicating what the camera saw as each mesh was built. The end goal is to create a 3D model somewhat representative of the scanned environment. My understanding of texturing (and UV maps) is quite limited, but my general thought is that I would need to create texture coordinates for each mesh, then sample the ARFrame's capturedImage to apply to the mesh. Is there any particular documentation or general guidance one might be able to provide to create such an output?
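My rough idea so far looks something like the sketch below. It assumes I can project each world-space mesh vertex into the captured image via the frame's camera, and that the captured image is delivered in landscape-right orientation; I have not verified either assumption:

import ARKit
import SceneKit

// Sketch: build a SceneKit texture-coordinate source by projecting each
// world-space mesh vertex into the ARFrame's captured image.
func textureCoordinateSource(for worldVertices: [SIMD3<Float>],
                             in frame: ARFrame) -> SCNGeometrySource {
    let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                           height: CVPixelBufferGetHeight(frame.capturedImage))
    let uvs: [CGPoint] = worldVertices.map { vertex in
        // Project the vertex into the captured image's pixel coordinates
        // (assumed to be in landscape-right sensor orientation).
        let projected = frame.camera.projectPoint(vertex,
                                                  orientation: .landscapeRight,
                                                  viewportSize: imageSize)
        // Normalize to the 0...1 range SceneKit expects for UVs.
        return CGPoint(x: projected.x / imageSize.width,
                       y: projected.y / imageSize.height)
    }
    return SCNGeometrySource(textureCoordinates: uvs)
}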
2 Replies
1.5k Views
I am currently working with RealityKit to load a USDZ model from my application's bundle. My model is being added like so:

var modelLoading: Cancellable?
modelLoading = Entity.loadAsync(named: name)
    .receive(on: RunLoop.main)
    .sink(receiveCompletion: { (completion) in
        modelLoading?.cancel()
    }, receiveValue: { (model) in
        model.setScale(SIMD3(repeating: 5.0), relativeTo: nil)

        let parentEntity = ModelEntity()
        parentEntity.addChild(model)

        let entityBounds = model.visualBounds(relativeTo: parentEntity)
        parentEntity.collision = CollisionComponent(shapes: [ShapeResource.generateBox(size: entityBounds.extents).offsetBy(translation: entityBounds.center)])
        self.arView.installGestures(for: parentEntity)

        let anchor = AnchorEntity(plane: .horizontal)
        anchor.addChild(parentEntity)
        arView.scene.addAnchor(anchor)
    })

When my model is added to the scene, which works as expected, I notice that the model has no "ground shadows." This differs from viewing the same model via AR Quick Look, as well as from loading a Reality Composer project (.rcproject), which seems to add grounding shadows automatically. While I have done some research into PointLight, DirectionalLight, and SpotLight entities, I am quite a novice at 3D modeling, and only seek to add a shadow just below the object, to give it a more realistic appearance on tables. Is there a methodology for achieving this?
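For reference, this is the sort of thing I have been experimenting with from my reading about light entities. It is only a sketch; the intensity, distance, and bias values are guesses, and I don't know whether a shadow-casting directional light is actually the intended way to get grounding shadows:

import RealityKit

// Sketch: add a directional light with a shadow component, aimed downward,
// so a shadow is cast onto the surface beneath the model.
func addShadowCastingLight(to anchor: AnchorEntity) {
    let light = DirectionalLight()
    light.light.intensity = 5000
    // The Shadow component is what actually produces the cast shadow.
    light.shadow = DirectionalLightComponent.Shadow(maximumDistance: 3,
                                                    depthBias: 2)
    // Aim the light down toward the anchor's origin from above.
    light.look(at: .zero, from: [1, 2, 1], relativeTo: anchor)
    anchor.addChild(light)
}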
2 Replies
2.1k Views
I am trying to follow the guidance for testing a Local Experience, as listed in the Testing Your App Clip's Launch Experience documentation: https://developer.apple.com/documentation/app_clips/testing_your_app_clip_s_launch_experience

I have successfully created my App Clip target, and can confirm that running the App Clip on my device launches the App Clip app as expected. Further, I can successfully test the App Clip on device by setting the _XCAppClipURL argument in the App Clip's scheme.

I would now like to test a Local Experience. The documentation states that, for testing Local Experiences, "you don't need to add the Associated Domains Entitlement, make changes to the Apple App Site Association file on your web server, or create an app clip experience for testing in TestFlight." Therefore, I should be able to configure a Local Experience with any desired domain in Settings -> Developer -> Local Experience, generate a QR code or NFC tag with that same URL, and the App Clip experience should appear.

I have taken the following steps:
1. Built and run my App Clip on my local device.
2. In Settings -> Developer -> Local Experience, registered a new experience using the URL prefix https://somewebsite.com.
3. Set the Bundle ID to com.mycompany.myapp.Clip, which exactly matches the Bundle Identifier listed in Xcode under my App Clip target.
4. Generated a QR code that directs to https://somewebsite.com.

In theory, I believe I should be able to open the Camera app on my device, point the camera at the QR code, and see the App Clip experience appear. However, I get mixed results: 50% of the time, I receive a pop-up directing me to open https://somewebsite.com in Safari; the other 50% of the time, no banner or action occurs whatsoever. Is this an issue anyone has faced before, or have I pursued these steps out of order?
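For completeness, my App Clip's entry point handles the invocation URL roughly like the simplified sketch below (the type names are placeholders, not my exact code), in case the handling itself is the problem:

import Foundation
import SwiftUI

@main
struct MyClipApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
                .onContinueUserActivity(NSUserActivityTypeBrowsingWeb) { activity in
                    // The invocation URL from the QR code / local experience.
                    guard let url = activity.webpageURL else { return }
                    print("App Clip invoked with:", url)
                }
        }
    }
}

struct ContentView: View {
    var body: some View {
        Text("App Clip")
    }
}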
0 Replies
601 Views
I am a bit confused about the proper usage of GeometryReader. For example, I have a SwiftUI View, like so:

var body: some View {
    VStack {
        Text("Hello, World!")
            .background(Color.red)
        Text("More Text")
            .background(Color.blue)
    }
}

This positions my VStack perfectly in the middle of the device, both horizontally and vertically. At some point, I may need to know the width of the View's frame, and therefore want to introduce a GeometryReader:

var body: some View {
    GeometryReader { geometry in
        VStack {
            Text("Hello, World!")
                .background(Color.red)
            Text("More Text")
                .background(Color.blue)
        }
    }
}

While I now have access to the View's frame using the GeometryProxy, my VStack is moved to the top-left corner of the device. Why is this? Subsequently, is there any way to get the size of the View without having the layout altered?
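One workaround I have been experimenting with is reading the size from a GeometryReader placed in the VStack's background, so that the reader adopts the stack's own size rather than proposing the full screen. A sketch of that idea is below, though I'm not sure it is the intended approach:

import SwiftUI

struct MeasuredView: View {
    @State private var measuredSize: CGSize = .zero

    var body: some View {
        VStack {
            Text("Hello, World!")
                .background(Color.red)
            Text("More Text")
                .background(Color.blue)
        }
        .background(
            // The GeometryReader only fills the space the VStack already
            // occupies, so the centering of the stack is unchanged.
            GeometryReader { geometry in
                Color.clear
                    .onAppear { measuredSize = geometry.size }
            }
        )
    }
}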
2 Replies
662 Views
I have noticed that iOS 14, macOS 11, and tvOS 14 include the ability to process video files using the new VNVideoProcessor class. I have tried to leverage this within my code, in an attempt to perform a VNTrackObjectRequest, with no success. Specifically, the resulting observations report an INVALID time range, and the confidence and detected bounding box never change. I am setting up my code like so:

let videoProcessor = VNVideoProcessor(url: videoURL)
let asset = AVAsset(url: videoURL)
let completion: VNRequestCompletionHandler = { request, error in
    let observations = request.results as! [VNObservation]
    if let observation = observations.first as? VNDetectedObjectObservation {
        print("OBSERVATION:", observation)
    }
}
let inputObservation = VNDetectedObjectObservation(boundingBox: rect.boundingBox)
let request: VNTrackingRequest = VNTrackObjectRequest(detectedObjectObservation: inputObservation, completionHandler: completion)
request.trackingLevel = .accurate
do {
    try videoProcessor.add(request, withProcessingOptions: [:])
    try videoProcessor.analyze(with: CMTimeRange(start: .zero, duration: asset.duration))
} catch(let error) {
    print(error)
}

A sample output I receive in the console during observation is:

OBSERVATION: <VNDetectedObjectObservation: 0x2827ee200> 032AB694-62E2-4674-B725-18EA2804A93F requestRevision=2 confidence=1.000000 timeRange={{0/90000 = 0.000}, {INVALID}} boundingBox=[0.333333, 0.138599, 0.162479, 0.207899]

I note that the observation reports its time range as INVALID, alongside the fact that the confidence is always reported as 1.000000 and the bounding box coordinates never change. I'm unsure whether this has to do with my lack of VNVideoProcessingOption setup or something else I am doing wrong.
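I have also wondered whether I should be passing a VNVideoProcessor.RequestProcessingOptions object with an explicit cadence instead of the empty options dictionary, along these lines (an untested sketch; the 30 fps cadence is an arbitrary guess):

import Vision

// Sketch: attach an explicit frame-rate cadence via RequestProcessingOptions,
// using the same videoProcessor and request as above.
let processingOptions = VNVideoProcessor.RequestProcessingOptions()
processingOptions.cadence = VNVideoProcessorFrameRateCadence(30)
do {
    try videoProcessor.addRequest(request, processingOptions: processingOptions)
} catch {
    print(error)
}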
0 Replies
397 Views
Is any documentation available for supporting the Afterburner card in third-party applications? Documentation for the Afterburner card indicates that support is available to third-party developers, but I cannot seem to find anything that indicates how to take advantage of this hardware within my own video processing application. Thanks!
2 Replies
1.3k Views
With the latest few releases of the Reality Composer beta (on iOS), it is now possible to create a scene that uses a 3D object as an "anchor." I have created a scene using the object anchor choice, scanned my object in 3D within Reality Composer, and can successfully test this experience by viewing my Reality Composer project in AR, choosing "Play," and seeing my 3D item appear when the object is detected. For posterity, my scanned anchor is a bottle, and my 3D item is a metallic sphere.

When attempting to bring this experience to Xcode, I am unsure how to use the object as the anchor. I am loading it like so:

let bottle = try! Bottle.loadScene()
arView.scene.anchors.append(bottle)

In this case, Bottle is my .rcproject and my scene is named "scene." When I build and run the project, my 3D item (the metallic sphere) appears immediately on screen, rather than remaining hidden until the "object anchor" (the bottle) is detected.

Using the Scanning and Detecting 3D Objects documentation as a guide, do I need to manually set up the ARWorldTracking reference objects, like so?

let configuration = ARWorldTrackingConfiguration()
guard let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "gallery", bundle: nil) else {
    fatalError("Missing expected asset catalog resources.")
}
configuration.detectionObjects = referenceObjects
sceneView.session.run(configuration)

And if so, how do I get access to the .arobject scanned from Reality Composer? Thanks!
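If manual configuration is the way to go, I imagine it would look something like the sketch below with RealityKit's ARView, assuming I could export the scanned object as an .arobject and add it to an asset catalog resource group (the "AR Resources" group name here is hypothetical):

import ARKit
import RealityKit

// Sketch: manually run object detection on the ARView's session, then add the
// Reality Composer scene. "AR Resources" is a placeholder group name.
let configuration = ARWorldTrackingConfiguration()
guard let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "AR Resources",
                                                                bundle: nil) else {
    fatalError("Missing expected asset catalog resources.")
}
configuration.detectionObjects = referenceObjects
arView.session.run(configuration)

let bottle = try! Bottle.loadScene()
arView.scene.anchors.append(bottle)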
0 Replies
517 Views
Wondering if anyone could shed some light on a question. I am tasked with building a feature in my app that would allow a user to "stream" their iPhone camera to another nearby iOS device (either wirelessly or hard-wired). Both devices would be running a custom app I develop, and I have the following requirements:

- The functionality must work whether or not internet connectivity is available. This removes the option of doing any sort of RTMP or HLS livestream to a server.
- The "preview" device must be relatively responsive and low-latency, though some loss of quality would be acceptable, as this is solely for preview purposes.
- A hard-wired solution (such as connecting an iPad Pro with USB-C to an iPhone XS Max with Lightning) would be feasible and preferred, if possible.

I've attempted to build this functionality using the Multipeer Connectivity framework. While I've been successful in compressing my sample buffers to a small size and transmitting them between the devices, interference and connectivity issues can significantly degrade the experience, resulting in huge latency. I am using a semaphore to ensure I receive and display frames in order, but the latency is too much of an issue to consider this a solution.

Are there any suggested frameworks to investigate that might yield results for this scenario?
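For reference, my current Multipeer Connectivity send path looks roughly like the simplified sketch below; it assumes the frames have already been compressed (for example, to JPEG data):

import MultipeerConnectivity

// Simplified sketch of my current send path. `session` is a connected MCSession
// and `frameData` is an already-compressed camera frame.
func send(_ frameData: Data, over session: MCSession) {
    guard !session.connectedPeers.isEmpty else { return }
    do {
        // .unreliable skips retransmission; dropped frames are acceptable for a
        // live preview and help keep latency down.
        try session.send(frameData, toPeers: session.connectedPeers, with: .unreliable)
    } catch {
        print("Failed to send frame:", error)
    }
}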