Posts

Post marked as solved
10 Replies
2.8k Views
I get this error message in the OS logs when I run the sample code from this Apple link - https://developer.apple.com/documentation/sirikit/media/managing_audio_with_sirikit:

Failed to fetch user token error: An unknown error occurred

The return code is 0, which also seems strange. I've generated a JWT using swift run generateToken <team-id> <key-id> from the SwiftJWTExample package:

JSON Web Token: "<token here>"
Header: {"typ":"JWT","alg":"ES256","kid":"..."}
Payload: {"iss":"...","iat":...,"exp":...}
Signature: ...

and I have double-checked my team ID and key ID. I've enabled App Groups for both the extension and the main target, and Siri for the main target, and I'm running on an iOS 14 device, compiled with the Xcode beta. Hope you can help me out!
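For reference, the token generation boils down to roughly the following - a minimal sketch using the SwiftJWT package, where the expiry, the key path, and the identifiers are placeholders rather than my real values:

import Foundation
import SwiftJWT

// The claims carried by the developer token: team ID as issuer, plus issued-at and expiry dates.
struct DeveloperTokenClaims: Claims {
    let iss: String   // Apple Developer Team ID
    let iat: Date
    let exp: Date
}

func makeDeveloperToken(teamID: String, keyID: String, privateKeyPath: String) throws -> String {
    // The "kid" header must match the key ID of the .p8 signing key.
    let header = Header(kid: keyID)
    let claims = DeveloperTokenClaims(iss: teamID,
                                      iat: Date(),
                                      exp: Date(timeIntervalSinceNow: 60 * 60 * 24 * 30))  // placeholder expiry
    var jwt = JWT(header: header, claims: claims)

    // Sign with the ES256 private key (.p8) downloaded from the developer portal.
    let privateKey = try Data(contentsOf: URL(fileURLWithPath: privateKeyPath))
    return try jwt.sign(using: JWTSigner.es256(privateKey: privateKey))
}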
Post not yet marked as solved
0 Replies
870 Views
I am trying to improve the performance of drawing the skeleton with body tracking, as I am getting noticeable lag even when standing more than 5 metres away with a stable iPhone XS camera. The tracking is nowhere near the performance shown in the WWDC-10043 demo video. I have also tried the completion-handler initialiser:

let request = VNDetectHumanBodyPoseRequest(completionHandler: { request, error in ... })

however the results were the same. I also tried using revision 1 of the algorithm:

let request = VNDetectHumanBodyPoseRequest()
request.revision = VNDetectHumanBodyPoseRequestRevision1

and this didn't help either. Here's my current code:

import AVFoundation
import UIKit
import Vision

/// Extracts poses from a frame.
class Predictor {

    /// Runs the Vision body-pose request on a frame and returns the observations.
    func processFrame(_ sampleBuffer: CMSampleBuffer) throws -> [VNRecognizedPointsObservation] {
        // Perform the Vision body pose request.
        let framePoses = extractPoses(from: sampleBuffer)

        // Make sure at least one person was detected.
        guard !framePoses.isEmpty else {
            return []
        }

        return framePoses
    }

    func extractPoses(from sampleBuffer: CMSampleBuffer) -> [VNRecognizedPointsObservation] {
        let requestHandler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer, orientation: .down)
        let request = VNDetectHumanBodyPoseRequest()

        do {
            // Perform the body pose-detection request.
            try requestHandler.perform([request])
        } catch {
            print("Unable to perform the request: \(error).")
        }

        return bodyPoseHandler(request: request, error: nil)
    }

    func bodyPoseHandler(request: VNRequest, error: Error?) -> [VNRecognizedPointsObservation] {
        guard let observations = request.results as? [VNRecognizedPointsObservation] else {
            print("Empty observations.")
            return []
        }
        return observations
    }
}

class CameraViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    // `predictor` and `drawingView` (the overlay that renders the skeleton) are defined elsewhere in this class.

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let observations = try? predictor.processFrame(sampleBuffer)
        observations?.forEach { processObservation($0) }
    }

    func processObservation(_ observation: VNRecognizedPointsObservation) {
        // Retrieve all recognized points.
        guard let recognizedPoints = try? observation.recognizedPoints(forGroupKey: .all) else {
            return
        }

        DispatchQueue.main.sync {
            // Convert normalized Vision points into view coordinates, dropping low-confidence points.
            let mappedPoints = Dictionary(uniqueKeysWithValues: recognizedPoints.compactMap { (key, point) -> (String, CGPoint)? in
                guard point.confidence > 0.1 else { return nil }
                let norm = VNImagePointForNormalizedPoint(point.location,
                                                          Int(drawingView.bounds.width),
                                                          Int(drawingView.bounds.height))
                return (key.rawValue, norm)
            })

            // Draw the points onscreen.
            self.drawingView.draw(points: mappedPoints)
        }
    }
}

Thanks in advance, I hope you can help me out! :)
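One thing I've been experimenting with is how the video output that feeds this delegate is configured - a minimal sketch of what I mean, where the queue label and the discard-late-frames setting are placeholders rather than my exact setup:

import AVFoundation

func configureVideoOutput(for session: AVCaptureSession,
                          delegate: AVCaptureVideoDataOutputSampleBufferDelegate) {
    let videoOutput = AVCaptureVideoDataOutput()

    // Deliver frames on a dedicated serial queue so the Vision work stays off the main thread.
    let videoQueue = DispatchQueue(label: "camera.video.output")  // placeholder label
    videoOutput.setSampleBufferDelegate(delegate, queue: videoQueue)

    // Drop frames that arrive while the previous frame is still being processed,
    // so the pose request never queues up behind the camera.
    videoOutput.alwaysDiscardsLateVideoFrames = true

    if session.canAddOutput(videoOutput) {
        session.addOutput(videoOutput)
    }
}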
Post marked as solved
3 Replies
1.4k Views
I am using the new Vision VNDetectHumanBodyPoseRequest from Apple - https://developer.apple.com/documentation/vision/detecting_human_body_poses_in_images, but I am not sure why my mapped coordinates are incorrect with the front-facing camera. My drawing code is correct, as I can see the nodes, but they are in completely the wrong position. I tried setting different orientations on the VNImageRequestHandler:

VNImageRequestHandler(cmSampleBuffer: sampleBuffer, orientation: .left)

The file is attached. CameraViewController.swift - https://developer.apple.com/forums/content/attachment/44a9bd58-1244-4745-b72b-c3c047485023 Thanks for the help!
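One thing I've been double-checking is the conversion from Vision's normalized coordinates (origin in the lower-left corner) to UIKit view coordinates (origin in the upper-left), plus the mirroring of the front camera. Here is a minimal sketch of the conversion I mean - the helper name and the mirrored flag are placeholders, not code from the attached file:

import UIKit

/// Converts a normalized Vision point into UIKit view coordinates.
/// Vision's origin is the lower-left corner, so the y axis is flipped;
/// the front-camera preview is usually mirrored, so x may need flipping too.
func viewPoint(for visionPoint: CGPoint, in viewSize: CGSize, mirrored: Bool) -> CGPoint {
    let x = mirrored ? 1 - visionPoint.x : visionPoint.x  // undo front-camera mirroring if needed
    let y = 1 - visionPoint.y                             // flip from lower-left to upper-left origin
    return CGPoint(x: x * viewSize.width, y: y * viewSize.height)
}

The exact mapping also depends on the preview layer's video gravity and orientation, so this is only the basic idea.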
Post not yet marked as solved
1 Reply
3.4k Views
Is it possible to improve the security of the apple-app-site-association file? I don't want testers to be able to access the file and use it to discover hidden routes or build a word list for further testing. It would also expose my app identifier, which I am not fully comfortable with.
Post marked as solved
1 Reply
623 Views
I am keen to develop some App Clips for my latest apps. The API seems ready, but the QR code isn't going to be released for a few months. My question is: when will App Clips be publishable on the App Store? Do we have to wait until QR codes become public too, or can we launch them using NFC or Maps only? Thanks.
Post not yet marked as solved
0 Replies
772 Views
I saw these two new architectures that work great with SwiftUI - Elm and the Composable Architecture (see PointFree for more info). I know that UIKit apps in general use MVC, MVVM, MVVM-C, Elm, VIPER, and probably many more. SwiftUI seems better suited to an MVVM-style model, Elm, or the Composable Architecture. I am building an AR app that has a menu home screen, a calendar screen, and a tracking-metrics screen, as well as a camera capture scene. I know that ideally a capture session shouldn't run when it isn't in use; it should be started only when needed, to avoid expensive computation. But I don't know how best to architect my app, i.e. which architecture to use for a mixed ARKit and SwiftUI app with many screens, not just the camera capture. What are the best practices for building mixed AR apps?
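For what it's worth, here is a minimal sketch of how I imagine scoping the AR session to just the camera screen, so the other screens never touch the capture pipeline - the type names below are placeholders I made up, not code from an existing project:

import SwiftUI
import RealityKit
import ARKit

// Wraps RealityKit's ARView so it can be shown from SwiftUI.
struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        // The session starts only when this view is created, i.e. when the camera screen appears.
        arView.session.run(ARBodyTrackingConfiguration())
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    // Pause the session when SwiftUI tears the view down (the user leaves the camera screen).
    static func dismantleUIView(_ uiView: ARView, coordinator: ()) {
        uiView.session.pause()
    }
}

// The camera screen; the menu, calendar, and metrics screens stay plain SwiftUI
// and never create a session.
struct ARCaptureScreen: View {
    var body: some View {
        ARViewContainer()
            .edgesIgnoringSafeArea(.all)
    }
}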