Posts

Post not yet marked as solved
2 Replies
1.5k Views
With the release of Xcode 13, a large section of my Vision framework processing code no longer compiles: the APIs it uses have all become deprecated. This is my original code:

    do {
      // Perform VNDetectHumanHandPoseRequest.
      try handler.perform([handPoseRequest])
      // Continue only when a hand was detected in the frame.
      // Since we set the maximumHandCount property of the request to 1, there will be at most one observation.
      guard let observation = handPoseRequest.results?.first else {
        self.state = "no hand"
        return
      }
      // Get points for the thumb and each finger.
      let thumbPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)
      let indexFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyIndexFinger)
      let middleFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyMiddleFinger)
      let ringFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyRingFinger)
      let littleFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyLittleFinger)
      let wristPoints = try observation.recognizedPoints(forGroupKey: .all)

      // Look for tip points.
      guard let thumbTipPoint = thumbPoints[.handLandmarkKeyThumbTIP],
            let thumbIpPoint = thumbPoints[.handLandmarkKeyThumbIP],
            let thumbMpPoint = thumbPoints[.handLandmarkKeyThumbMP],
            let thumbCMCPoint = thumbPoints[.handLandmarkKeyThumbCMC] else {
        self.state = "no tip"
        return
      }

      guard let indexTipPoint = indexFingerPoints[.handLandmarkKeyIndexTIP],
            let indexDipPoint = indexFingerPoints[.handLandmarkKeyIndexDIP],
            let indexPipPoint = indexFingerPoints[.handLandmarkKeyIndexPIP],
            let indexMcpPoint = indexFingerPoints[.handLandmarkKeyIndexMCP] else {
        self.state = "no index"
        return
      }

      guard let middleTipPoint = middleFingerPoints[.handLandmarkKeyMiddleTIP],
            let middleDipPoint = middleFingerPoints[.handLandmarkKeyMiddleDIP],
            let middlePipPoint = middleFingerPoints[.handLandmarkKeyMiddlePIP],
            let middleMcpPoint = middleFingerPoints[.handLandmarkKeyMiddleMCP] else {
        self.state = "no middle"
        return
      }

      guard let ringTipPoint = ringFingerPoints[.handLandmarkKeyRingTIP],
            let ringDipPoint = ringFingerPoints[.handLandmarkKeyRingDIP],
            let ringPipPoint = ringFingerPoints[.handLandmarkKeyRingPIP],
            let ringMcpPoint = ringFingerPoints[.handLandmarkKeyRingMCP] else {
        self.state = "no ring"
        return
      }

      guard let littleTipPoint = littleFingerPoints[.handLandmarkKeyLittleTIP],
            let littleDipPoint = littleFingerPoints[.handLandmarkKeyLittleDIP],
            let littlePipPoint = littleFingerPoints[.handLandmarkKeyLittlePIP],
            let littleMcpPoint = littleFingerPoints[.handLandmarkKeyLittleMCP] else {
        self.state = "no little"
        return
      }

      guard let wristPoint = wristPoints[.handLandmarkKeyWrist] else {
        self.state = "no wrist"
        return
      }
      ...
    }

Now every line from thumbPoints onwards produces an error. I have changed the first part (not sure whether it is correct, since it still does not compile) to:

    let thumbPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.thumb.rawValue)
    let indexFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.indexFinger.rawValue)
    let middleFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.middleFinger.rawValue)
    let ringFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.ringFinger.rawValue)
    let littleFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.littleFinger.rawValue)
    let wristPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.littleFinger.rawValue)

I tried many different things but just could not get retrieving the individual points to work. Can anyone help me fix this?
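For reference, a sketch of how the replacement API looks, to my understanding: since iOS 14 / macOS 11, `recognizedPoints(_:)` takes a `VNHumanHandPoseObservation.JointsGroupName` value directly, and the returned dictionary is keyed by `VNHumanHandPoseObservation.JointName` rather than by raw string keys, so no `.rawValue` conversion (and no `forGroupKey:` label) should be needed. The function name and `state` handling below are placeholders, not from the sample.

```swift
import Vision

// A minimal sketch of the non-deprecated hand-pose API.
func handleHandPose(_ observation: VNHumanHandPoseObservation) throws {
    // Per-finger dictionaries, keyed by JointName instead of string keys.
    let thumbPoints = try observation.recognizedPoints(.thumb)
    let indexFingerPoints = try observation.recognizedPoints(.indexFinger)
    let middleFingerPoints = try observation.recognizedPoints(.middleFinger)
    let ringFingerPoints = try observation.recognizedPoints(.ringFinger)
    let littleFingerPoints = try observation.recognizedPoints(.littleFinger)
    let allPoints = try observation.recognizedPoints(.all)

    // Individual joints are looked up with JointName values such as
    // .thumbTip, .thumbIP, .thumbMP, .thumbCMC, .indexTip, .indexDIP,
    // .middleTip, .ringTip, .littleTip, and .wrist.
    guard let thumbTipPoint = thumbPoints[.thumbTip],
          let indexTipPoint = indexFingerPoints[.indexTip],
          let middleTipPoint = middleFingerPoints[.middleTip],
          let ringTipPoint = ringFingerPoints[.ringTip],
          let littleTipPoint = littleFingerPoints[.littleTip],
          let wristPoint = allPoints[.wrist] else {
        return
    }
    _ = (thumbTipPoint, indexTipPoint, middleTipPoint,
         ringTipPoint, littleTipPoint, wristPoint)
}
```

There is also a singular `recognizedPoint(_:)` that fetches one joint at a time, which avoids building the whole dictionary when only a few joints are needed.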
Post marked as solved
3 Replies
1.3k Views
I suspect this has something to do with the new Xcode update. After updating to Xcode 13, my app now crashes with: "Fatal error: Unexpectedly found nil while unwrapping an Optional value". The error comes from the first line:

    let scene = SCNScene(named: "art.scnassets/Pokeball.scn")!
    let ship = scene.rootNode.childNode(withName: "pokeball", recursively: false)!
    ship.position = SCNVector3(0, 0, -0.3)
    sceneView.scene = scene!

I did not modify the code, so I have no idea why it no longer works. I double-checked that the file path and names are correct, and I swapped in the default ship model file, which still fails. What do I need to change to fix this?
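One way to narrow this down is to replace the force unwraps with guard-lets, so a missing file or a wrong node name produces a diagnostic instead of a crash. A sketch, assuming the catalog and node names from the post (note also that `sceneView.scene = scene!` should not even compile once `scene` was already force-unwrapped at creation, since `scene` is then non-optional):

```swift
import SceneKit

// A hedged sketch: load defensively and report which step actually fails.
func loadPokeballScene(into sceneView: SCNView) {
    guard let scene = SCNScene(named: "art.scnassets/Pokeball.scn") else {
        // If this fires, check the .scnassets folder's target membership.
        print("Scene file not found in bundle")
        return
    }
    // Node names are case-sensitive and must match the node's name in the
    // scene graph, not the file name; listing the children helps diagnose it.
    guard let ship = scene.rootNode.childNode(withName: "pokeball",
                                              recursively: true) else {
        print("Node not found; root children are:",
              scene.rootNode.childNodes.compactMap { $0.name })
        return
    }
    ship.position = SCNVector3(0, 0, -0.3)
    sceneView.scene = scene
}
```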
Post not yet marked as solved
0 Replies
632 Views
I create an SCNBox in real time based on a group of anchors whose locations I do not know in advance. I create the SCNBox using the following code:

    let nodes: [SCNNode] = getMyNodes()
    for node in nodes {
      if (sceneView.anchor(for: node)?.name != nil) && (sceneView.anchor(for: node)?.name != "dot") {
        parentNode.addChildNode(node)
      }
    }
    let width = (abs(parentNode.boundingBox.0.x) + abs(parentNode.boundingBox.1.x))
    let height = (abs(parentNode.boundingBox.0.y) + abs(parentNode.boundingBox.1.y))
    let length = (abs(parentNode.boundingBox.0.z) + abs(parentNode.boundingBox.1.z))
    let box = SCNBox(width: CGFloat(width), height: CGFloat(height), length: CGFloat(0.3), chamferRadius: 0)
    let boxNode = SCNNode(geometry: box)
    boxNode.position = parentNode.boundingSphere.center
    box.firstMaterial?.diffuse.contents = UIColor.white
    box.firstMaterial?.transparency = 0.4
    boxNode.position = parentNode.boundingSphere.center
    boxNode.name = "box"
    boxNode.addChildNode(parentNode)
    sceneView.scene.rootNode.addChildNode(boxNode)
    boundingBox = boxNode

So how do I get the positions of the box's vertices? The node's geometry contains the position, length, width, and height, but not the locations of its vertices. I also obtain the location of a finger using the Vision framework. I need to know the vertex locations so I can enlarge or shrink the box relative to the finger location. I tried to calculate a vertex from the center position plus the length and width, but the result does not match the finger location. I think this has something to do with the different coordinate systems involved.
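One approach, sketched below: the eight corners of a box node can be built from the min/max of its `boundingBox` (which is in the node's local coordinates) and then converted into world space with `convertPosition(_:to:)`, which accounts for the node's position, rotation, and scale. The function name is a placeholder.

```swift
import SceneKit

// Compute the eight world-space corner vertices of a box-shaped node.
func worldCorners(of boxNode: SCNNode) -> [SCNVector3] {
    let (minV, maxV) = boxNode.boundingBox   // local (node) coordinates
    var corners: [SCNVector3] = []
    for x in [minV.x, maxV.x] {
        for y in [minV.y, maxV.y] {
            for z in [minV.z, maxV.z] {
                // Passing nil converts from the node's local space
                // into the scene's world coordinate space.
                corners.append(boxNode.convertPosition(SCNVector3(x, y, z),
                                                       to: nil))
            }
        }
    }
    return corners
}
```

Note that a fingertip from Vision is in normalized image coordinates, not SceneKit world coordinates, so comparing the two directly will not line up; projecting each corner into screen space with the renderer's `projectPoint(_:)` first (and converting Vision's normalized point into the same view coordinates) is one way to reconcile them.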
Post marked as solved
1 Replies
726 Views
Reposting for better tags. Please see the original post for details: https://developer.apple.com/forums/thread/688268 I tried pairing CATransaction.begin() and CATransaction.commit(), but it does not work. Maybe I am doing it wrong.
Post not yet marked as solved
1 Replies
992 Views
Taken from the hand pose sample (https://developer.apple.com/documentation/vision/detecting_hand_poses_with_vision), CameraView.swift file:

    func showPoints(_ points: [CGPoint], color: UIColor) {
      pointsPath.removeAllPoints()
      for point in points {
        pointsPath.move(to: point)
        pointsPath.addArc(withCenter: point, radius: 5, startAngle: 0, endAngle: 2 * .pi, clockwise: true)
      }
      overlayLayer.fillColor = color.cgColor
      CATransaction.begin()
      CATransaction.setDisableActions(true)
      overlayLayer.path = pointsPath.cgPath
      CATransaction.commit()
    }

Instead of drawing two points, I want to modify the code so it draws four points, using a different color for each finger's points.

    func showPoints2(_ points: [CGPoint], color: [UIColor]) {
      pointsPath.removeAllPoints()
      pointsPath.move(to: points[0])
      pointsPath.addArc(withCenter: points[0], radius: 5, startAngle: 0, endAngle: 2 * .pi, clockwise: true)
      overlayLayer.fillColor = UIColor.red.cgColor
      CATransaction.begin()
      CATransaction.setDisableActions(true)
      overlayLayer.path = pointsPath.cgPath
      pointsPath.move(to: points[1])
      pointsPath.addArc(withCenter: points[1], radius: 5, startAngle: 0, endAngle: 2 * .pi, clockwise: true)
      overlayLayer.fillColor = UIColor.green.cgColor
      CATransaction.begin()
      CATransaction.setDisableActions(true)
      overlayLayer.path = pointsPath.cgPath
      pointsPath.move(to: points[2])
      pointsPath.addArc(withCenter: points[2], radius: 5, startAngle: 0, endAngle: 2 * .pi, clockwise: true)
      overlayLayer.fillColor = UIColor.blue.cgColor
      CATransaction.begin()
      CATransaction.setDisableActions(true)
      overlayLayer.path = pointsPath.cgPath
      pointsPath.move(to: points[3])
      pointsPath.addArc(withCenter: points[3], radius: 5, startAngle: 0, endAngle: 2 * .pi, clockwise: true)
      overlayLayer.fillColor = UIColor.black.cgColor
      CATransaction.begin()
      CATransaction.setDisableActions(true)
      overlayLayer.path = pointsPath.cgPath
      CATransaction.commit()
    }

My modified code does render the 4 points, but every point gets the last color set in the function, in this case black. How do I make it render 4 different colors?
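A likely explanation: a `CAShapeLayer` has a single `fillColor` applied to its entire path, so each successive assignment overwrites the previous one and the whole path ends up black. One sketch of a fix is to use one sublayer per point, each with its own path and color. `overlayLayer` is assumed to be configured as in the sample project; the class and method names here are illustrative.

```swift
import UIKit

// One CAShapeLayer per point, so each point can carry its own fillColor.
final class PointsOverlay {
    private let overlayLayer: CALayer
    private var pointLayers: [CAShapeLayer] = []

    init(overlayLayer: CALayer) {
        self.overlayLayer = overlayLayer
    }

    func showPoints(_ points: [CGPoint], colors: [UIColor]) {
        CATransaction.begin()
        CATransaction.setDisableActions(true)
        // Lazily create one shape layer per point.
        while pointLayers.count < points.count {
            let layer = CAShapeLayer()
            overlayLayer.addSublayer(layer)
            pointLayers.append(layer)
        }
        for (i, point) in points.enumerated() {
            let path = UIBezierPath(arcCenter: point, radius: 5,
                                    startAngle: 0, endAngle: 2 * .pi,
                                    clockwise: true)
            pointLayers[i].path = path.cgPath
            pointLayers[i].fillColor = colors[i % colors.count].cgColor
        }
        CATransaction.commit()
    }
}
```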
Post marked as solved
1 Replies
848 Views
I have an array of CGPoint containing various coordinates. I need to apply a filter to the x coordinates and the y coordinates separately. I am not sure how to do this the Swift way, so currently I unpack the coordinates like this:

      var xvalues: [CGFloat] = []
      var yvalues: [CGFloat] = []
      if observation1.count == 5 {
        for n in observation1 {
          xvalues.append(n.x)
          yvalues.append(n.y)
        }
        filter1 = convolve(xvalues, sgfilterwindow5_order2)
        filter2 = convolve(yvalues, sgfilterwindow5_order2)

I am sure there is a more elegant way to do this. How do I do it without unpacking the array by hand?
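The idiomatic shortcut is `map`, which replaces the manual loop in one line per axis. A sketch (`convolve` and `sgfilterwindow5_order2` are from the post and assumed to exist):

```swift
import Foundation

// Split an array of CGPoint into its x and y components with map.
let observation1: [CGPoint] = [CGPoint(x: 1, y: 2),
                               CGPoint(x: 3, y: 4),
                               CGPoint(x: 5, y: 6)]
let xvalues = observation1.map { $0.x }   // [1.0, 3.0, 5.0]
let yvalues = observation1.map { $0.y }   // [2.0, 4.0, 6.0]
// filter1 = convolve(xvalues, sgfilterwindow5_order2)
// filter2 = convolve(yvalues, sgfilterwindow5_order2)
```

The two passes over the array are cheap; if a single pass matters, `reduce(into:)` can build both arrays at once, at some cost in readability.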
Post not yet marked as solved
0 Replies
587 Views
I can run Apple's example (https://developer.apple.com/documentation/vision/vndetecthumanhandposerequest). However, I am not sure what options there are to fine-tune its behaviour. I know you can filter by confidence level; are there other ways to control the points and make the detection more consistent?
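For reference, a sketch of the knobs I am aware of: `maximumHandCount` on the request, and per-point `confidence` filtering on the results. The threshold below is illustrative, not a recommended value; any cross-frame smoothing (e.g. a moving average over recent locations) has to be done in app code, since the request itself is per-frame.

```swift
import Vision

let request = VNDetectHumanHandPoseRequest()
// Tracking fewer hands than the default can reduce spurious detections.
request.maximumHandCount = 1

// Return the index fingertip only when its confidence is high enough.
func stableIndexTip(from observation: VNHumanHandPoseObservation) -> CGPoint? {
    guard let tip = try? observation.recognizedPoint(.indexTip),
          tip.confidence > 0.7 else {   // illustrative threshold
        return nil
    }
    // Locations are normalized image coordinates with a lower-left origin.
    return tip.location
}
```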