In Apple's PhotogrammetrySession, which variables affect real-world scale?
In ARKit, I captured a few color CVPixelBuffers and depth CVPixelBuffers and ran a PhotogrammetrySession with PhotogrammetrySamples. In my service a precise real-world scale is important, so I tried to figure out what determines whether the generated model comes out at real scale. I ran experiments with the same controlled variables: the same number of images (10), the same object, the same shot angles, and the same distances to the object (30 cm, 50 cm, 100 cm). Even so, the session sometimes produces a real-scale model and sometimes does not. Since I can't see the photogrammetry source code or how it works internally, I wonder what I'm missing and how I can get real scale every time, if that's possible.
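For reference, this is roughly how I build each sample; attaching the LiDAR depth map is, as far as I understand, what carries the metric information. A simplified sketch (the helper name makeSample and the id counter are placeholders, the color buffer format may still need converting, and error handling is omitted):

import ARKit
import RealityKit

// Simplified sketch: build one PhotogrammetrySample per captured ARFrame.
// Attaching the LiDAR depth map is what should give the reconstruction
// real-world (metric) scale, as far as I can tell.
func makeSample(id sampleID: Int, frame: ARFrame) -> PhotogrammetrySample? {
    guard let sceneDepth = frame.sceneDepth else { return nil }

    // The color buffer becomes the sample's main image.
    var sample = PhotogrammetrySample(id: sampleID, image: frame.capturedImage)

    // Optional per-sample data; depthDataMap expects a DepthFloat32 buffer,
    // which is what sceneDepth.depthMap already provides.
    sample.depthDataMap = sceneDepth.depthMap
    return sample
}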
Replies: 1 · Boosts: 0 · Views: 720 · Sep ’23
Object Capture with only manual capturing
Is it possible to capture only manually (with automatic capture turned off) using the Object Capture API? And can I proceed to the capturing stage right away? Only the Object Capture API gives me a real-scale object. Using AVFoundation or ARKit, I've tried capturing HEVC with LiDAR and creating PhotogrammetrySamples, but neither produces a real-scale object. My guess is that during Object Capture the API gathers a point cloud and the camera intrinsic parameters, and that is what lets the mesh come out at real scale. Does anyone know about 'Object Capture with only manual capturing' or 'capturing with AVFoundation for a real-scale mesh'?
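For context, the flow I'm currently using looks roughly like the sketch below (the iOS 17 ObjectCaptureSession as I understand it; imagesDirectory is a placeholder and the view/model split is simplified). I haven't found a switch in it for a manual-only shutter, which is what I'm asking about:

import SwiftUI
import RealityKit

// Simplified sketch of the iOS 17 Object Capture flow as I understand it.
struct CaptureView: View {
    @State private var session = ObjectCaptureSession()
    let imagesDirectory: URL   // placeholder: folder the shots get written into

    var body: some View {
        ObjectCaptureView(session: session)
            .onAppear {
                // .initializing -> .ready: start writing images (and the
                // point-cloud/intrinsics data) into the folder.
                session.start(imagesDirectory: imagesDirectory)
            }
    }

    // Driven by UI buttons in the real app.
    func beginDetecting() {
        // .ready -> .detecting
        _ = session.startDetecting()
    }

    func beginCapturing() {
        // .detecting -> .capturing; from here shots are taken automatically.
        session.startCapturing()
    }
}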
Replies: 2 · Boosts: 0 · Views: 1.2k · Jul ’23
PhotogrammetrySession and PhotogrammetrySample
The WWDC 2021 session says, 'we also offer an interface for advanced workflows to provide a sequence of custom samples. A PhotogrammetrySample includes the image plus other optional data such as a depth map, gravity vector, or custom segmentation mask.' But in the sample code, PhotogrammetrySession is initialized with the directory where the data was saved. How can I give PhotogrammetrySamples as input to a PhotogrammetrySession?
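From the headers there appears to be an initializer that takes a sequence of PhotogrammetrySample values instead of a folder URL. A minimal sketch of what I've been trying (samples and outputURL are placeholders, and the detail level is chosen arbitrarily):

import RealityKit

// Minimal sketch: feed PhotogrammetrySample values directly to the session
// instead of pointing it at a folder of images.
func reconstruct(samples: [PhotogrammetrySample], outputURL: URL) async throws {
    let session = try PhotogrammetrySession(
        input: samples,
        configuration: PhotogrammetrySession.Configuration())

    // Listen for outputs before kicking off processing.
    let waiter = Task {
        for try await output in session.outputs {
            switch output {
            case .processingComplete:
                print("Finished: \(outputURL)")
            case .requestError(let request, let error):
                print("Request \(request) failed: \(error)")
            default:
                break
            }
        }
    }

    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])
    try await waiter.value
}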
Replies: 0 · Boosts: 0 · Views: 597 · Jul ’23
AVFoundation with LiDAR and this year's RealityKit Object Capture
With AVFoundation's builtInLiDARDepthCamera, if I save photo.fileDataRepresentation() as HEIC, it only contains Exif and TIFF metadata. But the HEIC images produced by RealityKit's Object Capture contain not only Exif and TIFF but also HEIC metadata, including camera calibration data. What should I do so that the image exported from AVFoundation carries the same metadata?
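For reference, this is roughly how I'm requesting depth on the AVFoundation side; the calibration data does show up on the AVCapturePhoto in memory, I just don't see it land in the exported HEIC the way Object Capture writes it. A simplified sketch (assumes the capture session's input is the .builtInLiDARDepthCamera device; error handling omitted):

import AVFoundation

// Simplified sketch of the delivery flags I'm experimenting with.
func makePhotoSettings(for photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])

    // Ask for the depth map and embed it in the output file.
    if photoOutput.isDepthDataDeliverySupported {
        photoOutput.isDepthDataDeliveryEnabled = true
        settings.isDepthDataDeliveryEnabled = true
        settings.embedsDepthDataInPhoto = true
    }
    return settings
}

final class PhotoCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // The intrinsics are available here in memory...
        if let calibration = photo.depthData?.cameraCalibrationData {
            print("Intrinsic matrix:", calibration.intrinsicMatrix)
        }
        // ...but whether photo.fileDataRepresentation() writes them into the
        // HEIC the same way Object Capture does is exactly what I'm unsure of.
        _ = photo.fileDataRepresentation()
    }
}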
Replies: 2 · Boosts: 0 · Views: 1.3k · Jun ’23
Not with ObjectCaptureSession; I tried with RealityKit
With the code below, I grabbed color and depth images from a RealityKit ARView and ran Photogrammetry on an iOS device. The mesh looks fine, but its scale is quite different from the real-world scale.

let color = arView.session.currentFrame!.capturedImage
let depth = arView.session.currentFrame!.sceneDepth!.depthMap

// 😀 Color
let colorCIImage = CIImage(cvPixelBuffer: color)
let colorUIImage = UIImage(ciImage: colorCIImage)
let depthCIImage = CIImage(cvPixelBuffer: depth)
let heicData = colorUIImage.heicData()!
let fileURL = imageDirectory!.appendingPathComponent("\(scanCount).heic")
do {
    try heicData.write(to: fileURL)
    print("Successfully wrote image to \(fileURL)")
} catch {
    print("Failed to write image to \(fileURL): \(error)")
}

// 😀 Depth
let context = CIContext()
let colorSpace = CGColorSpace(name: CGColorSpace.linearGray)!
let depthData = context.tiffRepresentation(of: depthCIImage,
                                           format: .Lf,
                                           colorSpace: colorSpace,
                                           options: [.disparityImage: depthCIImage])
let depth_dir = imageDirectory!.appendingPathComponent("IMG_\(scanCount)_depth.TIF")
try! depthData!.write(to: depth_dir, options: [.atomic])
print("depth saved")

And I also tried this:

let colorSpace = CGColorSpace(name: CGColorSpace.linearGray)
let depthCIImage = CIImage(cvImageBuffer: depth, options: [.auxiliaryDepth: true])
let context = CIContext()
let linearColorSpace = CGColorSpace(name: CGColorSpace.linearSRGB)
guard let heicData = context.heifRepresentation(of: colorCIImage,
                                                format: .RGBA16,
                                                colorSpace: linearColorSpace!,
                                                options: [.depthImage: depthCIImage]) else {
    print("Failed to convert combined image into HEIC format")
    return
}

Does anyone know why this happens and how to fix it?
Replies: 1 · Boosts: 0 · Views: 1k · Jun ’23
ARKit & hand pose recognition
Please give me some answer... In ARKit, I put this code:

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, orientation: .up, options: [:])
    do {
        try handler.perform([handPoseRequest])
        guard let observation = handPoseRequest.results?.first as? VNRecognizedPointsObservation else { return }
        let thumb = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)
    } catch {}
}

At the last part, with let thumb..., segmentation fault 11 comes up. I saw a video on Twitter of this running, but I've tried it every possible way for a week and couldn't figure it out. Please, please give me some answer 😭
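In case the API changed between the beta and the release, here is the variant I've also been trying, using VNHumanHandPoseObservation and its typed joint groups instead of the raw landmark group key. A minimal sketch (handPoseRequest is the same stored request; the 0.3 confidence threshold is arbitrary):

import ARKit
import Vision

let handPoseRequest = VNDetectHumanHandPoseRequest()

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .up,
                                        options: [:])
    do {
        try handler.perform([handPoseRequest])

        // The released API vends VNHumanHandPoseObservation values with
        // typed joint groups instead of the raw landmark region keys.
        guard let observation = handPoseRequest.results?.first as? VNHumanHandPoseObservation else { return }
        let thumbPoints = try observation.recognizedPoints(.thumb)

        if let thumbTip = thumbPoints[.thumbTip], thumbTip.confidence > 0.3 {
            print("Thumb tip (normalized image coords):", thumbTip.location)
        }
    } catch {
        print("Hand pose request failed: \(error)")
    }
}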
Replies: 2 · Boosts: 0 · Views: 1.4k · Jul ’20
ARKit & hand pose detection
In an ARKit project, inside func session(_ session: ARSession, didUpdate frame: ARFrame), I tried to get guard let observation = handPoseRequest.results?.first as? VNRecognizedPointsObservation else { return } and then let thumb = try! observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb), and segmentation fault 11 pops up. Is this a bug, or did I make a mistake?
Replies: 0 · Boosts: 0 · Views: 1.2k · Jun ’20
Unprojecting color image into MeshAnchors
As answered in a previous question: [If you want to color the mesh based on the camera feed, you could do so manually, for example by unprojecting the pixels of the camera image into 3D space and coloring the corresponding mesh face with the pixel's color.] Could you give a hint on how to approach this? I'm not really familiar with this concept.
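For the record, the direction I've been exploring: walk the ARMeshGeometry vertex buffer, move each vertex into world space with the anchor's transform, and project it into the captured image with ARCamera.projectPoint. A rough sketch (the .landscapeRight orientation matches the raw capturedImage buffer; sampling the actual color out of the YCbCr buffer and applying it per face is left out):

import ARKit
import UIKit
import simd

// Rough sketch: for every vertex of a mesh anchor, find the pixel it projects
// to in the current camera image.
func projectVertices(of meshAnchor: ARMeshAnchor, into frame: ARFrame) -> [CGPoint] {
    let vertices = meshAnchor.geometry.vertices
    let vertexData = vertices.buffer.contents().advanced(by: vertices.offset)
    let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                           height: CVPixelBufferGetHeight(frame.capturedImage))

    var projected: [CGPoint] = []
    for index in 0..<vertices.count {
        // Each vertex is three packed Floats in anchor-local coordinates.
        let raw = vertexData.advanced(by: index * vertices.stride)
            .assumingMemoryBound(to: (Float, Float, Float).self).pointee
        let local = SIMD4<Float>(raw.0, raw.1, raw.2, 1)

        // Anchor space -> world space, then world space -> image pixel.
        let world = meshAnchor.transform * local
        let pixel = frame.camera.projectPoint(SIMD3<Float>(world.x, world.y, world.z),
                                              orientation: .landscapeRight,
                                              viewportSize: imageSize)
        projected.append(pixel)
    }
    return projected
}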
Replies: 1 · Boosts: 0 · Views: 928 · Jun ’20