Posts

Post not yet marked as solved
1 Reply
731 Views
With the code below, I added color and depth images from a RealityKit ARView and ran photogrammetry on an iOS device. The mesh looks fine, but its scale is quite different from real-world scale.

```swift
let color = arView.session.currentFrame!.capturedImage
let depth = arView.session.currentFrame!.sceneDepth!.depthMap

// 😀 Color
let colorCIImage = CIImage(cvPixelBuffer: color)
let colorUIImage = UIImage(ciImage: colorCIImage)
let depthCIImage = CIImage(cvPixelBuffer: depth)
let heicData = colorUIImage.heicData()!
let fileURL = imageDirectory!.appendingPathComponent("\(scanCount).heic")
do {
    try heicData.write(to: fileURL)
    print("Successfully wrote image to \(fileURL)")
} catch {
    print("Failed to write image to \(fileURL): \(error)")
}

// 😀 Depth
let context = CIContext()
let colorSpace = CGColorSpace(name: CGColorSpace.linearGray)!
let depthData = context.tiffRepresentation(of: depthCIImage,
                                           format: .Lf,
                                           colorSpace: colorSpace,
                                           options: [.disparityImage: depthCIImage])
let depthURL = imageDirectory!.appendingPathComponent("IMG_\(scanCount)_depth.TIF")
try! depthData!.write(to: depthURL, options: [.atomic])
print("depth saved")
```

I also tried this:

```swift
let depthCIImage = CIImage(cvImageBuffer: depth, options: [.auxiliaryDepth: true])
let context = CIContext()
let linearColorSpace = CGColorSpace(name: CGColorSpace.linearSRGB)
guard let heicData = context.heifRepresentation(of: colorCIImage,
                                                format: .RGBA16,
                                                colorSpace: linearColorSpace!,
                                                options: [.depthImage: depthCIImage]) else {
    print("Failed to convert combined image into HEIC format")
    return
}
```

Does anyone know why, and how to fix this?
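One possibility worth trying (a sketch under assumptions, not a confirmed fix): skip the HEIC/TIFF round-trip and hand the buffers to the session as PhotogrammetrySamples, keeping the depth map in its native metric Float32 format. Converting depth through CIImage and TIFF, or tagging it with the `.disparityImage` option, can change its meaning away from metric depth. `makeSamples` and the pairing of buffers are assumptions; PhotogrammetrySample and depthDataMap are the real RealityKit API.

```swift
import RealityKit
import CoreVideo

// Sketch (untested): feed color + depth buffers straight to photogrammetry,
// avoiding lossy file round-trips. `pairs` is assumed to hold the
// (capturedImage, sceneDepth.depthMap) buffers collected while scanning.
func makeSamples(from pairs: [(color: CVPixelBuffer, depth: CVPixelBuffer)]) -> [PhotogrammetrySample] {
    pairs.enumerated().map { index, pair in
        var sample = PhotogrammetrySample(id: index, image: pair.color)
        // Keep the LiDAR depth in its native kCVPixelFormatType_DepthFloat32
        // form; that is the metric signal the reconstruction can use for scale.
        sample.depthDataMap = pair.depth
        return sample
    }
}
```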
Posted by HeoJin.
Post not yet marked as solved
2 Replies
888 Views
With AVFoundation's builtInLiDARDepthCamera, if I save photo.fileDataRepresentation() as HEIC, it only has EXIF and TIFF metadata. But the HEIC images from RealityKit's Object Capture have not only EXIF and TIFF but also HEIC metadata, including camera calibration data. What should I do so that the image exported from AVFoundation has the same metadata?
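For what it's worth, calibration data travels with depth delivery in AVFoundation. A minimal sketch, assuming a session already configured with the LiDAR device and `photoOutput` attached; note that having `photo.depthData?.cameraCalibrationData` available in the delegate doesn't by itself guarantee the writer embeds it into the HEIC container the way Object Capture's files do.

```swift
import AVFoundation

// Sketch: enable depth (and, where supported, calibration) delivery so the
// captured photo carries AVCameraCalibrationData alongside the depth map.
func makePhotoSettings(for photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    // Output-level switches must be on before per-photo settings ask for them.
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported

    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
    settings.embedsDepthDataInPhoto = true
    if photoOutput.isCameraCalibrationDataDeliverySupported {
        settings.isCameraCalibrationDataDeliveryEnabled = true
    }
    return settings
}

// In the capture delegate, the calibration arrives with the depth data:
// photo.depthData?.cameraCalibrationData (intrinsics, extrinsics, distortion).
```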
Posted by HeoJin.
Post not yet marked as solved
1 Reply
394 Views
In ARKit, I took a few color CVPixelBuffers and depth CVPixelBuffers, and ran a PhotogrammetrySession with PhotogrammetrySamples. In my service, a precise real-world scale is important, so I tried to figure out what determines whether the created model comes out at real scale. I ran some experiments with the same number of images (10), the same object, the same shot angles, and the same distances to the object (30 cm, 50 cm, 100 cm). Even with those variables controlled, it sometimes generates a real-scale model and sometimes doesn't. Since I can't see the photogrammetry source code or how it works internally, I wonder what I'm missing, and how I can get real scale every time, if that's possible.
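One hedged suggestion: before each run, verify that every sample actually carries a usable metric depth map. The depthDataMap documentation calls for kCVPixelFormatType_DepthFloat32, and a plausible (unconfirmed) explanation for intermittent scale is runs where depth was missing or in another format.

```swift
import CoreVideo

// Sketch: a pre-flight check that each depth buffer is the 32-bit float
// metric format PhotogrammetrySample expects. Samples whose depth is
// missing or differently formatted are an assumption here, not a
// documented cause, for a run falling back to arbitrary scale.
func hasMetricDepth(_ depth: CVPixelBuffer?) -> Bool {
    guard let depth else { return false }
    return CVPixelBufferGetPixelFormatType(depth) == kCVPixelFormatType_DepthFloat32
}
```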
Posted by HeoJin.
Post not yet marked as solved
2 Replies
852 Views
Is it possible to capture only manually (automatic capture off) with the Object Capture API? And can I proceed to the capturing stage right away? Only the Object Capture API gives me real-scale objects. Using AVFoundation or ARKit, I've tried capturing HEVC with LiDAR or creating PhotogrammetrySamples, but neither produces a real-scale object. I think the Object Capture API gathers a point cloud and intrinsic parameters during capture, and that helps the mesh come out at real scale. Does anyone know about 'Object Capture with only manual capturing' or 'capturing with AVFoundation for a real-scale mesh'?
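On the "proceed to capturing right away" part, here is a sketch with iOS 17's ObjectCaptureSession (the API names are real, but the flow is an untested outline): the session is state-driven, so it can be advanced programmatically as soon as detection succeeds instead of following the guided automatic flow. Whether a fully manual shutter exists isn't something the documentation settles.

```swift
import RealityKit

// Sketch (untested; `imagesURL` is a placeholder for your output folder).
let session = ObjectCaptureSession()
let configuration = ObjectCaptureSession.Configuration()
session.start(imagesDirectory: imagesURL, configuration: configuration)

// Once session.state == .ready:
if session.startDetecting() {   // begin object/bounding-box detection
    session.startCapturing()    // jump straight to the capturing stage
}
```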
Posted by HeoJin.
Post not yet marked as solved
0 Replies
433 Views
In WWDC 2021, it says: 'we also offer an interface for advanced workflows to provide a sequence of custom samples. A PhotogrammetrySample includes the image plus other optional data such as a depth map, gravity vector, or custom segmentation mask.' But in the sample code, PhotogrammetrySession is initialized with a directory of saved images. How can I pass PhotogrammetrySamples as input to a PhotogrammetrySession?
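There is a second initializer for exactly this: PhotogrammetrySession accepts any Sequence of PhotogrammetrySample in place of a folder URL. A minimal sketch, where `buildSamples()` and `outputURL` are placeholders for your own data:

```swift
import RealityKit

// Minimal sketch of the sample-sequence initializer.
let samples: [PhotogrammetrySample] = buildSamples() // hypothetical helper
let session = try PhotogrammetrySession(
    input: samples,
    configuration: PhotogrammetrySession.Configuration()
)
try session.process(requests: [
    .modelFile(url: outputURL, detail: .medium)
])
```

You would then iterate the session's `outputs` async sequence to wait for `.processingComplete`.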
Posted by HeoJin.
Post not yet marked as solved
1 Reply
484 Views
I found that this new API works well in the beta. But I want more customizable settings, so I'd like to build the capture myself with AVFoundation. I want to know the AVCaptureSession and AVCapturePhotoSettings configuration that this new API uses.
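Apple doesn't document the exact capture configuration Object Capture uses internally, so the following is an approximation only: it wires up the pieces the API visibly relies on (the LiDAR camera, the photo preset, and a depth-capable photo output).

```swift
import AVFoundation

// An approximation, not Object Capture's actual internal configuration.
func makeLiDARCaptureSession() throws -> (AVCaptureSession, AVCapturePhotoOutput) {
    let session = AVCaptureSession()
    session.beginConfiguration()
    session.sessionPreset = .photo

    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera,
                                               for: .video, position: .back) else {
        fatalError("This device has no LiDAR camera")
    }
    let input = try AVCaptureDeviceInput(device: device)
    if session.canAddInput(input) { session.addInput(input) }

    let photoOutput = AVCapturePhotoOutput()
    if session.canAddOutput(photoOutput) { session.addOutput(photoOutput) }
    // Enable depth only after the output joins a depth-capable session.
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported

    session.commitConfiguration()
    return (session, photoOutput)
}
```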
Posted by HeoJin.
Post marked as solved
4 Replies
1.8k Views
I've previously built an app with ARKit 3.5. With `configuration.sceneReconstruction = .mesh`, I turn all the mesh anchors into 3D models. Am I able to filter these mesh anchors by confidence and add color data from the camera feed? Or, starting from the demo code with MetalKit, how could I convert the point cloud into a 3D model?
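On the confidence part: mesh anchors themselves don't expose per-face confidence, but the scene depth that drives reconstruction does. Below is a sketch (assumptions: a LiDAR device with sceneDepth enabled) that filters depth pixels by ARKit's per-pixel confidence map, the same idea the MetalKit point-cloud demo uses before unprojecting points.

```swift
import ARKit

// Sketch: keep only depth values whose confidence meets a threshold.
func reliableDepthValues(in frame: ARFrame,
                         minimum: ARConfidenceLevel = .medium) -> [Float] {
    guard let sceneDepth = frame.sceneDepth,
          let confidenceMap = sceneDepth.confidenceMap else { return [] }
    let depthMap = sceneDepth.depthMap

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer {
        CVPixelBufferUnlockBaseAddress(depthMap, .readOnly)
        CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly)
    }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    // Respect row padding: step by bytes-per-row, not by width alone.
    let depthRow = CVPixelBufferGetBytesPerRow(depthMap) / MemoryLayout<Float32>.stride
    let confRow = CVPixelBufferGetBytesPerRow(confidenceMap)
    let depths = CVPixelBufferGetBaseAddress(depthMap)!
        .assumingMemoryBound(to: Float32.self)
    let confs = CVPixelBufferGetBaseAddress(confidenceMap)!
        .assumingMemoryBound(to: UInt8.self)

    var kept: [Float] = []
    for y in 0..<height {
        for x in 0..<width where confs[y * confRow + x] >= UInt8(minimum.rawValue) {
            kept.append(depths[y * depthRow + x])
        }
    }
    return kept
}
```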
Posted by HeoJin.
Post not yet marked as solved
1 Reply
885 Views
I tried to use a hand pose request with ARKit. It does get a result as a VNRecognizedPointsObservation. But when I tried to get detailed information, like:

```swift
let thumbPoint = try! observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)
```

`Segmentation fault: 11` keeps coming up. Is this a bug, or am I making some mistake?
Posted by HeoJin.
Post not yet marked as solved
2 Replies
1.1k Views
Please give me some answer... In ARKit, I put this code:

```swift
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .up, options: [:])
    do {
        try handler.perform([handPoseRequest])
        guard let observation = handPoseRequest.results?.first
                as? VNRecognizedPointsObservation else { return }
        let thumb = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)
    } catch {}
}
```

At the last part, with `let thumb`, segmentation fault 11 comes up. I saw a tweet with a video of this running, and I've tried everything I can think of for a week, but I couldn't figure it out. Please, please give me some answer 😭
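For anyone hitting this on a current SDK: the hand pose request now vends a dedicated VNHumanHandPoseObservation with typed joint-group names, which avoids the string group-key path entirely. A sketch of that modern form (it may not fix a beta-era crash, but it is the supported shape today):

```swift
import ARKit
import Vision

// Sketch: VNDetectHumanHandPoseRequest returns VNHumanHandPoseObservation,
// and joint groups are addressed with typed names like .thumb.
let handPoseRequest = VNDetectHumanHandPoseRequest()

func detectThumb(in frame: ARFrame) {
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .up, options: [:])
    do {
        try handler.perform([handPoseRequest])
        guard let observation = handPoseRequest.results?.first else { return }
        let thumbPoints = try observation.recognizedPoints(.thumb)
        if let tip = thumbPoints[.thumbTip], tip.confidence > 0.5 {
            print("Thumb tip at \(tip.location)") // normalized image coordinates
        }
    } catch {
        print("Hand pose detection failed: \(error)")
    }
}
```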
Posted by HeoJin.
Post not yet marked as solved
1 Reply
778 Views
As answered in a previous question: 'If you want to color the mesh based on the camera feed, you could do so manually, for example by unprojecting the pixels of the camera image into 3D space and coloring the corresponding mesh face with the pixel's color.' Could you give some hints on how to solve this? I'm not really familiar with this concept.
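A hint, going in the inverse (and usually simpler) direction to the quoted answer: project each mesh vertex into the camera image and read the pixel there. The sketch below stops at computing per-vertex image coordinates; sampling frame.capturedImage and averaging colors per face are left out.

```swift
import ARKit

// Sketch: map each ARMeshAnchor vertex to 2D camera-image coordinates.
func imagePoints(for meshAnchor: ARMeshAnchor, in frame: ARFrame) -> [CGPoint] {
    let camera = frame.camera
    let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                           height: CVPixelBufferGetHeight(frame.capturedImage))
    let vertices = meshAnchor.geometry.vertices
    var points: [CGPoint] = []
    for i in 0..<vertices.count {
        // Read vertex i out of the geometry buffer (anchor-local coordinates).
        let v = vertices.buffer.contents()
            .advanced(by: vertices.offset + i * vertices.stride)
            .assumingMemoryBound(to: (Float, Float, Float).self).pointee
        let local = SIMD3<Float>(v.0, v.1, v.2)
        // Local -> world via the anchor transform.
        let world = meshAnchor.transform * SIMD4<Float>(local, 1)
        // World -> 2D image point via the camera projection
        // (the captured image is in landscape orientation).
        let point = camera.projectPoint(SIMD3<Float>(world.x, world.y, world.z),
                                        orientation: .landscapeRight,
                                        viewportSize: imageSize)
        points.append(point)
    }
    return points
}
```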
Posted by HeoJin.
Post not yet marked as solved
0 Replies
1.1k Views
In an ARKit project, in the function func session(_ session: ARSession, didUpdate frame: ARFrame), I tried to get:

```swift
guard let observation = handPoseRequest.results?.first as? VNRecognizedPointsObservation else { return }
let thumb = try! observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)
```

and segmentation fault 11 pops up. Is this a bug, or did I make a mistake?
Posted by HeoJin.