With the code below, I added color and depth images from a RealityKit ARView and ran photogrammetry on an iOS device. The mesh looks fine, but its scale is quite different from the real-world scale.
let color = arView.session.currentFrame!.capturedImage
let depth = arView.session.currentFrame!.sceneDepth!.depthMap

// Color
let colorCIImage = CIImage(cvPixelBuffer: color)
let colorUIImage = UIImage(ciImage: colorCIImage)
let depthCIImage = CIImage(cvPixelBuffer: depth)
let heicData = colorUIImage.heicData()!
let fileURL = imageDirectory!.appendingPathComponent("\(scanCount).heic")
do {
    try heicData.write(to: fileURL)
    print("Successfully wrote image to \(fileURL)")
} catch {
    print("Failed to write image to \(fileURL): \(error)")
}

// Depth
let context = CIContext()
let colorSpace = CGColorSpace(name: CGColorSpace.linearGray)!
let depthData = context.tiffRepresentation(of: depthCIImage,
                                           format: .Lf,
                                           colorSpace: colorSpace,
                                           options: [.disparityImage: depthCIImage])
let depth_dir = imageDirectory!.appendingPathComponent("IMG_\(scanCount)_depth.TIF")
try! depthData!.write(to: depth_dir, options: [.atomic])
print("depth saved")
I also tried this:
let colorSpace = CGColorSpace(name: CGColorSpace.linearGray)
let depthCIImage = CIImage(cvImageBuffer: depth,
                           options: [.auxiliaryDepth: true])
let context = CIContext()
let linearColorSpace = CGColorSpace(name: CGColorSpace.linearSRGB)
guard let heicData = context.heifRepresentation(of: colorCIImage,
                                                format: .RGBA16,
                                                colorSpace: linearColorSpace!,
                                                options: [.depthImage: depthCIImage]) else {
    print("Failed to convert combined image into HEIC format")
    return
}
Does anyone know why this happens and how to fix it?
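As a sanity check, here is a small sketch (assuming the depth map is the usual kCVPixelFormatType_DepthFloat32 buffer from sceneDepth) of how the raw depth values, which should be in meters, can be read back before they go into the TIFF:

import CoreVideo

// Read the depth value (in meters) at a given pixel from a
// kCVPixelFormatType_DepthFloat32 buffer such as sceneDepth.depthMap.
func depthInMeters(atX x: Int, y: Int, in depthMap: CVPixelBuffer) -> Float? {
    guard CVPixelBufferGetPixelFormatType(depthMap) == kCVPixelFormatType_DepthFloat32 else {
        return nil
    }
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(depthMap),
          x >= 0, y >= 0,
          x < CVPixelBufferGetWidth(depthMap),
          y < CVPixelBufferGetHeight(depthMap) else { return nil }

    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMap)
    let row = base.advanced(by: y * bytesPerRow)
    return row.assumingMemoryBound(to: Float32.self)[x]   // distance from the camera in meters
}

For an object 30–100 cm away, values around 0.3–1.0 would confirm the buffer holds metric depth rather than disparity.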
With AVFoundation's builtInLiDARDepthCamera, if I save photo.fileDataRepresentation() as HEIC, it only has Exif and TIFF metadata.
But the HEIC images produced by RealityKit's Object Capture have not only Exif and TIFF metadata but also HEIC metadata, including camera calibration data.
What should I do so that the image exported from AVFoundation has the same metadata?
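For context, a minimal sketch of the kind of setup I mean (error handling, canAdd checks, and the photo delegate omitted; the depth-embedding flags are my attempt, not necessarily what Object Capture uses):

import AVFoundation

let session = AVCaptureSession()
session.sessionPreset = .photo
let photoOutput = AVCapturePhotoOutput()

guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera,
                                           for: .video,
                                           position: .back),
      let input = try? AVCaptureDeviceInput(device: device) else {
    fatalError("LiDAR depth camera not available")
}

session.beginConfiguration()
session.addInput(input)
session.addOutput(photoOutput)
// Ask for depth alongside the photo so AVDepthData (and its cameraCalibrationData)
// is delivered in the capture delegate.
photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
session.commitConfiguration()

// Per-photo settings: HEVC/HEIC plus embedded depth.
let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
settings.embedsDepthDataInPhoto = true
// photo.depthData?.cameraCalibrationData arrives in the delegate callback,
// but the calibration doesn't seem to end up in the saved HEIC's metadata.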
In ARKit, I captured a few color CVPixelBuffers and depth CVPixelBuffers and ran a PhotogrammetrySession with PhotogrammetrySamples.
In my service, a precise real-world scale is important, so I tried to figure out what determines whether the created model comes out at real scale.
I ran some experiments with the same number of images (10), the same object, the same shot angles, and the same distances to the object (30 cm, 50 cm, 100 cm).
But even with these controlled variables, the session sometimes generates a real-scale model and sometimes does not.
Since I can't see the photogrammetry source code or how it works internally, I wonder what I'm missing and how I can get real scale every time, if that's possible.
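For reference, this is roughly how each sample gets built (a simplified sketch; colorPixelBuffer, depthPixelBuffer, and gravityVector stand in for the per-frame data from capturedImage, sceneDepth.depthMap, and device motion, and my assumption is that the metric depth map is what should drive real scale):

import CoreVideo
import RealityKit
import simd

// Build one PhotogrammetrySample from the buffers captured for a single shot.
func makeSample(id: Int,
                colorPixelBuffer: CVPixelBuffer,
                depthPixelBuffer: CVPixelBuffer?,
                gravityVector: simd_float3?) -> PhotogrammetrySample {
    var sample = PhotogrammetrySample(id: id, image: colorPixelBuffer)
    // kCVPixelFormatType_DepthFloat32, values in meters.
    sample.depthDataMap = depthPixelBuffer
    // Gravity direction at capture time, for orienting the reconstruction.
    sample.gravity = gravityVector
    return sample
}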
Is it possible to capture only manually (with automatic capture off) using the Object Capture API?
And can I proceed to the capturing stage right away?
Only the Object Capture API captures objects at real scale.
Using AVFoundation or ARKit, I've tried capturing HEVC with LiDAR or creating PhotogrammetrySamples, but neither produces a real-scale object.
I think that during Object Capture the API gathers a point cloud and intrinsic parameters, which help the mesh come out at real scale.
Does anyone know about 'Object Capture with only manual capturing' or 'capturing with AVFoundation for a real-scale mesh'?
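What I have in mind is something like the following (a sketch against the iOS 17 ObjectCaptureSession API as I understand it; it only skips the manual taps by driving the state changes in code, and an actual 'automatic off' switch is exactly what I'm looking for):

import Foundation
import RealityKit

// Move to the capturing stage as soon as the session allows it,
// instead of waiting for the user to tap through detection.
@MainActor
func startCapturingImmediately(session: ObjectCaptureSession, imagesDirectory: URL) async {
    session.start(imagesDirectory: imagesDirectory)

    for await state in session.stateUpdates {
        switch state {
        case .ready:
            _ = session.startDetecting()   // begins bounding-box detection
        case .detecting:
            session.startCapturing()       // skips the manual "Start Capture" step
        case .capturing:
            return
        default:
            continue
        }
    }
}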
In WWDC 2021 it says, 'we also offer an interface for advanced workflows to provide a sequence of custom samples.
A PhotogrammetrySample includes the image plus other optional data such as a depth map, gravity vector, or custom segmentation mask.'
But in the code, PhotogrammetrySession is initialized with a directory of saved data.
How can I give PhotogrammetrySamples as input to a PhotogrammetrySession?
If I make a custom point cloud, how can I send it to the photogrammetry session? Is it saved separately to the directory, or is it saved into the HEIC image?
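What I'm hoping for is something like this (a sketch built on the PhotogrammetrySession initializer that takes a Sequence of PhotogrammetrySample; samples and the output URL are placeholders):

import Foundation
import RealityKit

// Feed PhotogrammetrySamples directly instead of a folder of images.
func reconstruct(samples: [PhotogrammetrySample], outputURL: URL) async throws {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .normal

    // The sequence-based initializer, as opposed to init(input: URL, ...).
    let session = try PhotogrammetrySession(input: samples, configuration: configuration)
    try session.process(requests: [.modelFile(url: outputURL, detail: .reduced)])

    for try await output in session.outputs {
        if case .processingComplete = output {
            print("Reconstruction finished: \(outputURL)")
        }
    }
}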
I found out that this works well with the new API in the beta.
But I want more customizable settings, so I want to build my own with AVFoundation.
I'd like to know which AVCaptureSession and AVCapturePhotoSettings this new API applies.
I also tried to add the com.apple.developer.kernel.increased-memory-limit entitlement, but I don't know which capability to add.
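For context, this is the kind of device and format selection I'm doing on my own at the moment (a sketch; whether the new API chooses its formats this way is exactly what I'd like to know):

import AVFoundation

// Pick the LiDAR camera and a video format that offers 32-bit float depth.
func configureLiDARDevice() throws -> AVCaptureDevice {
    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera,
                                               for: .video,
                                               position: .back) else {
        throw NSError(domain: "Capture", code: -1)
    }

    let format = device.formats.first { format in
        format.supportedDepthDataFormats.contains {
            CMFormatDescriptionGetMediaSubType($0.formatDescription) == kCVPixelFormatType_DepthFloat32
        }
    }

    try device.lockForConfiguration()
    if let format = format {
        device.activeFormat = format
        device.activeDepthDataFormat = format.supportedDepthDataFormats.first {
            CMFormatDescriptionGetMediaSubType($0.formatDescription) == kCVPixelFormatType_DepthFloat32
        }
    }
    device.unlockForConfiguration()
    return device
}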
In the WWDC21 session video, it's said that only iPhones with a Neural Engine support the captureTextFromCamera function.
As I tried it, it does work on iPhone, but not on the 5th-generation iPad Pro.
During the beta, is iPadOS support planned?
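For reference, this is the check I'm using to decide whether to show the scan-text control (a small sketch; textField is just a placeholder UITextField):

import UIKit

// Live Text camera input is only offered where the responder can perform
// the captureTextFromCamera(_:) action.
func supportsCaptureTextFromCamera(_ textField: UITextField) -> Bool {
    textField.canPerformAction(#selector(UIResponder.captureTextFromCamera(_:)),
                               withSender: nil)
}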
I've previously built an app with ARKit 3.5.
With [configuration.sceneReconstruction = .mesh],
I turned all the mesh anchors into 3D models.
Am I able to filter these mesh anchors by confidence,
and add color data from the camera feed?
Or, starting from the MetalKit demo code, how could I convert the point cloud into a 3D model?
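To be concrete, this is roughly how I read the vertices out of each mesh anchor (a sketch based on the ARMeshGeometry buffer layout; the per-vertex confidence I'd like to filter by is the part I can't find):

import ARKit

// Read world-space vertex positions out of an ARMeshAnchor.
func worldVertices(of meshAnchor: ARMeshAnchor) -> [simd_float3] {
    let vertices = meshAnchor.geometry.vertices
    var result: [simd_float3] = []
    result.reserveCapacity(vertices.count)

    for index in 0..<vertices.count {
        // Each vertex is three packed Float32 values in the geometry buffer.
        let pointer = vertices.buffer.contents()
            .advanced(by: vertices.offset + vertices.stride * index)
        let v = pointer.assumingMemoryBound(to: (Float, Float, Float).self).pointee
        // Convert from anchor-local to world coordinates.
        let world = meshAnchor.transform * SIMD4<Float>(v.0, v.1, v.2, 1)
        result.append(simd_float3(world.x, world.y, world.z))
    }
    return result
}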
I tried to use a hand pose request with ARKit.
It does return a result as a VNRecognizedPointsObservation.
But when I try to get the detailed information,
like: let thumbPoint = try! observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)
a [Segmentation fault: 11] keeps coming up.
Is this a bug, or am I making a mistake?
Please give me an answer...
In ARKit, I put in this code:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, orientation: .up, options: [:])
    do {
        try handler.perform([handPoseRequest])
        guard let observation = handPoseRequest.results?.first as? VNRecognizedPointsObservation else { return }
        let thumb = try! observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)
    } catch {}
}
At the last part, with let thumb..., segmentation fault 11 comes up.
I saw a tweet with a video of this running, but after trying every way I could think of for a week, I couldn't figure it out.
Please, please give me an answer.
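For comparison, here is the same call written against the typed VNHumanHandPoseObservation API (a sketch of what I understand from the documentation; I'm not sure whether the raw group-key path is what triggers the crash):

import ARKit
import Vision

let typedHandPoseRequest = VNDetectHumanHandPoseRequest()

func detectThumb(in frame: ARFrame) {
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .up,
                                        options: [:])
    do {
        try handler.perform([typedHandPoseRequest])
        guard let observation = typedHandPoseRequest.results?.first else { return }
        // Typed joints-group API instead of recognizedPoints(forGroupKey:).
        let thumbPoints = try observation.recognizedPoints(.thumb)
        if let tip = thumbPoints[.thumbTip], tip.confidence > 0.3 {
            print("Thumb tip (normalized): \(tip.location)")
        }
    } catch {
        print("Hand pose request failed: \(error)")
    }
}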
As answered in a previous question:
[If you want to color the mesh based on the camera feed, you could do so manually, for example by unprojecting the pixels of the camera image into 3D space and coloring the corresponding mesh face with the pixel's color.]
Could you give me a hint on how to solve this?
I'm not really familiar with this concept.
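My current understanding of the hint, as a sketch: project each mesh vertex into the captured camera image and use that pixel's color for the vertex or its face (the color lookup itself is omitted; this only produces the pixel coordinate, and orientation/viewport are chosen to match capturedImage's native landscape-right layout):

import ARKit

// Project a world-space mesh vertex into the captured camera image.
func imageCoordinate(of worldVertex: simd_float3, in frame: ARFrame) -> CGPoint? {
    let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                           height: CVPixelBufferGetHeight(frame.capturedImage))
    let point = frame.camera.projectPoint(worldVertex,
                                          orientation: .landscapeRight,
                                          viewportSize: imageSize)
    // Discard vertices that project outside the current camera frame.
    guard point.x >= 0, point.y >= 0,
          point.x < imageSize.width, point.y < imageSize.height else { return nil }
    return point   // pixel position whose color would be assigned to this vertex
}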
In your video and demo code, I understand that hand landmarks are detected with a VNImageRequestHandler in the captureOutput function.
With what method could I draw the hand skeleton as shown in the video?
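My guess at the drawing step looks like this (a sketch assuming an AVCaptureVideoPreviewLayer called previewLayer and an overlay CAShapeLayer; only the thumb chain is shown for illustration):

import AVFoundation
import UIKit
import Vision

// Convert normalized Vision hand points into preview-layer coordinates
// and connect selected joints with a path on an overlay layer.
func drawThumb(from observation: VNHumanHandPoseObservation,
               previewLayer: AVCaptureVideoPreviewLayer,
               overlay: CAShapeLayer) throws {
    let thumb = try observation.recognizedPoints(.thumb)
    let jointNames: [VNHumanHandPoseObservation.JointName] = [.thumbCMC, .thumbMP, .thumbIP, .thumbTip]

    let points: [CGPoint] = jointNames.compactMap { name in
        guard let point = thumb[name], point.confidence > 0.3 else { return nil }
        // Vision's origin is bottom-left; flip Y before converting to layer space.
        let devicePoint = CGPoint(x: point.location.x, y: 1 - point.location.y)
        return previewLayer.layerPointConverted(fromCaptureDevicePoint: devicePoint)
    }

    let path = UIBezierPath()
    for (index, point) in points.enumerated() {
        if index == 0 {
            path.move(to: point)
        } else {
            path.addLine(to: point)
        }
    }
    overlay.path = path.cgPath   // stroke this layer to show the thumb joint chain
}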
In an ARKit project, in the function
func session(_ session: ARSession, didUpdate frame: ARFrame)
I tried to get
guard let observation = handPoseRequest.results?.first as? VNRecognizedPointsObservation else { return }
and then
let thumb = try! observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)
and a segmentation fault 11 pops up.
Is this a bug, or did I make a mistake?