
Depth from ARKit is slightly off from the ARKit FaceAnchor mesh?
I wrote an app that saves out the RGB+D image from ARKit and also exports the FaceAnchor SCNView as a USD file for each image. After converting the RGB+D data to a 3D point cloud and applying the camera transform to it (so it sits at the right position/rotation/scale relative to the USD camera), I noticed that the point cloud ends up a bit closer to the camera than the FaceAnchor mesh generated by ARKit.

The example below demonstrates the problem very well: I expected the mouth in the point cloud to be aligned with the lips of the FaceAnchor mesh (and the teeth from the depth data to sit inside the FaceAnchor mesh's mouth), but as you can see, it is much closer to the camera than that.

By adding an offset of 0.03 (3 cm, since the depth is in meters) to the depth before applying the camera transform, I was able to align the point cloud much better with the FaceAnchor mesh, as shown below.

The problem is that this 0.03 offset came from trial and error until I got something resembling what I expected for this specific photo, at this specific distance between the face and the device. Nothing guarantees it will work in other situations, since it is not a proper mathematical solution to the problem.

I'm guessing ARKit generates the mesh a bit behind the actual depth for some reason. So in this case, how can I properly transform the depth so it aligns with the FaceAnchor mesh position? Any insights would be greatly appreciated, but an official answer from someone at Apple would be even better!

-H
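For reference, this is roughly how I turn the depth map into world-space points. It is a minimal sketch rather than my exact code, and it assumes the stored depth is a z-depth in meters (measured along the camera's forward axis), that the intrinsics have been rescaled from the color image resolution to the depth map resolution, and that the camera transform is ARCamera.transform for the same frame:

import ARKit
import simd

// Unproject a depth map into world-space points, assuming:
//  - depth[y][x] is the distance along the camera's forward (z) axis, in meters
//  - intrinsics is ARCamera.intrinsics rescaled to the *depth* image resolution
//  - cameraTransform is ARCamera.transform (camera -> world) for the same frame
func pointCloud(depth: [[Float]],
                intrinsics: simd_float3x3,
                cameraTransform: simd_float4x4) -> [simd_float3] {
    let fx = intrinsics[0][0], fy = intrinsics[1][1]
    let cx = intrinsics[2][0], cy = intrinsics[2][1]
    var points: [simd_float3] = []
    for (y, row) in depth.enumerated() {
        for (x, z) in row.enumerated() where z > 0 {
            // Back-project the pixel into camera space.
            // ARKit's camera space looks down -Z with +Y up, so the point sits at -z
            // in front of the camera, and the image's downward y axis is flipped.
            let xc = (Float(x) - cx) / fx * z
            let yc = (Float(y) - cy) / fy * z
            let camPoint = simd_float4(xc, -yc, -z, 1)
            let world = cameraTransform * camPoint
            points.append(simd_float3(world.x, world.y, world.z))
        }
    }
    return points
}

If any of those assumptions is off, for example if the stored depth were actually the distance along each pixel's ray rather than a z-depth, the cloud would shift toward or away from the camera by a view-dependent amount that a constant offset like 0.03 can only roughly compensate for.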
Replies: 1 · Boosts: 1 · Views: 429 · Jul ’23
PhotogrammetrySample.objectMask causing crash...
So, by adding my own mask to a PhotogrammetrySample, I'm getting a crash with this message:

libc++abi: terminating with uncaught exception of type std::__1::bad_function_call
terminating with uncaught exception of type std::__1::bad_function_call
Program ended with exit code: 9

I'm using this extension on NSImage to convert an 8-bit, alpha-only TIF into the required mask CVPixelBuffer:

extension NSImage {

    // function used by depthPixelBuffer and disparityPixelBuffer to actually create the CVPixelBuffer
    func __toPixelBuffer(PixelFormatType: OSType) -> CVPixelBuffer? {
        var bitsPerC = 8
        var colorSpace = CGColorSpaceCreateDeviceRGB()
        var bitmapInfo = CGImageAlphaInfo.noneSkipFirst.rawValue

        // if we need depth/disparity
        if PixelFormatType == kCVPixelFormatType_DepthFloat32 || PixelFormatType == kCVPixelFormatType_DisparityFloat32 {
            bitsPerC = 32
            colorSpace = CGColorSpaceCreateDeviceGray()
            bitmapInfo = CGImageAlphaInfo.none.rawValue | CGBitmapInfo.floatComponents.rawValue
        }
        // if we need mask
        else if PixelFormatType == kCVPixelFormatType_OneComponent8 {
            bitsPerC = 8
            colorSpace = CGColorSpaceCreateDeviceGray()
            bitmapInfo = CGImageAlphaInfo.none.rawValue
        }

        let width = Int(self.size.width)
        let height = Int(self.size.height)

        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, PixelFormatType, attrs, &pixelBuffer)
        guard let resultPixelBuffer = pixelBuffer, status == kCVReturnSuccess else {
            return nil
        }

        CVPixelBufferLockBaseAddress(resultPixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(resultPixelBuffer),
                                      width: width,
                                      height: height,
                                      bitsPerComponent: bitsPerC,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(resultPixelBuffer),
                                      space: colorSpace,
                                      bitmapInfo: bitmapInfo) else {
            return nil
        }

        // context.translateBy(x: 0, y: height)
        // context.scaleBy(x: 1.0, y: -1.0)

        let graphicsContext = NSGraphicsContext(cgContext: context, flipped: false)
        NSGraphicsContext.saveGraphicsState()
        NSGraphicsContext.current = graphicsContext
        draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        NSGraphicsContext.restoreGraphicsState()

        CVPixelBufferUnlockBaseAddress(resultPixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

        return resultPixelBuffer
    }

    // return the NSImage as a 32bit color CVPixelBuffer
    func colorPixelBuffer() -> CVPixelBuffer? {
        return __toPixelBuffer(PixelFormatType: kCVPixelFormatType_32ARGB)
    }

    // return the NSImage as an 8bit one-component mask CVPixelBuffer
    func maskPixelBuffer() -> CVPixelBuffer? {
        return __toPixelBuffer(PixelFormatType: kCVPixelFormatType_OneComponent8)
    }

    // return the NSImage as a 32bit depthData CVPixelBuffer
    func depthPixelBuffer() -> CVPixelBuffer? {
        return __toPixelBuffer(PixelFormatType: kCVPixelFormatType_DepthFloat32)
    }

    // return the NSImage as a 32bit disparityData CVPixelBuffer
    func disparityPixelBuffer() -> CVPixelBuffer? {
        return __toPixelBuffer(PixelFormatType: kCVPixelFormatType_DisparityFloat32)
    }
}

So basically, I call:

var sample = PhotogrammetrySample(id: count, image: heic_image!)
...
guard let sampleBuffer = NSImage(contentsOfFile: file) else {
    fatalError("readCVPixelBuffer: Failed to read \(file)")
}
let imageBuffer = sampleBuffer.maskPixelBuffer()
sample.objectMask = imageBuffer!

When the PhotogrammetrySession runs, it reads all the images and masks first, and after

Data ingestion is complete. Beginning processing...
it crashes, and I get the error message:

libc++abi: terminating with uncaught exception of type std::__1::bad_function_call
terminating with uncaught exception of type std::__1::bad_function_call
Program ended with exit code: 9

Is anyone else experiencing this?
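One thing I'm still going to try (a sketch only, not something I've confirmed fixes the crash): building the mask buffer from the bitmap's pixel dimensions instead of NSImage.size, which is in points and may not match the HEIC photo's resolution, and letting Core Image do the conversion to the single-channel format instead of drawing through a CGContext. Something along these lines, assuming the mask TIF loads as a single CGImage:

import AppKit
import CoreImage

// Sketch: convert a grayscale mask NSImage into a kCVPixelFormatType_OneComponent8
// CVPixelBuffer whose size matches the underlying bitmap's pixel dimensions.
func makeMaskBuffer(from image: NSImage) -> CVPixelBuffer? {
    guard let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) else {
        return nil
    }
    let width = cgImage.width          // pixel dimensions, not points
    let height = cgImage.height

    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var buffer: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_OneComponent8, attrs, &buffer) == kCVReturnSuccess,
          let maskBuffer = buffer else {
        return nil
    }

    // Core Image handles the grayscale conversion into the one-component buffer.
    let ciImage = CIImage(cgImage: cgImage)
    let context = CIContext()
    context.render(ciImage, to: maskBuffer)
    return maskBuffer
}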
Replies: 2 · Boosts: 0 · Views: 1.8k · Jan ’22
Recovering the real size of a mesh created with photogrammetry using depth map and gravity vector
So, I've modified the CaptureSample iOS app to take photos using the TrueDepth front camera. It worked perfectly, and I have TIF depth maps together with the gravity vector and the photos I took. Using the HelloPhotogrammetry command line tool, I created the meshes without any problems.

I noticed the meshes have a consistent size between them. For example, after creating a mesh of my face and a mesh of my nose, the nose mesh fits perfectly on top of the nose of the face mesh. Great!

BUT, when I open the meshes in Maya, for example, they are really, really tiny. I was expecting to see the objects at the proper scale, and hopefully even be able to take measurements in Maya to see whether they match the real measurements of the scanned object, but they don't come in at the right size at all. I tried setting Maya to meters, centimetres and millimetres, but it always imports the meshes really tiny: I have to apply a scale of 100 just to be able to see them, and even then they don't measure correctly. By trial and error, I found that scaling the meshes by 86 makes them match the real-world scale in centimetres.

Is there a proper space conversion that needs to be applied to the mesh to bring it to real-world scale? Could the problem be that I'm using the TrueDepth camera instead of the back camera, so the depth map values come in a different scale than what HelloPhotogrammetry expects?
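To take Maya's unit handling out of the equation, this is the kind of quick check I'm planning to run (a sketch only; it assumes the HelloPhotogrammetry output is a USDZ file that ModelIO can read, and the file path is just a placeholder):

import Foundation
import ModelIO

// Sketch: load the reconstructed model and print its bounding-box extent in the
// asset's native units. If the output really is in meters, a face scan should be
// roughly 0.15-0.25 on its longest axis; anything wildly different suggests a unit issue.
let url = URL(fileURLWithPath: "/path/to/baked_mesh.usdz")   // hypothetical path
let asset = MDLAsset(url: url)
let bounds = asset.boundingBox
let extent = bounds.maxBounds - bounds.minBounds
print("Bounding box extent (asset units): \(extent)")

If that prints something close to the real size in meters, then the ×100 I need in Maya would just be its centimetre default, and the remaining difference between 100 and 86 would point to a depth-scale question rather than a units one.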
Replies: 3 · Boosts: 0 · Views: 2.2k · Oct ’21