It's because your array of PhotogrammetrySamples is not ordered correctly - make sure they are sorted by their PhotogrammetrySample.id.
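A minimal sketch of the fix, assuming samples is the [PhotogrammetrySample] array you're feeding in:
let ordered = samples.sorted { $0.id < $1.id }
// then pass `ordered` (not the unsorted array) as the session's input sequence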
Not the easiest error to interpret...
Same problem here. I'm rebuilding an old ARKit project and get the same error: The app links to non-public libraries in Payload/XYZ.app/XYZ: /System/Library/PrivateFrameworks/AVFCapture.framework/AVFCapture, /System/Library/PrivateFrameworks/AVFCore.framework/AVFCore (ID: d6c054fd-1c55-40b0-8338-9f333e0d625b), building with the Xcode 15 beta for iOS 14. I don't know what image_picker_ios is - I'm not using any packages or pods, and even a search for "picker" turns up nothing in my source.
In iOS 17 you get great results with VNGenerateForegroundInstanceMaskRequest - convert your image to a CVPixelBuffer, then feed it into something like this:
import Vision

let maskRequest = VNGenerateForegroundInstanceMaskRequest()
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
do {
    try handler.perform([maskRequest])
    if let observation = maskRequest.results?.first {
        // Mask every detected foreground instance at once.
        let allInstances = observation.allInstances
        do {
            let maskedImage = try observation.generateMaskedImage(ofInstances: allInstances, from: handler, croppedToInstancesExtent: false)
            // imageFromCVPixelBuffer is a helper of mine - sketch below.
            let maskImage = imageFromCVPixelBuffer(maskedImage)
            sceneView.scene.background.contents = maskImage
        } catch {
            print("Error: \(error.localizedDescription)")
        }
    }
} catch {
    print("Failed to perform Vision request: \(error)")
}
The only way I can think of to get that result without leaving your app is to present your image in a WKWebView, which allows using "select subject" in the context menu. Not perfect, but a possible workaround while we wait for a more direct solution. I'm going to look into https://developer.apple.com/documentation/webkit/viewing_desktop_or_mobile_web_content_using_a_web_view to present my image. Is there a better way? Not sure, I can't find anything out there yet.
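A minimal sketch of that workaround, inside a view controller (photo.jpg is just a placeholder name for a bundled image):
import WebKit

let webView = WKWebView(frame: view.bounds)
// Long-pressing the image in the web view exposes the system context menu,
// including the "select subject" / subject-lift action.
if let url = Bundle.main.url(forResource: "photo", withExtension: "jpg") {
    webView.loadFileURL(url, allowingReadAccessTo: url.deletingLastPathComponent())
}
view.addSubview(webView)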
I'm keen to work this out too. I don't think I'd want to use the x-callback route. I've looked into VNGenerateObjectnessBasedSaliencyImageRequest, but it only returns a 64x64 mask, and even then it doesn't produce the same results.
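For reference, roughly how I tried it (a sketch, assuming pixelBuffer holds your image):
let saliencyRequest = VNGenerateObjectnessBasedSaliencyImageRequest()
let saliencyHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
do {
    try saliencyHandler.perform([saliencyRequest])
    if let observation = saliencyRequest.results?.first {
        // The returned heat map is only 64x64 - far coarser than subject lift.
        let mask = observation.pixelBuffer
        print(CVPixelBufferGetWidth(mask), CVPixelBufferGetHeight(mask))
    }
} catch {
    print("Saliency request failed: \(error)")
}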
This will turn the pale color output from an MTLTexture back to normal saturation by marking the texture's bytes as already sRGB-encoded, so they don't get gamma-encoded a second time on display. Example from the Xcode default Metal project at line 178 in Renderer.swift; I've added an SCNView to my Storyboard for testing the output in scnView.
if let texture = renderPassDescriptor.colorAttachments[0].texture {
    scnView.scene?.background.contents = texture.makeTextureView(pixelFormat: .bgra8Unorm_srgb)
}
Have a look to see if your camera supports 4K with print(ARWorldTrackingConfiguration.supportedVideoFormats)
On an iPhone 13 Pro Max, the only 4K format listed is <ARVideoFormat: 0x2834f2e90 imageResolution=(3840, 2160) pixelFormat=(420f) framesPerSecond=(30) captureDeviceType=AVCaptureDeviceTypeBuiltInWideAngleCamera captureDevicePosition=(1)>
So in your config, change to that format:
configuration.videoFormat = ARWorldTrackingConfiguration.supportedVideoFormats[12]
That's on an iPhone 13 Pro Max 128 GB, which does have the limited ProRes recording duration, so perhaps it works on the bigger storage models, or just future products.
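Since the index into supportedVideoFormats can differ between devices, a safer sketch is to look the 4K format up by resolution:
import ARKit

let configuration = ARWorldTrackingConfiguration()
// Find the first format whose resolution is 3840x2160 rather than hard-coding index 12.
if let fourKFormat = ARWorldTrackingConfiguration.supportedVideoFormats.first(where: { $0.imageResolution == CGSize(width: 3840, height: 2160) }) {
    configuration.videoFormat = fourKFormat
}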
Incidentally, I've noticed that confidence levels beyond Low are not currently working on the latest iOS 14.5 beta (18E5178a) on an iPad Pro 11"; this is affecting Apple's Point Cloud demo project too. That makes monitoring re-projection not too easy, as the point cloud is very noisy.
Thanks - I'm starting to get acceptable results with the worldPoint function. This is how I've implemented it:
// Invert the view matrix once - it doesn't change per pixel.
let viewMatInverted = (sceneView.session.currentFrame?.camera.viewMatrix(for: UIApplication.shared.statusBarOrientation))!.inverse
for vv in 0..<depthHeight {
    for uu in 0..<depthWidth {
        // Depth is metres in front of the camera; camera space looks down -Z, so negate.
        let z = -depthValues[uu + vv * depthWidth]
        let worldPoint = worldPoint(cameraPoint: SIMD2(Float(uu), Float(vv)), eyeDepth: z, cameraIntrinsicsInversed: intrinsics.inverse, viewMatrixInversed: viewMatInverted * rotateToARCamera)
        points.append(SCNVector3(worldPoint))
    }
}
Some points seem off - however I'll need to start checking each point against its corresponding confidence map entry. Thanks heaps, this has been a great start.
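For that confidence check, a sketch that could slot around the loop above (assuming frame semantics include .sceneDepth, and reusing uu/vv/depthWidth from the snippet):
// Outside the loop: lock the confidence map and grab a pointer to its bytes.
guard let confidenceMap = sceneView.session.currentFrame?.sceneDepth?.confidenceMap else { return }
CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }
let confRowBytes = CVPixelBufferGetBytesPerRow(confidenceMap)
let confidence = CVPixelBufferGetBaseAddress(confidenceMap)!.assumingMemoryBound(to: UInt8.self)

// Inside the uu/vv loop, before appending each point:
// one byte per pixel - 0 = low, 1 = medium, 2 = high (ARConfidenceLevel raw values)
if confidence[vv * confRowBytes + uu] < UInt8(ARConfidenceLevel.high.rawValue) {
    continue   // drop noisy low/medium-confidence samples
}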
Hey thanks so much for the quick reply.
At line 28, this is what I'm trying; however, the points are appearing off camera in a thin line, distant from the camera - so I guess I need to submit a TSI to get any further help?
let worldPoint = worldPoint(cameraPoint: SIMD2(x,y), eyeDepth: z, cameraIntrinsicsInversed: simd_float3x3(intrinsics.inverse), viewMatrixInversed: frame.camera.projectionMatrix.inverse)
points.append(SCNVector3(worldPoint))
Thanks heaps for this so far.
I'd say we're not the only ones... fingers crossed.
Yeah, this is really annoying - it's a vital part of the AR chain and definitely needs a fix.
Changing the order around sure doesn't work for me.