Greetings, everyone,
The documentation does not seem to provide any information on how to create a Spatial or Digital Persona. Does anyone know how to create one using the iPhone's TrueDepth camera?
Thank you in advance,
Hello everyone,
I encountered some compiler errors while following a WWDC video on converting a colorization PyTorch model to Core ML. I followed all the steps, but I'm running into issues with the following lines of code from the video:
In the colorize() method, there is a line:
let modelInput = try ColorizerInput(inputWith: lightness.cgImage!)
This line expects a cgImage as input, but the auto-generated model class only accepts an MLMultiArray or MLShapedArray, not an image. The conversion step shown in the video did not cover setting the model's input or output as an ImageType.
In the extractColorChannels() method, there are a couple of lines:
let outA: [Float] = output.output_aShapedArray.scalars
let outB: [Float] = output.output_bShapedArray.scalars
However, the generated class only exposes output.var183_aShapedArray; there is no var183_bShapedArray.
I would appreciate any thoughts or suggestions you may have regarding these issues. Thank you.
Link to WWDC22 session 10017: https://developer.apple.com/videos/play/wwdc2022/10017/