Hi Ed,
I haven't split it out into a test project yet, but that may be the next step. This is a complex app, which always makes extraction a bit more of a process.
One interesting note: I have this functionality allowing Siri to 'read' the screen in this way for two types of content. The first is the app's photo gallery, which is very similar to the sample code. That one works: when Siri shares with ChatGPT, it correctly identifies the type as 'photo' rather than 'screenshot', and my Transferable implementation is called.
The second one, which isn't working, uses the .reader.document schema. That is, I want to share a text document with Siri, but instead it only offers to share a screenshot.
Looking at the code, the mechanics are basically the same apart from the schema type, which is where I'm confused. I was curious whether anyone else had done this and could help.
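For reference, here's roughly the shape of what I mean by "the mechanics are basically the same." This is an illustrative sketch, not my actual code: the entity and property names are placeholders, and the only intended difference from the working photo case is the assistant schema on the entity.

```swift
import AppIntents
import CoreTransferable
import Foundation

// Hypothetical document entity. In the working photo case the entity is
// declared the same way but with the photos schema instead, and its
// Transferable implementation gets called when sharing with ChatGPT.
@AssistantEntity(schema: .reader.document)
struct DocumentEntity {
    struct DocumentEntityQuery: EntityStringQuery {
        func entities(for identifiers: [DocumentEntity.ID]) async throws -> [DocumentEntity] { [] }
        func entities(matching string: String) async throws -> [DocumentEntity] { [] }
    }

    static var defaultQuery = DocumentEntityQuery()

    let id = UUID()
    var title: String?

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(title ?? "Document")")
    }

    // Placeholder for the document's actual text content.
    var text: String = ""
}

extension DocumentEntity: Transferable {
    // Same pattern as the photo entity's conformance: expose the content
    // directly so Siri can hand it over instead of falling back to a
    // screenshot of the screen.
    static var transferRepresentation: some TransferRepresentation {
        DataRepresentation(exportedContentType: .plainText) { entity in
            entity.text.data(using: .utf8) ?? Data()
        }
    }
}
```

In both cases the entity is surfaced to Siri the same way; only the schema and the exported content type differ, which is why the screenshot fallback for the document case surprises me.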
Perhaps a sample is the next thing...