I have an iOS app that takes the camera's video feed and applies CoreImage filters to simulate a specific real-world effect (for educational purposes).
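For context, here is a minimal sketch of the kind of pipeline I mean on iOS (the specific filter, class name, and rendering target are just placeholders, not my actual code):

```swift
import AVFoundation
import CoreImage

// Sketch: camera frames in, a CoreImage filter applied per frame.
// CIColorMonochrome stands in for the real effect filter.
final class FilteredCameraFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let context = CIContext()
    private let filter = CIFilter(name: "CIColorMonochrome")!

    // Call once camera permission has been granted.
    func start() {
        guard let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        guard session.canAddOutput(output) else { return }
        session.addOutput(output)
        session.startRunning()
    }

    // Each frame is wrapped in a CIImage, filtered, and rendered back.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let source = CIImage(cvPixelBuffer: pixelBuffer)
        filter.setValue(source, forKey: kCIInputImageKey)
        guard let filtered = filter.outputImage else { return }
        // Render back into the pixel buffer (or draw into a Metal view / CALayer instead).
        context.render(filtered, to: pixelBuffer)
    }
}
```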
Now I'd like to build a similar app for visionOS and apply the same CoreImage filters to the live passthrough view that users see while wearing the Apple Vision Pro headset.
Is there a way to do this with the current APIs, and what would you recommend?
I've seen that we can't access the camera feed directly, so is there a way to do it with ARKit and apply the filters through that instead?
I know visionOS is still a young platform, but any help would be greatly appreciated!
Thank you!