There does not appear to be a public API for this "neural engine," so it's not something developers can use directly in their own apps.
Core ML may take advantage of it under the hood, but that's impossible to tell right now: no one outside Apple has access to such a device yet, and Apple hasn't documented any of this.
I would guess that the "neural engine" is highly customized silicon for facial recognition. I could be wrong, but I don't think so. Since it's used for such a sensitive process, I doubt Apple would ever give developers direct access to it. And in any case, the GPU in the phone is powerful enough for most tasks; I've run some very complex Core ML models on it, and I'm amazed at how well optimized it is.
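For context, this is roughly what running a Core ML model looks like today. A minimal sketch, assuming a compiled model named `MyModel.mlmodelc` is bundled with the app and has an input named `"input"` and an output named `"output"` (all placeholders): note there's no parameter anywhere to target the "neural engine"; Core ML picks the hardware internally.

```swift
import CoreML

// A minimal sketch. "MyModel", "input", and "output" are placeholder names;
// substitute your own model's interface. Core ML schedules the work on the
// CPU or GPU as it sees fit -- nothing here exposes the "neural engine".
func runModel() throws {
    let url = Bundle.main.url(forResource: "MyModel", withExtension: "mlmodelc")!
    let model = try MLModel(contentsOf: url)

    // Build an input feature; names, shapes, and types must match the model.
    let array = try MLMultiArray(shape: [3], dataType: .double)
    array[0] = 0.1; array[1] = 0.2; array[2] = 0.7
    let input = try MLDictionaryFeatureProvider(dictionary: ["input": array])

    // Run a prediction; where it executes is entirely up to the framework.
    let output = try model.prediction(from: input)
    if let result = output.featureValue(for: "output") {
        print(result)
    }
}
```

If Apple does wire the neural engine into Core ML, code like this would presumably benefit automatically, without any changes on the developer's side.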