When converting a model built with a custom library on top of PyTorch (such as the Fast.ai library), which may include unsupported operations, would it be more practical to reimplement the same model in plain PyTorch before using Core ML Converters?