Core ML Converters and custom libraries on top of PyTorch

For converting a model created with a custom library built on top of PyTorch (such as the fastai library), which may include unsupported operations, would it be more practical to re-implement the same model without the custom library before using Core ML Converters?
With coremltools 4, you can convert PyTorch models to Core ML directly. To do this, you first create a "trace" by running the PyTorch model on a dummy input; the trace captures all the operations the model performs. It's quite possible that the trace of a fastai model doesn't contain any custom operations at all, since fastai is mostly a wrapper around PyTorch.

So I'd simply try to do the conversion first -- it might just work. ;-)
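For reference, here is a minimal sketch of the trace-and-convert workflow with coremltools 4. The model variable and input shape are placeholders (with a fastai Learner you would typically pass its underlying `learner.model`), so adjust them to your actual model:

```python
import torch
import coremltools as ct

# Placeholder: your trained PyTorch model, e.g. learner.model from fastai.
model = ...
model.eval()

# Dummy input used to trace the model; the shape here is an assumption,
# use whatever shape your model actually expects.
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(model, example_input)

# Convert the traced model to Core ML.
mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(shape=example_input.shape)],
)
mlmodel.save("Model.mlmodel")
```

If the trace only contains standard PyTorch ops, this should go through without any re-implementation; if an unsupported op does show up, the converter will tell you which one it is.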
