

I've been struggling with a style-transfer model (MSG-Net: https://github.com/zhanghang1989/MSG-Net), which gives wrong outputs when Core ML runs it on the GPU on iOS devices.


The wrong output on an iPad 2 distantly resembles the correct one (features are visible, but the colors are badly distorted; see the links below), while the output on an iPhone 8 is just pitch black.

        Correct outputs: https://imgur.com/a/3Ti4u

        Wrong outputs: https://imgur.com/a/XEWQG


To see whether a memory issue is causing this, I reduced the image dimensions from 720x720 to 256x256, but to no avail (same results as before on both the iPhone and the iPad).

I also converted the model to float16 to see whether it is a floating-point precision issue, but the fp16 model still produces correct output on macOS, so precision alone doesn't explain it.
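For reference, the fp16 conversion was done with coremltools, roughly like this (a minimal sketch; the file names are placeholders, and I'm going from memory on the utils API):

import coremltools

# Load the converted full-precision model (file name is just a placeholder).
model = coremltools.models.MLModel('MSGNet.mlmodel')

# Quantize the network weights from float32 to float16.
fp16_model = coremltools.utils.convert_neural_network_weights_to_fp16(model)

# Save the half-precision model for testing.
fp16_model.save('MSGNet_fp16.mlmodel')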

Results/failures are the same on iOS 11.2 and the iOS 11.3 beta.


Tested platforms/devices:

- Core ML on macOS: works
- model in the simulator: works
- model on device with the cpuOnly flag: works
- model on device without the cpuOnly flag: does not work

By "works" I mean the configuration reproduces the same output as the Torch reference implementation (see the comparison sketch below).


        Digging deeper, if I hand in the features after the last instance normalization layer, everything works fine.

So at first I identified the instance-norm layer as the culprit. However, after reimplementing it with a custom layer, the outputs stay the same, so it can't be the instance-norm layer alone.
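For anyone who wants to follow along, this is the computation the instance-norm layer (and my custom replacement) is supposed to perform; a quick NumPy reference like this is handy for diffing intermediate outputs (gamma/beta are the learned per-channel scale and shift):

import numpy as np

def instance_norm_reference(x, gamma, beta, eps=1e-5):
    # x has shape (N, C, H, W); each (H, W) plane is normalized
    # independently per sample and per channel.
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    # Scale and shift with the learned per-channel parameters.
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)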


At this point I've pretty much exhausted my debugging options and believe this to be a Core ML bug in the GPU implementation.

Has anyone had similar issues, or should this be filed as a bug report?


        Here is the decoder part of the network (the encoder works well):

        Decoder net:


        Decoder shapes: