The output produced when the model is executed in Core ML is different from the output produced when it is executed in Python.

I noticed some strange outputs.

The following code produced different outputs when executed in Core ML and in Python.

import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Conv1D, Activation

lenmax = 6000
model1 = Sequential()

model1.add(Conv1D(filters=5, kernel_size=1, padding="same", name="conv1", input_shape=(lenmax, 12)))
model1.add(Activation('relu'))

sgd = keras.optimizers.SGD(lr=0.0000001, momentum=0.0, decay=0.0, nesterov=False)
model1.compile(optimizer=sgd, loss='mse')
# X is the training input, shape (n_samples, lenmax, 12); it is loaded elsewhere
y = np.load("./fuga.npy")
model1.fit(X, y, epochs=1)

model1.save("./hoge.h5")

u = model1.predict(X)
np.save("./hoge.npy", u)
A = np.load("./hoge.npy")
print(A[0])
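For reference, a Conv1D with kernel_size=1 is just a per-timestep linear map, so the expected Python-side output is easy to verify independently of Keras. A minimal NumPy sketch (the shapes follow the model above; the weights here are random stand-ins, not the trained ones):

```python
import numpy as np

# A Conv1D with kernel_size=1 computes out[t] = x[t] @ W + b for every
# timestep t, with W of shape (in_channels, filters).
rng = np.random.default_rng(0)
x = rng.standard_normal((6000, 12))   # one sample: (lenmax, 12)
W = rng.standard_normal((12, 5))      # 12 input channels -> 5 filters
b = rng.standard_normal(5)

out = x @ W + b                       # vectorized pointwise "convolution"

# Reference: apply the same linear map timestep by timestep.
ref = np.stack([x[t] @ W + b for t in range(x.shape[0])])
print(np.allclose(out, ref))          # True
```

Because no neighboring timesteps are mixed, any divergence between backends for this layer cannot come from padding behavior; that only matters once kernel_size > 1.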



However, changing line 4 to the following code resulted in the same output when executed in Core ML and in Python.

model1.add(Conv1D(filters=5, kernel_size=1, padding="same", name="conv1", input_shape=(lenmax,12)))

Changing the kernel_size in the Keras model appears to cause invalid output when the converted model is executed in Core ML.
Is this a bug?
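One thing worth ruling out first: Core ML typically runs inference in float32 (or even float16), so tiny numeric differences from the Keras output are expected and are not a bug by themselves. A hedged sketch for deciding whether two prediction arrays genuinely diverge beyond float32 noise (the helper name, tolerance, and stand-in arrays are illustrative, not from the original post):

```python
import numpy as np

def outputs_match(a, b, atol=1e-4):
    """Compare two prediction arrays with a tolerance suited to float32."""
    return a.shape == b.shape and bool(np.allclose(a, b, atol=atol))

# Stand-ins for the Keras and Core ML predictions of shape (1, lenmax, filters).
keras_out = np.ones((1, 6000, 5), dtype=np.float32)
coreml_out = keras_out + 1e-6          # negligible float noise

print(outputs_match(keras_out, coreml_out))   # True
```

If the arrays still fail this check with a generous tolerance, the discrepancy is large enough to report as a converter bug rather than rounding.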