Posts

Post not yet marked as solved · 1 reply · 529 views
I have an Objective-C++ wrapper class called ImageDividerWrapper containing a function I want to use from a Swift class called FrameProcessor. Inside ImageDividerWrapper.h I made sure to import the bridging header, and I double-checked that the bridging header is referenced and spelled correctly under Project > Build Settings > Swift Compiler. I have also deleted derived data, cleaned the build folder, and so on.

My bridging header imports the wrapper header:

    #import "ImageDividerWrapper.h"

and ImageDividerWrapper.h in turn includes the bridging header:

    #import "Vsn3-Bridging-Header.h"

Unfortunately, I still get the error "No such module 'ImageDividerWrapper'" when I try to import the Objective-C++ class directly in my Swift file with:

    import ImageDividerWrapper

If anyone who has solved this problem before can point me in the right direction, I would appreciate it so much. Thank you!

Post not yet marked as solved · 0 replies · 493 views
Hi all, I just tried to integrate my ML model (converted from TensorFlow to Core ML) into my Xcode project, but I couldn't create a performance report. As far as I'm aware, you only need to drag the .mlmodel file into the Navigator. I took the model from TF Hub and converted it to Core ML; it has images as inputs and a MultiArray as output (I don't know whether that is significant). Other than that, I haven't made any changes to the model itself. If anyone could point me in the right direction, that would be very much appreciated. I've included a screenshot of the error here:
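
For reference, the conversion itself looked roughly like this (a simplified sketch: the Keras model and file name below are illustrative stand-ins, not the actual TF Hub model):

    import tensorflow as tf
    import coremltools as ct

    # Illustrative stand-in model; the real one came from TF Hub.
    model = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3))

    # Declare the input as an image; with no outputs= specification,
    # coremltools leaves the outputs as MultiArray.
    mlmodel = ct.convert(
        model,
        convert_to="neuralnetwork",
        inputs=[ct.ImageType(shape=(1, 224, 224, 3))],
    )
    mlmodel.save("Model.mlmodel")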

Post not yet marked as solved · 0 replies · 450 views
I am trying to convert a model I found on TensorFlow Hub to Core ML so I can use it in an iOS app I'm developing. Converting the model has been quite simple so far, except that I get a NotImplementedError when specifying ImageType for the outputs. This is the code I used:

    import tensorflow as tf
    import tensorflow_hub as tf_hub
    import coremltools as ct

    model = tf.keras.Sequential([
        tf.keras.layers.InputLayer(input_shape=(256, 256, 3)),
        tf_hub.KerasLayer("https://tfhub.dev/rishit-dagli/mirnet-tfjs/1"),
    ])
    model.build([1, 256, 256, 3])  # Batch input shape.

    mlmodel = ct.convert(
        model,
        convert_to="mlprogram",
        inputs=[ct.ImageType()],
        outputs=[ct.ImageType()],
    )

If only the inputs are specified as ImageType, no error occurs, but when I also specify ImageType for the outputs, I get:

    NotImplementedError: Image output 'Identity' has symbolic dimensions in its shape

FYI: I'm using TensorFlow 2.12 and coremltools 6.3. Is there any way around this, or am I doing something wrong? I'm quite new to machine learning and Core ML, so any helpful input is much appreciated. Thanks in advance!
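
Continuing from the snippet above, the inputs-only call that does convert without the error looks like this; the explicit shape= argument is an extra, unverified guess of mine at giving the converter fully static dimensions (ImageType does accept a shape, but I don't know whether that makes the 'Identity' output eligible to be an ImageType):

    # Converts cleanly when only the input is declared as an image.
    # shape=(1, 256, 256, 3) is an assumption: a fully static input shape,
    # in the hope that the output then loses its symbolic dimensions.
    mlmodel = ct.convert(
        model,
        convert_to="mlprogram",
        inputs=[ct.ImageType(shape=(1, 256, 256, 3))],
    )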