Hello, after a step-by-step investigation, we found a problem in the model's preprocessing: the input data should be of type float32. During inference, TensorFlow may automatically convert the input to the dtype expected by the model, but the OM model does not perform this conversion, so the input deviates from what the model expects and no error is reported. We modified the inference code of the OM model based on the original model's pre-processing and post-processing code, as shown in the figure; we hope it helps:
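As a minimal sketch of the fix described above (the function name and the rest of the preprocessing pipeline are hypothetical, since the actual code is only shown in the figure), the essential change is an explicit cast to float32 before feeding the buffer to the OM model:

```python
import numpy as np

def preprocess_for_om(image):
    """Hypothetical preprocessing step; resizing/normalization elided.

    TensorFlow can cast inputs to the model's expected dtype
    automatically during inference, but the OM model does not,
    so the buffer must already be float32 when it is passed in.
    """
    data = np.asarray(image)
    # Explicit cast to float32 to match the OM model's input dtype.
    # Without it, a uint8/float64 buffer is consumed as-is and the
    # results silently deviate -- no error is raised.
    return data.astype(np.float32)
```

With this cast in place, the bytes handed to the OM model match the dtype it was converted with, which removes the silent deviation.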
