We followed the steps from this Huawei forum thread (https://bbs.huaweicloud.com/forum/thread-45383-1-1.html) to perform the .ckpt -> .pb -> .om conversion.
Our script, attached to this message (yolo_main.txt), is slightly modified code from https://gitee.com/Atlas200DK/sample-fasterrcnndetection-python. The main changes are listed below:
- we changed the width and height to 416 everywhere they are needed
- we removed infoTensor from the data list and pass [inputImageTensor] to inference
- to parse the network output, we used the code from the last code snippet available here: https://machinelearningmastery.com/how-to-perform-object-detection-with-yolov3-in-keras/
- we changed cv2.cvtColor(src_image, cv2.COLOR_BGR2RGB) to cv2.cvtColor(src_image, cv2.COLOR_BGR2YUV_I420)
- we added the possibility to feed the model with photos
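One detail worth noting about the cvtColor change above: COLOR_BGR2YUV_I420 does not just reorder channels the way COLOR_BGR2RGB does. I420 is a planar format, so OpenCV returns a single-channel 2-D array (full-resolution Y plane followed by quarter-resolution U and V planes) rather than an (H, W, 3) array. A minimal sketch (not taken from the attached script) of the resulting shape arithmetic:

```python
# Illustrative sketch: shape produced by cv2.COLOR_BGR2YUV_I420 vs COLOR_BGR2RGB.
# I420 packs a full-size Y plane plus half-size U and V planes into one
# single-channel buffer, so the array has 1.5x the image height and no
# channel axis.
def i420_shape(height, width):
    # Y: height*width values; U and V: (height//2)*(width//2) values each.
    # OpenCV returns this as a 2-D array of shape (height * 3 // 2, width).
    return (height * 3 // 2, width)

rgb_shape = (416, 416, 3)          # layout after COLOR_BGR2RGB
yuv_shape = i420_shape(416, 416)   # layout after COLOR_BGR2YUV_I420: (624, 416)
```

If the .om model (or its AIPP preprocessing, when configured) expects a different layout than what the script feeds it, that mismatch alone could badly distort the inference results, so it may be worth double-checking this against the conversion settings used.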
TEST IMAGE

Results of inference with class threshold = 0.6 and NMS threshold = 0.5 (the same as in the PC script):
there were 520 detections with at least 60% confidence, and at least one detection for each class.
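For reference, the filtering we apply with those two thresholds follows the usual YOLOv3 post-processing pattern: drop boxes below the class-confidence threshold, then suppress overlapping boxes by IoU. A minimal NumPy sketch (an illustration of the standard procedure, not the code from the attached scripts):

```python
import numpy as np

def iou(a, b):
    # a, b: boxes as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def filter_and_nms(boxes, scores, class_thresh=0.6, nms_thresh=0.5):
    # 1) keep only boxes above the class-confidence threshold
    mask = scores >= class_thresh
    boxes, scores = boxes[mask], scores[mask]
    # 2) greedy NMS: walk boxes in descending score order, keep a box only
    #    if its IoU with every already-kept box is below the NMS threshold
    kept = []
    for i in np.argsort(-scores):
        if all(iou(boxes[i], boxes[j]) <= nms_thresh for j in kept):
            kept.append(i)
    return boxes[kept], scores[kept]
```

With correct raw network outputs, these thresholds normally leave only a handful of boxes, so 520 surviving detections suggests the raw scores coming out of the .om model are already wrong before post-processing.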

At the same time, we tried to run the .pb model on a PC. This script is also attached to the email (yolo_v3_pb.txt), and the results were perfect: 4 detections with correctly matched bounding boxes.

Do you have any idea why the results are so different?
We managed to successfully convert and test a simple digit classifier trained on the MNIST dataset. The model was built using the TensorFlow 1.12 API. The accuracy obtained on the Atlas was above 90%, as expected based on the training results. This model was trained on RGB images (the BW channel was repeated 3 times along the channel axis).
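The channel-repetition mentioned above is a one-liner in NumPy; a minimal sketch (illustrative, not the exact training code):

```python
import numpy as np

def gray_to_rgb(batch_bw):
    # batch_bw: (N, 28, 28, 1) grayscale images; tile the single channel
    # three times along the last axis to get (N, 28, 28, 3) pseudo-RGB input
    return np.repeat(batch_bw, 3, axis=-1)
```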
How can we convert the yolov3 model so that it works as well as on the PC?
We tried dozens of conversion settings, color space conversions, and input files, but each time the inference output did not match the PC results.

