
I recently trained an object detection model using TensorFlow 1.15, and the test results in Python are good. However, after converting it to the .tflite format and running it on Android, the model's performance dropped drastically.

Does the performance loss happen during the conversion to TFLite? Is there any way to avoid this loss of performance during conversion?

References:

Source of Training: https://github.com/tensorflow/models/tree/master/research/object_detection

Base Model for Transfer Learning: ssd_mobilenet_v1

Model Conversion: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tensorflowlite.md

Python Test Script: https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb

Android Demo App: https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection

1 Answer

The first step I would take is to test the converted model with a local Python interpreter; that way, if the local results are also much poorer, you know something went wrong during conversion. Normally, post-training quantization should not drastically reduce a model's accuracy, only by 2-3% in the worst case.
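For example, a minimal sketch of such a local check using the TFLite Python interpreter; the file names, the 300x300 input size, and the normalization constants are assumptions based on a float ssd_mobilenet_v1 export, so adapt them to your model:

```python
# Minimal local sanity check for the converted model, assuming a float
# ssd_mobilenet_v1 export named detect.tflite with a 300x300 input.
import numpy as np
from PIL import Image
import tensorflow as tf  # TensorFlow 1.15

interpreter = tf.lite.Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# SSD MobileNet v1 expects a 300x300 RGB image.
img = Image.open("test_image.jpg").convert("RGB").resize((300, 300))
x = np.expand_dims(np.asarray(img, dtype=np.float32), axis=0)
x = (x - 127.5) / 127.5  # float model: scale [0, 255] to [-1, 1]

interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()

# TFLite_Detection_PostProcess outputs, in the order documented for the
# object detection exporter:
boxes = interpreter.get_tensor(output_details[0]["index"])    # [1, N, 4]
classes = interpreter.get_tensor(output_details[1]["index"])  # [1, N]
scores = interpreter.get_tensor(output_details[2]["index"])   # [1, N]
num = interpreter.get_tensor(output_details[3]["index"])      # [1]
print("top scores:", scores[0][:5])
```

If the detections here look as good as in your original Python test, the .tflite file itself is fine and the problem lies on the Android side.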

If the results are fine when you feed images to your local Python interpreter (i.e., when you locally test your converted .tflite model), then there is a problem with the way you are feeding input data on Android. Ensure that the exact same preprocessing steps are applied to images in your mobile app as during the training phase.
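To illustrate the kind of mismatch to look for, here is a sketch of the input normalization that has to agree between training and the app; the 127.5 mean/std values are an assumption based on the float ssd_mobilenet_v1 convention, so verify them against your own pipeline config:

```python
# Sketch of the preprocessing contract that must be identical on both sides.
import numpy as np

def preprocess_float(image_u8):
    """image_u8: HxWx3 uint8 array, already resized to the model's input size."""
    x = image_u8.astype(np.float32)
    return (x - 127.5) / 127.5  # maps [0, 255] -> [-1, 1]

# By contrast, a fully quantized (uint8) model takes raw [0, 255] pixels with
# no scaling. Feeding normalized floats to a quantized input, or raw bytes to
# a float input, silently wrecks detections and is a common cause of a model
# that works in Python but fails on Android.
```

The Android demo app applies this scaling via mean/std constants in its image conversion code, so check that those constants match whether your model is float or quantized.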

