Converting a model looks easy when you read the tutorial: just two lines of code.
And yes, it works, but only under certain conditions.
Here are three things you should know about the TFLite Converter before you use it:
1. Consider the Supported types
From that article, we learn that the TFLite converter doesn't support string or float16 types, at least not yet.
Some text classification tutorials use a string type for the input layer, for example this tutorial provided by TensorFlow. For now, you cannot convert the model from that tutorial to TFLite.
You can still build the text classifier, though, by encoding the text strings into floats or integers first.
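As a minimal sketch of that encoding step, the snippet below maps words to integer ids and pads to a fixed length, so the model input becomes numeric. The vocabulary, special tokens, and padding scheme here are illustrative assumptions, not taken from the TensorFlow tutorial.

```python
def build_vocab(texts):
    """Assign each unique word an integer id, reserving 0 for padding and 1 for out-of-vocabulary words."""
    vocab = {"<pad>": 0, "<oov>": 1}
    for text in texts:
        for word in text.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

def encode(text, vocab, max_len=8):
    """Turn a string into a fixed-length list of ints that the converter can handle."""
    ids = [vocab.get(w, vocab["<oov>"]) for w in text.lower().split()]
    ids = ids[:max_len]
    return ids + [vocab["<pad>"]] * (max_len - len(ids))

texts = ["great movie", "terrible plot twist"]
vocab = build_vocab(texts)
print(encode("great plot", vocab))  # → [2, 5, 0, 0, 0, 0, 0, 0]
```

Once every input is a fixed-length integer vector like this, the input layer no longer needs a string type, so the converter's limitation stops being a problem.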
2. You Can Use TensorFlow Lite Model Maker for Unsupported Input Types
If you don't want to get your hands dirty encoding the text input into floats or integers to make the model convertible, you can create a TFLite model for text classification with TensorFlow Lite Model Maker.
3. Integer-Only Quantization
To quantize a model to integers only, you give the converter a representative dataset so it can calibrate the value ranges:
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset_gen():
  # Yield sample input data as numpy arrays; around 100 samples is usually
  # enough for the converter to calibrate the quantization ranges.
  for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
    yield [input_value]

converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8  # or tf.uint8
converter.inference_output_type = tf.int8  # or tf.uint8
tflite_quant_model = converter.convert()
- This results in a smaller model and faster inference, which is valuable for low-power devices such as microcontrollers. This data format is also required by integer-only accelerators such as the Edge TPU.
- It converts the input and output types from float32 to uint8 (or int8).
- You cannot convert from float64 to uint8.
- Firebase ML uses uint8 when producing TFLite models.
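To see what the float32-to-uint8 conversion means numerically, here is an illustrative sketch of the affine quantization scheme TFLite uses, real_value ≈ scale × (quantized_value − zero_point). The scale and zero point values below are made up for the example; in a real model the converter derives them from the representative dataset.

```python
def quantize(x, scale, zero_point):
    """Map a float32 value to uint8 using a scale and zero point."""
    q = round(x / scale) + zero_point
    return max(0, min(255, q))  # clamp to the uint8 range [0, 255]

def dequantize(q, scale, zero_point):
    """Recover an approximate float32 value from a uint8 one."""
    return scale * (q - zero_point)

scale, zero_point = 0.02, 128  # illustrative calibration values
q = quantize(0.5, scale, zero_point)
print(q)                                 # → 153
print(dequantize(q, scale, zero_point))  # ≈ 0.5
```

Because every value is squeezed into 256 levels, some precision is lost, which is the trade-off behind the smaller model size and faster integer-only inference mentioned above.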