TFLite Converter, Easy Converter?

ade sueb
2 min read · Sep 14, 2020

It looks easy when you read the tutorial about converting a model: just two lines of code.

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

Yes, it works, but only under certain conditions.

Here are three things you should know about the TFLite Converter before you use it:

1. Consider the Supported Types

From the documentation, we know that the TFLite converter doesn't support string and float16 types, at least not yet.

There are some tutorials about text classification that use string as the input type of the input layer, for example this tutorial provided by TensorFlow. For now, you cannot convert the model from that tutorial into TFLite.

But you can actually do text classification by encoding the text strings into floats or ints.
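A minimal sketch of that idea in plain Python (the vocabulary and sequence length here are made up for illustration; a real model would build the vocabulary from its training corpus):

```python
# Hypothetical mini-vocabulary with reserved padding and unknown-token ids.
vocab = {"<pad>": 0, "<unk>": 1, "good": 2, "bad": 3, "movie": 4}

def encode(text, seq_len=8):
    """Turn a sentence into a fixed-length list of ints the converter supports."""
    ids = [vocab.get(tok, vocab["<unk>"]) for tok in text.lower().split()]
    ids = ids[:seq_len]                                   # truncate long sentences
    return ids + [vocab["<pad>"]] * (seq_len - len(ids))  # pad short ones

print(encode("good movie"))  # [2, 4, 0, 0, 0, 0, 0, 0]
```

Feed the model these int sequences instead of raw strings, and the converter has nothing to complain about.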

2. You Can Use TensorFlow Lite Model Maker for Unsupported Input Types

If you don't want to get your hands dirty encoding the text input into floats or integers so the converter accepts the model, you can create a TFLite model for text classification with TensorFlow Lite Model Maker.

TensorFlow provides several Model Makers you can use: Image Classification, Text Classification, and Question Answer.
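If you go that route, the whole flow is only a few lines. A rough sketch of the text-classification flow (the package is `tflite-model-maker`; the `train.csv` file and its column names here are placeholders, and the exact API may differ between versions):

```python
from tflite_model_maker import model_spec, text_classifier, TextClassifierDataLoader

# Placeholder CSV with one text column and one label column.
spec = model_spec.get('average_word_vec')
train_data = TextClassifierDataLoader.from_csv(
    filename='train.csv',
    text_column='sentence',
    label_column='label',
    model_spec=spec,
    is_training=True)

# Model Maker trains the model and handles the text encoding for you.
model = text_classifier.create(train_data, model_spec=spec)

# Exports a model.tflite that already embeds the preprocessing.
model.export(export_dir='.')
```

The exported model takes care of tokenization internally, so you never touch the string-input limitation yourself.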

3. Integer-Only Quantization

To get a fully integer model, you give the converter a representative dataset so it can calibrate the ranges of the activations:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset_gen():
  # Get sample input data as a numpy array in a method of your choosing.
  for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
    yield [input_value]

converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8  # or tf.uint8
converter.inference_output_type = tf.int8  # or tf.uint8
tflite_quant_model = converter.convert()
  • This results in a smaller model and faster inference, which is valuable for low-power devices such as microcontrollers. This data format is also required by integer-only accelerators such as the Edge TPU.
  • It converts the float32 input and output types to int8 (or uint8).
  • You cannot convert from float64 to uint8.
  • Firebase ML uses uint8 when producing TFLite models.
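To see what that int8 conversion actually does to a value: each quantized tensor carries a scale and a zero point, and a float x maps to round(x / scale) + zero_point. A minimal sketch in plain Python (the scale and zero-point values here are made up; in practice you read them from the interpreter's input details):

```python
def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Map a float to int8 using the tensor's quantization parameters."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))  # clamp to the int8 range

def dequantize(q, scale, zero_point):
    """Recover (approximately) the original float value."""
    return (q - zero_point) * scale

# Hypothetical values, as reported in input_details[0]['quantization'].
scale, zero_point = 0.02, 10
q = quantize(0.5, scale, zero_point)
print(q)                                 # 35
print(dequantize(q, scale, zero_point))  # 0.5
```

This is also why you must scale your app's float inputs with these exact parameters before feeding an integer-only model.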
