ImportError: No module named 'tensorflow.contrib.lite.python.tflite_convert'

Question (1 vote, 3 answers)

I am trying to use tflite_convert to convert a saved_model.pb file (from the Object Detection API) to .tflite, but when I execute this command from cmd in the C:\Users\LENOVO-PC\tensorflow> directory, where the tensorflow git repo is cloned,

tflite_convert \ --output_file=/saved_model/maonani.tflite \ --saved_model_dir=/saved_model/saved_model

I get an error saying

ImportError: No module named 'tensorflow.contrib.lite.python.tflite_convert'

The full log is

C:\Users\LENOVO-PC\tensorflow>tflite_convert \ --output_file=/saved_model/maonani.tflite \ --saved_model_dir=/saved_model/saved_model
Traceback (most recent call last):
  File "c:\users\lenovo-pc\appdata\local\programs\python\python35\lib\runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\lenovo-pc\appdata\local\programs\python\python35\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\LENOVO-PC\AppData\Local\Programs\Python\Python35\Scripts\tflite_convert.exe\__main__.py", line 5, in <module>
ImportError: No module named 'tensorflow.contrib.lite.python.tflite_convert'

Is there any way to convert my .pb file to .tflite on WINDOWS?

python windows tensorflow object-detection tensorflow-lite
3 Answers
1 vote

Hello, my solution was to use Linux via the Windows Subsystem for Linux (WSL).

Then install Ubuntu and set it up.

Inside Ubuntu you need to run pip3 install --upgrade "tensorflow==1.7.*". If you then try to run toco, it will not be recognized.

The workaround is to go to the folder

~/.local/bin/

There you will find toco, which is a Python script. Run it with:

python3 ~/.local/bin/toco

This gives you a working toco "executable".

To convert, you can run the command explained at https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/#2

Just change --graph_def_file=tf_files/retrained_graph.pb to --input_file=tf_files/retrained_graph.pb
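With that substitution, the adapted codelab command would look roughly like this (a sketch: the paths, input shape, and the input/final_result array names come from the codelab's retrained-MobileNet example and may differ for your model):

```shell
# Run the toco script found under ~/.local/bin with python3.
# Paths and array names follow the codelab example; adjust for your model.
python3 ~/.local/bin/toco \
  --input_file=tf_files/retrained_graph.pb \
  --output_file=tf_files/optimized_graph.lite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=final_result \
  --inference_type=FLOAT
```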

Hope this helps someone.

Note: if pip3 is missing, you will need to install it first.


1 vote

I followed the instructions on this site:

https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2-tflite/#2

However, it seems tflite_convert is no longer supported on Windows, so I decided to use Ubuntu on Windows instead. After creating a virtual environment and installing tensorflow in it, I checked "toco" by typing toco in the terminal, which prints its usage guide:

usage: /home/hieu/venv/bin/toco

Flags:

    --input_file=""                         string  Input file (model of any supported format). For Protobuf formats, both text and binary are supported regardless of file extension.
    --output_file=""                        string  Output file. For Protobuf formats, the binary format will be used.
    --input_format=""                       string  Input file format. One of: TENSORFLOW_GRAPHDEF, TFLITE.
    --output_format=""                      string  Output file format. One of TENSORFLOW_GRAPHDEF, TFLITE, GRAPHVIZ_DOT.
    --default_ranges_min=0.000000           float   If defined, will be used as the default value for the min bound of min/max ranges used for quantization.
    --default_ranges_max=0.000000           float   If defined, will be used as the default value for the max bound of min/max ranges used for quantization.
    --inference_type=""                     string  Target data type of arrays in the output file (for input_arrays, this may be overridden by inference_input_type). One of FLOAT, QUANTIZED_UINT8.
    --inference_input_type=""               string  Target data type of input arrays. If not specified, inference_type is used. One of FLOAT, QUANTIZED_UINT8.
    --input_type=""                         string  Deprecated ambiguous flag that set both --input_data_types and --inference_input_type.
    --input_types=""                        string  Deprecated ambiguous flag that set both --input_data_types and --inference_input_type. Was meant to be a comma-separated list, but this was deprecated before multiple-input-types was ever properly supported.
    --drop_fake_quant=false                 bool    Ignore and discard FakeQuant nodes. For instance, to generate plain float code without fake-quantization from a quantized graph.
    --reorder_across_fake_quant=false       bool    Normally, FakeQuant nodes must be strict boundaries for graph transformations, in order to ensure that quantized inference has the exact same arithmetic behavior as quantized training --- which is the whole point of quantized training and of FakeQuant nodes in the first place. However, that entails subtle requirements on where exactly FakeQuant nodes must be placed in the graph. Some quantized graphs have FakeQuant nodes at unexpected locations, that prevent graph transformations that are necessary in order to generate inference code for these graphs. Such graphs should be fixed, but as a temporary work-around, setting this reorder_across_fake_quant flag allows TOCO to perform necessary graph transformaitons on them, at the cost of no longer faithfully matching inference and training arithmetic.
    --allow_custom_ops=false                bool    If true, allow TOCO to create TF Lite Custom operators for all the unsupported TensorFlow ops.
    --drop_control_dependency=false         bool    If true, ignore control dependency requirements in input TensorFlow GraphDef. Otherwise an error will be raised upon control dependency inputs.
    --debug_disable_recurrent_cell_fusion=false     bool    If true, disable fusion of known identifiable cell subgraphs into cells. This includes, for example, specific forms of LSTM cell.

...and many more.

After that, I used this command to convert the file:

 toco --input_file="tf_files/retrained_graph.pb" --output_file="tf_files/optimized_graph.lite" --input_format="TENSORFLOW_GRAPHDEF" --output_format="TFLITE" --input_shape="1,224,224,3" --input_array="input" --output_array="final_result" --inference_type="FLOAT" --input_data_type="FLOAT"

optimized_graph.lite should then be found in tf_files.


0 votes

According to this thread: Tensorflow discussions

the issue is that this module is currently not supported on Windows. You can follow that thread to see whether there are any updates.

P.S.: Some people claim that a git clone and a bazel build, rather than a pip install, resolved the issue for them. You could try that as well, though there are serious doubts it will fix this particular problem.
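For reference, the git-clone-and-bazel route mentioned there would look roughly like this (a sketch only: the bazel target path is an assumption based on the TF 1.x source layout, where TOCO lived under tensorflow/contrib/lite/toco):

```shell
# Build TOCO from the TensorFlow 1.x source tree (target path assumed).
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
bazel build //tensorflow/contrib/lite/toco:toco

# The resulting binary is then invoked directly:
bazel-bin/tensorflow/contrib/lite/toco/toco --help
```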
