Running a TensorFlow model in C++


I trained a model using tf.keras and converted it to '.pb' with:
import os
import tensorflow as tf
from tensorflow.keras import backend as K
K.set_learning_phase(0)  # export in inference mode

from tensorflow.keras.models import load_model
model = load_model('model_checkpoint.h5')
model.save('model_tf2', save_format='tf')  # writes a SavedModel directory

This creates a folder 'model_tf2' containing 'assets', 'variables', and saved_model.pb.

I am trying to load this model in C++. Following several other posts (mainly Using Tensorflow checkpoint to restore model in C++), I am now able to load the model:

    RunOptions run_options;
    run_options.set_timeout_in_ms(60000);
    SavedModelBundle model;
    auto status = LoadSavedModel(SessionOptions(), run_options, model_dir_path, tags, &model);
    if (!status.ok()) {
        std::cerr << "Failed: " << status;
        return -1;
    }

[Screenshot: cmd output showing the model was loaded]

The screenshot above shows that the model was loaded.

I have the following questions:

  1. How do I perform a forward pass through the model?
  2. I know the 'tag' can be gpu, serve, train, etc. What is the difference between serve and gpu?
  3. I don't understand the first two arguments to LoadSavedModel, i.e. session options and run options. What is their purpose? Could you also help me understand them with a syntax example? I set run_options by following another Stack Overflow post, but I don't understand what it does.

Thanks! :)

c++ tensorflow deep-learning tensorflow-serving tensorflow2.0
1 Answer

Below is the code, mentioned by Patwie in the comments, for performing a forward pass of the model:

#include <tensorflow/core/protobuf/meta_graph.pb.h>
#include <tensorflow/core/public/session.h>
#include <tensorflow/core/public/session_options.h>
#include <iostream>
#include <string>

typedef std::vector<std::pair<std::string, tensorflow::Tensor>> tensor_dict;

/**
 * @brief load a previously stored model
 * @details [long description]
 *
 * in Python run:
 *
 *    saver = tf.train.Saver(tf.global_variables())
 *    saver.save(sess, './exported/my_model')
 *    tf.train.write_graph(sess.graph, '.', './exported/graph.pb', as_text=False)
 *
 * this relies on a graph which has an operation called `init` responsible for
 * initializing all variables, e.g.
 *
 *    sess.run(tf.global_variables_initializer())  # somewhere in the python
 * file
 *
 * @param sess active tensorflow session
 * @param graph_fn path to graph file (eg. "./exported/graph.pb")
 * @param checkpoint_fn path to checkpoint file (eg. "./exported/my_model",
 * optional)
 * @return status of reloading
 */
tensorflow::Status LoadModel(tensorflow::Session *sess, std::string graph_fn,
                             std::string checkpoint_fn = "") {
  tensorflow::Status status;

  // Read in the protobuf graph we exported
  tensorflow::MetaGraphDef graph_def;
  status = ReadBinaryProto(tensorflow::Env::Default(), graph_fn, &graph_def);
  if (status != tensorflow::Status::OK()) return status;

  // create the graph
  status = sess->Create(graph_def.graph_def());
  if (status != tensorflow::Status::OK()) return status;

  // restore model from checkpoint, iff checkpoint is given
  if (checkpoint_fn != "") {
    tensorflow::Tensor checkpointPathTensor(tensorflow::DT_STRING,
                                            tensorflow::TensorShape());
    checkpointPathTensor.scalar<std::string>()() = checkpoint_fn;

    tensor_dict feed_dict = {
        {graph_def.saver_def().filename_tensor_name(), checkpointPathTensor}};
    status = sess->Run(feed_dict, {}, {graph_def.saver_def().restore_op_name()},
                       nullptr);
    if (status != tensorflow::Status::OK()) return status;
  } else {
    // virtual Status Run(const std::vector<std::pair<string, Tensor> >& inputs,
    //                  const std::vector<string>& output_tensor_names,
    //                  const std::vector<string>& target_node_names,
    //                  std::vector<Tensor>* outputs) = 0;
    status = sess->Run({}, {}, {"init"}, nullptr);
    if (status != tensorflow::Status::OK()) return status;
  }

  return tensorflow::Status::OK();
}

int main(int argc, char const *argv[]) {
  const std::string graph_fn = "./exported/my_model.meta";
  const std::string checkpoint_fn = "./exported/my_model";

  // prepare session
  tensorflow::Session *sess;
  tensorflow::SessionOptions options;
  TF_CHECK_OK(tensorflow::NewSession(options, &sess));
  TF_CHECK_OK(LoadModel(sess, graph_fn, checkpoint_fn));

  // prepare inputs
  tensorflow::TensorShape data_shape({1, 2});
  tensorflow::Tensor data(tensorflow::DT_FLOAT, data_shape);

  // same as in python file
  auto data_ = data.flat<float>().data();
  data_[0] = 42;
  data_[1] = 43;

  tensor_dict feed_dict = {
      {"input_plhdr", data},
  };

  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(
      sess->Run(feed_dict, {"sequential/Output_1/Softmax:0"}, {}, &outputs));

  std::cout << "input           " << data.DebugString() << std::endl;
  std::cout << "output          " << outputs[0].DebugString() << std::endl;

  return 0;
}
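
Note that the code above restores a graph/checkpoint export rather than the SavedModel directory produced by model.save('model_tf2', save_format='tf'). If you keep the SavedModelBundle loading shown in the question, the forward pass goes through the bundle's session instead. Below is a minimal sketch; the tensor names are placeholders (not taken from your model), and the real input/output names can be listed with saved_model_cli show --dir model_tf2 --all:

// Hedged sketch: forward pass through the SavedModelBundle loaded in the question.
// The tensor names below are placeholders -- look up the real ones with
// `saved_model_cli show --dir model_tf2 --all` or by iterating
// model.meta_graph_def.signature_def().
#include <tensorflow/cc/saved_model/loader.h>
#include <tensorflow/core/framework/tensor.h>
#include <iostream>
#include <string>
#include <vector>

int RunForwardPass(tensorflow::SavedModelBundle &model) {
  // Build an input tensor whose shape matches the model's input layer
  // (here: one example with 2 float features, purely as an illustration).
  tensorflow::Tensor input(tensorflow::DT_FLOAT, tensorflow::TensorShape({1, 2}));
  auto data = input.flat<float>().data();
  data[0] = 42.0f;
  data[1] = 43.0f;

  // Placeholder names -- replace with those reported by saved_model_cli.
  const std::string input_name = "serving_default_input_1:0";
  const std::string output_name = "StatefulPartitionedCall:0";

  std::vector<tensorflow::Tensor> outputs;
  tensorflow::Status status = model.session->Run(
      {{input_name, input}},  // feeds
      {output_name},          // fetches
      {},                     // target nodes
      &outputs);
  if (!status.ok()) {
    std::cerr << "Inference failed: " << status << std::endl;
    return -1;
  }

  std::cout << "output " << outputs[0].DebugString() << std::endl;
  return 0;
}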
  1. If you want to run inference on the model using a GPU, you can use the tags serve and gpu together.
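
As a sketch (and assuming the SavedModel was actually exported with a MetaGraph carrying both tags -- a plain tf.keras model.save() writes only the serve tag, so requesting both tags on such a model will fail), loading with serve and gpu together looks like this:

    // Hedged sketch: load the MetaGraph tagged with both "serve" and "gpu".
    // The tag constants come from tensorflow/cc/saved_model/tag_constants.h.
    #include <tensorflow/cc/saved_model/loader.h>
    #include <tensorflow/cc/saved_model/tag_constants.h>
    #include <string>
    #include <unordered_set>

    tensorflow::Status LoadForGpuServing(const std::string &model_dir,
                                         tensorflow::SavedModelBundle *bundle) {
      const std::unordered_set<std::string> tags = {
          tensorflow::kSavedModelTagServe, tensorflow::kSavedModelTagGpu};
      return tensorflow::LoadSavedModel(tensorflow::SessionOptions(),
                                        tensorflow::RunOptions(), model_dir,
                                        tags, bundle);
    }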

  2. The session_options argument in C++ is equivalent to tf.ConfigProto(allow_soft_placement=True, log_device_placement=True).

This means that if allow_soft_placement is true, an op will be placed on the CPU when

(i) there is no GPU implementation for the op, or

(ii) there are no GPU devices known or registered, or

(iii) it needs to be co-located with reftype inputs coming from the CPU.
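
A minimal sketch of the C++ equivalent, using the ConfigProto embedded in SessionOptions (this is the same object you would pass as the first argument of LoadSavedModel or NewSession):

    // Hedged sketch: C++ counterpart of
    // tf.ConfigProto(allow_soft_placement=True, log_device_placement=True).
    #include <tensorflow/core/public/session_options.h>

    tensorflow::SessionOptions MakeSessionOptions() {
      tensorflow::SessionOptions session_options;
      session_options.config.set_allow_soft_placement(true);  // fall back to CPU when no GPU kernel exists
      session_options.config.set_log_device_placement(true);  // log the device chosen for each op
      return session_options;
    }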

  3. run_options is used if you want to extract runtime statistics of the graph execution, i.e. to profile it: it adds information about execution time and memory consumption to the event files and lets you view this information in TensorBoard. The syntax for passing session_options and run_options is shown in the code above.
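
As an illustration (not part of the original answer), here is a minimal sketch of passing RunOptions together with RunMetadata to a Session::Run call so that per-op timing and memory statistics are collected:

    // Hedged sketch: collect runtime statistics with RunOptions/RunMetadata.
    // FULL_TRACE asks the runtime to record per-op timing and memory usage,
    // which is the data the TensorBoard profiler visualizes.
    #include <tensorflow/core/protobuf/config.pb.h>
    #include <tensorflow/core/public/session.h>
    #include <string>
    #include <utility>
    #include <vector>

    tensorflow::Status RunWithStats(
        tensorflow::Session *sess,
        const std::vector<std::pair<std::string, tensorflow::Tensor>> &feeds,
        const std::string &output_name,
        std::vector<tensorflow::Tensor> *outputs,
        tensorflow::RunMetadata *run_metadata) {
      tensorflow::RunOptions run_options;
      run_options.set_trace_level(tensorflow::RunOptions::FULL_TRACE);
      // This Run overload fills run_metadata; run_metadata->step_stats()
      // can then be written out and inspected (e.g. in TensorBoard).
      return sess->Run(run_options, feeds, {output_name}, {}, outputs,
                       run_metadata);
    }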