
From DNNClassifier to TensorFlow Serving Deployment

Classification with TensorFlow's DNNClassifier, plus a first look at TensorFlow Serving

This post trains a classifier with TensorFlow's high-level DNNClassifier API and deploys it to TensorFlow Serving. There is very little material online about deploying models built with the high-level APIs to Serving, and it took me considerable effort to get this working, so I am sharing the result here in the hope that it helps someone.

import pandas as pd
import tensorflow as tf
import numpy as np
from tensorflow.contrib.learn import DNNClassifier


CATEGORICAL_COLUMNS = []
CONTINUOUS_COLUMNS = ['MEAN_INTERVAL_CALL', 'SD_INTERVAL_CALL',
                      'NUM_CALL', 'MEAN_DURATION',
                      'MOST_DURATION', 'MOST_DURATION_NUM',
                      'SD_DURATION', 'TOTAL_DURATION',
                      'TRK_NUM',
                      ]
LABEL_COLUMN = 'TAG'


def input_fn(df):
    # Continuous features become constant tensors of shape [n, 1].
    continuous_cols = {k: tf.constant(df[k].values, shape=[df[k].size, 1])
                       for k in CONTINUOUS_COLUMNS}

    feature_cols = dict(continuous_cols)

    if CATEGORICAL_COLUMNS:
        categorical_cols = {
            k: tf.SparseTensor(
                indices=[[i, 0] for i in range(df[k].size)],
                values=df[k].values,
                dense_shape=[df[k].size, 1])
            for k in CATEGORICAL_COLUMNS}
        # Merge the sparse columns only when there are any; running the
        # update unconditionally raises NameError when the list is empty.
        feature_cols.update(categorical_cols)

    label = tf.constant(df[LABEL_COLUMN].values, shape=[df[LABEL_COLUMN].size, 1])

    return feature_cols, label
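To see concretely what the per-column reshape in input_fn does, here is a small sketch using only pandas and NumPy (the feature values are made up): each length-n feature column becomes an n-by-1 column vector, which mirrors tf.constant(df[k].values, shape=[df[k].size, 1]).

```python
import numpy as np
import pandas as pd

# Toy frame with two of the continuous features (values are hypothetical).
df = pd.DataFrame({
    'NUM_CALL': [3.0, 7.0, 1.0],
    'MEAN_DURATION': [12.5, 40.0, 8.2],
})

# Mirror of what input_fn does per column: a length-n series
# becomes an (n, 1) column vector before being fed to the model.
cols = {k: df[k].values.reshape(df[k].size, 1) for k in df.columns}

print(cols['NUM_CALL'].shape)  # (3, 1)
```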


def create_columns(continuous_columns):
    deep_columns = []
    for column in continuous_columns:
        deep_columns.append(tf.contrib.layers.real_valued_column(column))
    return deep_columns


def main():
    training_data = pd.read_csv('./data/20180105_label.csv',
                                skipinitialspace=True,
                                engine='python',
                                dtype=np.float64,
                                iterator=True)

    test_data = pd.read_csv('./data/20180107_label.csv',
                            skipinitialspace=True,
                            engine='python',
                            dtype=np.float64,
                            iterator=True)

    deep_columns = create_columns(CONTINUOUS_COLUMNS)

    model = DNNClassifier(feature_columns=deep_columns,
                          model_dir='./model',
                          hidden_units=[10, 10],
                          n_classes=2,
                          input_layer_min_slice_size=10000)

    tf.logging.set_verbosity(tf.logging.INFO)
    training_data_chunk = training_data.get_chunk(1000000000)
    model.fit(input_fn=lambda: input_fn(training_data_chunk),
              steps=100)

    tf.logging.info("end fit model")

    test_data_chunk = test_data.get_chunk(10000)

    accuracy = model.evaluate(input_fn=lambda: input_fn(test_data_chunk),
                              steps=100)['accuracy']
    print(accuracy * 100)


if __name__ == '__main__':
    main()

2. Saving the model for Serving
The model has to be exported in the format Serving expects, or there is no way to deploy it. Now for the exciting part: you might expect this to take a lot of code, but surprisingly little is needed.
First, define a function named _serving_input_receiver_fn that returns the input in the format Serving requires.

def _serving_input_receiver_fn():
    # One rank-1 float placeholder per continuous feature.
    feature_placeholders = {k: tf.placeholder(tf.float64, [None])
                            for k in CONTINUOUS_COLUMNS}
    # The model was trained on [batch, 1] tensors, so add a trailing axis.
    features = {key: tf.expand_dims(tensor, -1)
                for key, tensor in feature_placeholders.items()}
    return tf.contrib.learn.utils.input_fn_utils.InputFnOps(
        features,
        None,  # no labels at serving time
        feature_placeholders)
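The tf.expand_dims(tensor, -1) step matters because the model was trained on features of shape [batch, 1], while the serving placeholders are rank 1 ([None]). TensorFlow's expand_dims behaves like NumPy's here, so the shape change can be sketched without TensorFlow:

```python
import numpy as np

# A batch of 4 values for one feature, as it would arrive at the
# rank-1 placeholder: shape (4,)
batch = np.array([1.0, 2.0, 3.0, 4.0])

# What tf.expand_dims(tensor, -1) does: append a trailing axis -> shape (4, 1)
expanded = np.expand_dims(batch, -1)

print(batch.shape, expanded.shape)  # (4,) (4, 1)
```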

Then add a single export call in main and you are done. One statement is all it takes:

model.export_savedmodel(export_dir_base="./serving_model",
                        serving_input_fn=_serving_input_receiver_fn)

Once the export completes, you will find a timestamped version directory under the export path, containing a saved_model.pb file and a variables/ subdirectory.

That completes training the model and exporting it in the Serving format; next, deploy it.
3. Deploying to Serving
What, Serving is painful to install? Not at all: Docker makes it trivial. If you have not picked up Docker yet, it is well worth learning first. Start by pulling the image:

docker pull bitnami/tensorflow-serving

Then run it, mounting the model export directory from the previous step into the container at /bitnami/model-data:

docker run -d --name tensorflow-serving \
    -e TENSORFLOW_SERVING_MODEL_NAME=dnn \
    -p 9000:9000 \
    -v /chenxf/python_project/dnn_classifier/serving_model/:/bitnami/model-data \
    docker.io/bitnami/tensorflow-serving

Check the logs:

docker logs -f  tensorflow-serving

Output like the following means the model loaded successfully:

2018-03-30 02:47:18.090297: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: dnn version: 1518077573}
2018-03-30 02:47:18.090316: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: dnn version: 1518077573}
2018-03-30 02:47:18.090349: I external/org_tensorflow/tensorflow/contrib/session_bundle/bundle_shim.cc:360] Attempting to load native SavedModelBundle in bundle-shim from: /bitnami/model-data/1518077573
2018-03-30 02:47:18.090374: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:236] Loading SavedModel from: /bitnami/model-data/1518077573
2018-03-30 02:47:18.108009: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2
2018-03-30 02:47:18.135780: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:155] Restoring SavedModel bundle.
2018-03-30 02:47:18.152961: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:190] Running LegacyInitOp on SavedModel bundle.
2018-03-30 02:47:18.155245: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:284] Loading SavedModel: success. Took 64855 microseconds.
2018-03-30 02:47:18.155578: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: dnn version: 1518077573}
E0330 02:47:18.160940286      55 ev_epoll1_linux.c:1051]     grpc epoll fd: 3
2018-03-30 02:47:18.162335: I tensorflow_serving/model_servers/main.cc:288] Running ModelServer at 0.0.0.0:9000 ...

Serving is now up and running. Even better, when you retrain later, Serving watches the model directory and automatically picks up and deploys the newest model (as long as the export path stays the same). Quite smart!
So are we done? Not quite: a deployed model is useless until something calls it. How do we use it?

4. Using Serving
First, install the Serving client. At the time of writing it only supports Python 2; getting it working under Python 3 takes extra effort.
See the official Serving installation docs for reference.

pip install tensorflow-serving-api

Verify the installation with:

python -c "import tensorflow_serving"

Next, write the Serving client. The comments below explain each step in detail:

from grpc.beta import implementations
import numpy as np
import tensorflow as tf

from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2
import pandas as pd

# Address of the Serving server.
tf.app.flags.DEFINE_string('server', 'localhost:9000',
                           'PredictionService host:port')
FLAGS = tf.app.flags.FLAGS

# The continuous feature columns, defined exactly as during training.
CONTINUOUS_COLUMNS = ['MEAN_INTERVAL_CALL', 'SD_INTERVAL_CALL',
                      'NUM_CALL', 'MEAN_DURATION',
                      'MOST_DURATION', 'MOST_DURATION_NUM',
                      'SD_DURATION', 'TOTAL_DURATION',
                      'TRK_NUM',
                      ]
# Read the test file; only the first 10 rows are used here.
data = pd.read_csv('../dnn_classifier/data/20180107_label.csv')
data = data[0:10]
n_samples = 100

# Split the server address into host and port.
host, port = FLAGS.server.split(':')

# Connect to the Serving server.
channel = implementations.insecure_channel(host, int(port))
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

# Build the request.
request = predict_pb2.PredictRequest()
request.model_spec.name = 'dnn'
request.model_spec.signature_name = tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY

# Fill each feature with a rank-1 tensor, matching the serving placeholders.
for column in CONTINUOUS_COLUMNS:
    request.inputs[column].CopyFrom(
        tf.contrib.util.make_tensor_proto(data[column].values,
                                          shape=[data[column].size],
                                          dtype=tf.double))

result = stub.Predict(request, 10.0)  # 10-second timeout

print(result.outputs['classes'])

Check the output: the 0/1 values are the predicted classes. That completes the full pipeline from training, through deployment, to prediction!
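Note that result.outputs['classes'] is a TensorProto; a DNNClassifier returns its class labels as byte strings such as b'0' and b'1'. The exact call to unpack the proto depends on your TF version (tf.contrib.util.make_ndarray in 1.x), but once you have the raw values, turning them into integer labels is simple. A stand-in sketch with hypothetical values:

```python
# Hypothetical raw values as they might come back from the TensorProto
# (DNNClassifier encodes class labels as byte strings).
raw_classes = [b'0', b'1', b'1', b'0']

# Decode the byte strings into plain integer labels.
labels = [int(v) for v in raw_classes]
print(labels)  # [0, 1, 1, 0]
```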

Do not reprint without permission: 零点智能 » From DNNClassifier to TensorFlow Serving Deployment