Deploying TensorFlow's benchmark_model to Android to test model performance
Host: the PC platform used for compilation; in this article, Ubuntu 16.04.
Target: the Android platform being benchmarked (what matters is mainly its instruction-set architecture); in this article, armeabi-v7a.
This article only covers compiling the Android build of the benchmark_model tool.
1. Deploy TensorFlow and Bazel on the Host
(a) Clone the TensorFlow repository locally
unaguo@unaguo:~/backends/$ git clone https://github.com/tensorflow/tensorflow
unaguo@unaguo:~/backends/$ cd tensorflow
unaguo@unaguo:~/backends/tensorflow/$ git checkout r1.8
unaguo@unaguo:~/backends/tensorflow/$ ./configure
Note: configure is the global configuration script in the TensorFlow root directory.
Answer "No" to every question except the jemalloc one.
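These answers can also be supplied non-interactively by exporting environment variables before running ./configure. A minimal sketch, assuming the variable names read by the r1.8 configure script (verify against your own checkout before relying on them):

```shell
# "No" to everything except jemalloc, expressed as environment variables.
# Set these, then run ./configure as above; it will skip the matching prompts.
export TF_NEED_JEMALLOC=1   # the single "Yes" answer
export TF_NEED_GCP=0
export TF_NEED_HDFS=0
export TF_NEED_S3=0
export TF_ENABLE_XLA=0
export TF_NEED_CUDA=0
```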
(b) Install Bazel locally
First install JDK 8:
unaguo@unaguo:~/backends/$ sudo apt-get install openjdk-8-jdk
The Bazel installation steps below were copied from a colleague; this is not my own configuration process, and I will verify it later.
unaguo@unaguo:~/backends/$ echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
unaguo@unaguo:~/backends/$ curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
unaguo@unaguo:~/backends/$ sudo apt-get update && sudo apt-get install bazel
Note: there is no need to compile all of TensorFlow with Bazel.
2. On the Host, configure the cross-compilation toolchain (NDK)
See the tensorflow-build project on GitHub, which provides the special NDK & SDK used for compiling TensorFlow.
(a) Clone the project from the link above locally
unaguo@unaguo:~/backends/$ git clone https://github.com/snipsco/tensorflow-build.git
(b) Build the project (this sets up the cross-compilation toolchain)
unaguo@unaguo:~/backends/$ cd tensorflow-build
unaguo@unaguo:~/backends/tensorflow-build/$ ./compile-android.sh
3. Compile benchmark_model for Android
(a) Refer to tensorflow/tensorflow/tools/benchmark/README.md
That README points to the Android example app project for instructions on editing the WORKSPACE file. Run

unaguo@unaguo:~/backends/tensorflow/$ ./configure

and answer "Yes" when the script asks whether to automatically configure ./WORKSPACE for Android builds.
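For reference, answering "Yes" makes configure append Android repository rules to ./WORKSPACE, roughly of this shape (the paths and API level here are placeholders; configure fills in the values you supply, not these):

```python
# Sketch of what configure writes into ./WORKSPACE -- placeholder values only.
android_sdk_repository(
    name = "androidsdk",
    path = "/path/to/android-sdk",
)

android_ndk_repository(
    name = "androidndk",
    path = "/path/to/android-ndk-r12b",
    api_level = 21,
)
```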
(b) cd into the tensorflow workspace directory
Make sure bazel is run from inside the tensorflow directory:
unaguo@unaguo:~/backends/$ cd tensorflow
Compile benchmark_model
unaguo@unaguo:~/backends/tensorflow/$ bazel build -c opt \
    --crosstool_top=//external:android/crosstool \
    --cpu=armeabi-v7a \
    --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
    --config monolithic \
    tensorflow/tools/benchmark:benchmark_model
Note: I don't remember whether this step reports a "no build-tools" error.
If it does, my workaround was to rename the /tensorflow-build/target/android-ndk-r12b/build folder to build-tools.
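One way to apply that rename, assuming the NDK path used in this walkthrough (adjust it to wherever tensorflow-build unpacked the NDK on your machine):

```shell
# Rename the NDK's "build" directory to "build-tools" as a workaround for
# the "no build-tools" error; this is a rename in place, nothing is copied.
NDK_DIR=~/backends/tensorflow-build/target/android-ndk-r12b
mv "$NDK_DIR/build" "$NDK_DIR/build-tools"
```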
4. Push the tool to the phone with adb
(a) Download platform-tools
The cross-compilation project cloned from GitHub above did not include platform-tools in my checkout, and I don't know why; in any case, a colleague gave me a copy.
platform-tools can be downloaded from this link; as far as I can tell it is the official page, so it should be dependable.
(b) After unpacking, put it into the tensorflow-build project from earlier.
Place the entire platform-tools folder inside /tensorflow-build/target/android-ndk-r12b/, at the same level as the platforms folder.
Then create a shortcut (symlink) to the adb binary under platform-tools in the
~/backends/tensorflow
directory. That way, when working in tensorflow's workspace later, you won't need to type adb's full path.
Of course, you could instead add adb to the system PATH.
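The shortcut step can be sketched as a symlink, using the paths from this walkthrough (adjust both to your own layout):

```shell
# Symlink adb into the tensorflow checkout so that ./adb works from there.
PLATFORM_TOOLS=~/backends/tensorflow-build/target/android-ndk-r12b/platform-tools
TF_ROOT=~/backends/tensorflow
ln -sf "$PLATFORM_TOOLS/adb" "$TF_ROOT/adb"
```

Alternatively, append the platform-tools directory to PATH in ~/.bashrc so a plain adb works from any directory.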
Then cd to the /tensorflow path and push the benchmark_model tool to the phone.
First, make sure the phone is connected and rooted, and set USB debugging to file-transfer mode.
Then:
unaguo@unaguo:~/backends/tensorflow/$ ./adb push bazel-bin/tensorflow/tools/benchmark/benchmark_model /data/local/tmp
Push the model file to the phone in the same way, usually to the /data/local/tmp path.
For example:
unaguo@unaguo:~/backends/tensorflow/$ ./adb push /your/path/to/model/MS512.pb /data/local/tmp
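If the pushed binary refuses to run, it may lack the executable bit on the device; marking it executable after pushing is a step the upstream benchmark README also takes:

```shell
# Mark the pushed benchmark binary as executable on the device.
./adb shell chmod +x /data/local/tmp/benchmark_model
```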
5. Test the model on the Android phone (benchmarking)
Use adb shell to enter an adb debugging session. Note that since we placed the adb shortcut in the tensorflow directory earlier, the adb commands below are prefixed with ./
unaguo@unaguo:~/backends/tensorflow/$ ./adb shell
Z91:/ #
Then use benchmark_model
For example:
Z91:/ # /data/local/tmp/benchmark_model \
    --graph=/data/local/tmp/MS512_O.pb \
    --input_layer="input_1:0" \
    --input_layer_shape="1,512,512,3" \
    --input_layer_type="float" \
    --output_layer="proba/Sigmoid:0" \
    --show_run_order=false \
    --show_time=false \
    --show_memory=true \
    --show_summary=true \
    --show_flops=true \
    --max_num_runs=50
Options that must be changed for your own model:

Option | Meaning |
---|---|
--graph | Model file name |
--input_layer | Input node name |
--input_layer_shape | Shape of the input |
--output_layer | Output node name |
Appendix 1: Bug log
(a) When testing on the phone, FLOPS could not be calculated, with this error:
FLOPs calculation failed with Invalid argument: You must feed a value for placeholder tensor 'input_1' with dtype float and shape [?,512,512,3]
Solution: append ":0" to the input node name, that is:
--input_layer="input_1:0"