This article uses TensorRT 7.2.3.4 as an example.
Environment
- Hardware environment: Intel x86 processor, NVIDIA Tesla V100 GPU
- Software environment: Ubuntu 16.04 / Ubuntu 18.04
- Virtual environment: Docker 19.03 or later
NVIDIA Driver
See TensorRT 7.2.3 Installation Preparation for reference. TensorRT 7 requires at least CUDA 10.2, so the highest CUDA version supported by your driver must be no less than 10.2.
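A quick way to check the currently installed driver and the highest CUDA version it supports is nvidia-smi:
$ nvidia-smi
# The header shows the driver version, and the "CUDA Version" field shows the
# highest CUDA version this driver supports. If it is below 10.2, upgrade the
# driver as described below.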
Host computer
1. Remove the existing driver:
$ sudo apt remove --purge nvidia-*
2. Install the new driver:
Go to the NVIDIA driver download page and search for the installer package matching your GPU.
This article uses version 460.91.03, which supports CUDA 11.2.
Run $ sudo sh NVIDIA-Linux-x86_64-460.91.03.run and follow the prompts to install.
3. Update Docker
If you use containers, also install or update the NVIDIA container toolkit so that Docker can access the GPU:
$ sudo apt install nvidia-container-toolkit
Docker
$ docker run -itd --gpus all --name=tensorrt_test -h tensorrt_test -v /home/nas:/home/data nvidia/cuda:10.2-devel-ubuntu16.04 bash
Enter the container: $ docker exec -it tensorrt_test bash
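Before installing anything in the container, it is worth a quick sanity check that the GPU is visible inside it (this assumes the container was started with GPU access as shown above):
$ docker exec -it tensorrt_test nvidia-smi
# The Tesla V100 should appear in the device list; if the command fails,
# re-check the host driver and the --gpus / NVIDIA container runtime setup.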
CUDA
TensorRT 7.2.3.4 supports CUDA 10.2, 11.0 update 1, 11.1 update 1, and 11.2 update 1. Choose the appropriate version to download and install; this article uses CUDA 11.0 update 1.
1. Download the CUDA installation package
For Installer Type, selecting runfile (local) is recommended.
$ wget https://developer.download.nvidia.com/compute/cuda/11.0.3/local_installers/cuda_11.0.3_450.51.06_linux.run
2. Install CUDA
$ sudo sh cuda_11.0.3_450.51.06_linux.run
# ... license text ...
accept/decline/quit: accept
# Accept the license agreement
Install NVIDIA Accelerated Graphics Driver for Linux-x86_64? (y)es/(n)o/(q)uit: n
# Whether to install the graphics driver. It is already installed, so select n
Install the CUDA 11.0 Toolkit? (y)es/(n)o/(q)uit: y
# Whether to install the CUDA toolkit; select y
Enter Toolkit Location [ default is /usr/local/cuda-11.0 ]:
# Toolkit installation path; press Enter to accept the default
Do you want to install a symbolic link at /usr/local/cuda? (y)es/(n)o/(q)uit: y
# Create the /usr/local/cuda soft link. If another CUDA version is already installed,
# choose n here unless you are sure you want the link to point at this new version.
Install the CUDA 11.0 Samples? (y)es/(n)o/(q)uit: n
# Whether to install the samples; this depends on your own needs
Enter CUDA Samples Location [ default is /root ]:
# Only relevant if you chose y above; otherwise you can ignore it

# *** Installation output ***
Installing the CUDA Toolkit in /usr/local/cuda-11.0 ...
Finished copying samples.

===========
= Summary =
===========

Driver:   Not Selected
Toolkit:  Installed in /usr/local/cuda-11.0
Samples:  Not Selected

Please make sure that
 -   PATH includes /usr/local/cuda-11.0/bin
 -   LD_LIBRARY_PATH includes /usr/local/cuda-11.0/lib64, or, add /usr/local/cuda-11.0/lib64 to /etc/ld.so.conf and run ldconfig as root

To uninstall the CUDA Toolkit, run the uninstall script in /usr/local/cuda-11.0/bin

Please see CUDA_Installation_Guide_Linux.pdf in /usr/local/cuda-11.0/doc/pdf for detailed information on setting up CUDA.

***WARNING: Incomplete installation! This installation did not install the CUDA Driver. To install the driver using this installer, run the following command, replacing <CudaInstaller> with the name of this run file:
    sudo <CudaInstaller>.run -silent -driver

Logfile is /tmp/cuda_install_6388.log
# *** Installation complete ***
If a "Missing recommended library" error occurs during installation, run the following command to install the related dependencies:
$ sudo apt install freeglut3-dev build-essential libx11-dev libxmu-dev libxi-dev libgl1-mesa-glx libglu1-mesa libglu1-mesa-dev
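If you need to repeat this installation non-interactively (for example in a script or a container build), the runfile installer also accepts command-line flags; a minimal sketch, assuming the driver is already installed and only the toolkit is wanted:
$ sudo sh cuda_11.0.3_450.51.06_linux.run --silent --toolkit
# --silent skips the interactive prompts; --toolkit installs only the CUDA toolkit
# (no driver, no samples). Run the installer with --help to list the flags
# supported by your version.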
3. Configure CUDA environment variables
Edit the ~/.bashrc file and add the following environment variables at the end of the file:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
export PATH=$PATH:/usr/local/cuda/bin
export CUDA_HOME=/usr/local/cuda
After saving, make it take effect: $ source ~/.bashrc
Note: the environment variables above use /usr/local/cuda rather than /usr/local/cuda-11.0. This makes it easy to switch CUDA versions without editing the environment variables every time: to use another version, simply delete the soft link and re-create it pointing at the other CUDA installation (the link must still be named cuda so that it matches what is set in ~/.bashrc):
$ sudo rm -rf /usr/local/cuda
$ sudo ln -s /usr/local/cuda-10.2 /usr/local/cuda
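Whichever version the soft link points at, a quick check that the toolkit and the environment variables are picked up correctly (for the installation in this article the link should point at cuda-11.0):
$ nvcc -V
# Should report the CUDA 11.0 compiler (release 11.0)
$ ls -l /usr/local/cuda
# Should show the soft link pointing at /usr/local/cuda-11.0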
4. Install cuDNN
- Go to the cuDNN download page and select the 8.1 package to download.
- Unpack it: $ tar -xzvf cudnn-x.x-linux-x64-v8.x.x.x.tgz
- Copy the cuDNN header and library files into the CUDA directory:
$ sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
$ sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
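To confirm which cuDNN version is now in the CUDA directory, you can read the version macros from the header (for cuDNN 8 they are defined in cudnn_version.h):
$ grep CUDNN_MAJOR -A 2 /usr/local/cuda/include/cudnn_version.h
# Expect CUDNN_MAJOR 8 and CUDNN_MINOR 1 for the package used in this article.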
TensorRT
1. Download the TensorRT 7 installation package
Go to the TensorRT 7.2.3 download page and choose the appropriate package; the TAR package is recommended. This article uses TensorRT-7.2.3.4.Ubuntu-16.04.x86_64-gnu.cuda-11.0.cudnn8.1.tar.gz.
2. Unpack the archive
$ version=7.2.3.4
$ os=Ubuntu-16.04
$ arch=$(uname -m)
$ cuda=cuda-11.0
$ cudnn=cudnn8.1
$ tar xzvf TensorRT-${version}.${os}.${arch}-gnu.${cuda}.${cudnn}.tar.gz
3. Configure environment variables
At the end of ~/.bashrc, add export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:TensorRT-7.2.3.4/lib (use the absolute path of the extracted TensorRT-7.2.3.4 directory).
Make it take effect: $ source ~/.bashrc
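Before installing the Python wheels, you can check that the TensorRT shared libraries are found by running the bundled trtexec tool (this assumes your shell is in the directory where the tar was extracted and LD_LIBRARY_PATH was set as above):
$ TensorRT-${version}/bin/trtexec --help
# If the usage text prints instead of a "cannot open shared object file" error,
# LD_LIBRARY_PATH is set correctly.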
4. Install the TensorRT Python package
$ cd TensorRT-${version}/python
$ sudo pip3 install tensorrt-*-cp3x-none-linux_x86_64.whl   # replace cp3x with your Python version, e.g. cp36
5. Install graphsurgeon
$ cd TensorRT-${version}/graphsurgeon
$ sudo pip3 install graphsurgeon-0.4.5-py2.py3-none-any.whl
6. Install onnx-graphsurgeon
$ cd TensorRT-${version}/onnx_graphsurgeon
$ sudo pip3 install onnx_graphsurgeon-0.2.6-py2.py3-none-any.whl
7. Install UFF (optional)
If you need to use TensorRT with a converted TensorFlow model, you also need to install UFF:
$ cd TensorRT-${version}/uff
$ sudo pip3 install uff-0.6.9-py2.py3-none-any.whl
Check the installation: $ which convert-to-uff
8. Install pycuda and verify TensorRT
$ sudo pip3 install pycuda==2020.1
Enter the Python environment and try the imports:
import tensorrt
import pycuda
If both modules import without errors, the installation was successful.
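For a slightly stronger check than the bare imports, you can also print the TensorRT version and confirm that pycuda can see the GPU:
$ python3 -c "import tensorrt; print(tensorrt.__version__)"
# Should print 7.2.3.4
$ python3 -c "import pycuda.driver as cuda; cuda.init(); print(cuda.Device.count())"
# Should print the number of visible GPUs (at least 1)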