TensorRT Integration Information
Environment Preparation
Besides the basic Environment Preparation, CUDA and TensorRT are also required. The current version only supports CUDA 10.1 and TensorRT 6.0.1.5.

Install CUDA 10.1 and set the installation directory in the environment variable ${CUDA_HOME}. Our build script uses this environment variable to locate CUDA.

Install TensorRT 6.0.1.5 and set the installation directory in the environment variable ${TENSORRT_PATH}. Our build script uses this environment variable to locate TensorRT.
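The two variables above can be set as follows. The install prefixes shown are examples only; substitute the locations where CUDA and TensorRT are actually installed on your machine.

```shell
# Example install prefixes -- adjust to where CUDA 10.1 and
# TensorRT 6.0.1.5 are actually installed on your machine.
export CUDA_HOME=/usr/local/cuda-10.1
export TENSORRT_PATH=/opt/TensorRT-6.0.1.5
```

Adding these lines to your shell profile (for example `~/.bashrc`) keeps them set across sessions.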
Build
In the Linux environment, use the build.sh script in the root directory of the MindSpore source code to build the MindSpore Lite package integrated with TensorRT. First set the environment variable MSLITE_GPU_BACKEND=tensorrt, then run the compilation command as follows.
bash build.sh -I x86_64
For more information about compilation, see Linux Environment Compilation.
Integration
Integration instructions
When integrating TensorRT features, developers should note the following:

Configure the TensorRT backend. For more information about using Runtime to perform inference, see Using Runtime to Perform Inference (C++).

Compile and execute the binary. If you use dynamic linking, refer to Compilation Output with the compilation option -I x86_64, and set environment variables so that the related libraries are dynamically linked.
export LD_LIBRARY_PATH=mindspore-lite-{version}-{os}-{arch}/runtime/lib/:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=user-installed-tensorrt-path/lib/:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=user-installed-cuda-path/lib/:$LD_LIBRARY_PATH
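To confirm the dynamic-link setup before running, `ldd` can be used to check whether a binary or shared library resolves all of its dependencies. The helper below is a small sketch; the package path in the usage comment is a placeholder for your actual unpacked directory.

```shell
# check_deps: print any unresolved shared-library dependencies of the
# given binary, or a confirmation message if everything resolves.
check_deps() {
  ldd "$1" | grep "not found" || echo "all dependencies of $1 resolved"
}

# Usage (the path below is a placeholder for your unpacked package):
# check_deps mindspore-lite-{version}-{os}-{arch}/runtime/lib/libmindspore-lite.so
```

If any line of output contains "not found", the corresponding TensorRT or CUDA directory is missing from LD_LIBRARY_PATH.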
Using Benchmark to Test TensorRT Inference
Pass the built package to a device with a TensorRT environment (TensorRT 6.0.1.5) and use the Benchmark tool to test TensorRT inference. Examples are as follows:
Test performance
./benchmark --device=GPU --modelFile=./models/test_benchmark.ms --timeProfiling=true
Test precision
./benchmark --device=GPU --modelFile=./models/test_benchmark.ms --inDataFile=./input/test_benchmark.bin --inputShapes=1,32,32,1 --accuracyThreshold=3 --benchmarkDataFile=./output/test_benchmark.out
For more information about the use of Benchmark, see Benchmark Use.
For environment variable settings, add the directory containing libmindspore-lite.so (under mindspore-lite-{version}-{os}-{arch}/runtime/lib), as well as the directories containing the TensorRT and CUDA .so libraries, to ${LD_LIBRARY_PATH}.
Supported Operators
For supported TensorRT operators, see Lite Operator List.