TensorRT

Description

Generates the ONNX graph, then creates a TensorRT session from it. Type : polymorphic.
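
For reference, the node's behavior corresponds to creating an ONNX Runtime inference session whose execution provider is TensorRT. A minimal Python sketch of that flow, assuming an ONNX Runtime build with TensorRT support (the model path is a placeholder):

    import onnxruntime as ort

    # Run the ONNX graph through TensorRT, falling back to CUDA/CPU
    # for any nodes TensorRT cannot handle.
    session = ort.InferenceSession(
        "model.onnx",  # placeholder path
        providers=[
            ("TensorrtExecutionProvider", {"device_id": 0}),
            "CUDAExecutionProvider",
            "CPUExecutionProvider",
        ],
    )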

 

Input parameters

 

Model in : object, the ONNX object that serves as the parent class providing the core structure and functionality shared by Inference.

Sessions Parameters : cluster

intra_op_num_threads : integer, number of threads used within each operator to parallelize computations. If the value is 0, ONNX Runtime automatically uses the number of physical CPU cores.
inter_op_num_threads : integer, number of threads used between operators to execute multiple graph nodes in parallel. If set to 0, this parameter is ignored when execution_mode is ORT_SEQUENTIAL. In ORT_PARALLEL mode, 0 means ONNX Runtime automatically selects a suitable number of threads (usually equal to the number of cores).
execution_mode : enum, controls whether the graph executes nodes one after another or allows parallel execution when possible. ORT_SEQUENTIAL runs nodes in order; ORT_PARALLEL runs them concurrently.
deterministic_compute : boolean, forces deterministic execution, meaning results are always identical for the same inputs.
graph_optimization_level : enum, defines how much ONNX Runtime optimizes the computation graph before running the model.
optimized_model_file_path : path, file path where the optimized model is saved after graph analysis.
profiling output dir : path, directory where ONNX Runtime saves profiling output files. Setting a valid (non-empty) path automatically enables profiling; an empty path leaves profiling disabled.
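
These fields map onto ONNX Runtime's SessionOptions. A hedged Python sketch of the equivalent settings, which the node applies internally (paths are placeholders; the deterministic_compute switch is omitted because its programmatic form depends on the ONNX Runtime version):

    import onnxruntime as ort

    so = ort.SessionOptions()
    so.intra_op_num_threads = 0        # 0 = one thread per physical core
    so.inter_op_num_threads = 0        # ignored when execution_mode is ORT_SEQUENTIAL
    so.execution_mode = ort.ExecutionMode.ORT_SEQUENTIAL
    so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
    so.optimized_model_filepath = "optimized.onnx"  # placeholder save path
    so.enable_profiling = True              # mirrors a non-empty profiling output dir
    so.profile_file_prefix = "ort_profile"  # placeholder prefix for the JSON trace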

TRT Parameters : cluster

Device And Stream : cluster,

Device Index : integer, ID of the GPU used for inference (default = 0).
Use Custom Compute Stream? : boolean, if true, uses a user-defined CUDA compute stream instead of the default stream.
Custom Compute Stream (Pointer) : integer, pointer or reference to the user’s CUDA stream to be used during inference.
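
In terms of the TensorRT Execution Provider's option dictionary, this cluster corresponds roughly to the entries below. The user_compute_stream value is the integer address of a caller-owned CUDA stream passed as a string; treat the exact key as version-dependent:

    # Hedged sketch: device and stream selection for the TensorRT EP.
    trt_device_opts = {
        "device_id": 0,  # GPU used for inference
        # Uncomment to reuse an existing CUDA stream instead of the default one
        # (pointer value is a placeholder; support varies by ORT version):
        # "user_compute_stream": str(stream_ptr),
    }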

Parser & Subgraph Options : cluster,

Max Partition Attempts : integer, maximum number of attempts for TensorRT to partition and identify compatible subgraphs within the ONNX model.
Minimum Subgraph Size : integer, minimum number of nodes required in a subgraph before TensorRT can take ownership of it.
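
These two fields appear to map onto the provider options below; the values shown are ONNX Runtime's usual defaults, stated here as an assumption:

    # Hedged sketch: graph partitioning limits for the TensorRT EP.
    trt_parser_opts = {
        "trt_max_partition_iterations": 1000,  # partition attempts before falling back
        "trt_min_subgraph_size": 1,            # smallest node count TensorRT may claim
    }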

Memory & Workspace Management : cluster,

Max Workspace Memory (Bytes) : integer, maximum workspace memory allocated for TensorRT (0 = use maximum available GPU memory).
Share Memory Between Subgraphs : boolean, if true, allows multiple TensorRT subgraphs to reuse the same execution context and memory buffers.
Auxiliary Stream Count : integer, defines how many auxiliary CUDA streams are created per main inference stream.

        • -1 = automatic heuristic mode
        • 0 = minimize memory usage
        • >0 = allow multiple auxiliary streams for parallel execution
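
In provider-option terms, this cluster corresponds roughly to:

    # Hedged sketch: memory and stream management for the TensorRT EP.
    trt_memory_opts = {
        "trt_max_workspace_size": 0,                # bytes; 0 = use maximum available GPU memory
        "trt_context_memory_sharing_enable": True,  # reuse context memory across subgraphs
        "trt_auxiliary_streams": -1,                # -1 heuristic, 0 minimize memory, >0 parallel streams
    }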

Precision & Numeric Modes : cluster,

Enable FP16 : boolean, enables half-precision (float16) computation for faster performance on supported GPUs.
Enable BF16 : boolean, enables bfloat16 precision for models trained in BF16 format.
Enable INT8 : boolean, enables INT8 quantization for maximum inference speed.
INT8 Calibration Table Name : path, specifies the name or path of the calibration table used for INT8 mode.
Use Native INT8 Calibration? : boolean, if true, uses the calibration table generated directly by TensorRT.
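
A hedged sketch of the matching provider options (the BF16 switch is left out because its option name varies across ONNX Runtime versions; the calibration table name is a placeholder):

    # Hedged sketch: numeric precision modes for the TensorRT EP.
    trt_precision_opts = {
        "trt_fp16_enable": True,    # float16 on supported GPUs
        "trt_int8_enable": False,   # INT8 needs a calibration table
        "trt_int8_calibration_table_name": "calibration.flatbuffers",  # placeholder
        "trt_int8_use_native_calibration_table": False,  # True = table produced by TensorRT itself
    }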

Engine Build, Cache & Paths : cluster,

Enable Engine Caching? : boolean, saves compiled TensorRT engines to disk to avoid rebuilding on future runs.
Engine Cache Directory : path, path where the TensorRT engine cache will be stored.
Enable Engine Decryption? : boolean, enables support for encrypted engine files.
Decryption Library Path : path, path to the library used to decrypt TensorRT engine files.
Build Engines Sequentially? : boolean, forces TensorRT to build engines one at a time instead of in parallel.
Enable Timing Cache? : boolean, enables TensorRT’s timing cache to accelerate repeated builds.
Timing Cache Directory : path, path where timing cache data will be stored.
Force Timing Cache Use? : boolean, forces the reuse of the timing cache even if the device profile has changed.
Enable Detailed Logs? : boolean, enables detailed logging for each build step and timing stage.
Use Build Heuristics? : boolean, uses heuristic algorithms to reduce engine build time (may affect optimal performance).
Enable Weight Sparsity? : boolean, allows TensorRT to exploit sparsity in model weights for better performance.
Engine Optimization Level (0–5) : integer, controls the engine builder optimization level.

        • 0–2 = fast build, reduced performance
        • 3 = default balance between speed and quality
        • 4–5 = best performance, longer build time
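
The caching and build switches of this cluster map onto trt_*-prefixed provider options; a hedged sketch of the most commonly used ones (directories are placeholders, and the decryption, sequential-build, heuristics, sparsity, and logging flags follow the same naming pattern):

    # Hedged sketch: engine and timing caches plus builder effort.
    trt_build_opts = {
        "trt_engine_cache_enable": True,        # reuse compiled engines across runs
        "trt_engine_cache_path": "trt_cache",   # placeholder directory
        "trt_timing_cache_enable": True,        # accelerate repeated builds
        "trt_timing_cache_path": "trt_cache",   # placeholder directory
        "trt_builder_optimization_level": 3,    # 0-2 fast build, 3 default, 4-5 best performance
    }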

Paths & Plugins : cluster,

Extra Plugin Libraries : array, list (semicolon-separated) of additional plugin library paths to load.
Tactic Source Rules : string, defines which tactic sources TensorRT should include or exclude (e.g. “-CUDNN,+CUBLAS”).
Enabled Preview Features : enum, comma-separated list of experimental TensorRT features to enable (e.g. “ALIASED_PLUGIN_IO_10_03”).
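
A hedged sketch of the corresponding provider options (library paths are placeholders):

    # Hedged sketch: plugins, tactic sources, and preview features.
    trt_plugin_opts = {
        "trt_extra_plugin_lib_paths": "libplugin_a.so;libplugin_b.so",  # ';'-separated placeholders
        "trt_tactic_sources": "-CUDNN,+CUBLAS",  # '+' includes, '-' excludes a source
        "trt_preview_features": "ALIASED_PLUGIN_IO_10_03",
    }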

Profile Shapes : cluster,

Minimum Input Shapes : string, specifies the smallest input dimensions used to build the TensorRT engine.
Maximum Input Shapes : string, defines the upper bound for dynamic input dimensions accepted by the engine.
Optimal Input Shapes : string, specifies the “typical” input size that TensorRT will optimize for.
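
The three strings use the provider's shape-profile syntax, input_name:dim1xdim2x..., with multiple inputs separated by commas. A hedged sketch for a single image input with a dynamic batch dimension (name and sizes are illustrative):

    # Hedged sketch: min/opt/max optimization profile for a dynamic batch.
    trt_shape_opts = {
        "trt_profile_min_shapes": "input:1x3x224x224",   # smallest accepted shape
        "trt_profile_opt_shapes": "input:8x3x224x224",   # shape TensorRT tunes for
        "trt_profile_max_shapes": "input:32x3x224x224",  # upper bound for dynamic dims
    }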

Advanced & DLA / CUDA Graph : cluster,

Enable DLA Acceleration? : boolean, enables inference execution on NVIDIA’s Deep Learning Accelerator (if supported).
DLA Core Index : integer, selects which DLA core to use (0 = default).
Dump TensorRT Subgraphs? : boolean, dumps all TensorRT-compiled subgraphs to disk for debugging.
Force LayerNorm in FP32? : boolean, forces LayerNorm operations to run in full precision (FP32) for numerical stability.
Enable CUDA Graph Execution? : boolean, executes the TensorRT engine using CUDA Graph for improved launch efficiency.
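
A hedged sketch of the matching provider options:

    # Hedged sketch: DLA offload, debugging, and CUDA Graph execution.
    trt_advanced_opts = {
        "trt_dla_enable": False,               # offload to the Deep Learning Accelerator if present
        "trt_dla_core": 0,                     # which DLA core to use
        "trt_dump_subgraphs": False,           # write compiled subgraphs to disk for debugging
        "trt_layer_norm_fp32_fallback": True,  # keep LayerNorm in FP32 for stability
        "trt_cuda_graph_enable": False,        # launch the engine through a CUDA Graph
    }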

Output parameters

 

Inference out : object, the created inference session.
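
Combining the sketches above, the session that Inference out represents would be created roughly as follows (all option dictionaries are the illustrative ones defined earlier; the model path is a placeholder):

    # Hedged sketch: assemble all option groups into one TensorRT EP session.
    all_trt_opts = {
        **trt_device_opts, **trt_parser_opts, **trt_memory_opts,
        **trt_precision_opts, **trt_build_opts, **trt_plugin_opts,
        **trt_shape_opts, **trt_advanced_opts,
    }
    session = ort.InferenceSession(
        "model.onnx",  # placeholder path
        sess_options=so,
        providers=[("TensorrtExecutionProvider", all_trt_opts),
                   "CUDAExecutionProvider", "CPUExecutionProvider"],
    )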

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram to add the depicted code to your VI (do not forget to install the Accelerator library to run it).
