Create Inference Session From Model

Description

Initializes an Inference Session from a Deep Learning Toolkit Model. Type : polymorphic.

 

Input parameters

 

Execution Device : enum, selects the hardware device on which the model will run.
Model in : object, model architecture.
Parameters : cluster, groups the two configuration clusters below.

Sessions Parameters : cluster (see the Python sketch after this list).

intra_op_num_threads : integer, number of threads used within each operator to parallelize computations. If the value is 0, ONNX Runtime automatically uses the number of physical CPU cores.
inter_op_num_threads : integer, number of threads used between operators to execute multiple graph nodes in parallel. If set to 0, this parameter is ignored when execution_mode is ORT_SEQUENTIAL. In ORT_PARALLEL mode, 0 means ONNX Runtime automatically selects a suitable number of threads (usually equal to the number of cores).
execution_mode : enum, controls whether the graph executes nodes one after another or allows parallel execution when possible. ORT_SEQUENTIAL runs nodes in order, ORT_PARALLEL runs them concurrently.
deterministic_compute : boolean, forces deterministic execution, meaning results will always be identical for the same inputs.
graph_optimization_level : enum, defines how much ONNX Runtime optimizes the computation graph before running the model.
optimized_model_file_path : path, file path to save the optimized model after graph analysis.
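
These options correspond to ONNX Runtime session options. Since LabVIEW snippets are graphical, here is a minimal Python sketch of the equivalent configuration through the onnxruntime package; it illustrates what the cluster configures rather than the toolkit's own API, and the chosen values are examples only.

    import onnxruntime as ort

    # Build session options mirroring the Sessions Parameters cluster.
    so = ort.SessionOptions()
    so.intra_op_num_threads = 0          # 0 = one thread per physical CPU core
    so.inter_op_num_threads = 0          # only meaningful in ORT_PARALLEL mode
    so.execution_mode = ort.ExecutionMode.ORT_SEQUENTIAL
    so.use_deterministic_compute = True  # identical results for identical inputs
    so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
    so.optimized_model_file_path = "optimized_model.onnx"  # example path for the optimized graph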

CUDA Parameters : cluster (see the Python sketch after this list).

device id : integer, selects which GPU to use (0 = first GPU).
algo : enum, controls the algorithm used for cuDNN convolutions.
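
In the same spirit, a Python sketch of how such CUDA parameters are passed in ONNX Runtime: device id corresponds to the CUDAExecutionProvider option "device_id", and algo most likely corresponds to "cudnn_conv_algo_search" (the cuDNN convolution algorithm selection strategy). That mapping, like the model path, is an assumption made for illustration.

    # CUDA execution provider options mirroring the CUDA Parameters cluster.
    cuda_options = {
        "device_id": 0,                          # first GPU
        "cudnn_conv_algo_search": "EXHAUSTIVE",  # assumed equivalent of "algo"
    }

    # Create the inference session (corresponds to "Inference out" below),
    # falling back to the CPU if the GPU provider is unavailable.
    session = ort.InferenceSession(
        "model.onnx",                            # example model path
        sess_options=so,                         # options from the sketch above
        providers=[("CUDAExecutionProvider", cuda_options), "CPUExecutionProvider"],
    )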

Output parameters

 

Inference out : object, inference session.

Example

All these examples are PNG snippets: you can drop a snippet onto the block diagram and the depicted code will be added to your VI (do not forget to install the Deep Learning library to run it).