QuantizeWithOrder

Description

Quantizes the input matrix to a specific memory layout used by cuBLASLt.
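
For intuition only, the short NumPy sketch below illustrates the per-tensor arithmetic (divide the input by scale_input, round, and saturate to the int8 range). It is an illustrative assumption, not the library's implementation, and it does not reproduce the cuBLASLt layout reordering selected by order_input / order_output.

import numpy as np

def quantize_int8(x, scale):
    # Divide by the scale, round to nearest, saturate to [-128, 127].
    q = np.rint(x / scale)
    return np.clip(q, -128, 127).astype(np.int8)

x = np.random.randn(4, 8).astype(np.float32)  # (ROWS, COLS) input
scale = np.float32(0.05)                      # scale_input
q = quantize_int8(x, scale)                   # int8 output tensor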

 

Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of a node.

 Graphs in : cluster, ONNX model architecture.

input (heterogeneous) – F : object, input tensor of shape (ROWS, COLS). If it has fewer than 2 dimensions, it is broadcast to (1, X). If it is 3-D, it is treated as (B, ROWS, COLS).
scale_input (heterogeneous) – S : object, scale of the input.

 Parameters : cluster,

 order_input : enum, cuBLASLt order of the input matrix. ORDER_COL = 0, ORDER_ROW = 1, ORDER_COL32 = 2, ORDER_COL4_4R2_8C = 3, ORDER_COL32_2R_4R4 = 4. Please refer to https://docs.nvidia.com/cuda/cublas/index.html#cublasLtOrder_t for their meaning (a text-based construction sketch showing these integer codes is given after the Type Constraints section).
Default value "ORDER_COL".
 order_output : enum, cuBLASLt order of the output matrix.
Default value "ORDER_COL".
 training? : boolean, whether the layer is in training mode (can store data for the backward pass).
Default value "True".
 lda coeff : float, coefficient by which the loss derivative is multiplied before being passed to the previous layer (the backward pass traverses the layers in reverse order).
Default value "1".

 name (optional) : string, name of the node.

Output parameters

 

 output (heterogeneous) – Q : object, output tensor.

Type Constraints

Q in (tensor(int8)) : Constrain the output type to int8 tensors.

F in (tensor(float), tensor(float16)) : Constrain to float types.

S in (tensor(float)) : Constrain Scale to float32 types.
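
For readers working outside LabVIEW, the following Python sketch shows how a node of this type could be declared with the onnx helper API. It assumes the com.microsoft contrib-operator spelling of the inputs and attributes (order_input / order_output passed as the integer codes listed above); the tensor names are hypothetical.

from onnx import helper

# Hypothetical tensor names; 2 = ORDER_COL32 from the enum above.
node = helper.make_node(
    "QuantizeWithOrder",
    inputs=["input", "scale_input"],
    outputs=["output"],
    domain="com.microsoft",
    order_input=2,
    order_output=2,
)
print(node)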

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram and the depicted code will be added to your VI (do not forget to install the Deep Learning library to run it).