QAttention

Description

Quantization of Multi-Head Self Attention.

 

Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of a node.

Graphs in : cluster, ONNX model architecture.

input (heterogeneous) – T1 : object, 3D input tensor with shape (batch_size, sequence_length, input_hidden_size).
weights (heterogeneous) – T2 : object, 2D input tensor with shape (input_hidden_size, 3 * hidden_size), hidden_size = num_heads * head_size.
bias (heterogeneous) – T3 : object, 1D input tensor with shape (3 * hidden_size).
input_scale (heterogeneous) – T3 : object, scale of quantized input tensor. It’s a scalar, which means a per-tensor/layer quantization.
weight_scale (heterogeneous) – T3 : object, scale of quantized weight tensor. It’s a scalar or a 1D tensor, which means a per-tensor/per-column quantization. Its size should be 3 * hidden_size if it is per-column quantization.
mask_index (optional, heterogeneous) – T4 : object, attention mask index with shape (batch_size).
input_zero_point (optional, heterogeneous) – T1 : object, zero point of quantized input tensor. It’s a scalar, which means a per-tensor/layer quantization.
weight_zero_point (optional, heterogeneous) – T2 : object, zero point of quantized weight tensor. It’s a scalar or a 1D tensor, which means a per-tensor/per-column quantization. Its size should be 3 * hidden_size if it is per-column quantization.
past (optional, heterogeneous) – T3 : object, past state for key and value with shape (2, batch_size, num_heads, past_sequence_length, head_size).
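The quantized inputs relate to their real values through the usual linear dequantization formula, real = scale * (quantized - zero_point), after which the weights act as a packed QKV projection of shape (input_hidden_size, 3 * hidden_size). A minimal NumPy sketch of this relationship (all sizes and values below are hypothetical, chosen only to illustrate the shapes described above; this is not the node's actual implementation):

```python
import numpy as np

# Hypothetical sizes, for illustration only.
batch_size, sequence_length, input_hidden_size = 2, 4, 8
num_heads, head_size = 2, 4
hidden_size = num_heads * head_size  # 8

rng = np.random.default_rng(0)

# Quantized input (T1) and packed QKV weights (T2), per-tensor scales/zero points (scalars).
q_input = rng.integers(-128, 128, size=(batch_size, sequence_length, input_hidden_size), dtype=np.int8)
q_weight = rng.integers(-128, 128, size=(input_hidden_size, 3 * hidden_size), dtype=np.int8)
input_scale, weight_scale = np.float32(0.02), np.float32(0.01)
input_zero_point, weight_zero_point = np.int8(0), np.int8(0)

# Linear dequantization: real = scale * (quantized - zero_point)
x = input_scale * (q_input.astype(np.float32) - input_zero_point)
w = weight_scale * (q_weight.astype(np.float32) - weight_zero_point)

# Packed QKV projection, then split into query / key / value.
bias = np.zeros(3 * hidden_size, dtype=np.float32)
qkv = x @ w + bias                   # (batch, seq, 3 * hidden_size)
q, k, v = np.split(qkv, 3, axis=-1)  # each (batch, seq, hidden_size)
print(q.shape, k.shape, v.shape)     # (2, 4, 8) (2, 4, 8) (2, 4, 8)
```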

Parameters : cluster,

do_rotary :Β boolean, whether to use rotary position embedding.
Default value β€œFalse”.
mask_filter_value : float, the value used to fill masked-out positions in the attention mask.
Default value β€œ-10000”.
num_heads : integer, number of attention heads.
Default value β€œ0”.
past_present_share_buffer : integer, when set, the corresponding past and present are the same tensor, with shape (2, batch_size, num_heads, max_sequence_length, head_size).
Default value β€œ0”.
scale : float, custom scale applied to the attention scores if specified; when 0, 1/sqrt(head_size) is used.
Default value β€œ0”.
unidirectional : boolean, whether every token can only attend to previous tokens.
Default value β€œFalse”.
training? : boolean, whether the layer is in training mode (can store data for backward).
Default value β€œTrue”.
lda coeff : float, the coefficient by which the loss derivative is multiplied before being sent to the previous layer during the backward pass.
Default value β€œ1”.

name (optional) : string, name of the node.
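The parameters above can be illustrated with a plain NumPy sketch of scaled dot-product attention for one batch element: when scale is 0 (the default), the conventional factor 1/sqrt(head_size) is assumed to apply instead; with unidirectional set, each token attends only to itself and previous tokens, and masked positions receive mask_filter_value before the softmax. All sizes below are hypothetical, and this sketch omits the quantized projection step:

```python
import numpy as np

# Hypothetical sizes, for illustration only.
num_heads, head_size, seq = 2, 4, 3
rng = np.random.default_rng(1)
q = rng.standard_normal((num_heads, seq, head_size)).astype(np.float32)
k = rng.standard_normal((num_heads, seq, head_size)).astype(np.float32)
v = rng.standard_normal((num_heads, seq, head_size)).astype(np.float32)

scale = 0.0                  # default: fall back to 1/sqrt(head_size)
mask_filter_value = -10000.0 # default mask fill value
unidirectional = True

effective_scale = scale if scale != 0.0 else 1.0 / np.sqrt(head_size)
scores = effective_scale * (q @ k.transpose(0, 2, 1))  # (heads, seq, seq)

if unidirectional:
    # Causal mask: positions above the diagonal are filled with mask_filter_value.
    causal = np.triu(np.ones((seq, seq), dtype=bool), k=1)
    scores = np.where(causal, mask_filter_value, scores)

# Numerically stable softmax over the last axis.
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ v  # (heads, seq, head_size)
```

With unidirectional enabled, the first token can only attend to itself, so its attention row is one-hot.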

Output parameters

 

Graphs out : cluster, ONNX model architecture.

output (heterogeneous) – T3 : object, 3D output tensor with shape (batch_size, sequence_length, hidden_size).
present (optional, heterogeneous) – T3 : object, present state for key and value with shape (2, batch_size, num_heads, past_sequence_length + sequence_length, head_size).
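As a shape sketch (hypothetical sizes), the present output corresponds to the past state concatenated with the current key/value state along the sequence axis, which is how past_sequence_length + sequence_length arises in the shape above:

```python
import numpy as np

# Hypothetical sizes, for illustration only.
batch, num_heads, head_size = 2, 2, 4
past_len, seq_len = 5, 3

past = np.zeros((2, batch, num_heads, past_len, head_size), dtype=np.float32)
current_kv = np.zeros((2, batch, num_heads, seq_len, head_size), dtype=np.float32)

# Concatenate along the sequence axis (axis 3).
present = np.concatenate([past, current_kv], axis=3)
print(present.shape)  # (2, 2, 2, 8, 4)
```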

Type Constraints

T1 in (tensor(int8), tensor(uint8)) : Constrain input types to int8/uint8 tensors.

T2 in (tensor(int8), tensor(uint8)) : Constrain weight types to int8/uint8 tensors.

T3 in (tensor(float), tensor(float16)) : Constrain input and output types to float tensors.

T4 in (tensor(int32)) : Constrain mask index to integer types.

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram to get the depicted code added to your VI (do not forget to install the Deep Learning library to run it).