GatherND

Description

Given data tensor of rank r >= 1, indices tensor of rank q >= 1, and batch_dims integer b, this operator gathers slices of data into an output tensor of rank q + r - indices_shape[-1] - 1 - b.
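As a quick sanity check, the output rank follows directly from this formula; a minimal sketch (the function name is illustrative, not part of the library):

```python
def gathernd_output_rank(r: int, q: int, last_index_dim: int, b: int = 0) -> int:
    """Rank of the GatherND output: q + r - indices_shape[-1] - 1 - b."""
    return q + r - last_index_dim - 1 - b

# data of rank r=2, indices of rank q=2 with indices_shape[-1]=2, no batch dims:
# each index-tuple selects a scalar, so the output is 1-D.
print(gathernd_output_rank(r=2, q=2, last_index_dim=2))  # 1
```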

indices is a q-dimensional integer tensor, best thought of as a (q-1)-dimensional tensor of index-tuples into data, where each element defines a slice of data.

batch_dims (denoted as b) is an integer indicating the number of batch dimensions, i.e. the leading b dimensions of the data and indices tensors represent the batches, and the gather is performed on the remaining dimensions, starting from data[batch_dims:].

Some salient points about the inputs’ rank and shape:

  1. r >= 1 and q >= 1 must hold. There is no dependency condition between ranks r and q.
  2. The first b dimensions of the shapes of the indices and data tensors must be equal.
  3. b < min(q, r) must hold.
  4. indices_shape[-1] should have a value between 1 (inclusive) and r - b (inclusive).
  5. All values in indices are expected to be within bounds [-s, s-1] along an axis of size s, i.e. -data_shape[i] <= indices[...,i] <= data_shape[i] - 1. It is an error if any of the index values are out of bounds.
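The constraints above can be checked programmatically. Below is a minimal NumPy validation sketch, assuming array inputs (validate_gathernd_inputs is an illustrative helper, not part of the library):

```python
import numpy as np

def validate_gathernd_inputs(data, indices, batch_dims=0):
    """Raise ValueError if (data, indices, batch_dims) violate GatherND's rules."""
    r, q, b = data.ndim, indices.ndim, batch_dims
    if r < 1 or q < 1:
        raise ValueError("r >= 1 and q >= 1 are required")
    if not (0 <= b < min(q, r)):
        raise ValueError("batch_dims must satisfy 0 <= b < min(q, r)")
    if data.shape[:b] != indices.shape[:b]:
        raise ValueError("the first b dims of data and indices must be equal")
    k = indices.shape[-1]
    if not (1 <= k <= r - b):
        raise ValueError("indices_shape[-1] must be between 1 and r - b")
    # every indices[..., i] must lie in [-s, s-1] for the matching data axis
    for i in range(k):
        s = data.shape[b + i]
        col = indices[..., i]
        if col.min() < -s or col.max() > s - 1:
            raise ValueError(f"index out of bounds on data axis {b + i}")

# Passes silently: all indices are within bounds
validate_gathernd_inputs(np.arange(6).reshape(2, 3), np.array([[0, 2], [1, 0]]))
```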

The output is computed as follows:

The output tensor is obtained by mapping each index-tuple in the indices tensor to the corresponding slice of the input data.

  1. If indices_shape[-1] > r-b, it is an error condition.
  2. If indices_shape[-1] == r-b: since the rank of indices is q, indices can be thought of as N (q-b-1)-dimensional tensors containing 1-D tensors of dimension r-b, where N is an integer equal to the product of all the elements in the batch dimensions of indices_shape (N = 1 when b = 0). Let us think of each such r-b dimensional 1-D tensor as indices_slice. Each scalar value corresponding to data[0:b-1, indices_slice] is filled into the corresponding location of the (q-b-1)-dimensional tensor to form the output tensor (Example 1 below).
  3. If indices_shape[-1] < r-b: since the rank of indices is q, indices can be thought of as N (q-b-1)-dimensional tensors containing 1-D tensors of dimension < r-b. Let us think of each such tensor as indices_slice. Each tensor slice corresponding to data[0:b-1, indices_slice, :] is filled into the corresponding location of the (q-b-1)-dimensional tensor to form the output tensor (Examples 2, 3, 4 and 5 below).
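The two gather rules above can be sketched as a NumPy reference implementation (gather_nd is an illustrative name, not the library's API; the structure follows the steps described here):

```python
import numpy as np

def gather_nd(data, indices, batch_dims=0):
    r = data.ndim
    k = indices.shape[-1]  # indices_shape[-1]
    assert 1 <= k <= r - batch_dims, "error condition: indices_shape[-1] > r - b"

    # N = product of the leading batch dimensions (1 when batch_dims == 0)
    batch_shape = list(indices.shape[:batch_dims])
    n = int(np.prod(batch_shape)) if batch_shape else 1

    # output shape: batch dims + outer index dims, plus the trailing
    # data dims when k < r - b (slice gather instead of scalar gather)
    out_shape = batch_shape + list(indices.shape[batch_dims:-1])
    if k < r - batch_dims:
        out_shape += list(data.shape[batch_dims + k:])

    # flatten batches so each index-tuple addresses one batch of data
    flat_idx = indices.reshape(n, -1, k)
    flat_data = data.reshape((n,) + data.shape[batch_dims:])

    out = [flat_data[(bi,) + tuple(t)] for bi in range(n) for t in flat_idx[bi]]
    return np.asarray(out, dtype=data.dtype).reshape(out_shape)

data = np.array([[0, 1], [2, 3]])
# indices_shape[-1] == r: each index-tuple selects a scalar
print(gather_nd(data, np.array([[0, 0], [1, 1]])))  # [0 3]
# indices_shape[-1] < r: each index-tuple selects a row slice
print(gather_nd(data, np.array([[1], [0]])))        # [[2 3], [0 1]]
```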

This operator is the inverse of ScatterND.


Input parameters

 

specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of a node.

Graphs in : cluster, ONNX model architecture.

data (heterogeneous) – T : object, tensor of rank r >= 1.
indices (heterogeneous) – tensor(int64) : object, tensor of rank q >= 1. All index values are expected to be within bounds [-s, s-1] along an axis of size s. It is an error if any of the index values are out of bounds.

Parameters : cluster,

batch_dims : integer, the number of batch dimensions. The gather starts from dimension data[batch_dims:].
Default value “0”.
training? : boolean, whether the layer is in training mode (can store data for backward).
Default value “True”.
lda coeff : float, defines the coefficient by which the loss derivative will be multiplied before being sent to the previous layer (since during the backward pass we traverse the graph in reverse).
Default value “1”.

name (optional) : string, name of the node.

Output parameters

 

output (heterogeneous) – T : object, tensor of rank q + r - indices_shape[-1] - 1 - b.

Type Constraints

T in (tensor(bfloat16), tensor(bool), tensor(complex128), tensor(complex64), tensor(double), tensor(float), tensor(float16), tensor(int16), tensor(int32), tensor(int64), tensor(int8), tensor(string), tensor(uint16), tensor(uint32), tensor(uint64), tensor(uint8)) : Constrain input and output types to any tensor type.

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram to add the depicted code to your VI (do not forget to install the Deep Learning library to run it).