NegativeLogLikelihoodLoss

Description

The NegativeLogLikelihoodLoss operator computes the (weighted) negative log likelihood loss.

Its "input" tensor has the shape (N, C, d1, d2, …, dk) where k >= 0. The "input" tensor contains the log-probabilities of each sample input[n, :, d_1, d_2, …, d_k] belonging to each class in [0, C). The operator's "target" input tensor has the shape (N, d1, d2, …, dk). For each of the N x d1 x d2 x … x dk samples, it encodes a class label (one of the C classes) or a special value (indicated by the ignore_index attribute). The loss value for input[n, :, d_1, d_2, …, d_k] being classified as class c = target[n][d_1][d_2]…[d_k] is computed as:

loss[n][d_1][d_2]…[d_k] = -input[n][c][d_1][d_2]…[d_k].

When an optional "weight" is provided, the sample loss is calculated as:

loss[n][d_1][d_2]…[d_k] = -input[n][c][d_1][d_2]…[d_k] * weight[c].

The loss is zero when the target value equals ignore_index:

loss[n][d_1][d_2]…[d_k] = 0, when target[n][d_1][d_2]…[d_k] = ignore_index
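To make the per-sample formula concrete, here is a minimal NumPy sketch (not part of the library; the helper name nll_per_sample and its signature are illustrative), assuming the node follows the ONNX NegativeLogLikelihoodLoss semantics described above:

import numpy as np

def nll_per_sample(log_probs, target, weight=None, ignore_index=None):
    # log_probs has shape (N, C, d1, ..., dk); target has shape (N, d1, ..., dk).
    # Replace ignored labels by class 0 so the gather stays in range;
    # those positions are zeroed out again below.
    safe_target = target if ignore_index is None else np.where(target == ignore_index, 0, target)
    # loss[n][d_1]...[d_k] = -log_probs[n][c][d_1]...[d_k] with c = target[n][d_1]...[d_k]
    gathered = np.take_along_axis(log_probs, np.expand_dims(safe_target, axis=1), axis=1)
    loss = -np.squeeze(gathered, axis=1)
    if weight is not None:
        # Weighted case: scale each sample loss by the weight of its class.
        loss = loss * weight[safe_target]
    if ignore_index is not None:
        # Samples whose target equals ignore_index contribute zero loss.
        loss = np.where(target == ignore_index, 0.0, loss)
    return loss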

If the "reduction" attribute is set to "none", the operator's output is the above loss with shape (N, d1, d2, …, dk).

If "reduction" is set to "mean" (the default attribute value), the output loss is (weight-)averaged: mean(loss) if "weight" is not provided, or sum(loss) / sum(weight[target[n][d_1][d_2]…[d_k]]) over all samples if it is.

If "reduction" is set to "sum", the output is a scalar: sum(loss).
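The three reduction modes can be sketched the same way (again a hypothetical NumPy helper, reduce_nll; it takes the per-sample loss from the previous sketch, with ignored positions already zeroed, and follows the ONNX reference behaviour of excluding ignored samples from the mean's denominator):

import numpy as np

def reduce_nll(loss, target, weight=None, ignore_index=None, reduction="mean"):
    if reduction == "none":
        return loss                   # per-sample loss, shape (N, d1, ..., dk)
    if reduction == "sum":
        return loss.sum()             # scalar: sum(loss)
    # reduction == "mean": divide by the sum of the weights that were applied.
    if weight is None:
        applied = np.ones_like(loss)  # unweighted: every sample counts as 1
    else:
        safe_target = target if ignore_index is None else np.where(target == ignore_index, 0, target)
        applied = weight[safe_target]  # weight[target[n][d_1]...[d_k]]
    if ignore_index is not None:
        applied = np.where(target == ignore_index, 0.0, applied)  # ignored samples do not count
    return loss.sum() / applied.sum()  # sum(loss) / sum(applied weights)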

See also https://pytorch.org/docs/stable/nn.html#torch.nn.NLLLoss.
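For cross-checking the node outside LabVIEW, the linked torch.nn.NLLLoss module computes the same loss; the shapes and attribute values below are arbitrary illustrations:

import torch

log_probs = torch.log_softmax(torch.randn(3, 5, 7), dim=1)  # input of shape (N=3, C=5, d1=7)
target = torch.randint(0, 5, (3, 7))                        # class labels in [0, C)
weight = torch.rand(5)                                       # one rescaling weight per class

# ignore_index=0 ignores class-0 targets, mirroring the node's default ignore_index.
criterion = torch.nn.NLLLoss(weight=weight, ignore_index=0, reduction="mean")
loss = criterion(log_probs, target)  # scalar: sum(loss) / sum(applied weights)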


Input parameters


specified_outputs_name : array, this parameter lets you manually assign custom names to the output tensors of a node.

Graphs in : cluster, ONNX model architecture.

input (heterogeneous) – T : object, input tensor of shape (N, C) or (N, C, d1, d2, …, dk).
target (heterogeneous) – Tind : object, target tensor of shape (N) or (N, d1, d2, …, dk). Target element values shall be in the range [0, C). If ignore_index is specified, it may take a value outside [0, C); target values should then either be in the range [0, C) or equal ignore_index.
weight (optional, heterogeneous) – T : object, optional rescaling weight tensor. If given, it has to be a tensor of size C. Otherwise, it is treated as if filled with ones.

Parameters : cluster,

ignore_index : integer, specifies a target value that is ignored and does not contribute to the input gradient. It is an optional value.
Default value "0".
reduction : enum, type of reduction to apply to the loss: none, sum, mean (default). 'none': the output is the loss for each sample. 'sum': the output will be summed. 'mean': the sum of the output will be divided by the sum of the applied weights.
Default value "mean".
training? : boolean, whether the layer is in training mode (can store data for the backward pass).
Default value "True".
lda coeff : float, defines the coefficient by which the loss derivative will be multiplied before being sent to the previous layer (since the backward pass runs in reverse through the layers).
Default value "1".

name (optional) : string, name of the node.

Output parameters


loss (heterogeneous) – T : object, the negative log likelihood loss.

Type Constraints

T in (tensor(double), tensor(float), tensor(float16)) : Constrain input, weight, and output types to floating-point tensors.

Tind in (tensor(int32), tensor(int64)) : Constrain target to integer types.

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram to get the depicted code added to your VI (do not forget to install the Deep Learning library to run it).