
Add

Description

Sets up and adds the Add layer into the model during the graph definition step. Type: polymorphic.

 

Input parameters

 

Models in: array, model architecture.

Parameters: layer parameters.

training?: boolean, whether the layer is in training mode (it can then store data for the backward pass).
Default value: True.
lda coeff: float, coefficient by which the loss derivative is multiplied before being sent to the previous layer (since the backward pass runs through the graph in reverse).
Default value: 1.

name (optional): string, name of the layer.

Output parameters

 

Model out: model architecture.
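The parameters above boil down to a simple behavior: the Add layer sums its inputs element-wise on the forward pass; on the backward pass the incoming loss derivative is multiplied by lda coeff and sent unchanged to each previous layer, and training? only decides whether data may be cached for that backward pass. Below is a minimal numpy sketch of this behavior, purely illustrative and not the library's implementation (the class and attribute names are made up for the example).

    import numpy as np

    class AddLayerSketch:
        """Illustrative stand-in for the Add layer, not the library's code."""
        def __init__(self, lda_coeff=1.0, training=True, name=None):
            self.lda_coeff = lda_coeff
            self.training = training
            self.name = name

        def forward(self, inputs):
            # All inputs must share the same shape; the output keeps that shape.
            if self.training:
                self._n_inputs = len(inputs)   # cache what backward will need
            return np.sum(np.stack(inputs, axis=0), axis=0)

        def backward(self, d_loss):
            # The loss derivative is scaled by lda_coeff and distributed
            # identically to every previous layer (addition passes gradients through).
            return [self.lda_coeff * d_loss for _ in range(self._n_inputs)]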

Dimension

Input shape

All layers used for Add must have the same output shape.
Refer to the tensor output shape of the layers used.

 

Output shape

Same as input shape.
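As a quick illustration of this shape rule (plain numpy, not the library):

    import numpy as np

    a = np.ones((10, 5), dtype=np.single)
    b = np.ones((10, 5), dtype=np.single)
    print((a + b).shape)   # (10, 5): output shape is the same as the input shape
    # Adding np.ones((10, 15)) instead of b would fail, since all inputs must share one shape.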

Example

All these examples are PNG snippets; you can drop a snippet onto the block diagram to get the depicted code added to your VI (do not forget to install the Deep Learning library to run it).

Add layer with two identical input layer shapes

1 – Generate a set of data

We generate an array of data of type single with shape [batch_size = 10, input_dim = 5] (the same input shape for both inputs).
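The snippet itself is a LabVIEW VI; a rough text equivalent of this step (an assumption about the data layout) in Python is:

    import numpy as np

    batch_size, input_dim = 10, 5
    # Single-precision data of shape [batch_size = 10, input_dim = 5]
    data = np.random.rand(batch_size, input_dim).astype(np.single)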

2 – Define graph

We first define two input layers named input_dense1 and input_dense2. These layers are set up as input arrays shaped [input_dim = 5].
To get the same output shape for the added Dense layers, we give both of them the same “units” parameter (units = 5) (refer to the Dense layer add-to-graph documentation for more details).
We then build an array from the two resulting graphs and wire it to the Add input, which is interpreted as the Dense1 + Dense2 operation.
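Since the snippet is a block-diagram PNG, here is a conceptually equivalent graph written with Keras, as an illustration only (the input names come from the text above; the API shown is Keras, not the Deep Learning library's VIs):

    import tensorflow as tf

    input_dense1 = tf.keras.Input(shape=(5,), name="input_dense1")
    input_dense2 = tf.keras.Input(shape=(5,), name="input_dense2")
    dense1 = tf.keras.layers.Dense(units=5)(input_dense1)
    dense2 = tf.keras.layers.Dense(units=5)(input_dense2)
    added = tf.keras.layers.Add()([dense1, dense2])   # interpreted as Dense1 + Dense2
    model = tf.keras.Model(inputs=[input_dense1, input_dense2], outputs=added)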

3 – Summarize graph

Returns the summary of the model as a text file.
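In the Keras sketch above, the equivalent of this step would be writing the summary into a text file (illustrative only; "model_summary.txt" is an arbitrary file name):

    with open("model_summary.txt", "w") as f:
        model.summary(print_fn=lambda line: f.write(line + "\n"))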

4 – Run graph

We call the forward method and retrieve the result with the “Prediction 2D” method.
This method returns two variables: the first is the layer information (a cluster composed of the layer name, the graph index and the output shape of the layer), and the second is the prediction, with shape [batch_size, units] (the Dense output shape).
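Continuing the Keras sketch, the forward pass and the [batch_size, units] prediction would look as follows; the layer-information cluster (layer name, graph index, output shape) is specific to the library and has no direct Keras counterpart:

    # The same data array feeds both inputs, since they share the same shape.
    prediction = model.predict([data, data])
    print(prediction.shape)   # (10, 5) = [batch_size, units]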

Add layer with two different input layer shapes

1 – Generate a set of data

We generate two arrays of data of type single, with shape1 [batch_size = 10, input_dim = 5] and shape2 [batch_size = 10, input_dim = 15] (different input shapes).
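A rough numpy equivalent of this step (again an assumption about the data layout):

    import numpy as np

    batch_size = 10
    data1 = np.random.rand(batch_size, 5).astype(np.single)    # shape1 [10, 5]
    data2 = np.random.rand(batch_size, 15).astype(np.single)   # shape2 [10, 15]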

2 – Define graph

We first define two input layers named input_dense1 and input_dense2. These layers are set up as input arrays shaped [input_dim = 5] and [input_dim = 15], respectively.
To get the same output shape for the added Dense layers, we give both of them the same “units” parameter (units = 5) (refer to the Dense layer add-to-graph documentation for more details).
We then build an array from the two resulting graphs and wire it to the Add input, which is interpreted as the Dense1 + Dense2 operation.
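The Keras equivalent of this variant, again purely illustrative: because both Dense layers use units = 5, the 5- and 15-dimensional inputs are projected to the same output shape before the Add.

    import tensorflow as tf

    input_dense1 = tf.keras.Input(shape=(5,), name="input_dense1")
    input_dense2 = tf.keras.Input(shape=(15,), name="input_dense2")
    dense1 = tf.keras.layers.Dense(units=5)(input_dense1)
    dense2 = tf.keras.layers.Dense(units=5)(input_dense2)   # projects 15 -> 5
    added = tf.keras.layers.Add()([dense1, dense2])
    model = tf.keras.Model(inputs=[input_dense1, input_dense2], outputs=added)
    prediction = model.predict([data1, data2])   # shape (10, 5) = [batch_size, units]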

3 – Summarize graph

Returns the summary of the model as a text file.

4 – Run graph

We call the forward method and retrieve the result with the “Prediction 2D” method.
This method returns two variables: the first is the layer information (a cluster composed of the layer name, the graph index and the output shape of the layer), and the second is the prediction, with shape [batch_size, units] (the Dense output shape).

 
