Status update #1 | Architecture

June 27, 2022

Since we recently decided to communicate better on the HAIBAL project by publishing a weekly status update, I will start this first progress report by covering the overall progress of the project.

The HAIBAL deep learning library for LabVIEW started in July 2021, almost a year ago; due to financial concerns, work had to be paused last November and December before resuming. In all, we have been working on this magnificent challenge for ten months.


The first determining step for us was to design an object architecture for the whole project.
A HAIBAL model is an object: it contains its graph, its weights, its loss functions, and all its parameters.

To run inference with a model, you simply execute its forward method.
Likewise, to train the model, you also use its loss and backward methods.
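The object design described above can be sketched in a few lines. This is an illustrative Python model of the idea only: the class and method names (Model, Dense, forward, loss, backward) mirror the description in this post, not the actual HAIBAL LabVIEW API.

```python
class Dense:
    """A minimal dense layer that owns its weights and supports a backward pass."""
    def __init__(self, w, b):
        self.w, self.b = w, b              # weights live inside the layer

    def forward(self, x):
        self.x = x                         # cache the input for the backward pass
        return [sum(wi * xi for wi, xi in zip(row, x)) + bi
                for row, bi in zip(self.w, self.b)]

    def backward(self, grad):
        # gradient w.r.t. the layer input: W^T · grad (weight gradients omitted)
        return [sum(self.w[i][j] * grad[i] for i in range(len(grad)))
                for j in range(len(self.x))]


class Model:
    """A model object: it contains its graph (here a layer list) and its loss."""
    def __init__(self, layers, loss_fn):
        self.layers = layers
        self.loss_fn = loss_fn

    def forward(self, x):                  # inference = just run forward
        for layer in self.layers:
            x = layer.forward(x)
        return x

    def loss(self, y_pred, y_true):        # training also uses loss ...
        return self.loss_fn(y_pred, y_true)

    def backward(self, grad):              # ... and backward, in reverse graph order
        for layer in reversed(self.layers):
            grad = layer.backward(grad)
        return grad
```

A model built this way is used exactly as the post describes: call forward to predict, then loss and backward to train.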


  • 16 activation functions (ELU, Exponential, GELU, HardSigmoid, LeakyReLU, Linear, PRELU, ReLU, SELU, Sigmoid, SoftMax, SoftPlus, SoftSign, Swish, TanH, ThresholdedReLU)
  • 84 functional layers (Dense, Conv, MaxPool, RNN, Dropout…)
  • 14 loss functions (BinaryCrossentropy, BinaryCrossentropyWithLogits, Crossentropy, CrossentropyWithLogits, Hinge, Huber, KLDivergence, LogCosH, MeanAbsoluteError, MeanAbsolutePercentage, MeanSquare, MeanSquareLog, Poisson, SquaredHinge)
  • 15 initialization functions (Constant, GlorotNormal, GlorotUniform, HeNormal, HeUniform, Identity, LeCunNormal, LeCunUniform, Ones, Orthogonal, RandomNormal, RandomUniform, TruncatedNormal, VarianceScaling, Zeros)
  • 7 optimizers (Adagrad, Adam, Inertia, Nadam, Nesterov, RMSProp, SGD)
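A few of the listed activation functions have simple closed-form definitions. As an illustration, here is a plain-Python sketch of three of them, using the standard textbook formulas rather than HAIBAL's internal implementation:

```python
import math

def relu(x):
    # ReLU: max(0, x)
    return max(0.0, x)

def sigmoid(x):
    # Sigmoid: 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def elu(x, alpha=1.0):
    # ELU: x for x > 0, alpha * (e^x - 1) otherwise
    return x if x > 0 else alpha * (math.exp(x) - 1.0)
```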


Last January, we developed a converter that turns Keras HDF5 saved-model files into HAIBAL objects. This allows us to easily import any Keras model. We plan to do the same for Meta's PyTorch in the coming months to make HAIBAL fully compatible with the main existing libraries.


This month we finished our memory manager, the system that manages memory on the target platforms. It lets us optimize the creation of memory space before operations run. The memory manager works according to two distinct techniques: the first favors minimal memory use but can lead to slower execution, while the second allocates more memory but is more optimized in terms of execution speed.
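The trade-off between the two techniques can be sketched as follows. This is an illustrative Python model of the memory/speed trade-off only, not HAIBAL's actual memory manager; the byte counters simply track peak allocation under each strategy.

```python
class LazyPool:
    """Memory-minimal strategy: allocate each buffer just before its
    operation and release it right after (lower peak memory, more
    allocation work at run time)."""
    def __init__(self):
        self.allocated = 0
        self.peak = 0

    def run_op(self, size):
        self.allocated += size                     # allocate just before the op
        self.peak = max(self.peak, self.allocated)
        buf = bytearray(size)                      # ... operation runs here ...
        self.allocated -= size                     # free immediately afterwards
        return buf


class PreallocatedPool:
    """Speed-optimized strategy: reserve one buffer per operation up
    front, so execution never pays allocation cost (higher peak memory,
    faster run time)."""
    def __init__(self, sizes):
        self.buffers = [bytearray(s) for s in sizes]   # allocated once, upfront
        self.peak = sum(sizes)

    def run_op(self, index):
        return self.buffers[index]                 # no allocation at run time
```

For a graph whose operations need buffers of 64, 128 and 32 bytes, the lazy strategy peaks at 128 bytes (only one buffer lives at a time) while the preallocated one holds all 224 bytes for the whole run, which is exactly the trade-off described above.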
