SOTA Installer
SOTA is a comprehensive ecosystem built within LabVIEW, designed for deep learning and graph computing. It enables rapid prototyping, high-performance deployment, and seamless ONNX integration within a fully sovereign, industry-ready environment.
SOTA vs SOTA Local
SOTA is the cloud version of our framework, providing centralized management of models and data. SOTA Local, by contrast, runs directly on your machine without a cloud connection, while retaining the core features of SOTA.
Toolkits (.stk)
Here, you can download the following toolkits:
LabVIEW Deep Learning Module
A comprehensive solution for training, optimizing, and deploying deep learning models, integrating ONNX Runtime for multi-platform compatibility and enhanced performance.
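The module itself is driven from LabVIEW, but the ONNX Runtime workflow it integrates can be sketched in a few lines of Python. The model path, input name, and tensor shape below are placeholders rather than part of the toolkit's API:

    # Minimal ONNX Runtime inference sketch (Python, for illustration only;
    # the Deep Learning Module itself is used from LabVIEW block diagrams).
    import numpy as np
    import onnxruntime as ort

    # "model.onnx" is a placeholder for a network exported to ONNX.
    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

    input_name = session.get_inputs()[0].name               # first input of the graph
    dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

    outputs = session.run(None, {input_name: dummy_input})  # None = return every output
    print(outputs[0].shape)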
LabVIEW Computer Vision Module
Designed for computer vision tasks, this toolkit simplifies segmentation, object detection, and tracking with integrated annotation and image processing tools.
LabVIEW Accelerator Module
A LabVIEW-native acceleration module enabling fast, deterministic execution of ONNX graphs for compute-intensive and real-time applications.
LabVIEW Generative AI Module
Designed to generate content such as text and images with generative AI models, integrating seamlessly into industrial and academic workflows.
LabVIEW CUDA Module
Accelerate data processing in LabVIEW using NVIDIA GPUs and CUDA, enabling fast parallel execution for compute-intensive tasks.
Tools (.stl)
Here, you can download the following tools:
FIG
An interactive graphical tool based on LabVIEW, designed to simplify model development and real-time visualization.
Netron for LabVIEW
An intuitive model visualization tool designed to explore and analyze neural network architectures with ease, providing a clear and interactive representation of each layer and its parameters.
Hardware installation drivers (.sdv)
Here, you can download the following hardware installation drivers:
CUDA 12.5
A driver installation enabling high-performance, GPU-accelerated applications with NVIDIA CUDA.
TensorRT
A high-performance deep learning inference library and runtime developed by NVIDIA.
oneDNN
A performance-optimized library for deep learning and compute-intensive workloads on Intel CPUs.
OpenVINO
A toolkit developed by Intel for optimizing and deploying deep learning inference.
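All four backends are commonly reached through ONNX Runtime execution providers, which fits the ONNX Runtime integration described above; the exact wiring inside SOTA may differ, so treat the Python sketch below as an illustration of the underlying mechanism. Provider availability depends on the installed onnxruntime build and drivers, and "model.onnx" is a placeholder:

    # Pick the best available backend by execution-provider preference order.
    import onnxruntime as ort

    preferred = [
        "TensorrtExecutionProvider",   # NVIDIA TensorRT
        "CUDAExecutionProvider",       # NVIDIA CUDA
        "OpenVINOExecutionProvider",   # Intel OpenVINO
        "DnnlExecutionProvider",       # Intel oneDNN
        "CPUExecutionProvider",        # portable fallback
    ]
    available = [p for p in preferred if p in ort.get_available_providers()]

    # The session falls back along the list if a provider cannot initialize.
    session = ort.InferenceSession("model.onnx", providers=available)
    print("Using:", session.get_providers())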
Models (.smd)
Here, you can download the following models:
Reinforcement learning environments (.sev)
Here, you can download the following reinforcement learning environments:
Gymnasium
A maintained fork of OpenAI Gym, offering a wide variety of reinforcement learning environments.
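For reference, a Gymnasium environment is driven by a short reset/step loop; the Python sketch below uses a random policy and the CartPole-v1 environment purely as an example, independent of how SOTA exposes these environments in LabVIEW:

    # One episode of CartPole-v1 with a random policy.
    import gymnasium as gym

    env = gym.make("CartPole-v1")
    observation, info = env.reset(seed=0)

    terminated = truncated = False
    total_reward = 0.0
    while not (terminated or truncated):
        action = env.action_space.sample()  # replace with a trained policy
        observation, reward, terminated, truncated, info = env.step(action)
        total_reward += reward

    env.close()
    print(f"Episode return: {total_reward}")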
Examples (.sxp)
Here, you can download the following examples:
