Our project, initially named “G” in homage to LabVIEW’s graphical language, has evolved far beyond its original ambitions and scope. Today, that name no longer reflects the scale or depth of the progress made over the past three years. Our primary goal is no longer simply to offer a high-performance deep learning framework for LabVIEW, but to highlight the innovation and unique value of our unified, comprehensive ecosystem: SOTA. We now aim to provide a full catalog of AI solutions.
We began by developing HAIBAL, an easy-to-use and regularly updated deep learning library, initially conceived as a LabVIEW-specific toolkit inspired by standards like Keras. Growing demand then led us to create FIG (File Importer and Generator), a tool that makes it easy to import models from Keras and thus eases the migration of existing projects into our environment.
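As a rough illustration of the source side of that workflow, here is a minimal Python sketch that saves a small Keras model to a file of the kind an importer such as FIG is meant to bring into LabVIEW. The architecture and file name are purely illustrative, and the exact file formats FIG accepts are not specified in this post.

```python
# Minimal sketch (Python, not LabVIEW): produce a saved Keras model file,
# i.e. the kind of artifact a model importer could then load.
from tensorflow import keras

# Small illustrative network; the layers have no special meaning here.
model = keras.Sequential([
    keras.layers.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(8, 3, activation="relu"),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Save in the standard Keras format; "mnist_classifier.keras" is a hypothetical path.
model.save("mnist_classifier.keras")
```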
Recognizing market needs, we quickly expanded our offering with TIGR Vision, a tool dedicated to image display and manipulation that provides a comprehensive set of computer vision functions. We also created GIM (Graphic Infrastructure Manager) to simplify the distribution of our solutions while remaining independent of third-party platforms.
In the same spirit of innovation, we developed the PERRINE toolkit, which performs optimized pre- and post-processing of data directly on the GPU around model execution with HAIBAL. The positive reception of these developments motivated us to further enrich our catalog, including the ongoing development of Annotator, a high-performance annotation tool for computer vision. Meanwhile, the growing interest in Large Language Models (LLMs), Small Language Models (SLMs), and Visual Language Models (VLMs) has led us to consider the future development of LM Studio, which will facilitate the customization and deployment of these models.
Our key innovation addresses a fundamental need: user-friendly tools that cover all AI development requirements, available at no extra cost, easy to deploy, and regularly updated. Aware of upcoming optimization challenges and of the limitations of HAIBAL, we decided this year to refocus our technological efforts on a complete overhaul of the library. Our main decision was to adopt ONNX Runtime, a standardized, high-performance open-source runtime, which gives us optimized inference, compatibility with frameworks such as PyTorch and TensorFlow that we previously lacked, and simple deployment. This development is complete, but its release is still forthcoming.
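To make that choice concrete, here is a minimal Python sketch of the kind of inference flow ONNX Runtime provides. It is not the toolkit’s LabVIEW API: the model path, input shape, and execution provider below are assumptions for illustration only.

```python
import numpy as np
import onnxruntime as ort

# Load an exported model; "model.onnx" is a hypothetical path.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# ONNX Runtime exposes the model's declared inputs and outputs by name.
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name

# Run inference on a dummy batch; the shape is assumed, not prescribed by the toolkit.
dummy_batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run([output_name], {input_name: dummy_batch})
print(outputs[0].shape)
```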
Significant updates have also been made to FIG, which now offers full compatibility with both PyTorch and TensorFlow and integrates NETRON for modern model visualization. This development is complete, but its release is also upcoming.
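As a hedged sketch of the framework side of that workflow, the snippet below exports a small PyTorch model to ONNX, the kind of file a converter/importer such as FIG could then work with. The model, file name, and tensor names are illustrative assumptions, not part of FIG’s documented interface.

```python
import torch
import torch.nn as nn

# Small illustrative model; evaluation mode for a stable export graph.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Export to ONNX with a dummy input; "classifier.onnx" is a hypothetical path.
dummy_input = torch.randn(1, 16)
torch.onnx.export(
    model,
    dummy_input,
    "classifier.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
)
```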
As for GIM, it has undergone a major update: in addition to managing installations, it will now handle online licensing via a proprietary cloud infrastructure. It will also allow one-click downloads of ready-to-use models, and its new user-friendly interface, inspired by modern platforms such as Adobe Cloud and Steam, will offer an optimized user experience. This development is finished, but its release is yet to come.
Finally, given the scale of the upcoming transformations, we have decided to proceed with a global rebranding of our solutions: HAIBAL will become the Deep Learning Toolkit, TIGR will become the Computer Vision Toolkit, and PERRINE will be renamed the GPU Accelerator. This initiative aims to clarify and simplify the identification of each of our solutions. All of these innovations will be launched in October 2024.