YOLO segmentation model conversion from Python

  • Creator
    Topic
  • #259594
Ahmed Aboabdallah
    Participant
      @demhack

      I was testing conversion of a YOLO segmentation model from Python to ONNX format, and then implementing it in the YOLOv11 camera example.

      The conversion process was as follows:
      yolo export model=yolo11m-seg.pt format=onnx

      The updates I made are the following:

      I replaced the example model with a path to my own model and edited the resize VI to make the image 640×640.
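
      For reference, here is roughly what that resize step is standing in for, as a minimal Python sketch (Ultralytics normally letterboxes rather than plain-resizing, so treat this as an approximation):

      import cv2
      import numpy as np

      def preprocess(image_path, size=640):
          """Resize an image to the 640x640 input the exported model expects.
          Plain resize shown here; Ultralytics usually letterboxes instead."""
          img = cv2.imread(image_path)                     # BGR, HxWx3, uint8
          img = cv2.resize(img, (size, size))              # 640x640
          img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)       # model expects RGB
          img = img.astype(np.float32) / 255.0             # scale to [0, 1]
          return np.transpose(img, (2, 0, 1))[np.newaxis]  # NCHW: 1x3x640x640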

      The DrawYolov8BBoxSegmentation block gives an error 1097.

      Tracing this error further and reviewing the inputs, I noticed that the array shapes differ between my model and the example model, so I opened Netron to compare them.
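
      For anyone reproducing this, the output shapes and layer counts can also be compared in Python with the onnx package (a minimal sketch; the example model filename here is a placeholder):

      import onnx

      # Placeholder filenames: my Ultralytics export vs. the toolkit's example model
      for path in ["yolo11m-seg.onnx", "example_yolov11_seg.onnx"]:
          model = onnx.load(path)
          print(path, "-", len(model.graph.node), "nodes")
          for out in model.graph.output:
              dims = [d.dim_value or d.dim_param for d in out.type.tensor_type.shape.dim]
              print(" ", out.name, dims)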

      The example model has many layers that are not present in my exported model; these layers make the extraction of masks easier.

      Can someone guide me through the whole process of converting my model to ONNX so that it works and includes these extra layers?

    • Author
      Replies
    • #259598
      Youssef MENJOUR
      Admin
        @youssefmenjour

        Hi Ahmed,

        Thanks for your message and the details you’ve shared.

        Corentin from our team will take care of your request. He’ll help you with the YOLO segmentation model conversion to ONNX and assist you in handling the missing layers to ensure compatibility with the YOLOv11 camera example.

        We’ll get back to you shortly with a complete analysis.

        Best regards,

        Youssef

    • #259618
      Corentin Maravat
      Participant
        @coco

          Hi,

          Great debugging — you’re absolutely right: the DrawYolov8BBoxSegmentation VI expects a very specific ONNX output format. The error 1097 usually appears when the model’s outputs don’t match what the LabVIEW wrapper expects (especially shapes and types).

          In the provided example, the YOLOv11 model was not used directly from Ultralytics export. We modified the ONNX graph to include post-processing layers directly inside the model — that’s why you see additional layers in Netron. These layers make the extraction of masks much easier and are tailored to work with our LabVIEW function.

          At this stage, using a model exported directly from Ultralytics can indeed be challenging. The DrawYolov8BBoxSegmentation VI was specifically built to work with this internal example, and does not yet support arbitrary YOLO exports. If you want to use your own exported model, you’ll need to implement a custom post-processing step (to apply NMS, extract boxes, masks, etc.).
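
          For reference, a raw Ultralytics segmentation export typically produces two outputs: output0 with shape (1, 4 + num_classes + 32, 8400) holding boxes, class scores, and mask coefficients, and output1 with the 32 mask prototypes at (1, 32, 160, 160). A minimal NumPy sketch of that post-processing (assuming those default 640×640 shapes; this is not the exact logic baked into our modified graph) could look like this:

          import numpy as np

          def postprocess(output0, output1, conf_thres=0.25, iou_thres=0.45):
              """Decode a raw YOLO11-seg ONNX export (shapes assumed from a
              default 640x640 Ultralytics export).
              output0: (1, 4 + nc + 32, 8400)  boxes, class scores, mask coefficients
              output1: (1, 32, 160, 160)       mask prototypes"""
              preds = output0[0].T                           # (8400, 4 + nc + 32)
              nc = preds.shape[1] - 4 - 32
              boxes_cxcywh = preds[:, :4]
              scores = preds[:, 4:4 + nc]
              coeffs = preds[:, 4 + nc:]

              cls = scores.argmax(axis=1)                    # best class per candidate
              conf = scores.max(axis=1)
              keep = conf > conf_thres                       # confidence filtering
              boxes_cxcywh, conf, cls, coeffs = (a[keep] for a in (boxes_cxcywh, conf, cls, coeffs))

              xy, wh = boxes_cxcywh[:, :2], boxes_cxcywh[:, 2:]
              boxes = np.concatenate([xy - wh / 2, xy + wh / 2], axis=1)  # cxcywh -> xyxy

              # Greedy class-agnostic NMS (simplified)
              order = conf.argsort()[::-1]
              selected = []
              while order.size:
                  i = order[0]
                  selected.append(i)
                  rest = order[1:]
                  xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
                  yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
                  xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
                  yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
                  inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
                  area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
                  area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
                  iou = inter / (area_i + area_r - inter + 1e-7)
                  order = rest[iou < iou_thres]

              boxes, conf, cls, coeffs = boxes[selected], conf[selected], cls[selected], coeffs[selected]

              # Masks: sigmoid(coefficients @ prototypes); these are still 160x160,
              # so upsample to the input size and crop to each box before display
              protos = output1[0].reshape(32, -1)            # (32, 160*160)
              masks = 1 / (1 + np.exp(-(coeffs @ protos)))   # (n, 160*160)
              return boxes, conf, cls, masks.reshape(-1, 160, 160) > 0.5

          This is roughly the bookkeeping that the extra layers you saw in Netron fold into the example graph.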

          We recommend including the parameter nms=True when exporting with Ultralytics, like this:

          yolo export model=yolo11m-seg.pt format=onnx imgsz=640,640 opset=17 nms=True

          This will generate cleaner and more interpretable outputs that are easier to work with in LabVIEW.
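
          To see what your nms=True export actually produces before wiring it into LabVIEW, you can run it once in Python (a minimal sketch; the input is a random tensor and the exact output layout depends on your Ultralytics version):

          import numpy as np
          import onnxruntime as ort

          sess = ort.InferenceSession("yolo11m-seg.onnx", providers=["CPUExecutionProvider"])
          inp = sess.get_inputs()[0]
          print("input:", inp.name, inp.shape)

          # Dummy 640x640 frame, just to inspect output names, shapes, and dtypes
          dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
          for out, val in zip(sess.get_outputs(), sess.run(None, {inp.name: dummy})):
              print("output:", out.name, val.shape, val.dtype)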

          Once you have post-processed data (bounding boxes, masks, classes), you can use other functions from the Computer Vision Toolkit to display results manually.

          That said, if you’re okay waiting a bit, we’re currently working on a more integrated solution. Our goal is to let users load any YOLO ONNX model exported from Ultralytics and automatically generate a compatible graph (including post-processing) that works directly with our display functions. However, this feature is still under development — especially the logic to dynamically adapt to image sizes, number of classes, and segmentation heads.

          Thanks again for your feedback — you’re helping us improve the toolkit for everyone.

          Graiphic team
