Using UNet for Automatic Segmentation of CT Lung Images

  • Creator
    Topic
  • #53673
    Peter Herrmann
    Participant
      @pieth

      Our goal is the development of a UNet with HAIBAL for the automatic segmentation of lungs in computed tomography (CT) images. First as a 2D UNet that segments the lung in individual CT slices. The image data are individual slices with a slice thickness of 0.6 mm to 5 mm. The complete lung can be reconstructed and rendered from a complete stack of images (with a slice thickness of 5 mm, that is about 50–60 slices). That is why we would then like to program a 3D UNet with HAIBAL (which is not possible with DeepLTK), in order to use the full information of the volume for segmentation. Predicted lung masks and ground truth (manually created masks) should be compared with the intersection over union (IoU) metric, also known as the Jaccard index.
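
      For reference, IoU compares a predicted mask A and a ground-truth mask B as |A ∩ B| / |A ∪ B|. A minimal NumPy sketch (function and variable names are illustrative):

      import numpy as np

      def iou(pred_mask, gt_mask):
          # intersection over union (Jaccard index) for binary masks
          pred = pred_mask.astype(bool)
          gt = gt_mask.astype(bool)
          intersection = np.logical_and(pred, gt).sum()
          union = np.logical_or(pred, gt).sum()
          return intersection / union if union > 0 else 1.0  # two empty masks count as a perfect match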

      How can we realize a UNet with HAIBAL?


      Previous work on Ngene’s Deep Learning Toolkit
      The network we used is based on the U-Net architecture. The U-Net was programmed with the graphical programming language LabVIEW, with which we have many years of experience developing image-analysis software, and the Deep Learning Toolkit for LabVIEW (Ngene, Armenia).

      The unique concept of U-Net is that it is able to generate a new, altered image as the output from an input image, after appropriate processing. This is very useful for generating segmentation images. The U-Net is a so-called fully convolutional network. Our U-Net programmed with LabVIEW is shown in Figure 1. The architecture has a symmetric “U” shape and consists of two major parts: a contraction path (left side) and an expansion path (right side). The contraction path follows the typical architecture of a convolutional neural network. It consists of the repeated application of two convolution layers, each with batch normalization, followed by an activation function. In all convolution layers we use a filter kernel size of 3 × 3 pixels. For each convolution we used the so-called “SAME” padding type, which means there is automatically enough padding that the output image of the convolution layer has the same dimensions as the input image. For downsampling we chose a stride of 2 to halve the size of the input image. For upsampling we use an upsampling layer, which doubles the dimensionality (rows and columns) of the output feature maps by repeating the values (stride = 2).
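
      For comparison only (our network is built graphically in LabVIEW, not in Keras), one such building block can be sketched in Keras roughly as follows; the ReLU activation is an assumption, since the activation function is not named above:

      from tensorflow.keras import layers

      def conv_block(x, filters):
          # two 3x3 convolutions with SAME padding, each followed by
          # batch normalization and an activation (ReLU assumed here)
          for _ in range(2):
              x = layers.Conv2D(filters, 3, padding="same")(x)
              x = layers.BatchNormalization()(x)
              x = layers.Activation("relu")(x)
          return x

      def downsample(x, filters):
          # stride-2 convolution halves the spatial dimensions
          return layers.Conv2D(filters, 3, strides=2, padding="same")(x)

      def upsample(x):
          # doubles rows and columns by repeating values (stride = 2)
          return layers.UpSampling2D(size=2)(x)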


      The DeepLTK still has a number of limitations: IoU and the Dice coefficient are the only metrics so far; more will be added in the next releases. Shape-quality performance metrics such as ASSD or BF-Score are not yet supported. The only optimization algorithms are SGD and Adam; further algorithms such as Adagrad, AdaDelta, RMSProp, Nesterov … are being developed for the next releases. A 3D semantic segmentation architecture is still not possible with the DeepLTK.


      Figure 1: The U-Net architecture we used. Each green or blue box corresponds to a multi-channel feature map. The number of channels is shown above each box. The labels 512 × 512 down to 16 × 16 (at the lowest resolution) give the x-y dimensions in pixels of the input and output images (or feature maps).

      References:

      1. Using Artificial Intelligence for Automatic Segmentation of CT Lung Images in ARDS
      2. Original U-Net by Ronneberger et al.
    • Author
      Replies
    • #53678
      Youssef MENJOUR
      Admin
        @youssefmenjour

        How can we help you, Peter?
        If you already have your model running in Keras, we can import it into HAIBAL. We will do it internally, because we haven’t released the importer tool yet (the release is planned for next month).

         

        #53688
        Peter Herrmann
        Participant
          @pieth

          No, Youssef, not in Keras.

          My model was only developed with Ngene’s Deep Learning Toolkit.

          I would like to program the same UNet architecture with HAIBAL, then train the network and save the trained network (weights, configuration, etc.). I would then like to reload the trained and saved network in HAIBAL, segment unknown lung CTs, and quantify them with my image-processing software (which was also programmed in LabVIEW).
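
          (For comparison, the equivalent build/train/save/reload cycle in a script-based framework like Keras looks roughly like this; `model` and the data arrays are assumed to already exist, and the file name is illustrative:)

          from tensorflow import keras

          # train the compiled model on image/mask pairs
          model.fit(train_images, train_masks,
                    validation_data=(val_images, val_masks), epochs=50)

          # save architecture, weights and optimizer state in one file
          model.save("unet_lung.h5")

          # later: reload the trained network and segment unseen CT slices
          model = keras.models.load_model("unet_lung.h5")
          predicted_masks = model.predict(ct_slices)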


          Example:

          The user interface that I programmed.

          [Image: reg-mse2]

          UNet with a downsampling path (image size from 256×256 to 16×16 pixels, number of convolution filters from 64 to 1024) and an upsampling path (upsampling and concatenation; image size increases from 16×16 back to 256×256 pixels while the number of convolution filters decreases from 1024 to 64). A Keras sketch of the upsampling-and-concatenation step follows below.

          [Image: unet]

          Downsampling VI:

          [Image: downsample]

          Upsampling and Concatenation VI:

          [Image: upsample_concat]
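
          As referenced above, the upsampling-and-concatenation step corresponds roughly to the following Keras sketch (illustrative only, not the actual VI; the ReLU activation is an assumption):

          from tensorflow.keras import layers

          def upsample_concat(x, skip, filters):
              # double the spatial size, merge with the encoder feature map
              # of the same resolution, then apply two 3x3 convolutions
              x = layers.UpSampling2D(size=2)(x)
              x = layers.Concatenate()([x, skip])  # skip connection from the downsampling path
              x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
              x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
              return x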

          #53694
          Youssef MENJOUR
          Admin
            @youssefmenjour

            Hi @PietH, it looks like you want to add HTML content to your reply. This is an option we did not have enabled, but now it is possible.

            To do this, simply add your HTML code through the Text tab of the response field (see screenshot).
            [Screenshot: textreply]

            You can also use “BBCode” to partially format your text in the future.
            I invite you to edit your message so that it is displayed properly.

            For more information, you can consult the “Formatting” part of this article.

            #53705
            Peter Herrmann
            Participant
              @pieth

              Hi Julien,

              I used the HTML tags only in the Text tab; after switching back to Visual, it looked good.

              The HTML formatting was only lost after the post was activated by the moderator.

              Unfortunately, I can no longer edit my posts afterwards.

              #53707
              Youssef MENJOUR
              Admin
                @youssefmenjour

                Hello again,

                I just checked, and indeed a user could only edit his reply for a limited time. Now you can edit your replies anytime.

                #53712
                Youssef MENJOUR
                Admin
                  @youssefmenjour

                  Dear Peter,

                  Building your model architecture and running it with HAIBAL will not be a problem.

                  If you have already trained your model with another framework and you can extract the weights of the different layers, it’s possible to reinject them into HAIBAL. There is a powerful weight editor available in HAIBAL (Get/Set Weights functionality).
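
                  (On the Keras side, the per-layer weights could be extracted roughly like this before reinjecting them with Get/Set Weights; a minimal sketch that assumes a trained Keras `model`, with illustrative file naming:)

                  import numpy as np

                  # dump each layer's weights so they can be reinjected elsewhere,
                  # e.g. via HAIBAL's weight editor (Get/Set Weights)
                  for layer in model.layers:
                      weights = layer.get_weights()  # list of NumPy arrays (kernel, bias, ...)
                      if weights:
                          np.savez(f"{layer.name}.npz", *weights)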

                  #53714
                  Peter Herrmann
                  Participant
                    @pieth

                    Dear Youssef,

                    Since the previous model was not that efficient, I would like to program a UNet from scratch with HAIBAL.
                    That was also the main reason why I chose HAIBAL. There are now so many modifications that are simply not feasible with the DeepLTK, such as a 3D UNet, for example.

                    In our research project, my goal is to program a multiclass segmentation, i.e. to segment different organs in the CT image or the MRI (magnetic resonance imaging) image, e.g. lungs, trachea, heart, liver, etc.
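
                    Structurally, the only change needed for multiclass segmentation is the output head of the network: one channel per class with a softmax activation, trained with a categorical loss. A minimal Keras sketch (the class count is illustrative, and `x` stands for the last decoder feature map):

                    from tensorflow.keras import layers

                    num_classes = 5  # e.g. background, lungs, trachea, heart, liver (illustrative)

                    # final 1x1 convolution: one output channel per class
                    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(x)

                    # with integer-encoded masks, a sparse categorical loss fits:
                    # model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")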

                    But first I have to master the basics in HAIBAL. I need your help for the first basic steps in developing a UNet with HAIBAL.

                    Maybe the following Keras examples will help you:
                    UNet (Keras example: https://pyimagesearch.com/2022/02/21/u-net-image-segmentation-in-keras/)
                    SegNet (an encoder-decoder architecture, https://github.com/0bserver07/Keras-SegNet-Basic).

                    https://github.com/divamgupta/image-segmentation-keras

                    #53760
                    Youssef MENJOUR
                    Admin
                      @youssefmenjour

                      OK, we will work on a basic encoder-decoder CNN (SegNet) example and a basic U-Net example for a future release to help you.

                      #53716
                      Peter Herrmann
                      Participant
                        @pieth

                        Dear Youssef,

                        I would like to program a UNet from scratch with HAIBAL in order to train it with new data.

                        My goal within our research project is a multiclass segmentation, i.e. not just one class (the lungs) but several classes (lungs, heart, trachea, etc.). There are now also very interesting modifications of the UNet that optimize the segmentation process (e.g. 3D U-Net, Attention U-Net, Inception U-Net, Residual U-Net, Recurrent Convolutional Network, Dense U-Net, U-Net++, Adversarial U-Net), which cannot be programmed with DeepLTK.

                        In order to program these new U-Net modifications, however, I first have to master the basics in HAIBAL:
                        1.) Development of an encoder-decoder (SegNet) with HAIBAL and
                        2.) Development of a U-Net with HAIBAL.
                        Youssef, I need your help for this.

                        Maybe the examples in Keras will help:

                        UNet: https://pyimagesearch.com/2022/02/21/u-net-image-segmentation-in-keras/

                        https://github.com/divamgupta/image-segmentation-keras

                        https://keras.io/examples/vision/oxford_pets_image_segmentation/

                        https://modelzoo.co/model/popular-image-segmentation-models

                        #54009
                        Youssef MENJOUR
                        Admin
                          @youssefmenjour

                          OK, let me check all of this.

                          I’ll fix your licence problem first as a priority, and then we will start working on examples to help you.

                          #55077
                          Youssef MENJOUR
                          Admin
                            @youssefmenjour

                            Dear Peter,

                            A complete example of a UNet is now available in the latest release of HAIBAL. 🙂

                            Have fun!

                            [Images: UNet V1 example, UNet illustration]

                             

                            #57267
                            Peter Herrmann
                            Participant
                              @pieth

                              Dear Youssef,

                              Oh great, thank you. I will have a look.

                              In the meantime I had already worked through the previous example.

                              I modified the MedUNet VI a bit, e.g. programmed the original UNet and reduced the images to 256×256 with IMAQ Vision…
                              In the training loop, I passed the 4D validation data (you call it test data) directly to poly_model.vi (Forward Input 4D), i.e. not from a parallel loop via a global variable.

                              [Image: modified]
                              The data type is exactly the same as in your example. However, I get an error message:
                              “Our model expects 4 input(s), but it receives 1 input array”.

                              [Image: error]
                              Can you tell me what the error message means?

                               

                              #57277
                              Youssef MENJOUR
                              Admin
                                @youssefmenjour

                                Hi Peter,

                                Could you send me the summary of your model, please?

                                I need to see the modifications you made. :)
                                It seems you set up several input layers.

                                Thank you,

                                #57338
                                Peter Herrmann
                                Participant
                                  @pieth

                                  Hi Youssef,

                                  Where can I find the summary text file?

                                  Thank you.

                                  #57339
                                  Youssef MENJOUR
                                  Admin
                                    @youssefmenjour

                                    Hi Peter,

                                    By default, the summary pops up and you can save it wherever you want.

                                    Youssef

                                     

                                    #57346
                                    Peter Herrmann
                                    Participant
                                      @pieth

                                      Hi Youssef,

                                      Yes, I know that the summary should actually open up.
                                      I was controlling the workstation remotely via VPN, and the summary didn’t pop up.

                                      If I copy everything to my computer at home, the summary pops up.

                                      I’ll try it next week directly on the workstation.
                                      I saved the summary; however, the order of the layers is a bit confusing to me.
                                      I will email you the UNet model.

                                      Maybe you can take a look at why the summary doesn’t show the correct order of the layers.

                                      Thank you

                                      Peter

                                      #57596
                                      Peter Herrmann
                                      Participant
                                        @pieth

                                        Dear Youssef,

                                        I found the mistake: the numbering of the convolution layers was wrong.
                                        I had given duplicate names.
                                        It works now.
                                        I programmed the UNet VI again and checked everything twice.

                                        Peter
