Semantic Segmentation of Hippocampal Subregions With U-Net Architecture

Author(s):  
Soraya Nasser ◽  
Moulkheir Naoui ◽  
Ghalem Belalem ◽  
Saïd Mahmoudi

Automatic semantic segmentation of the hippocampus is an important area of research in which several convolutional neural network (CNN) models have been used to detect the hippocampus in whole-brain MRI. In this paper we present two convolutional neural networks: the first (Hippocampus Segmentation Single Entity, HSSE) segments the hippocampus as a single entity, and the second (Hippocampus Segmentation Multi Class, HSMC) detects the hippocampal sub-regions; both networks derive their architecture from the U-Net model. Two cohorts from the NeuroImaging Tools & Resources Collaboratory (NITRC), annotated with the ITK-SNAP software, were used as training data. We analyze these networks alongside other recent hippocampal segmentation methods; the results obtained are encouraging, with Dice scores greater than 0.84.
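As a concrete illustration of the U-Net-style design described above, the sketch below builds a minimal 2D encoder-decoder with skip connections, assuming PyTorch. The depth, channel widths, input size and the three-class output head are illustrative assumptions, not the authors' HSSE/HSMC configuration.

```python
# A minimal U-Net-style encoder-decoder sketch (illustrative, not the paper's exact model).
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class MiniUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)          # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)           # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                        # full resolution
        e2 = self.enc2(self.pool(e1))            # 1/2 resolution
        b = self.bottleneck(self.pool(e2))       # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                     # per-pixel class logits


# Single-entity segmentation (HSSE-like): one foreground channel.
# Multi-class sub-region segmentation (HSMC-like): one channel per sub-region.
model = MiniUNet(in_ch=1, n_classes=3)
logits = model(torch.randn(1, 1, 128, 128))      # e.g. one 128x128 MRI slice
print(logits.shape)                              # torch.Size([1, 3, 128, 128])
```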


2020 ◽  
Vol 25 (1) ◽  
pp. 43-50
Author(s):  
Pavlo Radiuk

The achievement of high-precision segmentation in medical image analysis has been an active direction of research over the past decade. Significant success in medical imaging tasks has been feasible due to the employment of deep learning methods, including convolutional neural networks (CNNs). Convolutional architectures have mostly been applied to homogeneous medical datasets with separate organs. Nevertheless, the segmentation of volumetric medical images containing several organs remains an open question. In this paper, we investigate fully convolutional neural networks (FCNs) and propose a modified 3D U-Net architecture devoted to the processing of computed tomography (CT) volumetric images in automatic semantic segmentation tasks. To benchmark the architecture, we utilised the differentiable Sørensen-Dice similarity coefficient (SDSC) as a validation metric and optimised it on the training data by minimising the loss function. Our hand-crafted architecture was trained and tested on a manually compiled dataset of CT scans. The improved 3D U-Net architecture achieved an average SDSC score of 84.8% on the testing subset across multiple abdominal organs. We also compared our architecture with recognised state-of-the-art results and demonstrated that 3D U-Net based architectures can achieve competitive performance and efficiency in the multi-organ segmentation task.
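The abstract describes optimising a differentiable Sørensen-Dice similarity coefficient during training. A minimal sketch of such a soft Dice loss, assuming PyTorch and one-hot encoded targets, is shown below; the smoothing constant and the averaging over batch and classes are assumptions, not the paper's exact formulation.

```python
# A soft (differentiable) Sørensen-Dice coefficient used as a training loss (illustrative sketch).
import torch


def soft_dice(probs, target, eps=1e-6):
    """Soft Dice coefficient per class, averaged over batch and classes.

    probs:  (N, C, D, H, W) softmax probabilities from the network
    target: (N, C, D, H, W) one-hot ground-truth masks
    """
    dims = tuple(range(2, probs.dim()))                  # spatial dimensions
    intersection = (probs * target).sum(dim=dims)
    cardinality = probs.sum(dim=dims) + target.sum(dim=dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    return dice.mean()


def soft_dice_loss(logits, target):
    """Loss = 1 - SDSC, so minimising the loss maximises the Dice score."""
    probs = torch.softmax(logits, dim=1)
    return 1.0 - soft_dice(probs, target)


# Example: a batch of one 3-class CT sub-volume of 32x64x64 voxels.
logits = torch.randn(1, 3, 32, 64, 64, requires_grad=True)
target = torch.nn.functional.one_hot(
    torch.randint(0, 3, (1, 32, 64, 64)), num_classes=3
).permute(0, 4, 1, 2, 3).float()
loss = soft_dice_loss(logits, target)
loss.backward()                                          # gradients flow back to the network
print(float(loss))
```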


Heliyon ◽  
2021 ◽  
Vol 7 (2) ◽  
pp. e06226
Author(s):  
Diedre Carmo ◽  
Bruna Silva ◽  
Clarissa Yasuda ◽  
Letícia Rittner ◽  
Roberto Lotufo

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Vishal Singh ◽  
Pradeeba Sridar ◽  
Jinman Kim ◽  
Ralph Nanan ◽  
N. Poornima ◽  
...  

2021 ◽  
Vol 40 (1) ◽  
Author(s):  
David Müller ◽  
Andreas Ehlen ◽  
Bernd Valeske

Convolutional neural networks were used for multiclass segmentation in thermal infrared face analysis. The principle is based on existing image-to-image translation approaches, in which each pixel of an image is assigned a class label. We show that established network architectures can be trained for the task of multiclass face analysis in thermal infrared. The created class annotations consisted of pixel-accurate locations of different face classes. Subsequently, the trained network can segment an acquired, unknown infrared face image into the defined classes. Furthermore, face classification in live image acquisition is shown, in order to display the relative temperature of the learned areas in real time. This allows pixel-accurate temperature face analysis, e.g. for infection screening such as for COVID-19. At the same time, our approach offers the advantage of concentrating on the relevant areas of the face: areas irrelevant for the relative temperature calculation, as well as accessories such as glasses, masks and jewelry, are not considered. A custom database was created to train the network. The results were quantitatively evaluated with the intersection over union (IoU) metric. The methodology shown can be transferred to similar quantitative thermography tasks, such as materials characterization or quality control in production.
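For reference, the sketch below shows how the per-class intersection over union (IoU) metric mentioned above can be computed from integer-labelled segmentation maps, assuming NumPy; the three-class layout in the example (background, skin, glasses) is hypothetical.

```python
# Per-class IoU for integer-labelled segmentation maps (illustrative sketch).
import numpy as np


def per_class_iou(pred, gt, num_classes):
    """Return an array of IoU values, one per class.

    pred, gt: integer arrays of identical shape, one class label per pixel.
    Classes absent from both maps yield NaN so they can be excluded from the mean.
    """
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        pred_c = pred == c
        gt_c = gt == c
        union = np.logical_or(pred_c, gt_c).sum()
        if union > 0:
            ious[c] = np.logical_and(pred_c, gt_c).sum() / union
    return ious


# Hypothetical labels: 0 = background, 1 = skin, 2 = glasses.
pred = np.array([[0, 1, 1], [2, 1, 0]])
gt = np.array([[0, 1, 2], [2, 1, 0]])
ious = per_class_iou(pred, gt, num_classes=3)
print(ious, np.nanmean(ious))    # per-class IoU and mean IoU over present classes
```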

