Deep learning for thermal image segmentation to measure canopy temperature of Brassica oleracea in the field

Author(s):  
Yu Jiang ◽  
Lanshuan Shuang ◽  
Changying Li ◽  
Andrew H. Paterson ◽  
Jon Robertson
2019 ◽  
Vol 10 (1) ◽  
pp. 202 ◽  
Author(s):  
Jaime Giménez-Gallego ◽  
Juan D. González-Teruel ◽  
Manuel Jiménez-Buendía ◽  
Ana B. Toledo-Moreo ◽  
Fulgencio Soto-Valles ◽  
...  

The crop water stress index (CWSI) is one of the parameters used to manage deficit irrigation and is derived from crop canopy temperature. However, image segmentation is required to exclude non-leaf regions from the temperature measurement, since accurate canopy temperature values are critical for calculating the CWSI. To this end, two image-segmentation models, one based on a support vector machine (SVM) and one on deep learning, are studied in this article. The models were trained with different parameters (encoder depth, optimizer, learning rate, weight decay, validation frequency and validation patience), and several indicators (accuracy, precision, recall and F1 score/Dice coefficient), as well as prediction, training and data-preparation times, are discussed. The F1 scores are 83.11% for the SVM model and 86.27% for the deep-learning model. More accurate results are expected for the deep-learning model as the dataset grows, whereas the SVM model remains worthwhile for its shorter data-preparation times.
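The abstract does not give the exact CWSI formulation or evaluation code used in the study; the following minimal Python sketch assumes the common empirical definition CWSI = (Tcanopy - Twet)/(Tdry - Twet) applied to the leaf pixels selected by a segmentation mask, together with the F1/Dice metric mentioned above. All names (thermal_img, leaf_mask, t_wet, t_dry) are illustrative.

```python
import numpy as np

def canopy_cwsi(thermal_img, leaf_mask, t_wet, t_dry):
    """CWSI from a segmented thermal image (assumed empirical definition).

    thermal_img  : 2D array of per-pixel temperatures
    leaf_mask    : boolean array, True where the segmentation model labelled
                   a pixel as leaf (non-leaf pixels are excluded)
    t_wet, t_dry : reference temperatures of a fully transpiring and a
                   non-transpiring surface, respectively
    """
    t_canopy = thermal_img[leaf_mask].mean()      # mean over leaf pixels only
    return (t_canopy - t_wet) / (t_dry - t_wet)   # ~0 = unstressed, ~1 = stressed

def f1_score(pred_mask, true_mask):
    """F1 score / Dice coefficient of a binary segmentation."""
    tp = np.logical_and(pred_mask, true_mask).sum()
    fp = np.logical_and(pred_mask, ~true_mask).sum()
    fn = np.logical_and(~pred_mask, true_mask).sum()
    return 2 * tp / (2 * tp + fp + fn)
```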


2021 ◽  
Vol 1861 (1) ◽  
pp. 012067
Author(s):  
Yu’ang Niu ◽  
Yuanyang Zhang ◽  
Liping Ying ◽  
Hong Li ◽  
Wenbo Chen ◽  
...  

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Andreas M. Weng ◽  
Julius F. Heidenreich ◽  
Corona Metz ◽  
Simon Veldhoen ◽  
Thorsten A. Bley ◽  
...  

Abstract Background Functional lung MRI techniques are usually associated with time-consuming post-processing, where manual lung segmentation represents the most cumbersome part. The aim of this study was to investigate whether deep learning-based segmentation of lung images acquired with a fast UTE sequence exploiting the stack-of-spirals trajectory can provide sufficient accuracy for the calculation of functional parameters. Methods Lung images were acquired in 20 patients suffering from cystic fibrosis (CF) and 33 healthy volunteers using a fast UTE sequence with a stack-of-spirals trajectory and a minimum echo time of 0.05 ms. A convolutional neural network was then trained for semantic lung segmentation on 17,713 2D coronal slices, each paired with a label obtained from manual segmentation. Subsequently, the network was applied to 4920 independent 2D test images and the results were compared to manual segmentations using the Sørensen–Dice similarity coefficient (DSC) and the Hausdorff distance (HD). Lung volumes and fractional ventilation values calculated from both segmentations were compared using Pearson’s correlation coefficient and Bland-Altman analysis. To investigate generalizability to patients outside the CF collective, in particular those exhibiting larger consolidations inside the lung, the network was additionally applied to UTE images from four patients with pneumonia and one with lung cancer. Results The overall DSC for lung tissue was 0.967 ± 0.076 (mean ± standard deviation) and the HD was 4.1 ± 4.4 mm. Lung volumes derived from the manual and deep learning-based segmentations, as well as fractional ventilation values, exhibited a high overall correlation (Pearson’s correlation coefficients of 0.99 and 1.00). For the additional cohort with unseen pathologies/consolidations, the mean DSC was 0.930 ± 0.083, the HD was 12.9 ± 16.2 mm and the mean difference in lung volume was 0.032 ± 0.048 L. Conclusions Deep learning-based image segmentation in stack-of-spirals based lung MRI allows for accurate estimation of lung volumes and fractional ventilation values and promises to replace the time-consuming step of manual image segmentation in the future.
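The study's own evaluation code is not given; the sketch below shows how the two reported metrics (DSC and a symmetric Hausdorff distance) can be computed for a pair of binary masks. The `spacing` argument (pixel size in mm) is an assumption added for illustration, and HD is computed here over all foreground pixels rather than extracted contours, which gives the same distance but is slower on large masks.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, truth):
    """Sørensen–Dice similarity of two boolean masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def hausdorff_distance(pred, truth, spacing=1.0):
    """Symmetric Hausdorff distance between two boolean masks,
    in mm when `spacing` is the in-plane pixel size in mm."""
    p = np.argwhere(pred) * spacing    # (K, 2) coordinates of foreground pixels
    t = np.argwhere(truth) * spacing
    forward = directed_hausdorff(p, t)[0]
    backward = directed_hausdorff(t, p)[0]
    return max(forward, backward)
```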


Author(s):  
Sixian Chan ◽  
Cheng Huang ◽  
Cong Bai ◽  
Weilong Ding ◽  
Shengyong Chen

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Changyong Li ◽  
Yongxian Fan ◽  
Xiaodong Cai

Abstract Background With the development of deep learning (DL), an increasing number of DL-based methods have been proposed that achieve state-of-the-art performance in biomedical image segmentation. However, these methods are usually complex and require powerful computing resources, which are often not available in clinical settings. It is therefore important to develop accurate DL-based biomedical image segmentation methods that can run under resource-constrained computing. Results A lightweight, multiscale network called PyConvU-Net is proposed to work with low-resource computing. In strictly controlled experiments, PyConvU-Net achieved good performance on three biomedical image segmentation tasks while using the fewest parameters. Conclusions Our experimental results preliminarily demonstrate the potential of the proposed PyConvU-Net for biomedical image segmentation under resource-constrained computing.
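The abstract does not specify PyConvU-Net's exact layer configuration; the PyTorch sketch below only illustrates the general pyramidal-convolution idea (parallel grouped convolutions at several kernel sizes whose outputs are concatenated) that such a lightweight U-Net variant would substitute for standard 3x3 convolutions. The kernel sizes, group counts and channel split are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class PyConvBlock(nn.Module):
    """Illustrative pyramidal convolution block: parallel branches with
    increasing kernel sizes and group counts, concatenated channel-wise.
    Note: in_ch and out_ch // len(kernels) must be divisible by each
    group count (e.g. in_ch=64, out_ch=96 works with the defaults)."""

    def __init__(self, in_ch, out_ch, kernels=(3, 5, 7), groups=(1, 2, 4)):
        super().__init__()
        branch_ch = out_ch // len(kernels)
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2, groups=g, bias=False)
            for k, g in zip(kernels, groups)
        ])
        self.bn = nn.BatchNorm2d(branch_ch * len(kernels))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Run all branches on the same input and fuse by concatenation.
        return self.act(self.bn(torch.cat([b(x) for b in self.branches], dim=1)))
```

Grouped convolutions at the larger kernel sizes keep the parameter count low, which is the property the paper emphasizes for resource-constrained deployment.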


2021 ◽  
Vol 188 ◽  
pp. 106319
Author(s):  
Jaime Giménez-Gallego ◽  
Juan D. González-Teruel ◽  
Fulgencio Soto-Valles ◽  
Manuel Jiménez-Buendía ◽  
Honorio Navarro-Hellín ◽  
...  

2021 ◽  
Vol 11 (4) ◽  
pp. 1965
Author(s):  
Raul-Ronald Galea ◽  
Laura Diosan ◽  
Anca Andreica ◽  
Loredana Popa ◽  
Simona Manole ◽  
...  

Despite the promising results obtained by deep learning methods in the field of medical image segmentation, a lack of sufficient data always hinders performance to a certain degree. In this work, we explore the feasibility of applying deep learning methods to a pilot dataset. We present a simple and practical approach that performs segmentation in a 2D, slice-by-slice manner, based on region of interest (ROI) localization, applying an optimized training regime to improve segmentation performance on the regions of interest. We start from two popular segmentation networks: U-Net, the preferred model for medical segmentation, and DeepLabV3+, a general-purpose model. Furthermore, we show that ensembling these two fundamentally different architectures brings consistent benefits by testing our approach on two different datasets, the publicly available ACDC challenge and the imATFIB dataset from our in-house clinical study. Results on the imATFIB dataset show that the proposed approach performs well with the provided training volumes, achieving an average whole-heart Dice similarity coefficient of 89.89% on the validation set. Moreover, our algorithm achieved a mean Dice value of 91.87% on the ACDC validation set, comparable to the second best-performing approach in the challenge. Our approach could therefore serve as a building block of a computer-aided diagnostic system in a clinical setting.
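The abstract does not state how the two networks' predictions are fused; a minimal sketch assuming soft voting (averaging per-pixel class probabilities from the U-Net and DeepLabV3+ outputs, restricted to a localized ROI) is shown below. All tensor and function names are illustrative.

```python
import torch

def ensemble_segmentation(logits_unet, logits_deeplab):
    """Fuse two models by averaging softmax probabilities per pixel.

    logits_unet, logits_deeplab : raw network outputs of shape (N, C, H, W)
    returns                     : label map of shape (N, H, W)
    """
    probs = (torch.softmax(logits_unet, dim=1) +
             torch.softmax(logits_deeplab, dim=1)) / 2.0
    return probs.argmax(dim=1)

# Usage sketch: run both networks on the same ROI-cropped slice, then fuse.
# labels = ensemble_segmentation(unet(roi_slice), deeplab(roi_slice))
```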

