A New Method for Forest Canopy Hemispherical Photography Segmentation Based on Deep Learning

Forests ◽  
2020 ◽  
Vol 11 (12) ◽  
pp. 1366
Author(s):  
Kexin Li ◽  
Xinwang Huang ◽  
Jingzhe Zhang ◽  
Zhihu Sun ◽  
Jianping Huang ◽  
...  

Research Highlights: This paper proposes a new method for hemispherical forest canopy image segmentation. The method is based on a deep learning methodology and provides a robust, fully automatic technique for the segmentation of forest canopy hemispherical photography (CHP) and the calculation of the gap fraction (GF). Background and Objectives: CHP is widely used to estimate structural forest variables. The GF is the most important parameter for calculating the leaf area index (LAI), and its calculation requires a binary segmentation of the CHP. Materials and Methods: Our method consists of three modules: northing correction, valid region extraction, and hemispherical image segmentation. The core procedure is hemispherical canopy image segmentation based on the U-Net convolutional neural network. Our method is compared with traditional threshold methods (the Otsu and Ridler methods), a fuzzy clustering method (FCM), commercial professional software (WinSCANOPY), and the Habitat-Net network method. Results: The experimental results show that the method presented here achieves a Dice similarity coefficient (DSC) of 89.20% and an accuracy of 98.73%. Conclusions: The method presented here outperforms the Habitat-Net and WinSCANOPY methods as well as the FCM, and it is significantly better than the Otsu and Ridler threshold methods. The method takes the original canopy hemispherical image as input, automatically executes the three modules in sequence, and outputs the binary segmentation map; it is thus a pipelined, end-to-end method.
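For context on the GF computation referred to above, the following is a minimal sketch, not the authors' implementation, of how a gap fraction can be read off a binary canopy/sky mask restricted to the circular valid region of a hemispherical photograph (the mask convention, centre, and radius are assumed inputs):

```python
import numpy as np

def gap_fraction(binary_mask: np.ndarray, center: tuple, radius: float) -> float:
    """Estimate the gap fraction from a binary segmentation of a hemispherical photo.

    binary_mask : 2D array, 1 = sky (gap), 0 = canopy (assumed convention).
    center, radius : circle describing the valid fisheye region (assumed to be
    known from the lens calibration or the valid-region extraction step).
    """
    h, w = binary_mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    valid = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    # Gap fraction = sky pixels / all pixels inside the valid hemisphere
    return binary_mask[valid].mean()

# Example with a synthetic 100x100 mask
mask = np.zeros((100, 100), dtype=np.uint8)
mask[40:60, :] = 1  # a band of "sky" pixels
print(gap_fraction(mask, center=(50, 50), radius=45))
```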

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Andreas M. Weng ◽  
Julius F. Heidenreich ◽  
Corona Metz ◽  
Simon Veldhoen ◽  
Thorsten A. Bley ◽  
...  

Abstract Background Functional lung MRI techniques are usually associated with time-consuming post-processing, where manual lung segmentation represents the most cumbersome part. The aim of this study was to investigate whether deep learning-based segmentation of lung images scanned with a fast UTE sequence exploiting the stack-of-spirals trajectory can provide sufficiently good accuracy for the calculation of functional parameters. Methods In this study, lung images were acquired in 20 patients suffering from cystic fibrosis (CF) and 33 healthy volunteers using a fast UTE sequence with a stack-of-spirals trajectory and a minimum echo time of 0.05 ms. A convolutional neural network was then trained for semantic lung segmentation using 17,713 2D coronal slices, each paired with a label obtained from manual segmentation. Subsequently, the network was applied to 4920 independent 2D test images and the results were compared to manual segmentations using the Sørensen–Dice similarity coefficient (DSC) and the Hausdorff distance (HD). The lung volumes and fractional ventilation values calculated from both segmentations were compared using Pearson's correlation coefficient and Bland–Altman analysis. To investigate generalizability to patients outside the CF collective, in particular to those exhibiting larger consolidations inside the lung, the network was additionally applied to UTE images from four patients with pneumonia and one with lung cancer. Results The overall DSC for lung tissue was 0.967 ± 0.076 (mean ± standard deviation) and the HD was 4.1 ± 4.4 mm. Lung volumes derived from the manual and deep learning-based segmentations, as well as values for fractional ventilation, exhibited a high overall correlation (Pearson's correlation coefficient = 0.99 and 1.00). For the additional cohort with unseen pathologies/consolidations, the mean DSC was 0.930 ± 0.083, the HD was 12.9 ± 16.2 mm, and the mean difference in lung volume was 0.032 ± 0.048 L. Conclusions Deep learning-based image segmentation in stack-of-spirals-based lung MRI allows for accurate estimation of lung volumes and fractional ventilation values and promises to replace the time-consuming step of manual image segmentation in the future.
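As a hedged illustration of the two comparisons used above, the snippet below computes a Sørensen–Dice coefficient between two binary masks and a Bland–Altman bias with 95% limits of agreement for paired volume measurements; the example volumes are made up and the implementation is not the authors':

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Sørensen–Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def bland_altman(x: np.ndarray, y: np.ndarray):
    """Bland–Altman bias and 95% limits of agreement for paired measurements."""
    diff = x - y
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Paired lung volumes (litres) from manual vs. network segmentation (invented numbers)
manual = np.array([4.1, 3.8, 5.2, 4.7])
network = np.array([4.0, 3.9, 5.1, 4.8])
print(bland_altman(manual, network))
```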


2021 ◽  
Vol 11 (4) ◽  
pp. 1965
Author(s):  
Raul-Ronald Galea ◽  
Laura Diosan ◽  
Anca Andreica ◽  
Loredana Popa ◽  
Simona Manole ◽  
...  

Despite the promising results obtained by deep learning methods in the field of medical image segmentation, a lack of sufficient data always hinders performance to a certain degree. In this work, we explore the feasibility of applying deep learning methods to a pilot dataset. We present a simple and practical approach to perform segmentation in a 2D, slice-by-slice manner, based on region of interest (ROI) localization, applying an optimized training regime to improve segmentation performance from regions of interest. We start from two popular segmentation networks: the preferred model for medical segmentation, U-Net, and a general-purpose model, DeepLabV3+. Furthermore, we show that ensembling these two fundamentally different architectures brings consistent benefits, by testing our approach on two different datasets: the publicly available ACDC challenge and the imATFIB dataset from our in-house clinical study. Results on the imATFIB dataset show that the proposed approach performs well with the provided training volumes, achieving an average whole-heart Dice Similarity Coefficient of 89.89% on the validation set. Moreover, our algorithm achieved a mean Dice value of 91.87% on the ACDC validation set, comparable to the second best-performing approach in the challenge. Our approach thus has the potential to serve as a building block of a computer-aided diagnostic system in a clinical setting.
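The abstract does not state how the U-Net and DeepLabV3+ predictions are combined; one common ensembling rule, sketched here under that assumption, is to average the per-pixel softmax probabilities of the two networks before taking the argmax:

```python
import torch

@torch.no_grad()
def ensemble_predict(models: list, image: torch.Tensor) -> torch.Tensor:
    """Average the per-pixel class probabilities of several segmentation models.

    models : list of networks whose forward pass returns logits of shape
             (N, C, H, W); `image` is a batch of shape (N, 3, H, W).
    Returns the argmax label map of shape (N, H, W).
    """
    probs = None
    for model in models:
        model.eval()
        p = torch.softmax(model(image), dim=1)
        probs = p if probs is None else probs + p
    probs /= len(models)
    return probs.argmax(dim=1)

# usage (hypothetical): labels = ensemble_predict([unet, deeplabv3plus], batch)
```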


2021 ◽  
Author(s):  
Gastón Mauro Díaz

1) Hemispherical photography (HP) is a long-standing tool for forest canopy characterization. Low-cost fisheye lenses are now available to convert smartphones into highly portable HP equipment; however, they cannot be used at any time because HP is sensitive to illumination conditions. Obtaining sound results outside diffuse-light conditions would require a deep-learning-based system that has yet to be developed. A ready-to-use alternative is the multiscale color-based binarization algorithm, but it can provide moderate-quality results only for open forests. To overcome this limitation, I propose coupling it with the model-based local thresholding algorithm; I call this coupling the MBCB approach. 2) The methods presented here are part of the R package CAnopy IMage ANalysis (caiman), which I am developing. The accuracy of the new MBCB approach was assessed with data from a pine plantation and a broadleaf native forest. 3) The coefficient of determination (R²) was greater than 0.7 and the root mean square error (RMSE) was lower than 20%, both for plant area index calculation. 4) The results suggest that the new MBCB approach allows the calculation of unbiased canopy metrics from smartphone-based HP acquired in sunlight conditions, even for closed canopies. This facilitates large-scale and opportunistic sampling with hemispherical photography.
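The caiman package is written in R and its MBCB implementation is not reproduced here; purely as an illustration of the underlying idea of refining a global colour-based binarization with locally derived thresholds, a simplified Python sketch follows (the tile size and the midpoint-of-class-means threshold rule are assumptions):

```python
import numpy as np

def local_threshold_refine(gray: np.ndarray, seed_binary: np.ndarray, tile: int = 64) -> np.ndarray:
    """Refine a first-pass canopy/sky binarization with per-tile thresholds.

    gray        : 2D float array (e.g. the blue channel of the photograph).
    seed_binary : 2D bool array from the first-pass (colour-based) binarization,
                  True = sky. The tiling scheme and threshold rule are illustrative.
    """
    out = seed_binary.copy()
    h, w = gray.shape
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            g = gray[r:r + tile, c:c + tile]
            s = seed_binary[r:r + tile, c:c + tile]
            if s.any() and (~s).any():                    # tile contains both classes
                thr = 0.5 * (g[s].mean() + g[~s].mean())  # midpoint of class means
                out[r:r + tile, c:c + tile] = g > thr
    return out
```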


2020 ◽  
Vol 6 (11) ◽  
pp. 125 ◽  
Author(s):  
Albert Comelli ◽  
Claudia Coronnello ◽  
Navdeep Dahiya ◽  
Viviana Benfante ◽  
Stefano Palmucci ◽  
...  

Background: The aim of this work is to identify an automatic, accurate, and fast deep learning segmentation approach, applied to the parenchyma, using a very small dataset of high-resolution computed tomography images of patients with idiopathic pulmonary fibrosis. In this way, we aim to enhance the methodology used by healthcare operators in radiomics studies, where operator-independent segmentation methods must be used to correctly identify the target and, consequently, the texture-based prediction model. Methods: Two deep learning models were investigated: (i) U-Net, already used in many biomedical image segmentation tasks, and (ii) E-Net, used for image segmentation tasks in self-driving cars, where hardware availability is limited and accurate segmentation is critical for user safety. Our small image dataset comprises 42 studies of patients with idiopathic pulmonary fibrosis, of which only 32 were used for the training phase. We compared the performance of the two models in terms of the similarity of their segmentation outcome with the gold standard and in terms of their resource requirements. Results: E-Net can be used to obtain accurate (Dice similarity coefficient = 95.90%), fast (20.32 s), and clinically acceptable segmentation of the lung region. Conclusions: We demonstrated that deep learning models can be efficiently applied to rapidly segment and quantify the parenchyma of patients with pulmonary fibrosis, without any radiologist supervision, in order to produce user-independent results.
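A resource comparison of the kind described above can be approximated, for instance, by counting parameters and timing a forward pass; the sketch below assumes PyTorch models and an illustrative input size, and is not the protocol used in the paper:

```python
import time
import torch

def profile_model(model: torch.nn.Module, input_shape=(1, 1, 256, 256), runs: int = 10):
    """Report parameter count and average CPU inference time for a segmentation model.

    The input shape and number of timing runs are illustrative assumptions.
    """
    n_params = sum(p.numel() for p in model.parameters())
    x = torch.randn(*input_shape)
    model.eval()
    with torch.no_grad():
        model(x)                                  # warm-up pass
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        elapsed = (time.perf_counter() - start) / runs
    return n_params, elapsed

# usage (hypothetical): print(profile_model(unet)); print(profile_model(enet))
```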


2019 ◽  
Vol 70 (1) ◽  
pp. 80-87
Author(s):  
Mait Lang ◽  
Jan Pisek

Abstract Hemispherical photography provides permanent records of forest canopy structure. We analysed digital hemispherical images taken during the period 2007–2018 in a mature silver birch stand located in Järvselja, Estonia. The stand was thinned in 2004, and understory trees were removed in the spring of 2018. The images were processed using the LinearRatioSC method. The effective plant area index (Leff) during the leafless phenophase increased from 0.92 to 1.24 as a result of tree growth, and the effect of understory cutting was not detectable. Under full-foliage conditions, Leff increased from 3.6 in 2008 to 5.8 in 2017. After the removal of understory trees from the stand, Leff decreased, and repeated measurements in the summer of 2018 placed the plant area index in the range 4.5 < Leff < 4.8. The results are in agreement with the expected changes following forest growth and demonstrate that LinearRatioSC is a suitable method for the estimation and long-term monitoring of forest canopy properties from digital hemispherical images.
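The LinearRatioSC method itself is not reproduced here; as a hedged illustration of how an effective plant area index can be derived from ring-wise gap fractions of a hemispherical image, Miller's integral Leff = 2 ∫ -ln(P(θ)) cos θ sin θ dθ can be approximated numerically (the ring angles and gap fractions below are invented):

```python
import numpy as np

def effective_pai(theta_deg: np.ndarray, gap_fraction: np.ndarray) -> float:
    """Approximate Miller's integral  Leff = 2 * ∫ -ln(P(θ)) cosθ sinθ dθ
    from ring-wise gap fractions P(θ) measured at zenith angles θ."""
    theta = np.deg2rad(theta_deg)
    integrand = -np.log(gap_fraction) * np.cos(theta) * np.sin(theta)
    return 2.0 * np.trapz(integrand, theta)

# Invented gap fractions for five zenith rings (7°, 23°, 38°, 53°, 68°)
print(effective_pai(np.array([7.0, 23.0, 38.0, 53.0, 68.0]),
                    np.array([0.25, 0.20, 0.15, 0.10, 0.08])))
```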


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Xu Fu ◽  
Huaiqin Liu ◽  
Xiaowang Bi ◽  
Xiao Gong

This study focused on the application of deep learning algorithms to the segmentation of CT images, so as to diagnose chronic kidney diseases accurately and quantitatively. First, the residual dual-attention module (RDA module) was used for automatic segmentation of renal cysts in CT images. 79 patients with renal cysts were selected as research subjects, of whom 27 were assigned to the test group and 52 to the training group. The segmentation results of the test group were evaluated using the Dice similarity coefficient (DSC), precision, and recall. The experimental results showed that the loss function value of the RDA-UNET model decayed and converged rapidly, and that the segmentation results of the model were roughly the same as those of manual labeling, indicating that the model had high accuracy in image segmentation and that the contour of the kidney could be segmented accurately. Next, the RDA-UNET model achieved 96.25% DSC, 96.34% precision, and 96.88% recall for the left kidney and 94.22% DSC, 95.34% precision, and 94.61% recall for the right kidney, outperforming the other algorithms considered in every evaluation index. In conclusion, the RDA-UNET model can effectively improve the accuracy of CT image segmentation, and it is worth promoting for the quantitative assessment of chronic kidney diseases through CT imaging.
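The three evaluation metrics reported above have standard definitions; a minimal sketch follows (assuming nonzero pixels mark the kidney in both the predicted and ground-truth masks):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """DSC, precision, and recall for a predicted vs. ground-truth binary mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return {
        "dsc": 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }
```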


2021 ◽  
Vol 15 ◽  
Author(s):  
Kaisar Kushibar ◽  
Mostafa Salem ◽  
Sergi Valverde ◽  
Àlex Rovira ◽  
Joaquim Salvi ◽  
...  

Segmentation of brain images from Magnetic Resonance Images (MRI) is an indispensable step in clinical practice. Morphological changes of sub-cortical brain structures and quantification of brain lesions are considered biomarkers of neurological and neurodegenerative disorders and are used for diagnosis, treatment planning, and monitoring disease progression. In recent years, deep learning methods have shown outstanding performance in medical image segmentation. However, these methods suffer from a generalisability problem due to inter-centre and inter-scanner variabilities of MRI images. The main objective of this study is to develop an automated deep learning segmentation approach that is accurate and robust to variabilities in scanner and acquisition protocols. In this paper, we propose a transductive transfer learning approach for domain adaptation to reduce the domain-shift effect in brain MRI segmentation. The transductive scenario assumes that there are sets of images from two different domains: (1) source—images with manually annotated labels; and (2) target—images without expert annotations. The network is then jointly optimised by integrating both source and target images into the transductive training process to segment the regions of interest and to minimise the domain-shift effect. We propose to use a histogram loss at the feature level to carry out the latter optimisation. To demonstrate the benefit of the proposed approach, the method was tested on two different brain MRI segmentation problems using multi-centre and multi-scanner databases: (1) sub-cortical brain structure segmentation; and (2) white matter hyperintensities segmentation. The experiments showed that the segmentation performance of a pre-trained model could be significantly improved, by up to 10%. For the first segmentation problem, the maximum improvement in the average Dice Similarity Coefficient (DSC) was from 0.680 to 0.799, and for the second problem the average DSC improved from 0.504 to 0.602. Moreover, the improvements after domain adaptation were on par with or better than those of the commonly used traditional unsupervised segmentation methods (FIRST and LST), while also achieving faster execution times. Taking this into account, this work presents one more step toward the practical implementation of deep learning algorithms in the clinical routine.
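The exact form of the histogram loss is not given in the abstract; the sketch below shows one common way to build a differentiable feature histogram with Gaussian kernel binning and penalise the distance between source- and target-domain histograms (the bin count, value range, bandwidth, and the L1 distance are all assumptions):

```python
import torch

def soft_histogram(features: torch.Tensor, bins: int = 32, lo: float = -3.0,
                   hi: float = 3.0, bandwidth: float = 0.1) -> torch.Tensor:
    """Differentiable histogram of a 1D feature tensor via Gaussian kernel binning."""
    centers = torch.linspace(lo, hi, bins, device=features.device)
    # weight of each feature value for each bin, shape (n_features, bins)
    weights = torch.exp(-0.5 * ((features.reshape(-1, 1) - centers) / bandwidth) ** 2)
    hist = weights.sum(dim=0)
    return hist / (hist.sum() + 1e-8)          # normalise to a distribution

def histogram_loss(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    """L1 distance between the soft histograms of source- and target-domain features."""
    return (soft_histogram(source_feats) - soft_histogram(target_feats)).abs().sum()
```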


2020 ◽  
Vol 28 (5) ◽  
pp. 905-922
Author(s):  
Qingqing Li ◽  
Ke Chen ◽  
Lin Han ◽  
Yan Zhuang ◽  
Jingtao Li ◽  
...  

BACKGROUND: Automatic segmentation of individual tooth roots is a key technology for the reconstruction of three-dimensional dental models from Cone Beam Computed Tomography (CBCT) images, which is of great significance for orthodontic, implant, and other dental diagnosis and treatment planning. OBJECTIVES: Currently, tooth root segmentation is mainly done manually because of the similar gray levels of the tooth root and the alveolar bone in CBCT images. This study aims to explore an automatic tooth root segmentation algorithm for CBCT axial image sequences based on deep learning. METHODS: We propose a new automatic tooth root segmentation method based on a deep learning U-Net with attention gates (AGs). Since a CBCT sequence has strong correlations between adjacent slices, a recurrent neural network (RNN) was applied to extract the intra-slice and inter-slice contexts. To develop and test this new method, 24 sets of CBCT sequences containing 1160 images and 5 sets of CBCT sequences containing 361 images were used to train and test the network, respectively. RESULTS: On the testing dataset, the segmentation accuracy measured by the intersection over union (IOU), Dice similarity coefficient (DICE), average precision rate (APR), average recall rate (ARR), and average symmetrical surface distance (ASSD) was 0.914, 0.955, 95.8%, 95.3%, and 0.145 mm, respectively. CONCLUSIONS: The study demonstrates that the new method combining an attention U-Net with an RNN yields promising results for automatic tooth root segmentation and has the potential to help improve segmentation efficiency and accuracy in future clinical practice.
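The attention gates (AGs) mentioned above follow the additive-attention pattern popularised by attention U-Nets; the sketch below shows that pattern only, with illustrative channel sizes, and omits both the upsampling of the gating signal and the RNN used by the authors:

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate in the style used by attention U-Nets.

    Channel sizes and the 2D setting are illustrative; gate and skip features
    are assumed to share the same spatial size (upsampling omitted).
    """
    def __init__(self, gate_ch: int, skip_ch: int, inter_ch: int):
        super().__init__()
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, gate: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        # additive attention: sigmoid(psi(relu(Wg*g + Wx*x))) rescales the skip features
        attn = torch.sigmoid(self.psi(torch.relu(self.w_g(gate) + self.w_x(skip))))
        return skip * attn

# usage (hypothetical): gated = AttentionGate(256, 128, 64)(decoder_feat, encoder_feat)
```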


2021 ◽  
Vol 1861 (1) ◽  
pp. 012067
Author(s):  
Yu’ang Niu ◽  
Yuanyang Zhang ◽  
Liping Ying ◽  
Hong Li ◽  
Wenbo Chen ◽  
...  



Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1952
Author(s):  
May Phu Paing ◽  
Supan Tungjitkusolmun ◽  
Toan Huy Bui ◽  
Sarinporn Visitsattapongse ◽  
Chuchart Pintavirooj

Automated segmentation methods are critical for early detection, prompt action, and immediate treatment to reduce the disability and death risks of brain infarction. This paper aims to develop a fully automated method to segment infarct lesions from T1-weighted brain scans. As a key novelty, the proposed method combines variational mode decomposition and deep learning-based segmentation to take advantage of both methods and provide better results. There are three main technical contributions in this paper. First, variational mode decomposition is applied as a pre-processing step to discriminate the infarct lesions from unwanted non-infarct tissues. Second, an overlapped-patches strategy is proposed to reduce the workload of the deep-learning-based segmentation task. Finally, a three-dimensional U-Net model is developed to perform patch-wise segmentation of infarct lesions. A total of 239 brain scans from a public dataset are utilized to develop and evaluate the proposed method. Empirical results reveal that the proposed automated segmentation can provide promising performance, with an average Dice similarity coefficient (DSC) of 0.6684, an intersection over union (IoU) of 0.5022, and an average symmetric surface distance (ASSD) of 0.3932.
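The overlapped-patches strategy can be illustrated with a simple extraction loop; the patch size and stride below are assumptions, as the paper's own patch geometry is not stated in the abstract:

```python
import numpy as np

def extract_patches_3d(volume: np.ndarray, patch: int = 64, stride: int = 32):
    """Extract overlapping cubic patches from a 3D volume (patch and stride are
    illustrative; the volume is assumed to be at least one patch in each dimension)."""
    patches, origins = [], []
    for z in range(0, volume.shape[0] - patch + 1, stride):
        for y in range(0, volume.shape[1] - patch + 1, stride):
            for x in range(0, volume.shape[2] - patch + 1, stride):
                patches.append(volume[z:z + patch, y:y + patch, x:x + patch])
                origins.append((z, y, x))
    return np.stack(patches), origins
```

At inference time, patch-wise predictions are typically stitched back into the full volume by averaging in the overlapping regions.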

