Reducing Textural Bias Improves Robustness of Deep Segmentation Models

Author(s):  
Seoin Chai ◽  
Daniel Rueckert ◽  
Ahmed E. Fetit
2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Arnaud Liehrmann ◽  
Guillem Rigaill ◽  
Toby Dylan Hocking

Abstract Background Histone modification constitutes a basic mechanism for the genetic regulation of gene expression. In the early 2000s, a powerful technique emerged that couples chromatin immunoprecipitation with high-throughput sequencing (ChIP-seq). This technique provides a direct survey of the DNA regions associated with these modifications. To realize the full potential of this technique, increasingly sophisticated statistical algorithms have been developed or adapted to analyze the massive amount of data it generates. Many of these algorithms were built around natural assumptions, such as using the Poisson distribution to model the noise in the count data. In this work we start from these natural assumptions and show that it is possible to improve upon them. Results Our comparisons on seven reference datasets of histone modifications (H3K36me3 and H3K4me3) suggest that natural assumptions are not always realistic under application conditions. We show that the unconstrained multiple changepoint detection model with alternative noise assumptions and supervised learning of the penalty parameter accounts for the over-dispersion exhibited by count data. These models, implemented in the R package CROCS (https://github.com/aLiehrmann/CROCS), detect peaks more accurately than algorithms that rely on natural assumptions. Conclusion The segmentation models we propose can benefit researchers in the field of epigenetics by providing new high-quality peak prediction tracks for H3K36me3 and H3K4me3 histone modifications.
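CROCS itself is an R package; purely as an illustration of the penalized multiple changepoint detection idea the abstract describes, here is a minimal Python sketch using exact dynamic programming with a Poisson segment cost and a per-changepoint penalty (all function names are hypothetical, not the CROCS API):

```python
import math

def poisson_cost(counts):
    """Negative log-likelihood (up to a constant) of a constant-rate
    Poisson segment, with the rate set to the segment mean."""
    n, total = len(counts), sum(counts)
    if total == 0:
        return 0.0
    mean = total / n
    return total - total * math.log(mean)  # sum(lam - x*log(lam)) at lam = mean

def optimal_partitioning(counts, penalty):
    """Exact penalized changepoint detection by dynamic programming.
    Returns the sorted segment end positions (the last one is len(counts))."""
    n = len(counts)
    best = [0.0] * (n + 1)   # best[t]: optimal penalized cost of counts[:t]
    last = [0] * (n + 1)     # last[t]: start index of the final segment
    for t in range(1, n + 1):
        best[t], last[t] = min(
            (best[s] + poisson_cost(counts[s:t]) + penalty, s)
            for s in range(t)
        )
    # Backtrack the segment boundaries
    breaks, t = [], n
    while t > 0:
        breaks.append(t)
        t = last[t]
    return sorted(breaks)
```

Swapping the Poisson cost for a negative binomial or Gaussian cost, and learning `penalty` from labeled regions, corresponds to the alternative noise assumptions and supervised penalty learning discussed above.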


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Markus J. Ankenbrand ◽  
Liliia Shainberg ◽  
Michael Hock ◽  
David Lohr ◽  
Laura M. Schreiber

Abstract Background Image segmentation is a common task in medical imaging, e.g., for volumetry analysis in cardiac MRI. Artificial neural networks are used to automate this task with performance similar to that of manual operators. However, this performance is only achieved on the narrow tasks the networks are trained on. Performance drops dramatically when data characteristics differ from those of the training set. Moreover, neural networks are commonly considered black boxes, because it is hard to understand how they make decisions and why they fail. Therefore, it is also hard to predict whether they will generalize and work well on new data. Here we present a generic method for segmentation model interpretation. Sensitivity analysis is an approach in which the model input is modified in a controlled manner and the effect of these modifications on the model output is evaluated. This yields insights into the sensitivity of the model to these alterations and therefore into the importance of certain features for segmentation performance. Results We present an open-source Python library (misas) that facilitates the use of sensitivity analysis with arbitrary data and models. We show that this method is a suitable approach for answering practical questions regarding the use and functionality of segmentation models. We demonstrate this in two case studies on cardiac magnetic resonance imaging. The first case study explores the suitability of a published network for use on a public dataset the network has not been trained on. The second case study demonstrates how sensitivity analysis can be used to evaluate the robustness of a newly trained model. Conclusions Sensitivity analysis is a useful tool for deep learning developers as well as users such as clinicians. It extends their toolbox, enabling and improving the interpretability of segmentation models. Enhancing our understanding of neural networks through sensitivity analysis also assists in decision making. Although demonstrated only on cardiac magnetic resonance images, the approach and software are much more broadly applicable.
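The core loop of sensitivity analysis is simple enough to sketch generically: perturb the input over a range of parameter values and record how a segmentation metric responds. The sketch below is not the misas API; the helper names and toy model are illustrative assumptions only.

```python
def dice(a, b):
    """Dice overlap between two binary masks given as flat 0/1 lists."""
    inter = sum(x * y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 1.0 if total == 0 else 2.0 * inter / total

def sensitivity_curve(model, image, truth, transform, params):
    """Apply transform(image, p) for each parameter value p and record
    how the Dice score of the model's prediction changes."""
    return [(p, dice(model(transform(image, p)), truth)) for p in params]
```

For example, with a toy thresholding "model" and a brightness-shift transform, the curve shows the score degrading as the shift grows, which is exactly the kind of robustness question the two case studies address.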


2021 ◽  
Vol 11 (15) ◽  
pp. 7046
Author(s):  
Jorge Francisco Ciprián-Sánchez ◽  
Gilberto Ochoa-Ruiz ◽  
Lucile Rossi ◽  
Frédéric Morandini

Wildfires stand as one of the most significant natural disasters worldwide, all the more so given the effects of climate change and their impact at various societal and environmental levels. In this regard, a significant amount of research has been conducted to address this issue, deploying a wide variety of technologies and following a multi-disciplinary approach. Notably, computer vision has played a fundamental role: it can be used to extract and combine information from several imaging modalities for fire detection, fire characterization, and wildfire spread forecasting. In recent years, work on Deep Learning (DL)-based fire segmentation has shown very promising results. However, it is currently unclear whether the architecture of a model, its loss function, or the image type employed (visible, infrared, or fused) has the greatest impact on the fire segmentation results. In the present work, we evaluate different combinations of state-of-the-art (SOTA) DL architectures, loss functions, and image types to identify the parameters most relevant to improving the segmentation results. We benchmark them to identify the top-performing combinations and compare them to traditional fire segmentation techniques. Finally, we evaluate whether adding attention modules to the best-performing architecture can further improve the segmentation results. To the best of our knowledge, this is the first work to evaluate the impact of the architecture, loss function, and image type on the performance of DL-based wildfire segmentation models.
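One of the factors the study varies is the loss function. A common choice in segmentation benchmarks of this kind is the soft Dice loss; as a generic illustration (not tied to any architecture evaluated in the paper), a minimal version for one binary mask:

```python
def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for one binary mask: 1 - Dice coefficient.
    pred holds probabilities in [0, 1], target holds 0/1 labels;
    eps avoids division by zero on empty masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)
```

Unlike pixel-wise cross-entropy, this overlap-based loss is insensitive to the large background-to-fire class imbalance typical of wildfire images, which is why it is a frequent candidate in such comparisons.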


Author(s):  
Wenjia Cai ◽  
Jie Xu ◽  
Ke Wang ◽  
Xiaohong Liu ◽  
Wenqin Xu ◽  
...  

Abstract Anterior segment eye diseases account for a significant proportion of presentations to eye clinics worldwide, including diseases associated with corneal pathologies, anterior chamber abnormalities (e.g., blood or inflammation), and lens diseases. The construction of an automatic tool for the segmentation of anterior segment eye lesions would greatly improve the efficiency of clinical care. With research on artificial intelligence progressing in recent years, deep learning models have shown their superiority in image classification and segmentation. The training and evaluation of deep learning models should be based on a large amount of expert-annotated data; however, such data are relatively scarce in the domain of medicine. Herein, the authors developed a new medical image annotation system, called EyeHealer. It is a large-scale anterior eye segment dataset with both eye structures and lesions annotated at the pixel level. Comprehensive experiments were conducted to verify its performance in disease classification and eye lesion segmentation. The results showed that semantic segmentation models outperformed medical segmentation models. This paper describes the establishment of the system for automated classification and segmentation tasks. The dataset will be made publicly available to encourage future research in this area.


2021 ◽  
Vol 11 (5) ◽  
pp. 364
Author(s):  
Bingjiang Qiu ◽  
Hylke van der Wel ◽  
Joep Kraeima ◽  
Haye Hendrik Glas ◽  
Jiapan Guo ◽  
...  

Accurate mandible segmentation is significant in the field of maxillofacial surgery, where it guides clinical diagnosis and treatment and the development of appropriate surgical plans. In particular, cone-beam computed tomography (CBCT) images with metal parts, such as those used in oral and maxillofacial surgery (OMFS), are often degraded by metal artifacts, including weak and blurred boundaries caused by high-attenuation materials and the low radiation dose used in image acquisition. To overcome this problem, this paper proposes a novel deep learning-based approach (SASeg) for automated mandible segmentation that incorporates anatomical knowledge of the overall mandible shape. SASeg utilizes a prior shape feature extractor (PSFE) module based on a mean mandible shape, together with recurrent connections that maintain the structural continuity of the mandible. The effectiveness of the proposed network is substantiated on a dental CBCT dataset from orthodontic treatment containing 59 patients. The experiments show that the proposed SASeg can be easily used to improve the prediction accuracy on a dental CBCT dataset corrupted by metal artifacts. In addition, the experimental results on the PDDCA dataset demonstrate that, compared with state-of-the-art mandible segmentation models, our proposed SASeg achieves better segmentation performance.
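The PSFE module in the paper is a learned component; the underlying mean-shape idea, however, can be illustrated with a toy sketch: average a set of aligned binary masks and threshold the result to obtain a crude population-level shape prior (the function name and threshold are assumptions for illustration only).

```python
def mean_shape_prior(masks, threshold=0.5):
    """Average a set of aligned, same-length binary masks and threshold
    the result, yielding a simple population-level shape prior."""
    n = len(masks)
    avg = [sum(col) / n for col in zip(*masks)]
    return [1 if v >= threshold else 0 for v in avg]
```

In the actual network such a prior is injected as a feature rather than used directly, but the sketch shows why a mean shape can stabilize predictions where metal artifacts blur the true boundary.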


2021 ◽  
Author(s):  
Yao Zhao ◽  
Dong Joo Rhee ◽  
Carlos Cardenas ◽  
Laurence E. Court ◽  
Jinzhong Yang

2022 ◽  
Vol 3 (2) ◽  
pp. 1-15
Author(s):  
Junqian Zhang ◽  
Yingming Sun ◽  
Hongen Liao ◽  
Jian Zhu ◽  
Yuan Zhang

Radiation-induced xerostomia, a major problem in radiation treatment of head and neck cancer, is mainly due to overdose irradiation injury to the parotid glands. Helical Tomotherapy-based megavoltage computed tomography (MVCT) imaging during the Tomotherapy treatment can be applied to monitor the successive variations in the parotid glands. While manual segmentation is time consuming, laborious, and subjective, automatic segmentation is quite challenging due to the complicated anatomical environment of the head and neck as well as noise in MVCT images. In this article, we propose a localization-refinement scheme to segment the parotid gland in MVCT. After data pre-processing, we use a mask region-based convolutional neural network (Mask R-CNN) in the localization stage and design a modified U-Net for the subsequent fine-segmentation stage. To the best of our knowledge, this study is a pioneering work of deep learning on MVCT segmentation. Comprehensive experiments based on different data distributions of head and neck MVCTs and different segmentation models have demonstrated the superiority of our approach in terms of accuracy, effectiveness, flexibility, and practicability. Our method can be adopted as a powerful tool for radiation-induced injury studies, where accurate organ segmentation is crucial.
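The localization-refinement pattern itself is architecture-agnostic: a detector proposes a bounding box, and a refiner segments only the cropped region. A minimal generic sketch (toy stand-ins for Mask R-CNN and the modified U-Net; names are hypothetical):

```python
def crop(image, box):
    """Crop a 2D image (list of rows) to a bounding box (r0, r1, c0, c1)."""
    r0, r1, c0, c1 = box
    return [row[c0:c1] for row in image[r0:r1]]

def localize_then_refine(image, detector, refiner):
    """Two-stage segmentation: the detector proposes a bounding box,
    then the refiner segments only the cropped region."""
    box = detector(image)
    return box, refiner(crop(image, box))
```

Restricting the refiner to the detected region is what makes the second stage tractable in noisy MVCT: it only ever sees a small patch around the organ rather than the full anatomical context.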


2011 ◽  
Vol 43 (12) ◽  
pp. 3011-3029 ◽  
Author(s):  
Stewart Barr ◽  
Gareth Shaw ◽  
Tim Coles

Proenvironmental behaviour change remains a high priority for many governments and agencies, and there are now numerous programmes aimed at encouraging citizens to adopt sustainable forms of living. However, although programmes for addressing behaviour change in and around the home are well developed, significantly less attention has been paid to activities beyond this site of practice. This is despite the environmental implications of consumption choices for leisure, tourism, and work-related activities. Through focusing on sites of practice as a key framing device, this paper uses data from a series of in-depth interviews to identify three major challenges for academics and practitioners concerned with understanding and promoting more environmentally responsible behaviour. First, attention must shift beyond the home as a site of environmental practice to consider the ways in which individuals respond to exhortations towards ‘greener’ lifestyles in other high-consumption and carbon-intensive settings. Second, in broadening the scope of environmental practice, policy makers need to revisit their reliance on segmentation models and related social marketing approaches, in light of data suggesting that those with strong environmental commitments in the home are often reluctant to make similar commitments in other sites of practice. Third, researchers and policy makers therefore need to move beyond the traditional ‘siting’ of environmental practice towards a spatially sophisticated conceptualisation that accounts for the multiple settings of consumption by mapping the relationships that exist between sites of practice.


Author(s):  
Gavindya Jayawardena ◽  
Sampath Jayarathna

Eye-tracking experiments involve areas of interest (AOIs) for the analysis of eye gaze data. While there are tools to delineate AOIs for extracting eye movement data, they may require users to manually draw AOI boundaries on the eye-tracking stimuli or to use markers to define AOIs. This paper introduces two novel techniques to dynamically filter eye movement data from AOIs for the analysis of eye metrics at multiple levels of granularity. The authors incorporate pre-trained object detectors and object instance segmentation models for offline detection of dynamic AOIs in video streams. This research presents the implementation and evaluation of object detectors and object instance segmentation models to find the best model to integrate into a real-time eye movement analysis pipeline. The authors filter gaze data that falls within the polygonal boundaries of detected dynamic AOIs and apply object detectors to find bounding boxes in a public dataset. The results indicate that the dynamic AOIs generated by object detectors capture 60% of eye movements, while object instance segmentation models capture 30% of eye movements.
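The filtering step described above reduces to a point-in-polygon test per gaze sample. As a generic illustration (not the paper's implementation), a standard ray-casting sketch:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: does (x, y) fall inside polygon,
    given as a list of (x, y) vertices in order?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def filter_gaze(points, aoi_polygon):
    """Keep only gaze samples that fall inside the AOI polygon."""
    return [p for p in points if point_in_polygon(p[0], p[1], aoi_polygon)]
```

Running this per frame against the polygons produced by an instance segmentation model (or the rectangles from an object detector) yields the AOI-filtered gaze streams the paper evaluates.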


2021 ◽  
Author(s):  
Xuefen Liu ◽  
Tianping Wang ◽  
Guofu Zhang ◽  
Keqin Hua ◽  
Hua Jiang ◽  
...  

Abstract Background: Accurate discrimination between ovarian borderline tumors (BOTs) and malignancies with imaging plays an important role in management. Purpose: To evaluate the ability of T2-weighted imaging (T2WI)-based radiomics to discriminate ovarian BOTs from malignancies based on two-dimensional (2D) and three-dimensional (3D) lesion segmentation methods. Methods: A total of 95 patients with pathologically proven ovarian BOTs and 101 patients with malignancies were retrospectively included in this study. We evaluated the diagnostic performance of the signatures derived from T2WI-based radiomics in differentiating between BOTs and malignancies, and compared the performance of the 2D and 3D segmentation models. The least absolute shrinkage and selection operator (LASSO) method was used for radiomics feature selection and machine learning processing. Results: The radiomics scores between BOTs and malignancies in the four types of selected T2WI-based radiomics models differed significantly (p < 0.0001). For the classification between BOTs and malignant masses, the 2D and 3D coronal T2WI-based radiomics models yielded accuracy values of 0.79 and 0.83 in the testing group, respectively; the 2D and 3D sagittal fat-suppressed (fs) T2WI-based radiomics models yielded accuracies of 0.78 and 0.99, respectively. Conclusion: Our results suggest that T2WI-based radiomic features are highly correlated with ovarian tumor subtype classification. 3D sagittal MRI radiomics features may help clinicians differentiate ovarian BOTs from malignancies with high accuracy (ACC).
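The LASSO step used here for radiomics feature selection can be illustrated in miniature: cyclic coordinate descent with soft-thresholding drives the weights of uninformative features to exactly zero, which is what prunes the radiomics signature. A toy pure-Python sketch (not the authors' pipeline, which would typically use a library implementation):

```python
def lasso_coordinate_descent(X, y, alpha, n_iter=200):
    """Minimise (1/2n)*||y - Xw||^2 + alpha*||w||_1 by cyclic coordinate
    descent; features whose weight shrinks to exactly zero drop out."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Correlation of feature j with the partial residual (excluding j)
            rho = sum(
                X[i][j] * (y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j))
                for i in range(n)
            ) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            # Soft-thresholding update
            if rho > alpha:
                w[j] = (rho - alpha) / z
            elif rho < -alpha:
                w[j] = (rho + alpha) / z
            else:
                w[j] = 0.0
    return w
```

With hundreds of texture and shape features extracted per lesion but only ~200 patients, this sparsity is what keeps the downstream classifier from overfitting.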

