Quantifying Variability of Manual Annotation in Cryo-Electron Tomograms

2016, Vol 22 (3), pp. 487-496
Author(s): Corey W. Hecksel, Michele C. Darrow, Wei Dai, Jesús G. Galaz-Montoya, Jessica A. Chin, ...

Abstract: Although acknowledged to be variable and subjective, manual annotation of cryo-electron tomography data is commonly used to answer structural questions and to create a “ground truth” for evaluation of automated segmentation algorithms. Validation of such annotation is lacking, but is critical for understanding the reproducibility of manual annotations. Here, we used voxel-based similarity scores for a variety of specimens, ranging in complexity and segmented by several annotators, to quantify the variation among their annotations. In addition, we have identified procedures for merging annotations to reduce variability, thereby increasing the reliability of manual annotation. Based on our analyses, we find that it is necessary to combine multiple manual annotations to increase the confidence level for answering structural questions. We also make recommendations to guide algorithm development for automated annotation of features of interest.
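The abstract does not name the specific voxel-based similarity scores or merging procedures used; as a minimal illustrative sketch (the choice of the Dice coefficient and majority voting, and the function names, are assumptions, not the authors' implementation), such measures could look like:

```python
# Illustrative sketch only: one common voxel-based similarity score (Dice)
# and a simple majority-vote merge of multiple manual annotations.
# The paper's actual scores and merging procedures may differ.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two boolean voxel masks (1.0 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def majority_merge(masks: list) -> np.ndarray:
    """Merged annotation: keep voxels marked by more than half the annotators."""
    stack = np.stack([m.astype(bool) for m in masks])
    return stack.sum(axis=0) > (len(masks) / 2)
```

Comparing each annotator's mask against such a merged consensus is one way to quantify inter-annotator variability.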

2020, Vol 21 (1)
Author(s): Clyde J. Belasso, Bahareh Behboodi, Habib Benali, Mathieu Boily, Hassan Rivaz, ...

Abstract

Background: Among the paraspinal muscles, the structure and function of the lumbar multifidus (LM) has become of great interest to researchers and clinicians involved in lower back pain and muscle rehabilitation. Ultrasound (US) imaging of the LM muscle is a useful clinical tool for assessing muscle morphology and function, and is widely used due to its portability, cost-effectiveness, and ease of use. To assess muscle function, quantitative information about the LM must be extracted from the US image by manual segmentation. However, manual segmentation requires extensive training and experience, and is subject to the difficulty and subjectivity of image interpretation. The development of automated segmentation methods is therefore warranted and would strongly benefit clinicians and researchers. The aim of this study is to provide a database that will contribute to the development of automated segmentation algorithms for the LM.

Construction and content: This database provides the US ground truth of the left and right LM muscles at the L5 level (in prone and standing positions) of 109 young athletic adults involved in Concordia University’s varsity teams. The LUMINOUS database contains the US images with their corresponding manually segmented binary masks, serving as the ground truth. The purpose of the database is to enable development and validation of deep learning algorithms for automatic segmentation tasks related to the assessment of the LM cross-sectional area (CSA) and echo intensity (EI). The LUMINOUS database is publicly available at http://data.sonography.ai.

Conclusion: The development of automated segmentation algorithms based on this database will promote the standardization of LM measurements and facilitate comparison among studies. Moreover, it can accelerate the adoption of quantitative muscle assessment in clinical and research settings.
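Given an image and its binary mask from such a database, the two target quantities are straightforward to compute. A sketch (the pixel-spacing parameter is a placeholder; the database's actual pixel dimensions are not stated here):

```python
# Sketch: CSA and echo intensity from a US image and its binary mask.
# pixel_spacing_mm is a placeholder parameter, not taken from the database.
import numpy as np

def cross_sectional_area(mask: np.ndarray, pixel_spacing_mm: float) -> float:
    """CSA in mm^2: number of segmented pixels times the area of one pixel."""
    return float(mask.astype(bool).sum()) * pixel_spacing_mm ** 2

def echo_intensity(image: np.ndarray, mask: np.ndarray) -> float:
    """EI: mean grayscale intensity inside the segmented region."""
    return float(image[mask.astype(bool)].mean())
```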


2019
Author(s): Julio Kovacs, Jun Ha Song, Manfred Auer, Jing He, Willy Wriggers

Abstract: Cryo-electron tomography maps often exhibit considerable noise and anisotropic resolution, due to the low-dose imaging requirements and the missing wedge in Fourier space. These spurious features are visually unappealing and, more importantly, prevent automated segmentation of geometric shapes, instead requiring highly subjective, labor-intensive manual tracing. We developed a novel computational strategy for objectively denoising and correcting missing-wedge artifacts in the special but important case of repetitive basic shapes, such as filamentous structures. In this approach, we use a shape template and a non-negative “location map” to constrain the deconvolution scheme, allowing us to recover, to a considerable degree, the information lost in the missing wedge. We applied our method to data of actin-filament bundles of inner-ear stereocilia, which are critical in hearing transduction, and found good overlap with the experimental map and with manual tracing. In addition, we demonstrate that our method can also be used for membrane detection.
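The missing wedge arises because the tilt series covers only a limited angular range (commonly about ±60°), leaving a wedge-shaped region of Fourier space unsampled. A sketch of how this sampling constraint can be modeled (a generic illustration assuming a single tilt axis along y, not the authors' constrained-deconvolution scheme):

```python
# Sketch: binary missing-wedge mask in Fourier space for a single-axis tilt
# series (tilt axis along y), and its application to a volume. Generic
# illustration; not the paper's constrained-deconvolution method.
import numpy as np

def missing_wedge_mask(shape, max_tilt_deg=60.0):
    """True where Fourier space is sampled by a +/- max_tilt_deg tilt series."""
    nz, ny, nx = shape
    kz = np.abs(np.fft.fftfreq(nz))[:, None, None]
    kx = np.abs(np.fft.fftfreq(nx))[None, None, :]
    # Angle from the kz = 0 plane, measured in the kx-kz plane.
    angle = np.degrees(np.arctan2(kz, kx))
    return np.broadcast_to(angle <= max_tilt_deg, shape)

def apply_wedge(vol, max_tilt_deg=60.0):
    """Simulate missing-wedge degradation by zeroing unsampled frequencies."""
    mask = missing_wedge_mask(vol.shape, max_tilt_deg)
    return np.real(np.fft.ifftn(np.fft.fftn(vol) * mask))
```

Applying `apply_wedge` to a synthetic model elongates features along the beam (z) direction, the familiar missing-wedge artifact that the paper's deconvolution approach aims to correct.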


Author(s): Alister Burt, Lorenzo Gaifas, Tom Dendooven, Irina Gutsche

Abstract: Cryo-electron tomography and subtomogram averaging are increasingly used for macromolecular structure determination in situ. Here we introduce a set of computational tools and resources designed to enable flexible approaches to subtomogram averaging. In particular, our tools simplify metadata handling, increase automation, and interface the Dynamo software package with the Warp-Relion-M pipeline. We provide a framework for ab initio and geometrical approaches to subtomogram averaging combining tools from these packages. We illustrate the power of this framework by applying it to EMPIAR-10164, a publicly available dataset containing immature HIV-1 virus-like particles, and to a challenging in situ dataset containing chemosensory arrays in bacterial minicells. Additionally, we establish an open and collaborative online platform for sharing knowledge and tools related to cryo-electron tomography data processing. To this platform, we contribute a comprehensive guide to obtaining state-of-the-art results from EMPIAR-10164.


2020
Author(s): Jitin Singla, Kate L. White, Raymond C. Stevens, Frank Alber

Abstract: Cryo-electron tomography provides the opportunity for unsupervised discovery of endogenous complexes in situ. This process usually requires particle picking, clustering and alignment of subtomograms to produce an average structure of the complex. When applied to heterogeneous samples, template-free clustering and alignment of subtomograms can potentially lead to the discovery of structures for unknown endogenous complexes. However, such methods require useful scoring functions to measure the quality of aligned subtomogram clusters, which can be compromised by contamination from misclassified complexes and by alignment errors. To our knowledge, no comprehensive survey assessing the effectiveness of scoring functions for ranking the quality of subtomogram clusters exists yet. Here, we provide such a study and assess a total of 15 scoring functions for evaluating the quality of subtomogram clusters, which differ in the amount of structural misalignment and contamination from misclassified complexes. We used both experimental and simulated subtomograms as ground-truth data sets. Our analysis shows that the robustness of scoring functions varies widely. Most scores are sensitive to the signal-to-noise ratio of subtomograms and often require Gaussian filtering as preprocessing for improved performance. Two scoring functions, Spectral SNR-based Fourier Shell Correlation and Pearson Correlation in the Fourier domain with missing-wedge correction, show a robust ranking of subtomogram clusters even without any preprocessing and irrespective of the SNR levels of subtomograms. Of these two scoring functions, Spectral SNR-based Fourier Shell Correlation was fastest to compute and is a better choice for handling large numbers of subtomograms. Our results provide guidance for choosing a scoring function for template-free approaches to detecting complexes in heterogeneous samples.
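The abstract does not give formulas for the two robust scores; as a generic sketch of the underlying quantity, a plain Fourier Shell Correlation between two cubic volumes (without the spectral-SNR weighting or missing-wedge correction the paper's variants add) could be computed as:

```python
# Sketch: plain Fourier Shell Correlation (FSC) between two cubic volumes.
# The paper's scores add spectral-SNR weighting / missing-wedge correction,
# which are not reproduced here.
import numpy as np

def fourier_shell_correlation(v1, v2, n_shells=16):
    """Correlation of two volumes per radial frequency shell (cubic input)."""
    f1, f2 = np.fft.fftn(v1), np.fft.fftn(v2)
    freq = np.fft.fftfreq(v1.shape[0])
    kx, ky, kz = np.meshgrid(freq, freq, freq, indexing="ij")
    r = np.sqrt(kx**2 + ky**2 + kz**2)
    # Bin radii up to the Nyquist frequency 0.5; clip corner frequencies
    # into the outermost shell.
    shell = np.minimum((r / 0.5 * n_shells).astype(int), n_shells - 1)
    fsc = np.zeros(n_shells)
    for s in range(n_shells):
        m = shell == s
        num = np.real(np.sum(f1[m] * np.conj(f2[m])))
        den = np.sqrt(np.sum(np.abs(f1[m]) ** 2) * np.sum(np.abs(f2[m]) ** 2))
        fsc[s] = num / den if den > 0 else 0.0  # empty shells stay at 0
    return fsc
```

By construction the FSC of a volume with itself is 1 in every populated shell; lower values at high frequency indicate noise or misalignment within a cluster.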

