automated assignment
Recently Published Documents

TOTAL DOCUMENTS: 78 (five years: 23)
H-INDEX: 19 (five years: 2)

2021 ◽  
Vol 22 (S1) ◽  
Author(s):  
Mayla R. Boguslav ◽  
Negacy D. Hailu ◽  
Michael Bada ◽  
William A. Baumgartner ◽  
Lawrence E. Hunter

Abstract Background Automated assignment of specific ontology concepts to mentions in text is a critical task in biomedical natural language processing, and the subject of many open shared tasks. Although the current state of the art involves the use of neural network language models as a post-processing step, the very large number of ontology classes to be recognized and the limited amount of gold-standard training data have impeded the creation of end-to-end systems based entirely on machine learning. Recently, Hailu et al. recast the concept recognition problem as a type of machine translation and demonstrated that sequence-to-sequence machine learning models have the potential to outperform multi-class classification approaches. Methods We systematically characterize the factors that contribute to the accuracy and efficiency of several approaches to sequence-to-sequence machine learning through extensive studies of alternative methods and hyperparameter selections. We not only identify the best-performing systems and parameters across a wide variety of ontologies but also provide insights into the widely varying resource requirements and hyperparameter robustness of alternative approaches. Analysis of the strengths and weaknesses of such systems suggests promising avenues for future improvement as well as design choices that can increase computational efficiency at a small cost in performance. Results Bidirectional encoder representations from transformers for biomedical text mining (BioBERT) for span detection, along with the open-source toolkit for neural machine translation (OpenNMT) for concept normalization, achieve state-of-the-art performance for most ontologies annotated in the CRAFT Corpus. This approach uses substantially fewer computational resources, including hardware, memory, and time, than several alternative approaches.
Conclusions Machine translation is a promising avenue for fully machine-learning-based concept recognition that achieves state-of-the-art results on the CRAFT Corpus, evaluated via a direct comparison to previous results from the 2019 CRAFT shared task. Experiments illuminating the reasons for the surprisingly good performance of sequence-to-sequence methods targeting ontology identifiers suggest that further progress may be possible by mapping to alternative target concept representations. All code and models can be found at: https://github.com/UCDenver-ccp/Concept-Recognition-as-Translation.
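The "concept recognition as translation" framing above can be illustrated by how training pairs are prepared for a seq2seq toolkit such as OpenNMT: both the mention text and the ontology identifier are treated as token sequences. This is only a sketch of the data-preparation idea, not the paper's actual pipeline; the mention/identifier pairs below are illustrative examples.

```python
# Sketch: frame concept normalization as "translation" from a textual
# mention to an ontology identifier by character-tokenizing both sides,
# so a seq2seq model can emit the identifier one symbol at a time.

def to_seq2seq_pair(mention: str, concept_id: str):
    """Return a (source, target) pair of space-separated characters."""
    src = " ".join(mention.lower())   # e.g. "n e u r o n"
    tgt = " ".join(concept_id)        # e.g. "C L : 0 0 0 0 5 4 0"
    return src, tgt

# Illustrative mention/identifier pairs (not taken from the paper's data).
pairs = [("neuron", "CL:0000540"), ("nucleus", "GO:0005634")]
corpus = [to_seq2seq_pair(m, c) for m, c in pairs]
print(corpus[0])  # → ('n e u r o n', 'C L : 0 0 0 0 5 4 0')
```

The parallel "source" and "target" files built this way are what a translation toolkit consumes; the surprising finding discussed in the conclusions is that identifiers are learnable targets at all under this framing.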


Cell Systems ◽  
2021 ◽  
Author(s):  
Michael J. Geuenich ◽  
Jinyu Hou ◽  
Sunyun Lee ◽  
Shanza Ayub ◽  
Hartland W. Jackson ◽  
...  

2021 ◽  
Author(s):  
Zach Rolfs ◽  
Lloyd M. Smith

Proteoform identification is required to fully understand the biological diversity present in a sample. However, these identifications are often ambiguous because of the challenges in analyzing full-length proteins by mass spectrometry. A five-level proteoform classification system was recently developed to delineate the ambiguity of proteoform identifications and to allow for comparisons across software platforms and acquisition methods. Widespread adoption of this system requires software tools to provide classification of the proteoform identifications. We describe here the implementation of the five-level classification system in the software program MetaMorpheus, which provides both bottom-up and top-down identifications. Additionally, we developed a stand-alone program called ProteoformClassifier that allows users to classify proteoform results from any search program, provided that the program writes output that includes the information necessary to evaluate proteoform ambiguity. This stand-alone program includes a small test file and database to evaluate whether a given program provides sufficient information to evaluate ambiguity. If the program does not, then ProteoformClassifier provides meaningful feedback to assist developers with implementing the classification system. We tested currently available top-down software programs and found that none other than MetaMorpheus provided sufficient information regarding identification ambiguity to permit classification.
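A classifier in this spirit can be sketched as follows. This is a loose simplification, not the actual five-level scheme implemented in MetaMorpheus or ProteoformClassifier: here the level is assumed, for illustration only, to be one plus the number of ambiguous information types (gene of origin, amino acid sequence, PTM identity, PTM localization), capped at five.

```python
# Hedged sketch of a five-level-style ambiguity classifier. The real
# classification system is more nuanced; the rule below (level = 1 +
# number of ambiguous information types, capped at 5) is an assumption
# made purely to illustrate the idea of graded identification ambiguity.

AMBIGUITY_TYPES = ("gene", "sequence", "ptm_identity", "ptm_localization")

def classify(identification: dict) -> int:
    """Return an illustrative ambiguity level from 1 (unambiguous) to 5."""
    ambiguous = sum(
        1 for t in AMBIGUITY_TYPES if identification.get(t) == "ambiguous"
    )
    return min(1 + ambiguous, 5)

# A hypothetical identification with two ambiguous information types:
print(classify({"gene": "ok", "sequence": "ok",
                "ptm_identity": "ambiguous",
                "ptm_localization": "ambiguous"}))  # → 3
```

The point of the sketch is the interface, not the rule: a search program's output only supports classification if it reports, per identification, which of these information types are ambiguous, which is exactly the information the abstract says most top-down tools fail to provide.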


Radiation ◽  
2021 ◽  
Vol 1 (2) ◽  
pp. 79-94
Author(s):  
Peter K. Rogan ◽  
Eliseos J. Mucaki ◽  
Ben C. Shirley ◽  
Yanxin Li ◽  
Ruth C. Wilkins ◽  
...  

The dicentric chromosome (DC) assay accurately quantifies exposure to radiation; however, manual and semi-automated assignment of DCs has limited its use for a potential large-scale radiation incident. The Automated Dicentric Chromosome Identifier and Dose Estimator (ADCI) software automates unattended DC detection and determines radiation exposures, fulfilling IAEA criteria for triage biodosimetry. This study evaluates the throughput of high-performance ADCI (ADCI-HT) to stratify exposures of populations in 15 simulated population-scale radiation exposure scenarios. ADCI-HT streamlines dose estimation on a supercomputer by optimal hierarchical scheduling of DC detection for varying numbers of samples and metaphase cell images in parallel on multiple processors. We evaluated processing times and the accuracy of estimated exposures across census-defined populations. Image processing of 1744 samples on 16,384 CPUs required 1 h 11 min 23 s, and radiation dose estimation based on DC frequencies required 32 s. Processing of 40,000 samples at 10 exposures from five laboratories required 25 h and met IAEA criteria (dose estimates were within 0.5 Gy; median = 0.07 Gy). Geostatistically interpolated radiation exposure contours of simulated nuclear incidents were defined by samples exposed to clinically relevant exposure levels (1 and 2 Gy). Analysis of all exposed individuals with ADCI-HT required 0.6–7.4 days, depending on the population density of the simulation.
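Converting an observed DC frequency into a dose conventionally uses a linear-quadratic calibration curve, Y = C + αD + βD², which is the standard cytogenetic dose-response model; inverting it for D is a quadratic solve. The sketch below illustrates that inversion with placeholder coefficients, not ADCI's calibrated values.

```python
import math

# Sketch: estimate absorbed dose D (Gy) from a dicentric frequency y
# (DCs per cell) by inverting the linear-quadratic calibration curve
#     y = c + alpha*D + beta*D**2.
# The coefficients below are placeholders for illustration, not the
# values calibrated by any particular laboratory or by ADCI.

def estimate_dose(y, c=0.001, alpha=0.03, beta=0.06):
    """Solve beta*D**2 + alpha*D + (c - y) = 0 for the positive root D."""
    disc = alpha ** 2 + 4 * beta * (y - c)
    return (-alpha + math.sqrt(disc)) / (2 * beta)

d = estimate_dose(0.25)  # dose (Gy) consistent with 0.25 DCs/cell
```

Because the curve is monotone for D ≥ 0, the positive root is unique, which is what makes unattended dose estimation from DC frequencies well defined.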


2021 ◽  
Author(s):  
Linda Baldewein ◽  
Ulrike Kleeberg ◽  
Lars Möller

<p>In Earth and environmental sciences, data derived from field samples make up a significant portion of all research data, often collected at significant cost and in a way that cannot be reproduced. If important metadata is not immediately secured and stored in the field, the quality and re-usability of the resulting data will be diminished.</p><p>At the Helmholtz Coastal Data Center (HCDC), a metadata and data workflow for biogeochemical data has been developed over the last couple of years to ensure the quality and richness of metadata and to ensure that the final data product is FAIR. It automates and standardizes the data transfer from the campaign planning stage, through sample collection in the field, analysis, and quality control, to storage in databases and publication in repositories.</p><p>Prior to any sampling campaign, the scientists are equipped with a customized app on a tablet that enables them to record relevant metadata, such as the date and time of sampling, the involved scientists, and the type of sample collected. Each sample and station already receives a unique identifier at this stage. The location is directly retrieved from a high-accuracy GNSS receiver connected to the tablet. This metadata is transmitted via mobile data transfer to the institution’s cloud storage.</p><p>After the campaign, the metadata is quality-checked by the field scientists and the data curator and stored in a relational database. Once the samples are analyzed in the lab, the data is imported into the database and connected to the corresponding metadata using a template. Data DOIs are registered for finalized datasets in close collaboration with the World Data Center PANGAEA. The datasets are discoverable through their DOIs as well as through the HCDC data portal and the API of the metadata catalogue service.</p><p>This workflow is well established within the institute, but is still being refined to become more sophisticated and FAIRer. For example, an automated assignment of International Geo Sample Numbers (IGSN) for all samples is currently being planned.</p>
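The field-capture step described above amounts to minting an identifier and binding it to time, position, and sample type at the moment of collection. A minimal sketch of such a record follows; all field names are hypothetical and stand in for whatever schema the HCDC app actually uses.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

# Sketch of a field-sample metadata record of the kind captured by a
# tablet app at sampling time. Field names are hypothetical; the point
# is that the unique ID, UTC timestamp, and GNSS position are fixed
# at collection, before any lab analysis happens.

@dataclass
class SampleRecord:
    sample_type: str
    latitude: float          # from the connected GNSS receiver
    longitude: float
    scientists: list
    sample_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    collected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

rec = SampleRecord("surface water", 53.87, 8.70, ["A. Example"])
```

Minting the identifier client-side means lab results can later be joined to the field metadata by `sample_id` alone, which is the join the relational database performs after the campaign.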


2021 ◽  
Author(s):  
Michael J. Geuenich ◽  
Jinyu Hou ◽  
Sunyun Lee ◽  
Hartland W. Jackson ◽  
Kieran R. Campbell

Abstract The creation of scalable single-cell and highly multiplexed imaging technologies that profile the protein expression and phosphorylation status of heterogeneous cellular populations has led to multiple insights into disease processes, including cancer initiation and progression. A major analytical challenge in interpreting the resulting data is the assignment of cells to a priori known cell types in a robust and interpretable manner. Existing approaches typically solve this by clustering cells followed by manual annotation of individual clusters, or by strategies that gate protein expression at predefined thresholds. However, these often require several subjective analysis choices, such as selecting the number of clusters, and do not automatically assign cell types in line with prior biological knowledge. They further lack the ability to explicitly assign cells to an unknown or uncharacterized type, cells of which exist in most highly multiplexed imaging experiments due to the limited number of markers quantified. To address these issues we present Astir, a probabilistic model that assigns cells to cell types by integrating prior knowledge of marker proteins. Astir uses deep recognition neural networks for fast Bayesian inference, allowing for cell type annotation at the million-cell scale in the absence of previously annotated reference data, across multiple experimental modalities and antibody panels. We demonstrate that Astir outperforms existing approaches in terms of accuracy and robustness by applying it to over 2.1 million single cells from several suspension and imaging mass cytometry and microscopy datasets in multiple tissue contexts. We further showcase that Astir can be used for the fast analysis of the spatial architecture of the tumour microenvironment, automatically quantifying the immune influx and spatial heterogeneity of patient samples. Astir is freely available as an open-source Python package at https://www.github.com/camlab-bioml/astir.
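The two ideas the abstract emphasizes, assignment driven by prior marker knowledge and an explicit "unknown" fallback, can be illustrated with a toy scorer. This is far simpler than Astir's Bayesian model with deep recognition networks; the marker lists and threshold below are invented for illustration.

```python
# Toy sketch of prior-knowledge-driven cell type assignment with an
# explicit "unknown" fallback, in the spirit of (but much simpler than)
# Astir's probabilistic model. Marker panels and the threshold are
# invented; a real model infers assignment probabilities instead.

MARKERS = {"T cell": ["CD3", "CD8"], "B cell": ["CD19", "CD20"]}

def assign(expression: dict, threshold: float = 0.5) -> str:
    """Score each type by mean expression of its markers; fall back
    to "unknown" when no type's score clears the threshold."""
    scores = {t: sum(expression.get(m, 0.0) for m in ms) / len(ms)
              for t, ms in MARKERS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unknown"

print(assign({"CD3": 0.9, "CD8": 0.7}))   # → T cell
print(assign({"CD3": 0.1, "CD19": 0.2}))  # → unknown
```

The fallback branch is the part most clustering-then-annotation pipelines lack: a cell that matches no panel stays "unknown" rather than being forced into the nearest cluster's label.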


2020 ◽  
Author(s):  
Mayla R Boguslav ◽  
Negacy D Hailu ◽  
Michael Bada ◽  
William A Baumgartner ◽  
Lawrence E Hunter

Abstract Background Automated assignment of specific ontology concepts to mentions in text is a critical task in biomedical natural language processing, and the subject of many open shared tasks. Although the current state of the art involves the use of neural network language models as a post-processing step, the very large number of ontology classes to be recognized and the limited amount of gold-standard training data have impeded the creation of end-to-end systems based entirely on machine learning. Recently, Hailu et al. recast the concept recognition problem as a type of machine translation and demonstrated that sequence-to-sequence machine learning models had the potential to outperform multi-class classification approaches. Here we systematically characterize the factors that contribute to the accuracy and efficiency of several approaches to sequence-to-sequence machine learning. Results We report on our extensive studies of alternative methods and hyperparameter selections. The results not only identify the best-performing systems and parameters across a wide variety of ontologies but also illuminate the widely varying resource requirements and hyperparameter robustness of alternative approaches. Analysis of the strengths and weaknesses of such systems suggests promising avenues for future improvement as well as design choices that can increase computational efficiency at a small cost in performance. Bidirectional Encoder Representations from Transformers for Biomedical Text Mining (BioBERT) for span detection (as previously found), along with the Open-source Toolkit for Neural Machine Translation (OpenNMT) for concept normalization, achieve state-of-the-art performance for most ontologies in the CRAFT Corpus.
This approach uses substantially fewer computational resources, including hardware, memory, and time, than several alternative approaches. Conclusions Machine translation is a promising avenue for fully machine-learning-based concept recognition that achieves state-of-the-art results on the CRAFT Corpus, evaluated via a direct comparison to previous results from the 2019 CRAFT Shared Task. Experiments illuminating the reasons for the surprisingly good performance of sequence-to-sequence methods targeting ontology identifiers suggest that further progress may be possible by mapping to alternative target concept representations. All code and models can be found at: https://github.com/UCDenver-ccp/Concept-Recognition-as-Translation.

