MXF Multichannel Audio Controlled Vocabulary

2020 ◽  
Vol 20 (3) ◽  
pp. 284-290
Author(s):  
Jocelyn Chan ◽  
Yue Wu ◽  
James Wood ◽  
Mohammad Muhit ◽  
Mohammed K. Mahmood ◽  
...  

Background and Objectives: Congenital Rubella Syndrome (CRS) is the leading cause of vaccine-preventable congenital anomalies. Comprehensive country-level data on the burden of CRS in low- and middle-income countries, such as Bangladesh, are scarce. This information is essential for assessing the impact of rubella vaccination programs. We aim to systematically review the literature on the epidemiology of CRS and estimate the burden of CRS in Bangladesh. Methods: We conducted a systematic review of the existing literature and transmission modelling of seroprevalence studies to estimate the pre-vaccine period burden of CRS in Bangladesh. OVID Medline (1948 – 23 November 2016) and OVID EMBASE (1974 – 23 November 2016) were searched using a combination of database-specific controlled vocabulary and free text terms. We used an age-stratified deterministic model to estimate the pre-vaccination burden of CRS in Bangladesh. Findings: Ten articles were identified, published between 2000 and 2014, including seven cross-sectional studies, two case series and one analytical case-control study. Rubella seropositivity ranged from 47.0% to 86.0% across all age groups and increased with age. Rubella seropositivity among women of childbearing age was 81.0% overall. The estimated incidence of CRS was 0.99 per 1,000 live births, which corresponds to approximately 3,292 CRS cases annually in Bangladesh. Conclusion: The estimated burden of CRS in Bangladesh during the pre-vaccination period was high. These estimates provide important baseline information for assessing the impact and cost-effectiveness of routine rubella immunisation, introduced in Bangladesh in 2012.
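As a rough illustration of how seroprevalence data can be turned into a CRS burden estimate, the sketch below fits a constant force of infection to hypothetical age-stratified seroprevalence counts (a simple catalytic model, not the age-stratified deterministic model used in the study) and converts it into CRS cases per 1,000 live births. All survey counts, fertility weights, and the CRS risk parameter are illustrative placeholders, not values from the paper.

```python
# Sketch: catalytic-model estimate of CRS incidence from rubella seroprevalence.
# All numbers below are illustrative placeholders, not the study's data.
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical seroprevalence survey: (mid-age in years, n tested, n seropositive)
survey = [(3, 200, 60), (8, 200, 120), (13, 200, 150), (20, 200, 165), (30, 200, 172)]

def neg_log_lik(lam):
    """Binomial negative log-likelihood for a constant force of infection lam (per year)."""
    ll = 0.0
    for age, n, pos in survey:
        p = 1.0 - np.exp(-lam * age)          # expected proportion seropositive by this age
        p = min(max(p, 1e-9), 1 - 1e-9)
        ll += pos * np.log(p) + (n - pos) * np.log(1.0 - p)
    return -ll

lam = minimize_scalar(neg_log_lik, bounds=(1e-4, 2.0), method="bounded").x

# Share of live births by maternal age band -- illustrative weights.
fertility = {17.5: 0.15, 22.5: 0.35, 27.5: 0.30, 32.5: 0.15, 37.5: 0.05}
RISK_WINDOW = 16.0 / 52.0      # first 16 weeks of pregnancy, in years
P_CRS_GIVEN_INFECTION = 0.65   # assumed risk of CRS if the mother is infected in that window

crs_per_birth = 0.0
for age, weight in fertility.items():
    p_susceptible = np.exp(-lam * age)              # mother still susceptible at this age
    p_infected = 1.0 - np.exp(-lam * RISK_WINDOW)   # infected during the risk window
    crs_per_birth += weight * p_susceptible * p_infected * P_CRS_GIVEN_INFECTION

print(f"Estimated force of infection: {lam:.3f} per year")
print(f"Estimated CRS incidence: {1000 * crs_per_birth:.2f} per 1,000 live births")
```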


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Paulien Adamse ◽  
Emilie Dagand ◽  
Karen Bohmert-Tatarev ◽  
Daniela Wahler ◽  
Manoela Miranda ◽  
...  

Abstract Background Various databases on genetically modified organisms (GMOs) exist, each with its specific focus, to facilitate access to information needed for, e.g., assistance in risk assessment, the development of detection and identification strategies, or inspection and control activities. Each database has its own approach towards the subject, and these databases often use different terminology to describe the GMOs. For adequate addressing and identification of GMOs and exchange of GMO-related information, it is necessary to use commonly agreed-upon concepts and terminology. Result A hierarchically structured controlled vocabulary describing the genetic elements inserted into conventional GMOs, and into GMOs developed by the use of gen(om)e-editing, is presented: the GMO genetic element thesaurus (GMO-GET). GMO-GET can be used for GMO-related documentation, including GMO-related databases. It was initially developed on the basis of two GMO databases, the Biosafety Clearing-House and the EUginius database. Conclusion The use of GMO-GET will enable consistent and compatible information (harmonisation), allowing an accurate exchange of information between the different data systems and thereby facilitating their interoperability. GMO-GET can also be used to describe genetic elements that are altered in organisms obtained through current targeted genome-editing techniques.
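To make the idea of a hierarchically structured controlled vocabulary concrete, the sketch below models a tiny thesaurus of genetic-element terms with synonyms and broader/narrower relations, so that labels used by different databases resolve to one shared concept. The terms and synonyms are illustrative examples, not entries taken from GMO-GET itself.

```python
# Sketch: a minimal hierarchical controlled vocabulary with synonym resolution.
from dataclasses import dataclass, field

@dataclass
class Term:
    preferred_label: str
    synonyms: set = field(default_factory=set)    # alternative labels used by other databases
    narrower: list = field(default_factory=list)  # child terms in the hierarchy

    def add_narrower(self, term: "Term") -> "Term":
        self.narrower.append(term)
        return term

def find(term: Term, label: str) -> Term | None:
    """Resolve any preferred label or synonym to its term in the hierarchy."""
    if label == term.preferred_label or label in term.synonyms:
        return term
    for child in term.narrower:
        hit = find(child, label)
        if hit:
            return hit
    return None

root = Term("genetic element")
promoter = root.add_narrower(Term("promoter", synonyms={"promoter region"}))
promoter.add_narrower(Term("CaMV 35S promoter", synonyms={"P-35S", "35S promoter"}))

# Two databases describing the same element with different labels map to one concept.
assert find(root, "P-35S") is find(root, "35S promoter")
print(find(root, "P-35S").preferred_label)  # -> CaMV 35S promoter
```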


Author(s):  
Adrienne M Stilp ◽  
Leslie S Emery ◽  
Jai G Broome ◽  
Erin J Buth ◽  
Alyna T Khan ◽  
...  

Abstract Genotype-phenotype association studies often combine phenotype data from multiple studies to increase power. Harmonization of the data usually requires substantial effort due to heterogeneity in phenotype definitions, study design, data collection procedures, and data set organization. Here we describe a centralized system for phenotype harmonization that includes input from phenotype domain and study experts, quality control, documentation, reproducible results, and data sharing mechanisms. This system was developed for the National Heart, Lung, and Blood Institute’s Trans-Omics for Precision Medicine program, which is generating genomic and other omics data for >80 studies with extensive phenotype data. To date, 63 phenotypes have been harmonized across thousands of participants from up to 17 studies per phenotype (participants recruited 1948-2012). We discuss challenges in this undertaking and how they were addressed. The harmonized phenotype data and associated documentation have been submitted to National Institutes of Health data repositories for controlled access by the scientific community. We also provide materials to facilitate future harmonization efforts by the community, which include (1) the code used to generate the 63 harmonized phenotypes, enabling others to reproduce, modify, or extend these harmonizations to additional studies; and (2) results of labeling thousands of phenotype variables with controlled vocabulary terms.
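The sketch below illustrates the general shape of such a harmonization step for a single phenotype: study-specific variable names and units are mapped into one harmonized variable, with provenance and a basic quality-control flag recorded. The variable names, conversion rules, and plausibility bounds are hypothetical; the actual TOPMed harmonization code is far more extensive.

```python
# Sketch: harmonizing one phenotype (height) across studies with different variable
# names and units. Names, conversions, and QC bounds are hypothetical examples.
import pandas as pd

# Per-study mapping: source column and a function converting to harmonized units (cm).
STUDY_MAPPINGS = {
    "study_A": {"column": "ht_cm",     "to_cm": lambda x: x},
    "study_B": {"column": "height_in", "to_cm": lambda x: x * 2.54},
}

def harmonize_height(study: str, df: pd.DataFrame) -> pd.DataFrame:
    """Return subject_id plus height_cm, recording harmonization provenance."""
    m = STUDY_MAPPINGS[study]
    out = pd.DataFrame({
        "subject_id": df["subject_id"],
        "height_cm": df[m["column"]].map(m["to_cm"]),
        "source_study": study,
        "source_variable": m["column"],
    })
    # Basic quality control: flag implausible values rather than silently dropping them.
    out["qc_flag"] = ~out["height_cm"].between(100, 250)
    return out

frames = [
    harmonize_height("study_A", pd.DataFrame({"subject_id": [1, 2], "ht_cm": [172.0, 158.5]})),
    harmonize_height("study_B", pd.DataFrame({"subject_id": [3], "height_in": [70.0]})),
]
harmonized = pd.concat(frames, ignore_index=True)
print(harmonized)
```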


2011 ◽  
Vol 8 (2) ◽  
pp. 85-94
Author(s):  
Hendrik Mehlhorn ◽  
Falk Schreiber

Summary DBE2 is an information system for managing biological experiment data from different data domains in a unified and simple way. It provides persistent data storage, worldwide accessibility of the data, and the ability to load, save, modify, and annotate the data. It is seamlessly integrated into the VANTED system as an add-on, thereby extending the VANTED platform towards data management. DBE2 also utilizes controlled vocabulary from the Ontology Lookup Service to manage terms such as substance names, species names, and measurement units, aiming to ease data integration.
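As an illustration of the kind of term lookup described here, the sketch below queries the EMBL-EBI Ontology Lookup Service for a species name and returns controlled-vocabulary labels and identifiers. The endpoint and response fields reflect the current public OLS search API, which may differ from the service version DBE2 integrated in 2011; treat the field names as assumptions to verify against the OLS documentation.

```python
# Sketch: resolving a free-text term to controlled-vocabulary entries via the
# public Ontology Lookup Service search API (endpoint and fields may change).
import requests

def lookup_term(query: str, ontology: str | None = None) -> list[dict]:
    """Search OLS for a term and return (label, id) pairs for the top hits."""
    params = {"q": query, "rows": 5}
    if ontology:
        params["ontology"] = ontology
    resp = requests.get("https://www.ebi.ac.uk/ols4/api/search", params=params, timeout=10)
    resp.raise_for_status()
    docs = resp.json().get("response", {}).get("docs", [])
    return [{"label": d.get("label"), "id": d.get("obo_id")} for d in docs]

# Example: resolve a species name to controlled-vocabulary identifiers.
for hit in lookup_term("Arabidopsis thaliana", ontology="ncbitaxon"):
    print(hit)
```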


1984 ◽  
Vol 8 (2) ◽  
pp. 63-66 ◽  
Author(s):  
C.P.R. Dubois

The controlled vocabulary versus free text approach to information retrieval is reviewed from the mid 1960s to the early 1980s. The dominance of the free text approach following the Cranfield tests is increasingly coming into question as a result of tests on existing online databases and case studies; this is supported by two case studies on the Coffeeline database. The differences and values of the two approaches are explored, considering thesauri as semantic maps. It is suggested that the most appropriate evaluative technique for indexing languages is to study the actual use made of various techniques in a wide variety of search environments; such research is becoming more urgent. Economic and other reasons for the scarcity of online thesauri are reviewed, and suggestions are made for methods to secure revenue from thesaurus display facilities. Finally, the promising outlook for renewed development of controlled vocabularies with more effective online display techniques is mentioned, although such development must be based on firm research of user behaviour and needs.


Author(s):  
Rahul Renu ◽  
Matthew Peterson ◽  
Gregory Mocko ◽  
Joshua Summers

Assembly process sheets are formal documents used extensively within automotive original equipment manufacturers (OEMs) to document and communicate assembly procedures, required tooling, contingency plans, and time study results. These sheets are authored throughout the vehicle life-cycle, and various customers use them for training, process analysis, and line balancing. The primary focus of this research is the time-study analysis performed using knowledge contained within the assembly process sheets. A method and software tool are developed to exploit the coupling between part descriptions and process descriptions for assembly time studies. The method is realized through the development of a standardized vocabulary for describing work instructions, a mapping from work instructions to MTM codes, and a tool for extracting relevant part information from CAD models. The approach enables process planners to establish part-process coupling and author work instructions using the controlled vocabulary in order to estimate assembly time. A prototype system is developed and tested using examples from an automotive OEM.
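The sketch below shows the basic idea of such a mapping: each controlled-vocabulary verb in a work instruction is linked to an MTM-style code and a time value in TMU (1 TMU = 0.036 s), and instruction times are summed into an assembly time estimate. The verbs, codes, and time values are illustrative placeholders, not the mapping developed in the paper.

```python
# Sketch: estimating assembly time from work instructions written with a controlled
# vocabulary of verbs, each mapped to an MTM-style code and time value (illustrative).
CONTROLLED_VOCABULARY = {
    "get":     {"mtm_code": "G1A",  "tmu": 2.0},
    "place":   {"mtm_code": "P1SE", "tmu": 5.6},
    "fasten":  {"mtm_code": "FAST", "tmu": 9.4},
    "inspect": {"mtm_code": "EF",   "tmu": 7.3},
}
TMU_TO_SECONDS = 0.036  # 1 TMU = 0.00001 hour = 0.036 seconds

def estimate_time(work_instructions: list[str]) -> float:
    """Sum MTM time values for the controlled-vocabulary verb that starts each instruction."""
    total_tmu = 0.0
    for instruction in work_instructions:
        verb = instruction.split()[0].lower()
        if verb not in CONTROLLED_VOCABULARY:
            raise ValueError(f"'{verb}' is not in the controlled vocabulary")
        total_tmu += CONTROLLED_VOCABULARY[verb]["tmu"]
    return total_tmu * TMU_TO_SECONDS

instructions = ["Get bracket from bin", "Place bracket on panel", "Fasten bracket with bolt"]
print(f"Estimated assembly time: {estimate_time(instructions):.2f} s")
```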

