Ontology-Based Data Access: A Survey

Author(s):  
Guohui Xiao ◽  
Diego Calvanese ◽  
Roman Kontchakov ◽  
Domenico Lembo ◽  
Antonella Poggi ◽  
...  

We present the framework of ontology-based data access, a semantic paradigm for providing convenient and user-friendly access to data repositories, which has been actively developed and studied in the past decade. Focusing on relational data sources, we discuss the main ingredients of ontology-based data access, key theoretical results, techniques, applications, and future challenges.

Author(s):  
Anna Bernasconi

Abstract A wealth of public data repositories is available to drive genomics and clinical research. However, there is no agreement among the various data formats and models; in common practice, data sources are accessed one by one, and their specific descriptions must be learned with tedious effort. In this context, the integration of genomic data and of their describing metadata is, at the same time, an important, difficult, and well-recognized challenge. In this chapter, after overviewing the most important human genomic data players, we propose a conceptual model of metadata and an extended architecture for integrating datasets, retrieved from a variety of data sources, based upon a structured transformation process; we then describe a user-friendly search system providing access to the resulting consolidated repository, enriched by a multi-ontology knowledge base. Inspired by our work on genomic data integration, during the COVID-19 pandemic outbreak we successfully re-applied the previously proposed model-build-search paradigm, building on the analogies between the human and viral genomics domains. The availability of conceptual models, related databases, and search systems for both humans and viruses will provide important opportunities for research, especially if virus data are connected to their hosts, which provide genomic and phenotype information.


2020 ◽  
Vol 3 (2) ◽  
pp. 67
Author(s):  
Jumah Y.J Sleeman ◽  
Jehad Abdulhamid Hammad

Ontology-Based Data Access (OBDA) is a recently proposed approach that provides a conceptual view of relational data sources. It addresses the problem of direct access to big data by giving end-users an ontology that mediates between them and the sources, where the ontology is connected to the data via mappings. We introduce the languages used to represent ontologies and the mapping assertions through which query answering over the sources is derived. Query answering is divided into two steps: (i) ontology rewriting, in which the query is rewritten with respect to the ontology into a new query; (ii) mapping rewriting, in which the query obtained in the previous step is reformulated over the data sources using the mapping assertions. In this survey, we aim to study earlier work by other researchers in the fields of ontologies, mappings, and query answering over data sources.
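The two-step query answering described in this abstract can be sketched on a toy example. This is a minimal illustration, not any system surveyed here; the class names, table names, and mapping SQL are all hypothetical.

```python
# Toy sketch of OBDA query answering: (i) ontology rewriting, (ii) mapping rewriting.
# All class names, table names, and mappings below are hypothetical.

# Ontology: subclass axioms, e.g. AssocProfessor is a subclass of Professor.
subclass_of = {"AssocProfessor": "Professor", "FullProfessor": "Professor"}

# Mapping assertions: each ontology class is populated by an SQL query over the sources.
mappings = {
    "Professor": "SELECT id FROM staff WHERE role = 'prof'",
    "AssocProfessor": "SELECT id FROM staff WHERE role = 'assoc'",
    "FullProfessor": "SELECT id FROM staff WHERE role = 'full'",
}

def ontology_rewrite(query_class):
    """Step (i): rewrite the query w.r.t. the ontology into a union covering all subclasses."""
    rewritten = {query_class}
    for sub, sup in subclass_of.items():
        if sup == query_class:
            rewritten.add(sub)
    return rewritten

def mapping_rewrite(classes):
    """Step (ii): reformulate the rewritten query over the sources via the mappings."""
    return " UNION ".join(mappings[c] for c in sorted(classes))

sql = mapping_rewrite(ontology_rewrite("Professor"))
print(sql)  # a UNION of the three source-level SELECTs
```

Real OBDA systems perform both steps over expressive ontology and mapping languages (e.g., DL-Lite ontologies and R2RML mappings) and optimize the resulting SQL, but the pipeline shape is the same.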


Author(s):  
Sandra González-Bailón ◽  
Brooke Foucault Welles

This chapter offers an overview of the theoretical ideas that helped consolidate the field of communication and relates those ideas to the current media environment, where digital technologies and online networks mediate most exposure to information. The chapter offers a summary of research achievements and future challenges through the lens of the work discussed in the six parts that form this Handbook. Special attention is given to the question of how new data sources and recent methodological developments can help communication evolve as a field and adapt its research agenda to the demands of digital data, both in terms of analyzing that data with the right theoretical motivations and protecting the privacy of users to cement trust and a sustainable research agenda.


2019 ◽  
Author(s):  
Adib Rifqi Setiawan

STEAM is an acronym for Science, Technology, Engineering, Art, and Mathematics. STEAM is defined as the integration of science, technology, engineering, art, and mathematics into a new cross-disciplinary subject in schools. The concept of integrating subjects in Indonesian schools is generally not new and has not been very successful in the past. Some people consider STEAM an opportunity, while others view it as having problems. Fenny Roshayanti is a science educator and researcher who considers STEAM an opportunity. She has been involved in the study of STEAM as an author, educator, academic advisor, and seminar speaker. This article examines her past and continuing work in science education. Our exploration uses the qualitative method of a narrative approach in the form of a biographical study. Participants as data sources were selected using a purposive sampling technique, and data were collected through retrospective interviews and naturalistic observation. The data's validity, reliability, and objectivity were checked using external audit techniques. This work explores the power of a female's personal style in developing a form of social influence based on her forms of capital, and addresses the positive and negative consequences that may follow while implementing and researching STEAM in the classroom.


GigaScience ◽  
2021 ◽  
Vol 10 (2) ◽  
Author(s):  
Guilhem Sempéré ◽  
Adrien Pétel ◽  
Magsen Abbé ◽  
Pierre Lefeuvre ◽  
Philippe Roumagnac ◽  
...  

Abstract Background Efficiently managing large, heterogeneous data in a structured yet flexible way is a challenge to research laboratories working with genomic data. Specifically regarding both shotgun- and metabarcoding-based metagenomics, while online reference databases and user-friendly tools exist for running various types of analyses (e.g., Qiime, Mothur, Megan, IMG/VR, Anvi'o, Qiita, MetaVir), scientists lack comprehensive software for easily building scalable, searchable, online data repositories on which they can rely during their ongoing research. Results metaXplor is a scalable, distributable, fully web-interfaced application for managing, sharing, and exploring metagenomic data. Being based on a flexible NoSQL data model, it has few constraints regarding dataset contents and thus proves useful for handling outputs from both shotgun and metabarcoding techniques. By supporting incremental data feeding and providing means to combine filters on all imported fields, it allows for exhaustive content browsing, as well as rapid narrowing to find specific records. The application also features various interactive data visualization tools, ways to query contents by BLASTing external sequences, and an integrated pipeline to enrich assignments with phylogenetic placements. The project home page provides the URL of a live instance allowing users to test the system on public data. Conclusion metaXplor allows efficient management and exploration of metagenomic data. Its availability as a set of Docker containers, making it easy to deploy on academic servers, on the cloud, or even on personal computers, will facilitate its adoption.
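The "combine filters on all imported fields" behaviour described in this abstract can be sketched with an in-memory document store. metaXplor itself runs on a NoSQL backend; the field names and records below are hypothetical, not taken from the tool.

```python
# Sketch of incremental filter combination over schema-flexible records,
# in the spirit of the browsing/narrowing described above.
# All field names and sample records are hypothetical.

records = [
    {"sample": "S1", "method": "shotgun", "taxon": "Begomovirus", "year": 2019},
    {"sample": "S2", "method": "metabarcoding", "taxon": "Begomovirus", "year": 2020},
    {"sample": "S3", "method": "shotgun", "taxon": "Mastrevirus", "year": 2020},
]

def combine_filters(docs, **criteria):
    """Narrow the collection by ANDing one equality filter per field.

    Records missing a field simply fail that filter, so shotgun and
    metabarcoding outputs with different schemas can coexist.
    """
    return [d for d in docs if all(d.get(k) == v for k, v in criteria.items())]

hits = combine_filters(records, method="shotgun", year=2020)
print([d["sample"] for d in hits])  # narrows three records down to one
```

In a production NoSQL store the same AND-of-field-filters pattern is expressed as a query document and served by indexes rather than a linear scan.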


Author(s):  
Marco Angrisani ◽  
Anya Samek ◽  
Arie Kapteyn

The number of data sources available for academic research on retirement economics and policy has increased rapidly in the past two decades. Data quality and comparability across studies have also improved considerably, with survey questionnaires progressively converging towards common ways of eliciting the same measurable concepts. Probability-based Internet panels have become a more accepted and recognized tool to obtain research data, allowing for fast, flexible, and cost-effective data collection compared to more traditional modes such as in-person and phone interviews. In an era of big data, academic research has also increasingly been able to access administrative records (e.g., Kostøl and Mogstad, 2014; Cesarini et al., 2016), private-sector financial records (e.g., Gelman et al., 2014), and administrative data married with surveys (Ameriks et al., 2020), to answer questions that could not be successfully tackled otherwise.


2020 ◽  
Vol 9 (4) ◽  
pp. e000843
Author(s):  
Kelly Bos ◽  
Maarten J van der Laan ◽  
Dave A Dongelmans

Purpose The purpose of this systematic review was to identify an appropriate method (a user-friendly and validated method) that prioritises recommendations following analyses of adverse events (AEs) based on objective features. Data sources The electronic databases PubMed/MEDLINE, Embase (Ovid), Cochrane Library, PsycINFO (Ovid) and ERIC (Ovid) were searched. Study selection Studies were considered eligible when reporting on methods to prioritise recommendations. Data extraction Two teams of reviewers performed the data extraction, which was defined prior to this phase. Results of data synthesis Eleven methods were identified that are designed to prioritise recommendations. After completing the data extraction, none of the methods met all the predefined criteria. Nine methods were considered user-friendly. One study validated the developed method. Five methods prioritised recommendations based on objective features, not affected by personal opinion or knowledge and expected to be reproducible by different users. Conclusion There are several methods available to prioritise recommendations following analyses of AEs. All these methods can be used to discuss and select recommendations for implementation. None of the methods is a user-friendly and validated method that prioritises recommendations based on objective features. Although there are possibilities to further improve their features, the ‘Typology of safety functions’ by de Dianous and Fiévez, and the ‘Hierarchy of hazard controls’ by McCaughan have the most potential to select high-quality recommendations, as they have only a few clearly defined categories in a well-arranged ordinal sequence.


2006 ◽  
Vol 2 (SPS5) ◽  
pp. 221-228 ◽  
Author(s):  
Michèle Gerbaldi

Abstract This paper outlines the main features of the International Schools for Young Astronomers (ISYA), a programme developed by the International Astronomical Union (IAU) in 1967. The main goal of this programme is to support astronomy in developing countries by organizing a 3-week school for students typically holding an M.Sc. degree. The context in which the ISYA were developed has changed drastically over the past 10 years. We have moved from a time when access to any large telescope was difficult and mainly organized on a national basis to the situation nowadays where data archives are established at the same time that any major telescope, ground-based or in space, is built, and these archives are accessible from everywhere. The concept of the virtual observatory reinforces this access. However, the rapid development of information and communications technologies and the increasing penetration of the internet have not yet removed all barriers to data access. The role of the ISYA is addressed in this context.


2019 ◽  
Vol 188 (12) ◽  
pp. 2069-2077
Author(s):  
Priya Duggal ◽  
Christine Ladd-Acosta ◽  
Debashree Ray ◽  
Terri H Beaty

Abstract The field of genetic epidemiology is relatively young and brings together genetics, epidemiology, and biostatistics to identify and implement the best study designs and statistical analyses for identifying genes controlling risk for complex and heterogeneous diseases (i.e., those where genes and environmental risk factors both contribute to etiology). The field has moved quickly over the past 40 years partly because the technology of genotyping and sequencing has forced it to adapt while adhering to the fundamental principles of genetics. In the last two decades, the available tools for genetic epidemiology have expanded from a genetic focus (considering 1 gene at a time) to a genomic focus (considering the entire genome), and now they must further expand to integrate information from other “-omics” (e.g., epigenomics, transcriptomics as measured by RNA expression) at both the individual and the population levels. Additionally, we can now also evaluate gene and environment interactions across populations to better understand exposure and the heterogeneity in disease risk. The future challenges facing genetic epidemiology are considerable both in scale and techniques, but the importance of the field will not diminish because by design it ties scientific goals with public health applications.


Optics ◽  
2020 ◽  
Vol 2 (1) ◽  
pp. 25-42
Author(s):  
Ioseph Gurwich ◽  
Yakov Greenberg ◽  
Kobi Harush ◽  
Yarden Tzabari

The present study is aimed at designing anti-reflective (AR) engraving on the input–output surfaces of a rectangular light-guide. We estimate AR efficiency by the transmittance level in the angular range determined by the light-guide. Using nano-engraving, we achieve uniformly high transmission over a wide range of wavelengths. In the past, we used smoothed conical pins or indentations on the faces of the light-guide crystal as the engraved structure. Here, we widen the class of pins under consideration, following the physical model developed in our previous paper. We analyze smoothed pyramidal pins with different base shapes. The possible effect of randomizing the pin parameters is also examined. The results obtained demonstrate an optimized engraved structure whose parameters depend on the required spectral range and facet format. The predicted level of transmittance is close to 99%, and its flatness (estimated by the standard deviation) over the required wavelength range is 0.2%. The theoretical analysis and numerical calculations indicate that the obtained results represent the best transmission (reflection) that can be expected for a facet of the given shape and size in the required spectral band. The approach is equally applicable to facets of any other form. We also discuss a simple way of comparing experimental and theoretical results for a light-guide with the designed input and output features. In this study, as in our previous work, we restrict ourselves to rectangular facets. We also consider the limitations on maximal transmission imposed by the size and shape of the light-guide facets. The theoretical analysis is performed for an infinite structure and serves as an upper bound on the transmittance for smaller-size apertures.
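The flatness metric quoted in this abstract (standard deviation of transmittance over the band) is straightforward to compute. The transmittance samples below are made-up illustrative values, not the paper's data.

```python
# Sketch of the flatness metric used above: mean transmittance and its
# standard deviation across sampled wavelengths. Values are hypothetical.
import statistics

# Transmittance sampled at a few wavelengths across the required band.
transmittance = [0.990, 0.992, 0.989, 0.991, 0.988]

mean_t = statistics.mean(transmittance)       # average transmittance level
flatness = statistics.pstdev(transmittance)   # standard deviation as the flatness measure

print(f"mean = {mean_t:.3f}, flatness = {flatness:.4f}")
```

A smaller standard deviation means a flatter spectral response; the paper reports roughly 99% mean transmittance with 0.2% flatness on this measure.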

