CODCA – COVID-19 ONTOLOGY FOR DATA COLLECTION AND ANALYSIS IN E-HEALTH

Author(s):  
A. Ismail ◽  
M. Sah

Abstract. The coronavirus (Covid-19) pandemic is one of the deadliest disease outbreaks in history, causing millions of deaths around the world. Automatic collection and analysis of Covid-19 patient data can help medical practitioners contain the virus. For this purpose, Semantic Web technologies can be utilized: they make data machine-processable and enable data sharing and reuse across systems. In this paper, we propose a Covid-19 ontology (named CODCA) that helps in collecting, analysing, and sharing medical information about people in the e-health domain. In particular, the proposed ontology uses information about medical history, drug history, vaccination history, and symptoms to analyse people's Covid-19 risk factors and their treatment plans. In this way, information about Covid-19 patients can be processed automatically and reused by other applications. We also demonstrate extensive semantic queries (i.e. SPARQL queries) to search the created metadata. Furthermore, we illustrate the use of semantic rules (i.e. SWRL) so that treatment plans for individual patients can be inferred from the available knowledge.
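As an illustration of the kind of semantic query the abstract mentions, below is a minimal sketch in Python with rdflib. The codca: namespace, the file name, and the class and property names are invented placeholders for illustration, not CODCA's actual vocabulary.

```python
# Illustrative only: the codca: namespace and property names are
# hypothetical stand-ins for the actual CODCA vocabulary.
from rdflib import Graph

g = Graph()
g.parse("codca_patients.owl")  # hypothetical instance-data file

# Find patients with a diabetes history and a fever symptom,
# two plausible Covid-19 risk indicators in such an ontology.
query = """
PREFIX codca: <http://example.org/codca#>
SELECT ?patient
WHERE {
    ?patient a codca:Patient ;
             codca:hasMedicalHistory codca:Diabetes ;
             codca:hasSymptom codca:Fever .
}
"""
for row in g.query(query):
    print(row.patient)
```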

Web Services ◽  
2019 ◽  
pp. 1068-1076
Author(s):  
Vudattu Kiran Kumar

The World Wide Web (WWW) is a global information medium through which users can read and write using computers connected to the internet; the Web is one of the services available on the internet. The Web was created in 1989 by Sir Tim Berners-Lee, and since then its usage and applications have been greatly refined. Semantic Web technologies enable machines to interpret data published on the web in a machine-interpretable form. The Semantic Web is not a separate web; it is an extension of the current web with additional semantics. Semantic technologies play a crucial role in making data understandable to machines. To achieve this, we should add semantics to existing websites. With these additional semantics, we can reach the next level of the web, where knowledge repositories are available for a better understanding of web data. This facilitates better search, more accurate filtering, and intelligent retrieval of data. This paper discusses the Semantic Web and the languages involved in describing documents in a machine-understandable format.
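As a concrete illustration of "adding semantics" to web content, the sketch below builds a few RDF triples with Python's rdflib and serializes them in Turtle, one of the description languages the paper discusses; the example.org identifiers are placeholders.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import FOAF

# A tiny RDF graph: machine-interpretable statements about a page author.
EX = Namespace("http://example.org/")
g = Graph()
g.bind("foaf", FOAF)

author = URIRef("http://example.org/people/alice")
g.add((author, RDF.type, FOAF.Person))
g.add((author, FOAF.name, Literal("Alice")))
g.add((author, FOAF.homepage, URIRef("http://example.org/~alice")))

# Serialize in Turtle so other machines can consume the statements.
print(g.serialize(format="turtle"))
```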


Author(s):  
Daniel Fernández-Álvarez ◽  
José Emilio Labra Gayo ◽  
Daniel Gayo-Avello ◽  
Patricia Ordoñez de Pablos

The proliferation of large databases with potentially duplicated entities across the World Wide Web has driven a generalized interest in methods to detect duplicated entries. Because the data are heterogeneous, generalist approaches may perform poorly in scenarios with distinguishing features. In this paper, we analyse the particularities of music-related databases and describe the Musical Entities Reconciliation Architecture (MERA). MERA is an architecture for matching entries between two sources, allowing the use of extra support sources to improve the results. It makes use of Semantic Web technologies and is able to adapt the matching process to the nature of each field in each database. We have implemented a prototype of MERA and compared it with a well-known music-specialized search engine. Our prototype outperforms the selected baseline in terms of accuracy.
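MERA's internals are not reproduced here, but a minimal sketch of field-aware matching between two music databases might look like the following; the field weights and the choice of string similarity are assumptions for illustration.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(entry_a: dict, entry_b: dict, weights: dict) -> float:
    """Weighted per-field similarity, so each field can be matched
    according to its nature (artist vs. title vs. year)."""
    total = sum(weights.values())
    return sum(
        weights[f] * similarity(str(entry_a[f]), str(entry_b[f]))
        for f in weights
    ) / total

a = {"title": "Hey Jude", "artist": "The Beatles", "year": "1968"}
b = {"title": "Hey Jude (Remastered)", "artist": "Beatles, The", "year": "1968"}

# Title weighted most heavily; thresholds would be tuned per database.
print(match_score(a, b, {"title": 0.5, "artist": 0.3, "year": 0.2}))
```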


Author(s):  
Clifford Brown

The reproducibility issue, even if not a crisis, is still a major problem in the world of science and engineering. Within metrology, when making measurements at the limits that science allows, factors not originally considered relevant can inevitably turn out to be very relevant. Who did the measurement? How exactly did they do it? Was a mistake made? Was the equipment working correctly? All these factors can influence the outputs of a measurement process. In this work we investigate the use of Semantic Web technologies as a strategic basis on which to capture provenance metadata and the data curation processes that will lead to a better understanding of the issues affecting reproducibility.
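A minimal sketch of capturing measurement provenance with rdflib and the W3C PROV-O vocabulary follows; the instrument, operator, and timestamp identifiers are invented for illustration, not taken from the paper.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import PROV, XSD

EX = Namespace("http://example.org/lab/")
g = Graph()
g.bind("prov", PROV)

# Who did the measurement, with what equipment, and when.
measurement = EX.measurement42
operator = EX.alice
instrument = EX.voltmeter7

g.add((measurement, RDF.type, PROV.Activity))
g.add((measurement, PROV.wasAssociatedWith, operator))
g.add((measurement, PROV.used, instrument))
g.add((measurement, PROV.startedAtTime,
       Literal("2023-05-01T09:30:00", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```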


Author(s):  
Hayden Wimmer ◽  
Victoria Yoon ◽  
Roy Rada

Ontologies are the backbone of intelligent computing on the World Wide Web and are also crucial in many decision support situations. Many sophisticated tools have been developed to support working with ontologies, prominently including tools for exploiting the vast array of existing ontologies. A system called ALIGN is developed that demonstrates how to use freely available tools to facilitate ontology alignment. First, two ontologies are built with the ontology editor Protégé and represented in OWL. ALIGN then accesses these ontologies via Java's Jena framework and SPARQL queries. The efficacy of the ALIGN prototype is demonstrated on a drug-drug interaction problem. The prototype could readily be applied to other domains or be incorporated into decision support tools.
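The ALIGN prototype itself uses Java and Jena; below is only a rough Python analogue with rdflib, where the file names, and the label-based matching step, are assumptions about what a first lexical alignment pass might look like.

```python
from rdflib import Graph
from rdflib.namespace import RDFS

# Load two drug ontologies exported from Protégé as OWL.
g1, g2 = Graph(), Graph()
g1.parse("drugs_a.owl")   # hypothetical file names
g2.parse("drugs_b.owl")

# Naive alignment: pair resources whose rdfs:label strings match,
# the kind of lexical step a tool like ALIGN might start from.
labels1 = {str(o).lower(): s for s, o in g1.subject_objects(RDFS.label)}
labels2 = {str(o).lower(): s for s, o in g2.subject_objects(RDFS.label)}

for label in labels1.keys() & labels2.keys():
    print(f"{labels1[label]} <-> {labels2[label]}  (label: {label})")
```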


2011 ◽  
Vol 17 (2) ◽  
pp. 95-115 ◽  
Author(s):  
Miguel A. Mayer ◽  
Pythagoras Karampiperis ◽  
Antonis Kukurikos ◽  
Vangelis Karkaletsis ◽  
Kostas Stamatakis ◽  
...  

The number of health-related websites is increasing day by day; however, their quality is variable and difficult to assess. Various “trust marks” and filtering portals have been created to assist consumers in retrieving quality medical information. Consumers use search engines as their main tool for finding health information; however, the major problem is that the meaning of web content is not machine-readable, in the sense that computers cannot understand words and sentences as humans can. In addition, trust marks are invisible to search engines, limiting their usefulness in practice. During the last five years there have been various attempts to use Semantic Web tools to label health-related web resources to help internet users identify trustworthy resources. This paper discusses how Semantic Web technologies can be applied in practice to generate machine-readable labels and display their content, as well as to empower end-users by providing them with the infrastructure for expressing and sharing their opinions on the quality of health-related web resources.
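A minimal sketch of what a machine-readable quality label for a health website could look like in RDF, using rdflib; the ql: labeling vocabulary below is invented for illustration (real systems build on standards such as W3C POWDER), and all URIs are placeholders.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS

# Hypothetical labeling vocabulary; the terms below are placeholders.
QL = Namespace("http://example.org/quality-label#")
g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("ql", QL)

site = URIRef("http://example.org/health-site/")
label = URIRef("http://example.org/labels/1")

g.add((label, QL.describes, site))
g.add((label, QL.issuedBy, URIRef("http://example.org/accreditor")))
g.add((label, DCTERMS.date, Literal("2011-03-01")))
g.add((label, QL.disclosesAdvertisingPolicy, Literal(True)))

print(g.serialize(format="turtle"))
```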


1966 ◽  
Vol 05 (03) ◽  
pp. 142-146
Author(s):  
A. Kent ◽  
P. J. Vinken

A joint center has been established by the University of Pittsburgh and the Excerpta Medica Foundation. The basic objective of the Center is to seek ways in which the health sciences community may achieve increasingly convenient and economical access to scientific findings. The research center will make use of facilities and resources of both participating institutions. Cooperating from the University of Pittsburgh will be the School of Medicine, the Computation and Data Processing Center, and the Knowledge Availability Systems (KAS) Center. The KAS Center is an interdisciplinary organization engaging in research, operations, and teaching in the information sciences.

Excerpta Medica Foundation, which is the largest international medical abstracting service in the world, with offices in Amsterdam, New York, London, Milan, Tokyo and Buenos Aires, will draw on its permanent medical staff of 54 specialists in charge of the 35 abstracting journals and other reference works prepared and published by the Foundation, the 700 eminent clinicians and researchers represented on its International Editorial Boards, and the 6,000 physicians who participate in its abstracting programs throughout the world. Excerpta Medica will also make available to the Center its long experience in the field, as well as its extensive resources of medical information accumulated during the Foundation’s twenty years of existence. These consist of over 1,300,000 English-language abstracts of the world’s biomedical literature, indexes to its abstracting journals, and the microfilm library in which complete original texts of all the 3,000 primary biomedical journals monitored by Excerpta Medica in Amsterdam have been stored since 1960.

The objectives of the program of the combined Center include: (1) establishing a firm base of user relevance data; (2) developing improved vocabulary control mechanisms; (3) developing means of determining confidence limits of vocabulary control mechanisms in terms of user relevance data; (4) developing and field testing new or improved media for providing medical literature to users; (5) developing methods for determining the relationship between learning and relevance in medical information storage and retrieval systems; and (6) exploring automatic methods for retrospective searching of the specialized indexes of Excerpta Medica.

The priority projects to be undertaken by the Center are (1) the investigation of the information needs of medical scientists, and (2) the development of a highly detailed Master List of Biomedical Indexing Terms. Excerpta Medica has already been at work on the latter project for several years.


Informatica ◽  
2015 ◽  
Vol 26 (2) ◽  
pp. 221-240 ◽  
Author(s):  
Valentina Dagienė ◽  
Daina Gudonienė ◽  
Renata Burbaitė

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Hossein Estiri ◽  
Zachary H. Strasser ◽  
Jeffy G. Klann ◽  
Pourandokht Naseri ◽  
Kavishwar B. Wagholikar ◽  
...  

Abstract. This study aims to predict death after COVID-19 using only the past medical information routinely collected in electronic health records (EHRs) and to understand the differences in risk factors across age groups. Combining computational methods and clinical expertise, we curated clusters that represent 46 clinical conditions as potential risk factors for death after a COVID-19 infection. We trained age-stratified generalized linear models (GLMs) with component-wise gradient boosting to predict the probability of death based on what we know from the patients before they contracted the virus. Despite only relying on previously documented demographics and comorbidities, our models demonstrated similar performance to other prognostic models that require an assortment of symptoms, laboratory values, and images at the time of diagnosis or during the course of the illness. In general, we found age to be the most important predictor of mortality in COVID-19 patients. A history of pneumonia, which is rarely asked about in typical epidemiology studies, was one of the most important risk factors for predicting COVID-19 mortality. A history of diabetes with complications and cancer (breast and prostate) were notable risk factors for patients between the ages of 45 and 65 years. In patients aged 65–85 years, diseases that affect the pulmonary system, including interstitial lung disease, chronic obstructive pulmonary disease, lung cancer, and a smoking history, were important for predicting mortality. The ability to compute precise individual-level risk scores exclusively based on the EHR is crucial for effectively allocating and distributing resources, such as prioritizing vaccination among the general population.
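The study fits age-stratified GLMs with component-wise gradient boosting (closer to R's mboost than anything in scikit-learn); the sketch below is only a rough Python analogue, with scikit-learn's tree-based GradientBoostingClassifier standing in for the boosted GLMs, and the file and column names invented for illustration.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical EHR extract: binary comorbidity flags documented
# *before* infection, an age column, and a death outcome.
df = pd.read_csv("ehr_covid.csv")  # hypothetical file
features = ["pneumonia_history", "diabetes_complicated", "copd", "smoker"]

# One model per age band, mirroring the study's age-stratified design.
bands = [(18, 45), (45, 65), (65, 85)]
for lo, hi in bands:
    stratum = df[(df.age >= lo) & (df.age < hi)]
    X_train, X_test, y_train, y_test = train_test_split(
        stratum[features], stratum["death"], test_size=0.2, random_state=0
    )
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    print(f"ages {lo}-{hi}: test accuracy {model.score(X_test, y_test):.3f}")
```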

