An Ontology Based Expert System for Lung Cancer: OBESLC

Lung cancer is the second most frequent cancer in both men and women and the leading cause of cancer death worldwide. The American Cancer Society (ACS) estimated nearly 228,150 new cases of lung cancer and 142,670 deaths from lung cancer in the United States for the year 2019. This paper proposes an ontology-based expert system to diagnose lung cancer and identify its stage. An ontology is a specification of a conceptualization; it describes knowledge about a domain in the form of concepts and the relationships among them, providing a framework for representing shareable and reusable knowledge across a domain. The advantage of using ontologies for knowledge representation is that they are machine readable. We designed a system named OBESLC (Ontology Based Expert System for Lung Cancer) for lung cancer diagnosis, constructing its ontology with the Web Ontology Language (OWL) and the Resource Description Framework (RDF). The design of the system depends on knowledge about patients' symptoms and the state of lung nodules to build a knowledge base for lung cancer. We verified the OBESLC ontology by querying it with the SPARQL query language, a popular language for extracting information from the Semantic Web, and validated it by developing reasoning rules in the Semantic Web Rule Language (SWRL). To provide a user interface, we implemented our approach in Java using the Jena API and the Eclipse editor.
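The verification step the abstract describes can be pictured with a small query sketch. The class and property names below (`obes:LungCancerPatient`, `obes:hasStage`) are hypothetical stand-ins for the OBESLC vocabulary, which the abstract does not spell out:

```sparql
# Hypothetical SPARQL query over an OBESLC-style ontology:
# retrieve patients the reasoner has classified as lung cancer cases,
# together with the inferred stage.
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX obes: <http://example.org/obeslc#>

SELECT ?patient ?stage
WHERE {
  ?patient rdf:type obes:LungCancerPatient ;
           obes:hasStage ?stage .
}
```

A SWRL rule in the same spirit might classify a patient from symptom and nodule findings, e.g. `Patient(?p) ^ hasSymptom(?p, ?s) ^ hasNodule(?p, ?n) -> LungCancerSuspect(?p)`; the actual rule bodies would come from the paper's knowledge base.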

2018
Vol 4 (Supplement 2)
pp. 201s-201s
Author(s):
L. Atundo
F. Chite
G. Chesumbai
A. Kosgei

Background: Lung cancer diagnosis has been a challenge in western Kenya due to the technicalities of screening and diagnostic procedures. The burden in the adult population is largely unknown, as most patients are managed for pulmonary tuberculosis, since both diseases have similar clinical manifestations. The Eldoret Cancer Registry (ECR) provides statistics and an epidemiologic profile across the western region of Kenya. Aim: The aim of this study was to establish lung cancer incidence in relation to year of diagnosis, age, gender, and stage at diagnosis across the western Kenya region. Methods: In a retrospective review, all cases of lung cancer diagnosed at Moi Teaching and Referral Hospital from 2012 to 2016 were identified from the ECR. Data on year of incidence, age, gender, stage at diagnosis, and county of origin were analyzed. Results: Of the 60 patients diagnosed with lung cancer, 11 cases (18.3%) were diagnosed in 2012, 10 (16.7%) in 2013, 12 (20%) in 2014, 12 (20%) in 2015, and 15 (25%) in 2016. Incidence by age cohort was: 0-27 years, 1 case (1.7%); 30-39 years, 4 (6.7%); 40-49 years, 8 (13.3%); 50-59 years, 17 (28.3%); 60-69 years, 12 (20%); 70-79 years, 15 (25%); above 80 years, 3 (5%). By gender, males accounted for 38 cases (63.3%) and females for 22 (36.7%). By stage at diagnosis, 6 cases (10%) were stage IV and 54 (90%) were of unknown stage. Conclusion: 2016 had the highest incidence, which may be associated with increased awareness of screening services at MTRH. Most cases were between 50 and 79 years of age, which could be attributed to slow disease progression and delays in early diagnosis. Incidence was higher in males, which may be related to greater exposure to risk factors such as smoking and industrial fumes. There is a need for early diagnosis and disease staging, as most cases were at stage IV or of unknown stage.


Heritage
2019
Vol 2 (2)
pp. 1471-1498
Author(s):
Ikrom Nishanbaev
Erik Champion
David A. McMeekin

The amount of digital cultural heritage data produced by cultural heritage institutions is growing rapidly. Digital cultural heritage repositories have therefore become an efficient and effective way to disseminate and exploit digital cultural heritage data. However, many digital cultural heritage repositories worldwide share technical challenges such as data integration and interoperability among national and regional digital cultural heritage repositories. The result is dispersed and poorly linked cultural heritage data, backed by non-standardized search interfaces, which thwart users' attempts to contextualize information from distributed repositories. The recently introduced geospatial semantic web is being adopted by a great many new and existing digital cultural heritage repositories to overcome these challenges. However, no one has yet conducted a conceptual survey of geospatial semantic web concepts for a cultural heritage audience, so such a survey, pertinent to the cultural heritage field, is needed. It equips cultural heritage professionals and practitioners with an overview of the necessary tools, and of free and open source semantic web and geospatial semantic web platforms, that can be used to implement geospatial semantic web-based cultural heritage repositories. Hence, this article surveys the state-of-the-art geospatial semantic web concepts pertinent to the cultural heritage field. It then proposes a framework to turn geospatial cultural heritage data into machine-readable and processable Resource Description Framework (RDF) data for use in the geospatial semantic web, with a case study to demonstrate its applicability. Furthermore, it outlines key free and open source semantic web and geospatial semantic web platforms for cultural heritage institutions. In addition, it examines leading cultural heritage projects employing the geospatial semantic web.
Finally, the article discusses attributes of the geospatial semantic web that require more attention, that can result in generating new ideas and research questions for both the geospatial semantic web and cultural heritage fields.
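The output of the framework described above can be pictured as a small RDF fragment. The identifiers and coordinates below are illustrative placeholders, not taken from the article; `geo:asWKT` is the standard GeoSPARQL property for attaching geometry to a resource:

```turtle
@prefix geo: <http://www.opengis.net/ont/geosparql#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix ex:  <http://example.org/heritage/> .

# A hypothetical heritage site expressed as machine-readable, geospatially
# enabled RDF: a title plus a point geometry serialized as a WKT literal.
ex:site42 a geo:Feature ;
    dct:title "Example Heritage Site" ;
    geo:hasGeometry ex:site42geom .

ex:site42geom a geo:Geometry ;
    geo:asWKT "POINT(115.86 -31.95)"^^geo:wktLiteral .
```

Data in this shape can be queried spatially by GeoSPARQL-capable triple stores, which is what makes a repository "geospatial semantic web-based" in the article's sense.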


2010
Vol 9 (1)
pp. 1-11
Author(s):
K. Balachandran
R. Anitha

Knowledge-based expert systems, or expert systems, use human knowledge to solve problems that would normally require human intelligence. These systems represent expert knowledge as data or rules within the computer, which can be called upon when needed to solve problems. Lung cancer is one of the most dreaded diseases of the modern era and is responsible for the most cancer deaths among both men and women throughout the world. Early diagnosis and timely treatment are imperative for a cure; longevity and cure depend on early detection. This paper gives an insight into identifying the target group of people who suffer from, or are susceptible to, lung cancer, so that proper medical attention can be initiated based on the findings. An expert system tool was developed to find this target group based on non-clinical parameters; symptoms and risk factors associated with lung cancer are taken as the basis of this study. The expert system works on a rule-based approach to collect the data, and a supervised learning approach is then used to infer from the basic data. Once a sufficient knowledge base is generated, the system can be made to operate in an unsupervised learning mode.
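The rule-based screening idea described above can be sketched in a few lines. The findings, weights, and threshold below are illustrative assumptions, not the authors' actual rule base:

```python
# Minimal rule-based screening sketch: each non-clinical finding carries a
# weight; a total at or above the threshold flags the person as part of the
# target group that should seek medical attention.
RULES = {               # illustrative weights, not from the paper
    "persistent_cough": 3,
    "smoker": 4,
    "chest_pain": 2,
    "family_history": 2,
}
THRESHOLD = 5           # illustrative cut-off

def in_target_group(findings):
    """Return True if the weighted findings meet or exceed the threshold."""
    score = sum(RULES.get(f, 0) for f in findings)
    return score >= THRESHOLD

print(in_target_group(["smoker", "persistent_cough"]))  # True (4 + 3 >= 5)
print(in_target_group(["chest_pain"]))                  # False (2 < 5)
```

A real system of this kind would learn the weights from labeled cases (the supervised step the abstract mentions) rather than hard-coding them.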


Author(s):  
Reto Gmür
Donat Agosti

Taxonomic treatments are sections of publications that document the features or distribution of a related group of organisms (called a "taxon", plural "taxa") in ways adhering to highly formalized conventions; published in scientific journals, they shape our understanding of global biodiversity (Catapano 2019). Treatments are the building blocks of the evolving scientific consensus on taxonomic entities. The semantics of these treatments and their relationships are highly structured: taxa are introduced, merged, made obsolete, split, renamed, associated with specimens, and so on. Plazi makes this content available in machine-readable form using the Resource Description Framework (RDF). RDF is the standard model for Linked Data and the Semantic Web, and it can be exchanged in different formats (also known as concrete syntaxes) such as RDF/XML or Turtle. The data model describes graph structures and relies on Internationalized Resource Identifiers (IRIs); ontologies such as the Darwin Core basic vocabulary are used to assign meaning to the identifiers. For Synospecies, we unite all treatments into one large knowledge graph, modelling taxonomic knowledge and its evolution with complete references to quotable treatments. This knowledge graph expresses much more than any individual treatment could convey, because every referenced entity is linked to every other relevant treatment. On synospecies.plazi.org, we provide a user-friendly interface to find the names and treatments related to a taxon. An advanced mode allows execution of queries using the SPARQL query language.
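A query in the advanced mode might look like the sketch below. It uses Darwin Core terms, as the abstract indicates, but the treatment predicate and the genus value are assumptions made for illustration:

```sparql
# Hypothetical SPARQL query against a treatment knowledge graph:
# find treatments that define taxon concepts within a given genus.
PREFIX dwc: <http://rs.tdwg.org/dwc/terms/>
PREFIX trt: <http://plazi.org/vocab/treatment#>

SELECT ?treatment ?name
WHERE {
  ?treatment trt:definesTaxonConcept ?taxon .
  ?taxon dwc:genus "Formica" ;
         dwc:scientificName ?name .
}
```

Because the graph links every referenced entity across treatments, one query of this shape can surface the whole publication history of a name, which no single treatment contains.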


2019
Vol 8 (3)
pp. 1306-1308

Cancer registries are important for predicting and treating cancer. Numerous solutions are available in the research literature for analyzing the data in cancer registries. However, they lack a well-defined data model, since the registries link to external web pages. To overcome this issue, a system is proposed that represents cancer data using a semantic data model. The data model uses the Resource Description Framework (RDF) format to represent data from local cancer databases, and the semantically represented data are queried with the SPARQL query language. The queries are optimized with a modified shuffled frog leaping algorithm (MSFL), which facilitates the treatment of cancer patients.
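The abstract does not detail the modification in MSFL, but the standard shuffled frog leaping algorithm it builds on can be sketched on a toy minimization problem; all parameters below are illustrative, and the paper's objective would score SPARQL query plans rather than a sphere function:

```python
import random

def sfla(objective, dim=2, frogs=30, memeplexes=3, iters=50, seed=1):
    """Minimize `objective` with a basic shuffled frog leaping algorithm.

    Each iteration: sort frogs by fitness, deal them round-robin into
    memeplexes, move each memeplex's worst frog toward its local best
    (falling back to the global best), then shuffle all frogs together.
    The random-reset step of the full algorithm is omitted for brevity.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(frogs)]
    for _ in range(iters):
        pop.sort(key=objective)
        best_global = pop[0]
        plexes = [pop[i::memeplexes] for i in range(memeplexes)]
        for plex in plexes:
            best, worst = plex[0], plex[-1]
            # leap toward the memeplex best
            cand = [w + rng.random() * (b - w) for b, w in zip(best, worst)]
            if objective(cand) >= objective(worst):
                # no improvement: leap toward the global best instead
                cand = [w + rng.random() * (b - w)
                        for b, w in zip(best_global, worst)]
            if objective(cand) < objective(worst):
                plex[-1] = cand  # accept only improvements
        pop = [frog for plex in plexes for frog in plex]
    return min(pop, key=objective)

sphere = lambda x: sum(v * v for v in x)
best = sfla(sphere)
print(sphere(best))  # best fitness found
```

Since moves are accepted only when they improve fitness, the best solution found never degrades across iterations.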


F1000Research
2019
Vol 8
pp. 1677
Author(s):
Toshiaki Katayama
Shuichi Kawashima
Gos Micklem
Shin Kawano
Jin-Dong Kim
...  

Publishing databases in the Resource Description Framework (RDF) model is becoming widely accepted as a way to maximize the syntactic and semantic interoperability of open data in the life sciences. Here we report advancements made in the 6th and 7th annual BioHackathons, which were held in Tokyo and Miyagi, respectively. This review consists of two major sections covering: 1) improvement and utilization of RDF data in various domains of the life sciences and 2) metadata about these RDF data, the resources that store them, and the service quality of SPARQL Protocol and RDF Query Language (SPARQL) endpoints. The first section describes how we developed RDF data, ontologies, and tools in genomics, proteomics, metabolomics, glycomics, and literature text mining. The second section describes how we defined descriptions of datasets, the provenance of data, and quality assessment of services and service discovery. By enhancing the harmonization of these two layers of machine-readable data and knowledge, we improve the way community-wide resources are developed and published. Moreover, we outline best practices for the future, and prepare ourselves for an exciting and unanticipated variety of real-world applications in coming years.
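The metadata layer described in the second section is commonly expressed with vocabularies such as VoID. The dataset IRI, counts, and endpoint below are placeholders, not BioHackathon outputs:

```turtle
@prefix void: <http://rdfs.org/ns/void#> .
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix ex:   <http://example.org/datasets/> .

# Hypothetical VoID description of an RDF dataset and its SPARQL endpoint:
# machine-readable metadata that enables service discovery and quality checks.
ex:glycomics-rdf a void:Dataset ;
    dct:title "Example glycomics RDF dataset" ;
    void:sparqlEndpoint <http://example.org/sparql> ;
    void:triples 1200000 .
```

Descriptions in this shape let tools discover an endpoint, estimate dataset size, and monitor service quality without crawling the data itself.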


2018
Vol 8 (1)
pp. 18-37
Author(s):
Median Hilal
Christoph G. Schuetz
Michael Schrefl

The foundations for traditional data analysis are Online Analytical Processing (OLAP) systems that operate on multidimensional (MD) data. The Resource Description Framework (RDF) serves as the foundation for the publication of a growing amount of semantic web data still largely untapped by companies for data analysis. Most RDF data sources, however, do not correspond to the MD modeling paradigm and, as a consequence, elude traditional OLAP. The complexity of RDF data in terms of structure, semantics, and query languages renders RDF data analysis challenging for a typical analyst not familiar with the underlying data model or the SPARQL query language. Hence, conducting RDF data analysis is not a straightforward task. We propose an approach for the definition of superimposed MD schemas over arbitrary RDF datasets and show how to represent the superimposed MD schemas using well-known semantic web technologies. On top of that, we introduce OLAP patterns for RDF data analysis, which are recurring, domain-independent elements of data analysis. Analysts may compose queries by instantiating a pattern using only the MD concepts and business terms. Upon pattern instantiation, the corresponding SPARQL query over the source data can be automatically generated, sparing analysts from technical details and fostering self-service capabilities.
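Pattern instantiation as described above amounts to filling a SPARQL template with MD concepts. The template shape and property IRIs below are illustrative assumptions, not the authors' actual pattern catalog:

```python
# Sketch of an "aggregate a measure per dimension level" OLAP pattern:
# the analyst supplies business terms, and the SPARQL query is generated.
TEMPLATE = """SELECT ?{dim} (SUM(?{measure}) AS ?total)
WHERE {{
  ?obs <{dim_prop}> ?{dim} ;
       <{measure_prop}> ?{measure} .
}}
GROUP BY ?{dim}"""

def instantiate(dim, measure, dim_prop, measure_prop):
    """Generate a SPARQL aggregation query from pattern parameters."""
    return TEMPLATE.format(dim=dim, measure=measure,
                           dim_prop=dim_prop, measure_prop=measure_prop)

query = instantiate("region", "sales",
                    "http://example.org/schema#region",
                    "http://example.org/schema#sales")
print(query)
```

In the paper's setting the mapping from business terms to property IRIs would come from the superimposed MD schema, so the analyst never sees the generated SPARQL.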


2010
Vol 9 (2)
pp. 62-71
Author(s):
K. Balachandran
R. Anitha

Knowledge-based expert systems, or expert systems, use human knowledge to solve problems that would normally require human intelligence. These systems represent expert knowledge as data or rules within the computer, which can be called upon when needed to solve problems. Lung cancer is one of the most dreaded diseases of the modern era and is responsible for the most cancer deaths among both men and women throughout the world. Early diagnosis and timely treatment are imperative for a cure; longevity and cure depend on early detection. This paper gives an insight into identifying the target group of people who suffer from, or are susceptible to, lung cancer, so that proper medical attention can be initiated based on the findings. An expert system tool was developed to find this target group based on non-clinical parameters; symptoms and risk factors associated with lung cancer are taken as the basis of this study. The expert system works on a rule-based approach to collect the data, and a supervised learning approach is then used to infer from the basic data. Once a sufficient knowledge base is generated, the system can be made to operate in an unsupervised learning mode.


2020
Vol 24 (18)
pp. 14179-14207
Author(s):
Ahmed Mostafa Khalil
Sheng-Gang Li
Yong Lin
Hong-Xia Li
Sheng-Guan Ma
