EXTRACTING KNOWLEDGE FROM ENGLISH TRANSLATED QURAN USING NLP PATTERN

2015 ◽  
Vol 77 (19) ◽  
Author(s):  
Rohana Ismail ◽  
Zainab Abu Bakar ◽  
Nurazzah Abd. Rahman

Ontology is able to represent knowledge, moving it from an abstract view into formal semantics. It is essential for the success of knowledge-based systems because it is used to share vocabulary, discover new knowledge, and support flexible access to and easy integration of knowledge. Currently, ontologies built from the Quran are not complete, and most of their development is done manually. Manual development of an ontology is a time-consuming and labor-intensive task. Hence, automatic or semi-automatic ontology development, the field of Ontology Learning, is needed to efficiently extract knowledge and transform it into an ontology. Current techniques employed in Ontology Learning are based on statistical and Natural Language Processing methods. This paper presents results from an experiment to extract knowledge using existing Natural Language Processing (NLP) patterns based on the Ontology Learning approach. The initial experiment shows that the patterns can be used to extract knowledge, in the form of relations, from the English-translated Quran. In addition, NLP can also be used to identify new patterns that can be further explored.
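
To illustrate the general idea of pattern-based relation extraction, the following is a minimal sketch using a Hearst-style "X such as Y" rule over English sentences. The example sentences and the pattern itself are illustrative assumptions, not the authors' actual patterns or data.

```python
# Minimal sketch of pattern-based relation extraction (not the authors' exact
# patterns): a Hearst-style "X such as Y" rule applied to English sentences.
import re

# Hypothetical example sentences standing in for English-translated verses.
sentences = [
    "Provisions such as dates and grapes are mentioned as signs.",
    "Prophets such as Moses and Aaron were sent with clear proofs.",
]

# "<hypernym> such as <hyponym>(, <hyponym>)* (and <hyponym>)?"
PATTERN = re.compile(
    r"(?P<hypernym>\w+)\s+such as\s+"
    r"(?P<hyponyms>\w+(?:\s*,\s*\w+)*(?:\s+and\s+\w+)?)",
    re.IGNORECASE,
)

def extract_isa_relations(text):
    """Return (hyponym, 'is-a', hypernym) triples found by the pattern."""
    triples = []
    for match in PATTERN.finditer(text):
        hypernym = match.group("hypernym").lower()
        hyponyms = re.split(r"\s*,\s*|\s+and\s+", match.group("hyponyms"))
        triples.extend((h.lower(), "is-a", hypernym) for h in hyponyms if h)
    return triples

for sentence in sentences:
    print(extract_isa_relations(sentence))
```

Running the sketch yields triples such as (dates, is-a, provisions), which is the kind of relation an Ontology Learning pipeline would then map into an ontology.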

Author(s):  
K.G.C.M Kooragama ◽  
L.R.W.D. Jayashanka ◽  
J.A. Munasinghe ◽  
K.W. Jayawardana ◽  
Muditha Tissera ◽  
...  

Author(s):  
Saravanakumar Kandasamy ◽  
Aswani Kumar Cherukuri

Quantifying semantic similarity between concepts is an essential part of domains such as Natural Language Processing, Information Retrieval, and Question Answering, where text and the relationships within it must be understood. Over the last few decades, many measures have been proposed that incorporate various corpus-based and knowledge-based resources. WordNet and Wikipedia are two such knowledge-based resources. The contribution of WordNet in these domains is enormous due to its richness in defining a word and all of its relationships with other words. In this paper, we propose an approach to quantify the similarity between concepts that exploits the synsets and the gloss definitions of different concepts using WordNet. Our method considers the gloss definitions, the contextual words that help define a word, the synsets of those contextual words, and the confidence of a word's occurrence in another word's definition when calculating similarity. Evaluation on different gold-standard benchmark datasets shows the efficiency of our system in comparison with existing taxonomical and definitional measures.
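
As a rough illustration of a gloss-based measure in the spirit described above (not the authors' exact formula), the following sketch scores two words by the overlap of the content words in their WordNet gloss definitions, using NLTK's WordNet interface.

```python
# Rough sketch of a gloss-based similarity (NOT the authors' measure):
# score word pairs by the overlap of content words in their WordNet glosses.
# Requires: nltk.download("wordnet"); nltk.download("stopwords")
from nltk.corpus import wordnet as wn
from nltk.corpus import stopwords

STOP = set(stopwords.words("english"))

def gloss_words(word):
    """Collect content words from the glosses of all synsets of `word`."""
    words = set()
    for synset in wn.synsets(word):
        words.update(w.lower() for w in synset.definition().split())
    return {w for w in words if w.isalpha() and w not in STOP}

def gloss_overlap_similarity(word1, word2):
    """Jaccard overlap of gloss vocabularies as a crude similarity score."""
    g1, g2 = gloss_words(word1), gloss_words(word2)
    if not g1 or not g2:
        return 0.0
    return len(g1 & g2) / len(g1 | g2)

print(gloss_overlap_similarity("car", "automobile"))
print(gloss_overlap_similarity("car", "banana"))
```

A fuller measure along the lines described in the abstract would additionally weight contextual words by their synsets and by how confidently they occur in the other word's definition.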


Author(s):  
Azleena Mohd Kassim ◽  
Yu-N Cheah

Information Technology (IT) is often employed to put knowledge management policies into operation. However, many of these tools require human intervention when it comes to deciding how the knowledge is to be managed. The Semantic Web may be an answer to this issue, but many Semantic Web tools are not readily accessible to the regular IT user. Another problem that arises is that typical efforts to apply or reuse knowledge via a search mechanism do not necessarily link to other pages that are relevant. Blogging systems appear to address some of these challenges, but the browsing experience can be further enhanced by providing links to other relevant posts. In this chapter, the authors present a semantic blogging tool called SEMblog to identify, organize, and reuse knowledge based on the Semantic Web and ontologies. The SEMblog methodology brings together technologies such as Natural Language Processing (NLP), Semantic Web representations, and the ubiquity of the blogging environment to produce a more intuitive way to manage knowledge, especially in the areas of knowledge identification, organization, and reuse. Based on detailed comparisons with other similar systems, the uniqueness of SEMblog lies in its ability to automatically generate keywords and semantic links.
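
As a hedged illustration of the kind of processing involved (not the SEMblog implementation itself), the sketch below extracts simple frequency-based keywords from hypothetical blog posts and links posts whose keyword sets overlap.

```python
# Illustrative sketch only (not SEMblog): frequency-based keyword extraction
# and linking of posts that share extracted keywords.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "for", "on"}

def extract_keywords(text, top_n=5):
    """Return the top-n frequent content words as a crude keyword set."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 3)
    return {word for word, _ in counts.most_common(top_n)}

def semantic_links(posts):
    """Link pairs of posts whose keyword sets overlap."""
    keywords = {title: extract_keywords(body) for title, body in posts.items()}
    links = []
    titles = list(posts)
    for i, a in enumerate(titles):
        for b in titles[i + 1:]:
            shared = keywords[a] & keywords[b]
            if shared:
                links.append((a, b, sorted(shared)))
    return links

posts = {  # hypothetical blog posts
    "post-1": "Ontology engineering supports knowledge reuse across systems.",
    "post-2": "Knowledge reuse improves when an ontology defines shared terms.",
}
print(semantic_links(posts))
```

A system like the one described would replace the naive frequency step with NLP-based keyword generation and represent the resulting links in Semantic Web form.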


Author(s):  
Iraj Mantegh ◽  
Nazanin S. Darbandi

Robotic alternatives to many manual operations fall short in application due to the difficulty of capturing the manual skill of an expert operator. One of the main problems to be solved if robots are to become flexible enough for various manufacturing needs is that of end-user programming. An end-user with little or no technical expertise in the robotics area needs to be able to efficiently communicate the manufacturing task to the robot. This paper proposes a new method for robot task planning using some concepts of Artificial Intelligence. Our method is based on a hierarchical knowledge representation and propositional logic, which allows an expert user to incrementally integrate process and geometric parameters with the robot commands. The objective is to provide an intelligent and programmable agent such as a robot with a knowledge base about the attributes of human behaviors in order to facilitate the commanding process. The focus of this work is on robot programming for manufacturing applications. Industrial manipulators work with low-level programming languages. This work presents a new method based on Natural Language Processing (NLP) that allows a user to generate robot programs using a natural-language lexicon and task information. This will enable a manufacturing operator (for example, for painting) who may be unfamiliar with robot programming to easily employ the agent for manufacturing tasks.
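
A hypothetical sketch of the general idea, not the authors' planner: a small task lexicon maps natural-language verbs to parameterized low-level command templates, and a naive parse of the instruction fills in the target. All verbs, commands, and the example instruction are invented for illustration.

```python
# Hypothetical sketch (not the authors' method): map a natural-language
# instruction to low-level robot commands via a task lexicon.
import re

# Lexicon associating task verbs with parameterized command templates.
LEXICON = {
    "paint": ["MOVE_TO({target})", "SPRAY_ON()", "SWEEP({target})", "SPRAY_OFF()"],
    "pick":  ["MOVE_TO({target})", "GRIP_CLOSE()"],
    "place": ["MOVE_TO({target})", "GRIP_OPEN()"],
}

def plan_from_sentence(sentence):
    """Very naive verb/object extraction followed by template expansion."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    verbs = [t for t in tokens if t in LEXICON]
    if not verbs:
        raise ValueError("no known task verb in: " + sentence)
    target = tokens[-1]  # assume the last word names the target object/surface
    program = []
    for verb in verbs:
        program.extend(cmd.format(target=target.upper()) for cmd in LEXICON[verb])
    return program

print(plan_from_sentence("Please paint the left door panel"))
```

The approach described in the paper goes further, layering hierarchical knowledge and propositional logic over such a lexicon so process and geometric parameters can be integrated incrementally.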


Author(s):  
KOH TOH TZU

Since the end of last year, researchers at the Institute of Systems Science (ISS) have been considering a more ambitious project as part of its multilingual programming objective. This project examines the domain of Chinese Business Letter Writing. With the problem defined as generating Chinese letters to meet business needs, investigations suggest an intersection of three possible approaches: knowledge engineering, form processing, and natural language processing. This paper reports some of the findings and documents the design and implementation issues that have arisen and been tackled as prototyping work progresses.
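
As a hedged illustration of the form-processing approach only (not the ISS prototype), the sketch below merges a completed business form into a Chinese letter template; the fields and wording are invented for the example.

```python
# Illustration only (not the ISS prototype): form-driven letter generation,
# where a filled business form is merged into a Chinese letter template.
LETTER_TEMPLATE = (
    "尊敬的{recipient}：\n\n"
    "感谢贵公司{date}的来函。关于{subject}，我们{decision}。\n\n"
    "此致\n敬礼\n{sender}"
)

def generate_letter(form):
    """Merge a completed business form into the letter template."""
    return LETTER_TEMPLATE.format(**form)

form = {  # hypothetical form fields captured from the user
    "recipient": "王经理",
    "date": "三月五日",
    "subject": "订单交货期的询问",
    "decision": "确认可以在四月十五日前交货",
    "sender": "系统科学研究院",
}
print(generate_letter(form))
```

Knowledge engineering and natural language processing would extend this by choosing templates and phrasing according to the business situation rather than relying on fixed slots.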


Author(s):  
Roy Rada

The techniques of artificial intelligence include knowledge-based, machine learning, and natural language processing techniques. The discipline of investing requires data identification, asset valuation, and risk management. Artificial intelligence techniques apply to many aspects of financial investing, and published work has shown an emphasis on the application of knowledge-based techniques for credit risk assessment and machine learning techniques for stock valuation. However, in the future, knowledge-based, machine learning, and natural language processing techniques will be integrated into systems that simultaneously address data identification, asset valuation, and risk management.


Author(s):  
E. Hope Weissler ◽  
Jikai Zhang ◽  
Steven Lippmann ◽  
Shelley Rusincovitch ◽  
Ricardo Henao ◽  
...  

Background: Peripheral artery disease (PAD) is underrecognized, undertreated, and understudied: each of these endeavors requires efficient and accurate identification of patients with PAD. Currently, PAD patient identification relies on diagnosis/procedure codes or lists of patients diagnosed or treated by specific providers in specific locations and ways. The goal of this research was to leverage natural language processing to identify patients with PAD in an electronic health record system more accurately than a structured data–based approach. Methods: The clinical notes from a cohort of 6861 patients in our health system whose PAD status had previously been adjudicated were used to train, test, and validate a natural language processing model using 10-fold cross-validation. The performance of this model was described using the area under the receiver operating characteristic and average precision curves; its performance was quantitatively compared with an administrative data–based least absolute shrinkage and selection operator (LASSO) approach using the DeLong test. Results: The median (SD) of the area under the receiver operating characteristic curve for the natural language processing model was 0.888 (0.009) versus 0.801 (0.017) for the LASSO-based approach alone (DeLong P <0.0001). The median (SD) of the area under the average precision curve was 0.909 (0.008) versus 0.816 (0.012) for the structured data–based approach. When sensitivity was set at 90%, the precision for LASSO was 65% and for the machine learning approach was 74%, while the specificity for LASSO was 41% and for the machine learning approach was 62%. Conclusions: Using a natural language processing approach in addition to partial cohort preprocessing with a LASSO-based model, we were able to meaningfully improve our ability to identify patients with PAD compared with an approach using structured data alone. This model has potential applications both to interventions targeted at improving patient care and to efficient, large-scale PAD research. Graphic Abstract: A graphic abstract is available for this article.
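
To illustrate the general setup (not the authors' model or data), the following sketch trains a bag-of-words classifier over hypothetical clinical note snippets and reports 10-fold cross-validated AUROC with scikit-learn.

```python
# Hedged sketch of the general setup (not the authors' model): a bag-of-words
# classifier over clinical notes, evaluated with cross-validated AUROC.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical adjudicated cohort: note text with PAD labels (1 = PAD).
notes = [
    "claudication in both calves, diminished pedal pulses, ABI 0.6",
    "routine follow-up for hypertension, no leg symptoms reported",
    "rest pain of the left foot, prior femoral-popliteal bypass",
    "annual physical, normal distal pulses, no vascular complaints",
] * 25  # repeat so cross-validation has enough samples per fold
labels = [1, 0, 1, 0] * 25

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)

# 10-fold cross-validated area under the ROC curve, as in the study design.
scores = cross_val_score(model, notes, labels, cv=10, scoring="roc_auc")
print("AUROC: %.3f (SD %.3f)" % (scores.mean(), scores.std()))
```

A structured data–based comparator, analogous to the LASSO baseline, would replace the note text with diagnosis/procedure code indicators and use an L1-penalized model instead.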


2021 ◽  
Vol 2 (1) ◽  
pp. 43-48
Author(s):  
Merlin Florrence

Natural Language Processing (NLP) is rapidly expanding into all domains of knowledge acquisition to facilitate users of different languages. Knowledge-based NLP systems need to be developed to provide better results. Knowledge-based systems can be implemented using ontologies, where an ontology is a collection of terms and concepts arranged taxonomically. Concepts that are visualized graphically are more understandable than those presented in text form. In this research paper, a new multilingual ontology visualization plug-in, MLGrafViz, is developed to visualize ontologies in different natural languages. The plug-in is developed for the Protégé ontology editor and allows the user to translate and visualize the core ontology in 135 languages.
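
As an illustration of the underlying idea only (not the MLGrafViz plug-in or the Protégé API), the sketch below translates concept labels through a lookup table and prints a small hypothetical taxonomy as an indented tree.

```python
# Illustration only (not MLGrafViz): translate concept labels via a lookup
# table and print the taxonomy as an indented tree.
TAXONOMY = {  # hypothetical mini-ontology: concept -> list of sub-concepts
    "Animal": ["Bird", "Fish"],
    "Bird": [],
    "Fish": [],
}

TRANSLATIONS = {  # hypothetical label translations (here: Malay)
    "Animal": "Haiwan",
    "Bird": "Burung",
    "Fish": "Ikan",
}

def show_tree(concept, depth=0):
    """Print the concept hierarchy with translated labels."""
    label = TRANSLATIONS.get(concept, concept)
    print("  " * depth + label)
    for child in TAXONOMY.get(concept, []):
        show_tree(child, depth + 1)

show_tree("Animal")
```

A plug-in along these lines would obtain the labels from the loaded ontology and the translations from a translation service rather than a hand-built table.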

