A Novel Information Management Method of Text Recognition Based on Ontology Semantic Domain

2013 ◽  
Vol 482 ◽  
pp. 335-340
Author(s):  
Yang Xin Yu ◽  
Liu Yang Wang

In today's network environment, the semantic gap between machine language and human language is among the most important challenges of information management. Text processing plays an important role in information management and knowledge management. This paper proposes a method that shows how a text is related to its background knowledge. By background knowledge, we mean the parts of a domain ontology that are not expressed in the text but are shared by its creator and potential readers. Given a text-ontology mapping, one may discover the semantic domain of a text and how the text covers the domain knowledge. The semantic relatedness between the concepts mentioned in a text, taken as a whole unit, and the other concepts of the domain is measured, based on the semantic relations the ontology defines among its concepts. The experimental results show that the proposed method achieves better overall performance and offers a natural way to improve retrieval results for users' needs.
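The abstract does not specify its relatedness measure; as an illustration only, a minimal sketch of one common graph-based approach, treating a hypothetical toy ontology as an undirected graph of semantic relations and scoring a domain concept against the set of concepts mentioned in a text by shortest-path distance:

```python
from collections import deque

# Hypothetical toy ontology: concepts linked by semantic relations (undirected).
ontology = {
    "vehicle": ["car", "engine"],
    "car": ["vehicle", "wheel"],
    "engine": ["vehicle", "fuel"],
    "wheel": ["car"],
    "fuel": ["engine"],
}

def path_length(graph, a, b):
    """Shortest number of relation edges between two concepts (BFS)."""
    if a == b:
        return 0
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nb in graph.get(node, []):
            if nb == b:
                return d + 1
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return None  # unreachable

def relatedness(graph, text_concepts, concept):
    """Relatedness of a domain concept to the text as a whole: 1 / (1 + min distance)."""
    dists = [path_length(graph, c, concept) for c in text_concepts]
    dists = [d for d in dists if d is not None]
    return 1.0 / (1 + min(dists)) if dists else 0.0

# "fuel" is three edges from "car" (car-vehicle-engine-fuel), so relatedness is 0.25.
print(relatedness(ontology, ["car", "wheel"], "fuel"))
```

Real systems would weight different relation types and normalize over the whole concept set, but the path-based core is the same.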

2017 ◽  
Vol 863 ◽  
pp. 361-367
Author(s):  
Jun Zhong ◽  
Hong Bao ◽  
Yan Wang

As the regulatory requirements for automobile disassembly and recycling become increasingly strict, it is necessary to focus on environmental protection in the processing and information management systems for disassembly and recycling. This paper studies the information management method and system implementation for the disassembly and recycling of automotive products. Firstly, the concept of a parts-disassembly level is put forward, and a level planning model is built, together with methods for confirming the parts-disassembly level scheme by LCIA. Then, a management system platform for the disassembly process of automotive products is developed. Finally, the feasibility and availability of the system are verified through an application example of the disassembly level for a certain type of automobile.


2012 ◽  
Vol 14 (3) ◽  
pp. 14-34
Author(s):  
Roger Clark ◽  
Jonathan Wingfield

In 2006 AstraZeneca (AZ) executed a strategy to centralise all biochemical screening activities within one of its Research Areas into a single team. This team had the remit to deliver data faster and more consistently whilst reducing the FTEs deployed against such activities. By keeping the team small, AZ hoped to facilitate more flexible use of resources, remaining agile enough to respond to changing business demands; however, this centralised approach brought with it a fresh set of challenges, not least of which was information management. This review describes a successful LIMS implementation within AZ (which deployed a customised COTS solution in just four months). It outlines the steps taken over the initial system development life cycle and highlights the requirement for dedicated in-house resource (with intimate domain knowledge) coupled with experienced vendor personnel. It goes on to explore the requirement for continued evolution of the system and the challenges this posed.


Author(s):  
Robert Hallis

The Scholarship of Teaching and Learning nurtures an academic discussion of best instructional practices. This case study examines the role domain knowledge plays in determining the extent to which students can effectively analyze an opinion piece from a major news organization, locate a relevant source to support their view of the issue, and reflect on the quality of their work. The goal of analyzing an opinion piece is twofold: it fosters critical thinking in analyzing the strength of an argument, and it promotes information management skills in locating and incorporating relevant sources in a real-world scenario. Students, however, exhibited difficulties in accurately completing the assignment and usually overestimated their expertise. This chapter traces how each step in the process of making this study public clarified the issues encountered. The focus here, however, centers on the context within which the study was formulated, the issues that contributed to framing the research question, and how the context of inquiry served to deepen insights in interpreting the results.


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4558
Author(s):  
Yiping Xu ◽  
Hongbing Ji ◽  
Wenbo Zhang

Detecting and removing ghosts is an important challenge for moving object detection because ghosts, once formed, remain indefinitely, degrading overall detection performance. To deal with this issue, we first classified ghosts into two categories according to the way they were formed. Then, a sample-based two-layer background model and the histogram similarity of ghost areas were proposed to detect and remove the two types of ghosts, respectively. Furthermore, three important parameters in the two-layer model, i.e., the distance threshold, the similarity threshold of the local binary similarity pattern (LBSP), and the time sub-sampling factor, were automatically determined from the spatio-temporal information of each pixel, allowing rapid adaptation to scene changes. The experimental results on the CDnet 2014 dataset demonstrated that the proposed algorithm not only effectively eliminated ghost areas but also outperformed state-of-the-art approaches in overall performance.
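The histogram-similarity check for ghost areas can be illustrated with a hedged sketch; the paper's exact features and thresholds are not given in the abstract, so the grayscale histogram, the Bhattacharyya coefficient, and the 0.9 threshold below are all illustrative assumptions:

```python
def histogram(pixels, bins=8, max_val=256):
    """Normalized grayscale histogram of a region's pixel values."""
    h = [0] * bins
    for p in pixels:
        h[p * bins // max_val] += 1
    total = sum(h)
    return [c / total for c in h]

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient: 1.0 for identical histograms, 0.0 for disjoint ones."""
    return sum((a * b) ** 0.5 for a, b in zip(h1, h2))

# In a ghost region the object has already left, so the current frame and the
# background model look alike and their histograms are highly similar.
current = [10, 12, 200, 205, 11, 198]
background = [11, 13, 199, 204, 12, 201]
sim = bhattacharyya(histogram(current), histogram(background))
print(sim > 0.9)  # hypothetical threshold; high similarity flags a ghost
```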


2016 ◽  
Vol 25 (01) ◽  
pp. 1550029
Author(s):  
M. Vilares Ferro ◽  
M. Fernández Gavilanes ◽  
A. Blanco González ◽  
C. Gómez-Rodríguez

A proposal for intelligent retrieval in the biodiversity domain is described. It applies natural language processing to integrate linguistic and domain knowledge into a mathematical model for information management, formalizing the notion of semantic similarity in different degrees. The goal is to provide computational tools to identify, extract, and relate not only data but also scientific notions, even if the information available to start the process is incomplete. The use of conceptual graphs as a basis for interpretation makes it possible to avoid classic ontologies, whose start-up requires costly generation and maintenance protocols and which unnecessarily overload the access task for inexpert users. We exploit the automatic generation of these structures from raw texts through graphical and natural language interaction, while providing a solid logical and linguistic foundation to sustain the curation of databases.


2012 ◽  
Vol 594-597 ◽  
pp. 2986-2989
Author(s):  
Rui Wang ◽  
Deng Hua Zhong ◽  
Jun Ping Zhou

The work of a construction consultant on hydropower projects in China differs from that in other countries: the Chinese consultant acts more like a supervisor whose work is constrained by the owner. Therefore, the consultant has to coordinate among designers, the owner, and constructors, and information management becomes complex and tedious. Especially in the manual management mode, it is difficult to communicate among departments and staff and to manage information promptly and efficiently. This paper proposes a process-control-based information management method, comprising the main consultant workflow and the consultant business database, and uses workflow technology to develop a flexible and adaptable MIS (Management Information System) for construction consultants.


Author(s):  
Maxat Kulmanov ◽  
Fatima Zohra Smaili ◽  
Xin Gao ◽  
Robert Hoehndorf

Ontologies have long been employed in the life sciences to formally represent and reason over domain knowledge, and they are employed in almost every major biological database. Recently, ontologies are increasingly being used to provide background knowledge in similarity-based analysis and machine learning models. The methods employed to combine ontologies and machine learning are still novel and actively being developed. We provide an overview over the methods that use ontologies to compute similarity and incorporate them in machine learning methods; in particular, we outline how semantic similarity measures and ontology embeddings can exploit the background knowledge in biomedical ontologies, and how ontologies can provide constraints that improve machine learning models. The methods and experiments we describe are available as a set of executable notebooks, and we also provide a set of slides and additional resources at https://github.com/bio-ontology-research-group/machine-learning-with-ontologies.

Key points:
- Ontologies provide background knowledge that can be exploited in machine learning models.
- Ontology embeddings are structure-preserving maps from ontologies into vector spaces and provide an important method for utilizing ontologies in machine learning. Embeddings can preserve different structures in ontologies, including their graph structures, syntactic regularities, or their model-theoretic semantics.
- Axioms in ontologies, in particular those involving negation, can be used as constraints in optimization and machine learning to reduce the search space.
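As a minimal illustration of one family of semantic similarity measures such overviews cover, Resnik similarity scores two terms by the information content of their most informative common ancestor. The toy ontology, term names, and annotation counts below are hypothetical, not taken from any real biomedical ontology:

```python
import math

# Hypothetical toy ontology as child -> parents (a small DAG).
parents = {
    "apoptosis": ["cell death"],
    "necrosis": ["cell death"],
    "cell death": ["biological process"],
    "biological process": [],
}
# Hypothetical annotation counts per term (each count includes descendants).
counts = {"apoptosis": 5, "necrosis": 3, "cell death": 10, "biological process": 20}

def ancestors(term):
    """A term together with all of its ancestors in the DAG."""
    out = {term}
    for p in parents.get(term, []):
        out |= ancestors(p)
    return out

def ic(term):
    """Information content: -log of the term's annotation probability."""
    return -math.log(counts[term] / counts["biological process"])

def resnik(a, b):
    """Resnik similarity: IC of the most informative common ancestor."""
    return max(ic(t) for t in ancestors(a) & ancestors(b))

# The most informative common ancestor of the two terms is "cell death",
# with probability 10/20, so the similarity is -log(0.5) = log 2.
print(round(resnik("apoptosis", "necrosis"), 3))
```

Ontology embeddings take a different route (mapping terms into vector space and comparing with, e.g., cosine similarity), but both exploit the same background knowledge.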


The questions of analysis, research, and development of game-theoretic optimization models and of methods of informational influence, control, and confrontation in multi-criteria information networks are considered.
Keywords: game and graph models; information network; network game; player coalition model; information management method; agents' benefit


Author(s):  
Hacène Cherfi ◽  
Amedeo Napoli ◽  
Yannick Toussaint

A text mining process using association rules generates a very large number of rules. According to domain experts, most of these rules convey common knowledge, that is, they associate terms that experts would readily relate to each other. In order to focus the interpretation of results and discover new knowledge units, it is necessary to define criteria for classifying the extracted rules. Most rule classification methods are based on numerical quality measures. In this chapter, the authors introduce two classification methods: the first is a classical numerical approach using quality measures, and the second is based on domain knowledge. They propose the second, original approach in order to classify association rules according to qualitative criteria, using a domain model as background knowledge. Hence, they extend the classical numerical approach in an effort to combine data mining and semantic techniques for the post-mining and selection of association rules. The authors mined a corpus of texts in molecular biology; they present the results of both approaches, compare them, and discuss the benefits of taking a domain knowledge model of the data into account.
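The classical numerical quality measures for association rules can be sketched in a few lines; support, confidence, and lift are the standard trio, and the transactions and terms below are hypothetical:

```python
# Hypothetical transactions: sets of terms co-occurring in mined text units.
transactions = [
    {"gene", "protein"},
    {"gene", "protein", "enzyme"},
    {"gene"},
    {"protein", "enzyme"},
]

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    """P(rhs | lhs): how often the rule's conclusion holds when its premise does."""
    return support(lhs | rhs) / support(lhs)

def lift(lhs, rhs):
    """Confidence relative to the base rate of rhs; > 1 suggests a real association."""
    return confidence(lhs, rhs) / support(rhs)

rule = ({"gene"}, {"protein"})
print(confidence(*rule), lift(*rule))
```

A purely numerical classifier would rank rules by such scores; the chapter's second approach instead asks whether a domain model already entails the association.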


