Enhancing the Quality of Image Tagging Using a Visio-Textual Knowledge Base

2020
Vol 22 (4)
pp. 897-911
Author(s):  
Chandramani Chaudhary ◽  
Poonam Goyal ◽  
Dhanashree Nellayi Prasad ◽  
Yi-Ping Phoebe Chen
Author(s):  
A. V. Ponomarev

Introduction: Large-scale human-computer systems that involve people of various skills and motivation in information processing are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example, in order to penalize incompetent or inaccurate contributors and to promote diligent ones. Purpose: To develop a method of assessing a contributor's expected quality in community tagging systems, using only the generally unreliable and incomplete information provided by contributors (with ground-truth tags unknown). Results: A mathematical model is proposed for community image tagging (including a model of a contributor), along with a method of assessing a contributor's expected quality. The method compares the tag sets provided by different contributors for the same images; it is a modification of the pairwise comparison method in which the preference relation is replaced by a special domination characteristic. Expected contributor quality is evaluated as a positive eigenvector of the pairwise domination characteristic matrix. Community tagging simulation has confirmed that the proposed method adequately estimates the expected quality of community tagging system contributors, provided that the contributors' behavior fits the proposed model. Practical relevance: The obtained results can be used in the development of systems based on coordinated community efforts, primarily community tagging systems.
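The eigenvector step can be sketched as follows. Since the abstract does not fully specify the domination characteristic, this sketch substitutes a symmetric Jaccard agreement between tag sets (an assumption) and recovers the quality vector by power iteration:

```python
import numpy as np

def jaccard(a, b):
    """Overlap between two tag sets; a crude stand-in for the paper's
    domination characteristic, which is not specified in the abstract."""
    return len(a & b) / len(a | b) if a | b else 0.0

def expected_quality(tag_sets):
    """Estimate contributor quality as the principal eigenvector of a
    pairwise agreement matrix, found by power iteration."""
    n = len(tag_sets)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                D[i, j] = jaccard(tag_sets[i], tag_sets[j])
    q = np.ones(n) / n
    for _ in range(100):
        q = D @ q
        q /= q.sum()  # keep the vector normalized to sum 1
    return q

# Three contributors tagging the same image; the third is noisy.
tags = [{"cat", "grass", "sky"}, {"cat", "grass"}, {"car", "night"}]
q = expected_quality(tags)
```

Contributors who agree with others accumulate quality mass; the outlier, sharing no tags with anyone, ends up with the lowest score.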


Author(s):  
Heiko Paulheim ◽  
Christian Bizer

Linked Data on the Web is created either from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types to enhance the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither algorithm uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms were used in building the DBpedia 3.9 release: with SDType, 3.4 million missing type statements were added, while with SDValidate, 13,000 erroneous RDF statements were removed from the knowledge base.
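A minimal sketch of the idea behind SDType, on toy data: learn, for each property, the distribution of the types of its subjects, then let an untyped resource's properties vote. The real SDType additionally weights properties by how discriminative they are; this unweighted variant only illustrates the principle, and all resource and property names here are invented.

```python
from collections import Counter, defaultdict

# Toy triples (subject, property, object) and known subject types.
triples = [
    ("Berlin", "capitalOf", "Germany"),
    ("Paris", "capitalOf", "France"),
    ("Einstein", "bornIn", "Ulm"),
    ("Curie", "bornIn", "Warsaw"),
]
types = {"Berlin": "City", "Paris": "City", "Einstein": "Person", "Curie": "Person"}

# Learn P(type | subject uses property p) from the typed resources.
dist = defaultdict(Counter)
for s, p, o in triples:
    if s in types:
        dist[p][types[s]] += 1

def sdtype(resource, resource_triples):
    """Vote over the type distributions of the resource's properties
    (a simplified, unweighted variant of SDType)."""
    votes = Counter()
    for s, p, o in resource_triples:
        if s == resource and p in dist:
            total = sum(dist[p].values())
            for t, c in dist[p].items():
                votes[t] += c / total
    return votes.most_common(1)[0][0] if votes else None

# An untyped resource that uses bornIn gets the type most common
# among known subjects of bornIn.
guess = sdtype("Turing", [("Turing", "bornIn", "London")])
```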


2013
Vol 860-863
pp. 2456-2462
Author(s):  
Qiang Qing Zhou ◽  
Jing Liu ◽  
Qing Nian Zou ◽  
Guo Lin Huang ◽  
Ping Han ◽  
...  

Using object-oriented technology and production-rule knowledge representation, this paper classifies grid operation experience by its nature and builds a power grid operation experience knowledge base with active learning capability. By applying a weight-based Bayesian classifier model, it classifies statistical data and identifies semantics, realizing the exchange between the knowledge base and user feedback. Drawing on the knowledge base's powerful learning ability, the operation experience knowledge base can optimize its knowledge structure while exchanging with user feedback, continually refining the grid's operation experience base. This method can provide technical support, improve the quality of the staff, and strengthen the security and stability of the grid.
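The paper does not detail its weight-based Bayesian classifier, so the following is only a hedged sketch of one plausible form: a naive Bayes text classifier whose per-word log-likelihoods are scaled by domain weights. All class names, words, and weight values here are invented for illustration.

```python
import math
from collections import Counter, defaultdict

class WeightedNaiveBayes:
    """Naive Bayes with per-feature weights; a sketch of the kind of
    weight-based Bayesian classifier the paper describes (details assumed)."""

    def __init__(self, weights=None):
        self.weights = weights or {}          # per-word importance weights
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter()
        self.vocab = set()

    def fit(self, docs, labels):
        for words, label in zip(docs, labels):
            self.class_counts[label] += 1
            for w in words:
                self.word_counts[label][w] += 1
                self.vocab.add(w)

    def predict(self, words):
        best, best_lp = None, float("-inf")
        n = sum(self.class_counts.values())
        for c in self.class_counts:
            lp = math.log(self.class_counts[c] / n)  # class prior
            denom = sum(self.word_counts[c].values()) + len(self.vocab)
            for w in words:
                like = (self.word_counts[c][w] + 1) / denom  # Laplace smoothing
                lp += self.weights.get(w, 1.0) * math.log(like)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Hypothetical grid operation records: weight "overload" as more decisive.
clf = WeightedNaiveBayes(weights={"overload": 2.0})
clf.fit([["line", "overload", "trip"], ["relay", "setting", "change"]],
        ["fault", "maintenance"])
label = clf.predict(["overload", "alarm"])
```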


2016
Vol 15 (02)
pp. 1650019
Author(s):  
Chien-Hsing Wu ◽  
Shu-Chen Kao

The quality of a knowledge base depends not mainly on the number of knowledge objects (KOs) that are available, but on the number of objects that are actually shared, combined, internalised, adopted, or even validated as value-creating. In support of this goal, knowledge management has paid increasing attention to the most valued KOs of a knowledge base for a variety of managerial decisions. This paper proposes the knowledge object use appraisal model (KOUAM), based on a knowledge flow profile, to help appraise KO use in a knowledge base. The KOUAM takes three phases, namely incubation, adoption, and validation, as the appraisal base. A worked example is presented, and evaluation shows the proposed KOUAM to be acceptable. Feedback and managerial implications are also addressed.
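The abstract gives no scoring formula, so the following is purely an illustrative assumption: appraising a KO by weighting its usage counts across the three KOUAM phases, with later phases weighted more heavily.

```python
from dataclasses import dataclass

@dataclass
class KnowledgeObject:
    """Usage counts per KOUAM phase (phase names are from the paper;
    the counts and the scoring below are illustrative assumptions)."""
    incubated: int   # times shared or combined
    adopted: int     # times reused in managerial decisions
    validated: int   # times confirmed as value-creating

def appraise(ko, w=(1.0, 2.0, 3.0)):
    # Later phases weigh more: a validated KO signals more value
    # than one that has merely been shared.
    return w[0] * ko.incubated + w[1] * ko.adopted + w[2] * ko.validated

score = appraise(KnowledgeObject(incubated=5, adopted=2, validated=1))
```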


2020
Vol 8 (4)
pp. 1376-1384

Rock disintegration is one of the most important technological operations in the production of crushed stone, which is widely used in the construction of roads and railways, buildings, structures, hydraulic engineering facilities, bridges, tunnels, the production of concrete, asphalt, dry building mixes, and other objects and products. In recent years, research has been actively conducted to improve the processes of rock disintegration by reducing energy intensity and improving the quality of disintegrating technologies and equipment. Moreover, developers and inventors are actively patenting the results of their intellectual activities in the field of promising types of new intellectual property for high-potential technologies and equipment for producing high-quality crushed stone. This fact indicates that the problems of improving the quality and efficiency of technologies and disintegrating equipment, in particular crushers for obtaining crushed stone, have not been solved yet. All this necessitates the synthesis and patenting of new market-competitive technologies and equipment for obtaining crushed stone by disintegrating rocks. In this regard, the authors of this work carried out research aimed at building a knowledge base in the field of patenting technologies and crushers that ensure the production of crushed stone for the construction of roads and railways, buildings, structures, hydraulic engineering facilities, bridges, tunnels, the production of concrete, asphalt, dry mixes, and other objects and products. The research also included the categorization of the main areas of patenting in this field. This involved using the method of system analysis performed based on the results of patent information search, collecting and systematizing the information. The current state and trends of patenting in the field of rock disintegration are identified, the main goals (effects) of patented intellectual property are defined. 
The article presents the most noteworthy patents collected while building the knowledge base.


Author(s):  
Antonello Cammarano ◽  
Francesca Michelino ◽  
Maria Prosperina Vitale ◽  
Michele La Rocca ◽  
Mauro Caputo

1999
Vol 5 (1)
pp. 53
Author(s):  
Kim Robinson

This study examines the experiences of women who have been in situations of family violence, and identifies the barriers they faced when seeking assistance from a variety of services. The research aims to contribute to the knowledge base of the health, social welfare, legal and policing services which respond to calls for assistance from women faced with family violence. The service system varies in how it conceptualizes family violence and in the aims of the services it provides. The research reports that the service system does not always meet the needs of victims/survivors, and reveals that service providers are often ill-equipped to deal with the complexity of violence. Due to the length of waiting lists and the quality of interventions, women experienced particular difficulty in accessing advice-based services. A number of recommendations are made for improvements in services.


Author(s):  
Muhammad Ahtisham Aslam ◽  
Naif Radi Aljohani

Producing Linked Open Data (LOD) is gaining momentum as a way to publish high-quality interlinked data. Publishing such data facilitates intelligent searching across the Web of data. In the context of scientific publications, data about millions of scientific documents published by thousands of publishers lies dormant, as it is not published as open data and consequently is not linked to other datasets. In this paper the authors present SPedia, a semantically enriched knowledge base of data about scientific documents. The SPedia knowledge base provides information on more than nine million scientific documents, comprising more than three hundred million RDF triples. These extracted datasets allow users to pose sophisticated queries using semantic Web techniques instead of relying on keyword-based searches. The paper also demonstrates the quality of the extracted data by running sample queries through the SPedia SPARQL endpoint and analyzing the results. Finally, the authors describe how SPedia can serve as a central hub for the LOD cloud of scientific publications.


Author(s):  
O. S. Isaeva ◽  
L. F. Nozhenkova ◽  
A. Yu. Koldyrev

The article demonstrates the use of simulation model precedents for analyzing the test results of spacecraft onboard equipment. Precedents contain various scenarios of control command transmission and the corresponding variants of changes in the telemetry parameters of the spacecraft's onboard systems. Scenarios are described by knowledge base rules and modeled by logical inference procedures. The developed data structures and software make it possible to run simulation experiments, save the results in the precedent base, and compare them with onboard system test results both during test procedures and retrospectively. Applying the simulation results extends the capabilities of the test software and improves the quality of design decisions. This article presents a method for intelligent analysis of the results of testing spacecraft onboard equipment on the basis of precedents of a simulation model. The simulation model is founded on a knowledge base describing various aspects of the onboard equipment's functioning, settings of the reception-transmission tract, scenarios of control command transmission, and the corresponding changes in the telemetry parameters of onboard devices. We have designed data structures and software that allow conducting simulation experiments, saving the results in the precedent base, and comparing the results of simulation modelling with the results of onboard system testing. This analysis is carried out both during tests and after they are finished. In the first case, an onboard system command is sent both to the object of testing and to the simulation model. The model contains logical inference methods that form a conflict set of rules, then choose and apply the actions simulating the functioning of the onboard equipment upon reception and execution of the given commands.
The results of modelling are represented as telemetry, which is compared with the telemetry received from the object of testing. The designer is given a list of the parameters changed in the process of logical inference, together with the telemetry parameters of the object of testing. In the second case, the telemetry of the onboard system obtained from the stored test results is compared with the precedents of the simulation model from the database. Precedents contain examples of the execution of a wide variety of commands, together with the sets of knowledge base rules that were applied to obtain them. If the telemetry parameters coincide, the software allows a step-by-step review of the tests, comparing them with the actions of the simulation model. Comparing the simulation model precedents with the test results reveals features of the onboard equipment's functioning that may remain unnoticed when other methods of analysis are used. Intelligent logical inference methods for test analysis extend the capabilities of the test software and provide better quality of design decisions.
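The precedent-matching step described above can be sketched as follows; the parameter names and the flat dictionary structure are assumptions, since the actual telemetry model in the paper is richer.

```python
def match_precedent(test_telemetry, precedents, tol=1e-3):
    """Find stored simulation precedents whose telemetry parameters
    coincide with a test result, within a numeric tolerance."""
    matches = []
    for name, telemetry in precedents.items():
        same_keys = telemetry.keys() == test_telemetry.keys()
        if same_keys and all(abs(telemetry[k] - test_telemetry[k]) <= tol
                             for k in telemetry):
            matches.append(name)
    return matches

# Hypothetical precedent base: telemetry recorded for two commands.
precedents = {
    "cmd_power_on": {"bus_voltage": 27.0, "tx_current": 1.2},
    "cmd_standby":  {"bus_voltage": 27.0, "tx_current": 0.1},
}
# Telemetry from a real test run matches the power-on precedent.
found = match_precedent({"bus_voltage": 27.0, "tx_current": 1.2}, precedents)
```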

