A Model Approach to Infer the Quality in Agricultural Sprayers Supported by Knowledge Bases and Experimental Measurements

2017
Vol 11 (03)
pp. 279-292
Author(s):
Elmer A. G. Peñaloza
Paulo E. Cruvinel
Vilma A. Oliveira
Augusto G. F. Costa

This paper presents a method to infer the quality of sprayers based on the collection of droplet spectra and their physical descriptors, which are used to build a knowledge base that supports decision-making in agriculture. The knowledge base combines experimental data, obtained in a controlled environment under specific operating conditions, with the semantics of the spraying process used to infer application quality. The electro-hydraulic operating conditions of the sprayer system, which include speed and flow measurements, are used to define the experimental tests, calibrate the spray booms and select the nozzle types. Using the Grubbs test and the quantile-quantile plot, an exploratory analysis of the collected data was carried out to determine data consistency, deviation of atypical values, independence between the data of each test, repeatability and whether the data can be represented by a normal distribution. By integrating these measurements into a knowledge base, it was possible to improve decision-making with respect to the quality of the spraying process, defined in terms of a distribution function. Results showed that the use of advanced models and semantic interpretation improved the decision-making processes related to the quality of agricultural sprayers.
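Below is a minimal Python sketch, not the authors' code, of the two exploratory checks named in the abstract: a two-sided Grubbs test for atypical values and a quantile-quantile plot for normality. The droplet-size readings are invented illustration values.

```python
# Minimal sketch (not the authors' code): exploratory checks on droplet-spectrum
# measurements using a Grubbs test for outliers and a quantile-quantile plot
# for normality. Column names and data are assumptions.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

def grubbs_test(x, alpha=0.05):
    """Two-sided Grubbs test; returns (G statistic, critical value, outlier flag)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    G = np.max(np.abs(x - x.mean())) / x.std(ddof=1)
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    G_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
    return G, G_crit, G > G_crit

# Synthetic volume-median-diameter readings in micrometres (illustration only).
vmd = np.array([210.5, 212.3, 208.9, 211.7, 209.4, 231.0, 210.8, 212.0])
G, G_crit, has_outlier = grubbs_test(vmd)
print(f"G = {G:.3f}, critical = {G_crit:.3f}, outlier detected: {has_outlier}")

# Q-Q plot against a normal distribution to judge whether the readings can be
# treated as normally distributed before feeding them into the knowledge base.
stats.probplot(vmd, dist="norm", plot=plt)
plt.title("Q-Q plot of VMD measurements")
plt.show()
```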

2021
Vol 12 (4)
pp. 189-199
Author(s):
O. N. Dolinina
V. A. Kushnikov

An increase in the degree of intellectualization of tasks requires a methodology for improving the quality of intelligent decision-making systems. Automating decision-making in poorly formalized areas by means of expert knowledge leads to an increase in the number of errors in the software and, as a consequence, in the number of possible sources of failures. The article provides a detailed overview of existing methods and technologies for quality assurance of intelligent decision systems. The first part of the article describes a methodology for ensuring the quality of intelligent systems (IS) based on the GOST/ISO standards, in which a multilevel model is proposed to describe the quality of IS software. It is shown that an action plan can be formed to ensure the required level of quality, and the use of a system dynamics model for implementing such a quality-assurance action plan is described. A comparative analysis of complex criteria of quality and reliability is given. The second part addresses the quality of the knowledge base (KB) as a special element of IS software and compares methods for static and dynamic analysis of knowledge bases. An overview of research results on the classification of errors in knowledge bases and on their debugging is given, with special attention to the "forgetting about exception" type of error. The concept of a statically correct knowledge base at the level of the knowledge structure is described, and it is shown that statically correct knowledge bases can nevertheless produce errors, because the rules themselves may be wrong owing to inconsistencies in the subject domain. Neural network knowledge bases are treated as a separate class, and debugging methods for neural networks are described.
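As an illustration of the kind of static check discussed here, the following Python sketch (not from the article) flags rule pairs in which a more specific rule contradicts a more general one, the pattern behind the "forgetting about exception" error class; the rule encoding is an assumption.

```python
# Illustrative sketch only: a static check over a rule-based knowledge base that
# flags rule pairs where one rule's condition set is subsumed by another's but
# the conclusions conflict, i.e. the general rule "forgot about the exception".
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    conditions: frozenset   # e.g. frozenset({"bird"})
    conclusion: str

def find_exception_candidates(rules):
    """Return (general, specific) rule pairs with conflicting conclusions."""
    suspicious = []
    for general in rules:
        for specific in rules:
            if general is specific:
                continue
            if general.conditions < specific.conditions and \
               general.conclusion != specific.conclusion:
                # The more specific rule looks like an exception that the
                # general rule does not mention explicitly.
                suspicious.append((general.name, specific.name))
    return suspicious

rules = [
    Rule("R1", frozenset({"bird"}), "can_fly"),
    Rule("R2", frozenset({"bird", "penguin"}), "cannot_fly"),
]
print(find_exception_candidates(rules))  # [('R1', 'R2')]
```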


Author(s):  
Heiko Paulheim
Christian Bizer

Linked Data on the Web is either created from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types for enhancing the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither of the algorithms uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms have been used for building the DBpedia 3.9 release: with SDType, 3.4 million missing type statements have been added, while SDValidate removed 13,000 erroneous RDF statements from the knowledge base.
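A minimal sketch of the weighted-voting idea behind SDType follows; the property statistics and weights are made-up illustration values, not the DBpedia implementation.

```python
# Minimal sketch of the idea behind SDType: each property a resource participates
# in "votes" for likely types, weighted by how discriminative the property is.
from collections import defaultdict

# P(type | resource participates in property), estimated from the KB (invented here).
property_type_dist = {
    "dbo:author":     {"dbo:Person": 0.95, "dbo:Organisation": 0.05},
    "dbo:birthPlace": {"dbo:Person": 0.99, "dbo:Place": 0.01},
}
# Weight of each property, e.g. derived from how discriminative it is (invented).
property_weight = {"dbo:author": 0.8, "dbo:birthPlace": 0.9}

def sdtype_scores(properties_of_resource):
    """Aggregate type votes from all properties the resource participates in."""
    scores = defaultdict(float)
    total_weight = sum(property_weight[p] for p in properties_of_resource)
    for p in properties_of_resource:
        for t, prob in property_type_dist[p].items():
            scores[t] += property_weight[p] * prob / total_weight
    return dict(scores)

# A resource linked via dbo:author and dbo:birthPlace scores highly for
# dbo:Person; types above a confidence threshold would be added to the KB.
print(sdtype_scores(["dbo:author", "dbo:birthPlace"]))
```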


Author(s):  
Emmanuel Horowitz

Nuclear scientists and engineers should consider adopting a more operational approach for the purpose of selecting their future materials. For each type of nuclear power generating reactor, for each coolant (water, helium or liquid metal), the next generation of specialists and decision-makers will need to choose and optimise the iron or nickel alloys, steels, ODS (oxide dispersion strengthened) steels and ceramics that are going to be used. It may well be considered either that each reactor type has its own specific materials or, in a complementary manner, that the efforts for improvements should be shared. At high temperatures, as found on fuel-cladding liners, heat exchangers or even tubes or tube liners, different types of steels and alloys may be envisaged. Austenitic steels are considered to provide better creep resistance at high temperature, but they must be stabilized by nickel, which makes them more expensive. Ferritic steels could be better as far as swelling, mechanical strength and thermal behaviour are concerned. To withstand corrosion, chromium or aluminium ODS steels could turn out to be good solutions, if they can comply with stringent criteria. Concerning heat exchangers, choices must be made between iron and nickel alloys, according to the proposed operating conditions. In the case of sodium-cooled rapid neutron reactors (RNRs), ferritic-martensitic alloys with 9%–12% chromium or chromium ODS steels could prove suitable, especially judging by their specific mechanical behaviour up to at least 700°C. Nevertheless, the behaviour of these steels with respect to ageing, anisotropy, radiation-induced segregation, radiation-induced precipitation, reduction of activation products and welding needs to be better understood and qualified. Sodium heat exchanger materials should be carefully chosen since they have to withstand corrosion arising from the primary flow and also from the secondary or tertiary flow (either sodium or molten salts, gas or water); therefore, experimental loops are necessary to gain improved understanding and assessment of the designs envisioned. One way to improve alloys is through thermal or mechanical treatments or by surface treatments. A better way could, however, be to improve the nanostructure and mesostructure of the chosen materials at the drawing-board stage, for instance by nano-size cluster dispersion and grain-size control; experimental tests, microscope and spectroscope observations, multi-scale modelling and thermodynamics computing could also help calibrate and implement these improvements. Large experimental databases and codes will be the keystone to defining more operational knowledge bases that will then allow us to determine terms of reference for the new materials. Failing this, time will run out within the next twenty years to design and develop nuclear prototypes consistent with the criteria laid down for “Generation IV” reactors.


2017
Vol 2017
pp. 1-17
Author(s):  
Chunhua Li
Pengpeng Zhao
Victor S. Sheng
Xuefeng Xian
Jian Wu
...  

Machine-constructed knowledge bases often contain noisy and inaccurate facts. There is significant work on developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. Since human labelling is costly, an important research challenge is how to use limited human resources to maximize the quality improvement of a knowledge base. To address this problem, we first introduce the concept of semantic constraints, which can be used to detect potential errors and to perform inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts for crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods at a reasonable crowdsourcing cost.
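The following sketch is only an illustration of the general idea, not the paper's algorithms: a functional semantic constraint detects conflicting candidate facts, and conflict groups are ranked so that each crowdsourced question resolves as many candidates as possible. All facts, confidences and the ranking weights are assumptions.

```python
# Illustrative sketch: use a simple functional constraint ("a person has exactly
# one birth place") to find conflicting candidate facts, then rank conflict
# groups so crowdsourcing effort goes to the most informative questions.
from collections import defaultdict

# Candidate facts as (subject, predicate, object, extractor_confidence).
candidates = [
    ("Alan_Turing", "bornIn", "London", 0.91),
    ("Alan_Turing", "bornIn", "Manchester", 0.47),
    ("Ada_Lovelace", "bornIn", "London", 0.88),
]
functional_predicates = {"bornIn"}

def conflict_groups(facts):
    groups = defaultdict(list)
    for s, p, o, conf in facts:
        if p in functional_predicates:
            groups[(s, p)].append((o, conf))
    # Keep only groups where the functional constraint is actually violated.
    return {k: v for k, v in groups.items() if len(v) > 1}

def rank_questions(groups):
    # Ask first about groups with many low-confidence, mutually exclusive
    # candidates: one worker answer prunes all but one of them.
    return sorted(groups.items(),
                  key=lambda kv: (len(kv[1]), -min(c for _, c in kv[1])),
                  reverse=True)

for (s, p), objs in rank_questions(conflict_groups(candidates)):
    print(f"Ask crowd: what is the correct value of {p} for {s}? options={objs}")
```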


2013
Vol 13 (3)
pp. 124-139
Author(s):
Margret Anouncia S.
Clara Madonna L. J.
Jeevitha P.
Nandhini R. T.

Traditionally, the diagnosis of a disease is carried out by medical experts with experience, the clinical data of the patients and adequate knowledge for identifying the disease. Such diagnosis tends to be approximate and time-consuming, since it depends entirely on the availability and experience of the medical experts dealing with imprecise and uncertain clinical data. Hence, to improve decision-making with uncertain data and to reduce the time needed to diagnose a disease, several simulated diagnosis systems have been developed. Most of these diagnosis systems are designed to hold the clinical data and symptoms associated with a specific disease as a knowledge base. The quality of the knowledge base has an impact not only on the consequences but also on the diagnostic precision. Most of the existing systems have been developed as expert systems that contain all the diagnosis facts as rules. Notably, applying the concept of a fuzzy set has been shown to provide better knowledge representation and to improve the decision-making process. Therefore, an attempt is made in this paper to design and develop such a diagnosis system using rough sets. The system developed is evaluated using a simple set of symptoms combined with clinical data to determine diabetes and its severity.
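A minimal sketch of the rough-set approximations such a system relies on, with assumed symptom attributes and records rather than the paper's clinical data:

```python
# Minimal rough-set sketch: patients indiscernible on the chosen symptoms form
# equivalence classes, and the "diabetic" concept is approximated from below
# (certainly diabetic) and from above (possibly diabetic).
from collections import defaultdict

# (patient_id, symptoms_tuple, diagnosed_diabetic) -- invented records.
records = [
    ("p1", ("high_glucose", "thirst"), True),
    ("p2", ("high_glucose", "thirst"), True),
    ("p3", ("normal_glucose", "thirst"), False),
    ("p4", ("high_glucose", "no_thirst"), True),
    ("p5", ("high_glucose", "no_thirst"), False),  # indiscernible from p4
]

def approximations(records):
    classes = defaultdict(set)
    for pid, symptoms, _ in records:
        classes[symptoms].add(pid)
    positive = {pid for pid, _, diabetic in records if diabetic}
    lower, upper = set(), set()
    for members in classes.values():
        if members <= positive:
            lower |= members          # certainly in the concept
        if members & positive:
            upper |= members          # possibly in the concept
    return lower, upper

lower, upper = approximations(records)
print("certainly diabetic:", sorted(lower))   # ['p1', 'p2']
print("possibly diabetic:", sorted(upper))    # ['p1', 'p2', 'p4', 'p5']
```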


2021
Vol 19 (2)
pp. 65-75
Author(s):
A. A. Mezentseva
E. P. Bruches
T. V. Batura

Due to the growth in the number of scientific publications, tasks related to the processing of scientific articles are becoming more relevant. Such texts have a special structure and lexical and semantic content that should be taken into account during processing. Using information from knowledge bases can significantly improve the quality of text-processing systems. This paper is dedicated to the entity linking task for scientific articles in Russian, where scientific terms are treated as entities. In the course of this work, we annotated a corpus of scientific texts in which each term was linked to an entity from a knowledge base. We also implemented an entity linking algorithm and evaluated it on the corpus. The algorithm consists of two stages: generating candidates for an input term and ranking this set of candidates to choose the best match. We used string matching between an input term and an entity in the knowledge base to generate the candidate set. To rank the candidates and choose the most relevant entity for a term, information about the number of links to other entities within the knowledge base and to external sites is used. We analyzed the results obtained and proposed possible ways to improve the quality of the algorithm, for example by using information about the context and the knowledge base structure. The annotated corpus is publicly available and can be useful to other researchers.
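A toy Python sketch of the two-stage scheme described above, with an assumed knowledge-base layout and an arbitrary popularity weight:

```python
# Sketch of the two-stage linking scheme: candidate generation by string
# matching and ranking by the entity's link counts. The KB records and the
# scoring weight are assumptions.
from difflib import SequenceMatcher

# Each KB entity: label, links to other KB entities, links to external sites.
kb = [
    {"id": "Q1", "label": "machine learning",    "internal_links": 120, "external_links": 35},
    {"id": "Q2", "label": "machine translation", "internal_links": 80,  "external_links": 20},
]

def generate_candidates(term, threshold=0.6):
    """Stage 1: keep entities whose labels are similar enough to the input term."""
    return [e for e in kb
            if SequenceMatcher(None, term.lower(), e["label"].lower()).ratio() >= threshold]

def rank(term, candidates):
    """Stage 2: combine string similarity with link-based popularity."""
    def score(e):
        sim = SequenceMatcher(None, term.lower(), e["label"].lower()).ratio()
        popularity = e["internal_links"] + e["external_links"]
        return sim + 0.001 * popularity   # weight is an arbitrary illustration
    return sorted(candidates, key=score, reverse=True)

term = "machine learning"
best = rank(term, generate_candidates(term))
print(best[0]["id"])  # Q1
```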


2021
Vol 12 (5)
Author(s):
João Pedro V. Pinheiro
Marco A. Casanova
Elisa S. Menendez

The answer to a query submitted to a database or a knowledge base is often long and may contain redundant data. The user is frequently forced to browse through a long answer, or to refine and repeat the query until the answer reaches a manageable size. Without proper treatment, consuming the answer may indeed become a tedious task. This article therefore proposes a process that modifies the presentation of a query answer to improve the quality of the user's experience in the context of an RDF knowledge base. The process reorganizes the original query answer by applying heuristics to summarize the results and to select template questions that create a user dialog guiding the presentation of the results. The article also includes experiments based on RDF versions of MusicBrainz, enriched with DBpedia data, and IMDb, each with over 200 million RDF triples. The experiments use sample queries from well-known benchmarks.
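As an illustration of one possible summarization heuristic in the spirit of this process (the answer rows, grouping key and question template are assumptions, not the article's heuristics):

```python
# Illustrative sketch: group a large answer set by a shared property and turn
# each group into a template question for the user dialog.
from collections import defaultdict
from itertools import islice

# Flattened query answer rows: (album, artist) pairs (invented data).
answer = [
    ("Abbey Road", "The Beatles"),
    ("Let It Be", "The Beatles"),
    ("The Wall", "Pink Floyd"),
    ("Animals", "Pink Floyd"),
    ("Thriller", "Michael Jackson"),
]

def summarise_by(answer, key_index=1, max_groups=3):
    groups = defaultdict(list)
    for row in answer:
        groups[row[key_index]].append(row)
    # Largest groups first, truncated to keep the dialog manageable.
    ordered = sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True)
    return list(islice(ordered, max_groups))

for artist, rows in summarise_by(answer):
    print(f"Show the {len(rows)} albums by {artist}? (yes/no)")
```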


2020
Vol 1 (2)
pp. 121-125
Author(s):
Josua Fernando Simanjuntak
Agnes Prawita Sari
Aulia Nada Syahputri

In human life, many things require decision making, and agriculture is no exception. One example is rice farmers deciding the selling price of their grain according to its quality. Using fuzzy logic, the grain price can be determined through the following stages: fuzzification, knowledge base formation, fuzzy inference, and defuzzification. One of the fuzzy logic methods that can be used is the Tsukamoto method, whose output takes the form of crisp values. To determine the price of grain, the data are taken from the Central Statistics Agency website, so that prices and grain quality levels can be determined properly. With this research, farmers can set the price of their grain precisely according to its quality, so that the problem of determining grain prices can be properly addressed.
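A minimal Tsukamoto-style sketch follows, with assumed membership functions, rules and price range rather than the paper's knowledge base:

```python
# Minimal Tsukamoto sketch: grain quality in [0, 100] maps to a price via two
# rules whose firing strengths weight crisp consequent values.
def low_quality(q):      # decreasing membership: 1 at q<=40, 0 at q>=80
    return max(0.0, min(1.0, (80 - q) / 40))

def high_quality(q):     # increasing membership: 0 at q<=40, 1 at q>=80
    return max(0.0, min(1.0, (q - 40) / 40))

# Monotone price membership over [PRICE_MIN, PRICE_MAX]; the Tsukamoto step
# inverts it: given a firing strength alpha, return the crisp price z.
PRICE_MIN, PRICE_MAX = 4000.0, 6000.0   # illustrative price range per kg

def z_cheap(alpha):      # "price is low": z falls as the rule fires harder
    return PRICE_MAX - alpha * (PRICE_MAX - PRICE_MIN)

def z_expensive(alpha):  # "price is high": z rises as the rule fires harder
    return PRICE_MIN + alpha * (PRICE_MAX - PRICE_MIN)

def tsukamoto_price(quality):
    # Rule 1: IF quality is low  THEN price is low
    # Rule 2: IF quality is high THEN price is high
    a1, a2 = low_quality(quality), high_quality(quality)
    z1, z2 = z_cheap(a1), z_expensive(a2)
    return (a1 * z1 + a2 * z2) / (a1 + a2)   # weighted-average defuzzification

print(round(tsukamoto_price(70)))  # higher quality pushes the price upward
```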


2008
Vol 42 (43)
pp. 91-97
Author(s):  
Laima Paliulionienė

A method of implementing isomorphism between legal knowledge bases and legal documents
The use of artificial intelligence and other computer-based methods in legal drafting improves the quality of legal documents and shortens drafting time. One of the problems encountered when formalizing legal acts is the isomorphism problem: establishing a well-defined correspondence between fragments of a legal knowledge base and the structural elements of the source legal documents. This paper proposes a way of representing legal texts and knowledge bases that ensures isomorphism between them. XML documents are used to store the structured text of a legal document, and F-logic is used as the formalism for the knowledge base. In addition to the document text and the knowledge base, one further aspect of isomorphism is considered: tests that describe real or hypothetical situations and are intended to check whether an article of a legal act adequately defines its intended application. The proposed method simplifies the management of changes in legal documents and the corresponding knowledge bases, and makes it possible to generate better explanations of the inferences performed in the knowledge base.
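As an illustration of the isomorphism idea (not the paper's F-logic implementation), the sketch below keeps an explicit mapping between XML structural units and knowledge-base rules and checks each rule against its test situations; all element names, rules and tests are invented.

```python
# Illustrative sketch: link each XML article to a knowledge-base rule and attach
# test situations so that changes can be checked in both representations.
import xml.etree.ElementTree as ET

legal_text = ET.fromstring("""
<act id="act-1">
  <article id="art-3">A person younger than 18 years may not vote.</article>
</act>
""")

# Knowledge-base side: one rule per article, linked by the article id.
rules = {
    "art-3": lambda facts: not (facts["age"] < 18) or not facts["may_vote"],
}

# Tests describe real or hypothetical situations and the expected outcome.
tests = {
    "art-3": [({"age": 16, "may_vote": False}, True),
              ({"age": 16, "may_vote": True},  False)],
}

for article in legal_text.iter("article"):
    art_id = article.get("id")
    rule = rules[art_id]                       # isomorphic counterpart of the article
    for situation, expected in tests[art_id]:
        assert rule(situation) == expected, f"{art_id} fails on {situation}"
    print(f"{art_id}: rule consistent with its test situations")
```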

