Application oriented qualitative reasoning

1995 ◽  
Vol 10 (2) ◽  
pp. 181-204 ◽  
Author(s):  
Louise Travé-Massuyès ◽  
Robert Milne

Abstract: The techniques of qualitative reasoning are now becoming sufficiently mature to be applied to real world problems. In order to better understand which techniques are being used successfully for real world applications, and which application areas can be suitably addressed using qualitative reasoning techniques, it is helpful to have a summary of what application oriented work has been done to date. This helps to provide a picture of the application areas in which the techniques are being applied, and who is working in each application domain. In this paper, we summarize over 40 relevant projects.

2021 ◽  
Author(s):  
Andreas Christ Sølvsten Jørgensen ◽  
Atiyo Ghosh ◽  
Marc Sturrock ◽  
Vahid Shahrezaei

Abstract: The modelling of many real-world problems relies on computationally heavy simulations. Since statistical inference rests on repeated simulations to sample the parameter space, the high computational expense of these simulations can become a stumbling block. In this paper, we compare two ways to mitigate this issue based on machine learning methods. One approach is to construct lightweight surrogate models to substitute for the simulations used in inference. Alternatively, one might altogether circumvent the need for Bayesian sampling schemes and directly estimate the posterior distribution. We focus on stochastic simulations that track autonomous agents and present two case studies of real-world applications: tumour growth and the spread of infectious diseases. We demonstrate that good accuracy in inference can be achieved with a relatively small number of simulations, making our machine learning approaches orders of magnitude faster than classical simulation-based methods that rely on sampling the parameter space. However, we find that while some methods generally produce more robust results than others, no algorithm offers a one-size-fits-all solution when attempting to infer model parameters from observations. Instead, one must choose the inference technique with the specific real-world application in mind. The stochastic nature of the considered real-world phenomena poses an additional challenge that can become insurmountable for some approaches. Overall, we find machine learning approaches that create direct inference machines to be promising for real-world applications. We present our findings as general guidelines for modelling practitioners.

Author summary: Computer simulations play a vital role in modern science, as they are commonly used to compare theory with observations. One can thus infer the properties of an observed system by comparing the data to the predicted behaviour in different scenarios. Each of these scenarios corresponds to a simulation with slightly different settings. However, since real-world problems are highly complex, the simulations often require extensive computational resources, making direct comparisons with data challenging, if not impossible. It is, therefore, necessary to resort to inference methods that mitigate this issue, but it is not clear-cut which path to choose for any specific research problem. In this paper, we provide general guidelines for how to make this choice. We do so by studying examples from oncology and epidemiology and by taking advantage of developments in machine learning. More specifically, we focus on simulations that track the behaviour of autonomous agents, such as single cells or individuals. We show that the best way forward is problem-dependent and highlight the methods that yield the most robust results across the different case studies. We demonstrate that these methods are highly promising and produce reliable results in a small fraction of the time required by classic approaches that rely on comparisons between data and individual simulations. Rather than relying on a single inference technique, we recommend employing several methods and selecting the most reliable based on predetermined criteria.
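The surrogate idea in this abstract can be sketched in a few lines: run a small pilot batch of expensive simulations, fit a cheap stand-in model, and then run rejection sampling against the stand-in rather than the simulator. Everything here (the toy simulator, the linear surrogate, the 0.05 acceptance tolerance) is an illustrative assumption, not the authors' actual pipeline:

```python
import random
import statistics

random.seed(0)

def simulator(theta, n=200):
    # Stand-in for an expensive stochastic simulation: n noisy agent
    # outcomes whose mean is the summary statistic handed to inference.
    return statistics.mean(theta * (1 + random.gauss(0, 0.1)) for _ in range(n))

# 1) Run a small pilot batch of simulations across the prior range...
train_thetas = [0.5 + 0.25 * i for i in range(9)]   # grid over 0.5 .. 2.5
train_stats = [simulator(t) for t in train_thetas]

# 2) ...and fit a cheap surrogate (here: a least-squares line) to them.
def fit_line(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, intercept = fit_line(train_thetas, train_stats)
surrogate = lambda theta: slope * theta + intercept

# 3) Rejection sampling against the surrogate instead of the simulator:
#    only 9 expensive simulations were needed, not 3000.
observed = simulator(1.2)            # pretend this is the real data
posterior = [t for t in (random.uniform(0.5, 2.5) for _ in range(3000))
             if abs(surrogate(t) - observed) < 0.05]
estimate = statistics.mean(posterior)
```

The second strategy the abstract mentions, learning the posterior directly, would replace step 3 with a model trained to map summary statistics straight to parameter estimates.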


Author(s):  
Marisa Mohr ◽  
Florian Wilhelm ◽  
Ralf Möller

The estimation of the qualitative behaviour of fractional Brownian motion is an important topic for modelling real-world applications. Permutation entropy is a well-known approach to quantifying the complexity of a univariate time series in a scalar-valued representation. As an extension often used for outlier detection, weighted permutation entropy takes the amplitudes within a time series into account. However, as many real-world problems deal with multivariate time series, these measures need to be extended accordingly. First, we introduce multivariate weighted permutation entropy, which is consistent with standard multivariate extensions of permutation entropy. Second, we investigate the behaviour of weighted permutation entropy on both univariate and multivariate fractional Brownian motion and present revealing results.
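As a reminder of the quantities involved, a minimal univariate sketch of permutation entropy and its weighted variant follows; the embedding dimension of 3, delay of 1, and variance-based weights are standard but illustrative choices, not necessarily this paper's exact definitions:

```python
import math
from collections import Counter

def ordinal_pattern(window):
    # Bandt-Pompe ordinal pattern: rank positions of the window by value.
    return tuple(sorted(range(len(window)), key=lambda i: window[i]))

def permutation_entropy(x, m=3):
    # Normalised Shannon entropy of the pattern distribution over all
    # length-m windows (delay 1); 0 for monotone series, 1 for uniform.
    counts = Counter(ordinal_pattern(x[i:i + m]) for i in range(len(x) - m + 1))
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(m))

def weighted_permutation_entropy(x, m=3):
    # Each window's pattern is weighted by the window variance, so
    # large-amplitude structures dominate the pattern distribution.
    weights = Counter()
    for i in range(len(x) - m + 1):
        w = x[i:i + m]
        mean = sum(w) / m
        weights[ordinal_pattern(w)] += sum((v - mean) ** 2 for v in w) / m
    total = sum(weights.values())
    if total == 0:
        return 0.0
    h = -sum((c / total) * math.log(c / total) for c in weights.values())
    return h / math.log(math.factorial(m))
```

The multivariate extension the paper introduces would aggregate such pattern statistics across channels; the sketch above covers only the univariate case.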


2020 ◽  
Vol 20 (5) ◽  
pp. 687-702 ◽  
Author(s):  
GEORGE BARYANNIS ◽  
ILIAS TACHMAZIDIS ◽  
SOTIRIS BATSAKIS ◽  
GRIGORIS ANTONIOU ◽  
MARIO ALVIANO ◽  
...  

Abstract: Qualitative reasoning involves expressing and deriving knowledge in qualitative terms, such as natural language expressions, rather than strict mathematical quantities. Well over 40 qualitative calculi have been proposed so far, mostly in the spatial and temporal domains, with several practical applications such as naval traffic monitoring, warehouse process optimisation and robot manipulation. Although a number of specialised qualitative reasoning tools have been developed, an important barrier to their wider adoption is that only qualitative reasoning is supported natively, whereas real-world problems most often require a combination of qualitative and other forms of reasoning. In this work, we propose to overcome this barrier by using Answer Set Programming (ASP) as a unifying formalism to tackle problems that require qualitative reasoning in addition to non-qualitative reasoning. A family of ASP encodings is proposed which can handle any qualitative calculus with binary relations. These encodings are experimentally evaluated using a real-world dataset based on a case study of determining optimal coverage of telecommunication antennas, and compared with the performance of two well-known dedicated reasoners. Experimental results show that the proposed encodings outperform one of the two reasoners but fall behind the other, an acceptable trade-off given the added benefits of handling any type of reasoning as well as the interpretability of logic programs.
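To make "qualitative calculus with binary relations" concrete, here is a sketch of the generic machinery such calculi share, a composition table plus path-consistency propagation, shown for the three-relation point algebra rather than the paper's ASP encodings (which express the same table as logic program rules):

```python
# Composition table for the point algebra: given x R1 y and y R2 z,
# which base relations may hold between x and z?
COMP = {
    ('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): {'<', '=', '>'},
    ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
    ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'}, ('>', '>'): {'>'},
}
ALL = {'<', '=', '>'}

def compose(r1, r2):
    # Composition of two disjunctive relations: union over base pairs.
    out = set()
    for a in r1:
        for b in r2:
            out |= COMP[(a, b)]
    return out

def path_consistency(n, constraints):
    # constraints: dict (i, j) -> allowed base relations (both directions
    # given explicitly); unconstrained pairs default to ALL.
    net = {(i, j): set(constraints.get((i, j), ALL))
           for i in range(n) for j in range(n) if i != j}
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                for k in range(n):
                    if k in (i, j):
                        continue
                    refined = net[(i, j)] & compose(net[(i, k)], net[(k, j)])
                    if refined != net[(i, j)]:
                        net[(i, j)] = refined
                        changed = True
                        if not refined:
                            return None  # inconsistent network
    return net
```

With `x < y` and `y < z`, propagation derives `x < z`; adding `z < x` makes the network inconsistent. The same composition-table scheme scales to Allen's thirteen interval relations and other binary calculi.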


Author(s):  
Yang Liu ◽  
Luyang Jiao ◽  
Guohua Bai ◽  
Boqin Feng

From the perspective of cognitive informatics, cognition can be viewed as the acquisition of knowledge. In real-world applications, information systems usually contain some degree of noisy data. A new model is proposed to deal with the hybrid-feature selection problem by combining the neighbourhood approximation and variable precision rough set models. A rule induction algorithm then learns from the selected features in order to reduce the complexity of the rule sets. Through the proposed integration, the knowledge acquisition process becomes insensitive to the dimensionality of the data, with a pre-defined tolerance degree of noise and uncertainty for misclassification. When the authors apply the method to a Chinese diabetic diagnosis problem, the hybrid-attribute reduction method selects only five attributes from a total of thirty-four measurements. The rule learner produces eight rules with an average of two attributes in the antecedent (IF part) of each IF-THEN rule, which is a manageable set of rules. The demonstrated experiment shows that the present approach is effective in handling real-world problems.
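A toy sketch of the variable-precision idea may help: equivalence classes whose majority decision rate reaches a precision threshold β count as consistent despite noise, and attributes are added greedily until the full attribute set's dependency is matched. The table, the threshold of 0.7, and the greedy strategy below are illustrative assumptions, not the authors' algorithm:

```python
from collections import defaultdict

def gamma(rows, decisions, attrs, beta):
    # beta-dependency: fraction of objects lying in equivalence classes
    # whose majority decision rate reaches the precision threshold beta.
    groups = defaultdict(list)
    for row, d in zip(rows, decisions):
        groups[tuple(row[a] for a in attrs)].append(d)
    pos = sum(len(ds) for ds in groups.values()
              if max(ds.count(v) for v in set(ds)) / len(ds) >= beta)
    return pos / len(rows)

def greedy_reduct(rows, decisions, beta):
    # Forward selection: add the attribute that raises the beta-dependency
    # most, until the full attribute set's dependency is matched.
    remaining = sorted(rows[0])
    target = gamma(rows, decisions, remaining, beta)
    chosen = []
    while remaining and gamma(rows, decisions, chosen, beta) < target:
        best = max(remaining,
                   key=lambda a: gamma(rows, decisions, chosen + [a], beta))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy decision table: one object (row 4) is noisy; with beta = 0.7 the
# noise is tolerated and attribute 'a' alone explains the decisions.
rows = [{'a': 0, 'b': 0, 'c': 0}] * 4 + [{'a': 1, 'b': 1, 'c': 1},
                                         {'a': 1, 'b': 0, 'c': 0}]
decisions = [0, 0, 0, 1, 1, 1]
```

With a strict threshold (β = 1) the noisy row would block the reduction; the tolerance is what makes the selection insensitive to noise, mirroring the 34-to-5 attribute reduction the abstract reports.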


Author(s):  
Jongwoo Kim ◽  
Veda C. Storey

As the World Wide Web evolves into the Semantic Web, domain ontologies, which represent the concepts of an application domain and their associated relationships, have become increasingly important as surrogates for capturing and representing the semantics of real world applications. Much ontology development remains manual and is both difficult and time-consuming. This research presents a methodology for semi-automatically generating domain ontologies from extracted information on the World Wide Web. The methodology is implemented in a prototype that integrates existing ontology and web organization tools. The prototype is used to develop ontologies for different application domains, and an empirical analysis is carried out to demonstrate the feasibility of the research.


Author(s):  
Davood Mohammaditabar

One of the most popular applications of graph theory to real world problems is related to the concept of Eulerian tours and trails introduced in the chapter on Eulerian trails and tours. There are many problems in which users must serve all the connections between nodes (edges in a graph, streets of a city, pipelines of a network, etc.). In chapter 7 of this book, the existence of such trails and tours in graphs was discussed, and appropriate algorithms were introduced to find Eulerian trails and tours. But in the case that a graph does not have such a tour or trail, some edges must be traversed more than once, and this is what usually happens in real world applications. In 1962, M.K. Kwan was the first to introduce this problem as the Chinese postman problem (CPP). The question was: given a postal zone with a number of streets that must be served by a postal carrier, how can one develop a tour that covers every street in the zone and brings the postman back to his or her point of origin, having travelled the minimum possible distance (Wang et al., 2008)? In this chapter, the Chinese postman problem is discussed, and different variations of it are introduced. Then the earliest form of the CPP, in which the graph is undirected, is explained in more detail.
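For the undirected case, the optimal tour cost follows the classic recipe: every edge is crossed once, and odd-degree vertices are paired up so that the extra re-crossings follow cheapest paths. The brute-force matching below is only workable for a handful of odd vertices; it is an illustrative sketch, not the chapter's algorithm:

```python
def cpp_cost(n, edges):
    # Optimal Chinese postman tour cost on an undirected graph.
    # edges: list of (u, v, weight) with vertices 0 .. n-1.
    total = sum(w for _, _, w in edges)
    deg = [0] * n
    INF = float('inf')
    dist = [[INF] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for u, v, w in edges:
        deg[u] += 1
        deg[v] += 1
        dist[u][v] = min(dist[u][v], w)
        dist[v][u] = min(dist[v][u], w)
    # Floyd-Warshall all-pairs shortest paths.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # Odd-degree vertices must be paired; each pair's connecting
    # shortest path is traversed twice (brute-force min matching).
    odd = [v for v in range(n) if deg[v] % 2]
    def match(verts):
        if not verts:
            return 0
        u = verts[0]
        return min(dist[u][v] + match([x for x in verts[1:] if x != v])
                   for v in verts[1:])
    return total + match(odd)
```

On a unit-weight square with one diagonal of weight 2, vertices 0 and 2 are odd, so the tour costs the total edge weight 6 plus one shortest 0-to-2 re-crossing of cost 2. An Eulerian graph (all degrees even) incurs no extra cost at all.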


Author(s):  
Jingrui He

Nowadays, as an intrinsic property of big data, data heterogeneity can be seen in a variety of real-world applications, ranging from security to manufacturing and from healthcare to crowdsourcing. It refers to any inhomogeneity in the data and can be present in many forms, corresponding to different types of data heterogeneity, such as task, view, instance, and oracle heterogeneity. As shown in previous work as well as our own, learning from data heterogeneity not only helps people gain a better understanding of large volumes of data, but also provides a means to leverage such data for effective predictive modeling. In this paper, along with multiple real applications, we briefly review state-of-the-art techniques for learning from data heterogeneity and demonstrate their performance at addressing these real world problems.


Author(s):  
Maik Stührenberg ◽  
Daniela Goecke

Seamless integration of various, often heterogeneous linguistic resources (in terms of their output formats) and the merging of the respective annotation layers are crucial tasks for linguistic research. After a decade of concentration on the development of formats for structuring single annotations for specific linguistic issues, a variety of specifications for storing multiple annotations over the same primary data have been developed in recent years. Among these approaches, three main architectures can be identified: Prolog-based architectures, XML-related approaches, and graph-based models that follow the XML syntax. However, these architectures are not free of disadvantages when used in real world applications. In the Sekimo project, the XML-based Sekimo Generic Format (SGF) was developed for the purpose of storing multiple annotations on the same primary data and examining relationships between elements of different annotation layers without prepended conversion. SGF is based on the design principles of graph-based approaches but makes use of XML-inherent tree structures whenever possible to reduce processing costs. Analysing data stored in SGF can be done via standard XML-related specifications such as XPath, XSLT or XQuery, and is done in our project in the linguistic application domain of anaphora resolution.
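To illustrate the general idea of relating annotation layers over shared primary data, consider two standoff layers anchored to character offsets and a query that aligns them. The element names and attributes below are a generic standoff layout invented for illustration, not SGF's actual schema:

```python
import xml.etree.ElementTree as ET

# Two annotation layers over the same primary text, anchored by
# character offsets (a generic standoff layout, not SGF itself).
DOC = """<corpus>
  <primary>Alice saw Bob.</primary>
  <layer name="tokens">
    <seg id="t1" start="0" end="5"/>
    <seg id="t2" start="6" end="9"/>
    <seg id="t3" start="10" end="13"/>
  </layer>
  <layer name="coref">
    <seg id="c1" start="0" end="5" chain="person-1"/>
    <seg id="c2" start="10" end="13" chain="person-2"/>
  </layer>
</corpus>"""

root = ET.fromstring(DOC)
text = root.findtext('primary')

def spans(layer_name):
    # ElementTree's limited XPath suffices to select a layer by name.
    layer = root.find(f"layer[@name='{layer_name}']")
    return [(s.get('id'), int(s.get('start')), int(s.get('end')))
            for s in layer.findall('seg')]

# Relate the two layers without any prepended conversion:
# which tokens coincide exactly with a coreference mention?
tokens = spans('tokens')
mentions = spans('coref')
aligned = [(t_id, text[s:e]) for t_id, s, e in tokens
           if any(s == ms and e == me for _, ms, me in mentions)]
```

Because both layers point into the same primary data, cross-layer queries like this reduce to offset arithmetic, which is the property that makes XPath/XSLT/XQuery-style analysis over such formats practical.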

