Paraconsistent Reasoning in Cops and Robber Game with Uncertain Information: A Simulation-Based Analysis

Author(s):  
Jacek Szklarski ◽  
Łukasz Białek ◽  
Andrzej Szałas

We apply a non-classical four-valued logic to reasoning about strategies for cops in a modified game of “Cops and Robber” played on a graph. We extend the game by introducing uncertainty in the form of random failures of detecting devices: the robber is detected at a node only with a given probability P_A. Additionally, with probability P_F, cops may receive a false positive, i.e., they are informed that the robber is located at some node while it is actually located elsewhere. A non-zero P_F therefore introduces measurement noise into the system. All cops have access to the information provided by the detectors and can communicate with each other, so they can coordinate the search. By adjusting the number of detectors, P_A, and P_F, we achieve a smooth transition between the two well-known variants of the game: “with fully visible robber” and “with invisible robber”. We compare a simple probabilistic strategy for cops with a non-parametric strategy based on reasoning in a four-valued paraconsistent logic. We show that this novel approach performs well, as measured by the mean catch time. We conclude that this type of reasoning can be applied in real-world settings where nothing is known about the underlying source of errors, which is particularly useful in robotics.
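As a hedged illustration of the sensing model described above, the sketch below simulates detector readings under P_A and P_F and collapses conflicting evidence into the four truth values of a paraconsistent logic (true, false, unknown, inconsistent). The graph, detector placement, and evidence encoding are invented for illustration; this is not the authors' implementation.

```python
import random

# A minimal sketch (not the authors' implementation): each detector reports
# on its own node with detection probability p_a; independently, with
# probability p_f it raises a false positive at a random other node.
# Conflicting reports are mapped to four truth values:
# 't' (robber present), 'f' (absent), 'u' (unknown), 'i' (inconsistent).

def sense(nodes, robber, detectors, p_a, p_f):
    """Return per-node evidence sets gathered from all detectors."""
    evidence = {n: set() for n in nodes}
    for d in detectors:
        if d == robber and random.random() < p_a:
            evidence[d].add('t')          # true detection
        else:
            evidence[d].add('f')          # nothing seen at d
        if random.random() < p_f:
            ghost = random.choice([n for n in nodes if n != robber])
            evidence[ghost].add('t')      # false positive elsewhere

    return evidence

def four_valued(evidence_at_node):
    """Collapse the evidence at one node into one of the four values."""
    if not evidence_at_node:
        return 'u'                        # no detector covers this node
    if evidence_at_node == {'t'}:
        return 't'
    if evidence_at_node == {'f'}:
        return 'f'
    return 'i'                            # both 't' and 'f' were reported

if __name__ == '__main__':
    nodes = list(range(8))
    readings = sense(nodes, robber=3, detectors=[1, 3, 5], p_a=0.8, p_f=0.2)
    print({n: four_valued(readings[n]) for n in nodes})
```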

Author(s):  
Aijun Xue ◽  
Xiaodan Wang

Many real-world applications involve multiclass cost-sensitive learning problems. However, some well-established binary cost-sensitive learning algorithms cannot be extended directly to the multiclass case. It is therefore meaningful to decompose the complex multiclass cost-sensitive classification problem into a series of binary cost-sensitive classification problems. In this paper we propose an alternative and efficient decomposition framework based on the original error-correcting output codes (ECOC). The main problem in our framework is how to evaluate the binary costs for each binary cost-sensitive base classifier. To solve this problem, we propose computing the expected misclassification costs from the given multiclass cost matrix. Furthermore, general formulations for computing the binary costs are given. Experimental results on several synthetic and UCI datasets show that our method obtains performance comparable to state-of-the-art methods.
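A minimal sketch of the cost-decomposition step, under the illustrative assumption of uniform class priors: the binary costs of each ECOC dichotomizer are taken as the expected multiclass costs of confusing its positive class group with its negative group. The function names and the example cost matrix are invented; the paper's general formulations may differ.

```python
import numpy as np

# C[i, j] is the cost of predicting class j when the true class is i;
# a code column has one {-1, +1} entry per class and defines one binary
# (dichotomizer) problem, as in standard ECOC decomposition.

def binary_costs(C, code_column):
    """Return (false-negative cost, false-positive cost) for one
    dichotomizer defined by a {-1,+1} column of the ECOC matrix."""
    pos = np.where(code_column == +1)[0]
    neg = np.where(code_column == -1)[0]
    # expected cost of labelling a truly-positive example as negative
    c_fn = C[np.ix_(pos, neg)].mean()
    # expected cost of labelling a truly-negative example as positive
    c_fp = C[np.ix_(neg, pos)].mean()
    return c_fn, c_fp

if __name__ == '__main__':
    C = np.array([[0, 1, 4],
                  [2, 0, 1],
                  [6, 3, 0]], dtype=float)
    column = np.array([+1, +1, -1])       # classes {0,1} vs class {2}
    print(binary_costs(C, column))
```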


2014 ◽  
Vol 626 ◽  
pp. 32-37 ◽  
Author(s):  
Ajayan Lekshmi ◽  
C. Christopher Seldev

Shadows are undesired information that strongly affects images. They may introduce false color tones, distort the shape of objects, or cause objects to merge or disappear. This paper proposes a novel approach for the detection and removal of shadows in an image. First, the shadow and non-shadow regions of the original image are identified using the HSV color model. Shadow removal is then performed by exemplar-based image inpainting. Finally, the border between the reconstructed shadow and non-shadow areas undergoes bilinear interpolation to yield a smooth transition between them. This leads to a better fitting of the shadow and non-shadow classes, and thus a potentially better reconstruction quality.
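The detection step might look like the sketch below, assuming illustrative HSV thresholds (the paper's actual thresholds are not given here): shadow pixels tend to have low value (V) and comparatively high saturation (S).

```python
import numpy as np
import cv2

# A minimal sketch of the detection step only; the v_max/s_min bounds are
# illustrative assumptions, not values from the paper.

def shadow_mask(bgr, v_max=80, s_min=40):
    """Binary mask of candidate shadow pixels in the HSV model."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    return ((v < v_max) & (s > s_min)).astype(np.uint8) * 255

if __name__ == '__main__':
    img = np.full((64, 64, 3), 200, np.uint8)   # bright synthetic scene
    img[20:40, 20:40] = (60, 70, 50)            # darker "shadow" patch
    mask = shadow_mask(img)
    print('shadow pixels:', int((mask > 0).sum()))
```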


2021 ◽  
Vol 8 (4) ◽  
pp. 75-81
Author(s):  
Ahmed A. Alsheikhy

In real-time systems, a task or a set of tasks must be executed and completed successfully within a predefined time. Such systems require a scheduling technique, or a set of scheduling methods, to distribute the given tasks among different processors or on a single processor. In this paper, a novel scheduling approach that minimizes the overhead of context switching between several periodic tasks is presented. The method speeds up the response time while ensuring that all tasks meet their deadlines and no deadline miss occurs. It is a dynamic-priority technique that works on either a uniprocessor or several processors; in particular, it is intended for multiprocessor environments, since many applications run on several processors. Various examples are presented to demonstrate its optimality and efficiency. In addition, several comparison experiments against an earlier version of the approach were performed. Simulation tests showed that the new approach sped up execution time by 15% to nearly 46% and reduced the number of context switches between tasks by 12% to around 50%. Furthermore, the approach delivered all tasks/jobs successfully and ensured that no deadline miss occurred.
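The metric being optimized can be made concrete with a small simulator. The sketch below is not the author's algorithm (which is not fully specified in the abstract); it is a plain EDF-style scheduler over periodic tasks that counts context switches, preempting only when a strictly earlier deadline arrives.

```python
# Discrete-time EDF-style simulation: tasks release jobs periodically,
# each job's deadline equals its next release; we count how often the
# processor switches away from the currently running job.

def simulate(tasks, horizon):
    """tasks: list of (period, wcet); returns context-switch count."""
    jobs = []                             # each job: [deadline, remaining, id]
    switches, running = 0, None
    for t in range(horizon):
        for tid, (period, wcet) in enumerate(tasks):
            if t % period == 0:
                jobs.append([t + period, wcet, tid])
        ready = [j for j in jobs if j[1] > 0]
        if not ready:
            continue
        ready.sort()                      # earliest deadline first
        best = ready[0]
        # keep the running job on a deadline tie to avoid needless switches
        if running is not None and running[1] > 0 and running[0] == best[0]:
            best = running
        if best is not running:
            if running is not None:
                switches += 1
            running = best
        best[1] -= 1                      # run one time unit
    return switches

if __name__ == '__main__':
    print(simulate([(5, 2), (10, 3)], horizon=50))
```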


Author(s):  
Yuan Chen ◽  
Liya Ding ◽  
Sio-Long Lo ◽  
Dickson K.W. Chiu

This article proposes a novel approach that combines a user's immediate requirement, described in keywords, with his or her long-term knowledge background to better serve article selection based on personal preference. The knowledge background is represented as a weighted undirected graph, called a background net, that captures the contextual association of words appearing in the articles recommended by the user, built through incremental learning. Once a user's background net is constructed, a keyword from the user is personalized into a fuzzy set representing the contextual association between the given keyword and the other words in the background net. Article evaluation with personal preference is then achieved by evaluating the similarity between the personalized keyword set, based on the user's background net, and a candidate article. The proposed approach makes it possible to build a search engine optimizer running on top of existing search engines to adjust their results, and it offers the potential to be integrated with existing search engine techniques for better performance. The target system for personalized article selection can be constructed automatically using Knowware System, a KBS development tool for convenient modeling and component reuse.
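A hedged sketch of the idea follows, with invented data structures: the background net is approximated by a word co-occurrence graph whose edge weights grow incrementally, and a keyword is personalized into a fuzzy set whose memberships are normalized edge weights. Knowware System itself is not modeled here.

```python
from collections import defaultdict
from itertools import combinations

class BackgroundNet:
    """Toy stand-in for a background net: an undirected weighted graph of
    word co-occurrence, reinforced incrementally per recommended article."""

    def __init__(self):
        self.w = defaultdict(float)       # (word_a, word_b) -> weight

    def learn(self, article_words):
        """Reinforce co-occurrence within one user-recommended article."""
        for a, b in combinations(sorted(set(article_words)), 2):
            self.w[(a, b)] += 1.0

    def personalize(self, keyword):
        """Fuzzy set: keyword's neighbours with normalized memberships."""
        assoc = {b if a == keyword else a: wt
                 for (a, b), wt in self.w.items() if keyword in (a, b)}
        top = max(assoc.values(), default=1.0)
        fuzzy = {word: wt / top for word, wt in assoc.items()}
        fuzzy[keyword] = 1.0
        return fuzzy

def score(fuzzy_set, candidate_words):
    """Similarity of a candidate article to the personalized keyword set."""
    words = set(candidate_words)
    return sum(mu for w, mu in fuzzy_set.items() if w in words)

if __name__ == '__main__':
    net = BackgroundNet()
    net.learn('robot motion planning sensors'.split())
    net.learn('robot sensors noise filtering'.split())
    fuzzy = net.personalize('robot')
    print(score(fuzzy, 'planning sensors article'.split()))
```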


2020 ◽  
pp. 097215091988645
Author(s):  
Saikat Chatterjee ◽  
Amit Shukla

Workplace stress has long been considered a potential source of job dissatisfaction and of many psychosomatic disorders in employees the world over. In India, the IT sector has emerged as a major contributor to work stress over the last two decades. Still, there is a lack of sector-specific studies, and most existing studies treat work stress as an umbrella term. Against this background, the objective of this article is twofold: first, to identify different types of stressors, and second, to rate them according to their severity, which should help in devising proper mitigation strategies. On the basis of findings from two field studies, the article identifies major stressors among junior-level Indian IT professionals (n1 = 38) and then furnishes a risk profile of these stressors based on their frequency and impact (n2 = 234). In all, 21 stressors are identified in the given context, each assigned frequency, impact, and risk scores, and their ‘riskiness’ is presented in descending order of risk score. Techno-stress emerged as the most serious stressor in terms of both frequency of occurrence and impact. The results can guide management in IT firms in addressing the prevalent high levels of workplace stress, and the risk scores will help them allocate resources and set and prioritize their HR strategies to this end. As one of the few studies on stress in the Indian IT sector, this article offers useful and practical insights while deploying a novel approach of risk profiling. Implications of the findings are discussed at the end.
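The risk-profiling step can be illustrated with a short sketch, assuming (as is common in risk matrices, though not stated explicitly above) that a stressor's risk score is the product of its frequency and impact ratings; the stressor names and ratings below are invented.

```python
# Illustrative risk profiling: risk = frequency x impact, ranked descending.

stressors = {                      # hypothetical ratings on a 1-5 scale
    'techno-stress':      (4.6, 4.4),
    'role ambiguity':     (3.1, 3.8),
    'work-life conflict': (3.9, 3.2),
}

ranked = sorted(((f * i, name) for name, (f, i) in stressors.items()),
                reverse=True)
for risk, name in ranked:
    print(f'{name:20s} risk={risk:.1f}')
```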


Author(s):  
Bharat Tidke ◽  
Swati Tidke

In this age of the internet, hardly anyone wants to make a decision entirely on their own. Be it purchasing a product, watching a movie, or reading a book, people look for reviews first. Yet many are unaware that these reviews may not always be true. This is the age of paid reviews, written not just to promote one's own product but also to demote a competitor's. The most critical among them are reviews targeting the brand of a product. This chapter proposes a novel approach for brand spam detection that uses feature correlation to improve on state-of-the-art approaches. Correlation-based feature engineering is considered one of the finest methods for determining the relations among features. Several features attached to reviews are important to both customers and companies in making sound decisions: users rely on them for purchasing, and companies for improving sales and services. Due to the severity of spamming these days, it has become nearly impossible to judge whether a given review is trusted or fake.
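A minimal sketch of correlation-based feature screening in this spirit (the chapter's concrete feature set and thresholds are not reproduced here): keep review features that correlate strongly with the spam label and weakly with features already kept.

```python
import numpy as np

# Greedy correlation-based selection: a feature is kept if it is relevant
# (high absolute correlation with the spam label) and non-redundant (low
# absolute correlation with every already-kept feature). Thresholds are
# illustrative assumptions.

def select(X, y, label_min=0.3, redundancy_max=0.8):
    keep = []
    for j in range(X.shape[1]):
        r_label = abs(np.corrcoef(X[:, j], y)[0, 1])
        if r_label < label_min:
            continue                       # weak relation to the spam label
        redundant = any(abs(np.corrcoef(X[:, j], X[:, k])[0, 1])
                        > redundancy_max for k in keep)
        if not redundant:
            keep.append(j)
    return keep

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 200).astype(float)   # synthetic spam labels
    f1 = y + 0.3 * rng.normal(size=200)         # informative feature
    f2 = f1 + 0.05 * rng.normal(size=200)       # redundant copy of f1
    f3 = rng.normal(size=200)                   # pure noise
    print(select(np.column_stack([f1, f2, f3]), y))   # expect [0]
```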


Author(s):  
Steven Turek ◽  
Sam Anand

When a cylindrical datum feature is specified at maximum material condition (MMC) or least material condition (LMC), a unique circumstance arises: a virtual condition (VC) cylindrical boundary must be defined [1]. The geometric relationship between a cylindrical point cloud obtained from inspection equipment and a VC cylinder has not been specifically addressed in previous research. In this research, novel approaches to this geometric analysis are presented, analyzed, and validated. Two of the proposed methods are new interpretations of established methods applied to this unique geometric circumstance: least squares and the maximum inscribing cylinder (MIC) or minimum circumscribing cylinder (MCC). The third, the Hull Normal method, is a novel approach developed specifically to address the VC cylinder problem. Each of the proposed methods utilizes a different amount of the sampled data, leading to different levels of sensitivity to sample size and error. The three methods were applied to different cylindrical forms using various sampling techniques and sample sizes. Trends across sample size were analyzed to assess the variation in axial orientation relative to the true geometric form, and a relevant case study explores the applicability of these methods in real-world applications.
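As a hedged illustration of the least-squares flavour of the problem (not the authors' exact formulation), the sketch below estimates a cylinder's axis direction as the principal component of the sampled point cloud, a common first approximation for an elongated cylindrical cloud.

```python
import numpy as np

# Estimate the axis of an elongated cylindrical point cloud as the direction
# of maximal variance (first principal component). This is only a first
# approximation to the least-squares cylinder fit discussed above.

def axis_estimate(points):
    centered = points - points.mean(axis=0)
    # right singular vectors of the centered cloud = principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]

if __name__ == '__main__':
    rng = np.random.default_rng(1)
    t = rng.uniform(0, 10, 500)                 # positions along the axis
    theta = rng.uniform(0, 2 * np.pi, 500)
    r = 1.0 + 0.01 * rng.normal(size=500)       # radius with form error
    pts = np.column_stack([r * np.cos(theta), r * np.sin(theta), t])
    print(axis_estimate(pts))                   # expect roughly (0, 0, +-1)
```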


Author(s):  
CHUN-HAO CHEN ◽  
TZUNG-PEI HONG ◽  
YEONG-CHYI LEE

Data mining is most commonly used in attempts to induce association rules from transaction data. Since transactions in real-world applications usually consist of quantitative values, many fuzzy association-rule mining approaches have been proposed on single- or multiple-concept levels. However, the given membership functions may have a critical influence on the final mining results. In this paper, we propose a multiple-level genetic-fuzzy mining algorithm for mining membership functions and fuzzy association rules using multiple-concept levels. It first encodes the membership functions of each item class (category) into a chromosome according to the given taxonomy. The fitness value of each individual is then evaluated by the summation of large 1-itemsets of each item in different concept levels and the suitability of membership functions in the chromosome. After the GA process terminates, a better set of multiple-level fuzzy association rules can then be expected with a more suitable set of membership functions. Experimental results on a simulation dataset also show the effectiveness of the algorithm.
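A minimal sketch of the fitness idea as summarized above, with invented encodings: a chromosome holds triangular membership functions (centre, half-width) for one item class, and fitness rewards the count of large fuzzy 1-itemsets while penalizing heavily overlapping regions as a crude stand-in for the suitability term.

```python
# Toy fitness evaluation for a genetic-fuzzy mining chromosome; the
# encoding, penalty, and data are illustrative assumptions, not the
# paper's exact formulation.

def membership(x, centre, half_width):
    """Triangular membership function."""
    return max(0.0, 1.0 - abs(x - centre) / half_width)

def fitness(chromosome, quantities, min_support=0.3):
    n = len(quantities)
    large = 0
    for centre, half_width in chromosome:
        support = sum(membership(q, centre, half_width)
                      for q in quantities) / n
        if support >= min_support:
            large += 1                    # one more large fuzzy 1-itemset
    # crude suitability penalty: regions with nearby centres overlap heavily
    centres = sorted(c for c, _ in chromosome)
    overlap = sum(max(0.0, 1.0 - (b - a))
                  for a, b in zip(centres, centres[1:]))
    return large - overlap

if __name__ == '__main__':
    purchases = [1, 2, 2, 3, 5, 6, 6, 8]  # quantities of one item class
    print(fitness([(2.0, 2.0), (6.0, 2.0)], purchases))
```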


1994 ◽  
Vol 116 (2) ◽  
pp. 581-586 ◽  
Author(s):  
D. C. H. Yang ◽  
Jui-Jen Chou

This paper presents a general theory for generating a smooth motion profile for the coordinated motion of five-axis CNC/CMM machines. Motion with constant speed is important and required in many manufacturing processes, such as milling, welding, finishing, and painting. In this paper, a piecewise-constant speed profile is constructed from a sequence of Hermite curves, forming a composite Hermite curve in the parametric domain. Because the proposed speed profile preserves continuity of acceleration, it yields relatively better product quality than traditional techniques. We also provide a method for studying the feasibility of manufacturing in terms of the given machine, the desired path, and the assigned speed, considering machine dynamics, actuator limitations, path geometry, jerk constraints, and motion kinematics.
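The construction can be sketched as follows, with illustrative values not taken from the paper: each transition between two constant speeds is a cubic Hermite blend with zero end slopes, so acceleration is continuous where the blend meets the constant pieces.

```python
# Cubic Hermite blend between constant-speed segments. Zero end slopes make
# dv/dt vanish at both ends of each blend, matching the zero acceleration of
# the adjacent constant segments, hence acceleration continuity.

def hermite_speed(v0, v1, tau):
    """Cubic Hermite blend from v0 to v1, tau in [0, 1], zero end slopes."""
    h00 = 2 * tau**3 - 3 * tau**2 + 1
    h01 = -2 * tau**3 + 3 * tau**2
    return h00 * v0 + h01 * v1

def profile(speeds, blend_steps=5):
    """Piecewise-constant speeds joined by smooth Hermite transitions."""
    out = []
    for v0, v1 in zip(speeds, speeds[1:]):
        out.extend(hermite_speed(v0, v1, k / blend_steps)
                   for k in range(blend_steps))
    out.append(speeds[-1])
    return out

if __name__ == '__main__':
    print(['%.2f' % v for v in profile([10.0, 25.0, 15.0])])
```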

