Deception through Half-Truths

2020 ◽  
Vol 34 (06) ◽  
pp. 10110-10117
Author(s):  
Andrew Estornell ◽  
Sanmay Das ◽  
Yevgeniy Vorobeychik

Deception is a fundamental issue across a diverse array of settings, from cybersecurity, where decoys (e.g., honeypots) are an important tool, to politics, which can feature politically motivated “leaks” and fake news about candidates. Typical considerations of deception view it as providing false information. However, just as important but less frequently studied is a more tacit form in which information is strategically hidden or leaked. We consider the problem of how much an adversary can affect a principal's decision by “half-truths”, that is, by masking or hiding bits of information, when the principal is oblivious to the presence of the adversary. The principal's problem can be modeled as one of predicting future states of variables in a dynamic Bayes network, and we show that, while theoretically the principal's decisions can be made arbitrarily bad, the optimal attack is NP-hard to approximate, even under strong assumptions favoring the attacker. However, we also describe an important special case where the dependency of future states on past states is additive, in which we can efficiently compute an approximately optimal attack. Moreover, in networks with a linear transition function we can solve the problem optimally in polynomial time.
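
A minimal sketch of the masking idea in a toy linear setting: the principal predicts the next state as a weighted sum of observed bits, imputing a prior mean for anything hidden, while a greedy adversary hides the bits that push the prediction furthest from the truth. The model, weights, and greedy rule below are illustrative only, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear transition: next state = w . x (principal's model).
n = 8
w = rng.normal(size=n)          # transition weights known to both parties
x = rng.integers(0, 2, size=n)  # true current-state bits
prior = np.full(n, 0.5)         # principal's fallback for unobserved bits

def predict(observed_mask):
    # The principal is oblivious: hidden bits are replaced by their prior mean.
    x_seen = np.where(observed_mask, x, prior)
    return w @ x_seen

truth = w @ x                   # prediction under full information
k = 2                           # adversary may hide at most k bits

# Greedy half-truth attack: repeatedly hide the bit whose removal
# moves the principal's prediction furthest from the truth.
mask = np.ones(n, dtype=bool)
for _ in range(k):
    best_i, best_err = None, -1.0
    for i in np.flatnonzero(mask):
        trial = mask.copy(); trial[i] = False
        err = abs(predict(trial) - truth)
        if err > best_err:
            best_i, best_err = i, err
    mask[best_i] = False

print("error with no masking:", abs(predict(np.ones(n, bool)) - truth))
print("error after masking  :", best_err)
```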

1999 ◽  
Vol 10 ◽  
pp. 199-241 ◽  
Author(s):  
T. Lukasiewicz

We study the problem of probabilistic deduction with conditional constraints over basic events. We show that globally complete probabilistic deduction with conditional constraints over basic events is NP-hard. We then concentrate on the special case of probabilistic deduction in conditional constraint trees. We elaborate very efficient techniques for globally complete probabilistic deduction. In detail, for conditional constraint trees with point probabilities, we present a local approach to globally complete probabilistic deduction, which runs in linear time in the size of the conditional constraint trees. For conditional constraint trees with interval probabilities, we show that globally complete probabilistic deduction can be done in a global approach by solving nonlinear programs. We show how these nonlinear programs can be transformed into equivalent linear programs, which are solvable in polynomial time in the size of the conditional constraint trees.
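
The "equivalent linear programs" viewpoint can be illustrated with a brute-force sketch: for a tiny chain of events, interval conditional constraints become linear inequalities over the probabilities of the eight possible worlds, and tight bounds on a query follow from two LPs. The numbers are invented, and enumerating all worlds is exactly the exponential blow-up that the paper's tree-specific technique avoids.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Worlds over three events A, B, C (2^3 = 8 joint probabilities).
worlds = list(itertools.product([0, 1], repeat=3))   # (a, b, c)

def indicator(pred):
    return np.array([1.0 if pred(w) else 0.0 for w in worlds])

A  = indicator(lambda w: w[0] == 1)
AB = indicator(lambda w: w[0] == 1 and w[1] == 1)
B  = indicator(lambda w: w[1] == 1)
BC = indicator(lambda w: w[1] == 1 and w[2] == 1)
C  = indicator(lambda w: w[2] == 1)

A_ub, b_ub = [], []
def add_interval(num, den, lo, hi):
    # Encodes lo * P(den) <= P(num) <= hi * P(den) as two <= constraints.
    A_ub.append(lo * den - num); b_ub.append(0.0)
    A_ub.append(num - hi * den); b_ub.append(0.0)

# Illustrative constraints: P(A) in [0.6, 0.7], P(B|A) in [0.8, 0.9], P(C|B) in [0.5, 0.6].
add_interval(A,  np.ones(8), 0.6, 0.7)
add_interval(AB, A,          0.8, 0.9)
add_interval(BC, B,          0.5, 0.6)

A_eq, b_eq = [np.ones(8)], [1.0]        # probabilities sum to 1
bounds = [(0, 1)] * 8

lo = linprog(C,  A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
hi = linprog(-C, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(f"P(C) in [{lo.fun:.3f}, {-hi.fun:.3f}]")
```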


10.29007/v68w ◽  
2018 ◽  
Author(s):  
Ying Zhu ◽  
Mirek Truszczynski

We study the problem of learning the importance of preferences in preference profiles in two important cases: when individual preferences are aggregated by the ranked Pareto rule, and when they are aggregated by positional scoring rules. For the ranked Pareto rule, we provide a polynomial-time algorithm that finds a ranking of preferences such that the ranked profile correctly decides all the examples, whenever such a ranking exists. We also show that the problem of learning a ranking that maximizes the number of correctly decided examples (also under the ranked Pareto rule) is NP-hard. We obtain similar results for the case of weighted profiles when positional scoring rules are used for aggregation.
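
For context, a toy sketch of weighted positional scoring aggregation (with Borda scores), showing how changing the importance weights changes which pairwise "examples" the aggregated ranking decides correctly. The profile and weights are invented; this is not the authors' learning algorithm.

```python
# Toy profile: three voters ranking alternatives a, b, c (best first).
profile = [("a", "b", "c"), ("b", "c", "a"), ("b", "a", "c")]
borda = [2, 1, 0]                  # positional scoring vector (Borda)

def aggregate(profile, weights):
    """Weighted positional scoring: each voter contributes weight * score(position)."""
    totals = {}
    for w, ranking in zip(weights, profile):
        for pos, alt in enumerate(ranking):
            totals[alt] = totals.get(alt, 0) + w * borda[pos]
    # Alternatives ordered by total score, best first.
    return sorted(totals, key=totals.get, reverse=True)

# An "example" the learner must decide correctly: a should beat c.
def decides(weights, better="a", worse="c"):
    order = aggregate(profile, weights)
    return order.index(better) < order.index(worse)

print(aggregate(profile, [1, 1, 1]), decides([1, 1, 1]))
print(aggregate(profile, [3, 1, 1]), decides([3, 1, 1]))
```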


1986 ◽  
Vol 9 (3) ◽  
pp. 323-342
Author(s):  
Joseph Y.-T. Leung ◽  
Burkhard Monien

We consider the computational complexity of finding an optimal deadlock recovery. It is known that for an arbitrary number of resource types the problem is NP-hard even when the total cost of deadlocked jobs and the total number of resource units are “small” relative to the number of deadlocked jobs. It is also known that for one resource type the problem is NP-hard when the total cost of deadlocked jobs and the total number of resource units are “large” relative to the number of deadlocked jobs. In this paper we show that for one resource type the problem is solvable in polynomial time when the total cost of deadlocked jobs or the total number of resource units is “small” relative to the number of deadlocked jobs. For fixed m ⩾ 2 resource types, we show that the problem is solvable in polynomial time when the total number of resource units is “small” relative to the number of deadlocked jobs. On the other hand, when the total number of resource units is “large”, the problem becomes NP-hard even when the total cost of deadlocked jobs is “small” relative to the number of deadlocked jobs. The results in the paper, together with previous known ones, give a complete delineation of the complexity of this problem under various assumptions of the input parameters.
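
A brute-force sketch of a simplified single-resource-type recovery model: each deadlocked job holds some units and still needs more, aborting a job frees its units at a cost, and we search for the cheapest abort set that lets the surviving jobs finish. The cost and feasibility model here is an illustration under assumed semantics, not necessarily the paper's exact formulation.

```python
from itertools import combinations

# Each deadlocked job: (cost of aborting, units currently held, extra units still needed)
jobs = [(4, 2, 3), (2, 1, 2), (5, 3, 1), (1, 1, 4)]
free_units = 0          # units not held by any deadlocked job

def survivors_can_finish(survivors, available):
    """Greedy feasibility check: run any job whose remaining need fits,
    then reclaim everything it holds."""
    pending = sorted(survivors, key=lambda j: j[2])   # smallest extra need first
    while pending:
        job = next((j for j in pending if j[2] <= available), None)
        if job is None:
            return False
        available += job[1]          # job finishes and releases its held units
        pending.remove(job)
    return True

best_cost, best_set = float("inf"), None
for r in range(len(jobs) + 1):
    for aborted in combinations(range(len(jobs)), r):
        freed = free_units + sum(jobs[i][1] for i in aborted)
        survivors = [jobs[i] for i in range(len(jobs)) if i not in aborted]
        cost = sum(jobs[i][0] for i in aborted)
        if cost < best_cost and survivors_can_finish(survivors, freed):
            best_cost, best_set = cost, aborted

print("abort jobs", best_set, "at total cost", best_cost)
```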


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 556
Author(s):  
Thaer Thaher ◽  
Mahmoud Saheb ◽  
Hamza Turabieh ◽  
Hamouda Chantar

Fake or false information on social media platforms is a significant challenge: it deliberately misleads users through rumors, propaganda, or deceptive information about a person, organization, or service. Twitter is one of the most widely used social media platforms, especially in the Arab region, where the number of users is steadily increasing, accompanied by an increase in the rate of fake news. This has drawn researchers' attention to providing a safe online environment free of misleading information. This paper proposes a smart classification model for the early detection of fake news in Arabic tweets, utilizing Natural Language Processing (NLP) techniques, Machine Learning (ML) models, and the Harris Hawks Optimizer (HHO) as a wrapper-based feature selection approach. An Arabic Twitter corpus composed of 1862 previously annotated tweets was used to assess the efficiency of the proposed model. The Bag of Words (BoW) model is applied with different term-weighting schemes for feature extraction. Eight well-known learning algorithms are investigated with varying combinations of features, including user-profile, content-based, and word features. Reported results show that Logistic Regression (LR) with Term Frequency-Inverse Document Frequency (TF-IDF) achieves the best performance. Moreover, feature selection based on the binary HHO algorithm plays a vital role in reducing dimensionality, thereby enhancing the learning model’s performance for fake news detection. Interestingly, the proposed BHHO-LR model yields an improvement of 5% compared with previous works on the same dataset.
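
A minimal scikit-learn sketch of the best-ranked combination reported above, TF-IDF bag-of-words features with logistic regression. The tweets below are placeholders for the 1862-tweet annotated corpus (not reproduced here), and the binary HHO wrapper feature selection step is omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline

# Placeholder data; the paper's corpus of annotated Arabic tweets is not public here.
tweets = ["خبر عاجل غير مؤكد", "تقرير موثق من مصدر رسمي", "شائعة منتشرة على نطاق واسع", "بيان رسمي معتمد"]
labels = [1, 0, 1, 0]             # 1 = fake, 0 = genuine

X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.5, random_state=42, stratify=labels)

# TF-IDF word features + logistic regression, the best-ranked pairing in the abstract.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```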


2004 ◽  
Vol 04 (01) ◽  
pp. 63-76 ◽  
Author(s):  
OLIVER JENKINSON

Given a non-empty finite subset A of the natural numbers, let E_A denote the set of irrationals x ∈ [0,1] whose continued fraction digits all lie in A. In general, E_A is a Cantor set whose Hausdorff dimension dim(E_A) is between 0 and 1. It is shown that the set of attainable dimensions {dim(E_A) : A ⊂ ℕ finite} intersects [0, 1/2] densely. We then describe a method for accurately computing the dimensions dim(E_A), and employ it to investigate numerically the way in which this set intersects [1/2, 1]. These computations tend to support the conjecture, first formulated independently by Hensley, and by Mauldin & Urbański, that the set of attainable dimensions is dense in [0,1]. In the important special case A = {1,2}, we use our computational method to give an accurate approximation of dim(E_{1,2}), improving on the one given in [18].
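
In standard notation (chosen here for clarity, not copied verbatim from the paper), the objects behind the original "[Formula: see text]" placeholders appear to be the following:

```latex
% E_A: continued fractions whose digits are all drawn from the finite set A.
% D:   the set of Hausdorff dimensions realised by such sets E_A.
\[
  E_A = \bigl\{\, x = [0; a_1, a_2, \dots] \in [0,1] \setminus \mathbb{Q} \;:\; a_i \in A \ \text{for all } i \,\bigr\},
  \qquad
  \mathcal{D} = \bigl\{\, \dim_H(E_A) \;:\; A \subset \mathbb{N},\ A \ \text{finite, non-empty} \,\bigr\}.
\]
% The Hensley and Mauldin--Urbanski conjecture asserts that D is dense in [0,1];
% the paper proves density of D in [0,1/2] and investigates D \cap [1/2,1] numerically.
```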


2012 ◽  
Vol 601 ◽  
pp. 347-353
Author(s):  
Xiong Zhi Wang ◽  
Guo Qing Wang

We study the order picking problem in a carousel system with a single picker. The objective is to find a picking schedule that minimizes the total order picking time. After showing that the problem is strongly NP-hard and identifying two structural characteristics, we construct an approximation algorithm for a special case (two carousels) and a heuristic for the general problem. Experimental results verify that solutions are obtained quickly and consistently, and demonstrate the heuristic's good performance.
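
A toy single-carousel sketch of the underlying cost structure: items sit at angular positions on a unit-circumference carousel that can rotate either way, and a nearest-item rule sequences the picks. This is only an illustration of the travel-cost model, not the approximation algorithm or heuristic proposed in the paper.

```python
# Travelling between positions a and b on a bidirectional carousel of
# circumference 1 costs the shorter of the two rotation directions.
def rotation(a, b):
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

def nearest_item_pick_time(positions, start=0.0, pick_time=0.1):
    """Nearest-item heuristic: always rotate to the closest unpicked item."""
    total, here, remaining = 0.0, start, list(positions)
    while remaining:
        nxt = min(remaining, key=lambda p: rotation(here, p))
        total += rotation(here, nxt) + pick_time
        here = nxt
        remaining.remove(nxt)
    return total

order = [0.1, 0.45, 0.8, 0.55]       # item positions for one customer order
print(f"heuristic picking time: {nearest_item_pick_time(order):.2f}")
```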


2009 ◽  
Vol 158 (5) ◽  
pp. 727-740 ◽  
Author(s):  
V. Kreinovich ◽  
M. Margenstern

2005 ◽  
Vol 48 (2) ◽  
pp. 221-236 ◽  
Author(s):  
Matt Kerr

We state and prove an important special case of Suslin reciprocity that has found significant use in the study of algebraic cycles. An introductory account is provided of the regulator and norm maps on Milnor K₂-groups (for function fields) employed in the proof.


Author(s):  
Fakhra Akhtar ◽  
Faizan Ahmed Khan

In the digital age, fake news has become a well-known phenomenon. The spread of false information is often used to confuse mainstream media and political opponents, and can lead to social media wars, heated arguments, and debates. Fake news blurs the distinction between real and false information and is often spread on social media, resulting in negative views and opinions. Earlier research describes how false propaganda is used to plant false stories in mainstream media in order to cause revolt and tension among the masses. The Digital Rights Foundation (DRF) report, which builds on the experiences of 152 journalists and activists in Pakistan, finds that more than 88% of the participants consider social media platforms the worst source of information, with Facebook being the absolute worst. The dataset used in this paper relates to real and fake news detection. The objective of this paper is to determine the accuracy and precision of classification over the entire dataset. The results are visualized in the form of graphs, and the analysis was done using Python. The results show that the model achieves 95% accuracy on the dataset, and the number of correctly predicted positive cases was 296. Specifically, the accuracy of the model on the dataset is 95.26%, precision is 95.79%, while recall and F-measure are 94.56% and 95.17%, respectively. In the predictions there are 296 positives, 308 negatives, 17 false positives, and 13 false negatives. This research recommends that the authenticity of news be analysed before forming an opinion; sharing fake news or false information is unethical, and journalists and news consumers alike should act responsibly when sharing any news.
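
Recomputing the reported figures from the stated counts, under the assumption that 296 and 308 are the true positives and true negatives: accuracy and F-measure then match the abstract, while the standard precision and recall formulas yield the two reported values with their labels apparently interchanged.

```python
# Confusion-matrix counts as stated in the abstract (assumed TP/TN/FP/FN roles).
TP, TN, FP, FN = 296, 308, 17, 13

accuracy  = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall    = TP / (TP + FN)
f_measure = 2 * precision * recall / (precision + recall)

print(f"accuracy  = {accuracy:.2%}")    # ~95.27% (abstract: 95.26%)
print(f"precision = {precision:.2%}")   # ~94.57% (abstract reports this as recall)
print(f"recall    = {recall:.2%}")      # ~95.79% (abstract reports this as precision)
print(f"F-measure = {f_measure:.2%}")   # ~95.18% (abstract: 95.17%)
```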


2018 ◽  
Vol 39 (3) ◽  
pp. 350-361 ◽  
Author(s):  
Teri Finneman ◽  
Ryan J. Thomas

“Fake news” became a concern for journalists in 2017 as news organizations sought to differentiate themselves from false information spread via social media, websites and public officials. This essay examines the history of media hoaxing and fake news to help provide context for the current U.S. media environment. In addition, definitions of the concepts are proposed to provide clarity for researchers and journalists trying to explain these phenomena.

