upper bound
Recently Published Documents

Total documents: 6856 (last five years: 1351)
H-index: 83 (last five years: 9)

2022, Vol. 40 (3), pp. 1-24
Authors: Jiaul H. Paik, Yash Agrawal, Sahil Rishi, Vaishal Shah

Existing probabilistic retrieval models do not restrict the domain of the random variables that they deal with. In this article, we show that the upper bound of the normalized term frequency (tf) from the relevant documents is much smaller than the upper bound of the normalized tf from the whole collection. As a result, the existing models suffer from two major problems: (i) the domain mismatch causes data modeling errors, and (ii) since outliers have very large magnitudes and the retrieval models follow the tf hypothesis, the combination of these two factors tends to overestimate the relevance score. To address these problems, we propose novel weighted probabilistic models based on truncated distributions. We evaluate our models on a set of large document collections and demonstrate significant performance improvements over six existing probabilistic models.
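The truncation idea can be pictured with a minimal sketch (our own illustration under an assumed BM25-style normalization, not the authors' model; `tf_upper`, `k`, and `b` are hypothetical parameters, with `tf_upper` standing in for a bound estimated from relevant documents):

```python
def truncated_tf_weight(tf, doc_len, avg_doc_len, tf_upper, k=1.2, b=0.75):
    """Hypothetical term weight in which the normalized tf is clipped at an
    upper bound (tf_upper) estimated from relevant documents, so that
    outlier frequencies cannot inflate the relevance score."""
    # Pivoted length normalization of the raw term frequency.
    norm_tf = tf / (1.0 - b + b * doc_len / avg_doc_len)
    # Truncation step: restrict the domain of the normalized tf.
    norm_tf = min(norm_tf, tf_upper)
    # Standard saturating tf transform applied to the truncated value.
    return (k + 1.0) * norm_tf / (k + norm_tf)
```

With truncation, two documents whose normalized tf both exceed `tf_upper` receive the same contribution from that term, which is precisely how the overestimation from outliers is avoided.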


2022, Vol. 23 (2), pp. 1-34
Authors: Clemens Kupke, Dirk Pattinson, Lutz Schröder

We establish a generic ExpTime upper bound for reasoning with global assumptions (also known as TBoxes) in coalgebraic modal logics. Unlike earlier results of this kind, our bound does not require a tractable set of tableau rules for the instance logics, so that the result applies to wider classes of logics. Examples are Presburger modal logic, which extends graded modal logic with linear inequalities over numbers of successors, and probabilistic modal logic with polynomial inequalities over probabilities. We establish the theoretical upper bound using a type elimination algorithm. We also provide a global caching algorithm that potentially avoids building the entire exponential-sized space of candidate states, and thus offers a basis for practical reasoning. This algorithm still involves frequent fixpoint computations; we show how these can be handled efficiently in a concrete algorithm modelled on Liu and Smolka’s linear-time fixpoint algorithm.Finally, we show that the upper complexity bound is preserved under adding nominals to the logic, i.e., in coalgebraic hybrid logic.
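The shape of a type-elimination procedure can be conveyed by a short generic sketch (our illustration, not the paper's algorithm; the `supported` callback is a hypothetical stand-in for the logic-specific one-step satisfiability check):

```python
def type_elimination(candidates, supported):
    """Generic type-elimination loop: repeatedly discard candidate states
    (types) whose modal requirements cannot be realized within the set of
    remaining candidates, until a fixpoint is reached. Since `candidates`
    may be exponential in the size of the input formula, the loop runs in
    exponential time overall, matching the ExpTime upper bound."""
    pool = set(candidates)
    changed = True
    while changed:
        changed = False
        for t in list(pool):
            if not supported(t, pool):
                pool.discard(t)   # t's demands are unrealizable; eliminate it
                changed = True
    return pool  # a formula is satisfiable iff some surviving type contains it
```

The global caching algorithm mentioned above can be read as a lazy variant of this loop that only materializes candidate states on demand instead of enumerating the whole exponential-sized pool up front.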


2022, Vol. 416, pp. 126756
Authors: Sunben Chiu, Yuqing He, Pingzhi Yuan

2022, Vol. 40 (3), pp. 1-29
Authors: Yashar Moshfeghi, Alvaro Francisco Huertas-Rosero

In this article, we propose an approach to improve quality in crowdsourcing (CS) tasks using Task Completion Time (TCT) as a source of information about the reliability of workers in a game-theoretical competitive scenario. Our approach is based on the hypothesis that some workers are more risk-inclined and tend to gamble with their use of time when put in competition with other workers. This hypothesis is supported by our previous simulation study. We test our approach with 35 topics from experiments on the TREC-8 collection, assessed as relevant or non-relevant by crowdsourced workers in both a competitive (referred to as “Game”) and a non-competitive (referred to as “Base”) scenario. We find that competition changes the distributions of TCT, making them sensitive to both the quality (i.e., right or wrong) and the outcome (i.e., relevant or non-relevant) of the assessments. We also test an optimal function of TCT as weights in a weighted majority voting scheme. From probabilistic considerations, we derive a theoretical upper bound for the weighted majority performance of cohorts of 2, 3, 4, and 5 workers, which we use as a criterion to evaluate the performance of our weighting scheme. We find that our approach achieves remarkable performance, significantly closing the gap between the accuracy of the obtained relevance judgements and the upper bound. Since our approach takes advantage of TCT, which is an available quantity in any CS task, we believe it is cost-effective and can therefore be applied for quality assurance in crowdsourced micro-tasks.
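A minimal sketch of TCT-weighted majority voting (our illustration; `weight_of_tct` is a hypothetical placeholder, the paper derives its own optimal weighting function, which is not reproduced here):

```python
from collections import defaultdict

def weighted_majority(votes, weight_of_tct):
    """Hypothetical weighted majority vote: each worker's judgement is
    weighted by a function of their Task Completion Time (TCT).
    `votes` is a list of (label, tct_seconds) pairs."""
    scores = defaultdict(float)
    for label, tct in votes:
        scores[label] += weight_of_tct(tct)
    return max(scores, key=scores.get)

def fast_penalty(tct):
    # Assumed example weighting: down-weight implausibly fast (possibly
    # gambled) assessments; 30 seconds is an arbitrary illustrative cutoff.
    return min(tct, 30.0) / 30.0

# A 5-second "relevant" vote is outweighed by two slower opposing votes.
print(weighted_majority([("relevant", 5.0), ("non-relevant", 40.0),
                         ("non-relevant", 25.0)], fast_penalty))
```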


2022, Vol. 6 (1), pp. 48
Authors: Najeeb Ullah, Irfan Ali, Sardar Muhammad Hussain, Jong-Suk Ro, Nazar Khan, ...

This paper deals with a new subclass of univalent functions associated with the right half of the lemniscate of Bernoulli. We find the upper bound of the Hankel determinant H3(1) for this subclass by applying the Carlson–Shaffer operator to it. The present work also deals with certain properties of this newly defined subclass, such as the upper bound of the third-order Hankel determinant and the coefficient estimates.
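For reference, for a normalized univalent function $f(z) = z + \sum_{n \ge 2} a_n z^n$ (so that $a_1 = 1$), the third Hankel determinant bounded here is the standard

```latex
H_3(1) =
\begin{vmatrix}
a_1 & a_2 & a_3 \\
a_2 & a_3 & a_4 \\
a_3 & a_4 & a_5
\end{vmatrix}
= a_3\,(a_2 a_4 - a_3^{2}) - a_4\,(a_4 - a_2 a_3) + a_5\,(a_3 - a_2^{2}),
\qquad a_1 = 1,
```

so bounding $H_3(1)$ over the subclass reduces to bounding combinations of the early coefficients $a_2, \dots, a_5$.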


2022, Vol. 17 (1)
Authors: Luiz Augusto G. Silva, Luis Antonio B. Kowada, Noraí Romeu Rocco, Maria Emília M. T. Walter

Abstract

Background: Sorting by transpositions (SBT) is a classical problem in genome rearrangements. In 2012, SBT was proven to be $\mathcal{NP}$-hard, and the best approximation algorithm, with a 1.375 ratio, was proposed in 2006 by Elias and Hartman (the EH algorithm). Their algorithm employs simplification, a technique used to transform an input permutation $\pi$ into a simple permutation $\hat{\pi}$, presumably easier to handle. The permutation $\hat{\pi}$ is obtained by inserting new symbols into $\pi$ in a way that the lower bound of the transposition distance of $\pi$ is kept on $\hat{\pi}$. The simplification is guaranteed to keep the lower bound, not the transposition distance. A sequence of operations sorting $\hat{\pi}$ can then be mimicked to sort $\pi$.

Results and conclusions: First, using an algebraic approach, we propose a new upper bound for the transposition distance, which holds for all $S_n$. Next, motivated by a problem identified in the EH algorithm, which in certain scenarios involving how the input permutation is simplified causes it to require one extra transposition above the 1.375-approximation ratio, we propose a new approximation algorithm that ensures the 1.375-approximation ratio for all $S_n$. We implemented our algorithm and the EH algorithm; in implementing the latter, two further issues were identified and had to be fixed. We tested both algorithms against all permutations of size $n$, $2 \le n \le 12$. The results show that the EH algorithm exceeds the approximation ratio of 1.375 for permutations of size greater than 7. The percentage of computed distances that equal the true transposition distance is also compared with other results available in the literature. Finally, we investigate the performance of both implementations on longer permutations of maximum length 500. From the experiments, we conclude that the maximum and average distances computed by our algorithm are slightly better than those computed by the EH algorithm, and the running times of both algorithms are similar, despite the higher time complexity of our algorithm.
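For intuition about the lower bounds preserved by simplification, a classical (weaker) bound comes from breakpoints: a single transposition changes at most three adjacencies, so $d(\pi) \ge \lceil b(\pi)/3 \rceil$, where $b(\pi)$ counts the breakpoints of the extended permutation. A minimal sketch (our illustration; the EH algorithm and the algorithm above actually rely on the stronger cycle-graph bound):

```python
import math

def breakpoint_lower_bound(perm):
    """Classical breakpoint lower bound for sorting by transpositions:
    extend the permutation with 0 and n+1, count positions where
    consecutive elements are not consecutive integers (breakpoints),
    and divide by 3, since one transposition removes at most three
    breakpoints."""
    ext = [0] + list(perm) + [len(perm) + 1]
    breakpoints = sum(1 for i in range(len(ext) - 1)
                      if ext[i + 1] != ext[i] + 1)
    return math.ceil(breakpoints / 3)

# [3, 1, 2] extends to 0 3 1 2 4, giving 3 breakpoints and a bound of 1;
# indeed a single transposition (moving the 3 to the end) sorts it.
print(breakpoint_lower_bound([3, 1, 2]))
```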

