High Probability
Recently Published Documents


TOTAL DOCUMENTS: 2279 (five years: 1299)

H-INDEX: 60 (five years: 18)

PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0258370
Author(s):  
Theodora Moutsiou ◽  
Christian Reepmeyer ◽  
Vasiliki Kassianidou ◽  
Zomenia Zomeni ◽  
Athos Agapiou

Predictive models have become an integral part of archaeological research, particularly in the discovery of new archaeological sites. In this paper, we apply predictive modeling to map high-potential Pleistocene archaeological locales on the island of Cyprus in the Eastern Mediterranean. The model delineates landscape characteristics that denote areas with high potential to unearth Pleistocene archaeology, while at the same time highlighting localities that should be excluded. The predictive model was employed in surface surveys to systematically access high-probability locales on Cyprus. A number of newly identified localities suggest that the true density of mobile hunter-gatherer sites on Cyprus is seriously underestimated in current narratives. By adding new data to this modest corpus of early insular sites, we are able to contribute to debates regarding island colonisation and the role of coastal environments in human dispersals to new territories.
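The abstract does not spell out the model's internals, but landscape-based predictive models of this kind are commonly supervised classifiers over environmental covariates. The following Python sketch is purely illustrative: the covariates, the toy training rule, and the 0.8 probability threshold are hypothetical assumptions, not the authors' choices.

# Minimal, illustrative sketch of a landscape-based site-prediction model.
# Covariates (slope, distance to coast, lithology score) and training
# data are hypothetical; the paper's actual model is not specified here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: rows are surveyed locations,
# columns are landscape covariates; y = 1 where sites were found.
X = rng.random((200, 3))          # slope, distance_to_coast, lithology score
y = (X[:, 1] < 0.3).astype(int)   # toy rule: sites cluster near the coast

model = LogisticRegression().fit(X, y)

# Score every cell of a survey grid; cells above a probability
# threshold become "high potential" locales to visit on foot.
grid = rng.random((1000, 3))
p_site = model.predict_proba(grid)[:, 1]
high_potential = grid[p_site > 0.8]
print(f"{len(high_potential)} of {len(grid)} grid cells flagged for survey")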


2021 ◽  
Vol 17 (4) ◽  
pp. 1-12
Author(s):  
Robert E. Tarjan ◽  
Caleb Levy ◽  
Stephen Timmel

We introduce the zip tree, a form of randomized binary search tree that integrates previous ideas into one practical, performant, and pleasant-to-implement package. A zip tree is a binary search tree in which each node has a numeric rank and the tree is (max)-heap-ordered with respect to ranks, with rank ties broken in favor of smaller keys. Zip trees are essentially treaps [8], except that ranks are drawn from a geometric distribution instead of a uniform distribution, and we allow rank ties. These changes enable us to use fewer random bits per node. We perform insertions and deletions by unmerging and merging paths (unzipping and zipping) rather than by doing rotations, which avoids some pointer changes and improves efficiency. The methods of zipping and unzipping take inspiration from previous top-down approaches to insertion and deletion by Stephenson [10], Martínez and Roura [5], and Sprugnoli [9]. From a theoretical standpoint, this work provides two main results. First, zip trees require only O(log log n) bits (with high probability) to represent the largest rank in an n-node binary search tree; previous data structures require O(log n) bits for the largest rank. Second, zip trees are naturally isomorphic to skip lists [7], and simplify Dean and Jones' mapping between skip lists and binary search trees.
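As a concrete illustration of the scheme described above, here is a compact Python sketch of zip-tree insertion: ranks come from a geometric distribution and rank ties are broken in favor of smaller keys. The field names are mine, and this uses a recursive formulation rather than a transcription of the paper's code; keys are assumed distinct.

# Sketch of zip-tree insertion. random_rank() counts coin flips until
# tails, giving a Geometric(1/2) rank; such ranks are O(log n) in value
# w.h.p., so O(log log n) bits suffice to store one.
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    key: int
    rank: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def random_rank() -> int:
    rank = 0
    while random.getrandbits(1):
        rank += 1
    return rank

def insert(x: Node, root: Optional[Node]) -> Node:
    """Insert x below root and return the new subtree root."""
    if root is None:
        return x
    if x.key < root.key:
        if insert(x, root.left) is x:      # x bubbled up from the left
            if x.rank < root.rank:
                root.left = x              # x stays below root
            else:                          # tie or higher rank: smaller key wins
                root.left = x.right
                x.right = root
                return x
    else:
        if insert(x, root.right) is x:     # x bubbled up from the right
            if x.rank <= root.rank:        # on a tie, root's smaller key wins
                root.right = x
            else:
                root.right = x.left
                x.left = root
                return x
    return root

# Usage: build a tree from 20 random keys.
root = None
for k in random.sample(range(100), 20):
    root = insert(Node(k, random_rank()), root)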


2021 ◽  
Vol 17 (4) ◽  
pp. 1-51
Author(s):  
Aaron Bernstein ◽  
Sebastian Forster ◽  
Monika Henzinger

Many dynamic graph algorithms have an amortized update time, rather than a stronger worst-case guarantee. But amortized data structures are not suitable for real-time systems, where each individual operation has to be executed quickly. For this reason, there exist many recent randomized results that aim to provide a guarantee stronger than amortized expected. The strongest possible guarantee for a randomized algorithm is that it is always correct (Las Vegas) and has high-probability worst-case update time, which gives a bound on the time for each individual operation that holds with high probability. In this article, we present the first polylogarithmic high-probability worst-case time bounds for the dynamic spanner and the dynamic maximal matching problem. (1) For dynamic spanner, the only known o(n) worst-case bounds were O(n^{3/4}) high-probability worst-case update time for maintaining a 3-spanner and O(n^{5/9}) for maintaining a 5-spanner. We give an O(1)^k log^3(n) high-probability worst-case time bound for maintaining a (2k-1)-spanner, which yields the first worst-case polylog update time for all constant k. (All the results above maintain the optimal tradeoff of stretch 2k-1 and Õ(n^{1+1/k}) edges.) (2) For dynamic maximal matching, or dynamic 2-approximate maximum matching, no algorithm with o(n) worst-case time bound was known, and we present an algorithm with O(log^5(n)) high-probability worst-case time; similar worst-case bounds existed only for maintaining a matching that was (2+ε)-approximate, and hence not maximal. Our results are achieved using a new approach for converting amortized guarantees to worst-case ones for randomized data structures by going through a third type of guarantee, which is a middle ground between the two above: an algorithm is said to have worst-case expected update time α if for every update σ, the expected time to process σ is at most α. Although stronger than amortized expected, the worst-case expected guarantee does not resolve the fundamental problem of amortization: a worst-case expected update time of O(1) still allows for the possibility that a 1/f(n) fraction of updates requires Θ(f(n)) time to process, for arbitrarily high f(n). In this article, we present a black-box reduction that converts any data structure with worst-case expected update time into one with a high-probability worst-case update time: the query time remains the same, while the update time increases by a factor of O(log^2(n)). Thus, we achieve our results in two steps: (1) first, we show how to convert existing dynamic graph algorithms with amortized expected polylogarithmic running times into algorithms with worst-case expected polylogarithmic running times; (2) then, we use our black-box reduction to achieve the polylogarithmic high-probability worst-case time bound. All our algorithms are Las-Vegas-type algorithms.
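The black-box reduction itself is involved, but the probabilistic amplification that lets a "worst-case expected" bound be upgraded to a "high-probability worst-case" one can be shown in a few lines. The Python sketch below is a toy illustration, not the authors' construction: if a single copy of a structure exceeds twice its expected update time α with probability at most 1/2 (Markov's inequality), then c*log2(n) independent copies all exceed it simultaneously with probability at most n^(-c).

# Toy illustration (not the paper's reduction) of probability
# amplification: run c*log2(n) independent copies; by Markov,
# each exceeds 2*alpha with probability <= 1/2, so all of them
# exceed it at once with probability <= n^(-c).
import math
import random

n = 10**6
alpha = 1.0                            # worst-case expected update time
copies = 3 * math.ceil(math.log2(n))   # c = 3

def update_time() -> float:
    # Hypothetical heavy-tailed update time with mean alpha.
    return random.expovariate(1.0 / alpha)

trials = 10_000
all_slow = sum(
    all(update_time() > 2 * alpha for _ in range(copies))
    for _ in range(trials)
)
print(f"all {copies} copies slow in {all_slow}/{trials} trials "
      f"(theoretical bound: <= {n**-3:.1e} per update)")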


2022 ◽  
Vol 54 (8) ◽  
pp. 1-36
Author(s):  
Xingwei Zhang ◽  
Xiaolong Zheng ◽  
Wenji Mao

Deep neural networks (DNNs) have been shown to be easily attacked by well-designed adversarial perturbations. Image objects with small perturbations that are imperceptible to human eyes can induce DNN-based image classifiers towards making erroneous predictions with high probability. Adversarial perturbations can also fool real-world machine learning systems and transfer between different architectures and datasets. Recently, defense methods against adversarial perturbations have become a hot topic and attracted much attention. A large number of works have been put forward to defend against adversarial perturbations, enhance DNN robustness against potential attacks, or interpret the origin of adversarial perturbations. In this article, we provide a comprehensive survey of classical and state-of-the-art defense methods by illuminating their main concepts, in-depth algorithms, and fundamental hypotheses regarding the origin of adversarial perturbations. In addition, we discuss potential research directions in this domain for future researchers.
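For readers unfamiliar with the attacks these defenses target, the canonical example is the fast gradient sign method (FGSM) of Goodfellow et al. The sketch below assumes a trained PyTorch classifier and one correctly classified input; it illustrates the attack family in general, not a method taken from this survey.

# Minimal sketch of the classic FGSM attack: a one-step perturbation
# of size eps in the direction of the loss gradient's sign. `model`,
# `image`, and `label` are assumed inputs (trained classifier in eval
# mode, image in [0, 1]); eps is chosen to be imperceptible.
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module,
                image: torch.Tensor,      # shape (1, C, H, W), values in [0, 1]
                label: torch.Tensor,      # shape (1,), true class index
                eps: float = 8 / 255) -> torch.Tensor:
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the sign of the gradient to maximize the classifier's loss.
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0, 1).detach()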


2021 ◽  
Vol 118 (42) ◽  
pp. e2108507118
Author(s):  
Kinneret Teodorescu ◽  
Ori Plonsky ◽  
Shahar Ayal ◽  
Rachel Barkan

External enforcement policies aimed at reducing violations differ on two key components: the probability of inspection and the severity of the punishment. Different lines of research offer different insights regarding the relative importance of each component. In four studies, students and Prolific crowdsourcing participants (total N = 816) repeatedly faced temptations to commit violations under two enforcement policies. Controlling for expected value, we found that a policy combining a high probability of inspection with a low severity of fines (HILS) was more effective than an economically equivalent policy that combined a low probability of inspection with a high severity of fines (LIHS). The advantage of prioritizing inspection frequency over punishment severity (HILS over LIHS) was greater for participants who, in the absence of enforcement, started out with a higher violation rate. Consistent with studies of decisions from experience, frequent enforcement with small fines was more effective than rare severe fines even when we announced the severity of the fine in advance to boost deterrence. In addition, in line with the phenomenon of underweighting of rare events, the effect was stronger when the probability of inspection was rarer (as in most real-life inspection probabilities) and was eliminated under moderate inspection probabilities. We thus recommend that policymakers looking to effectively reduce recurring violations among noncriminal populations consider increasing inspection rates rather than punishment severity.
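"Controlling for expected value" means the two policies impose the same average fine per violation. The toy calculation below, using hypothetical parameters rather than the study's, also shows the experiential mechanism behind underweighting of rare events: under a rare-but-severe policy, a large share of violators never samples a fine at all.

# Toy illustration (hypothetical parameters, not the study's design):
# two policies with equal expected fines, and why rare fines get
# underweighted in experience -- many violators never encounter one.
hils_p, hils_fine = 0.50, 2.0    # high inspection probability, low severity
lihs_p, lihs_fine = 0.05, 20.0   # low inspection probability, high severity
print(f"expected fine per violation: "
      f"HILS {hils_p * hils_fine}, LIHS {lihs_p * lihs_fine}")

# Probability a violator experiences no fine at all across 20 violations:
trials = 20
print(f"HILS: never fined in {trials} trials: {(1 - hils_p) ** trials:.2e}")
print(f"LIHS: never fined in {trials} trials: {(1 - lihs_p) ** trials:.2f}")
# ~0.36 under LIHS vs ~1e-6 under HILS: under LIHS roughly a third of
# violators experience enforcement as if it did not exist.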


2021 ◽  
Author(s):  
Geraldo Marcelo Lima ◽  
Kita Macário ◽  
Alexandre Costa ◽  
Eduardo Alves ◽  
Joaquim Filho ◽  
...  

Abstract: The Chapada Diamantina, in Northeastern Brazil, is one of the few places where one can find drylands with a backswamp containing hundreds of dead deciduous trees in the floodplain. During the 18th century, the region was globally important due to the exploitation of mineral resources. The death of these trees was caused by mining activities that silted the main river, leading to the impoundment of the tributary river and resulting in a wetland known as Pantanal Marimbus. Its indicators are: (i) a backswamp morphological feature that remains permanently flooded along the axis of the fluvial course, and (ii) alluvial fans concentrated in the one footslope area where mining activities at the Chapada Diamantina were also concentrated. The hydrological and sedimentological behavior was investigated through a multi-method approach. By analysing four different samples from the bark and core of the same tree, we obtained calibrated radiocarbon dates within the 18th century. Since no robust dendrochronology could be performed, a simple sequence model was built, revealing a high probability that the tree lived until approximately 1700 AD. Pioneering 14C-AMS measurements made it possible to evaluate the juvenile evolutionary state of these 300-year-old wetlands.


2021 ◽  
Vol 11 (3) ◽  
pp. 130-139
Author(s):  
Mikhail Drapalyuk ◽  
Nikita Ushakov ◽  
Nikolai Jujukin ◽  
Aleksey Zhuravlev

This paper analyses the sowing methods and existing types of seeders used in forestry and agrotechnical complexes, as well as the relevant patent literature. An analysis of the domestic designs of the SLP-M, SLU-5-20 and "Litva-25" seeders, intended for sowing small forest seeds in nurseries and open ground, showed that they are energy-intensive and do not always ensure the embedding of seeds in moist soil. Promising directions of resource conservation in agriculture are considered: sowing using "no-till" or "mini-till" technology, which allows seeds to be sown in untreated or minimally cultivated soil. The combined seeder AGRATOR-DK is equipped with a disc cultivator and a seeder with coulters. The RAPID RDA-450S seeder from VADERSTAD has spherical discs that cultivate the soil in one pass. The presented sowing methods and seeding devices share a significant drawback: they require additional working bodies, with a high probability of dry soil getting into the seed grooves as the top layer of soil is moved back and forth. A coulter that can vary its angle of entry into the soil and the planting depth of small forest seeds was developed. Preliminary laboratory studies showed the operability of a mock-up specimen of the coulter and its ability to cut the seed furrow by slicing out a layer of soil, forming a void above the furrow bottom into which seeds were fed through tubes from funnels. The seeds were then covered by the soil layer settling under its own gravity.


Land ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1056
Author(s):  
Albert Poponi Maniraho ◽  
Richard Mind’je ◽  
Wenjiang Liu ◽  
Vincent Nzabarinda ◽  
Patient Mindje Kayumba ◽  
...  

Land use and land cover (LULC) management influences the severity of soil erosion risk. However, crop management (C) is a factor of the Revised Universal Soil Loss Equation (RUSLE) model whose determination must be handled with care, as it directly influences soil loss rate estimations. Thus, the present study applied an adapted C-factor estimation approach (CvkA), modified from the former approach (Cvk), to assess the impact of LULC dynamics on soil erosion risk in an agricultural area of Rwanda, taking the Western Province as a case study. The results disclosed that the formerly used Cvk was not suitable, as it tended to overestimate C-factor values compared with those obtained from the CvkA. Approximate mean soil losses of 15.1 t ha⁻¹ yr⁻¹, 47.4 t ha⁻¹ yr⁻¹, 16.3 t ha⁻¹ yr⁻¹, 66.8 t ha⁻¹ yr⁻¹ and 15.3 t ha⁻¹ yr⁻¹ were found for 2000, 2005, 2010, 2015 and 2018, respectively. The results also indicated a small increase in mean annual soil loss from 15.1 t ha⁻¹ yr⁻¹ in 2000 to 15.3 t ha⁻¹ yr⁻¹ in 2018 (1.3%). Moreover, the soil erosion risk categories indicated that about 57.5%, 21.8%, 64.9%, 15.5% and 73.8% of the area had a soil erosion rate within the sustainable tolerance (≤10 t ha⁻¹ yr⁻¹), while about 42.5%, 78.2%, 35.1%, 84.5% and 16.8% had an unsustainable mean soil erosion rate (>10 t ha⁻¹ yr⁻¹) in 2000, 2005, 2010, 2015 and 2018, respectively. A major portion of the area fell under the high and very high probability zones, whereas only a small portion fell under the very low, low, moderate and extremely high probability zones. Therefore, the CvkA approach presents the most suitable alternative for estimating soil loss in the Western Province of Rwanda, with reasonable soil loss prediction results. The study area needs urgent intervention for soil conservation planning, taking into account the implementation of effective conservation practices such as terracing for soil erosion control.
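For context, RUSLE estimates annual soil loss as the product A = R × K × LS × C × P, and the C raster is exactly what the Cvk and CvkA approaches estimate from LULC data. The numpy sketch below uses entirely hypothetical factor values, just to show the computation and the 10 t ha⁻¹ yr⁻¹ tolerance threshold used above.

# Minimal sketch of the RUSLE computation A = R * K * LS * C * P over
# factor rasters. All values are toy assumptions, not the study's data;
# the C raster is the factor the Cvk / CvkA approaches estimate.
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)                       # toy raster grid

R  = rng.uniform(1000, 3000, shape)      # rainfall erosivity
K  = rng.uniform(0.05, 0.2, shape)       # soil erodibility
LS = rng.uniform(0.1, 2.0, shape)        # slope length and steepness
C  = rng.uniform(0.001, 0.3, shape)      # crop management (the contested factor)
P  = np.ones(shape)                      # support practice (none applied)

A = R * K * LS * C * P                   # soil loss per cell, t/ha/yr

sustainable = (A <= 10).mean() * 100     # tolerance threshold of 10 t/ha/yr
print(f"mean soil loss: {A.mean():.1f} t/ha/yr; "
      f"{sustainable:.1f}% of cells within tolerance")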

