BayesProject: Fast computation of a projection direction for multivariate changepoint detection

2020 ◽  
Vol 30 (6) ◽  
pp. 1691-1705
Author(s):  
Georg Hahn ◽  
Paul Fearnhead ◽  
Idris A. Eckley

This article focuses on the challenging problem of efficiently detecting changes in mean within multivariate data sequences. Multivariate changepoints can be detected by projecting a multivariate series to a univariate one using a suitable projection direction that preserves a maximal proportion of signal information. However, for some existing approaches the computation of such a projection direction can scale unfavourably with the number of series and might rely on additional assumptions on the data sequences, thus limiting their generality. We introduce BayesProject, a computationally inexpensive Bayesian approach to compute a projection direction in such a setting. The proposed approach allows the incorporation of prior knowledge of the changepoint scenario, when such information is available, which can help to increase the accuracy of the method. A simulation study shows that BayesProject is robust, yields projections close to the oracle projection direction and, moreover, that its accuracy in detecting changepoints is comparable to, or better than, existing algorithms while scaling linearly with the number of series.
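
A rough illustration of the projection idea described above (reduce the p-variate series to a univariate one along a suitable direction, then scan it with a univariate statistic) is sketched below in Python. This is not the authors' BayesProject algorithm: the oracle-style direction, the CUSUM form, and the toy data are assumptions made purely for illustration.

```python
import numpy as np

def cusum_stat(y):
    """Univariate CUSUM statistic for a single change in mean."""
    n = len(y)
    s = np.cumsum(y - y.mean())
    t = np.arange(1, n)
    # Scale so the statistic is comparable across candidate split points.
    return np.abs(s[:-1]) / np.sqrt(t * (n - t) / n)

def detect_via_projection(X, v):
    """Project the p-variate series X (n x p) onto direction v,
    then locate the most likely changepoint in the univariate series."""
    v = v / np.linalg.norm(v)
    y = X @ v
    stat = cusum_stat(y)
    return np.argmax(stat) + 1, stat.max()

# Toy example: mean shift in the first 3 of 50 series after t = 100.
rng = np.random.default_rng(0)
n, p, tau = 200, 50, 100
X = rng.normal(size=(n, p))
X[tau:, :3] += 1.0
# Oracle-style direction: difference of the two segment means.
v = X[tau:].mean(axis=0) - X[:tau].mean(axis=0)
print(detect_via_projection(X, v))
```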

2020 ◽  
Author(s):  
Kshema Jose

This study observed how two hypertext features – absence of a linear or author-specified order and availability of multiple reading aids – influence the reading comprehension processes of ESL readers. Studies with native or highly proficient users of English have suggested that readers reading hypertexts comprehend better than readers reading print texts. This has been attributed to (i) the presence of hyperlinks that provide access to additional information that can potentially help overcome comprehension obstacles and (ii) the absence of an author-imposed reading order, which helps readers exercise cognitive flexibility. An aspect that remains largely unresearched is how well readers with low language competence comprehend hypertexts. This research sought to initiate research in the area by exploring the question: do all ESL readers comprehend a hypertext better than a print text?

Keeping in mind that a majority of readers reading online texts in English can be hindered by three types of comprehension deficits – low levels of language proficiency, non-availability of prior knowledge, or both – this study investigated how two characteristic features of hypertext, viz., linking to additional information and non-linearity in the presentation of information, affect the reading comprehension of ESL readers.

Two types of texts that occur in the electronic medium – linear or pre-structured texts and non-linear or self-navigating texts – were used in this study. Based on a comparison of subjects' comprehension outcomes and free recalls, text factors and reader factors that can influence the hypertext reading comprehension of ESL readers are identified.

Contrary to what many researchers believe, the results indicate that self-navigating hypertexts might not promote deep comprehension in all ESL readers.


2020 ◽  
Author(s):  
Laetitia Zmuda ◽  
Charlotte Baey ◽  
Paolo Mairano ◽  
Anahita Basirat

It is well known that individuals can identify novel words in a stream of an artificial language using statistical dependencies. While the underlying computations are thought to be similar from one stream to another (e.g. transitional probabilities between syllables), performance is not. According to the "linguistic entrenchment" hypothesis, this is because individuals have prior knowledge regarding co-occurrences of elements in speech, which intervenes during verbal statistical learning. The focus of previous studies was on task performance. The goal of the current study is to examine the extent to which prior knowledge impacts metacognition (i.e. the ability to evaluate one's own cognitive processes). Participants were exposed to two different artificial languages. Using a fully Bayesian approach, we estimated an unbiased measure of metacognitive efficiency and compared the two languages in terms of task performance and metacognition. While task performance was higher in one of the languages, metacognitive efficiency was similar in both. In addition, a model assuming no correlation between the two languages accounted for our results better than a model where correlations were introduced. We discuss the implications of our findings regarding the computations that underlie the interaction between input and prior knowledge during verbal statistical learning.
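
For readers unfamiliar with the transitional probabilities mentioned above, the following minimal Python sketch computes TP(B|A) over a toy syllable stream of the kind used in artificial-language experiments. The two made-up words and the stream construction are assumptions for illustration; this is not the authors' experimental material or their Bayesian metacognition model.

```python
import random
from collections import Counter

def transitional_probabilities(stream):
    """TP(B|A) = count(A followed by B) / count(A) over a syllable stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

# Toy artificial language: two trisyllabic words concatenated in random order.
random.seed(1)
words = [["tu", "pi", "ro"], ["go", "la", "bu"]]
stream = [s for _ in range(50) for s in random.choice(words)]

tps = transitional_probabilities(stream)
print(tps.get(("tu", "pi"), 0.0))  # within-word TP: 1.0
print(tps.get(("ro", "go"), 0.0))  # across-word TP: roughly 0.5
```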


2020 ◽  
Vol 8 ◽  
pp. 199-214
Author(s):  
Xi (Leslie) Chen ◽  
Sarah Ita Levitan ◽  
Michelle Levine ◽  
Marko Mandic ◽  
Julia Hirschberg

Humans rarely perform better than chance at lie detection. To better understand human perception of deception, we created a game framework, LieCatcher, to collect ratings of perceived deception using a large corpus of deceptive and truthful interviews. We analyzed the acoustic-prosodic and linguistic characteristics of language trusted and mistrusted by raters and compared these to the characteristics of actually truthful and deceptive language to understand how perception aligns with reality. With these data we built classifiers to automatically distinguish trusted from mistrusted speech, achieving an F1 of 66.1%. We next evaluated whether the strategies raters said they used to discriminate between truthful and deceptive responses were in fact useful. Our results show that, although several prosodic and lexical features were consistently perceived as trustworthy, they were not reliable cues, and the strategies judges reported using for deception detection did not help with the task. Our work sheds light on the nature of trusted language and provides insight into the challenging problem of human deception detection.
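
As a hedged sketch of the classification setup described above (not the paper's actual features or model), the following Python snippet trains a logistic-regression classifier on a stand-in feature matrix and reports F1. The synthetic features and labels are placeholders for the acoustic-prosodic and lexical features the study extracts.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: rows are responses, columns are
# acoustic-prosodic/lexical features (e.g. pitch, intensity, word counts);
# labels are 1 = trusted by raters, 0 = mistrusted.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```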



Author(s):  
Diwakar Shukla ◽  
Uttam Kumar Khedlekar ◽  
Raghovendra Pratap Singh Chandel

This paper presents an inventory model in which demand is a linear function of both time and price. The time and price coefficients are examined simultaneously, and it is shown that time dominates price as a driver of profit. It is also shown that the deterioration rate of the inventoried item is one of the most sensitive parameters among many others. The robustness of the suggested model is examined through variations in the input parameters, and ranges are specified over which the model remains robust and the profit optimal in most cases. Two kinds of doubly-dependent demand strategies are examined and compared across two different cases; the second strategy is found to be better than the first. Holding cost is treated as a variable. The theoretical results are supported by a numerical simulation study with robustness checks. Recommendations are given for inventory managers, and open problems are discussed for researchers. The model is more realistic than those considered by earlier authors.
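
To make the ingredients of such a model concrete, here is a small Python sketch of a demand function linear in time and price, with the profit optimized over price. The functional forms, parameter values, and holding-cost treatment are illustrative assumptions only, not the authors' model.

```python
from scipy.optimize import minimize_scalar

# Hypothetical parameterisation: demand linear in time and price,
# D(t, p) = a + b*t - c*p, with unit cost k and holding-cost rate h.
a, b, c = 100.0, 2.0, 1.5     # base demand, time coefficient, price coefficient
k, h, T = 20.0, 0.5, 12.0     # unit cost, holding-cost rate, planning horizon

def profit(p):
    # Total demand over [0, T]: the linear form integrates in closed form.
    demand_total = a * T + b * T**2 / 2 - c * p * T
    holding = h * demand_total / 2          # rough average-inventory proxy
    return (p - k) * demand_total - holding

res = minimize_scalar(lambda p: -profit(p), bounds=(k, 100), method="bounded")
print("optimal price:", res.x, "profit:", profit(res.x))
```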


1999 ◽  
Vol 64 (1) ◽  
pp. 55-70 ◽  
Author(s):  
Robert G. Aykroyd ◽  
David Lucy ◽  
A. Mark Pollard ◽  
Charlotte A. Roberts

It is generally assumed that life expectancy in antiquity was considerably shorter than it is now. In the limited number of cases where skeletal or dental age-at-death estimates have been made on adults for whom there are other reliable indications of age, there appears to be a clear systematic trend towards overestimating the age of young adults, and underestimating that of older individuals. We show that this might be a result of the use of regression-based techniques of analysis for converting age indicators into estimated ages. Whilst acknowledging the limitations of most age-at-death indicators in the higher age categories, we show that a Bayesian approach to converting age indicators into estimated age can reduce this trend of underestimation at the older end. We also show that such a Bayesian approach can always do better than regression-based methods in terms of giving a smaller average difference between predicted age and known age, and a smaller average 95-percent confidence interval width of the estimate. Given these observations, we suggest that Bayesian approaches to converting age indicators into age estimates deserve further investigation. In view of the generality and flexibility of the approach, we also suggest that similar algorithms may have a much wider application.
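
The contrast the authors draw can be sketched numerically: instead of regressing age on an indicator, the Bayesian route models the indicator given age and inverts via Bayes' rule over a grid of candidate ages. The Gaussian indicator model, flat prior, and parameter values below are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

ages = np.arange(15, 91)                    # grid of candidate adult ages
prior = np.ones_like(ages, float)
prior /= prior.sum()                        # flat prior over the grid

def likelihood(indicator, age, slope=0.1, sd=1.0):
    """Hypothetical forward model: indicator ~ Normal(slope * age, sd)."""
    return np.exp(-0.5 * ((indicator - slope * age) / sd) ** 2)

def posterior_age(indicator):
    post = prior * likelihood(indicator, ages)
    return post / post.sum()

post = posterior_age(7.0)                   # observed indicator score
mean_age = (ages * post).sum()
cred = ages[np.searchsorted(post.cumsum(), [0.025, 0.975])]
print(mean_age, cred)                       # point estimate and 95% interval
```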


Author(s):  
A. TETERUKOVSKIY

A problem of automatic detection of tracks in aerial photos is considered. We adopt a Bayesian approach and base our inference on a priori knowledge of the structure of tracks. The probability that a pixel belongs to a track depends on how its gray level differs from the gray levels of pixels in its neighborhood, and on additional prior information. Several suggestions are made on how to formalize the prior knowledge about the shape of the tracks. The Gibbs sampler is used to construct the most probable configuration of tracks in the area. The method is applied to aerial photos with a cell size of 1 sq. m. Even for the detection of trails of width comparable with or smaller than the cell size, positive results can be achieved.
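
A toy version of this Gibbs-sampling idea is sketched below: binary track labels with a Gaussian gray-level likelihood and an Ising-style smoothness prior, updated pixel by pixel. The paper's actual prior encodes track shape more specifically; the image, parameters, and prior below are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy image: a dark vertical trail on a brighter background.
H, W = 40, 40
img = rng.normal(0.8, 0.1, (H, W))
img[:, 18:20] = rng.normal(0.4, 0.1, (H, 2))

beta = 1.5                     # strength of the smoothness (Ising-style) prior
mu_bg, mu_tr, sd = 0.8, 0.4, 0.1

def gibbs(img, sweeps=30):
    """Sample binary track labels; the prior favours agreeing neighbours."""
    z = (img < 0.6).astype(int)            # crude initialisation
    H, W = img.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                nbrs = [(i-1, j), (i+1, j), (i, j-1), (i, j+1)]
                nbrs = [(a, b) for a, b in nbrs if 0 <= a < H and 0 <= b < W]
                nb = sum(z[a, b] for a, b in nbrs)
                # log-odds = gray-level likelihood ratio + prior agreement term
                ll = (-(img[i, j] - mu_tr)**2 + (img[i, j] - mu_bg)**2) / (2 * sd**2)
                lo = ll + beta * (2 * nb - len(nbrs))
                z[i, j] = rng.random() < 1 / (1 + np.exp(-lo))
    return z

print(gibbs(img).sum(), "pixels labelled as track")
```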


2015 ◽  
Vol 31 (3) ◽  
pp. 415-429 ◽  
Author(s):  
Loredana Di Consiglio ◽  
Tiziana Tuoto

The capture-recapture method is a well-known solution for estimating the unknown size of a population. Administrative data provide independent counts of a population and can be exploited jointly to apply the capture-recapture method. Of course, administrative sources are affected by over- or undercoverage when considered separately. The standard Petersen approach relies on strong assumptions, including perfect record linkage between lists. In practice, record linkage results can be affected by errors. A simple method for achieving linkage-error-unbiased estimates of the population total is proposed in Ding and Fienberg (1994). In this article, an extension of the Ding and Fienberg model, obtained by relaxing their conditions, is proposed. The procedures are illustrated by estimating the total number of road casualties on the basis of a probabilistic record linkage between two administrative data sources. Moreover, a simulation study is presented, providing evidence that the adjusted estimator always performs better than the Petersen estimator.
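
The estimators involved are easy to state. The Petersen estimator is N = n1*n2/m for list sizes n1, n2 and m matched units; when record linkage misses true matches, the observed match count is deflated and the estimate inflated. The Python sketch below shows the classical estimator alongside a simplified linkage-error adjustment in the spirit of Ding and Fienberg (1994), assuming a known probability lam that a true match is linked and ignoring false links; the numbers are made up.

```python
def petersen(n1, n2, m):
    """Classical Petersen (Lincoln-Petersen) estimate of population size."""
    return n1 * n2 / m

def linkage_adjusted(n1, n2, m_obs, lam):
    """Simplified adjustment in the spirit of Ding and Fienberg (1994):
    if each true match is detected by the record linkage with probability
    lam (false links ignored here), E[m_obs] = lam * m, so divide out lam."""
    m_true = m_obs / lam
    return n1 * n2 / m_true

# Toy numbers: two administrative lists of road casualties.
n1, n2 = 1200, 950          # list sizes
m_obs = 640                 # linked pairs found by probabilistic linkage
lam = 0.9                   # assumed probability that a true match is linked

print(petersen(n1, n2, m_obs))            # inflated under missed links
print(linkage_adjusted(n1, n2, m_obs, lam))
```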

