Efficient estimation of stereo thresholds: what slope should be assumed for the psychometric function?

2019 ◽  
Author(s):  
Ignacio Serrano-Pedraza ◽  
Kathleen Vancleef ◽  
William Herbert ◽  
Nicola Goodship ◽  
Maeve Woodhouse ◽  
...  

Bayesian staircases are widely used in psychophysics to estimate detection thresholds. Simulations have revealed the importance of the parameters selected for the assumed subject’s psychometric function in enabling thresholds to be estimated with small bias and high precision. One important parameter is the slope of the psychometric function, or equivalently its spread. This is often held fixed, rather than estimated for individual subjects, because much larger numbers of trials are required to estimate the spread as well as the threshold. However, if this fixed value is wrong, the threshold estimate can be biased. Here we determine the optimal slope to minimize bias and maximize precision when measuring stereoacuity with Bayesian staircases. We performed 2AFC and 4AFC disparity-detection stereo experiments to measure the spread of the disparity psychometric function in human observers, assuming a Logistic function. We found a wide range, between 0.03 and 3.5 log10 arcsec, with little change with age. We then ran simulations using the real data to examine the optimal spread. From our simulations, and for three different experiments, we recommend selecting assumed spread values between the 60th and 80th percentiles of the population distribution of spreads (these percentiles can be extended to other types of thresholds). For stereo thresholds, we recommend a spread σ = 1.7 log10 arcsec for 2AFC (slope β = 4.3/log10 arcsec), and σ = 1.5 log10 arcsec for 4AFC (β = 4.9/log10 arcsec). Finally, we compared a Bayesian procedure (ZEST using the optimal σ) with five Bayesian procedures that are versions of ZEST-2D, Psi, and Psi-marginal. In general, our recommended procedure showed the lowest threshold bias and highest precision.
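The quoted slope and spread values are mutually consistent if the spread is taken as the stimulus range over which the underlying Logistic rises from 2.5% to 97.5%, giving β = 2·ln(39)/σ ≈ 7.33/σ. This is one common convention, assumed here for illustration rather than stated in the abstract. A minimal Python sketch:

```python
import numpy as np

def logistic_psychometric(x, alpha, beta, guess=0.5, lapse=0.0):
    """Logistic psychometric function with guess rate (0.5 for 2AFC, 0.25 for 4AFC)."""
    core = 1.0 / (1.0 + np.exp(-beta * (x - alpha)))
    return guess + (1.0 - guess - lapse) * core

def slope_from_spread(sigma, coverage=0.95):
    """Slope beta of a Logistic whose central `coverage` rise spans `sigma` stimulus units."""
    p_hi = 0.5 + coverage / 2.0                       # e.g. 0.975
    return 2.0 * np.log(p_hi / (1.0 - p_hi)) / sigma  # = 2*ln(39)/sigma for 95%

print(round(slope_from_spread(1.7), 2))  # ~4.31 per log10 arcsec, matching the 2AFC value quoted
print(round(slope_from_spread(1.5), 2))  # ~4.88 per log10 arcsec, matching the 4AFC value quoted
```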

2020 ◽  
Vol 8 (2) ◽  
pp. B35-B43
Author(s):  
Julio Cesar S. O. Lyrio ◽  
Paulo T. L. Menezes ◽  
Jorlivan L. Correa ◽  
Adriano R. Viana

When collecting and processing geophysical data for exploration, the same geologic feature can generate a different response for each rock property being targeted. Typically, the units of these responses may differ by several orders of magnitude; therefore, the combination of geophysical data in integrated interpretation is not a straightforward process and cannot be performed by visual inspection alone. The multiphysics anomaly map (MAM) that we have developed is a data fusion solution that consists of a spatial representation of the correlation between anomalies detected with different geophysical methods. In the MAM, we mathematically process geophysical data such as seismic attributes, gravity, magnetic, and resistivity before combining them in a single map. In each data set, anomalous regions of interest, which are problem-dependent, are selected by the interpreter. Selected anomalies are highlighted through the use of a logistic function, which is specially designed to clip large magnitudes and rescale the range of values, increasing the discrimination of anomalies. The resulting anomalies, named logistic anomalies, represent regions with a high probability of target occurrence. This new solution highlights areas where individual interpretations of different geophysical methods correlate, increasing confidence in the interpretation. We determine the effectiveness of our MAM with application to real data from onshore and offshore Brazil. In the onshore Recôncavo Basin, the MAM allows the interpreter to identify a channel where a drilled well found the largest sandstone thickness in the area. In a second example, from the offshore Sergipe-Alagoas Basin, the MAM helps differentiate between a dry and an oil-bearing channel previously outlined in seismic data. These outcomes indicate that the MAM is a valid interpretation tool that we believe can be applied to a wide range of geologic problems.
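The key step described above is a logistic rescaling that squashes each attribute to a common (0, 1) range so that anomalies from very different physics can be compared and combined. The sketch below is an illustrative generic form, not the authors' parameterization: the median/interquartile-range centering and the product-based fusion are assumptions for the example.

```python
import numpy as np

def logistic_rescale(values, center=None, steepness=None):
    """Map an attribute to (0, 1) with a logistic squashing function.

    Large magnitudes are pushed toward 1 and small ones toward 0, so attributes
    whose units differ by orders of magnitude become comparable before fusion.
    Centering on the median and scaling by the interquartile range are
    illustrative choices only.
    """
    values = np.asarray(values, dtype=float)
    if center is None:
        center = np.median(values)
    if steepness is None:
        iqr = np.percentile(values, 75) - np.percentile(values, 25)
        steepness = 1.0 / iqr if iqr > 0 else 1.0
    return 1.0 / (1.0 + np.exp(-steepness * (values - center)))

def multiphysics_anomaly(*rescaled_maps):
    """Naive fusion: only locations where several methods agree keep a high score."""
    return np.prod(np.stack(rescaled_maps), axis=0)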


Author(s):  
Saheb Foroutaifar

Abstract The main objectives of this study were to compare the prediction accuracy of different Bayesian methods for traits with a wide range of genetic architectures, using simulation and real data, and to assess the sensitivity of these methods to violations of their assumptions. For the simulation study, different scenarios were implemented based on two traits with low or high heritability and different numbers of QTL and distributions of their effects. For the real data analysis, a German Holstein dataset for milk fat percentage, milk yield, and somatic cell score was used. The simulation results showed that, with the exception of Bayes R, the methods were sensitive to changes in the number of QTL and the distribution of QTL effects. Having a distribution of QTL effects similar to what a given Bayesian method assumes for estimating marker effects did not improve its prediction accuracy. The Bayes B method gave accuracy higher than or equal to that of the other methods. The real data analysis showed that, similar to the simulation scenarios with a large number of QTL, there was no difference between the accuracies of the different methods for any of the traits.


2019 ◽  
Vol 11 (6) ◽  
pp. 608 ◽  
Author(s):  
Yun-Jia Sun ◽  
Ting-Zhu Huang ◽  
Tian-Hui Ma ◽  
Yong Chen

Remote sensing images have been applied in a wide range of fields, but they are often degraded by various types of stripes, which affect the visual quality of the images and limit subsequent processing tasks. Most existing destriping methods fail to exploit the stripe properties adequately, leading to suboptimal performance. Based on a full consideration of the stripe properties, we propose a new destriping model that achieves stripe detection and stripe removal simultaneously. In this model, we adopt unidirectional total variation regularization to capture the directional property of stripes and weighted ℓ2,1-norm regularization to capture the joint sparsity of stripes. We then combine the alternating direction method of multipliers and iterative support detection to solve the proposed model effectively. Comparison results on simulated and real data suggest that the proposed method can remove and detect stripes effectively while preserving image edges and details.
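The two regularizers mentioned above can be illustrated with a few lines of Python. The sketch below only evaluates the penalties; the weight update (which the abstract attributes to iterative support detection) and the ADMM solver itself are not shown, and the uniform weights and axis conventions are assumptions for illustration.

```python
import numpy as np

def weighted_l21_norm(stripes, weights=None):
    """Weighted l2,1-norm of a stripe component, with columns assumed stripe-aligned.

    Each column is penalized by its l2 norm, so whole columns are encouraged to
    vanish (joint sparsity). In the paper the weights would come from iterative
    support detection; uniform weights are used here as a placeholder.
    """
    col_norms = np.linalg.norm(stripes, axis=0)
    if weights is None:
        weights = np.ones_like(col_norms)
    return float(np.sum(weights * col_norms))

def unidirectional_tv(image, axis=0):
    """Total variation of the image along a single direction (illustrative only;
    which axis corresponds to the across-stripe direction depends on the data)."""
    return float(np.abs(np.diff(image, axis=axis)).sum())
```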


2018 ◽  
Author(s):  
Adrian Fritz ◽  
Peter Hofmann ◽  
Stephan Majda ◽  
Eik Dahms ◽  
Johannes Dröge ◽  
...  

Shotgun metagenome data sets of microbial communities are highly diverse, not only due to the natural variation of the underlying biological systems, but also due to differences in laboratory protocols, replicate numbers, and sequencing technologies. Accordingly, to effectively assess the performance of metagenomic analysis software, a wide range of benchmark data sets is required. Here, we describe the CAMISIM microbial community and metagenome simulator. The software can model different microbial abundance profiles, multi-sample time series, and differential abundance studies; includes real and simulated strain-level diversity; and generates second- and third-generation sequencing data from taxonomic profiles or de novo. Gold standards are created for sequence assembly, genome binning, taxonomic binning, and taxonomic profiling. CAMISIM generated the benchmark data sets of the first CAMI challenge. For two simulated multi-sample data sets of the human and mouse gut microbiomes, we observed high functional congruence to the real data. As further applications, we investigated the effect of varying evolutionary genome divergence, sequencing depth, and read error profiles on two popular metagenome assemblers, MEGAHIT and metaSPAdes, using several thousand small data sets generated with CAMISIM. CAMISIM can simulate a wide variety of microbial communities and metagenome data sets together with truth standards for method evaluation. All data sets and the software are freely available at: https://github.com/CAMI-challenge/CAMISIM


2021 ◽  
Author(s):  
Mikhail Kanevski

Nowadays a wide range of methods and tools to study and forecast time series is available. An important problem in forecasting concerns the embedding of time series, i.e., the construction of a high-dimensional space in which the forecasting problem is considered as a regression task. There are several basic linear and nonlinear approaches to constructing such a space by defining an optimal delay vector using different theoretical concepts. Another way is to consider this space as an input feature space (IFS) and to apply machine learning feature selection (FS) algorithms to optimize the IFS according to the problem under study (analysis, modelling, or forecasting). Such an approach is an empirical one: it is based on data and depends on the FS algorithms applied. In machine learning, features are generally classified as relevant, redundant, or irrelevant. This offers a rich possibility to perform advanced multivariate time series exploration and to develop interpretable predictive models.

Therefore, in the present research different FS algorithms are used to analyze fundamental properties of time series from an empirical point of view. Linear and nonlinear simulated time series are studied in detail to understand the advantages and drawbacks of the proposed approach. Real data case studies deal with air pollution and wind speed time series. Preliminary results are quite promising and more research is in progress.
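The abstract's central idea is to treat lagged copies of the series as an input feature space and let a feature selection algorithm rank them. The sketch below uses mutual-information ranking from scikit-learn as one possible FS choice; the specific algorithm, lag depth, and toy series are assumptions for illustration, not the author's setup.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def delay_embed(series, max_lag):
    """Build an input feature space of lagged copies x(t-1), ..., x(t-max_lag)."""
    series = np.asarray(series, dtype=float)
    X = np.column_stack([series[max_lag - k:-k] for k in range(1, max_lag + 1)])
    y = series[max_lag:]                     # one-step-ahead target
    return X, y

# Rank the lags by relevance to forecasting; irrelevant or redundant lags get
# low scores and can be dropped from the embedding.
rng = np.random.default_rng(0)
series = np.sin(0.3 * np.arange(2000)) + 0.1 * rng.standard_normal(2000)
X, y = delay_embed(series, max_lag=20)
scores = mutual_info_regression(X, y)
print(np.argsort(scores)[::-1][:5] + 1)      # the five most informative lags
```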


2019 ◽  
Vol 34 (2) ◽  
pp. 485-508
Author(s):  
Tomoki Fujii ◽  
Roy van der Weide

Abstract It is costly to collect the household- and individual-level data that underlie official estimates of poverty and health. For this reason, developing countries often do not have the budget to update estimates of poverty and health regularly, even though these estimates are most needed there. One way to reduce the financial burden is to substitute some of the real data with predicted data by means of double sampling, where the expensive outcome variable is collected for a subsample and its predictors for all. This study finds that double sampling yields only modest reductions in financial costs when imposing a statistical precision constraint, across a wide range of realistic empirical settings. There are circumstances in which the gains can be more substantial, but these are the exception rather than the rule. The recommendation is to rely on real data whenever there is a need for new data and to use prediction estimators to leverage existing data.
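For readers unfamiliar with double sampling, a minimal sketch of the idea follows: the cheap predictor is observed for the whole sample, the expensive outcome only for a subsample, and a fitted prediction model fills the gap. This is a textbook regression (prediction) estimator under assumed names, not the specific estimator analyzed in the paper.

```python
import numpy as np

def double_sampling_mean(x_all, y_sub, sub_idx):
    """Regression estimator of the mean of y under double sampling.

    x_all   : cheap predictor collected for the full sample (1-D array)
    y_sub   : expensive outcome collected for the subsample only
    sub_idx : indices of the subsample within the full sample

    Fit y ~ x on the subsample and average the predictions over the full
    sample; with an intercept this equals the classical estimator
    y_bar_sub + b * (x_bar_all - x_bar_sub). Illustrative sketch only.
    """
    x_all = np.asarray(x_all, dtype=float)
    X = np.column_stack([np.ones_like(x_all), x_all])
    beta, *_ = np.linalg.lstsq(X[sub_idx], np.asarray(y_sub, dtype=float), rcond=None)
    return float((X @ beta).mean())
```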


Author(s):  
Janet Peacock ◽  
Sally Kerry

Presenting Medical Statistics includes a wide range of statistical analyses, and all the statistical methods are illustrated using real data. Labelled figures show the Stata and SPSS commands needed to obtain the analyses, with indications of which information should be extracted from the output for reporting. The relevant results are then presented as for a report or journal article, to illustrate the principles of good presentation.


Author(s):  
Janet L. Peacock ◽  
Sally M. Kerry ◽  
Raymond R. Balise

Presenting Medical Statistics from Proposal to Publication (second edition) aims to show readers how to conduct a wide range of statistical analyses from sample size calculations through to multifactorial regressions that are needed in the research process. The second edition of ‘Presenting’ has been revised and updated and now includes Stata, SAS, SPSS, and R. The book shows how to interpret each computer output and illustrates how to present the results and accompanying text in a format suitable for a peer-reviewed journal article or research report. All analyses are illustrated using real data and all programming code, outputs, and datasets used in the book are available on a website for readers to freely download and use. ‘Presenting’ includes practical information and helpful tips for software, all statistical methods used, and the research process. It is written by three experienced biostatisticians, Janet Peacock, Sally Kerry, and Ray Balise from the UK and the USA, and is born out of their extensive experience conducting collaborative medical research, teaching medical students, physicians, and other health professionals, and providing researchers with advice.


2020 ◽  
Vol 58 (3) ◽  
pp. 644-719 ◽  
Author(s):  
Mark F. J. Steel

The method of model averaging has become an important tool for dealing with model uncertainty, for example in situations where a large number of different theories exist, as is common in economics. Model averaging is a natural and formal response to model uncertainty in a Bayesian framework, and most of the paper deals with Bayesian model averaging. The important role of the prior assumptions in these Bayesian procedures is highlighted. In addition, frequentist model averaging methods are also discussed. Numerical techniques to implement these methods are explained, and I point the reader to some freely available computational resources. The main focus is on uncertainty regarding the choice of covariates in normal linear regression models, but the paper also covers other, more challenging, settings, with particular emphasis on sampling models commonly used in economics. Applications of model averaging in economics are reviewed and discussed in a wide range of areas, including growth economics, production modeling, finance, and forecasting macroeconomic quantities. (JEL C11, C15, C20, C52, O47)
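As a compact reminder of the standard identities behind Bayesian model averaging (textbook results, not specific to this survey), with a quantity of interest Δ shared across models M_1, …, M_K:

```latex
% Posterior model probabilities, marginal likelihoods, and the model-averaged
% posterior for a quantity of interest \Delta shared across models.
\[
  p(M_k \mid y) \;=\;
  \frac{p(y \mid M_k)\, p(M_k)}{\sum_{j=1}^{K} p(y \mid M_j)\, p(M_j)},
  \qquad
  p(y \mid M_k) \;=\; \int p(y \mid \theta_k, M_k)\, p(\theta_k \mid M_k)\, d\theta_k,
\]
\[
  p(\Delta \mid y) \;=\; \sum_{k=1}^{K} p(\Delta \mid y, M_k)\, p(M_k \mid y).
\]
```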


Geophysics ◽  
2020 ◽  
pp. 1-104
Author(s):  
Volodya Hlebnikov ◽  
Thomas Elboth ◽  
Vetle Vinje ◽  
Leiv-J. Gelius

The presence of noise in towed marine seismic data is a long-standing problem. The various types of noise present in marine seismic records are never truly random. Instead, seismic noise is more complex and often challenging to attenuate in seismic data processing. Therefore, we examine a wide range of real data examples contaminated by different types of noise, including swell noise, seismic interference noise, strumming noise, passing-vessel noise, vertical particle velocity noise, streamer hit and fishing gear noise, snapping shrimp noise, spike-like noise, cross-feed noise, and streamer-mounted device noise. The noise examples investigated focus only on data acquired with analogue group-forming. Each noise type is classified based on its origin, coherency, and frequency content. We then demonstrate how the noise component can be effectively attenuated through industry-standard seismic processing techniques. In this tutorial, we avoid presenting the finest details of either the physics of the different types of noise or the noise attenuation algorithms applied. Rather, we focus on presenting the noise problems themselves and show how well the community is able to address such noise. Our aim is that, based on the insights provided, the geophysical community will gain an appreciation of some of the most common types of noise encountered in towed marine seismic data, in the hope of inspiring more researchers to focus their attention on noise problems with greater potential industry impact.

