A simulation study of single-trial ERP latency estimation methods

1996
Author(s):
Kevin M. Spencer
Emanuel Donchin

2021
pp. 073428292110277
Author(s):
Ioannis Tsaousis
Georgios D. Sideridis
Hannan M. AlGhamdi

This study evaluated the psychometric quality of a computerized adaptive testing (CAT) version of the general cognitive ability test (GCAT), using a simulation study protocol put forth by Han (2018a). For the needs of the analysis, three different sets of items were generated, providing an item pool of 165 items. Before evaluating the efficiency of the GCAT, all items in the final item pool were linked (equated) following a sequential approach. Data were generated from a standard normal distribution (M = 0, SD = 1) for 10,000 virtual individuals. Using the measure's 165-item bank, the ability value (θ) of each participant was estimated. Maximum Fisher information (MFI) and maximum likelihood estimation with fences (MLEF) were used as the item selection and score estimation methods, respectively. For item exposure control, the fade away method (FAM) was preferred. The termination criterion was a standard error (SE) of 0.33 or less. The study revealed that the average number of items administered to the 10,000 participants was 15. Moreover, the precision in estimating participants' ability scores was very high, as demonstrated by the CBIAS, CMAE, and CRMSE. It is concluded that the CAT version of the test is a promising alternative to administering the corresponding full-length measure, since it reduces the number of administered items, prevents high rates of item exposure, and provides accurate scores with minimum measurement error.
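The adaptive loop the abstract describes (MFI item selection, a maximum-likelihood ability estimate, and an SE ≤ 0.33 stopping rule) can be sketched as follows. This is a minimal illustration only: the Rasch (1PL) item bank, the grid-search MLE, and the single simulated examinee are invented stand-ins, and the MLEF fences and FAM exposure control used in the study are not reproduced. A 1PL sketch also needs more items to reach the SE threshold than the operational pool does, so test lengths here will not match the study's average of 15.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Rasch (1PL) item bank standing in for the 165-item pool.
bank_b = rng.normal(0.0, 1.0, 165)   # item difficulties
theta_true = 0.5                     # one simulated examinee's ability

def p_correct(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def mle_theta(responses, bs, grid=np.linspace(-4, 4, 801)):
    """Grid-search maximum-likelihood ability estimate."""
    p = p_correct(grid[:, None], np.asarray(bs)[None, :])
    r = np.asarray(responses, dtype=float)[None, :]
    loglik = (r * np.log(p) + (1 - r) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

theta_hat, administered, responses = 0.0, [], []
available = set(range(len(bank_b)))
while True:
    # MFI step: administer the unused item with maximum Fisher information
    # at the current ability estimate (for Rasch, info = p * (1 - p)).
    infos = {i: (p := p_correct(theta_hat, bank_b[i])) * (1 - p) for i in available}
    item = max(infos, key=infos.get)
    available.discard(item)
    administered.append(item)
    responses.append(float(rng.random() < p_correct(theta_true, bank_b[item])))
    theta_hat = mle_theta(responses, bank_b[administered])
    info_total = sum(p_correct(theta_hat, bank_b[i]) * (1 - p_correct(theta_hat, bank_b[i]))
                     for i in administered)
    se = 1.0 / np.sqrt(info_total)   # asymptotic SE of the ability estimate
    if se <= 0.33 or not available:
        break
```

The stopping rule is why CAT shortens tests: administration ends as soon as the ability estimate is precise enough, rather than after a fixed number of items.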


Econometrics
2020
Vol 8 (4)
pp. 40
Author(s):
Erhard Reschenhofer
Manveer K. Mangat

For typical sample sizes occurring in economic and financial applications, the squared bias of estimators for the memory parameter is small relative to the variance. Smoothing is therefore a suitable way to improve the performance in terms of the mean squared error. However, in an analysis of financial high-frequency data, where the estimates are obtained separately for each day and then combined by averaging, the variance decreases with the sample size but the bias remains fixed. This paper proposes a method of smoothing that does not entail an increase in the bias. This method is based on the simultaneous examination of different partitions of the data. An extensive simulation study is carried out to compare it with conventional estimation methods. In this study, the new method outperforms its unsmoothed competitors with respect to the variance and its smoothed competitors with respect to the bias. Using the results of the simulation study for the proper interpretation of the empirical results obtained from a financial high-frequency dataset, we conclude that significant long-range dependencies are present only in the intraday volatility but not in the intraday returns. Finally, the robustness of these findings against daily and weekly periodic patterns is established.
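The bias-variance point above (averaging per-day estimates shrinks the variance but leaves the bias untouched) can be illustrated with a toy model. The numbers below are invented purely for illustration and have nothing to do with the paper's actual estimator of the memory parameter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented numbers for illustration only: each day's estimate of the memory
# parameter d equals the truth plus a fixed bias plus zero-mean noise.
d_true, bias, noise_sd, n_days = 0.0, 0.05, 0.30, 250

daily = d_true + bias + rng.normal(0.0, noise_sd, n_days)
combined = daily.mean()

# Averaging across days shrinks the standard error of the combined estimate
# to roughly noise_sd / sqrt(n_days), but the bias of each daily estimate
# carries over to the average unchanged.
```

This is why the paper targets the bias: once daily estimates are averaged over many days, the residual error of the combined estimate is dominated by the fixed bias, not the variance.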


2018
Vol 104 (2)
pp. 121-123
Author(s):
Robin D Marlow
Dora L B Wood
Mark D Lyttle

Objective: Estimating weight is essential in order to prepare appropriately sized equipment and doses of resuscitation drugs when children are critically ill or injured. Many methods exist, with varying degrees of complexity and accuracy. The most recent version of the Advanced Paediatric Life Support (APLS) course has changed its teaching from an age-based calculation method to the use of a reference table. We aimed to evaluate the potential implications of this change.
Method: Using a bespoke online simulation platform, we assessed the ability of acute paediatric staff to apply different methods of weight estimation, comparing the time taken and the rate and magnitude of errors made using the APLS single and triple age-based formulae, the Best Guess formulae, and the reference table. To add urgency and an element of cognitive stress, a time-based competitive component was included.
Results: 57 participants performed a total of 2240 weight estimates. The reference table was the fastest (25 (22–28) vs 35 (31–38) to 48 (43–51) s) and most preferred, but errors were made with all methods. There was no significant difference in percentage accuracy between methods (93%–97%), but the magnitude of errors was significantly smaller with the three APLS formulae (10% (6.5–21)) than with the reference table (69% (34–133)), mainly arising from month/year table confusion.
Conclusion: In this exploratory study under psychological stress, none of the methods of weight estimation was free from error. The reference table was the fastest method but also produced the largest errors; such tables should be designed to minimise the risk of picking errors.
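For context, the age-based formulae contrasted with the reference table can be written out as below. These are the formulae as commonly taught on APLS courses; treat this as an illustrative sketch of the calculation methods under comparison, not a clinical reference.

```python
def apls_weight_single(age_years):
    """Older single APLS formula, taught for roughly ages 1-10:
    weight (kg) = 2 * (age in years + 4)."""
    return 2 * (age_years + 4)

def apls_weight_triple(age_years, age_months=None):
    """The three age-banded ("triple") APLS formulae, as commonly taught.
    Illustrative sketch only - not a clinical reference."""
    if age_months is not None and age_months <= 12:
        return 0.5 * age_months + 4    # infants 0-12 months
    if 1 <= age_years <= 5:
        return 2 * age_years + 8       # 1-5 years
    if 6 <= age_years <= 12:
        return 3 * age_years + 7       # 6-12 years
    raise ValueError("age outside the range covered by these formulae")
```

Note that for a 1-5 year old the triple formula reduces to the old single formula, so the two approaches diverge only for infants and older children.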


2020
Vol 12 (21)
pp. 3631
Author(s):
Alexandre Corazza
Ali Khenchaf
Fabrice Comblet

Wind information in SAR images is essential for characterizing a marine environment in offshore or coastal areas. More and more applications require high-resolution wind field estimation. In this article, classical wind wave direction estimation methods, such as the spectral and gradient approaches, are reviewed. In addition, a way to enhance the spectral method with the Radon transform is proposed. The aim of this document is to determine which method performs best when the resolution grid is finer. Therefore, the accuracy, fidelity, and uncertainty of the methods are compared through a simulation study, a section with RadarSAT2 data in a coastal area, and another with Sentinel-1 measurements in an offshore area.
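The spectral approach mentioned above can be sketched on a synthetic wave pattern: the dominant peak of the 2-D power spectrum lies along the wave-number vector, perpendicular to the wave crests. The image size, wavelength, orientation, and noise level below are invented for the demonstration, and the paper's Radon-transform enhancement is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "SAR-like" patch: a plane wave at a known orientation plus noise.
n, true_deg = 128, 30.0
y, x = np.mgrid[0:n, 0:n]
k = 2 * np.pi / 16.0                 # wavelength of 16 pixels
phi = np.deg2rad(true_deg)
img = np.sin(k * (x * np.cos(phi) + y * np.sin(phi))) + 0.3 * rng.normal(size=(n, n))

# Spectral method: locate the dominant peak of the 2-D power spectrum.
spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
spec[n // 2, n // 2] = 0.0           # suppress any residual DC component
ky, kx = np.unravel_index(np.argmax(spec), spec.shape)

# The peak's position relative to the spectrum centre gives the wave-number
# direction; the modulo folds the 180-degree ambiguity of a real-valued image.
est_deg = np.degrees(np.arctan2(ky - n // 2, kx - n // 2)) % 180.0
```

The 180° ambiguity in the last line is inherent to the spectrum of a real image, which is one reason wind direction retrieval typically needs additional information (or refinements such as the Radon-based one proposed in the article).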


2020
Vol 11
Author(s):
Sarah Depaoli
Sonja D. Winter
Marieke Visser

The current paper highlights a new, interactive Shiny App that can be used to aid in understanding and teaching the important task of conducting a prior sensitivity analysis when implementing Bayesian estimation methods. In this paper, we discuss the importance of examining prior distributions through a sensitivity analysis. We argue that conducting a prior sensitivity analysis is equally important when so-called diffuse priors are implemented as it is with subjective priors. As a proof of concept, we conducted a small simulation study, which illustrates the impact of priors on final model estimates. The findings from the simulation study highlight the importance of conducting a sensitivity analysis of priors. This concept is further extended through an interactive Shiny App that we developed. The Shiny App allows users to explore the impact of various forms of priors using empirical data. We introduce this Shiny App and thoroughly detail an example using a simple multiple regression model that users at all levels can understand. In this paper, we highlight how to determine the different settings for a prior sensitivity analysis, how to visually and statistically compare results obtained in the sensitivity analysis, and how to display findings and write up disparate results obtained across the sensitivity analysis. The goal is that novice users can follow the process outlined here and work within the interactive Shiny App to gain a deeper understanding of the role of prior distributions and the importance of a sensitivity analysis when implementing Bayesian methods. The intended audience is broad (e.g., undergraduate or graduate students, faculty, and other researchers) and can include those with limited exposure to Bayesian methods or the specific model presented here.
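As a concrete (and entirely hypothetical) version of such a prior sensitivity analysis, the sketch below sweeps the prior standard deviation from informative to diffuse in a conjugate normal-mean model and records how the posterior mean moves. The data, prior settings, and model are illustrative stand-ins, not the Shiny App's actual regression example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented data: 20 observations from a normal with mean 2 and known variance.
data = rng.normal(loc=2.0, scale=1.0, size=20)
sigma2, prior_mean = 1.0, 0.0

def posterior_mean(prior_sd):
    """Conjugate normal update: precision-weighted average of the prior mean
    and the sample mean."""
    prior_prec = 1.0 / prior_sd**2
    data_prec = len(data) / sigma2
    return (prior_prec * prior_mean + data_prec * data.mean()) / (prior_prec + data_prec)

# Sweep the prior SD from highly informative to effectively diffuse, as one
# would in a sensitivity analysis, and compare the resulting estimates.
sweep = {sd: posterior_mean(sd) for sd in (0.1, 0.5, 1.0, 10.0)}
```

Even in this tiny example the informative prior (SD = 0.1) pulls the estimate strongly toward the prior mean while the diffuse prior (SD = 10) leaves it near the sample mean, which is exactly the kind of divergence a sensitivity analysis is meant to surface and report.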

