experimenter bias
Recently Published Documents


TOTAL DOCUMENTS: 81 (5 in the past five years)

H-INDEX: 16 (1 in the past five years)

2020 ◽  
Author(s):  
John Turri

“Gettier cases” have played a major role in Anglo-American analytic epistemology over the past fifty years. Philosophers have grouped a bewildering array of examples under the heading “Gettier case.” Philosophers claim that these cases are obvious counterexamples to the “traditional” analysis of knowledge as justified true belief, and they treat correctly classifying the cases as a criterion for judging proposed theories of knowledge. Cognitive scientists recently began testing whether philosophers are right about these cases. It turns out that philosophers were partly right and partly wrong. Some “Gettier cases” are obvious examples of ignorance, but others are obvious examples of knowledge. It also turns out that much research in this area of philosophy is marred by experimenter bias, invented historical claims, dysfunctional categorization of examples, and mischaracterization by philosophers of their own intuitive judgments about particular cases. Despite these shortcomings, lessons learned from studying “Gettier cases” are leading to important insights about knowledge and knowledge attributions, which are central components of social cognition.


Author(s):  
Christian L. Ebbesen ◽  
Robert C. Froemke

ABSTRACT: Social interactions powerfully impact both the brain and the body, but high-resolution descriptions of these important physical interactions are lacking. Currently, most studies of social behavior rely on labor-intensive methods such as manual annotation of individual video frames. These methods are susceptible to experimenter bias and have limited throughput. To understand the neural circuits underlying social behavior, scalable and objective tracking methods are needed. We present a hardware/software system that combines 3D videography, deep learning, physical modeling and GPU-accelerated robust optimization. Our system is capable of fully automatic multi-animal tracking during naturalistic social interactions and allows for simultaneous electrophysiological recordings. We capture the posture dynamics of multiple unmarked mice with high spatial (∼2 mm) and temporal (60 frames/s) precision. The method is based on inexpensive consumer cameras and is implemented in Python, making it cheap and straightforward to adopt and customize for studies of neurobiology and animal behavior.


2019 ◽  
Vol 30 (11) ◽  
pp. 1592-1602 ◽  
Author(s):  
Samantha M. W. Wood ◽  
Scott P. Johnson ◽  
Justin N. Wood

What mechanisms underlie learning in newborn brains? Recently, researchers reported that newborn chicks use unsupervised statistical learning to encode the transitional probabilities (TPs) of shapes in a sequence, suggesting that TP-based statistical learning can be present in newborn brains. Using a preregistered design, we attempted to reproduce this finding with an automated method that eliminated experimenter bias and allowed more than 250 times more data to be collected per chick. With precise measurements of each chick’s behavior, we were able to perform individual-level analyses and substantially reduce measurement error for the group-level analyses. We found no evidence that newborn chicks encode the TPs between sequentially presented shapes. None of the chicks showed evidence for this ability. Conversely, we obtained strong evidence that newborn chicks encode the shapes of individual objects, showing that this automated method can produce robust results. These findings challenge the claim that TP-based statistical learning is present in newborn brains.
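
The transitional probabilities at issue here are a standard construct; as a rough illustration (not the study's code), first-order TPs over a shape sequence can be estimated simply by counting adjacent pairs:

```python
from collections import Counter

def transitional_probabilities(sequence):
    """Estimate first-order transitional probabilities TP(b | a) =
    count(a -> b) / count(a -> anything) from a sequence of items."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A toy "statistical learning" stream: square is always followed by circle,
# while circle is followed by triangle twice as often as by square.
stream = ["square", "circle", "triangle", "square",
          "circle", "square", "circle", "triangle"]
tps = transitional_probabilities(stream)
```

In this toy stream, TP(circle | square) = 1.0 while TP(triangle | circle) = 2/3: exactly the kind of contrast a TP-based statistical learner would need to encode.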


2019 ◽  
Author(s):  
Amy L. Shepherd ◽  
A. Alexander T. Smith ◽  
Kirsty A. Wakelin ◽  
Sabine Kuhn ◽  
Jianping Yang ◽  
...  

ABSTRACT: Colorectal cancer is a major contributor to death and disease worldwide. The ApcMin mouse is a widely used model of intestinal neoplasia, as it carries a mutation also found in human colorectal cancers. However, the method most commonly used to quantify tumour burden in these mice is manual adenoma counting, which is time consuming and poorly suited to standardization across different laboratories. We describe a method to produce suitable photographs of the small intestine, process them with an ImageJ macro, FeatureCounter, that automatically locates image features potentially corresponding to adenomas, and identify and quantify those features with a machine-learning pipeline. Compared to a manual method, the specificity (or True Negative Rate, TNR) and sensitivity (or True Positive Rate, TPR) of this method in detecting adenomas are similarly high, at about 80% and 87% respectively. Importantly, total adenoma area measures derived from the automatically called tumours were just as capable of distinguishing high-burden from low-burden mice as those established manually. Overall, our strategy is quicker, helps control experimenter bias and yields a greater wealth of information about each tumour, thus providing a convenient route to obtaining consistent and reliable results from a study.
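
The sensitivity and specificity figures quoted above follow from the usual confusion-matrix definitions; a minimal sketch with hypothetical counts (not data from the study):

```python
def rates(tp, fp, tn, fn):
    """Sensitivity (TPR) = TP / (TP + FN): fraction of true adenomas detected.
    Specificity (TNR) = TN / (TN + FP): fraction of non-adenoma features
    correctly rejected."""
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    return tpr, tnr

# Hypothetical counts comparing automatically called features
# against a manual ground-truth annotation.
tpr, tnr = rates(tp=87, fp=20, tn=80, fn=13)
# tpr == 0.87, tnr == 0.80
```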


2019 ◽  
Author(s):  
Samantha Wood ◽  
Justin Wood

The accuracy of science depends on the precision of its methods. When fields produce precise measurements, the scientific method can generate remarkable gains in knowledge. When fields produce noisy measurements, however, the scientific method is not guaranteed to work; in fact, noisy measurements are now regarded as a leading cause of the replication crisis in psychology. Scientists should therefore strive to improve the precision of their methods, especially in fields with noisy measurements. Here, we show that automation can reduce measurement error by ~60% in one domain of developmental psychology: controlled-rearing studies of newborn chicks. Automated studies produce measurements that are 3-4 times more precise than those from non-automated studies, and effect sizes that are 3-4 times larger. Automation also eliminates experimenter bias and allows replications to be performed quickly and easily. We suggest that automation can be a powerful tool for improving measurement precision, producing high-powered experiments, and combating the replication crisis.
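
Why reduced measurement error inflates standardized effect sizes can be seen directly from the definition of Cohen's d: measurement noise adds variance to the denominator. A minimal illustration (the ~60% and 3-4x figures above are the paper's; the numbers below are made up):

```python
import math

def observed_cohens_d(mean_diff, sd_true, sd_error):
    """Observed standardized effect size when independent measurement noise
    (sd_error) adds to the true between-subject spread (sd_true)."""
    return mean_diff / math.sqrt(sd_true**2 + sd_error**2)

d_clean = observed_cohens_d(mean_diff=1.0, sd_true=1.0, sd_error=0.0)  # 1.0
d_noisy = observed_cohens_d(mean_diff=1.0, sd_true=1.0, sd_error=3.0)  # ~0.32
```

With noise three times the true spread, the observed effect shrinks by a factor of about three, even though the underlying effect is unchanged; removing that noise recovers the larger effect size.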


2018 ◽  
Author(s):  
Thomas D. Prevot ◽  
Keith A. Misquitta ◽  
Corey Fee ◽  
Dwight F. Newton ◽  
Dipashree Chatterjee ◽  
...  

Abstract: Stress-related illnesses such as major depressive and anxiety disorders are characterized by maladaptive responses to stressful life events. Chronic stress-based animal models have provided critical insight into the understanding of these responses. Currently available assays measuring chronic stress-induced behavioral states in mice are limited in their design (short, not repeatable, sensitive to experimenter bias) and often inconsistent. Using the Noldus PhenoTyper apparatus, we identified a new readout that repeatedly assesses behavioral changes induced by chronic stress in two mouse models: chronic restraint stress (CRS) and unpredictable chronic mild stress (UCMS). The PhenoTyper test consists of overnight monitoring of animals' behavior in a home-cage setting before, during and after a 1-hr light challenge applied over a designated food zone. We tested the reproducibility and reliability of the PhenoTyper test in assessing the effects of chronic stress exposure, and compared outcomes with commonly used tests. While chronic stress induced heterogeneous profiles in classical tests, CRS- and UCMS-exposed mice showed a very consistent response in the PhenoTyper test: they continued avoiding the lit zone in favor of the shelter zone. This "residual avoidance" lasted for hours beyond termination of the light challenge, was not observed after acute stress, and was consistently found throughout stress exposure in both models. Chronic stress-induced residual avoidance was alleviated by chronic imipramine treatment but not by acute diazepam administration. This behavioral index should be instrumental for studies aiming to better understand the trajectory of chronic stress-induced deficits and to screen novel anxiolytics and antidepressants.
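
The abstract does not specify how residual avoidance is quantified; one hypothetical way such an index might be computed, purely for illustration (the published metric may differ), is the percent reduction in time spent in the food zone during the post-challenge hours relative to the same hours at baseline:

```python
def residual_avoidance(time_in_zone_post, time_in_zone_baseline):
    """Hypothetical index: percent reduction in time spent in the
    (previously lit) food zone after the challenge, relative to the
    same time window at baseline.
    0 = no avoidance; 100 = complete avoidance of the zone."""
    return 100 * (1 - time_in_zone_post / time_in_zone_baseline)

# A chronically stressed mouse spending 120 s post-challenge in a zone
# where it spent 300 s at baseline shows 60% residual avoidance.
ra = residual_avoidance(time_in_zone_post=120, time_in_zone_baseline=300)
```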


2018 ◽  
Author(s):  
Margaret L. Signorella

Research on the impact of different accents on perceptions of individuals is both important and difficult. The main challenge lies in creating realistic portrayals of accents that also control for potential confounding variables. The advantages and disadvantages of three options are reviewed: the same person speaking in different accents, different persons speaking in different accents, and computer-generated accents. This case study describes the method, procedure and results of a study in which the decision was made to use different persons to convey different accents. Although the experimental manipulations were not as controlled as might be ideal, they were used with the aim of increasing realism and external validity. Manipulation checks are described that tested whether the experimental manipulations were effective in conveying similar levels of accent in male and female speakers. Procedures for reducing the risk of experimenter bias are also described. The inclusion of an unmanipulated variable, participant gender, and the interpretational issues it raises are also discussed. There continues to be a need for research on the impact of speaking with an accent, in spite of the methodological complications.


2017 ◽  
Author(s):  
Daniel Lakens

Designing an experiment with high statistical power is a practical challenge when effect size estimates are inaccurate, as they often are in psychology. This challenge can be addressed by performing sequential analyses while data collection is still in progress. At an interim analysis, data collection can be stopped whenever the results are convincing enough to conclude an effect is present, more data can be collected, or the study can be terminated whenever it is extremely unlikely the predicted effect would be observed if data collection continued. Such interim analyses can be performed while controlling the Type I error rate. Sequential analyses can greatly improve the efficiency with which data are collected. Additional flexibility is provided by adaptive designs, where sample sizes are increased based on the observed effect size. The need for pre-registration, ways to prevent experimenter bias, and a comparison between Bayesian approaches and NHST are discussed. Sequential analyses, which are widely used in large-scale medical trials, provide an efficient way to perform high-powered, informative experiments. I hope this introduction will provide a practical primer that allows researchers to incorporate sequential analyses in their research.
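
The Type I error control mentioned here is typically achieved by testing each interim look against a stricter critical value than the fixed-sample 1.96. A small simulation (assumed setup: one-sample z test under the null, two equally spaced looks, Pocock boundary) shows why naive repeated peeking inflates the false positive rate while a corrected boundary does not:

```python
import math
import random
import statistics

def z_stat(sample, mu0=0.0, sigma=1.0):
    """One-sample z statistic for data with known standard deviation."""
    n = len(sample)
    return (statistics.fmean(sample) - mu0) / (sigma / math.sqrt(n))

def rejects_at_any_look(data, looks, critical_value):
    """Peek at each interim look; 'reject' if |z| ever exceeds the boundary."""
    return any(abs(z_stat(data[:n])) > critical_value for n in looks)

random.seed(1)
looks = (25, 50)             # interim analysis at n=25, final analysis at n=50
naive, pocock = 1.96, 2.178  # 2.178: Pocock boundary for 2 looks, alpha = .05
null_data = [[random.gauss(0, 1) for _ in range(50)] for _ in range(4000)]
fpr_naive = sum(rejects_at_any_look(d, looks, naive) for d in null_data) / 4000
fpr_pocock = sum(rejects_at_any_look(d, looks, pocock) for d in null_data) / 4000
# Under the null, fpr_naive inflates well above .05; fpr_pocock stays near .05.
```

The corrected boundary pays a small price in per-look stringency, but in exchange the study can legitimately stop early whenever an interim result clears it.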


2014 ◽  
Vol 2014 ◽  
pp. 1-5 ◽  
Author(s):  
Glenn Shean

Despite the growing influence of lists of empirically supported therapies (ESTs), there are concerns about the design and conduct of this body of research. These concerns include limitations inherent in the requirements of randomized control trials (RCTs), which favor those psychotherapies that define problems and outcomes in terms of uncomplicated symptoms. Additional concerns have to do with criteria for patient selection, lack of integration with research on psychotherapy process and effectiveness studies, limited outcome criteria, and lack of controls for experimenter bias. RCT designs have an important place in outcome research; however, it is important to recognize that these designs also place restrictions on what can be studied about psychotherapy and how. There is a need for large-scale psychotherapy outcome research based on designs that allow for the inclusion of process variables and the study of the effects of those idiographic approaches to therapy that do not lend themselves to RCT designs. Interpretative phenomenological analysis may provide a useful method for evaluating the effectiveness of idiographic approaches to psychotherapy where outcome is not understood solely in terms of symptom reduction.


Author(s):  
Timothy Gupton ◽  
Tania Leal Méndez

Abstract: The current article examines two experimental investigations of the syntax-discourse interface, which address theoretical questions in different ways: the first is an L1 investigation of Galician speakers in Gupton (2010) and the second is a dual investigation of L1 and L2 Spanish reported in Leal Méndez & Slabakova (2011). These investigations gathered quantitative data via psycholinguistic tasks with accompanying audio, utilizing the WebSurveyor platform. They involved counterbalanced designs and were followed by statistical analysis. While acknowledging that experimental data do not have primacy over intuitive data, the authors endorse the use of experimental methods of data elicitation (such as those already used in generative SLA research) in theoretical syntax in order to avoid experimenter bias and to obtain a more complete picture of native-speaker intuitions and competencies.

