Do we know what we test and do we test what we want to know?

2011 ◽  
Vol 35 (6) ◽  
pp. 550-560 ◽  
Author(s):  
Irene Klugkist ◽  
Floryt van Wesel ◽  
Jessie Bullens

Null hypothesis testing (NHT) is the most commonly used tool in empirical psychological research, even though it has several known limitations. It is argued that because the hypotheses evaluated with NHT do not reflect the research question or theory of the researchers, conclusions from NHT must be formulated with great modesty; that is, they cannot be stated in a confirmatory way. Since confirmation or theory evaluation is, however, what researchers often aim for, we present an alternative approach based on the specification of explicit, informative statistical hypotheses. The statistical approach for evaluating these hypotheses is a Bayesian model-selection procedure. A non-technical explanation of the Bayesian approach is provided, and it is shown that results obtained with this method give more direct answers to the questions asked and are easier to interpret. An additional advantage of being able to formulate and evaluate informative hypotheses is that it stimulates researchers to think through and specify their expectations more carefully.
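To make the contrast with NHT concrete, the sketch below illustrates one common way of evaluating an order-constrained (informative) hypothesis with a Bayesian model-selection criterion: under an encompassing prior, the Bayes factor of the constrained hypothesis against the unconstrained model can be estimated as the ratio of posterior to prior probability mass satisfying the constraint. This is a minimal illustration of that general idea, not the authors' exact procedure; the simulated data, the vague prior, and the normal approximation to the posterior are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: three groups whose means we expect to be ordered mu1 > mu2 > mu3.
groups = [rng.normal(loc=m, scale=1.0, size=30) for m in (1.0, 0.5, 0.0)]

def satisfies(mu):
    """Indicator of the informative (order-constrained) hypothesis mu1 > mu2 > mu3."""
    return (mu[:, 0] > mu[:, 1]) & (mu[:, 1] > mu[:, 2])

# Vague normal prior on each mean; the posterior is approximated with the usual
# normal approximation (sample mean, standard error) purely for simplicity.
n_draws = 100_000
prior_draws = rng.normal(loc=0.0, scale=10.0, size=(n_draws, 3))

post_means = np.array([g.mean() for g in groups])
post_ses = np.array([g.std(ddof=1) / np.sqrt(len(g)) for g in groups])
posterior_draws = rng.normal(loc=post_means, scale=post_ses, size=(n_draws, 3))

prior_prop = satisfies(prior_draws).mean()        # roughly 1/6 under an exchangeable prior
posterior_prop = satisfies(posterior_draws).mean()

# Bayes factor of the constrained hypothesis against the unconstrained model:
# how much more (or less) plausible the hypothesized ordering is after seeing the data.
bayes_factor = posterior_prop / prior_prop
print(f"BF(ordered vs. unconstrained) = {bayes_factor:.2f}")
```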

2021 ◽  
Author(s):  
Tri Tam Le

I describe some benefits of the Bayesian analysis method, drawing on my personal experience in psychological research.


2018 ◽  
Vol 8 (1) ◽  
pp. 3-19 ◽  
Author(s):  
Yuanyuan Zhou ◽  
Susan Troncoso Skidmore

Historically, ANOVA has been the most prevalent statistical method used in educational and psychological research, and it continues to be widely used today. A comprehensive review published in 1998 examined several APA journals and found persistent concerns about ANOVA reporting practices. The present authors examined all articles published in 2012 in three APA journals (Journal of Applied Psychology, Journal of Counseling Psychology, and Journal of Personality and Social Psychology) to review ANOVA reporting practices, including p values and effect sizes. Results indicated that ANOVA remains prevalent in the reviewed journals, both as a test of the primary research question and as a test of conditional assumptions prior to the primary analysis. Still, ANOVA reporting practices are essentially unchanged from what was previously reported. Effect size reporting, however, has improved.
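For readers unfamiliar with the reporting elements examined in the review, the sketch below shows, with invented data, the quantities behind a typical ANOVA report: the F statistic with its degrees of freedom, the p value, and an effect size (eta-squared). It is only an illustration of these quantities, not a reanalysis of any of the reviewed articles.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical scores for three conditions (not taken from the reviewed studies).
a = rng.normal(50, 10, 40)
b = rng.normal(55, 10, 40)
c = rng.normal(53, 10, 40)

f_stat, p_value = stats.f_oneway(a, b, c)

# Eta-squared: between-groups sum of squares divided by the total sum of squares.
all_scores = np.concatenate([a, b, c])
grand_mean = all_scores.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in (a, b, c))
ss_total = ((all_scores - grand_mean) ** 2).sum()
eta_squared = ss_between / ss_total

df_between, df_within = 2, len(all_scores) - 3
print(f"F({df_between}, {df_within}) = {f_stat:.2f}, p = {p_value:.3f}, eta^2 = {eta_squared:.3f}")
```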


2016 ◽  
Vol 58 (2) ◽  
pp. 404-438 ◽  
Author(s):  
Timo Meynhardt ◽  
Peter Gomez

Carroll shaped the corporate social responsibility (CSR) discourse into a four-dimensional pyramid framework, which was later adapted to corporate citizenship and sustainability approaches. The four layers of the pyramid—structured from foundation to apex as economic, legal, ethical, and philanthropic (or discretionary) responsibilities—drew considerable managerial attention. An important criticism of the economic foundation of the Carroll pyramid concerns the identification and ordering of the four dimensions, which are inadequately justified theoretically. The authors of this article propose an alternative approach that builds on the public value concept, which integrates a microfoundation of psychological research into basic human needs. Drawing on their Swiss Dialogue process, the authors argue that a four-dimensional pyramid does have heuristic value for managers. The advantage of this alternative pyramid logic is that it may be contingently adapted to different cultural contexts, because it allows adaptive internal reordering.


1983 ◽  
Vol 56 (2) ◽  
pp. 407-413 ◽  
Author(s):  
Rainer Westermann ◽  
Willi Hager

When several statistical hypotheses are tested in a study to answer a single research question or to test a single scientific hypothesis, interpretation of the results may be difficult because Type 1 error probabilities accumulate. Researchers often try to solve this problem by reducing the significance level for each test or by applying a multiple-comparison procedure for means. For a constant number of observations, however, both strategies lower the power of each test or comparison of interest. Two well-known psychological experiments are reanalyzed in this respect. It is shown that low Type 1 error probabilities should be given higher priority only if the scientific hypothesis under scrutiny implies that all null hypotheses of the significance tests are valid. More often, however, the research hypothesis is supported completely if all alternative hypotheses are accepted. In this case, high power for each single test is more important than a low significance level.
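The tradeoff the authors describe can be illustrated numerically: with k independent tests each run at significance level alpha, the familywise Type 1 error probability grows to 1 - (1 - alpha)^k, while lowering the per-test level (for example to alpha/k, as in a Bonferroni correction) reduces the power of each test at a fixed sample size. The sketch below uses assumed values for the number of tests, effect size, and sample size, and a simple two-sided z-test power approximation; it is not a reanalysis of the two experiments discussed in the article.

```python
import numpy as np
from scipy import stats

k = 5            # number of significance tests answering one research question (assumed)
alpha = 0.05
d, n = 0.5, 30   # assumed standardized effect size and per-test sample size

# Familywise Type 1 error if each of k independent tests is run at alpha.
fwer = 1 - (1 - alpha) ** k

def power_z_test(alpha_per_test, d, n):
    """Approximate power of a two-sided z-test for effect size d with n observations."""
    z_crit = stats.norm.ppf(1 - alpha_per_test / 2)
    noncentrality = d * np.sqrt(n)
    return stats.norm.sf(z_crit - noncentrality) + stats.norm.cdf(-z_crit - noncentrality)

print(f"Familywise Type 1 error at alpha={alpha}, k={k}: {fwer:.3f}")
print(f"Power per test at alpha={alpha}:        {power_z_test(alpha, d, n):.3f}")
print(f"Power per test at Bonferroni alpha/k:   {power_z_test(alpha / k, d, n):.3f}")
```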


PhaenEx ◽  
2012 ◽  
Vol 7 (2) ◽  
pp. 96 ◽  
Author(s):  
JEAN-THOMAS TREMBLAY

This article generates an affective hermeneutics of the political. The research question, "What is feeling political?", is first refined through the oeuvre of political theorist Simone Weil, whose focus on experience, involvement, and attention highlights the role of sentience in political life. The inescapable normativity of Weil's texts calls for an alternative approach to the question at hand, one that acknowledges the inevitability of the phenomenon of feeling political. In order to produce such an approach, the realm in which said phenomenon occurs is spatialized as an indefinite series of rhizomatic affective atmospheres in which the negotiation of one's involvement, resistance, association, and isolation prompts a variety of orientations. The work of Lauren Berlant is subsequently considered as a means to stress the interplay between noise and ambience on the one hand, and the notions of citizenship and community on the other. Ultimately, a reflection inspired by Gilles Deleuze and Félix Guattari emphasizes the humanist undertone of this investigation, re-posing the question of feeling political as an ontological query.


2018 ◽  
Vol 90 (3) ◽  
Author(s):  
Andreas Langlotz

Research on the facial expression of emotions has become a bone of contention in psychological research. On the one hand, Ekman and his colleagues have argued for a universal set of six basic emotions that are recognized with a considerable degree of accuracy across cultures and automatically displayed in highly similar ways by people. On the other hand, more recent research in cognitive science has provided results that support a cultural-relativist position. In this paper, this controversy is approached from a contrastive perspective on phraseological constructions. It focuses on how emotional displays are codified in somatic idioms in some European (English, German, French, Spanish) and East Asian (Japanese, Korean, Chinese [Cantonese]) languages. Using somatic idioms such as make big eyes or die Nase rümpfen ('to wrinkle one's nose') as a pool of evidence to shed linguistic light on the psychological controversy, the paper engages with the following general research question: Is there a significant difference between European and East Asian somatic idioms, or do these constructions rather speak for a universal apprehension of facial emotion displays? To answer this question, the paper compares somatic expressions selected from (idiom) dictionaries of the languages listed above. Moreover, native speakers of the East Asian languages were consulted to support the analysis of the respective data. All corresponding entries were analysed categorically, i.e., with regard to whether or not they encode a given facial area to denote a specific emotion. The results show arguments both for and against the universalist and the cultural-relativist positions. In general, they speak for an opportunistic encoding of facial emotion displays.


2021 ◽  
Vol 11 ◽  
Author(s):  
Katarina Blask ◽  
Lea Gerhards ◽  
Maria Jalynskij

Starting from the observation that data sharing in general, and the sharing of reusable behavioral data in particular, is still scarce in psychology, we set out to develop a curation standard for behavioral psychological research data that renders data reuse more effective and efficient. Specifically, we propose a standard oriented toward the requirements of the psychological research process, thus considering the needs of researchers in their roles as data providers and data users. To this end, we suggest that researchers describe their data on three documentation levels reflecting their central decisions during the research process. In particular, these levels describe researchers' decisions on the concrete research design that is most suitable to address the corresponding research question, its operationalization, and a precise description of the subsequent data collection and analysis process. Accordingly, the first documentation level covers, for instance, researchers' decisions on the concrete hypotheses, inclusion/exclusion criteria, and the number of measurement points, as well as a conceptual presentation of all substantial variables included in the design. On the second level, these substantial variables are presented within an extended codebook that links the conceptual research design to the variables as actually operationalized in the data. Finally, the third level includes all materials, data preparation and analysis scripts, as well as a detailed procedure graphic that allows the data user to link the information from all three documentation levels at a glance. After a comprehensive presentation of the standard, we offer some arguments for its integration into the psychological research process.
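As a purely hypothetical illustration of how the three documentation levels might be captured in machine-readable form, the sketch below encodes one study's documentation as a nested structure. All field names and example values are invented for this illustration and are not prescribed by the proposed standard.

```python
# Hypothetical, simplified encoding of the three documentation levels described above.
# Field names and values are illustrative only.
study_documentation = {
    "level_1_research_design": {
        "hypotheses": ["H1: condition A yields higher recall than condition B"],
        "inclusion_exclusion_criteria": ["age 18-65", "normal or corrected-to-normal vision"],
        "measurement_points": 2,
        "substantial_variables": ["recall_accuracy", "condition", "measurement_point"],
    },
    "level_2_extended_codebook": {
        "recall_acc": {
            "concept": "recall_accuracy",  # links the data column back to the conceptual variable
            "operationalization": "proportion of correctly recalled items (0-1)",
            "type": "float",
        },
        "cond": {
            "concept": "condition",
            "operationalization": "experimental group assignment",
            "values": {1: "A", 2: "B"},
        },
    },
    "level_3_materials_and_scripts": {
        "materials": ["stimuli/word_lists.csv"],
        "preparation_scripts": ["scripts/prepare_data.py"],
        "analysis_scripts": ["scripts/fit_models.py"],
        "procedure_graphic": "docs/procedure_overview.pdf",
    },
}
```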


Author(s):  
Magnus Rönn

This article presents results from a study of prequalification in architectural competitions. The aim is to develop knowledge of how organizers appoint candidates to restricted competitions in Sweden. Prequalification is a selection procedure used early in the competition process to identify suitable candidates for the following design phase. The overall research question in the study is how organizers identify architects/design teams. The methodology includes an inventory of competitions, case studies, document review, and interviews with key persons. Ten municipal and governmental competitions were examined in the study. The invitation emerges during negotiation within the organizing body. General conditions, submission requirements, and criteria for the evaluation of applications by architect firms are part of an established practice. All clients have an assessment procedure made up of two distinct stages. First, they check whether applications meet the specific "must requirements" in the invitation. Thereafter follows an evaluative assessment of the candidate's professional profile, based on the criteria in the invitation. Reference projects and information from the referees are important sources of information at this stage. Decisive in the final assessment is the organizer's perception of the candidates' ability to produce projects of architectural quality, to combine creative solutions with functional requirements, and to work with developers and contractors.


2004 ◽  
Vol 27 (3) ◽  
pp. 329-331 ◽  
Author(s):  
Siu L. Chow

Ambiguous data obtained by deception say nothing about social behavior. A balanced social psychology requires separating statistical hypotheses from substantive hypotheses. Neither statistical norms nor moral rules are psychological theories. Explanatory substantive theories stipulate the structures and processes underlying behavior. The Bayesian approach is incompatible with the requirement that all to-be-tested theories be given the benefit of the doubt.


2002 ◽  
Vol 7 (3) ◽  
pp. 175-190 ◽  
Author(s):  
Paul Fogel ◽  
Pascal Collette ◽  
Alain Dupront ◽  
Tina Garyantes ◽  
Denis Guédini

HTS (high-throughput screening) data from primary screening are usually analyzed by setting a cutoff for activity in order to minimize both false-negative and false-positive rates. An alternative approach, based on a calculated probability of being active, is presented here. Given the predicted confirmation rate derived from this probability, the number of primary positives selected for follow-up can be optimized to maximize the number of true positives without picking too many false positives. Typical cutoff-determining methods are more serendipitous in nature and cannot easily be tuned to optimize screening efforts. An additional advantage of calculating a probability of being active for each compound screened is that orthogonal mixtures can be deconvoluted without presetting a deconvolution threshold. An important consequence of using the probability of being active with orthogonal mixtures is that individual compound screening results can be recorded irrespective of whether the assays were performed on single compounds or on cocktails.
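To illustrate the general idea of ranking compounds by a calculated probability of being active rather than by a fixed activity cutoff, the sketch below scores simulated primary-screen readouts with a simple two-component Gaussian mixture and uses the resulting probabilities to predict the confirmation rate among the top-ranked picks. The mixture model, its parameters, and the simulated data are assumptions for the example; this is not the authors' algorithm.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated primary-screen readouts (z-scored signal): most compounds inactive,
# a small fraction active with a shifted mean. All parameters are assumptions.
n_compounds, active_fraction = 10_000, 0.02
active = rng.random(n_compounds) < active_fraction
signal = np.where(active,
                  rng.normal(3.0, 1.0, n_compounds),
                  rng.normal(0.0, 1.0, n_compounds))

# Posterior probability of being active under a known two-component Gaussian mixture.
lik_active = stats.norm.pdf(signal, loc=3.0, scale=1.0)
lik_inactive = stats.norm.pdf(signal, loc=0.0, scale=1.0)
p_active = (active_fraction * lik_active) / (
    active_fraction * lik_active + (1 - active_fraction) * lik_inactive
)

# Rank compounds by probability of activity; the expected number of true positives among
# the top-n picks (the predicted confirmation count) guides how many to follow up.
order = np.argsort(-p_active)
for n_picked in (100, 200, 400):
    expected_confirmed = p_active[order][:n_picked].sum()
    print(f"top {n_picked:4d}: predicted confirmation rate = {expected_confirmed / n_picked:.2f}")
```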

