Cross-Validation: A Method Every Psychologist Should Know

2020 ◽  
Vol 3 (2) ◽  
pp. 248-263 ◽  
Author(s):  
Mark de Rooij ◽  
Wouter Weeda

Cross-validation is a statistical procedure that every psychologist should know. Most are probably familiar with the procedure in a general way but have not used it to analyze their own data. We introduce cross-validation for the purpose of model selection in a general sense, as well as an R package we have developed for this kind of analysis, and we present examples illustrating the use of this package for types of research problems that are often encountered in the social sciences. Cross-validation can be an easy-to-use alternative to null-hypothesis testing, and it has the benefit of making fewer assumptions.
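
The abstract names neither the package nor its interface, so the following is only a minimal sketch, written in Python with scikit-learn rather than the authors' R package, of the general idea: choosing between two candidate models by out-of-sample prediction error instead of by a significance test. The data, predictors, and fold count are invented for illustration.

```python
# Minimal sketch of cross-validation for model selection (not the authors' R package):
# compare two candidate models by out-of-sample error instead of a significance test.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 0.5 * x1 + rng.normal(size=n)          # x2 has no real effect on y

X_small = x1.reshape(-1, 1)                # candidate model without x2
X_full = np.column_stack([x1, x2])         # candidate model with x2

cv = KFold(n_splits=10, shuffle=True, random_state=0)
mse_small = -cross_val_score(LinearRegression(), X_small, y,
                             scoring="neg_mean_squared_error", cv=cv).mean()
mse_full = -cross_val_score(LinearRegression(), X_full, y,
                            scoring="neg_mean_squared_error", cv=cv).mean()

print(f"10-fold CV error without x2: {mse_small:.3f}")
print(f"10-fold CV error with x2:    {mse_full:.3f}")
# Prefer the model with the lower cross-validated error.
```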

Author(s):  
Rory Allen

Universal laws are notoriously hard to discover in the social sciences, but there is one which can be stated with a fair degree of confidence: “all students hate statistics”. Students in the social sciences often need to learn basic statistics as part of a research methods module, and anyone who has ever been responsible for teaching statistics to these students will soon discover that they find it the hardest and least popular part of any social science syllabus. A typical problem for students is the use of Fisher’s F-test as a significance test, which even in the simple case of a one-factor analysis of variance (ANOVA) presents two difficulties. Firstly, the test is presented as a test of the null hypothesis, that is, that there is no effect of one variable (the independent variable, IV) on the other, dependent variable (DV). This is the reverse of what one generally wants to prove, namely the experimental hypothesis, which is usually that there is an effect of the IV on the DV. Students, if they think about the question at all, may be tempted to ask “why not try to prove the experimental hypothesis directly rather than using this back-to-front approach?” Secondly, the F-ratio itself is presented as an algebraic manipulation, the ratio of two mean sums of squares, and these means are themselves moderately complicated to understand. Even students specializing in mathematics often find algebra difficult, and to non-mathematicians this formula is simply baffling. Instructors do not usually make a serious attempt to remedy this confusion by explaining what the F-ratio is meant to measure, and when they do, the explanation is rarely very enlightening. Students may struggle with the statement that the F-ratio is the ratio of “two different estimates of the variance of the population being sampled from, under the null hypothesis”. So what? The result is that students frequently end up applying statistical analysis programs such as SPSS and R without the faintest understanding of how the mathematics works. They use the results in a mechanical way, according to a procedure learned by rote, and may overlook other tests that would be more appropriate for their data. This might be called the cookbook approach to data analysis, and it is the opposite of the ultimate aim of high-quality teaching, which is to provide a deep understanding of principles that allows the student to use them flexibly in real-life challenges without violating the assumptions of the statistical tests being employed.
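
To make the quoted explanation concrete, the sketch below works through a toy one-factor ANOVA by hand, forming the F-ratio as the between-group mean square divided by the within-group mean square and checking the result against SciPy's f_oneway. The data are invented, and Python with SciPy stands in for the SPSS/R output the passage mentions.

```python
# Illustration only: the one-way ANOVA F-ratio as a ratio of two mean squares.
import numpy as np
from scipy import stats

groups = [np.array([4.0, 5.0, 6.0, 5.5]),
          np.array([6.5, 7.0, 8.0, 7.5]),
          np.array([5.0, 5.5, 6.5, 6.0])]

k = len(groups)                                  # number of groups
n_total = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()

# Between-group sum of squares: how far the group means sit from the grand mean.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: spread of scores around their own group mean.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (k - 1)                # df_between = k - 1
ms_within = ss_within / (n_total - k)            # df_within  = N - k
F_manual = ms_between / ms_within

F_scipy, p_value = stats.f_oneway(*groups)
print(f"F by hand: {F_manual:.3f}, F from SciPy: {F_scipy:.3f}, p = {p_value:.4f}")
# Under the null hypothesis both mean squares estimate the same error variance,
# so F should hover around 1; a large F signals variation between group means
# beyond what within-group noise would produce.
```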


2006 ◽  
Vol 7 (8) ◽  
pp. 661-680 ◽  
Author(s):  
Stephan Leibfried ◽  
Christoph Möllers ◽  
Christoph Schmid ◽  
Peer Zumbansen

This essay describes an emergent scheme for modernizing the study of law in German universities, creating a structure that is better equipped to address twenty-first-century socio-legal issues and to bring legal scholarship to bear on relevant research problems in the social sciences—and vice versa. It is a by-product of efforts by University of Bremen professors and administrators to foster their university's coming of age as a mature, internationally recognized research university and to compete for new funds that the German government is making available to select universities. As such, it provides a rare example of the integration of legal studies into a large interdisciplinary research program, and of law professors rising to the challenge of contemporary funding demands by joining forces with political scientists, sociologists, economists, and philosophers.


2021 ◽  
pp. 053901842098782
Author(s):  
Ercan Gündoğan

In the background of this article lies the question of how the social sciences can internalize spatial and cultural phenomena and, in the most general sense, the ‘principle of difference’. The article therefore addresses more than one problem and tries to see many seemingly contradictory phenomena as parts of a whole by employing a complex dialectical method. It looks at the relationships between the following pairs of phenomena: social and cultural; natural and cultural; universal and particular; similar and different. The article proceeds according to this method: it relates the opposites to each other through space and thus tries to show the following dialectical transitions: the social is produced as culture through social space and through the production of social space itself. The article suggests that the transition between universal and particular constitutes the problematic of space, that space realizes the social as culture, and that this is the only realization of the social. It argues that the social is universalizing and the cultural is particularizing, and that the social/universal fulfills itself necessarily as the cultural/particular. It also defends the principle of universality by stating that differences occur in relation to a whole. The article critically draws on classical social theory, specifically Marxist social theory and spatial Marxists such as Henri Lefebvre and David Harvey, as well as some recent historical sociology, some postcolonial ideas, and planetary urbanization theory, and it tries to support the development of the theory of historical-geographical materialism.


2020 ◽  
Author(s):  
Anne M. Scheel ◽  
Leonid Tiokhin ◽  
Peder Mortvedt Isager ◽  
Daniel Lakens

For almost half a century, Paul Meehl educated psychologists about how the mindless use of null-hypothesis significance tests made research on theories in the social sciences basically uninterpretable (Meehl, 1990). In response to the replication crisis, reforms in psychology have focused on formalising procedures for testing hypotheses. These reforms were necessary and impactful. However, as an unexpected consequence, psychologists have begun to realise that they may not be ready to test hypotheses. Forcing researchers to prematurely test hypotheses before they have established a sound ‘derivation chain’ between test and theory is counterproductive. Instead, various non-confirmatory research activities should be used to obtain the inputs necessary to make hypothesis tests informative. Before testing hypotheses, researchers should spend more time forming concepts, developing valid measures, establishing the causal relationships between concepts and their functional form, and identifying boundary conditions and auxiliary assumptions. Providing these inputs should be recognised and incentivised as a crucial goal in and of itself. In this article, we discuss how shifting the focus to non-confirmatory research can tie together many loose ends of psychology’s reform movement and help us lay the foundation to develop strong, testable theories, as Paul Meehl urged us to.


2010 ◽  
Vol 4 ◽  
pp. 85-98
Author(s):  
Pradeep Acharya

The paper aims to present a brief overview of the historical shift in the notion of ethnicity and prejudice in the wider global context, with a particular focus on the local Nepalese context; it is based solely on secondary information obtained from a review of the pertinent literature on ethnicity. The discussion of the genesis of approaches to ethnicity includes some conceptual background on the emergence and usage of the term in the social sciences. Ethnicity can be said to be closely interlinked with prejudice in policy and practice at the level of the individual, society, and the state. The historical evidence suggests that there has been a gradual shift in the notion of both ethnicity and prejudice across different places and time periods. In addition, the topic contains significant research problems that can be elaborated further and whose full significance remains to be drawn out.
Keywords: ethnicity; prejudice; boundaries; social constructionist model; multilevel theory
DOI: 10.3126/dsaj.v4i0.4514
Dhaulagiri Journal of Sociology and Anthropology Vol. 4 (2010), pp. 85-98


2019 ◽  
Vol 45 ◽  
Author(s):  
Kevin R. Murphy

Problemification: Over-reliance on null hypothesis significance testing (NHST) is one of the most important causes of the emerging crisis over the credibility and reproducibility of our science.
Implications: Most studies in the behavioural and social sciences have low levels of statistical power. Because ‘significant’ results are often required, but often difficult to produce, the temptation to engage in questionable research practices that will produce these results is immense.
Purpose: Methodologists have been trying for decades to convince researchers, reviewers and editors that significance tests are neither informative nor useful. A recent set of articles published in top journals and endorsed by hundreds of scientists around the world seems to provide fresh impetus for overturning the practice of using NHST as the primary, and sometimes sole, basis for evaluating research results.
Recommendations: Authors, reviewers and journal editors are asked to change long-engrained habits and realise that ‘statistically significant’ says more about the design of one’s study than about the importance of one’s results. They are urged to embrace the ATOM principle in evaluating research results, that is, accept that there will always be uncertainty, and be thoughtful, open and modest in evaluating what the data mean.
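
As a rough illustration of the 'low statistical power' point, under assumed numbers that are not taken from the article, the sketch below uses statsmodels to compute the power of a two-sample t-test for a small-to-medium effect at a common sample size, and the per-group sample size that 80% power would actually require.

```python
# Illustration of 'low statistical power' (assumed example, not from the article):
# power of a two-sample t-test for a small-to-medium effect at a typical sample size.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.3          # Cohen's d, a small-to-medium effect
alpha = 0.05

power_at_n30 = analysis.solve_power(effect_size=effect_size, nobs1=30, alpha=alpha)
n_for_80 = analysis.solve_power(effect_size=effect_size, power=0.80, alpha=alpha)

print(f"Power with 30 participants per group: {power_at_n30:.2f}")   # roughly 0.2
print(f"Per-group n needed for 80% power:     {n_for_80:.0f}")       # roughly 175
# With power this low, 'significant' results are hard to obtain honestly,
# which is what creates the pressure toward questionable research practices.
```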

