Research methods of pedagogical and psychological science: essence, concept and types

Author(s):  
Sh. Maigeldiyeva ◽  
B. Paridinova ◽  
2018 ◽  
Vol 45 (2) ◽  
pp. 158-163 ◽  
Author(s):  
William J. Chopik ◽  
Ryan H. Bremner ◽  
Andrew M. Defever ◽  
Victor N. Keller

Over the past 10 years, crises surrounding replication, fraud, and best practices in research methods have dominated discussions in the field of psychology. However, no research exists examining how to communicate these issues to undergraduates and what effect this has on their attitudes toward the field. We developed and validated a 1-hr lecture communicating issues surrounding the replication crisis and current recommendations to increase reproducibility. Pre- and post-lecture surveys suggest that the lecture serves as an excellent pedagogical tool. Following the lecture, students trusted psychological studies slightly less but saw greater similarities between psychology and natural science fields. We discuss challenges for instructors taking the initiative to communicate these issues to undergraduates in an evenhanded way.


2019 ◽  
Vol 28 (6) ◽  
pp. 560-566 ◽  
Author(s):  
Anat Rafaeli ◽  
Shelly Ashtar ◽  
Daniel Altman

New technologies create and archive digital traces—records of people’s behavior—that can supplement and enrich psychological research. Digital traces offer psychological-science researchers novel, large-scale data (which reflect people’s actual behaviors), rapidly collected and analyzed by new tools. We promote the integration of digital-traces data into psychological science, suggesting that it can enrich and overcome limitations of current research. In this article, we review helpful data sources, tools, and resources and discuss challenges associated with using digital traces in psychological research. Our review positions digital-traces research as complementary to traditional psychological-research methods and as offering the potential to enrich insights on human psychology.
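To make concrete what working with such digital traces can look like, here is a minimal Python sketch (not from the article), assuming the traces arrive as a CSV of timestamped behavioral records; the file name and column names (user_id, timestamp, event) are hypothetical.

# Minimal sketch: aggregating digital-trace data (timestamped behavioral events)
# into per-user and per-day summaries. File and column names are illustrative only.
import pandas as pd

# Assume each row is one logged behavior: who did what, and when.
events = pd.read_csv("trace_events.csv", parse_dates=["timestamp"])

# Per-user behavioral summaries: activity volume, span, and most common action.
summary = (
    events.sort_values("timestamp")
          .groupby("user_id")
          .agg(n_events=("event", "size"),
               first_seen=("timestamp", "min"),
               last_seen=("timestamp", "max"),
               top_event=("event", lambda s: s.mode().iloc[0]))
)

# Daily activity counts, a common first look at large-scale trace data.
daily = events.set_index("timestamp").resample("D")["event"].count()
print(summary.head())
print(daily.head())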


2020 ◽  
Vol 30 (3) ◽  
pp. 287-305
Author(s):  
Catriona Ida Macleod ◽  
Sunil Bhatia ◽  
Wen Liu

In this special issue, we bring together papers that speak to feminisms in relation to decolonisation in the discipline of psychology. The six articles and two book reviews address a range of issues: race, citizenship, emancipatory politics, practising decolonial refusal, normalising slippery subjectivity, Islamic anti-patriarchal liberation psychology, and decolonisation of the hijab. In this editorial we outline the papers’ contributions to discussions on understanding decolonisation, how feminisms and decolonisation speak to each other, and the implications of the papers for feminist decolonising psychology. Together the papers highlight the importance of undermining the gendered coloniality of power, knowledge and being. The interweaving of feminisms and decolonising efforts can be achieved through: each mutually informing and shaping the other, conducting intersectional analyses, and drawing on transnational feminisms. Guiding principles for feminist decolonising psychology include: undermining the patriarchal colonialist legacy of mainstream psychological science; connecting gendered coloniality with other systems of power such as globalisation; investigating topics that surface the intertwining of colonialist and gendered power relations; using research methods that dovetail with feminist decolonising psychology; and focussing praxis on issues that enable decolonisation. Given the complexities of the coloniality and patriarchy of power-knowledge-being, feminist decolonising psychology may fail. The issues raised in this special issue point to why it mustn’t.


2020 ◽  
Vol 14 ◽  
Author(s):  
Aline da Silva Frost ◽  
Alison Ledgerwood

This article provides an accessible tutorial with concrete guidance for how to start improving research methods and practices in your lab. Following recent calls to improve research methods and practices within and beyond the borders of psychological science, resources have proliferated across book chapters, journal articles, and online media. Many researchers are interested in learning more about cutting-edge methods and practices but are unsure where to begin. In this tutorial, we describe specific tools that help researchers calibrate their confidence in a given set of findings. In Part I, we describe strategies for assessing the likely statistical power of a study, including when and how to conduct different types of power calculations, how to estimate effect sizes, and how to think about power for detecting interactions. In Part II, we provide strategies for assessing the likely Type I error rate of a study, including distinguishing clearly between data-independent (“confirmatory”) and data-dependent (“exploratory”) analyses and thinking carefully about different forms and functions of preregistration.
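As a concrete illustration of the Part I material, the following is a minimal Python sketch of an a priori power calculation using statsmodels; the effect size, alpha, and power targets are placeholder values, not recommendations from the tutorial.

# Minimal sketch: a priori power calculation for a two-group comparison,
# in the spirit of the tutorial's Part I. Effect size, alpha, and power are
# placeholder values, not recommendations from the article.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# How many participants per group to detect d = 0.4 with 80% power at alpha = .05?
n_per_group = analysis.solve_power(effect_size=0.4, alpha=0.05, power=0.80,
                                   alternative="two-sided")
print(f"Required n per group: {n_per_group:.1f}")

# Sensitivity analysis: what effect size can 100 per group detect with 80% power?
detectable_d = analysis.solve_power(nobs1=100, alpha=0.05, power=0.80,
                                    alternative="two-sided")
print(f"Smallest detectable d with n = 100 per group: {detectable_d:.2f}")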


2020 ◽  
Vol 116 (5) ◽  
pp. 100-109
Author(s):  
Vladislav A. Medintsev

In psychological science, the "method problem" remains one of the most fundamental and topical issues, and it has taken on a new aspect with the growing discussion of integrating psychological knowledge. In this context, the problem acquires updated content as the problem of a universal method in psychology. There is reason to believe that the "method problem" is being transformed into the "universal method problem" and, further, into the "universal method integration problem". Efforts to solve these problems are often devalued because experimenting and practicing psychologists neglect methodological knowledge. One possible way to build a universal method for theoretical research in psychology is to use a procedural interpretation of theorizing based on a set-theoretic method of describing processes. The article considers the components of theoretical research: the purpose, object, subject, and hypothesis of the research, as well as the empirical material under consideration, the theoretical foundations, the method of theorizing, and the research tasks. Two methodological "poles" of theoretical research are identified, the "normative" method and modern research methods, and a variant of analyzing their structures is proposed. Creating a universal method suitable for integrating psychological knowledge faces obstacles that can be overcome through their systematic analysis. The article outlines a variant of this analysis in which the causes and sources of these obstacles are differentiated on the basis of the system of concepts used for describing processes. The sources of integration obstacles include components of prototype modi, and the causes are properties of modi functions in the recording of processes as maps of sets. Examples describe the integration obstacles at two levels of interaction.


2020 ◽  
Author(s):  
Aline da Silva Frost ◽  
Alison Ledgerwood

This article provides an accessible tutorial with concrete guidance for how to start improving research methods and practices in your lab. Following recent calls to improve research methods and practices within and beyond the borders of psychological science, resources have proliferated across book chapters, journal articles, and online media. Many researchers are interested in learning more about cutting-edge methods and practices, but are unsure where to begin. In this tutorial, we describe specific tools that help researchers calibrate their confidence in a given set of findings. In Part I, we describe strategies for assessing the likely statistical power of a study, including when and how to conduct different types of power calculations, how to estimate effect sizes, and how to think about power for detecting interactions. In Part II, we provide strategies for assessing the likely Type I error rate of a study, including distinguishing clearly between data-independent (“confirmatory”) and data-dependent (“exploratory”) analyses and thinking carefully about different forms and functions of preregistration.
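The Part II distinction between data-independent and data-dependent analyses can be made concrete with a small simulation (my own sketch, not from the article): when several outcomes are tested and whichever is significant gets reported, the effective Type I error rate climbs well above the nominal 5%.

# Minimal sketch: simulating how data-dependent analysis inflates the Type I error rate.
# Two groups with no true difference; the "exploratory" analyst tests 5 outcomes
# and counts the study as significant if ANY p < .05. Illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group, n_outcomes, alpha = 10_000, 30, 5, 0.05

confirmatory_hits = 0   # single pre-specified outcome
exploratory_hits = 0    # best of several outcomes, chosen after seeing the data

for _ in range(n_sims):
    a = rng.normal(size=(n_per_group, n_outcomes))
    b = rng.normal(size=(n_per_group, n_outcomes))
    pvals = stats.ttest_ind(a, b, axis=0).pvalue
    confirmatory_hits += pvals[0] < alpha
    exploratory_hits += (pvals < alpha).any()

print(f"Confirmatory false-positive rate: {confirmatory_hits / n_sims:.3f}")  # ~0.05
print(f"Exploratory false-positive rate:  {exploratory_hits / n_sims:.3f}")   # well above 0.05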


2019 ◽  
Author(s):  
Zoltan Kekecs ◽  
Balazs Aczel ◽  
Bence Palfi ◽  
Barnabas Szaszi ◽  
Peter Szecsi ◽  
...  

Those wishing to join the data collection as collaborating labs can do so via: https://t.co/W0fv5VwPi2?amp=1. Those interested in signing up as auditors for the project should send an email to the first author.

ABSTRACT: The low reproducibility rate in the social sciences has led researchers to hesitate to accept published findings at face value. It has become apparent that the field lacks the tools necessary to verify the credibility of research reports. In the present paper, we describe methodologies that let researchers craft highly credible research and allow their peers to verify that credibility. We demonstrate the application of these methods in a fully transparent multi-lab replication of Bem’s Experiment 1 (2011), co-designed by a consensus panel including both proponents and opponents of Bem’s original hypothesis. In the main study, we applied direct data deposition, in combination with born-open data and real-time research reporting, to extend transparency to protocol delivery and data collection. We also used piloting, checklists, laboratory logs, and video-documented trial sessions to ascertain as-intended protocol delivery by the experimenters, and external research auditors to monitor research integrity. We found X% successful guesses, while Bem reported a 53.07% success rate. [The effect reported by Bem was/was not replicated in our study. / This study outcome did not reach the pre-specified criteria for supporting or contradicting Bem’s findings.] [Conclusions about the feasibility of the credibility-enhancing methodologies will be discussed here.]

Plain word summary:

[In case of a negative result:] This project aimed to demonstrate the use of research methods that could improve the reliability of scientific findings in psychological science. Using rigorous methodology, we could not replicate the positive findings of Bem’s 2011 Experiment 1. This finding neither confirms nor contradicts the existence of ESP in general, and that was not the point of our study. Instead, the results tell us that (1) it is likely that the original experiment was biased by methodological flaws, and (2) it is improbable that the paradigm used in the original study would be useful in detecting ESP effects if they exist.

[In case of a positive result:] This project aimed to demonstrate the use of research methods that could improve the reliability of scientific findings in psychological science. Using rigorous methodology, we could replicate the positive findings of Bem’s 2011 Experiment 1. This finding neither confirms nor contradicts the existence of ESP in general, and that was not the point of our study. Instead, the results tell us that (1) it is unlikely that the positive findings of the original experiment can be explained only by the currently known methodological biases, and (2) more studies are warranted to investigate the causes of the positive effect. We do not yet know what these causes are, but it is important to note that neither our study nor the original study provides any evidence that these causes are “paranormal”. Thus, it is still safe to assume that the effects at play are within the boundaries of known physics, psychology, and research methodology.
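As a rough illustration of how a guess success rate can be compared with the 50% chance level in a paradigm like Bem’s Experiment 1 (this is not the consensus panel’s pre-registered analysis, and the trial counts below are placeholders), a simple binomial test:

# Minimal sketch: testing a guess success rate against the 50% chance level.
# The counts are hypothetical placeholders, not data from this registered report.
from scipy.stats import binomtest  # requires scipy >= 1.7

n_trials = 3600          # hypothetical total number of guesses across labs
n_hits = 1835            # hypothetical number of correct guesses

result = binomtest(n_hits, n_trials, p=0.5, alternative="greater")
print(f"Observed success rate: {n_hits / n_trials:.2%}")
print(f"One-sided p-value vs. 50% chance: {result.pvalue:.4f}")
print("95% CI for success probability:", result.proportion_ci(confidence_level=0.95))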


2017 ◽  
Author(s):  
Coosje Lisabet Sterre Veldkamp

THE HUMAN FALLIBILITY OF SCIENTISTS: Dealing with error and bias in academic research.

Recent studies have highlighted that not all published findings in the scientific literature are trustworthy, suggesting that currently implemented control mechanisms such as high standards for the reporting of research methods and results, peer review, and replication are not sufficient. In psychology in particular, solutions are sought to deal with poor reproducibility and replicability of research results. In this dissertation project I considered these problems from the perspective that the scientific enterprise must better recognize the human fallibility of scientists, and I examined potential solutions aimed at dealing with human error and bias in psychological science.

First, I studied whether the human fallibility of scientists is actually recognized (Chapter 2). I examined the degree to which scientists and lay people believe in the storybook image of the scientist: the image that scientists are more objective, rational, open-minded, intelligent, honest, and communal than other human beings. The results suggested that belief in this storybook image is strong, particularly among scientists themselves. In addition, I found indications that scientists believe that scientists like themselves fit the storybook image better than other scientists. I consider scientists’ lack of acknowledgement of their own fallibility problematic, because I believe that critical self-reflection is the first line of defense against potential human error aggravated by confirmation bias, hindsight bias, motivated reasoning, and other human cognitive biases that could affect any professional in their work.

Then I zoomed in on psychological science and focused on human error in the use of the most widely used statistical framework in psychology: null hypothesis significance testing (NHST). In Chapters 3 and 4, I examined the prevalence of errors in the reporting of statistical results in published articles and evaluated a potential best practice to reduce such errors: the so-called ‘co-pilot model of statistical analysis’. This model entails a simple code of conduct prescribing that statistical analyses are always conducted independently by at least two persons (typically co-authors). Using statcheck, a software package that is able to quickly retrieve and check statistical results in large sets of published articles, I replicated the alarmingly high error rates found in earlier studies. Although I did not find support for the effectiveness of the co-pilot model in reducing these errors, I proposed several ways to deal with human error in (psychological) research and suggested how the effectiveness of the proposed practices might be studied in future research.

Finally, I turned to the risk of bias in psychological science. Psychological data can often be analyzed in many different ways. The often arbitrary choices that researchers face in analyzing their data are called researcher degrees of freedom. Researchers might be tempted to use these researcher degrees of freedom in an opportunistic manner in their pursuit of statistical significance (often called p-hacking). This is problematic because it renders research results unreliable. In Chapter 5 I presented a list of researcher degrees of freedom in psychological studies, focusing on the use of NHST. This list can be used to assess the potential for bias in psychological studies, in research methods education, and to examine the effectiveness of a potential solution to restrict opportunistic use of researcher degrees of freedom: study pre-registration. Pre-registration requires researchers to stipulate in advance the research hypothesis, data collection plan, data analyses, and what will be reported in the paper. Different forms of pre-registration are currently emerging in psychology, mainly varying in the level of detail they require researchers to provide about the research plan. In Chapter 6, I assessed the extent to which current pre-registrations restricted opportunistic use of the researcher degrees of freedom on the list presented in Chapter 5. We found that most pre-registrations were not sufficiently restrictive, but that those written following better guidelines and requirements restricted opportunistic use of researcher degrees of freedom considerably better than basic pre-registrations written following a limited set of guidelines and requirements. We concluded that better instructions, specific questions, and stricter requirements are necessary for pre-registrations to do what they are supposed to do: protect researchers from their own biases.
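To illustrate the kind of consistency check that statcheck automates (the sketch below is my own illustration in Python, not statcheck’s actual R implementation), one can recompute the p-value implied by a reported test statistic and its degrees of freedom and flag mismatches with the reported p-value:

# Minimal sketch: recomputing the p-value implied by a reported result such as
# "t(28) = 2.20, p = .036" and flagging an inconsistency, the kind of check that
# statcheck automates for whole articles. Values below are illustrative.
from scipy import stats

def check_t_report(t_value, df, reported_p, tol=0.0005):
    """Compare a reported two-sided p-value with the one implied by t and df."""
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)
    consistent = abs(recomputed_p - reported_p) <= tol
    return recomputed_p, consistent

recomputed, ok = check_t_report(t_value=2.20, df=28, reported_p=0.036)
print(f"Recomputed p = {recomputed:.4f}; consistent with reported p = .036? {ok}")

# A reported p that does not match the test statistic and df would be flagged:
_, ok_bad = check_t_report(t_value=2.20, df=28, reported_p=0.020)
print(f"Consistent with reported p = .020? {ok_bad}")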

