Reporting Practices in Quantitative Teacher Education Research: One Look at the Evidence Cited in the AERA Panel Report

2008 ◽  
Vol 37 (4) ◽  
pp. 208-216 ◽  
Author(s):  
Linda Reichwein Zientek ◽  
Mary Margaret Capraro ◽  
Robert M. Capraro

The authors of this article examine the analytic and reporting features of research articles cited in Studying Teacher Education: The Report of the AERA Panel on Research and Teacher Education (Cochran-Smith & Zeichner, 2005b) that used quantitative reporting practices. Their purpose was to help identify reporting practices that can be improved to further the creation of the best possible evidence base for teacher education. Their findings indicate that many study reports lack (a) effect sizes, (b) confidence intervals, and (c) reliability and validity coefficients. One possible solution is for journal editors to clearly emphasize the expectations established in Standards for Reporting on Empirical Social Science Research in AERA Publications (AERA, 2006).
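The quantities the panel review found missing are inexpensive to compute and report. As a hedged illustration only (the function and data below are invented for this listing, not drawn from the article), a minimal Python sketch of Cohen's d with an approximate 95% confidence interval for two independent groups:

```python
import numpy as np
from scipy import stats

def cohens_d_with_ci(group_a, group_b, confidence=0.95):
    """Cohen's d for two independent samples, with an approximate CI."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    n1, n2 = len(a), len(b)
    # Pooled standard deviation across the two groups
    pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1))
                        / (n1 + n2 - 2))
    d = (a.mean() - b.mean()) / pooled_sd
    # Large-sample standard error of d (Hedges & Olkin approximation)
    se = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    z = stats.norm.ppf(1 - (1 - confidence) / 2)
    return d, (d - z * se, d + z * se)

# Invented treatment/control scores, for illustration only
rng = np.random.default_rng(0)
treatment = rng.normal(0.5, 1.0, size=60)
control = rng.normal(0.0, 1.0, size=60)
d, (lo, hi) = cohens_d_with_ci(treatment, control)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The standard-error approximation here is the common large-sample one; for small samples, exact intervals based on the noncentral t distribution are preferable.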

2015 ◽  
Vol 27 (4) ◽  
pp. 487-493 ◽  
Author(s):  
Tracy Wharton

Dissemination of research is the most challenging aspect of building the evidence base. Despite peer review, evidence suggests that a substantial proportion of papers omit details necessary to judge bias, attempt replication, or initiate meta-analyses and systematic reviews. Reporting guidelines were created to ensure minimally adequate reporting of research and have become increasingly popular since the 1990s; more than 200 now exist to help authors report a range of study methodologies. Although guidelines are freely available, they are underutilized, and critics question the assumptions some guidelines make about the methodologies they target. As journal editors increasingly endorse guidelines, social work authors may benefit from considering those appropriate for their work. This article explores the pros and cons of guideline use by authors and journals and presents suggestions for the field of social work, including an assessment of whether profession-specific reporting guidelines are needed and cautions regarding their limitations.


Author(s):  
Jeasik Cho

This chapter discusses a number of practical evaluation tools used by qualitative research journals. First, the chapter reviews the American Educational Research Association's "Standards for Reporting on Empirical Social Science Research," which emphasizes warrantability and transparency. Second, many ideas on reviewing qualitative research are briefly presented. Third, current qualitative research journals that use specific evaluation tools and those that do not are discussed. The reasons why some journal editors do not use such tools are identified: trust, freedom, the nature of qualitative research, and "it works." Other journals that do use specific review guides are analyzed. This chapter suggests a holistic way of understanding the evaluation of qualitative research by taking three elements (core values, research processes, and key dimensions) into consideration. The seven most commonly used evaluation criteria are discussed: importance to the field, qualities, writing, data analysis, theoretical framework, participant, and impact/readership.


HortScience ◽  
1998 ◽  
Vol 33 (3) ◽  
pp. 554c-554
Author(s):  
Sonja M. Skelly ◽  
Jennifer Campbell Bradley

Survey research has a long history of use in the social sciences. With growing interest in social science research in horticulture, survey methodology needs to be explored. To conduct proper and accurate survey research, a valid and reliable instrument must be used. In many cases, however, an existing measurement tool designed for the specific research variables is unavailable; thus, an understanding of how to design and evaluate a survey instrument is necessary. Currently, there are no guidelines in horticulture research for developing survey instruments for use with human subjects. This presents a problem when attempting to compare and reference similar research. This workshop will explore the methodology involved in preparing a survey instrument; topics covered will include defining objectives for the survey, constructing questions, pilot testing the survey, and obtaining reliability and validity information. In addition, examples will be provided illustrating how to complete these steps. At the conclusion of the session, a discussion will be initiated for others to share information and experiences in creating survey instruments.
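One of the workshop topics, obtaining reliability information, is commonly operationalized at the pilot-testing stage as Cronbach's alpha over the item responses. A minimal sketch, assuming a respondents-by-items score matrix (the pilot data below are invented for illustration):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]                              # number of items
    item_variances = X.var(axis=0, ddof=1)      # variance of each item
    total_variance = X.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented pilot data: 10 respondents x 4 Likert items
pilot = np.array([
    [4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4], [2, 3, 2, 2], [4, 4, 5, 4],
    [3, 4, 3, 3], [5, 4, 5, 5], [2, 2, 3, 2], [4, 4, 4, 5], [3, 3, 3, 4],
])
print(f"alpha = {cronbach_alpha(pilot):.2f}")  # ~0.70+ is a common rule of thumb
```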


2019 ◽  
Vol 51 (5) ◽  
pp. 2022-2038 ◽  
Author(s):  
Jesse Chandler ◽  
Cheskie Rosenzweig ◽  
Aaron J. Moss ◽  
Jonathan Robinson ◽  
Leib Litman

Amazon Mechanical Turk (MTurk) is widely used by behavioral scientists to recruit research participants. MTurk offers advantages over traditional student subject pools, but it also has important limitations. In particular, the MTurk population is small and potentially overused, and some groups of interest to behavioral scientists are underrepresented and difficult to recruit. Here we examined whether online research panels can avoid these limitations. Specifically, we compared sample composition, data quality (measured by effect sizes, internal reliability, and attention checks), and the non-naïveté of participants recruited from MTurk and Prime Panels, an aggregate of online research panels. Prime Panels participants were more diverse in age, family composition, religiosity, education, and political attitudes. Prime Panels participants also reported less exposure to classic protocols and produced larger effect sizes, but only after participants who failed a screening task were excluded. We conclude that online research panels offer a unique opportunity for research, yet one with some important trade-offs.


AMBIO ◽  
2015 ◽  
Vol 45 (1) ◽  
pp. 52-62 ◽  
Author(s):  
David M. Oliver ◽  
Nick D. Hanley ◽  
Melanie van Niekerk ◽  
David Kay ◽  
A. Louise Heathwaite ◽  
...  

2016 ◽  
Vol 49 (01) ◽  
pp. 77-81 ◽  
Author(s):  
Vanessa Williamson

This article examines the ethics of crowdsourcing in social science research, with reference to my own experience using Amazon's Mechanical Turk. As these types of research tools become more common in scholarly work, we must acknowledge that many participants are not one-time respondents or even hobbyists. Many people work long hours completing surveys and other tasks for very low wages, relying on those incomes to meet their basic needs. I present my own experience of interviewing Mechanical Turk participants about their sources of income, and I offer recommendations to individual researchers, social science departments, and journal editors regarding the more ethical use of crowdsourcing.


2013 ◽  
Vol 18 (2) ◽  
pp. 199-228 ◽  
Author(s):  
Jean Parkinson

That-complement clauses are a prominent feature of various registers, including conversation and academic prose. In academic prose, that-clauses are of interest because they frame research findings, the writer's central message to the reader. To achieve this persuasive purpose, that-clauses are employed to draw in various voices, including those of other researchers, research participants, research findings, and the writer. This study extends prior investigation of complement clauses to examine their distribution across different sections of a corpus of research articles in social science. The social action of each section is achieved partly through what the different voices talk about in each section, and partly through subtle variations in the stance of the author and other voices across sections. The study finds that the use of reporting verbs is nuanced according to authors' purposes in different sections, and also according to the source of the proposition in the that-clause.


2006 ◽  
Vol 3 (2) ◽  
Author(s):  
Tina Kogovšek

Egocentered networks are common in social science research. Here, the unit of analysis is a respondent (ego) together with his or her personal network (alters). Usually, several variables are measured to describe the relationship between egos and alters. The aim of this paper is to estimate the reliability and validity of the averages of these measures using the multitrait-multimethod (MTMM) approach. Web and telephone modes of data collection are compared on a convenience sample of 238 second-year students at the Faculty of Social Sciences at the University of Ljubljana; the data were collected in 2003. The results show that the telephone mode produces more reliable data than the web mode. A method-order effect was also found: the data collection mode used first produces data of lower reliability than the mode used second. There were no large differences in the validity of measurement.
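In its simplest form, the reliability comparison in a repeated-measurement design like this reduces to correlating two measurements of the same trait within each mode. The sketch below is a simplified simulation, not the author's MTMM model; the error magnitudes are assumptions chosen only to reproduce the direction of the reported result:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 238                          # sample size reported in the study
trait = rng.normal(3.5, 0.8, n)  # latent trait, e.g., average closeness to alters

def measure(true_score, error_sd):
    """One observed measurement: true score plus mode-specific random error."""
    return true_score + rng.normal(0.0, error_sd, len(true_score))

# Two measurements per mode; the web mode is assumed noisier
# (an assumption for illustration, not an estimate from the paper).
tel_1, tel_2 = measure(trait, 0.3), measure(trait, 0.3)
web_1, web_2 = measure(trait, 0.6), measure(trait, 0.6)

# Test-retest reliability = correlation between repeated measurements
print(f"telephone reliability: {np.corrcoef(tel_1, tel_2)[0, 1]:.2f}")
print(f"web reliability:       {np.corrcoef(web_1, web_2)[0, 1]:.2f}")
```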

