Designing and Evaluating Survey Instruments for Research in Human Issues in Horticulture

HortScience ◽  
1998 ◽  
Vol 33 (3) ◽  
pp. 554c-554
Author(s):  
Sonja M. Skelly ◽  
Jennifer Campbell Bradley

Survey research has a long history of use in the social sciences. With a growing interest in the area of social science research in horticulture, survey methodology needs to be explored. In order to conduct proper and accurate survey research, a valid and reliable instrument must be used. In many cases, however, an existing measurement tool designed for the specific research variables is unavailable; thus, an understanding of how to design and evaluate a survey instrument is necessary. Currently, there are no guidelines in horticulture research for developing survey instruments for use with human subjects. This presents a problem when attempting to compare and reference similar research. This workshop will explore the methodology involved in preparing a survey instrument; topics covered will include defining objectives for the survey, constructing questions, pilot testing the survey, and obtaining reliability and validity information. In addition to these topics, some examples will be provided to illustrate how to complete these steps. At the conclusion of this session, a discussion will be initiated for others to share information and experiences dealing with creating survey instruments.

1954 ◽  
Vol 13 (1) ◽  
pp. 20-27 ◽  
Author(s):  
A. Vidich ◽  
J. Bensman

Scattered through the professional journals in fields commonly included in the social sciences—sociology, anthropology, social psychology, personality, public opinion—there is found an increasing concern with the reliability and validity of information secured for social science analysis. Much of this interest stems from or was stimulated by the now classical Social Science Research Council Bulletins on the use of personal documents, or by work being done simultaneously in England.


2008 ◽  
Vol 41 (03) ◽  
pp. 475-476 ◽  
Author(s):  
Robert J-P. Hauck

In the 1990s I testified before a National Science Foundation (NSF) panel headed by Cora Marrett, then assistant director for the NSF Directorate for the Social, Behavioral and Economic Sciences. The subject of the panel's inquiry, and this issue's symposium, was social science research and the federally mandated but decentralized human subjects protection program and its principal actors, institutional review boards (IRBs). My testimony addressed the ways in which the regulatory system ill-fit and ill-served political science research. IRBs had expanded their mission to include all research, not just research funded by the federal government, enhancing their scope of authority while slowing the timeliness of reviews. Similarly, and with the same result, IRBs were evaluating secondary research as well as primary research. Although the federal legislation provided for a nuanced assessment of risk, the distinction between potentially risk-laden research necessitating a full IRB review and research posing minimal or no risk that could be either exempted or given expedited review was disappearing. The length of the review process threatened the beginning or completion of course work and degree programs. IRBs were judging the merits of research projects rather than the risks involved. This trend was especially problematic because representation on many IRBs was skewed toward biological and behavioral scientists often unfamiliar with the methods and fields of political science and the other social sciences. And the list went on.


Author(s):  
Gary Goertz ◽  
James Mahoney

Some in the social sciences argue that the same logic applies to both qualitative and quantitative research methods. This book demonstrates that these two paradigms constitute different cultures, each internally coherent yet marked by contrasting norms, practices, and toolkits. The book identifies and discusses major differences between these two traditions that touch nearly every aspect of social science research, including design, goals, causal effects and models, concepts and measurement, data analysis, and case selection. Although focused on the differences between qualitative and quantitative research, the book also seeks to promote toleration, exchange, and learning by enabling scholars to think beyond their own culture and see an alternative scientific worldview. The book is written in an easily accessible style and features a host of real-world examples to illustrate methodological points.


Author(s):  
Valentina Kuskova ◽  
Stanley Wasserman

Network theoretical and analytic approaches have reached a new level of sophistication in this decade, accompanied by a rapid growth of interest in adopting these approaches in social science research generally. Of course, much social and behavioral science focuses on individuals, but there are often situations where the social environment, the social system, affects individual responses. In these circumstances, treating individuals as isolated social atoms, a necessary assumption for the application of standard statistical analysis, is simply incorrect. Network methods should be part of the theoretical and analytic arsenal available to sociologists. Our focus here will be on the exponential family of random graph distributions, p*, because of its inclusiveness. It includes conditional uniform distributions as special cases.
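The p* (exponential random graph) family models the probability of an observed graph through a small set of sufficient statistics, typically counts of edges, stars, and triangles. As a minimal sketch of that idea (the function and the particular statistics chosen here are illustrative, not taken from the article), the following computes three common p* statistics for an undirected graph:

```python
from itertools import combinations

def ergm_statistics(nodes, edges):
    """Sufficient statistics (edge, two-star, and triangle counts)
    for a simple undirected p* / ERGM specification."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n_edges = len(edges)
    # Two-stars: pairs of edges sharing a node, i.e. sum of C(degree, 2)
    two_stars = sum(len(nbrs) * (len(nbrs) - 1) // 2 for nbrs in adj.values())
    # Triangles: node triples that are mutually adjacent
    triangles = sum(1 for u, v, w in combinations(list(nodes), 3)
                    if v in adj[u] and w in adj[u] and w in adj[v])
    return {"edges": n_edges, "two_stars": two_stars, "triangles": triangles}

# Complete graph on 4 nodes: 6 edges, 12 two-stars, 4 triangles
nodes = list(range(4))
edges = [(i, j) for i, j in combinations(nodes, 2)]
stats = ergm_statistics(nodes, edges)
```

In a fitted p* model these counts enter the exponent of the graph probability, weighted by estimated parameters; the sketch stops at the statistics themselves.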


2021 ◽  
Vol 7 ◽  
pp. 237802312110244
Author(s):  
Katrin Auspurg ◽  
Josef Brüderl

In 2018, Silberzahn, Uhlmann, Nosek, and colleagues published an article in which 29 teams analyzed the same research question with the same data: Are soccer referees more likely to give red cards to players with dark skin tone than light skin tone? The results obtained by the teams differed extensively. Many concluded from this widely noted exercise that the social sciences are not rigorous enough to provide definitive answers. In this article, we investigate why results diverged so much. We argue that the main reason was an unclear research question: Teams differed in their interpretation of the research question and therefore used diverse research designs and model specifications. We show by reanalyzing the data that with a clear research question, a precise definition of the parameter of interest, and theory-guided causal reasoning, results vary only within a narrow range. The broad conclusion of our reanalysis is that social science research needs to be more precise in its “estimands” to become credible.

