Institutional Review Board and International Field Research in Conflict Zones

2014 ◽ Vol 47 (04) ◽ pp. 840-844 ◽ Author(s): Srobana Bhattacharya

Abstract: Research on political conflict can benefit immensely from fieldwork. However, the Institutional Review Board (IRB) process is so elaborate and daunting that it discourages rather than encourages this type of research. Existing policies are often insensitive to the many uncertainties of field research abroad, especially in conflict zones. Three reasons for this are identified in this article. First, the federal regulations protecting human subjects of social science research are best suited to the biomedical sciences. Second, there is a huge gap between “procedural ethics” and “ethics in practice.” Third, there is a lack of communication or dialogue between researchers and IRBs. After discussing these reasons, I offer the following suggestions: bridging the gap between the researcher and the IRB; reducing delays in the IRB approval and revision process; encouraging collaboration and dialogue among researchers; and advocating a proactive stance by academic associations.

1980 ◽ Vol 59 (3, Suppl.) ◽ pp. 1305-1306

An "add-on" study has been brought to the attention of the University's Institutional Review Board (IRB), which has approved Dr. A's study. As a member of the IRB, do you have any questions or concerns about the investigation?


2008 ◽ Vol 41 (03) ◽ pp. 477-482 ◽ Author(s): Mitchell A. Seligson

Social scientists are well aware of the unintended consequences of public policies. The human subjects protection regulations, which emerged in response to a serious problem in the medical community, provide an ideal example of such unintended consequences; to paraphrase an old aphorism, “the road to bureaucratic hell is paved with well-intentioned public policies.” In this essay I seek to make three points. First, the protection of human subjects by federal regulation was long overdue. Second, this benefit to society has, in its application, ignored another widely accepted regulatory principle, namely that the costs of regulation should not outweigh its benefits; a combination of “bureaucratic creep” and litigation phobia has resulted in intrusive and counterproductive regulation of social science research, such that the cure has become worse than the disease. Third, and ironically, because of institutional review boards' definitions of what is and what is not research, the protection of human subjects is denied to subjects who actually could be at risk.


2008 ◽ Vol 41 (03) ◽ pp. 475-476 ◽ Author(s): Robert J-P. Hauck

In the 1990s I testified before a National Science Foundation (NSF) panel headed by Cora Marrett, then assistant director for the NSF Directorate for Social, Behavioral and Economic Sciences. The subject of the panel's inquiry, and of this issue's symposium, was social science research and the federally mandated but decentralized human subjects protection program and its principal actors, institutional review boards (IRBs). My testimony addressed the ways in which the regulatory system poorly fit and poorly served political science research. IRBs had expanded their mission to include all research, not just research funded by the federal government, enhancing their scope of authority while slowing the timeliness of reviews. Similarly, and with the same result, IRBs were evaluating secondary research as well as primary research. Although the federal legislation provided for a nuanced assessment of risk, the distinction between potentially risk-laden research necessitating full IRB review and research posing minimal or no risk that could be exempted or given expedited review was disappearing. The length of the review process threatened the beginning or completion of course work and degree programs. IRBs were judging the merits of research projects rather than the risks involved. This trend was especially problematic because representation on many IRBs was skewed toward biological and behavioral scientists often unfamiliar with the methods and fields of political science and the other social sciences. And the list went on.


HortScience ◽ 1998 ◽ Vol 33 (3) ◽ pp. 554c-554 ◽ Author(s): Sonja M. Skelly, Jennifer Campbell Bradley

Survey research has a long precedent of use in the social sciences. With growing interest in social science research in horticulture, survey methodology needs to be explored. To conduct proper and accurate survey research, a valid and reliable instrument must be used. In many cases, however, an existing measurement tool designed for the specific research variables is unavailable; thus, an understanding of how to design and evaluate a survey instrument is necessary. Currently, there are no guidelines in horticulture research for developing survey instruments for use with human subjects. This presents a problem when attempting to compare and reference similar research. This workshop will explore the methodology involved in preparing a survey instrument; topics covered will include defining objectives for the survey, constructing questions, pilot testing the survey, and obtaining reliability and validity information. In addition to these topics, examples will be provided to illustrate how to complete these steps. At the conclusion of the session, a discussion will be initiated for attendees to share information and experiences in creating survey instruments.
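Of the workshop topics listed, the last, obtaining reliability information, lends itself to a brief worked example. Below is a minimal sketch of one common internal-consistency check, Cronbach's alpha, computed on hypothetical pilot data; the data and the function name are illustrative, not drawn from the abstract.

```python
# Cronbach's alpha for a pilot survey: a minimal sketch, assuming
# numeric (e.g., Likert-type) item scores. The pilot data below are
# hypothetical; a real analysis would use the actual pilot responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of numeric scores."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 6 respondents answering 4 Likert-type items.
pilot = np.array([
    [4, 5, 4, 5],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(pilot):.2f}")  # ≈ 0.93 for this toy data
```

An alpha near or above 0.70 is a conventional, if debated, threshold for treating a multi-item scale as internally consistent; pilot items that drag alpha down are candidates for rewording before the full survey is fielded.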


2020 ◽ pp. 1-10 ◽ Author(s): Bryce J. Dietrich

Abstract: Although previous scholars have used image data to answer important political science questions, less attention has been paid to video-based measures. In this study, I use motion detection to understand the extent to which members of Congress (MCs) literally cross the aisle, although motion detection can be used to study a wide range of political phenomena, such as protests, political speeches, campaign events, or oral arguments. I find not only that Democrats and Republicans are less willing to literally cross the aisle, but also that this behavior is predictive of future party voting, even when previous party voting is included as a control. This is only one of the many ways motion detection can be used by social scientists. In this sense, the present study is not the end but the beginning of an important new line of research in which video data are more actively used in social science research.
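The abstract does not describe the author's pipeline, but frame differencing is the simplest form of the motion detection it invokes: compare consecutive video frames and measure how many pixels changed. A minimal sketch, assuming OpenCV and a local video file (the filename is a placeholder, not the study's data):

```python
# Generic frame-differencing motion detector; a sketch of the technique,
# not Dietrich's actual pipeline. "floor_footage.mp4" is a placeholder.
import cv2

cap = cv2.VideoCapture("floor_footage.mp4")
ok, frame = cap.read()
prev = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)

motion = []  # fraction of pixels that changed in each frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    diff = cv2.absdiff(prev, gray)                     # per-pixel change
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    motion.append(cv2.countNonZero(mask) / mask.size)  # share of changed pixels
    prev = gray

cap.release()
print(f"{len(motion)} frames; mean motion = {sum(motion) / len(motion):.4f}")
```

Restricting the differencing to a region of interest around the aisle, rather than the whole frame, would be one way to turn this raw signal into an aisle-crossing measure of the kind the abstract describes.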


2021 ◽ Vol 7 ◽ pp. 237802312110244 ◽ Author(s): Katrin Auspurg, Josef Brüderl

In 2018, Silberzahn, Uhlmann, Nosek, and colleagues published an article in which 29 teams analyzed the same research question with the same data: Are soccer referees more likely to give red cards to players with dark skin tone than light skin tone? The results obtained by the teams differed extensively. Many concluded from this widely noted exercise that the social sciences are not rigorous enough to provide definitive answers. In this article, we investigate why results diverged so much. We argue that the main reason was an unclear research question: Teams differed in their interpretation of the research question and therefore used diverse research designs and model specifications. We show by reanalyzing the data that with a clear research question, a precise definition of the parameter of interest, and theory-guided causal reasoning, results vary only within a narrow range. The broad conclusion of our reanalysis is that social science research needs to be more precise in its “estimands” to become credible.
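To make the estimand point concrete, the toy simulation below (simulated data, not the Silberzahn et al. red-card data) shows how two defensible specifications of a loosely worded question return different coefficients, and how only a precise definition of the target quantity, total versus adjusted association, adjudicates between them.

```python
# Toy illustration of the article's point: with an unclear estimand, two
# defensible model specifications give different answers. All variables
# are simulated and hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
position = rng.binomial(1, 0.3, n)                # confounder (e.g., player role)
skin = rng.binomial(1, 0.4 + 0.2 * position, n)   # "treatment", correlated with role
cards = 0.05 + 0.00 * skin + 0.04 * position + rng.normal(0, 0.05, n)
# True direct effect of skin tone is zero by construction; role drives cards.

def ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

X_raw = np.column_stack([np.ones(n), skin])            # Team A: bivariate model
X_adj = np.column_stack([np.ones(n), skin, position])  # Team B: adjusts for role
print("unadjusted skin coefficient:", round(ols(X_raw, cards)[1], 4))  # ~0.007
print("adjusted skin coefficient:  ", round(ols(X_adj, cards)[1], 4))  # ~0.000
```

Neither specification is wrong in itself; only a stated estimand, the total association or the role-adjusted one, determines which model answers the research question.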

