Guidelines on the Provision and Handling of Research Data in Sociology

2020 ◽  
Author(s):  
Academy of Sociology

With these guidelines, the Academy of Sociology (a German professional association) gives recommendations on how social science data can be made open, with the aim of making the social sciences more open overall.

1984 ◽  
Vol 8 (1) ◽  
pp. 19-24 ◽  
Author(s):  
B.C. Brookes

In a critical review of all the empirical laws of bibliometrics and scientometrics, the Russian statistician S.D. Haitun has shown that the application of modern statistical theory to social science data is 'inadmissible', i.e. it 'does not work'. Haitun thus points to the need to develop a wholly new statistical theory for the social sciences in general and for informetrics in particular. This paper discusses the implications of Haitun's work and explains why the older Bradford law still has an important role to play in the development of a new theory.
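Bradford's law, which the paper argues remains relevant, states that if journals are ranked by the number of relevant articles they contribute and then split into zones each yielding roughly equal article counts, the numbers of journals per zone grow geometrically, approximately 1 : n : n². A minimal sketch of this zoning, using hypothetical article counts (not data from the paper):

```python
import numpy as np

# Hypothetical ranked productivity list: articles contributed per journal,
# sorted from most to least productive (illustrative numbers only).
articles_per_journal = [90, 60, 40, 30, 25, 22, 18, 15, 14, 13, 12, 11, 10,
                        10, 9, 9, 8, 8, 7, 7, 7, 6, 6, 5, 4, 4]
total = sum(articles_per_journal)
cut = total / 3  # each zone should account for ~one third of all articles

zones, running = [[]], 0
for count in articles_per_journal:
    # Start a new zone once the running total crosses the next third.
    if running >= cut * len(zones) and len(zones) < 3:
        zones.append([])
    zones[-1].append(count)
    running += count

print([len(z) for z in zones])  # → [2, 6, 18]: roughly 1 : n : n² with n ≈ 3
```

The point of the sketch is only the shape of the distribution: a small core of highly productive journals, then successively larger zones of less productive ones.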


2016 ◽  
Vol 29 (2) ◽  
pp. 62-73
Author(s):  
Kalpana Shankar ◽  
Kristin R. Eschenfelder ◽  
Greg Downey

We map out a new arena of analysis for knowledge and cyberinfrastructure scholars: Social Science Data Archives (SSDA). SSDA have influenced the international development of the social sciences, research methods, and data standards in the latter half of the twentieth century. They provide entry points to understand how fields organise themselves to be ‘data intensive’. Longitudinal studies of SSDA can increase our understanding of the sustainability of knowledge infrastructure more generally. We argue for special attention to the following themes: the co-shaping of data use and users, the materiality of shifting revenue sources, field level relationships as an important component of infrastructure, and the implications of centralisation and federation of institutions and resources. We briefly describe our ongoing study of primarily quantitative social science data archives. We conclude by discussing how cross-institutional and longitudinal analyses can contribute to the scholarship of knowledge infrastructure.

Keywords: social sciences; data archives; institutional sustainability


2021 ◽  
pp. 1-19
Author(s):  
Michelle Torres ◽  
Francisco Cantú

Abstract We provide an introduction to the functioning, implementation, and challenges of convolutional neural networks (CNNs) for classifying visual information in the social sciences. This tool can help scholars make the tedious task of classifying images and extracting information from them more efficient. We illustrate the implementation and impact of this methodology by coding handwritten information from vote tallies. Our paper not only demonstrates the contributions of CNNs to both scholars and policy practitioners, but also presents the practical challenges and limitations of the method, providing advice on how to deal with these issues.
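The CNN building block the abstract refers to - convolution, nonlinear activation, and pooling - can be illustrated in plain NumPy. This is a toy sketch of the mechanism, not the authors' implementation; the image, kernel, and sizes are invented for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise rectified linear activation."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that don't fit a full window."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 6x6 patch: dark left half, bright right half (a crude "pen stroke" edge).
image = np.zeros((6, 6))
image[:, 3:] = 1.0
kernel = np.array([[-1.0, 1.0]] * 3)  # 3x2 vertical-edge detector

feature = relu(conv2d(image, kernel))
pooled = max_pool(feature)
print(pooled)  # the edge survives pooling as a strong response in one column
```

In a trained CNN the kernel weights are learned from labeled examples rather than hand-set, and many such filter-activation-pooling stages are stacked before a final classification layer.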


2020 ◽  
Vol 16 (1) ◽  
Author(s):  
Kevin Louis Bardosh ◽  
Daniel H. de Vries ◽  
Sharon Abramowitz ◽  
Adama Thorlie ◽  
Lianne Cremers ◽  
...  

Abstract
Background: The importance of integrating the social sciences in epidemic preparedness and response has become a common feature of infectious disease policy and practice debates. However, to date, this integration remains inadequate, fragmented and under-funded, with limited reach and small initial investments. Based on data collected prior to the COVID-19 pandemic, in this paper we analysed the variety of knowledge, infrastructure and funding gaps that hinder the full integration of the social sciences in epidemics and present a strategic framework for addressing them.
Methods: Senior social scientists with expertise in public health emergencies facilitated expert deliberations and conducted 75 key informant interviews, a consultation with 20 expert social scientists from Africa, Asia and Europe, two focus groups and a literature review of 128 identified high-priority peer-reviewed articles. We also analysed 56 interviews from the Ebola 100 project, collected just after the West African Ebola epidemic. Analysis focused on gaps and recommendations, which were inductively classified according to various themes during two group prioritization exercises. The project was conducted between February and May 2019, and findings from the report were used to inform strategic prioritization of global investments in social science capacities for health emergencies.
Findings: Our analysis consolidated 12 knowledge and infrastructure gaps and 38 recommendations from an initial list of 600 gaps and 220 recommendations. In developing our framework, we clustered these into three areas: 1) recommendations to improve core social science response capacities, including investments in human resources within response agencies, the creation of social science data analysis capacities at field and global levels, mechanisms for operationalizing knowledge, and a set of rapid-deployment infrastructures; 2) recommendations to strengthen applied and basic social sciences, including the need to better define the social science agenda and core competencies, support innovative interdisciplinary science, make concerted investments in developing field-ready tools and building the evidence base, and develop codes of conduct; and 3) recommendations for a supportive social science ecosystem, including essential foundational investments in institutional development, training and capacity building, awareness-raising activities with allied disciplines and, lastly, support for a community of practice.
Interpretation: Comprehensively integrating social science into the epidemic preparedness and response architecture demands multifaceted investments on par with allied disciplines such as epidemiology and virology. Building core capacities and competencies should occur at multiple levels, grounded in country-led capacity building. Social science should not be a parallel system, nor should it be “siloed” into risk communication and community engagement. Rather, it should be integrated across existing systems and networks, deploying interdisciplinary knowledge “transversally” across all preparedness and response sectors and pillars. Future work should update this framework to account for the impact of the COVID-19 pandemic on the institutional landscape.


2001 ◽  
Vol 25 (2) ◽  
pp. 24
Author(s):  
Janez Stebe ◽  
Irena Vipavc

The Social Science Data Archive in Slovenia


1995 ◽  
Vol 20 (2) ◽  
pp. 115-147 ◽  
Author(s):  
David Draper

Hierarchical models (HMs; Lindley & Smith, 1972) offer considerable promise to increase the level of realism in social science modeling, but the scope of what can be validly concluded with them is limited, and recent technical advances in allied fields may not yet have been put to best use in implementing them. In this article, I (a) examine 3 levels of inferential strength supported by typical social science data-gathering methods, and call for a greater degree of explicitness, when HMs and other models are applied, in identifying which level is appropriate; (b) reconsider the use of HMs in school effectiveness studies and meta-analysis from the perspective of causal inference; and (c) recommend the increased use of Gibbs sampling and other Markov-chain Monte Carlo (MCMC) methods in the application of HMs in the social sciences, so that comparisons between MCMC and better-established fitting methods—including full or restricted maximum likelihood estimation based on the EM algorithm, Fisher scoring, and iterative generalized least squares—may be more fully informed by empirical practice.
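Draper recommends Gibbs sampling and other MCMC methods for fitting hierarchical models. As a toy illustration of the Gibbs idea - alternately drawing each variable from its full conditional distribution - here is a sampler for a standard bivariate normal with correlation rho, where each conditional is N(rho·other, 1 − rho²). This is a generic textbook example, not a model from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_bivariate_normal(rho, n_iter=5000, burn_in=500):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Full conditionals: x | y ~ N(rho*y, 1 - rho^2) and symmetrically for y.
    Returns the post-burn-in draws as an (n_iter - burn_in, 2) array.
    """
    x, y = 0.0, 0.0
    cond_sd = np.sqrt(1 - rho ** 2)
    draws = []
    for t in range(n_iter):
        x = rng.normal(rho * y, cond_sd)  # draw x from its full conditional
        y = rng.normal(rho * x, cond_sd)  # draw y given the fresh x
        if t >= burn_in:
            draws.append((x, y))
    return np.array(draws)

samples = gibbs_bivariate_normal(rho=0.8)
print(np.corrcoef(samples.T)[0, 1])  # sample correlation close to 0.8
```

Real applications of Gibbs sampling to hierarchical models work the same way, but the full conditionals are derived from the model's likelihood and priors, and convergence diagnostics replace the fixed burn-in used here.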


2020 ◽  
Vol 43 (4) ◽  
pp. 1-2
Author(s):  
Karsten Boye Rasmussen

Welcome to the fourth issue of volume 43 of the IASSIST Quarterly (IQ 43:4, 2019). The first article is authored by Jessica Mozersky, Heidi Walsh, Meredith Parsons, Tristan McIntosh, Kari Baldwin, and James M. DuBois – all located at the Bioethics Research Center, Washington University School of Medicine, St. Louis, Missouri, USA. They ask the question “Are we ready to share qualitative research data?”, with the subtitle “Knowledge and preparedness among qualitative researchers, IRB Members, and data repository curators.” The subtitle indicates that their research includes a survey of key personnel involved in scientific data sharing. The findings are based on semi-structured in-depth interviews with 30 data repository curators, 30 qualitative researchers, and 30 IRB staff members in the USA. IRB stands for Institutional Review Board, which in other countries might be called a research ethics committee or similar. There is generally an increasing trend towards data sharing and open science, but qualitative data are rarely shared. The dilemma behind this reluctance to share is exemplified by health data, where qualitative methods explore sensitive topics. The sensitivity demands protection of confidentiality, which works against retaining sufficient contextual detail for secondary analyses. You could add that protecting confidentiality is a much bigger task with qualitative data, where sensitive information can be hidden in every corner of the data and must therefore be fine-combed, while with quantitative data most decisions concerning confidentiality can be made at the level of variables. The article gives insights into the differences between the three stakeholder groups. A frequent answer among researchers is that data sharing is associated with quantitative data, while IRB members have little practice with qualitative data. Among curators, about half had curated qualitative data, but many only worked with quantitative data.
In general, qualitative data sharing lacks guidance and standards. The second article also raises a question: “How many ways can we teach data literacy?” We are now in Asia, with a connection to the USA. The author, Yun Dai, works at the Library of New York University Shanghai, where many ways of teaching data literacy to undergraduate students have been explored. These initiatives, described in the article, ranged from workshops and in-class instruction - which tempted students by offering up-to-date technology - through online casebooks of topics in the data lifecycle, to event series with appealing names like “Lying with Data.” The event series had a marketing mascot - a “Lying with Data” Pinocchio - and sessions on being fooled by advertisements and getting the truth out of opinion surveys. Data literacy resembles information literacy, and in that perspective data literacy is defined as “critical thinking applied to evaluating data sources and formats, and interpreting and communicating findings,” while statistical literacy is “the ability to evaluate statistical information as evidence.” The article presents the approaches but does not answer the question “How many?” No readers will be surprised by the missing answer, and I am certain readers will enjoy the ideas of the article and the marketing focus. With the last article, “Examining barriers for establishing a national data service,” the author Janez Štebe takes us to Europe. Janez Štebe is head of the social science data archives (Arhiv Družboslovnih Podatkov) at the University of Ljubljana, Slovenia. The Consortium of European Social Science Data Archives (CESSDA) is a distributed European social science data infrastructure for access to research data. CESSDA has many - but not all - European countries as members.
The focus is on the situation in 20 non-CESSDA-member European countries, whose emerging and immature data archive services are being developed through projects such as CESSDA Strengthening and Widening (SaW, 2016 and 2017) and CESSDA Widening Activities (WA, 2018). By identifying and comparing gaps and differences, a group of countries at a similar level may consider following similar best-practice examples to achieve a more mature and supportive open scientific data ecosystem. Like the earlier articles, this article provides good references to earlier literature and descriptions of previous studies in the area. In this project 22 countries were selected, all CESSDA non-members, and interviewees among social science researchers and data librarians were contacted with an e-mail template between October 2018 and January 2019. The article presents results and a discussion of the national data-sharing culture and data infrastructure. Yes, there is a lack of money! However, it is the process of gradually establishing a robust data infrastructure that is believed to affect the growth of a data-sharing culture and improve the excellence and efficiency of research in general. Submissions of papers for the IASSIST Quarterly are always very welcome. We welcome input from IASSIST conferences or other conferences and workshops, from local presentations, or papers written especially for the IQ. When you are preparing such a presentation, give a thought to turning your one-time presentation into a lasting contribution. Doing that after the event also gives you the opportunity to improve your work based on feedback. We encourage you to log in or create an author login at https://www.iassistquarterly.com (our Open Journal System application). We permit authors to “deep link” into the IQ as well as to deposit the paper in your local repository.
Chairing a conference session with the purpose of aggregating and integrating papers for a special issue of the IQ is also much appreciated, as the information then reaches many more people than the limited number of session participants and will be readily available on the IASSIST Quarterly website at https://www.iassistquarterly.com. Authors are very welcome to take a look at the instructions and layout: https://www.iassistquarterly.com/index.php/iassist/about/submissions. Authors can also contact me directly via e-mail: [email protected]. Should you be interested in compiling a special issue for the IQ as guest editor(s), I will also be delighted to hear from you. Karsten Boye Rasmussen - December 2019


1978 ◽  
Vol 2 (1) ◽  
pp. 3
Author(s):  
Alice Robbin

The Impact of Computer Networking on the Social Science Data Library

