Data Curation Implications of Qualitative Data Reuse and Big Social Research

2021 ◽  
Vol 10 (4) ◽  
Author(s):  
Sara Mannheimer

Objective: Big social data (such as social media and blogs) and archived qualitative data (such as interview transcripts, field notebooks, and diaries) are similar, but their respective communities of practice are under-connected. This paper explores shared challenges in qualitative data reuse and big social research and identifies implications for data curation. Methods: This paper uses a broad literature search and inductive coding of 300 articles relating to qualitative data reuse and big social research. The literature review produces six key challenges relating to data use and reuse that are present in both qualitative data reuse and big social research—context, data quality, data comparability, informed consent, privacy & confidentiality, and intellectual property & data ownership. Results: This paper explores six key challenges related to data use and reuse for qualitative data and big social research and discusses their implications for data curation practices. Conclusions: Data curators can benefit from understanding these six key challenges and examining data curation implications. Data curation implications from these challenges include strategies for: providing clear documentation; linking and combining datasets; supporting trustworthy repositories; using and advocating for metadata standards; discussing alternative consent strategies with researchers and IRBs; understanding and supporting deidentification challenges; supporting restricted access for data; creating data use agreements; supporting rights management and data licensing; developing and supporting alternative archiving strategies. Considering these data curation implications will help data curators support sounder practices for both qualitative data reuse and big social research.

2007 ◽  
Vol 12 (3) ◽  
pp. 39-42 ◽  
Author(s):  
Jennifer Mason

This article is written to accompany and respond to the articles that form the special issue of Sociological Research Online on ‘Re-using qualitative data’. It argues that the articles are a welcome contribution, because they help to move the debate beyond moralistic and polarised positions, to demonstrate instead what sociologists can achieve by ‘re-using’ qualitative data. The article argues for an investigative epistemology and investigative practices to guide qualitative data use and ‘re-use’, and suggests that this is particularly important in the current social research climate.


2020 ◽  
Vol 11 (4) ◽  
pp. 34-44
Author(s):  
Jahnette Wilson ◽  
Sam Brower ◽  
Teresa Edgar ◽  
Amber Thompson ◽  
Shea Culpepper

Accountability and rigor in teacher education have been the focus of recent policy initiatives. Thus, data use practices have become increasingly critical to informing program improvement. Educational researchers have established self-study as a research methodology that teacher educators can use intentionally to improve their practice. The purpose of the self-study described in this article was to examine the data use practices of one teacher preparation program in an effort to improve the program's capacity for using program data. The qualitative data gathered in this case study proved pivotal to the program's continuous improvement efforts; thus, the findings have implications for how institutional self-study and qualitative data can complement quantitative programmatic data in facilitating program improvement initiatives.


Author(s):  
Jahnette Wilson ◽  
Samuel R. Brower ◽  
Teresa Edgar ◽  
Amber Thompson ◽  
Shea Culpepper

Proponents of the evidence-based movement in education maintain that decisions around policy and practice should be grounded in data outcomes. However, insufficient research exists on data use in teacher education programs, as much of the research on data use is concentrated on K-12 settings. The purpose of this case study was to investigate the data use practices of an educator preparation program in order to facilitate program improvement efforts. The collective qualitative data described in this study were key to identifying continuous improvement areas within this educator preparation program. Therefore, this case study offers insight into how qualitative data can support and inform program improvement efforts.


Metabolites ◽  
2020 ◽  
Vol 10 (10) ◽  
pp. 416
Author(s):  
Gabriel Riquelme ◽  
Nicolás Zabalegui ◽  
Pablo Marchi ◽  
Christina M. Jones ◽  
María Eugenia Monge

Preprocessing data in a reproducible and robust way is one of the current challenges in untargeted metabolomics workflows. Data curation in liquid chromatography–mass spectrometry (LC–MS) involves the removal of biologically non-relevant features (retention time, m/z pairs) to retain only high-quality data for subsequent analysis and interpretation. The present work introduces TidyMS, a package for the Python programming language for preprocessing LC–MS data for quality control (QC) procedures in untargeted metabolomics workflows. It is a versatile strategy that can be customized or fit for purpose according to the specific metabolomics application. It allows performing quality control procedures to ensure accuracy and reliability in LC–MS measurements, and it allows preprocessing metabolomics data to obtain cleaned matrices for subsequent statistical analysis. The capabilities of the package are shown with pipelines for an LC–MS system suitability check, system conditioning, signal drift evaluation, and data curation. These applications were implemented to preprocess data corresponding to a new suite of candidate plasma reference materials developed by the National Institute of Standards and Technology (NIST; hypertriglyceridemic, diabetic, and African-American plasma pools) to be used in untargeted metabolomics studies in addition to NIST SRM 1950 Metabolites in Frozen Human Plasma. The package offers a rapid and reproducible workflow that can be used in an automated or semi-automated fashion, and it is an open and free tool available to all users.
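The QC-based data curation the abstract describes can be illustrated with a minimal sketch. This is not the TidyMS API; it is a generic pandas implementation of one common curation step, filtering out features whose relative standard deviation (RSD) across pooled-QC injections exceeds a threshold. The table values, the `filter_by_qc_rsd` helper, and the 20 % cutoff are all illustrative assumptions.

```python
import pandas as pd

# Toy feature table: rows are injections, columns are LC-MS features
# (each feature corresponds to a retention time, m/z pair); values are
# peak areas. "QC" rows are repeated injections of a pooled QC sample.
features = pd.DataFrame(
    {
        "F1": [100, 102, 98, 110, 95, 105],
        "F2": [50, 51, 49, 60, 45, 55],
        "F3": [200, 80, 350, 210, 190, 205],  # unstable across QC injections
    },
    index=["QC1", "QC2", "QC3", "S1", "S2", "S3"],
)

def filter_by_qc_rsd(table: pd.DataFrame, qc_rows, max_rsd: float = 0.2) -> pd.DataFrame:
    """Keep only features whose relative standard deviation (RSD) across
    the QC injections falls below max_rsd (20-30 % is a common cutoff)."""
    qc = table.loc[qc_rows]
    rsd = qc.std(ddof=1) / qc.mean()
    return table.loc[:, rsd < max_rsd]

curated = filter_by_qc_rsd(features, ["QC1", "QC2", "QC3"])
print(curated.columns.tolist())  # F3 is removed for exceeding the RSD threshold
```

Because the QC injections are replicates of the same pooled sample, a feature that varies widely across them reflects measurement instability rather than biology, which is why such features are dropped before statistical analysis.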


2017 ◽  
Vol 28 (10) ◽  
pp. 1640-1649 ◽  
Author(s):  
Roschelle L. Fritz ◽  
Roxanne Vandermause

This methods article is a reflection on the use of in-depth email interviewing in a qualitative descriptive study. The use of emailing to conduct interviews is thought to be an effective way to collect qualitative data. Building on current methodological literature in qualitative research regarding in-depth email interviewing, we move the conversation toward elicitation of quality data and management of multiple concurrent email interviews. Excerpts are shared from a field journal that was kept throughout one study, with commentary on developing insights. Valuable lessons learned include the importance of (a) logistics and timing related to the management of multiple concurrent email interviews, (b) language and eliciting the data, (c) constructing the email, and (d) processing text-based data and preparing transcripts. Qualitative researchers seeking deeply reflective answers and geographically diverse samples may wish to consider using in-depth email interviews.


2008 ◽  
Vol 25 (3) ◽  
pp. 208-227 ◽  
Author(s):  
Øyvind F. Standal ◽  
Ejgil Jespersen

The purpose of this study was to investigate the learning that takes place when people with disabilities interact in a rehabilitation context. Data were generated through in-depth interviews and close observations in a 2½-week rehabilitation program, in which the participants learned both wheelchair skills and adapted physical activities. The findings from the qualitative data analysis are discussed in the context of situated learning (Lave & Wenger, 1991; Wenger, 1998). The results indicate that peer learning extends beyond skills and techniques to include ways for the participants to make sense of their situations as wheelchair users. It was also found that the community of practice established among the participants represented a critical corrective to the instructions provided by rehabilitation professionals.


2013 ◽  
Vol 74 (2) ◽  
pp. 195-207 ◽  
Author(s):  
Jingfeng Xia ◽  
Ying Liu

This paper uses the Gene Expression Omnibus (GEO), a data repository in the biomedical sciences, to examine the usage patterns of open data repositories. It attempts to identify the degree of recognition of data reuse value and to understand how e-science has impacted large-scale scholarship. By analyzing 1,211 publications that cite GEO data to support their independent studies, it finds that free data can support a wealth of high-quality investigations, that the rate of open data use has kept growing over the years, and that scholars in different countries comply with data-sharing policies at different rates.


2015 ◽  
Vol 10 (1) ◽  
pp. 82-94 ◽  
Author(s):  
Tiffany Chao

Understanding the methods and processes implemented by data producers to generate research data is essential for fostering data reuse. Yet, producing the metadata that describes these methods remains a time-intensive activity that data producers do not readily undertake. In particular, researchers in the long tail of science often lack the financial support or tools for metadata generation, thereby limiting future access and reuse of data produced. The present study investigates research journal publications as a potential source for identifying descriptive metadata about methods for research data. Initial results indicate that journal articles provide rich descriptive content that can be sufficiently mapped to existing metadata standards with methods-related elements, resulting in a mapping of the data production process for a study. This research has implications for enhancing the generation of robust metadata to support the curation of research data for new inquiry and innovation.

