Collecting and Analyzing Qualitative Data

Author(s):  
Brent Wolff ◽  
Frank Mahoney ◽  
Anna Leena Lohiniva ◽  
Melissa Corkum

Qualitative research provides an adaptable, open-ended, rigorous method to explore local perceptions of an issue. Qualitative approaches are effective at revealing the subjective logic motivating behavior. They are particularly appropriate for research questions that are exploratory in nature or involve questions of meaning rather than magnitude or frequency. Key advantages of qualitative approaches include speed, flexibility, and high internal validity resulting from an emphasis on rapport building and the ability to probe beneath the surface of initial responses. Given the time-intensive nature of qualitative analysis, samples tend to be small and purposively selected to ensure that every interview counts. Qualitative studies can be done independently or embedded in mixed-method designs. Qualitative data analysis depends on rigorous reading and rereading of texts, ideally with more than one analyst to confirm interpretations. Computer software is useful for analyzing large data sets, but manual coding is often sufficient for rapid assessments in field settings.

2019 ◽  
Vol 18 ◽  
pp. 160940691988069 ◽  
Author(s):  
Rebecca L. Brower ◽  
Tamara Bertrand Jones ◽  
La’Tara Osborne-Lampkin ◽  
Shouping Hu ◽  
Toby J. Park-Gaghan

Big qualitative data (Big Qual), or research involving large qualitative data sets, has introduced many newly evolving conventions that have begun to change the fundamental nature of some qualitative research. In this methodological essay, we first distinguish big data from Big Qual. We define Big Qual as data sets containing either primary or secondary qualitative data from at least 100 participants, analyzed by teams of researchers, often funded by a government agency or private foundation, and conducted either as a stand-alone project or in conjunction with a large quantitative study. We then present a broad debate about the extent to which Big Qual may be transforming some forms of qualitative inquiry. We pose three questions that examine the extent to which large qualitative data sets offer both constraints and opportunities for innovation related to funded research, sampling strategies, team-based analysis, and computer-assisted qualitative data analysis software (CAQDAS). The debate is framed by four related trends to which we attribute the rise of Big Qual: the rise of big quantitative data, the growing legitimacy of qualitative and mixed methods work in the research community, technological advances in CAQDAS, and the willingness of government and private foundations to fund large qualitative projects.


Author(s):  
LAZHAR LABIOD ◽  
NISTOR GROZAVU ◽  
YOUNÈS BENNANI

This paper introduces a relational topological map model dedicated to multidimensional categorical (qualitative) data arising in the form of a binary matrix or a sum of binary matrices. The approach is based on the principle of Kohonen's model (conservation of topological order) and uses the Relational Analysis formalism, maximizing a modified Condorcet criterion. The proposed method extends the classical Relational Analysis approach by adding a neighborhood constraint to the Condorcet criterion. We propose a hybrid algorithm that scales linearly with large data sets, provides natural cluster identification, and allows visualization of the clustering result on a two-dimensional grid while preserving the a priori topological order of the data. The proposed approach, called Relational Topological Map (RTM), was validated on several databases, and the experimental results showed very promising performance.
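
The abstract describes the algorithm only at a high level. As a rough sketch of the idea, the following toy implementation computes a pairwise Condorcet (agreements minus disagreements) matrix from binary data and greedily assigns objects to grid cells under a Gaussian neighborhood kernel; the function names, update schedule, and parameter choices are assumptions for illustration, not the authors' exact RTM algorithm.

```python
import numpy as np

def condorcet_matrix(B):
    """Pairwise Condorcet scores for binary data B (n objects x p attributes):
    number of attribute agreements minus number of disagreements."""
    agree = B @ B.T + (1 - B) @ (1 - B).T
    return 2 * agree - B.shape[1]

def grid_kernel(rows, cols, sigma=1.0):
    """Gaussian neighborhood kernel between the cells of a rows x cols grid."""
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)])
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def rtm_sketch(B, rows=5, cols=5, n_iter=20, sigma=1.0, seed=0):
    """Greedily assign each object to a grid cell so that neighborhood-weighted
    Condorcet agreement with the other objects is maximized."""
    rng = np.random.default_rng(seed)
    n = B.shape[0]
    C = condorcet_matrix(B)
    K = grid_kernel(rows, cols, sigma)
    assign = rng.integers(0, rows * cols, size=n)
    for _ in range(n_iter):
        for i in range(n):
            others = np.arange(n) != i
            # score of placing object i in each cell, given current assignments
            scores = K[:, assign[others]] @ C[i, others]
            assign[i] = int(np.argmax(scores))
    return assign  # cell index on the 2D grid for each object

# Example: 200 objects described by 30 binary attributes.
B = np.random.default_rng(1).integers(0, 2, size=(200, 30))
cells = rtm_sketch(B)
```

Because the criterion is smoothed by the grid kernel, objects that agree on many attributes land in the same or neighboring cells, which is what preserves the topological order on the map.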


2016 ◽  
Vol 19 (1) ◽  
pp. 18-27 ◽  
Author(s):  
Nicole D. Osier ◽  
Christopher C. Imes ◽  
Heba Khalil ◽  
Jamie Zelazny ◽  
Ann E. Johansson ◽  
...  

Omics approaches, including genomics, transcriptomics, proteomics, epigenomics, microbiomics, and metabolomics, generate large data sets. Once they have been used to address initial study aims, these large data sets are extremely valuable to the greater research community for ancillary investigations. Repurposing available omics data sets provides data to address research questions, generate and test hypotheses, replicate findings, and conduct mega-analyses. Many well-characterized, longitudinal, epidemiological studies collected extensive phenotype data related to symptom occurrence and severity. Although the primary phenotype of interest in many of these studies was not symptom related, symptom data were collected to better characterize that phenotype. A search for symptom data (i.e., cognitive impairment, fatigue, gastrointestinal distress/nausea, sleep, and pain) in the Database of Genotypes and Phenotypes (dbGaP) revealed many studies that collected both symptom and omics data. Nurse scientists thus have a real opportunity to examine symptom data over time from thousands of individuals and to use omics data to identify key biological underpinnings that account for the development and severity of symptoms, all without recruiting participants or generating new data. The purpose of this article is to introduce the reader to resources that provide omics data to the research community for repurposing, provide guidance on using these databases, and encourage the use of these data to move symptom science forward.
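
As a practical starting point, dbGaP is searchable programmatically through NCBI's Entrez E-utilities (db=gap). The sketch below is a minimal example of counting dbGaP records that match a symptom term; the query term and result handling are illustrative, and the E-utilities documentation should be consulted for field qualifiers and rate limits.

```python
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_dbgap(term, retmax=20):
    """Return the hit count and record IDs for a dbGaP search term."""
    params = urllib.parse.urlencode({
        "db": "gap",        # the dbGaP database in Entrez
        "term": term,
        "retmax": retmax,
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
        result = json.load(resp)["esearchresult"]
    return int(result["count"]), result["idlist"]

count, ids = search_dbgap("fatigue")
print(f"{count} dbGaP records mention 'fatigue'; first IDs: {ids[:5]}")
```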


KWALON ◽  
2020 ◽  
Vol 25 (2) ◽  
Author(s):  
Abdessamad Bouabid

From data panic to Moroccan panic: A qualitative analysis of large data collections using codes, code groups, and networks in Atlas.ti

Large qualitative data collections can cause 'data panic' among qualitative researchers when they reach the analysis stage. Researchers often find it difficult to get a grip on such large data sets and to find a method of analysis that is both systematic and pragmatic. In this article, I describe how I used a combined deductive and inductive method of analysis to get a grip on a large qualitative data collection (consisting of different formats) and how qualitative data analysis software facilitated this. The data reduction method consists of three stages: (1) deductive and inductive coding in Atlas.ti; (2) pattern coding into code groups and networks in Atlas.ti; and (3) reporting on the findings by transforming the networks into written text. This method is useful for researchers from all disciplines who want to analyze large qualitative data collections systematically but, at the same time, do not want to drown in rigid methodological protocols that neutralize the creativity, reflexivity, and flexibility of the researcher.
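
Atlas.ti itself is interactive, but the three-stage reduction can be pictured in code once coded segments are exported. The following sketch uses invented codes and quotations purely for illustration; the data structures (code groups as sets, the network as a nested mapping) are assumptions, not an Atlas.ti export format.

```python
from collections import defaultdict

# Stage 1: deductively/inductively coded segments (normally done in Atlas.ti).
segments = [
    ("interview_01", "moral panic", "...they talk about us as a threat..."),
    ("newspaper_03", "folk devil", "...the usual suspects again..."),
    ("interview_02", "moral panic", "...everyone suddenly had an opinion..."),
]

# Stage 2: pattern coding — group related codes into code groups.
code_groups = {
    "media representations": {"moral panic", "folk devil"},
}

# Build a simple network: code group -> code -> supporting quotations.
network = defaultdict(lambda: defaultdict(list))
for doc, code, quote in segments:
    for group, codes in code_groups.items():
        if code in codes:
            network[group][code].append((doc, quote))

# Stage 3: report — walk the network and turn it into written text.
for group, codes in network.items():
    print(f"Theme: {group}")
    for code, quotes in codes.items():
        print(f"  {code}: {len(quotes)} quotation(s), e.g. {quotes[0][1]}")
```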


2018 ◽  
Author(s):  
Peter Branney ◽  
Kate Reid ◽  
Nollaig Frost ◽  
Susan Coan ◽  
Amy Mathieson ◽  
...  

To date, open science, and particularly open data, in psychology has focused on quantitative research. This paper explores ethical and practical issues encountered by UK-based psychologists utilising open qualitative datasets. Semi-structured telephone interviews with eight qualitative psychologists were explored using a framework analysis. From the findings, we offer a context-consent meta-framework as a resource to help in the design of studies sharing their data and/or studies using open data. We recommend that 'secondary' studies conduct archaeologies of context and consent to examine whether the available data are suitable for their research questions. In conclusion, this research is the first we know of to study 'doing' (or not doing) open science; it could be repeated to develop a longitudinal picture or complemented with additional approaches, such as observational studies of how context and consent are negotiated in pre-registered studies and open data.


Field Methods ◽  
2019 ◽  
Vol 31 (2) ◽  
pp. 116-130 ◽  
Author(s):  
M. Ariel Cascio ◽  
Eunlye Lee ◽  
Nicole Vaudrin ◽  
Darcy A. Freedman

In this article, we discuss methodological opportunities related to using a team-based approach for iterative-inductive analysis of qualitative data involving detailed open coding of semistructured interviews and focus groups. Iterative-inductive methods generate rich thematic analyses useful in sociology, anthropology, public health, and many other applied fields. A team-based approach to analyzing qualitative data increases confidence in dependability and trustworthiness, facilitates analysis of large data sets, and supports collaborative and participatory research by including diverse stakeholders in the analytic process. However, reaching consensus among multiple coders can be difficult. We report on one approach to creating consensus when open coding within an iterative-inductive analytical strategy. The strategy described may be used in a variety of settings to foster efficient and credible analysis of large qualitative data sets and is particularly useful in applied research settings where rapid results are often required.
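
The consensus procedure itself is qualitative, but teams often pair it with an intercoder agreement statistic to track convergence. A minimal sketch using Cohen's kappa (scikit-learn assumed; codes and segments are illustrative):

```python
from sklearn.metrics import cohen_kappa_score

# Two coders' open codes for the same six interview segments.
coder_a = ["access", "cost", "access", "trust", "cost", "access"]
coder_b = ["access", "cost", "trust", "trust", "cost", "access"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # ~0.75 here
# Discuss the disagreements, refine code definitions, and re-code
# until the team reaches consensus.
```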


Field Methods ◽  
2018 ◽  
Vol 30 (3) ◽  
pp. 191-207 ◽  
Author(s):  
Andrew Lowe ◽  
Anthony C. Norris ◽  
A. Jane Farris ◽  
Duncan R. Babbage

An important aspect of qualitative research is reaching saturation—loosely, a point at which observing more data will not lead to discovery of more information related to the research questions. However, there has been no validated means of objectively establishing saturation. This article proposes a novel quantitative approach to measuring thematic saturation based on a sound statistical model. The model is validated on two data sets from different qualitative research projects involving interviews, focus groups, and literature surveys. The proposed model provides consistent estimates of saturation across data sets, within branches of data sets, and over the course of a research project. The model can be used for both quantifying saturation and estimating the number of observations required to achieve a specified level of saturation.
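
The abstract does not disclose the underlying statistical model, so the following is a generic stand-in rather than the authors' method: it tracks the cumulative number of distinct themes across successive observations and fits a simple exponential saturation curve to estimate the total theme pool (SciPy assumed; all data are illustrative).

```python
import numpy as np
from scipy.optimize import curve_fit

# Themes observed in each successive interview (illustrative data).
observations = [
    {"cost", "access"}, {"access", "trust"}, {"cost"},
    {"trust", "stigma"}, {"access"}, {"stigma"}, {"access", "cost"},
]

seen, cumulative = set(), []
for obs in observations:
    seen |= obs
    cumulative.append(len(seen))

def saturating(n, total, rate):
    """Expected number of distinct themes after n observations."""
    return total * (1 - np.exp(-rate * n))

n = np.arange(1, len(cumulative) + 1)
(total, rate), _ = curve_fit(saturating, n, cumulative,
                             p0=[max(cumulative), 0.5])
print(f"Estimated theme pool: {total:.1f}; "
      f"saturation so far: {cumulative[-1] / total:.0%}")
```

The same fitted curve can be inverted to estimate how many further observations would be needed to reach a specified level of saturation.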


2020 ◽  
pp. 109821401989378
Author(s):  
Traci H. Abraham ◽  
Erin P. Finley ◽  
Karen L. Drummond ◽  
Elizabeth K. Haro ◽  
Alison B. Hamilton ◽  
...  

This article outlines a three-phase, team-based approach used to analyze qualitative data from a nationwide needs assessment of access to Veterans Health Administration services for rural-dwelling veterans. The method described here was used to establish the trustworthiness of findings from analysis of a large qualitative data set without the use of analytic software. In Phase 1, we used templates to summarize content from 205 individual semistructured interviews. During Phase 2, a matrix display was constructed for each of 10 project sites to synthesize and display template content by participant, domain, and category. In the final phase, a summary tabulation technique developed by a member of our team facilitated trustworthy observations regarding patterns and variation in the large volume of qualitative data produced by the interviews. This accessible and efficient team-based strategy was feasible within the constraints of our project while preserving the richness of the qualitative data.
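
As an illustration of Phase 2, template summaries can be pivoted into a participant-by-domain matrix display; the sketch below uses pandas and invented field names, since the project's actual templates are not described here.

```python
import pandas as pd

# Phase 1 output: one summary row per participant, domain, and category.
summaries = pd.DataFrame([
    {"site": "site_01", "participant": "P01", "domain": "travel distance",
     "category": "barrier", "summary": "Drives 2+ hours to nearest clinic."},
    {"site": "site_01", "participant": "P02", "domain": "telehealth",
     "category": "facilitator", "summary": "Video visits reduced missed work."},
    {"site": "site_01", "participant": "P01", "domain": "telehealth",
     "category": "barrier", "summary": "No reliable broadband at home."},
])

# Phase 2: matrix display of participants x domains for one site.
matrix = summaries.pivot_table(
    index="participant", columns="domain", values="summary",
    aggfunc=lambda texts: " / ".join(texts),
)
print(matrix)

# Phase 3 (summary tabulation): count category mentions per domain to
# surface patterns and variation across the site.
print(summaries.groupby(["domain", "category"]).size())
```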


Author(s):  
John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper is concerned with comparing methods of electron energy-loss spectroscopy (EELS) quantitative analysis in the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10 at.% Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra offset in energy by 1 eV were recorded and stored at each pixel of the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing, the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using methods and software described in [1].
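
The two processing routes lend themselves to a compact sketch. The following assumes the geometry stated above (an 80x80 grid, 1024 channels per spectrum, a 1 eV = 20-channel offset) and simulates the counts, since the original data and the software of [1] are not available here.

```python
import numpy as np

CHANNELS_PER_EV = 20
OFFSET = 1 * CHANNELS_PER_EV  # 1 eV offset between the two acquisitions

# spec_a, spec_b: (80, 80, 1024) arrays of counts (simulated here).
rng = np.random.default_rng(0)
spec_a = rng.poisson(100, size=(80, 80, 1024)).astype(float)
spec_b = rng.poisson(100, size=(80, 80, 1024)).astype(float)

# Route 1: subtract the offset spectra to form an artifact-corrected
# difference spectrum (fixed-pattern detector artifacts cancel; real
# spectral features survive as bipolar difference signals).
difference = spec_a - spec_b

# Route 2: numerically remove the energy offset, then add, to recover a
# conventional spectrum with improved counting statistics.
aligned_b = np.roll(spec_b, -OFFSET, axis=-1)
normal = (spec_a + aligned_b)[..., :-OFFSET]  # drop wrapped-around channels

print(difference.shape, normal.shape)
```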

