The Double Bind of Qualitative Comparative Analysis

2019
pp. 004912411988246
Author(s):
Vincent Arel-Bundock

Qualitative comparative analysis (QCA) is an influential methodological approach motivated by set theory and Boolean logic. QCA proponents have developed algorithms to analyze quantitative data, in a bid to uncover necessary and sufficient conditions where causal relationships are complex, conditional, or asymmetric. This article uses computer simulations to show that researchers in the QCA tradition face a vexing double bind. On the one hand, QCA algorithms often require large data sets to recover an accurate causal model, even if that model is relatively simple. On the other hand, as data sets increase in size, it becomes harder to guarantee data integrity, and QCA algorithms can be highly sensitive to measurement error, data entry mistakes, or misclassification.
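The set-theoretic idea behind QCA's search for sufficient conditions can be sketched in a few lines. The binary cases below are hypothetical, and this is a bare illustration of crisp-set sufficiency checking, not the minimization algorithms the article simulates:

```python
from itertools import product

# Hypothetical crisp-set data: each row is (A, B, outcome).
# In QCA's set-theoretic terms, a configuration of conditions is
# "sufficient" for the outcome if every case exhibiting that
# configuration also exhibits the outcome.
cases = [
    (1, 1, 1),
    (1, 0, 1),
    (0, 1, 0),
    (0, 0, 0),
    (1, 1, 1),
]

def sufficient_configurations(cases):
    """Return condition combinations whose observed cases all show the outcome."""
    result = []
    for a, b in product([0, 1], repeat=2):
        outcomes = [y for (x1, x2, y) in cases if (x1, x2) == (a, b)]
        if outcomes and all(outcomes):
            result.append((a, b))
    return result

print(sufficient_configurations(cases))  # → [(1, 0), (1, 1)]
```

Note how fragile the check is: flipping a single outcome value (a data entry mistake or misclassification) removes a configuration from the sufficient set, which is the sensitivity the article's simulations probe at scale.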

2016
Vol 46 (2)
pp. 242-251
Author(s):
Bear F. Braumoeller

Fuzzy-set qualitative comparative analysis (fsQCA) has become one of the most prominent methods in the social sciences for capturing causal complexity, especially for scholars with small- and medium-N data sets. This research note explores two key assumptions in fsQCA’s methodology for testing for necessary and sufficient conditions—the cumulation assumption and the triangular data assumption—and argues that, in combination, they produce a form of aggregation bias that has not been recognized in the fsQCA literature. It also offers a straightforward test to help researchers answer the question of whether their findings are plausibly the result of aggregation bias.
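The sufficiency tests the note scrutinizes rest on the standard fuzzy-set consistency measure, consistency(X ≤ Y) = Σ min(xᵢ, yᵢ) / Σ xᵢ, computed over case membership scores. A minimal sketch with hypothetical memberships (this illustrates the standard measure, not Braumoeller's proposed test):

```python
# Standard fuzzy-set consistency of "X is sufficient for Y":
# the degree to which cases' membership in X is matched or
# exceeded by their membership in Y. Values near 1 support
# a sufficiency claim.
def sufficiency_consistency(x, y):
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

x = [0.9, 0.7, 0.4, 0.2]  # hypothetical memberships in the condition set
y = [1.0, 0.8, 0.3, 0.1]  # hypothetical memberships in the outcome set

print(round(sufficiency_consistency(x, y), 3))  # → 0.909
```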


Author(s):  
Martyna Daria Swiatczak

Abstract This study assesses the extent to which the two main Configurational Comparative Methods (CCMs), i.e. Qualitative Comparative Analysis (QCA) and Coincidence Analysis (CNA), produce different models. It further explains how this non-identity is due to the different algorithms upon which the two methods are based, namely QCA’s Quine–McCluskey algorithm and the CNA algorithm. I offer an overview of the fundamental differences between QCA and CNA and demonstrate both underlying algorithms on three data sets of ascending proximity to real-world data. Subsequent simulation studies in scenarios of varying sample sizes and degrees of noise in the data show high overall ratios of non-identity between the QCA parsimonious solution and the CNA atomic solution for varying analytical choices, i.e. different consistency and coverage threshold values and ways to derive QCA’s parsimonious solution. Clarity on the contrasts between the two methods should enable scholars to make more informed decisions about their methodological approaches, enhance their understanding of what happens behind the results generated by the software packages, and better navigate the interpretation of results. Clarity on the non-identity between the underlying algorithms and its consequences for the results should provide a basis for a methodological discussion about which method, and which variants thereof, are more successful in deriving which search target.


Author(s):  
Demissie Damite Degato

The traditional approach to innovation assessment has mainly focused on economic outcomes and failed to capture the ecological and social dimensions of sustainability. Because attention has concentrated on one specific kind of innovation (technological innovation), there is little empirical work on whether combining different kinds of innovation leads to progress in social-ecological sustainability in developing countries. A sustainability orientation in the assessment of innovation performance is increasingly important for achieving a successful transformation towards sustainability. The research question of this study is under what condition, or combination of conditions, intervention for innovation reconciles the trade-offs between socioeconomic and ecological performance and thus improves progress towards sustainability in poor countries. Combining concepts and methods from the literature on strategic corporate social responsibility (CSR), value chain upgrading, sustainability, and technological capability, this study identifies different mechanisms and conditions for building innovation capacity and then empirically evaluates the relationship between the degree of innovation capacity and progress towards social-ecological sustainability, drawing on four cases from Ethiopia. The data for this study were collected using key informant interviews, focus group discussions, and a biodiversity and innovation scorecard questionnaire. Mixed methods combining comprehensive fuzzy evaluation, a biodiversity scorecard, and qualitative comparative analysis are used for the analysis. The study found that combining value chain innovation and green governance innovation with either technological upgrading or innovation platform learning is a sufficient condition for achieving social-ecological sustainability. We also found that innovation in green governance and in the value chain are necessary conditions for sustainability.
By developing and applying a fuzzy comprehensive evaluation model for measuring innovation capacity and fuzzy-set qualitative comparative analysis for identifying necessary and sufficient conditions for sustainability, this study makes an important methodological contribution to the existing literature.


2021
pp. 59-68
Author(s):
Béla Cehla
Ferenc Ede Búzás
Sándor Kiss
István Szűcs
László Posta

Technological development makes it possible to simplify and accelerate decision-making processes by adequately processing and evaluating large volumes of data. Sub-data obtained from large data sets play a very important practical role in asset valuation, in forecasting and valuing delineated or difficult-to-map areas, and in portfolio management. Land valuation is a separate segment within asset valuation, and it requires a specific methodological approach from evaluators. In this study, the authors compared transaction data for arable land with the value of other land use categories. Based on empirical assessments, the authors developed proposals for the fast and cost-effective determination of the value of land use categories other than arable land, mainly meadows and pastures.


2021
pp. 1-30
Author(s):  
Matthias Duller

Abstract Using Qualitative Comparative Analysis, this article presents a systematic comparison of differences in the institutional success of sociology in 25 European countries during the academic expansion from 1945 until the late 1960s. Combining context-sensitive national histories of sociology, concept formation, and formal analyses of necessary and sufficient conditions, the article searches for historical explanations for both successful and inhibited processes of the institutionalization of sociology. Concretely, it assesses the interplay of political regime types, the continuous presence of sociological prewar traditions, political Catholicism, and the effects of sociological communities in neighboring countries and how their various combinations are related to more or less well-established sociologies. The results can help explain adverse effects under democratic conditions as well as supportive factors under nondemocratic conditions.


Sensors
2020
Vol 20 (1)
pp. 322
Author(s):
Faraz Malik Awan
Yasir Saleem
Roberto Minerva
Noel Crespi

Machine/Deep Learning (ML/DL) techniques have been applied to large data sets in order to extract relevant information and to make predictions. The performance and the outcomes of different ML/DL algorithms may vary depending upon the data sets being used, as well as on the suitability of the algorithms to the data and the application domain under consideration. Hence, determining which ML/DL algorithm is most suitable for a specific application domain and its related data sets would be a key advantage. To respond to this need, a comparative analysis of well-known ML/DL techniques, including Multilayer Perceptron, K-Nearest Neighbors, Decision Tree, Random Forest, and Voting Classifier (an ensemble learning approach), was conducted for the prediction of parking space availability. This comparison utilized Santander’s parking data set, initiated while working on the H2020 WISE-IoT project. The data set was used to evaluate the considered algorithms and to determine the one offering the best predictions. The results of this analysis show that, regardless of the data set size, less complex algorithms such as Decision Tree, Random Forest, and KNN outperform complex algorithms such as Multilayer Perceptron in terms of prediction accuracy, while providing comparable information for the prediction of parking space availability. In addition, we provide Top-K parking space recommendations on the basis of the distance between a vehicle's current position and free parking spots.
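The comparison workflow the abstract describes can be sketched with scikit-learn. This is an illustrative sketch, not the paper's pipeline: the synthetic data set stands in for the Santander parking data, and the model settings are defaults rather than the authors' tuned configurations:

```python
# Compare several classifiers on one data set by held-out accuracy,
# mirroring the fit-and-score comparison described in the abstract.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the parking availability data set.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "MLP": MLPClassifier(max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.3f}")
```

On real data, the same loop would be wrapped in cross-validation rather than a single split before drawing conclusions about which algorithm is most suitable.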


2019
Vol 10 (1)
pp. 12-34
Author(s):  
Diego Ceccobelli

This article presents and adopts a new definition of the popularization of political communication, defined as a strategic communicative action through which political actors try to create new connections with citizens who do not yet know, follow, or support them, and to emotionally strengthen the political bond with their current sympathizers. A comparative analysis of the Facebook pages of the main political leaders of 31 countries then shows that the popularization of political communication is a relevant phenomenon on Facebook, while a qualitative comparative analysis (QCA) indicates that the presence of a presidential system, a highly digitalized media system, and a high level of trust in political institutions are three sufficient conditions for “pop” communication on Facebook. Finally, the article identifies and discusses its main properties and development under the current hybrid media system.


Author(s):  
A. Sheik Abdullah
R. Suganya
S. Selvakumar
S. Rajaram

Classification is one of the most widely used data analysis techniques, with applications across many domains. Classification models predict categorical class labels, whereas prediction models estimate continuous values. Clustering, by contrast, deals with grouping variables based upon similar characteristics. Classification models are evaluated by comparing their predicted values to the known target values in a set of test data. Data classification has many applications in business modeling, marketing analysis, credit risk analysis, biomedical engineering, and drug response modeling. The extension of data analysis and classification provides insight into big data, with an exploration of processing and managing large data sets. This chapter deals with various techniques and methodologies that address the classification problem in the data analysis process and their methodological impact on big data.
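The evaluation idea described above, comparing a model's predictions against known labels in held-out test data, can be shown in miniature. The 1-nearest-neighbour rule and the tiny data set here are hypothetical stand-ins for any classifier:

```python
# A classifier is evaluated by predicting labels for test cases
# whose true labels are known, then computing the fraction correct.
def nearest_neighbour_predict(train, x):
    """Predict the label of x from the closest training point."""
    closest = min(train, key=lambda pair: abs(pair[0] - x))
    return closest[1]

train = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]
test = [(1.5, "low"), (8.5, "high"), (3.0, "low")]

correct = sum(nearest_neighbour_predict(train, x) == y for x, y in test)
accuracy = correct / len(test)
print(accuracy)  # → 1.0
```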

