Systematic reviews in requirements engineering: A tertiary study

Author(s):  
Muneera Bano ◽  
Didar Zowghi ◽  
Naveed Ikram

2018 ◽
Vol 95 ◽  
pp. 62-74 ◽  
Author(s):  
David Budgen ◽  
Pearl Brereton ◽  
Sarah Drummond ◽  
Nikki Williams

2019 ◽  
Author(s):  
Bruno Cartaxo ◽  
Gustavo Pinto ◽  
Fernando Kamei ◽  
Danilo Monteiro ◽  
Fabio Queda ◽  
...  

Context: One of the goals of Evidence-Based Software Engineering is to leverage evidence from research to practice. However, some studies suggest this goal has not been fully accomplished. Objective: This paper proposes a strategy to assess how well systematic reviews cover practitioners' issues in software engineering. Method: We selected 24 systematic reviews identified by a comprehensive tertiary study. Using the search strings of the selected systematic reviews, we queried the most relevant practitioners' issues on five active Stack Exchange communities, a professional, high-quality Question & Answer platform. After examining more than 1,800 issues, we investigated how the findings of the selected systematic reviews could help to solve (i.e., cover) practitioners' issues. Results: After excluding false positives and duplicates, a total of 424 issues were considered related to the selected systematic reviews. This number corresponds to 1.75% of the 26,687 most relevant issues on the five Stack Exchange communities. Among these 424 issues, the systematic reviews can successfully cover 14.1% (60) of them. Based on a qualitative analysis, we identified 45 recurrent issues spread across many software engineering areas. The most demanded topic is agile software development, with 15 recurrent issues identified and 127 practitioners' issues overall. Conclusions: An overall coverage rate of 14.1% reveals a good opportunity for conducting systematic reviews in software engineering that fill the gap of uncovered issues. We also observed practitioners explicitly demanding scientific empirical evidence that is rich in context and oriented to specific target audiences. Finally, we provide guidelines for researchers who want to conduct systematic reviews more connected with software engineering practice.
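The querying step described in the Method can be made concrete with a minimal sketch against the public Stack Exchange API. This is an illustrative assumption, not the authors' actual tooling: the search string, the chosen sites, and the `fetch_relevant_issues` helper are all hypothetical, while the `/2.3/search/advanced` endpoint and its `q`, `site`, `sort`, and `order` parameters are the real documented API.

```python
import requests

# Sketch of the paper's querying step: send a systematic review's search
# string to the Stack Exchange API and collect the most relevant questions
# ("issues"). Endpoint and parameters are real; everything else is assumed.
API = "https://api.stackexchange.com/2.3/search/advanced"

def fetch_relevant_issues(search_string, site, pagesize=100):
    """Return the most relevant questions on `site` matching `search_string`."""
    params = {
        "q": search_string,   # free-form query, e.g. a review's search string
        "site": site,         # e.g. "softwareengineering" or "pm"
        "sort": "relevance",
        "order": "desc",
        "pagesize": pagesize,
    }
    resp = requests.get(API, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("items", [])

# Hypothetical usage: one search string, two example communities.
for site in ["softwareengineering", "pm"]:
    for q in fetch_relevant_issues('"agile" "software development"', site, pagesize=10):
        print(site, q["score"], q["title"])
```

The returned questions would still need the manual screening the paper describes (excluding false positives and duplicates) before any coverage analysis.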


2019 ◽  
Vol 22 (1) ◽  
Author(s):  
Leonardo Villalobos-Arias ◽  
Christian Quesada-López ◽  
Alexandra Martinez ◽  
Marcelo Jenkins

Context: Model-based testing is one of the approaches most studied by secondary studies in the area of software testing. Aggregating knowledge from secondary studies on model-based testing can be useful for both academia and industry. Objective: The goal of this study is to characterize secondary studies in model-based testing in terms of the areas, tools, and challenges they have investigated. Method: We conducted a tertiary study following the guidelines for systematic mapping studies. Our mapping included 22 secondary studies, of which 12 were literature surveys and 10 systematic reviews, over the period 1996–2016. Results: A hierarchy of model-based testing areas and subareas was built based on existing taxonomies as well as data that emerged from the secondary studies themselves. This hierarchy was then used to classify studies, tools, and challenges, and their tendencies, in a unified classification scheme. We found that the two most studied areas are UML models and transition-based notations, both modeling paradigms. Regarding the tendencies of areas over time, we found two areas with constant activity: test objectives and model specification. With respect to tools, we found only five studies that compared and classified model-based testing tools. These tools have been classified into common dimensions that mainly refer to the model type and the phases of the model-based testing process they support. We reclassified all the tools into our proposed hierarchy of model-based testing areas and found that most tools were reported within the modeling paradigm area. With regard to tendencies of tools, we found that tools for testing the functional behavior of software have prevailed over time. Another finding was the shift from tools that support the generation of abstract tests to those that support the generation of executable tests. To analyze challenges, we used six categories that emerged from the data through a grounded analysis: efficacy; availability; complexity; professional skills; investment, cost & effort; and evaluation & empirical evidence. We found that most challenges were related to availability. We also classified the challenges according to our hierarchy of model-based testing areas and found that most fell in the model specification area. With respect to tendencies in challenges, we found they have shifted from the complexity of the approaches to the lack of approaches for specific software domains. Conclusions: Only a few systematic reviews on model-based testing could be found; therefore, some areas still lack secondary studies, particularly test execution aspects, language types, model dynamics, and some modeling paradigms and generation methods. We thus encourage the community to perform further systematic reviews and mapping studies, following known protocols and reporting procedures, in order to increase the quality and quantity of empirical studies in model-based testing.
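To make the idea of a unified classification scheme concrete, here is a minimal sketch of classifying items against a hierarchy of areas and tallying per-area counts. All area names, keywords, and items below are illustrative assumptions; they are not the study's actual taxonomy or data.

```python
from collections import Counter

# Assumed hierarchy of model-based testing areas mapped to trigger keywords
# (names are placeholders, not the paper's exact taxonomy).
HIERARCHY = {
    "modeling paradigm":   ["uml", "transition-based", "state machine"],
    "model specification": ["specification", "notation", "language"],
    "test objectives":     ["coverage", "objective", "criteria"],
    "test execution":      ["executable", "runtime", "execution"],
}

def classify(text):
    """Return every area whose keywords appear in the item's description."""
    text = text.lower()
    return [area for area, keywords in HIERARCHY.items()
            if any(k in text for k in keywords)]

# Hypothetical items: a tool description and a challenge statement.
items = [
    "Tool generating executable tests from UML state machine models",
    "Challenge: no common specification language for timed models",
]
tally = Counter(area for item in items for area in classify(item))
print(tally.most_common())  # which areas concentrate the most items
```

In the actual study the classification was done qualitatively by the researchers; a keyword tagger like this would at best pre-sort candidates for manual review.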


ASHA Leader ◽  
2013 ◽  
Vol 18 (3) ◽  
pp. 60-60

Nominate Clinical Questions for Systematic Reviews


2020 ◽  
Vol 228 (1) ◽  
pp. 1-2
Author(s):  
Michael Bošnjak ◽  
Nadine Wedderhoff

Abstract. This editorial gives a brief introduction to the six articles included in the fourth “Hotspots in Psychology” issue of the Zeitschrift für Psychologie. The format is devoted to systematic reviews and meta-analyses in research-active fields that have generated a considerable number of primary studies. The common denominator is the research synthesis nature of the included articles, rather than a specific psychological topic or theme that all articles must address. Moreover, methodological advances in research synthesis methods relevant to any subfield of psychology are also addressed. Comprehensive supplemental material to the articles can be found in PsychArchives (https://www.psycharchives.org).


2011 ◽  
Author(s):  
Elizabeth O'Connor ◽  
Evelyn Whitlock ◽  
Bonnie Spring