Software development in Austria: results of an empirical study among small and very small enterprises
Author(s):  
C. Hofer


2014 ◽  
pp. 1363-1384 ◽  
Author(s):  
Mohammad Zarour ◽  
Alain Abran ◽  
Jean-Marc Desharnais

Software organizations have struggled for decades to improve the quality of their products by improving their software development processes. Designing an improvement program for a software development process is a demanding and complex task that consists of two main processes: assessment and improvement. A successful improvement process requires a successful assessment first; failing to properly assess the organization's software development process can lead to unsatisfactory results. Although very small enterprises (VSEs) have several attractive characteristics, such as flexibility and ease of communication, initiating an assessment and improvement effort based on well-known Software Process Improvement (SPI) models such as Capability Maturity Model Integration (CMMI) and ISO/IEC 15504 is particularly challenging for such VSEs. Accordingly, researchers and practitioners have designed a few assessment methods to meet the needs of VSE organizations initiating an SPI process. This chapter discusses the assessment and improvement process in VSEs: we first examine VSE characteristics and problems; next, we discuss the assessment methods and standards designed to fit the needs of such organizations, and how to compare them; finally, we present future research directions in this context.


2017 ◽  
Vol 27 (09n10) ◽  
pp. 1507-1527
Author(s):  
Judith F. Islam ◽  
Manishankar Mondal ◽  
Chanchal K. Roy ◽  
Kevin A. Schneider

Code cloning is a recurrent operation in everyday software development. Whether it is good or bad practice has been debated by researchers and developers for decades. In this paper, we conduct a comparative study of bug-proneness in clone code and non-clone code by analyzing commit logs. According to our inspection of thousands of revisions of seven diverse subject systems, the percentage of files changed by bug-fix commits is significantly higher in clone code than in non-clone code. We perform a Mann–Whitney–Wilcoxon (MWW) test to establish the statistical significance of our findings. In addition, severe bugs are more likely to occur in clone code than in non-clone code, so bug-fixing changes affecting clone code should be considered with particular care. Finally, our manual investigation shows that clone code containing if-conditions and if–else blocks has a high risk of containing severe bugs; changes to such clone fragments should be made carefully during software maintenance. Overall, our findings indicate that clone code is more bug-prone than non-clone code.
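The Mann–Whitney–Wilcoxon test the authors apply can be sketched in pure Python using the normal approximation. The two samples below are invented for illustration only; they stand in for per-system percentages of files changed by bug-fix commits in clone vs. non-clone code:

```python
import math
from itertools import chain

def mann_whitney_u(xs, ys):
    """Two-sided Mann-Whitney-Wilcoxon test (normal approximation).

    xs and ys are two independent samples, e.g. bug-fix change ratios
    measured in clone code vs. non-clone code. Returns (U, p_value).
    """
    # Pool both samples, remembering which group each value came from.
    combined = sorted(chain(((v, 0) for v in xs), ((v, 1) for v in ys)))
    n = len(combined)
    rank_sum_x = 0.0
    i = 0
    while i < n:
        # Find the run of tied values starting at i and give each the
        # average of the 1-based ranks i+1 .. j.
        j = i
        while j < n and combined[j][0] == combined[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2
        for k in range(i, j):
            if combined[k][1] == 0:
                rank_sum_x += avg_rank
        i = j
    n1, n2 = len(xs), len(ys)
    u1 = rank_sum_x - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)          # smaller of the two U statistics
    mu = n1 * n2 / 2                   # mean of U under the null hypothesis
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    # Two-sided p-value from the standard normal distribution.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u, p
```

A small p-value (conventionally below 0.05) indicates the difference between the two groups is unlikely to be due to chance. For real analyses, `scipy.stats.mannwhitneyu` additionally applies tie corrections and can compute exact p-values for small samples.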


2017 ◽  
Vol 66 (3) ◽  
pp. 806-824 ◽  
Author(s):  
Tse-Hsun Chen ◽  
Stephen W. Thomas ◽  
Hadi Hemmati ◽  
Meiyappan Nagappan ◽  
Ahmed E. Hassan

2013 ◽  
Vol 2013 ◽  
pp. 1-21 ◽  
Author(s):  
Mahmoud O. Elish ◽  
Tarek Helmy ◽  
Muhammad Imtiaz Hussain

Accurate estimation of software development effort is essential for effective management and control of software development projects. Many software effort estimation methods have been proposed in the literature, including computational intelligence models. However, none of the existing models has proved suitable under all circumstances; their performance varies from one dataset to another. The goal of an ensemble model is to balance the strengths and weaknesses of its individual models automatically, so that the best possible decision is taken overall. In this paper, we develop different homogeneous and heterogeneous ensembles of optimized hybrid computational intelligence models for software development effort estimation, using different linear and nonlinear combiners to combine the base hybrid learners. We conduct an empirical study to evaluate and compare the performance of these ensembles on five popular datasets. The results confirm that individual models are unreliable, as their performance is inconsistent and unstable across datasets. Although no ensemble model was consistently the best, many were frequently among the best models for each dataset. The homogeneous ensemble of support vector regression (SVR), with the nonlinear combiner based on an adaptive neuro-fuzzy inference system with subtractive clustering (ANFIS-SC), was the best model when ranked on average performance across the five datasets.
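A homogeneous ensemble of the kind compared here can be sketched as bootstrap aggregation with a linear (mean) combiner. This is a minimal illustration under stated assumptions, not the authors' SVR/ANFIS-SC setup: the base learner is a simple nearest-neighbour effort estimator, and the (size, effort) training pairs are invented:

```python
import random

def knn_predict(train, x, k=1):
    """Base learner: predict effort for project size x by averaging
    the efforts of the k nearest training projects by size."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(effort for _, effort in nearest) / len(nearest)

def bagged_estimate(train, x, n_models=25, seed=0):
    """Homogeneous ensemble: fit one base learner per bootstrap
    resample of the training data, then combine the individual
    predictions with a linear (mean) combiner."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        # Bootstrap sample: draw len(train) points with replacement.
        sample = [rng.choice(train) for _ in train]
        preds.append(knn_predict(sample, x))
    return sum(preds) / len(preds)

# Hypothetical (size, effort) pairs, e.g. KLOC vs. person-months.
projects = [(10, 20), (20, 35), (40, 80), (80, 150)]
estimate = bagged_estimate(projects, 30)
```

Averaging over bootstrap resamples reduces the variance of an unstable base learner, which is one reason homogeneous ensembles can outperform any single model across datasets; a nonlinear combiner such as ANFIS-SC replaces the mean with a learned combination function.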

