MICROSCOPIC MODEL OF SOFTWARE BUG DYNAMICS: CLOSED SOURCE VERSUS OPEN SOURCE

Author(s):  
DAMIEN CHALLET ◽  
YANN LE DU

We introduce a simple microscopic description of software bug dynamics in which the users, programmers and maintainer of a given program interact through bug creation, detection and correction. When the program is written from scratch, the first phase of development is characterized by a fast decline in the number of bugs, followed by a slow phase in which most bugs have already been fixed and the remaining ones are hard to find. Releasing bug fixes immediately speeds up the debugging process, which substantiates the bazaar open-source methodology. We provide a mathematical analysis that supports our numerical simulations. Finally, we apply our model to the history of Linux and establish a lower bound on the quality of its programmers.
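As a rough illustration of the kind of microscopic dynamics described (a hypothetical sketch, not the authors' model or code; all parameter names and values are invented), the Python snippet below lets users detect bugs of varying visibility, programmers fix the reported ones, and a maintainer ship the fixes either immediately or in batches:

import random

def simulate(n_bugs=500, n_users=100, n_steps=300,
             base_rate=1e-3, p_bad_fix=0.05, release_every=1, seed=0):
    """Toy sketch: each bug has its own 'visibility'; users trigger visible
    bugs, programmers fix reported ones, and the maintainer ships the fixes
    every `release_every` steps. Returns the user-facing bug count per step."""
    rng = random.Random(seed)
    visibility = {b: rng.random() for b in range(n_bugs)}  # easy bugs get found first
    live = set(visibility)           # bugs still present in the released version
    reported, pending = set(), set()
    next_id = n_bugs
    history = []
    for t in range(1, n_steps + 1):
        # detection: chance that at least one of n_users hits bug b this step
        for b in live:
            p_hit = 1.0 - (1.0 - base_rate * visibility[b]) ** n_users
            if rng.random() < p_hit:
                reported.add(b)
        # correction: every reported bug gets a fix; a bad fix spawns a new bug
        for b in reported:
            pending.add(b)
            if rng.random() < p_bad_fix:
                visibility[next_id] = rng.random()
                live.add(next_id)
                next_id += 1
        reported.clear()
        # release: fixes only reach users when a new version ships, so unshipped
        # fixes can be rediscovered and reworked in the meantime
        if t % release_every == 0:
            live -= pending
            pending.clear()
        history.append(len(live))
    return history

bazaar = simulate(release_every=1)      # ship every fix immediately
cathedral = simulate(release_every=30)  # batch fixes into rare releases
print("remaining bugs:", bazaar[-1], "vs", cathedral[-1])

In this toy setup the bug count typically falls quickly at first and then flattens once only low-visibility bugs remain, mirroring the two phases above; the final line compares how many bugs survive under the two release policies.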

2020 ◽  
Vol 22 (2) ◽  
pp. 34-51
Author(s):  
Manar Abu Talib ◽  
Areej Alsaafin ◽  
Selma Manel Medjden

Open source software (OSS) has recently become very important due to the rapid expansion of the software industry. To determine whether a piece of software can achieve its intended purposes, the components of OSS need to be assessed just as they are in closed source (conventional) software. Several quality-in-use models have been introduced to evaluate software quality in various fields. The banking sector is one of the most critical sectors, as it deals with highly sensitive data; it therefore requires an accurate and effective assessment of software quality. In this article, two pieces of banking software are compared: one open source and one closed source. A new quality-in-use model, inspired by ISO/IEC 25010, is used to ensure concise results in the comparison. The results obtained show the great potential of OSS, especially in the banking field.
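As one way such a comparison could be operationalised (a hypothetical sketch only; the article's actual model, characteristics and weights are not reproduced here, and all scores below are invented), the snippet aggregates ISO/IEC 25010-style quality-in-use characteristic scores into a single weighted score for two fictitious banking applications:

# Hypothetical weighted aggregation of quality-in-use characteristics
# (characteristic names follow ISO/IEC 25010; scores and weights are invented).
WEIGHTS = {
    "effectiveness": 0.25,
    "efficiency": 0.20,
    "satisfaction": 0.20,
    "freedom_from_risk": 0.25,   # weighted up for the banking context
    "context_coverage": 0.10,
}

def quality_in_use_score(scores: dict[str, float], weights: dict[str, float] = WEIGHTS) -> float:
    """Weighted mean of characteristic scores, each on a 0-1 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[c] * scores[c] for c in weights)

open_source_app = {"effectiveness": 0.82, "efficiency": 0.78, "satisfaction": 0.80,
                   "freedom_from_risk": 0.75, "context_coverage": 0.70}
closed_source_app = {"effectiveness": 0.80, "efficiency": 0.74, "satisfaction": 0.77,
                     "freedom_from_risk": 0.81, "context_coverage": 0.72}

print("OSS:", round(quality_in_use_score(open_source_app), 3))
print("CSS:", round(quality_in_use_score(closed_source_app), 3))

Weighting freedom from risk more heavily is only one way to reflect the sensitivity of banking data; the characteristics, measures and weights actually used in the article may differ.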


Author(s):  
Mehreen Sirshar ◽  
Asma Ali ◽  
Sara Ibrahim

The complexity of software is increasing day by day because of the growing size of the projects being developed. For better planning and management of large software projects, estimating software quality is important. During development, complexity metrics are used as indicators of the attributes and characteristics of quality software, and many studies have examined the effect of software complexity on cost and quality. In this study, we discuss the effects of software complexity on the quality attributes of open source and closed source software; the quality metrics for the two are not, however, distinct from each other. We comparatively analyze the impact of complexity metrics on open source and proprietary software, and we also present various models for managing project complexity, such as William's Model, Stacey's Agreement and Certainty matrix, Kahane's Approach and the UCP Model. Quality metrics here refer to standards for measuring software quality in terms of attributes or characteristics related to its quality. The quality attributes addressed in this study include usability, reliability, security, portability, maintainability, efficiency, cost, standards and availability. Both open source and closed source software are evaluated on the basis of these attributes. The study also recommends future approaches for managing the quality of open source and closed source software projects and indicates which of the two is more widely used in industry.
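To make the notion of a complexity metric concrete, the sketch below estimates a McCabe-style cyclomatic complexity per function from Python source using the standard ast module; it is an illustrative simplification, not one of the metrics, tools or models analysed in the surveyed studies.

import ast

# AST node types that open a new decision branch; each one adds 1 to the
# McCabe-style estimate (a simplification of the full definition).
_BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(func: ast.FunctionDef) -> int:
    return 1 + sum(isinstance(node, _BRANCH_NODES) for node in ast.walk(func))

def complexity_report(source: str) -> dict[str, int]:
    """Map every function defined in `source` to its complexity estimate."""
    tree = ast.parse(source)
    return {node.name: cyclomatic_complexity(node)
            for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)}

if __name__ == "__main__":
    sample = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            return "even-ish"
    return "other"
"""
    print(complexity_report(sample))  # {'classify': 5}

Real analyses would rely on established tooling and a broader metric suite (size, coupling, cohesion and so on); this snippet only shows what a complexity metric means operationally.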


SLEEP ◽  
2020 ◽  
Author(s):  
Luca Menghini ◽  
Nicola Cellini ◽  
Aimee Goldstone ◽  
Fiona C Baker ◽  
Massimiliano de Zambotti

Sleep-tracking devices, particularly within the consumer sleep technology (CST) space, are increasingly used in both research and clinical settings, providing new opportunities for large-scale data collection in highly ecological conditions. Due to the fast pace of the CST industry combined with the lack of a standardized framework to evaluate the performance of sleep trackers, their accuracy and reliability in measuring sleep remain largely unknown. Here, we provide a step-by-step analytical framework for evaluating the performance of sleep trackers (including standard actigraphy) against gold-standard polysomnography (PSG) or other reference methods. The analytical guidelines are based on recent recommendations for evaluating and using CST from our group and others (de Zambotti and colleagues; Depner and colleagues), and cover raw data organization as well as critical analytical procedures, including discrepancy analysis, Bland–Altman plots, and epoch-by-epoch analysis. The analytical steps are accompanied by open-source R functions (available at https://sri-human-sleep.github.io/sleep-trackers-performance/AnalyticalPipeline_v1.0.0.html). In addition, an empirical sample dataset is used to describe and discuss the main outcomes of the proposed pipeline. The guidelines and the accompanying functions are aimed at standardizing the testing of CST performance, not only to increase the replicability of validation studies, but also to provide ready-to-use tools to researchers and clinicians. All in all, this work can help to increase the efficiency, interpretation, and quality of validation studies, and to improve the informed adoption of CST in research and clinical settings.
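The authors' R functions are available at the linked page; purely as a hypothetical illustration of two steps of such a pipeline (the function names and numbers below are invented, not taken from the paper), a discrepancy analysis and an epoch-by-epoch comparison might look as follows in Python:

import numpy as np

def bland_altman(device: np.ndarray, reference: np.ndarray):
    """Bias and 95% limits of agreement between paired measurements
    (e.g. total sleep time per night from a tracker vs. PSG)."""
    diff = device - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def epoch_by_epoch_accuracy(device_epochs: np.ndarray, psg_epochs: np.ndarray):
    """Per-epoch sensitivity (sleep) and specificity (wake) against PSG,
    with epochs coded 1 = sleep, 0 = wake."""
    sleep = psg_epochs == 1
    wake = psg_epochs == 0
    sensitivity = (device_epochs[sleep] == 1).mean()
    specificity = (device_epochs[wake] == 0).mean()
    return sensitivity, specificity

# toy example with made-up numbers (minutes of total sleep time per night)
tst_device = np.array([402.0, 431.5, 388.0, 415.0])
tst_psg = np.array([395.0, 440.0, 380.5, 410.0])
print(bland_altman(tst_device, tst_psg))

device_epochs = np.array([1, 1, 1, 0, 1, 1, 0, 1])
psg_epochs = np.array([1, 1, 0, 0, 1, 1, 1, 1])
print(epoch_by_epoch_accuracy(device_epochs, psg_epochs))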


2021 ◽  
Vol 11 (12) ◽  
pp. 5690
Author(s):  
Mamdouh Alenezi

The evolution of software is necessary for the success of software systems, and studying and understanding this evolution is a central topic in software engineering. One of the primary concepts of software evolution is that the internal quality of a software system declines as it evolves. In this paper, the evolution of the internal quality of object-oriented open-source software systems is examined using a software-metrics approach. More specifically, we analyze how software systems evolve across versions in terms of size, and how size relates to different internal quality metrics. The results and observations of this research include: (i) there is a significant difference between systems with respect to the LOC variable; (ii) there is a significant correlation between all pairwise comparisons of internal quality metrics; and (iii) the effect of complexity and inheritance on LOC was positive and significant, while the effect of coupling and cohesion was not significant.
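A minimal sketch of the kind of analysis described, using invented per-release figures rather than the study's data: pairwise Pearson correlations between LOC and common object-oriented internal quality metrics across versions.

import numpy as np

# Hypothetical per-release metric values for one system (one row per version):
# columns: LOC, weighted methods per class (complexity), depth of inheritance,
# coupling between objects, lack of cohesion. All numbers are made up.
versions = np.array([
    [12_000, 310, 2.1, 145, 0.42],
    [15_400, 362, 2.3, 160, 0.44],
    [18_900, 401, 2.4, 171, 0.43],
    [23_500, 476, 2.6, 183, 0.45],
    [27_800, 538, 2.7, 196, 0.46],
])
names = ["LOC", "WMC", "DIT", "CBO", "LCOM"]

corr = np.corrcoef(versions, rowvar=False)   # pairwise Pearson correlations
for i, a in enumerate(names):
    for j in range(i + 1, len(names)):
        print(f"r({a}, {names[j]}) = {corr[i, j]:+.2f}")

With real per-release measurements, the same correlation table (and a regression of LOC on the other metrics) is the kind of evidence behind observations (ii) and (iii) above.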


2015 ◽  
Vol 19 (4) ◽  
pp. 791-813 ◽  
Author(s):  
Zilia Iskoujina ◽  
Joanne Roberts

Purpose – This paper aims to add to the understanding of knowledge sharing in online communities through an investigation of the relationship between individual participant’s motivations and management in open source software (OSS) communities. Drawing on a review of literature concerning knowledge sharing in organisations, the factors that motivate participants to share their knowledge in OSS communities, and the management of such communities, it is hypothesised that the quality of management influences the extent to which the motivations of members actually result in knowledge sharing. Design/methodology/approach – To test the hypothesis, quantitative data were collected through an online questionnaire survey of OSS web developers with the aim of gathering respondents’ opinions concerning knowledge sharing, motivations to share knowledge and satisfaction with the management of OSS projects. Factor analysis, descriptive analysis, correlation analysis and regression analysis were used to explore the survey data. Findings – The analysis of the data reveals that the individual participant’s satisfaction with the management of an OSS project is an important factor influencing the extent of their personal contribution to a community. Originality/value – Little attention has been devoted to understanding the impact of management in OSS communities. Focused on OSS developers specialising in web development, the findings of this paper offer an important original contribution to understanding the connections between individual members’ satisfaction with management and their motivations to contribute to an OSS project. The findings reveal that motivations to share knowledge in online communities are influenced by the quality of management. Consequently, the findings suggest that appropriate management can enhance knowledge sharing in OSS projects and online communities, and organisations more generally.


Author(s):  
Ruben Brondeel ◽  
Yan Kestens ◽  
Javad Rahimipour Anaraki ◽  
Kevin Stanley ◽  
Benoit Thierry ◽  
...  

Background: Closed-source software for processing and analyzing accelerometer data provides little to no information about the algorithms used to transform acceleration data into physical activity indicators. Recently, an algorithm was developed in MATLAB that replicates the frequently used proprietary ActiLife activity counts. The aim of this software profile was (a) to translate the MATLAB algorithm into R and Python and (b) to test the accuracy of the algorithm on free-living data. Methods: As part of the INTErventions, Research, and Action in Cities Team, data were collected from 86 participants in Victoria (Canada). The participants were asked to wear an integrated global positioning system and accelerometer sensor (SenseDoc) on the right hip for 10 days. Raw accelerometer data were processed in ActiLife, MATLAB, R, and Python and compared using Pearson correlation, intraclass correlation, and visual inspection. Results: Data were collected for a combined 749 valid days (>10 hr wear time). MATLAB, Python, and R counts per minute on the vertical axis had Pearson correlations with the ActiLife counts per minute of .998, .998, and .999, respectively. All three algorithms overestimated ActiLife counts per minute, some by up to 2.8%. Conclusions: A MATLAB algorithm for deriving ActiLife counts was implemented in R and Python. The different implementations provide results similar to the ActiLife counts produced by the closed-source software and can, for all practical purposes, be used interchangeably. This opens up possibilities for comparing studies that use similar accelerometers from different suppliers, and for using free, open-source software.
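A hypothetical sketch of how two counts-per-minute series might be compared (the published R and Python implementations are the authoritative reference; the function name and values below are invented):

import numpy as np

def compare_counts(counts_a: np.ndarray, counts_b: np.ndarray):
    """Compare two counts-per-minute series epoch by epoch: Pearson r and
    mean relative difference of A with respect to B (in percent)."""
    r = np.corrcoef(counts_a, counts_b)[0, 1]
    valid = counts_b > 0                      # skip empty epochs to avoid dividing by zero
    rel_diff = 100.0 * np.mean((counts_a[valid] - counts_b[valid]) / counts_b[valid])
    return r, rel_diff

# toy example: 'open_cpm' stands in for a reimplementation's output,
# 'actilife_cpm' for the proprietary reference (values are made up)
actilife_cpm = np.array([0, 120, 340, 560, 210, 0, 980, 450], dtype=float)
open_cpm = np.array([0, 123, 347, 570, 214, 0, 1001, 459], dtype=float)
r, bias_pct = compare_counts(open_cpm, actilife_cpm)
print(f"Pearson r = {r:.3f}, mean relative difference = {bias_pct:+.1f}%")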


2021 ◽  
Author(s):  
Jason Hunter ◽  
Mark Thyer ◽  
Dmitri Kavetski ◽  
David McInerney

Probabilistic predictions provide crucial information regarding the uncertainty of hydrological predictions, which are a key input for risk-based decision-making. However, they are often excluded from hydrological modelling applications because suitable probabilistic error models can be both challenging to construct and interpret, and the quality of the results is often reliant on the objective function used to calibrate the hydrological model.

We present an open-source R package and an online web application that achieve the following two aims. Firstly, these resources are easy to use and accessible, so that users need not have specialised knowledge in probabilistic modelling to apply them. Secondly, the probabilistic error model that we describe provides high-quality probabilistic predictions for a wide range of commonly used hydrological objective functions, which is made possible by a new development that resolves a long-standing issue with model assumptions that previously prevented this broad application.

We demonstrate our methods by comparing our new probabilistic error model with an existing reference error model in an empirical case study that uses 54 perennial Australian catchments, the hydrological model GR4J, 8 common objective functions and 4 performance metrics (reliability, precision, volumetric bias and errors in the flow duration curve). The existing reference error model introduces additional flow dependencies into the residual error structure when it is used with most of the study objective functions, which in turn leads to poor-quality probabilistic predictions. In contrast, the new probabilistic error model achieves high-quality probabilistic predictions for all objective functions used in this case study.

The new probabilistic error model, the open-source software and the web application aim to facilitate the adoption of probabilistic predictions in the hydrological modelling community, and to improve the quality of predictions and of the decisions made using them. In particular, our methods can be used to achieve high-quality probabilistic predictions from hydrological models that are calibrated with a wide range of common objective functions.
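As a rough, hypothetical illustration of two of the performance metrics mentioned (not the authors' R package or its API; the data below are simulated), a PIT-based reliability index and a simple precision measure can be computed from an ensemble of probabilistic streamflow predictions as follows:

import numpy as np

def reliability_pit(ensemble: np.ndarray, observed: np.ndarray) -> float:
    """Reliability from predictive quantile (PIT) values: 1 minus twice the mean
    absolute deviation of the sorted PIT values from a uniform distribution,
    so 1 is perfectly reliable and 0 is worst."""
    # ensemble: shape (n_times, n_replicates); observed: shape (n_times,)
    pit = (ensemble <= observed[:, None]).mean(axis=1)
    theoretical = (np.arange(1, len(pit) + 1) - 0.5) / len(pit)
    return 1.0 - 2.0 * np.mean(np.abs(np.sort(pit) - theoretical))

def precision(ensemble: np.ndarray, observed: np.ndarray) -> float:
    """Average predictive standard deviation relative to mean observed flow
    (smaller is sharper)."""
    return ensemble.std(axis=1, ddof=1).mean() / observed.mean()

# toy example: 100 time steps, 500 ensemble replicates of simulated flow
rng = np.random.default_rng(1)
obs = rng.gamma(shape=2.0, scale=5.0, size=100)
ens = obs[:, None] * rng.lognormal(mean=0.0, sigma=0.2, size=(100, 500))
print("reliability:", round(reliability_pit(ens, obs), 3))
print("precision:  ", round(precision(ens, obs), 3))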


Author(s):  
K. Pavlov ◽  
S. Ilyin

The article presents the author's formalized approach to forming and evaluating a system of indicators for conducting electronic business in static and dynamic states, which helps to reveal in detail the interaction of factor and resultant parameters through which organizations can justify the decisions taken by entrepreneurs in modern economic conditions and secure the best final and intermediate benchmarks for themselves. The purpose of the study is to create a reliable system that integrates general and particular indicators of e-business for the subsequent balancing of results and costs and the extraction of marginal financial benefits. The objectives of the study focus on describing the activities of organizations that operate and implement business processes in the current economic environment, and on integrating, on that basis, indicators that reflect static and dynamic cause-and-effect relationships in the field of commerce. The research was carried out using the method of chain substitutions supplemented with mathematical analysis, which allows organizations to obtain complete and reliable information about the quality of their e-business. The work is intended for managers and specialists of commercial organizations, educators and scientists engaged in the study of the economics of entrepreneurship.
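The chain-substitution method referred to above decomposes the change in a resultant indicator into the contributions of its factors by replacing base-period values with reporting-period values one factor at a time; the sketch below illustrates the arithmetic with invented factor names and figures (it is not the author's indicator system).

from functools import reduce
from operator import mul

def chain_substitution(base: dict[str, float], actual: dict[str, float]) -> dict[str, float]:
    """Contribution of each factor to the change in a multiplicative indicator
    P = f1 * f2 * ... * fn, substituting factors in the given (fixed) order."""
    names = list(base)
    contributions = {}
    current = [base[n] for n in names]
    prev_value = reduce(mul, current)
    for i, n in enumerate(names):
        current[i] = actual[n]
        new_value = reduce(mul, current)
        contributions[n] = new_value - prev_value
        prev_value = new_value
    return contributions

# hypothetical e-business revenue = visitors * conversion rate * average order value
base = {"visitors": 100_000, "conversion": 0.020, "avg_order": 45.0}
actual = {"visitors": 120_000, "conversion": 0.024, "avg_order": 43.0}
effects = chain_substitution(base, actual)
print(effects)                # per-factor contribution to the revenue change
print(sum(effects.values()))  # equals the total change in revenue

Note that with chain substitution the per-factor contributions depend on the order in which factors are substituted, which is why the factor order is fixed in advance.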

