113 times Tomcat: A dataset

Author(s):  
Giuseppe Destefanis ◽  
Mahir Arzoky ◽  
Steve Counsell ◽  
Stephen Swift ◽  
Marco Ortu ◽  
...  

Measuring software to obtain information about its properties and quality is one of the main concerns in modern software engineering. The aim of this paper is to present a dataset of metrics associated with 113 versions of Tomcat. We describe the dataset, the criteria adopted to build it, and the research opportunities it offers, and we provide preliminary results. The dataset can enhance the reliability of empirical studies by enabling their reproducibility and reducing their cost, and it can foster further research on software quality and software metrics.
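As a sketch of how such a dataset might be explored, the snippet below loads per-class metrics and aggregates them per Tomcat version. The file name and column names (version, class_name, loc, wmc) are assumptions for illustration, not the dataset's actual layout.

```python
# Illustrative sketch only: the file name and the column names
# (version, class_name, loc, wmc) are assumptions, not the dataset's real layout.
import pandas as pd

def summarize_tomcat_metrics(csv_path: str = "tomcat_metrics.csv") -> pd.DataFrame:
    """Aggregate class-level metrics into one row per Tomcat version."""
    df = pd.read_csv(csv_path)
    per_version = (
        df.groupby("version")
          .agg(classes=("class_name", "count"),
               total_loc=("loc", "sum"),
               mean_wmc=("wmc", "mean"))
          .reset_index()
          .sort_values("version")
    )
    return per_version

if __name__ == "__main__":
    # E.g., inspect how size and complexity evolve across the 113 versions.
    print(summarize_tomcat_metrics().head())
```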


2009 ◽  
pp. 3142-3159 ◽  
Author(s):  
Witold Pedrycz ◽  
Giancarlo Succi

Learning ability and high transparency are two important and highly desirable features of any model of software quality. The transparency and user-centricity of quantitative models in software engineering are of paramount relevance, as they help us gain a better and more comprehensive insight into the relationships that characterize software quality and software processes. In this study, we are concerned with logic-driven architectures of logic models based on fuzzy multiplexers (fMUXs). These constructs exhibit a clear and modular topology whose interpretation gives rise to a collection of straightforward logic expressions. The design of the logic models relies on genetic optimization, and genetic algorithms in particular. Through prudent use of this optimization framework, we address the structural and parametric optimization of the logic models. The experimental studies exploit software data that relates software metrics (measures) to the number of modifications made to software modules.
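A minimal sketch of the general idea, not the authors' exact fMUX architecture: a two-input fuzzy multiplexer built from a product t-norm and a probabilistic-sum s-norm, whose select signal is tuned by a toy genetic loop against invented metric/modification data.

```python
# Minimal sketch, not the paper's fMUX architecture: a two-input fuzzy multiplexer
# (product t-norm, probabilistic-sum s-norm) whose select signal is tuned by a toy
# genetic loop; the metric/modification data below is made up for illustration.
import random

def fmux(a: float, b: float, select: float) -> float:
    """Fuzzy multiplexer: routes input a when select is high, b when it is low."""
    t1 = select * a                  # t-norm (product)
    t2 = (1.0 - select) * b
    return t1 + t2 - t1 * t2         # s-norm (probabilistic sum)

def fitness(select: float, data) -> float:
    """Mean squared error between fMUX output and observed (normalized) modifications."""
    return sum((fmux(a, b, select) - y) ** 2 for a, b, y in data) / len(data)

def evolve(data, pop_size=20, generations=50, mutation=0.1) -> float:
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda s: fitness(s, data))
        parents = population[: pop_size // 2]                       # truncation selection
        children = [min(1.0, max(0.0, p + random.gauss(0, mutation))) for p in parents]
        population = parents + children
    return min(population, key=lambda s: fitness(s, data))

# Toy data: (metric_a, metric_b, normalized number of modifications).
toy = [(0.2, 0.8, 0.6), (0.9, 0.1, 0.4), (0.5, 0.5, 0.5)]
print("best select signal:", evolve(toy))
```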


Author(s):  
TAGHI M. KHOSHGOFTAAR ◽  
EDWARD B. ALLEN

Embedded-computer systems have become essential to life in modern society. For example, the backbone of society's information infrastructure is telecommunications. Embedded systems must have highly reliable software, so that we avoid the severe consequences of failures, intolerable down-time, and expensive repairs in remote locations. Moreover, today's fast-moving technology marketplace mandates that embedded systems evolve, resulting in multiple software releases embedded in multiple products. Software quality models can be valuable tools for software engineering of embedded systems, because some software-enhancement techniques are so expensive or time-consuming that it is not practical to apply them to all modules. Targeting such enhancement techniques at the modules that need them most is an effective way to reduce the likelihood of faults discovered in the field. Research has shown software metrics to be useful predictors of software faults. A software quality model is developed using measurements and fault data from a past release. The calibrated model is then applied to modules currently under development. Such models yield predictions on a module-by-module basis. This paper examines the Classification And Regression Trees (CART) algorithm for building tree-based models that predict which software modules have a high risk of faults being discovered during operations. CART is attractive because it emphasizes pruning to achieve robust models. This paper presents details of the CART algorithm in the context of software engineering of embedded systems. We illustrate this approach with a case study of four consecutive releases of software embedded in a large telecommunications system. The level of accuracy achieved in the case study would be useful to developers of an embedded system. The case study indicated that this model would continue to be useful over several releases as the system evolves.
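A minimal sketch of this workflow, assuming scikit-learn's DecisionTreeClassifier as a stand-in for the original CART implementation; the module metrics and fault labels are invented for illustration.

```python
# Sketch of the calibrate-then-apply workflow described above, using scikit-learn's
# DecisionTreeClassifier in place of the original CART tool; data is invented.
from sklearn.tree import DecisionTreeClassifier

# Module metrics from a past release (e.g., lines of code, cyclomatic complexity,
# number of changes) and whether each module later exposed a field fault.
past_release_metrics = [
    [120, 4, 1], [560, 18, 7], [80, 2, 0], [940, 25, 12], [300, 9, 3],
]
past_release_faulty = [0, 1, 0, 1, 0]

# ccp_alpha enables cost-complexity pruning, in the spirit of CART's emphasis
# on pruning to obtain robust, non-overfitted trees.
model = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0)
model.fit(past_release_metrics, past_release_faulty)

# Apply the calibrated model, module by module, to the release under development.
current_release_metrics = [[210, 7, 2], [875, 30, 10]]
print(model.predict(current_release_metrics))   # 1 = predicted fault-prone
```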


Software metrics are directly linked to measurement in software engineering. Correct measurement is a precondition in any engineering field, and software engineering is no exception: as the size and complexity of software increase, manual examination becomes a harder task. Most software engineers are concerned with software quality and with how to measure and improve it. The overall objective of this study was to assess and analyse the software metrics used to measure software products and processes. The study draws on literature from various electronic databases, published since 2008, to understand software metrics. The study characterizes software quality as a way of measuring how software is designed and how well it conforms to that design. The quality attributes to look for, such as correctness, scalability, completeness, and absence of bugs, and the quality standards adopted, differ from one organization to another; for this reason it is preferable to apply software measurements, supported by the most common software metrics tools, to reduce bias when evaluating software quality. The main contribution of this study is an overview of software metrics that illustrates developments in the field through a critical investigation of key metrics addressing both developer and user interaction; a unified definition of Software Quality Management on User and Developer (SQMUD) is proposed.
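As a toy illustration of automated measurement (not one of the metrics tools surveyed in the study), the snippet below computes two elementary metrics: non-blank lines of code and a rough decision-point count used as a proxy for cyclomatic complexity.

```python
# Toy measurement sketch: non-blank lines of code and an approximate decision-point
# count as a crude complexity proxy; real tools are far more precise.
DECISION_KEYWORDS = ("if ", "elif ", "for ", "while ", "case ", "&&", "||")

def measure(source: str) -> dict:
    lines = [ln.strip() for ln in source.splitlines()]
    loc = sum(1 for ln in lines if ln and not ln.startswith(("#", "//")))
    decisions = sum(ln.count(kw) for ln in lines for kw in DECISION_KEYWORDS)
    return {"loc": loc, "approx_cyclomatic": decisions + 1}

sample = """
def pay(amount):
    if amount > 100:
        return amount * 0.9
    return amount
"""
print(measure(sample))   # {'loc': 4, 'approx_cyclomatic': 2}
```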


2019 ◽  
Vol 214 ◽  
pp. 05007
Author(s):  
Marco Canaparo ◽  
Elisabetta Ronchieri

Software quality monitoring and analysis are among the most productive topics in software engineering research. Their results may be effectively employed by engineers during the software development life cycle. Open source software constitutes a valid test case for the assessment of software characteristics. The data mining approach has been proposed in the literature to extract software characteristics from software engineering data. This paper aims at comparing diverse data mining techniques (e.g., derived from machine learning) for developing effective software quality prediction models. To achieve this goal, we tackled various issues, such as the collection of software metrics from open source repositories, the assessment of prediction models to detect software issues, and the adoption of statistical methods to evaluate data mining techniques. The results of this study aim to identify which of the data mining techniques used in this paper perform best for software quality prediction models.
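A minimal sketch of the comparison idea, assuming scikit-learn and synthetic data in place of the metrics actually mined from open source repositories; it is not the paper's pipeline.

```python
# Sketch: several data mining techniques evaluated with cross-validation on the same
# data. Synthetic data stands in for repository-mined software metrics.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Stand-in for "software metrics -> issue-prone or not" data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.3f}")
```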


Author(s):  
SANG HUN OH ◽  
YOON JOON LEE ◽  
MYOUNG HO KIM

A management discipline of software metrics facilitates their consistent measurement and usage during the life cycle of software systems, by maintaining knowledge and data related to their evaluation tasks and application domain knowledge. To investigate the environment of this management discipline, we represent software engineering environments by an activity flow model and a micro cycle model. A unified management view of software metrics is established by their collectivity and comprehensiveness in the two-dimensional flows of the activity flow model. We then propose the Software Quality Manager (SQM) as a realization of the management discipline of software metrics. The proposed software quality manager makes use of two knowledge bases: (1) knowledge about the application domain in the form of reusable software components and (2) knowledge about software engineering. The three-stage architecture and preparation process of the proposed system are also presented. Finally, we propose how to support the measurement and analysis task for database management systems within the context of the software quality manager.
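A purely illustrative sketch, not the paper's SQM design: a tiny registry that keeps evaluation knowledge, i.e., which metric belongs to which life cycle activity and how it is computed.

```python
# Purely illustrative, not the paper's SQM architecture: a small metric registry
# linking each metric to a life cycle activity and an evaluation procedure.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class MetricEntry:
    name: str
    activity: str                      # life cycle activity the metric belongs to
    compute: Callable[[dict], float]   # evaluation procedure over collected data

@dataclass
class SoftwareQualityManager:
    registry: Dict[str, MetricEntry] = field(default_factory=dict)

    def register(self, entry: MetricEntry) -> None:
        self.registry[entry.name] = entry

    def measure(self, name: str, data: dict) -> float:
        return self.registry[name].compute(data)

sqm = SoftwareQualityManager()
sqm.register(MetricEntry("defect_density", "testing",
                         lambda d: d["defects"] / d["kloc"]))
print(sqm.measure("defect_density", {"defects": 12, "kloc": 4.0}))  # 3.0
```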


2021 ◽  
Vol 26 (5) ◽  
Author(s):  
Maria Ulan ◽  
Welf Löwe ◽  
Morgan Ericsson ◽  
Anna Wingkvist

It is a well-known practice in software engineering to aggregate software metrics to assess software artifacts for various purposes, such as their maintainability or their proneness to contain bugs. For different purposes, different metrics might be relevant. However, weighting these software metrics according to their contribution to the respective purpose is a challenging task. Manual approaches based on experts do not scale with the number of metrics. Also, experts get confused if the metrics are not independent, which is rarely the case. Automated approaches based on supervised learning require reliable and generalizable training data, a ground truth, which is rarely available. We propose an automated approach to weighted metrics aggregation that is based on unsupervised learning. It sets metrics scores and their weights based on probability theory and aggregates them. To evaluate the effectiveness, we conducted two empirical studies on defect prediction, one on ca. 200 000 code changes and another on ca. 5 000 software classes. The results show that our approach can be used as an agnostic unsupervised predictor in the absence of a ground truth.
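A simplified sketch of probability-based scoring and aggregation, not the authors' exact method: each metric value is scored by its empirical CDF and the per-metric scores are averaged into one score per artifact.

```python
# Simplified sketch of probability-based aggregation (not the authors' exact method):
# each metric value is scored by its empirical CDF, then scores are averaged per artifact.
import numpy as np

def empirical_cdf_scores(values: np.ndarray) -> np.ndarray:
    """Score per artifact for one lower-is-better metric: P(metric <= value)."""
    ranks = values.argsort().argsort()           # rank of each value within the column
    return (ranks + 1) / len(values)

def aggregate(metric_matrix: np.ndarray) -> np.ndarray:
    """Aggregate column-wise CDF scores into one score per artifact (row)."""
    scores = np.column_stack([empirical_cdf_scores(col) for col in metric_matrix.T])
    return scores.mean(axis=1)                   # unweighted mean as the aggregation

# Rows: software classes, columns: metrics (e.g., size, coupling, complexity).
metrics = np.array([[120, 3, 5], [800, 14, 21], [60, 1, 2]])
print(aggregate(metrics))                        # higher = worse under lower-is-better metrics
```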


Author(s):  
Seetharam .K ◽  
Sharana Basava Gowda ◽  
. Varadaraj

In software engineering, software metrics have a wide and deep scope. Many projects fail because of risks in software engineering development [1]. Among the various risk factors, requirements creep is one. This paper discusses the approximate volume of creeping requirements that occur after the completion of the nominal requirements phase, using software size measured in function points at four different levels. The major risk factors are associated, both directly and indirectly, with the size of the software being developed. Hence, it is possible to predict the risk due to creeping requirements using size.
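A toy illustration of size-based creep estimation; the monthly creep rate and the four size levels below are hypothetical placeholders, not the calibrated figures from the paper.

```python
# Toy illustration only: the creep rate and the four size levels are hypothetical
# placeholders, not the values derived in the paper.
def creeping_requirements(nominal_fp: float, months: int,
                          monthly_creep_rate: float = 0.02) -> float:
    """Approximate function points added after the nominal requirements phase,
    assuming a fixed monthly creep rate applied to the nominal size."""
    return nominal_fp * monthly_creep_rate * months

# Four hypothetical project size levels, in function points.
for size in (100, 1_000, 10_000, 100_000):
    print(f"{size:>7} FP -> ~{creeping_requirements(size, months=12):.0f} FP of creep in a year")
```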

