Synthesis of Object-Oriented Software Structural Models Using Quality Metrics And Co-Evolutionary Genetic Algorithms

Author(s):  
André Vargas Abs da Cruz ◽  
Dilza Szwarcman ◽  
Thiago S. M. Guimarães ◽  
Marco Aurélio C. Pacheco

One of the biggest challenges for the developer of object-oriented software is modeling and developing the objects themselves so that they are easily reusable in complex systems. The final quality of the software depends mostly on the quality of the modeling developed for it. Modeling and specification of software are fundamental steps in making software development an engineering activity. Design is the activity in which software behavior and structure are elaborated. During this phase many models are developed, anticipating several views of the final product and making software evaluation possible even before the software is implemented. Consequently, the synthesis of a software model can be seen as an optimization problem, in which one attempts to find a better configuration of the elements of the object-oriented paradigm, such as classes, methods and attributes, that meets design quality criteria. This work studies the possibility of synthesizing higher-quality models through Genetic Algorithms, a technique that has proved efficient in dealing with problems involving large search spaces. The work was divided into three main stages: a study of object-oriented software engineering; the definition of a model using genetic algorithms and co-evolution of species for the synthesis of object-oriented software models, aiming at quality improvement; and the implementation of the model for a case study. The study of object-oriented software engineering involved establishing the software development phases and characterizing the representation used in the modeling phase, in particular class diagrams based on UML. The study also investigated software quality metrics such as Reusability, Flexibility, Understandability, Functionality, Extensibility and Effectiveness. The specification of the genetic algorithm consisted of defining a chromosome structure that could provide a good representation of the modeling diagram, and an evaluation function for the design that takes the software quality metrics into consideration. As a result, the chromosomes represent metadata of a simplified UML class diagram, which may later be used as input to a CASE (Computer-Aided Software Engineering) tool that can create the implementation code in the chosen pattern. The evaluation function was defined with a focus on the synthesis of a higher-quality object-oriented software model. In order to handle more than one objective at the same time, the Pareto technique for multi-objective problems was used. The evolution is directed towards the improvement of quality metrics by searching for a qualitatively better model, based on Bansiya's study (Bansiya and Davis, 2002). The construction of a co-evolutionary model consisted of defining distinct species so that each one would represent part of the problem to be evolved, thus enabling a more efficient representation of the model. The co-evolutionary model allowed the evolution of more complex structures, which would not be possible with a simple Genetic Algorithm. The chromosomes of each species encode metadata, which is why the solution assembly (the design) makes use of a decoder. A case study was done to synthesize the model of an elevator controller. The results were compared to models produced by specialists, and the characteristics of these results were analyzed.
The GA's performance in the optimization process was also compared to that of a random search, and in every case the results achieved by the model were better.
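
To make the encoding and Pareto-based evaluation described above concrete, here is a minimal sketch. The chromosome layout, the two illustrative quality scores, and the dominance test are hypothetical simplifications of the approach the abstract describes, not the authors' actual implementation.

```python
import random

# Hypothetical chromosome: for each method/attribute, the index of the class it
# is assigned to. This is a drastic simplification of the UML metadata encoding
# described in the abstract.
N_CLASSES, N_MEMBERS = 4, 12

def random_chromosome():
    return [random.randrange(N_CLASSES) for _ in range(N_MEMBERS)]

def evaluate(chrom):
    """Return a tuple of (illustrative) quality scores to be maximized."""
    sizes = [chrom.count(c) for c in range(N_CLASSES)]
    cohesion = -max(sizes)                        # prefer members spread across classes
    reuse = sum(1 for s in sizes if 0 < s <= 3)   # reward small, focused classes
    return (cohesion, reuse)

def dominates(a, b):
    """Pareto dominance: a is at least as good in every objective and better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Keep only the non-dominated designs from a random population.
pop = [random_chromosome() for _ in range(50)]
scored = [(c, evaluate(c)) for c in pop]
front = [c for c, f in scored
         if not any(dominates(g, f) for _, g in scored if g != f)]
print(len(front), "non-dominated designs")
```

In the co-evolutionary setting described by the authors, each species would encode only part of this metadata (for example, class membership versus associations), and a decoder would assemble the full design before evaluation.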

2013 ◽  
Vol 2013 ◽  
pp. 1-12 ◽  
Author(s):  
Andrés Cencerrado ◽  
Ana Cortés ◽  
Tomàs Margalef

This work presents a framework for assessing how the constraints that exist at the time of attending an ongoing forest fire affect simulation results, both in terms of the quality (accuracy) obtained and the time needed to make a decision. In the wildfire spread simulation and prediction area, it is essential to properly exploit the computational power offered by new computing advances. For this purpose, we rely on a two-stage prediction process to enhance the quality of traditional predictions, taking advantage of parallel computing. This strategy is based on an adjustment stage carried out by a well-known evolutionary technique: Genetic Algorithms. The core of this framework is evaluated according to the principles of probability theory. Thus, a thorough statistical study is presented, oriented towards the characterization of this adjustment technique in order to help operation managers deal with the two aspects previously mentioned: time and quality. The experimental work in this paper is based on a region of Spain which is one of the most prone to forest fires: El Cap de Creus.
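
The two-stage scheme can be illustrated with a small calibration loop: a GA searches for simulation input parameters that minimize the error between the simulated and the observed fire behaviour, and the best individual is then used for the actual prediction. The parameter names, the toy error function, and the GA settings below are illustrative assumptions, not the authors' framework.

```python
import random

# Hypothetical calibration parameters: (wind_speed, wind_direction, fuel_moisture).
BOUNDS = [(0.0, 20.0), (0.0, 360.0), (0.02, 0.30)]
OBSERVED = (8.0, 135.0, 0.10)   # stand-in for the observed fire behaviour

def simulate_error(params):
    """Toy stand-in for 'run the simulator, compare to the observed front'."""
    return sum((p - o) ** 2 for p, o in zip(params, OBSERVED))

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind, rate=0.2):
    return [random.uniform(lo, hi) if random.random() < rate else v
            for v, (lo, hi) in zip(ind, BOUNDS)]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

# Adjustment stage: evolve parameter sets against the last observed interval.
pop = [random_individual() for _ in range(30)]
for _ in range(40):
    pop.sort(key=simulate_error)
    parents = pop[:10]
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(20)]

best = min(pop, key=simulate_error)
print("calibrated parameters for the prediction stage:", best)
```

In practice each fitness evaluation is a full fire-spread simulation, which is why the paper stresses parallel computing and the trade-off between adjustment time and prediction quality.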


2015 ◽  
Vol 6 (4) ◽  
Author(s):  
Mamluatul Hani’ah ◽  
Yogi Kurniaawan ◽  
Umi Laili Yuhana

Abstract. Software quality assurance is one way to increase the quality of software. Improvements in software quality can be measured with software quality metrics, which are part of software quality measurement models. Software quality models currently come in many varieties, so software quality metrics have become increasingly diverse as well. This variety creates the problem of selecting the proper metrics to fit the desired quality measurement parameters. Another problem is the validation that needs to be performed on these metrics in order to obtain objective and valid results. In this paper, a systematic mapping of software quality metrics over the last nine years is conducted. The mapping brings up issues in software quality metrics that can be taken up by other researchers. Furthermore, current trends are introduced and discussed.
Keywords: Software Quality, Software Assessment, Metric


Author(s):  
Vincenzo De Florio ◽  
Chris Blondia

Current software systems, and the environments such systems are meant for, require a precise characterization of the available resources and provisions to constantly re-optimize in the face of endogenous and exogenous changes and failures. This paper claims that it is simply not possible today to conceive of software design without explicitly addressing adaptability and dependability. As an example, the authors remark on how mobile computing technologies call for effective software engineering techniques to design, develop and maintain services that are prepared to keep delivering a fixed, agreed-upon quality of service despite changes in the location of the client software, performance failures, and the characteristics of the environment. The paper concludes that novel paradigms are required for software engineering so as to provide effective system structures for adaptive and dependable services while keeping design complexity under control. The authors discuss this problem and propose one such structure, also briefly surveying the major milestones in the state of the art in this domain.


Diffusion for gaseous sources comprising more than one type of substance is examined to show how relative concentrations change with time and distance. The large variations which are predicted make nonsense of the popular assumption that odour or smell is an intrinsic property of the source material. However, some characterization of volatile chemical substances is needed. It is shown that this is possible by introducing the gas mixture into an enclosed space and allowing a uniform and stable atmosphere to form after the lapse of sufficient time. In this investigation the situation is analysed for a spherical enclosure, using Fourier analysis techniques for the long-timescale behaviour and the Laplace transform for the short-timescale solution. The measurement of odours via the response of sensor arrays within a spherical enclosure is considered, and a proposal is made for utilizing such an enclosure in a definition of volatile molecular substances (analogous to biological ‘smell’). The conditions for optimum compatibility between an array of sensors and a set of calibrands are discussed, and the practical means of effecting such measurements are considered in relation to known types of sensor. It is concluded that the quality of volatile molecular substances is definable and measurable down to very low gas concentrations in air, probably below 10 parts per billion for a wide range of gas mixtures, unconstrained by the limitations associated with a biological nose such as toxicity, temperature and subjective evaluation.
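
As a hedged illustration of the mathematics the abstract refers to, the long-timescale behaviour in a sealed spherical enclosure of radius a can be sketched with the standard radially symmetric diffusion problem; the boundary condition and eigenvalue equation below are the textbook forms for a reflecting sphere, assumed here for illustration rather than quoted from the paper.

```latex
% Radially symmetric diffusion of one component in a sealed sphere of radius a
\frac{\partial c}{\partial t}
  = D\,\frac{1}{r^{2}}\frac{\partial}{\partial r}\!\left(r^{2}\frac{\partial c}{\partial r}\right),
\qquad
\left.\frac{\partial c}{\partial r}\right|_{r=a}=0 .

% Fourier (eigenfunction) expansion for the long-timescale solution
c(r,t) = c_{\infty}
  + \sum_{n=1}^{\infty} A_{n}\,\frac{\sin(\lambda_{n} r)}{\lambda_{n} r}\,
    e^{-D\lambda_{n}^{2} t},
\qquad
\tan(\lambda_{n} a) = \lambda_{n} a .
```

Here c_infinity is the uniform concentration (released mass divided by enclosure volume), so after the transients decay the ratio of two components' concentrations settles to a stable value that can serve as a characterization of the source, independent of distance effects.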


Author(s):  
Volodymyr Baturkin ◽  
Vladylen Zaripov ◽  
Charles E. Andraka ◽  
Timothy A. Moss

Elaboration of robust and reliable capillary systems for solar energy heat pipe receivers is an important step towards the future application of this product in systems with a thermal power of 30–80 kW. The paper considers a new approach to fabricating the capillary structure of heat pipe receivers on the basis of discrete metal fibers with a diameter of 30 microns made of stainless steel 316L, and describes some of the methods of wick characterization as well. This technology has been demonstrated by fabricating porous, 4 mm thick wicks with a bulk porosity of about 0.82, applied to the convex surfaces of dome-shaped receivers with radius 178 mm/height 119 mm (thermal power 36 kW) and radius 247 mm/height 173 mm (thermal power 68 kW), and to the inner surface of a tube with length 450 mm and diameter 73 mm (thermal power 14 kW). The distinction of the proposed technology lies in the use of discrete fibers, which are felted onto the curved surface in a special way, and in combining the felt-formation procedure with sintering of the felt to the surface (the substrate material is Haynes Alloy 230). An extensive program of experimental characterization of the wick layer attached to the substrate has been developed and completed. The characterization of the applied wicks covers their structural properties (local porosity, thickness of the porous layer), mechanical properties (quality of wick bonding to the substrate) and hydrodynamic properties (pumping diameter, single-phase and two-phase permeability). An initial estimation of wick performance was carried out on the basis of methods developed at the National Technical University of Ukraine for the two main modes of receiver operation: with return of sodium to a point on the dome (reflux) and without it. Receiver thermal performance, when operating as part of a solar concentrator assembly, was predicted with specialized heat pipe performance software (Sandia National Laboratories, C. Andraka, 1999).
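
For context on the hydrodynamic quantities mentioned (pumping diameter, single-phase permeability, bulk porosity), the standard capillary-limit relations are commonly used in wick characterization; the forms below are generic textbook expressions, assumed here for illustration rather than quoted from the paper.

```latex
% Maximum capillary pumping pressure from the effective pumping diameter d_p
% (sigma = surface tension of the working fluid, theta = wetting angle):
\Delta P_{\mathrm{cap}} = \frac{4\,\sigma \cos\theta}{d_{p}}

% Darcy's law for single-phase flow through the wick, used to extract the
% permeability K from a measured pressure drop over length L at velocity v:
\Delta P = \frac{\mu\, v\, L}{K}

% Bulk porosity from the apparent and fiber-material densities
% (about 0.82 for the wicks reported here):
\varepsilon = 1 - \frac{\rho_{\mathrm{bulk}}}{\rho_{\mathrm{fiber}}}
```

The capillary pressure must exceed the sum of gravitational, liquid and vapour pressure losses for the wick to keep the evaporator wetted, which is why pumping diameter and permeability are the key figures of merit measured in such a characterization program.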


2020 ◽  
Vol 5 (17) ◽  
pp. 1-5
Author(s):  
Jitendrea Kumar Saha ◽  
Kailash Patidar ◽  
Rishi Kushwah ◽  
Gaurav Saxena

Software quality estimation is an important activity, as it helps eliminate design and code defects. Prediction based on object-oriented quality metrics can help estimate software quality, likely defects, and the chances of errors. In this paper, a survey and a case analysis are presented for object-oriented quality prediction, covering the analytical and experimental aspects of previous methodologies. The survey also elaborates on the different object-oriented parameters that are useful for this problem, and discusses the problem aspects as well as limitations and future directions. Machine learning and artificial intelligence methods are the main focus of the survey. The parameters considered include inheritance, dynamic behavior, encapsulation, and objects.
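
A minimal sketch of the kind of metric-based prediction the survey covers is shown below: object-oriented design metrics (in the spirit of the CK suite) are used as features for a simple classifier that flags likely defect-prone classes. The synthetic data, feature names, and model choice are illustrative assumptions, not taken from any paper in the survey.

```python
# Hypothetical example: predict defect-proneness from OO design metrics.
from sklearn.tree import DecisionTreeClassifier

# Features per class: [depth_of_inheritance, number_of_methods,
#                      coupling_between_objects, lack_of_cohesion]
X_train = [
    [1, 5, 2, 1],
    [4, 30, 12, 9],
    [2, 8, 3, 2],
    [5, 40, 15, 12],
    [1, 6, 1, 1],
    [3, 22, 10, 7],
]
y_train = [0, 1, 0, 1, 0, 1]   # 1 = defect-prone in past releases (synthetic labels)

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

# Classify a new class from its metric values.
candidate = [[4, 28, 11, 8]]
print("defect-prone" if model.predict(candidate)[0] else "likely clean")
```

Most of the surveyed approaches follow this pattern, differing mainly in the metric suite used as features and in the learning algorithm applied.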


Author(s):  
Mazen Ismaeel Ghareb ◽  
Gary Allen

This paper explores a new framework for calculating hybrid system metrics using software quality metrics for aspect-oriented and object-oriented programming. Software metrics for qualitative and quantitative measurement are a mix of static and dynamic software metrics. The literature survey shows that, to date, most architectures have considered only evaluations focused on static metrics for aspect-oriented applications. In our work, we mainly discuss the collection of static parameters along with AspectJ-specific dynamic software metrics. The framework may provide a new direction for research into predicting software attributes, because dynamic metrics have previously been ignored when evaluating quality attributes such as maintainability, reliability, and understandability of Aspect-Oriented software. Dynamic metrics grounded in the fundamentals of software engineering are as crucial for software analysis as static metrics. A similar concept is borrowed with the introduction of dynamic software metrics into aspect-oriented software development. Currently, we only propose a structure and model using static and dynamic parameters to assess the aspect-oriented approach; the proposed approach still needs to be validated.
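
To illustrate what combining static and dynamic measurements into one quality indicator could look like, here is a small sketch; the metric names, the runtime counters, and the weights are placeholders invented for this example, not the framework proposed in the paper.

```python
# Hypothetical hybrid metric: blend static (source-level) and dynamic (runtime)
# measurements for an AspectJ module into one maintainability indicator.

static_metrics = {
    "weighted_operations_per_module": 14,   # static, from source analysis
    "crosscutting_degree_of_aspect": 3,     # static
}
dynamic_metrics = {
    "advice_executions_per_request": 7,     # dynamic, from instrumented runs
    "runtime_coupling_events": 11,          # dynamic
}

# Placeholder weights; a real framework would calibrate these empirically.
WEIGHTS = {
    "weighted_operations_per_module": -0.4,
    "crosscutting_degree_of_aspect": -0.8,
    "advice_executions_per_request": -0.3,
    "runtime_coupling_events": -0.2,
}

def hybrid_score(static_m, dynamic_m, weights, baseline=20.0):
    """Higher score = easier to maintain under this toy weighting."""
    combined = {**static_m, **dynamic_m}
    return baseline + sum(weights[name] * value for name, value in combined.items())

print("hybrid maintainability indicator:",
      hybrid_score(static_metrics, dynamic_metrics, WEIGHTS))
```

The point of such a blend is simply that runtime behaviour (how often advice actually fires, how modules couple at run time) carries information that static inspection of the AspectJ source cannot provide on its own.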



2021 ◽  
pp. 103-123
Author(s):  
Małgorzata Kowalcze

The paper applies selected devices of the methodology of Object-Oriented Ontology to study William Golding’s novel Free Fall. Particular attention is given to Graham Harman’s project, whose definition of an object accounts for all beings, humans included. Within the ontological structure of an object two components can be distinguished: the “sensual object”, which can engage in relationships with other objects, and the “real object”, which refrains from any connections. The author aims to show how the main protagonist of Golding’s novel is affected by material objects, how he perceives other humans as inherently dual beings, and, most importantly, how the protagonist himself discovers the thing-like quality of his own human condition.

