Hybrid Maintainability Prediction using Soft Computing Techniques

2021 ◽  
pp. 350-356
Author(s):  
Manju Duhan ◽  
Pradeep Kumar Bhatia

Effective software maintenance is a crucial quality factor, and its measurement can be supported by software metrics. In this paper, the authors derive a new approach for measuring the maintainability of software based on hybrid metrics that combine the advantages of both static and dynamic metrics in an object-oriented environment; dynamic metrics capture run-time features of object-oriented languages, such as run-time polymorphism and dynamic binding, which static metrics do not cover. To achieve this, the authors propose a model based on static and hybrid metrics that measures the maintainability factor using soft computing techniques. The proposed neuro-fuzzy model trained well and produced adequate predictions, with an MAE of 0.003 and an RMSE of 0.009 on the hybrid metrics. Additionally, the model was validated on two test datasets, and it is concluded that the proposed model performed well when based on hybrid metrics.
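For illustration, the sketch below shows how such a maintainability predictor could be scored with MAE and RMSE. A scikit-learn MLPRegressor stands in for the authors' neuro-fuzzy model, and the hybrid-metric features and dataset are synthetic placeholders, not the paper's data.

```python
# Sketch: evaluating a maintainability predictor trained on hybrid
# (static + dynamic) metrics with MAE and RMSE. The regressor is a stand-in
# for the neuro-fuzzy model; features and targets are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical hybrid metrics, e.g. static (WMC, DIT) plus dynamic (runtime coupling)
X = rng.random((200, 4))
y = X @ np.array([0.4, 0.2, 0.3, 0.1]) + rng.normal(0, 0.01, 200)  # maintainability index

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
mae = mean_absolute_error(y_te, pred)
rmse = np.sqrt(mean_squared_error(y_te, pred))
print(f"MAE={mae:.3f}  RMSE={rmse:.3f}")
```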

Author(s):  
Pankaj H. Chandankhede

Texture can be considered a repeating pattern of local variation in pixel intensities. This work classifies textures using Discrete Cosine Transform (DCT) coefficients of texture images. As the DCT works on gray-level images, the color scheme of each image is first transformed into gray levels. For classifying the images using DCT features, two popular soft computing techniques, namely neurocomputing and neuro-fuzzy computing, are used: a feedforward neural network trained with the backpropagation learning algorithm and an evolving fuzzy neural network classify the textures. The soft computing models were trained using 80% of the texture data, and the remainder was used for testing and validation. A performance comparison was made among the soft computing models for the texture classification problem, in which the goal is to assign an unknown sample image to one of a set of known texture classes. It is observed that the proposed neuro-fuzzy model performed better than the neural network.
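A minimal sketch of the DCT feature-extraction stage is given below, assuming synthetic gray-level texture patches and a scikit-learn MLPClassifier as a stand-in for the neural stage; the paper's evolving fuzzy neural network is not reproduced.

```python
# Sketch: 2-D DCT coefficients of gray-level texture patches as features for
# a simple feed-forward classifier. All data here is synthetic.
import numpy as np
from scipy.fftpack import dct
from sklearn.neural_network import MLPClassifier

def dct2(block):
    """2-D DCT via separable 1-D type-II transforms (orthonormal)."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def dct_features(gray_patch, k=8):
    """Keep the top-left k x k low-frequency coefficients as the feature vector."""
    return dct2(gray_patch.astype(float))[:k, :k].ravel()

# Two synthetic gray-level texture classes: horizontal vs. vertical stripes
rng = np.random.default_rng(1)
stripes = (np.indices((32, 32))[0] % 4 < 2).astype(float)
patches, labels = [], []
for _ in range(100):
    patches += [stripes + rng.normal(0, 0.1, (32, 32)),      # class 0
                stripes.T + rng.normal(0, 0.1, (32, 32))]     # class 1
    labels += [0, 1]
X = np.array([dct_features(p) for p in patches])
y = np.array(labels)

# 80% of the data for training, the rest for testing, as in the paper
n_train = int(0.8 * len(X))
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X[:n_train], y[:n_train])
print("test accuracy:", clf.score(X[n_train:], y[n_train:]))
```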


2019 ◽  
Vol 3 (2) ◽  
pp. 74
Author(s):  
Mazen Ismaeel Ghareb ◽  
Garry Allen

The quality evaluation of software metrics measurement is considered a primary indicator of defect prediction and software maintenance in various empirical studies of software products. However, there is no agreement on which metrics are compelling quality indicators for newer software development approaches such as aspect-oriented programming (AOP). AOP intends to enhance software quality by providing fundamentally different constructs, for example, pointcuts, advice, and intertype relationships. Hence, it is not evident whether quality indicators for AOP can be obtained as direct extensions of traditional object-oriented programming (OOP) measurements. Then again, investigations of AOP regularly depend on established static and dynamic metrics; notwithstanding the recent empirical research on AOP, few analyses have adopted the International Organization for Standardization (ISO) 9126 quality model as a useful marker of fault proneness in this context. In this paper we consider different software quality models proposed by various authors over time and identify that adaptability is deficient in the current models. We tested ten projects developed with AOP. We used many applications to extract the metrics, but none of them could extract all AOP metrics; each can measure only some of them. This study investigates a suitable framework for extracting AOP metrics, for instance, static and dynamic metrics for hybrid (AOP and OOP) applications or AOP-only applications.


2020 ◽  
Vol 13 (5) ◽  
pp. 1047-1056
Author(s):  
Akshi Kumar ◽  
Arunima Jaiswal

Background: Sentiment analysis of big data such as Twitter primarily aids organizations with the potential of surveying public opinions or emotions about the products and events associated with them. Objective: In this paper, we propose the application of a deep learning architecture, namely the Convolutional Neural Network (CNN), to Twitter sentiment classification. The proposed model is implemented on benchmark Twitter corpora (SemEval 2016 and SemEval 2017) and empirically analyzed against other baseline supervised soft computing techniques. The pragmatics of the work includes modelling the behavior of the trained Convolutional Neural Network on well-known Twitter datasets for sentiment classification. The performance efficacy of the proposed model has been compared and contrasted with existing soft computing techniques, namely Naïve Bayesian, Support Vector Machines, k-Nearest Neighbor, Multilayer Perceptron and Decision Tree, using precision, accuracy, recall, and F-measure as key performance indicators. Methods: The majority of existing studies emphasize feature mining using lexical or syntactic feature extraction, where sentiment is often unequivocally articulated through words, emoticons and exclamation marks. Subsequently, CNN, a deep learning based soft computing technique, is used to improve the sentiment classifier's performance. Results: The empirical analysis validates that the proposed implementation of the CNN model outperforms the baseline supervised learning algorithms with an accuracy of around 87% to 88%. Conclusion: Statistical analysis validates that the proposed CNN model outperforms the existing techniques and thus can enhance the performance of sentiment classification in viability and coherency.
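The sketch below outlines a compact 1-D CNN for tweet classification of the kind described; the vocabulary size, sequence length and layer sizes are illustrative assumptions, and loading of the SemEval corpora is omitted.

```python
# Sketch: a small 1-D CNN for tweet sentiment classification. Hyperparameters
# are illustrative; SemEval data loading and preprocessing are not shown.
from tensorflow import keras
from tensorflow.keras import layers

VOCAB, MAX_LEN = 20000, 50   # hypothetical vocabulary size and tweet length cap

model = keras.Sequential([
    keras.Input(shape=(MAX_LEN,)),              # integer-encoded, padded tweets
    layers.Embedding(VOCAB, 128),
    layers.Conv1D(128, 5, activation="relu"),   # n-gram style feature maps
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),      # positive / neutral / negative
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would then look like:
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=5, batch_size=64)
# where X_* are integer-encoded, padded tweet sequences and y_* are class ids.
```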


Author(s):  
Mazen Ismaeel Ghareb ◽  
Gary Allen

This paper explores a new framework for calculating hybrid system metrics using software quality metrics for aspect-oriented and object-oriented programming. Software metrics for qualitative and quantitative measurement are a mix of static and dynamic software metrics. It is noticeable from the literature survey that, to date, most architectures have considered only evaluation focused on static metrics for aspect-oriented applications. In our work, we mainly discuss the collection of static parameters along with AspectJ-specific dynamic software metrics. The framework may provide a new direction for research in predicting software attributes, because dynamic metrics were previously ignored when evaluating quality attributes such as maintainability, reliability, and understandability of aspect-oriented software. Dynamic metrics grounded in the fundamentals of software engineering are as crucial for software analysis as static metrics. A similar concept is borrowed with the introduction of dynamic software metrics to aspect-oriented software development. Currently, we only propose a structure and model using static and dynamic parameters to evaluate the aspect-oriented approach; the proposed approach still needs to be validated.
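As an illustration of what such a hybrid metrics collection might record, the sketch below defines a per-module record combining static and AspectJ-specific dynamic measurements; the metric names are hypothetical, not the authors' exact set.

```python
# Sketch: a minimal record type for hybrid (static + dynamic) AspectJ metrics.
# Metric names are illustrative; dynamic values would come from runtime tracing.
from dataclasses import dataclass, asdict

@dataclass
class HybridAspectMetrics:
    module: str
    # static, source-level measurements
    advice_count: int
    pointcut_count: int
    intertype_declarations: int
    weighted_operations: int
    # dynamic, runtime measurements
    advice_executions: int
    dynamic_coupling: float

    def feature_vector(self):
        """Flatten the record into a feature vector for a prediction model."""
        values = asdict(self)
        values.pop("module")
        return list(values.values())

m = HybridAspectMetrics("LoggingAspect", 4, 3, 1, 12, 250, 0.37)
print(m.feature_vector())
```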


Weather forecasting and warning is the application of science and technology to predict the state of the weather at a future time for a given location. The emergence of adverse weather effects has endangered the lives of the general public in previous years; unpredicted floods and super cyclones in many places have created havoc. Government and private agencies are working on modelling this behaviour, but the task remains challenging and incomplete. However, the application of soft computing techniques in weather prediction now achieves significant performance. This research work presents a comparative study of soft computing techniques, namely the Multilayer Perceptron (MLP), Support Vector Machine (SVM) and J48 Decision Tree, for forecasting the weather of Delhi using ten years of data comprising temperature, dew point, humidity, air pressure, wind speed and visibility. The paper compares the above models with a proposed model, defined by a new algorithm, using four different error measures: Relative Absolute Error (RAE), Mean Absolute Error (MAE), Root Mean Squared Error (RMSE) and Root Relative Squared Error (RRSE). The performance could be further enhanced if text mining were applied in the proposed model.
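A minimal sketch of this kind of comparison is shown below, using scikit-learn regressors on synthetic weather data; RAE and RRSE are computed by hand since scikit-learn does not provide them directly, and the features and target are illustrative stand-ins for the Delhi dataset.

```python
# Sketch: comparing an MLP, an SVM and a decision tree on a numeric weather
# target (e.g. next-day temperature) with RAE, MAE, RMSE and RRSE.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

def rae(y_true, y_pred):   # Relative Absolute Error
    return np.sum(np.abs(y_true - y_pred)) / np.sum(np.abs(y_true - y_true.mean()))

def rrse(y_true, y_pred):  # Root Relative Squared Error
    return np.sqrt(np.sum((y_true - y_pred) ** 2) /
                   np.sum((y_true - y_true.mean()) ** 2))

rng = np.random.default_rng(0)
X = rng.random((500, 6))                                      # dew, humidity, pressure, ...
y = 10 + 15 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 1, 500)   # temperature
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "MLP": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "SVM": SVR(),
    "DecisionTree": DecisionTreeRegressor(random_state=0),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    p = m.predict(X_te)
    print(f"{name}: MAE={mean_absolute_error(y_te, p):.2f} "
          f"RMSE={np.sqrt(mean_squared_error(y_te, p)):.2f} "
          f"RAE={rae(y_te, p):.2f} RRSE={rrse(y_te, p):.2f}")
```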


Author(s):  
Isong Bassey

Object-oriented software (OOS) dominates the software development world today and thus has to be of high quality and maintainable. However, the growing size and complexity of such systems affect both the delivery of high-quality software products and their maintenance. From the perspective of software maintenance, software change impact analysis (SCIA) is used to avoid performing changes in the "dark". Unfortunately, OOS classes are not without faults, and existing SCIA techniques only predict the impact set. The intuition is that if a class is faulty and a change is implemented on it, the risk of software failure increases. To balance these concerns, maintenance should incorporate both impact and fault-proneness (FP) predictions. Therefore, this paper proposes an extended SCIA approach that incorporates both activities. The goal is to provide important information that can be used to focus verification and validation efforts on the high-risk classes that would probably cause severe failures when changes are made. This in turn improves maintenance and testing efficiency and preserves software quality. This study constructed a prediction model using software metrics and fault data from a NASA dataset in the public domain. The results obtained were analyzed and presented. Additionally, a tool called Class Change Recommender (CCRecommender) was developed to assist software engineers in computing the risks associated with making a change to any OOS class in the impact set.
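The sketch below illustrates how fault-proneness prediction could be combined with an impact set to rank risky classes; the random forest, metric columns, class names and data are hypothetical stand-ins for the paper's NASA-based model.

```python
# Sketch: rank classes in a change impact set by predicted fault-proneness.
# The classifier, metrics and class names are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_hist = rng.random((300, 4))                               # historical class metrics
y_hist = (X_hist[:, 0] + X_hist[:, 2] > 1.1).astype(int)    # 1 = faulty in the past

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_hist, y_hist)

# Impact set produced by change impact analysis, with current metric values
impact_set = {
    "OrderService":   [0.9, 0.2, 0.8, 0.1],
    "InvoiceBuilder": [0.3, 0.5, 0.2, 0.4],
    "AuthManager":    [0.7, 0.6, 0.7, 0.9],
}
risk = {cls: clf.predict_proba([m])[0, 1] for cls, m in impact_set.items()}
for cls, p in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{cls}: fault-proneness = {p:.2f}")
```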


2012 ◽  
Vol 3 (3) ◽  
pp. 401-405 ◽  
Author(s):  
Hamed J. Al-Fawareh

Software maintenance is the last phase of the software life cycle. The aim of software maintenance is to keep the software system in accordance with advances in software and hardware technology. In this paper, we discuss a maintenance system for object-oriented techniques. The paper therefore discusses the problems of object-oriented techniques in the maintenance environment. These problems include understanding object-oriented systems, complex dependencies in object-oriented systems, and the inheritance, polymorphism and dynamic binding problems that maintainers and developers commonly face. Finally, we describe the proposed object-oriented maintenance tool.


2019 ◽  
Vol 8 (4) ◽  
pp. 9793-9798

Soft computing techniques have become very popular nowadays, as they have replaced traditional statistical prediction mechanisms in weather forecasting, stock market prediction, crop prediction, solar energy prediction, and predictions in physics, chemistry, etc. Each model has its advantages and disadvantages. A hybrid soft computing model is a mechanism for designing models by exploiting the advantages of two or more models and suppressing their disadvantages. If the advantages of two or more models are combined in a new proposed model, prediction accuracy is enhanced and the error rate decreases. This paper designs a hybrid model that takes advantage of the J48 Decision Tree and Fuzzy Logic and uses it to predict the weather parameters of Delhi with better accuracy.
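A minimal sketch of one way such a hybrid could be wired together is given below, using scikit-learn's CART decision tree as a stand-in for J48 and hand-rolled triangular membership functions for the fuzzy stage; thresholds, features and data are illustrative only.

```python
# Sketch: fuzzy membership degrees added as features to a decision tree
# (CART as a stand-in for J48). All data and thresholds are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tri(x, a, b, c):
    """Triangular fuzzy membership function over [a, c], peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Synthetic daily readings and a synthetic 'rain tomorrow' label
rng = np.random.default_rng(0)
temp = rng.uniform(0, 45, 400)           # degrees Celsius
hum = rng.uniform(20, 100, 400)          # percent relative humidity
rain = ((hum > 70) & (temp < 30)).astype(int)

# Fuzzy stage: degrees of membership in 'warm' and 'humid' become extra features
warm = tri(temp, 10, 25, 40)
humid = tri(hum, 40, 70, 100)
X = np.column_stack([temp, hum, warm, humid])

# Tree stage
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X[:300], rain[:300])
print("test accuracy:", tree.score(X[300:], rain[300:]))
```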


Effective software maintainability is one of the most significant and challenging concerns in the field of component-based software. Several maintainability models have been proposed by researchers to reduce maintenance cost and to improve the quality and life span of the software product. The proposed model assists software designers in developing maintainable software. This paper discusses a maintainability model that selects four crucial factors that strongly affect the maintainability of component-based software systems. Soft computing techniques are employed to demonstrate the strong correlation of these factors with maintainability. MATLAB's Fuzzy Logic Toolbox is used to predict the maintainability level of a component (such as Excellent, Fair, Good, Bad and Worst). Data generated by the fuzzy model are provided as input to an artificial neural network model. Experimental results show a mean absolute error (MAE) of 0.028 and a relative error (RE) of 0.045. To further improve the performance of the model, a neuro-fuzzy tool was employed. With the self-learning capability of this tool, the MAE and RE improved to 0.0029 and 0.039 respectively. This means the model was sound enough to provide satisfactory outcomes in comparison to the neural network.
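The sketch below illustrates the second stage described above, feeding fuzzy-model output into a neural network and scoring it with MAE and relative error; the fuzzy stage is approximated by a simple weighted aggregation, and the factor names, weights and data are assumptions.

```python
# Sketch: train an ANN on data produced by a (stand-in) fuzzy maintainability
# model and report MAE and relative error. All values are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
factors = rng.random((300, 4))                 # four maintainability factors in [0, 1]
weights = np.array([0.35, 0.25, 0.25, 0.15])   # illustrative factor weights
fuzzy_output = factors @ weights               # stand-in for the fuzzy model's output

X_tr, X_te, y_tr, y_te = train_test_split(factors, fuzzy_output,
                                          test_size=0.3, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
ann.fit(X_tr, y_tr)
pred = ann.predict(X_te)

mae = np.mean(np.abs(y_te - pred))
re = np.mean(np.abs(y_te - pred) / np.abs(y_te))   # relative error
print(f"MAE={mae:.4f}  RE={re:.4f}")
```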


Author(s):  
Shashi Bhushan

This paper presents an enhanced system for text-based personality identification using soft computing techniques. The model designed in this work analyzes blogs or input text and classifies the personality into five major categories: Neuroticism, Extraversion, Openness, Conscientiousness and Agreeableness. The blog or text is first passed through a POS tagger, and then a feature vector matrix (FVM) is generated according to the attributes of the personality chart. Each column of the FVM is calculated in its own domain, which improves the final result of personality identification. The results of the proposed model improve on similar work by other researchers [1, 2, 3].
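A sketch of the front end of such a pipeline is shown below: POS-tagging a text snippet with NLTK and building a small feature vector from tag-group frequencies. The mapping from these features to the five traits is the paper's contribution and is not reproduced; the tag groups are illustrative, and NLTK resource names can vary slightly between versions.

```python
# Sketch: POS-tag a text with NLTK and build relative-frequency features.
# Tag groups are illustrative, not the paper's feature set.
from collections import Counter
import nltk

# Resource names may differ slightly between NLTK versions
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def pos_feature_vector(text):
    """Relative frequencies of a few coarse POS groups, one value per group."""
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    counts = Counter(tags)
    total = max(len(tags), 1)
    groups = {                                # illustrative tag groups only
        "pronouns":   ["PRP", "PRP$"],
        "adjectives": ["JJ", "JJR", "JJS"],
        "adverbs":    ["RB", "RBR", "RBS"],
        "verbs":      ["VB", "VBD", "VBG", "VBN", "VBP", "VBZ"],
    }
    return {g: sum(counts[t] for t in ts) / total for g, ts in groups.items()}

print(pos_feature_vector("I really love exploring new ideas and meeting people."))
```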

