A Survey on Deep Learning Techniques Used for Quality Process

Author(s):  
Vanyashree Mardi ◽  
Naresh E. ◽  
Vijaya Kumar B. P.

In the current era, software development and software quality have become extensively important for implementing real-world software applications and enhancing software functionality. Moreover, early prediction of expected error and fault levels in the quality process is critical to the software development process. Deep learning techniques are among the most appropriate methods for this problem, and this chapter carries out an extensive systematic survey of a variety of deep learning techniques used in the software quality process, along with a hypothesis justification for each of the proposed solutions. Deep learning and machine learning techniques are considered the most suitable systems for software quality prediction. Deep learning is a computational model made up of multiple hidden layers of analysis used to represent information so that researchers can better understand complex data problems.

Author(s):  
Saumendra Pattnaik ◽  
Binod Kumar Pattanayak ◽  
Srikanta Patnaik

In the global market, the value of software quality is a major factor for software industries. In a majority of cases, software quality needs to be quantified and measured. Previously, many models such as the ISO model, the McCall model, and the Boehm model have been used to quantify the parameters that determine software quality. Software quality can be judged by analyzing all the characteristics of the product. In this article, a new pragmatic model, a modification of the existing ISO model, is proposed to quantify the software quality factors more precisely. Because of the nature of the attributes that determine software quality, a fuzzy logic-based approach is considered a better technique among the various existing machine learning techniques. Here, the Simulink tool is used to construct the running model, which helps compute the net quality of the software.
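The abstract does not give the model's membership functions or weights, and the authors work in Simulink; still, the core fuzzy idea can be illustrated with a minimal, self-contained Python sketch. The attribute scores, weights, and triangular memberships below are hypothetical, purely for illustration:

```python
import numpy as np

def tri(x, a, b, c):
    # Triangular membership: 0 outside [a, c], peaking at b.
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def fuzzify(score):
    # Overlapping low/medium/high fuzzy sets over a 0-10 attribute scale.
    return {"low": tri(score, -5, 0, 5),
            "medium": tri(score, 0, 5, 10),
            "high": tri(score, 5, 10, 15)}

# Hypothetical attribute scores and weights (not from the paper).
scores  = {"reliability": 7.5, "usability": 6.0, "maintainability": 8.2}
weights = {"reliability": 0.4, "usability": 0.3, "maintainability": 0.3}
levels  = {"low": 2.0, "medium": 5.0, "high": 8.0}  # crisp value per fuzzy set

def defuzzify(memberships):
    # Weighted-average (centroid-style) defuzzification to a crisp score.
    num = sum(mu * levels[name] for name, mu in memberships.items())
    return num / sum(memberships.values())

net_quality = sum(weights[a] * defuzzify(fuzzify(s)) for a, s in scores.items())
print(f"Net software quality: {net_quality:.2f} / 10")
```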


2020 ◽  
Vol 11 (2) ◽  
pp. 20-40
Author(s):  
Somya Goyal ◽  
Pradeep Kumar Bhatia

Software quality prediction is one of the most challenging tasks in the development and maintenance of software. Machine learning (ML) is being widely incorporated for predicting the quality of a final product in the early development stages of the software development life cycle (SDLC). An ML prediction model uses software metrics and faulty data from previous projects to detect high-risk modules for future projects, so that testing efforts can be targeted to those specific 'risky' modules. Hence, ML-based predictors contribute to detecting development anomalies early and inexpensively and ensure the timely delivery of a successful, failure-free, high-quality software product within budget. This article compares 30 software quality prediction models (5 techniques × 6 datasets) built on five ML techniques: artificial neural networks (ANN), support vector machines (SVM), decision trees (DT), k-nearest neighbors (KNN), and naïve Bayes classifiers (NBC), using six datasets: CM1, KC1, KC2, PC1, JM1, and a combined one. These models exploit the predictive power of static code metrics, namely McCabe complexity metrics, for quality prediction. All thirty predictors are compared using the receiver operating characteristic (ROC) curve, area under the curve (AUC), and accuracy as performance evaluation criteria. The results show that the ANN technique is promising for accurate software quality prediction irrespective of the dataset used.
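A minimal sketch of this kind of comparison using scikit-learn is shown below. The feature matrix and labels are placeholders standing in for a real defect dataset such as CM1 or KC1; the classifier settings are illustrative defaults, not the paper's configurations:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score, accuracy_score

# Placeholder data: X holds McCabe/static-code metrics per module,
# y marks defective modules (1) vs. clean ones (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 21))        # 21 static code metrics (illustrative)
y = rng.integers(0, 2, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "DT":  DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "NBC": GaussianNB(),
}

for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)   # scale metrics, then fit
    pipe.fit(X_tr, y_tr)
    proba = pipe.predict_proba(X_te)[:, 1]        # scores for the ROC curve
    print(f"{name}: AUC={roc_auc_score(y_te, proba):.3f}, "
          f"acc={accuracy_score(y_te, pipe.predict(X_te)):.3f}")
```

Repeating this loop over each of the six datasets yields the 5 × 6 grid of predictors that the article evaluates.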


Mathematics ◽  
2020 ◽  
Vol 8 (12) ◽  
pp. 2258
Author(s):  
Madhab Raj Joshi ◽  
Lewis Nkenyereye ◽  
Gyanendra Prasad Joshi ◽  
S. M. Riazul Islam ◽  
Mohammad Abdullah-Al-Wadud ◽  
...  

Enhancement of cultural heritage such as historical images is crucial to safeguarding the diversity of cultures. Automated colorization of black-and-white images has been the subject of extensive research through computer vision and machine learning techniques. Our research addresses the problem of generating a plausible colored photograph from ancient, historical black-and-white images of Nepal using deep learning techniques without direct human intervention. Motivated by the recent success of deep learning techniques in image processing, a feed-forward, deep convolutional neural network (CNN) in combination with Inception-ResNetV2 is trained on sets of sample images using back-propagation to recognize the pattern in RGB and grayscale values. The trained neural network is then used to predict the a* and b* chroma channels given the grayscale L channel of test images. The CNN vividly colorizes images with the help of a fusion layer that accounts for local as well as global features. Two objective measures, namely mean squared error (MSE) and peak signal-to-noise ratio (PSNR), are employed for objective quality assessment between the estimated color image and its ground truth. The model is trained on a dataset we created of 1.2 K historical images, comprising old and ancient photographs of Nepal, each at 256 × 256 resolution. The loss (MSE), PSNR, and accuracy of the model are found to be 6.08%, 34.65 dB, and 75.23%, respectively. In addition to the training results, public acceptance or subjective validation of the generated images is assessed by means of a user study, in which the model achieves a naturalness score of 41.71% for its colorization results.
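As a concrete illustration of the evaluation step (not the authors' code), PSNR follows directly from the per-pixel MSE between the predicted and ground-truth color images; the sketch below uses random 256 × 256 images, the paper's training resolution, as stand-ins:

```python
import numpy as np

def psnr(pred, truth, peak=255.0):
    """Peak signal-to-noise ratio between two uint8 color images.

    PSNR = 10 * log10(peak^2 / MSE); higher means closer to ground truth.
    """
    pred = pred.astype(np.float64)
    truth = truth.astype(np.float64)
    mse = np.mean((pred - truth) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative stand-ins for a ground-truth image and a colorized estimate.
truth = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
pred = np.clip(truth + np.random.normal(0, 5, truth.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(pred, truth):.2f} dB")
```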


Vibration ◽  
2021 ◽  
Vol 4 (2) ◽  
pp. 341-356
Author(s):  
Jessada Sresakoolchai ◽  
Sakdirat Kaewunruen

Various techniques have been developed to detect railway defects, and machine learning is among the most popular. This unprecedented study applies deep learning, a branch of machine learning, to detect and evaluate the severity of combined rail defects. The combined defects in the study are settlement and dipped joint. Features used to detect and evaluate the severity of combined defects are axle-box accelerations simulated using a verified rolling stock dynamic behavior simulation called D-Track. A total of 1650 simulations are run to generate numerical data. The deep learning techniques used in the study are deep neural networks (DNN), convolutional neural networks (CNN), and recurrent neural networks (RNN). Simulated data are used in two ways: simplified data and raw data. Simplified data are used to develop the DNN model, while raw data are used to develop the CNN and RNN models. For simplified data, features are extracted from the raw data: the weight of the rolling stock, its speed, and three peak and bottom accelerations from two of its wheels, giving 14 features in total for developing the DNN model. For raw data, time-domain accelerations are used directly to develop the CNN and RNN models without processing or data extraction. Hyperparameter tuning via grid search is performed to ensure that the performance of each model is optimized. To detect the combined defects, the study proposes two approaches: the first uses one model to detect settlement and dipped joint together, and the second uses two models to detect settlement and dipped joint separately. The results show that the CNN models of both approaches provide the same accuracy of 99%, so one model is sufficient to detect settlement and dipped joint. To evaluate the severity of the combined defects, the study applies classification and regression: classification evaluates severity by categorizing defects into light, medium, and severe classes, while regression estimates the size of defects. From the study, the CNN model is suitable for evaluating dipped joint severity, with an accuracy of 84% and mean absolute error (MAE) of 1.25 mm, while the RNN model is suitable for evaluating settlement severity, with an accuracy of 99% and MAE of 1.58 mm.
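A minimal sketch of the grid-search tuning step for a DNN on the 14 simplified features might look like the following; the hyperparameter grid is hypothetical (the abstract does not list the search space), scikit-learn's MLP stands in for the paper's DNN, and the feature matrix is a placeholder for the D-Track simulation outputs:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder for the simulated D-Track features: rolling stock weight,
# speed, and three peak/bottom accelerations per wheel = 14 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1650, 14))       # one row per simulation run
y = rng.integers(0, 2, size=1650)     # 1 = combined defect present

pipe = Pipeline([("scale", StandardScaler()),
                 ("dnn", MLPClassifier(max_iter=2000, random_state=0))])

# Hypothetical grid, purely illustrative.
grid = {"dnn__hidden_layer_sizes": [(32,), (64, 32), (128, 64, 32)],
        "dnn__alpha": [1e-4, 1e-3],
        "dnn__learning_rate_init": [1e-3, 1e-2]}

search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X, y)
print("Best params:", search.best_params_)
print(f"Best CV accuracy: {search.best_score_:.3f}")
```

For the severity-estimation (regression) task, the same pattern applies with a regressor and an error-based scorer such as negative MAE.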


2021 ◽  
pp. 1-55
Author(s):  
Emma A. H. Michie ◽  
Behzad Alaei ◽  
Alvar Braathen

Generating an accurate model of the subsurface for the purpose of assessing the feasibility of a CO2 storage site is crucial. In particular, how faults are interpreted is likely to influence the predicted capacity and integrity of the reservoir, whether through identifying high-risk areas along the fault where fluid is likely to flow across it, or by assessing the reactivation potential of the fault under increased pressure, causing fluid to flow up the fault. New technologies allow users to interpret faults effortlessly and much more quickly, utilizing methods such as Deep Learning. These Deep Learning techniques use Neural Networks to allow end-users to compute areas where faults are likely to occur. Although these new technologies may be attractive because of the reduced interpretation time, it is important to understand the inherent uncertainties in their ability to predict accurate fault geometries. Here, we compare Deep Learning fault interpretation with manual fault interpretation and see distinct differences for faults where significant ambiguity exists due to poor seismic resolution at the fault: we observe increased irregularity when Deep Learning methods are used instead of conventional manual interpretation. This can result in significant differences between the resulting analyses, such as fault reactivation potential. Conversely, we observe that well-imaged faults show a close similarity between the resulting fault surfaces when either Deep Learning or manual fault interpretation methods are employed, and hence a close similarity between any attributes and fault analyses made.

