Performance Evaluation of Pseudo Code with Weka for Accuracy Calculation

2019 ◽  
Vol 8 (4) ◽  
pp. 7818-7823

Software testing is a fundamental and essential step of the software development life cycle, used to detect defects in software and then fix them. For some systems, the reliability of data transmission, or the quality of proper processing, maintenance, and retrieval of information on a server, can be tested. Accuracy is also one of the factors commonly used by the Joint Interoperability Test Command as a criterion for assessing interoperability. According to our research, this is the first study of computer fault prediction and accuracy that focuses on the use of the PROMISE repository datasets. Several PROMISE datasets are used to compare pseudo code (Python) with actual software (WEKA); software metrics and machine learning methods are effective for computer fault prediction and accuracy measurement.
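As a minimal illustration of the accuracy measure this abstract compares across tools (not the paper's actual pipeline), accuracy can be computed in plain Python as correct predictions over total predictions; the module labels below are invented:

```python
# Hypothetical sketch: classification accuracy for a fault-prediction
# model, computed as (correctly classified modules) / (total modules).
def accuracy(y_true, y_pred):
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# Toy labels: 1 = module faulty, 0 = not faulty (invented data)
y_true = [0, 1, 0, 1, 0, 1, 0, 1]
y_pred = [0, 1, 0, 0, 0, 1, 1, 1]  # two misclassifications
print(f"accuracy = {accuracy(y_true, y_pred):.3f}")  # 6/8 = 0.750
```

Tools such as WEKA report this same ratio (as "Correctly Classified Instances") alongside other evaluation statistics.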

2021 ◽  
pp. 105-116
Author(s):  
A. M. KOZIN ◽  
A. D. LYKOV ◽  
I. A. VYAZANKIN ◽  
A. S. VYAZANKIN ◽  
...  

The "Middle Atmosphere" Regional Information and Analytic Center (Central Aerological Observatory) develops algorithms for analyzing the quality of aerological data based on machine learning methods. Different approaches to data preparation are described, examples of data rejected using standard approaches are given, and ways to develop and improve the quality of the aerological information transmitted to the WMO international network are outlined.


2021 ◽  
Vol 111 (09) ◽  
pp. 650-653
Author(s):  
Rainer Müller ◽  
Anne Blum ◽  
Steffen Klein ◽  
Tizian Schneider ◽  
Andreas Schütze ◽  
...  

In this paper, a joining process using sensitive robotics is introduced, in which an in-process leak test is performed at the same time using machine learning methods. Complex interactions in the data are extracted and information about the quality of a product to be assembled is obtained. By combining a joining and a testing process, the added value of the individual processes is increased, which eliminates the need for time-consuming end-of-line testing.


2014 ◽  
Vol 2014 ◽  
pp. 1-15 ◽  
Author(s):  
Yeresime Suresh ◽  
Lov Kumar ◽  
Santanu Ku. Rath

Experimental validation of software metrics in fault prediction for object-oriented methods using statistical and machine learning methods is necessary. Through this validation process, the quality of the software product in a software organization is ensured. Object-oriented metrics play a crucial role in predicting faults. This paper examines the application of linear regression, logistic regression, and artificial neural network methods for software fault prediction using Chidamber and Kemerer (CK) metrics. Here, fault is considered the dependent variable and the CK metric suite the independent variables. Statistical methods such as linear regression and logistic regression, and machine learning methods such as the neural network (and its different forms), are applied to detect faults associated with the classes. The comparison approach was applied to a case study, namely Apache integration framework (AIF) version 1.6. The analysis highlights the significance of the weighted methods per class (WMC) metric for fault classification, and also shows that the hybrid approach of the radial basis function network obtained a better fault prediction rate than the other three neural network models.
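The setup described here (fault-proneness as dependent variable, CK metrics as independent variables) can be sketched with a minimal logistic-regression fit; this is not the paper's implementation, and the metric values and learning-rate settings below are invented for illustration:

```python
# Hypothetical sketch: logistic regression with CK metrics (WMC, CBO, RFC)
# as independent variables and fault-proneness as the dependent variable.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Plain stochastic gradient descent on the logistic loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Toy CK metric vectors [WMC, CBO, RFC], scaled to [0, 1] (invented data)
X = [[0.1, 0.2, 0.1], [0.9, 0.7, 0.8], [0.2, 0.1, 0.3],
     [0.8, 0.9, 0.7], [0.15, 0.25, 0.2], [0.85, 0.8, 0.9]]
y = [0, 1, 0, 1, 0, 1]  # 1 = faulty class

w, b = fit_logistic(X, y)
pred = [int(sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) > 0.5)
        for xi in X]
print(pred)
```

On this cleanly separable toy data the fitted model recovers the training labels; real CK-metric datasets are noisier, which is why the paper compares several model families.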


Author(s):  
Raed Shatnawi

BACKGROUND: Fault data is vital to predicting fault-proneness in large systems. Predicting faulty classes helps in allocating the appropriate testing resources for future releases. However, current fault data face challenges such as unlabeled instances and data imbalance. These challenges degrade the performance of the prediction models. Data imbalance arises because the majority of classes are labeled as not faulty whereas a minority of classes are labeled as faulty. AIM: The research proposes to improve fault prediction using software metrics in combination with threshold values. Statistical techniques are proposed to improve the quality of the datasets and therefore the quality of the fault prediction. METHOD: Threshold values of object-oriented metrics are used to label classes as faulty in order to improve the fault prediction models. The resulting datasets are used to build prediction models using five machine learning techniques. The use of threshold values is validated on ten large object-oriented systems. RESULTS: Models are built for the datasets with and without the use of thresholds. The combination of thresholds with machine learning significantly improved the fault prediction models for all five classifiers. CONCLUSION: Threshold values can be used to label software classes as fault-prone and can improve machine learners in predicting fault-prone classes.
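The labeling step this abstract describes can be sketched in a few lines: a class is marked fault-prone when a metric exceeds its threshold. The threshold values and metric readings below are hypothetical, not the ones validated in the paper:

```python
# Hypothetical sketch: labeling classes as fault-prone via
# object-oriented metric thresholds (threshold values invented here).
THRESHOLDS = {"WMC": 20, "CBO": 9, "RFC": 40}

def label_fault_prone(metrics):
    """Label a class faulty (1) if any metric exceeds its threshold."""
    return int(any(metrics[m] > t for m, t in THRESHOLDS.items()))

classes = [
    {"WMC": 5,  "CBO": 3,  "RFC": 12},   # all metrics below thresholds
    {"WMC": 34, "CBO": 4,  "RFC": 25},   # WMC exceeds 20
    {"WMC": 8,  "CBO": 11, "RFC": 30},   # CBO exceeds 9
]
labels = [label_fault_prone(c) for c in classes]
print(labels)  # [0, 1, 1]
```

The resulting labels would then serve as the target variable for the machine learning classifiers, replacing or augmenting scarce historical fault data.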


2021 ◽  
Vol 28 (1) ◽  
pp. 38-51
Author(s):  
Petr D. Borisov ◽  
Yury V. Kosolapov

Obfuscation is used to protect programs from analysis and reverse engineering. There are theoretically effective and resistant obfuscation methods, but most of them are not implemented in practice yet. The main reasons are large overhead for the execution of obfuscated code and the limitation of application only to a specific class of programs. On the other hand, a large number of obfuscation methods have been developed that are applied in practice. The existing approaches to the assessment of such obfuscation methods are based mainly on the static characteristics of programs. Therefore, the comprehensive (taking into account the dynamic characteristics of programs) justification of their effectiveness and resistance is a relevant task. It seems that such a justification can be made using machine learning methods, based on feature vectors that describe both static and dynamic characteristics of programs. In this paper, it is proposed to build such a vector on the basis of characteristics of two compared programs: the original and obfuscated, original and deobfuscated, obfuscated and deobfuscated. In order to obtain the dynamic characteristics of the program, a scheme based on a symbolic execution is constructed and presented in this paper. The choice of the symbolic execution is justified by the fact that such characteristics can describe the difficulty of comprehension of the program in dynamic analysis. The paper proposes two implementations of the scheme: extended and simplified. The extended scheme is closer to the process of analyzing a program by an analyst, since it includes the steps of disassembly and translation into intermediate code, while in the simplified scheme these steps are excluded. In order to identify the characteristics of symbolic execution that are suitable for assessing the effectiveness and resistance of obfuscation based on machine learning methods, experiments with the developed schemes were carried out. 
Based on the obtained results, a set of suitable characteristics is determined.


2021 ◽  
Vol 73 (1) ◽  
pp. 126-133
Author(s):  
B.S. Akhmetov ◽  
D.V. Isaykin ◽  
М.B. Bereke ◽  
...  

The article presents the development of a methodology for changing the resolution of images (RI) obtained from CCTV cameras in railway transport. The research was carried out by applying machine learning methods (MLM). This approach made it possible to expand the functionality of the MLM. In particular, it is proposed to carry out the oversampling process with a target coefficient of information content for the image frames. This coefficient is applicable both for increasing and for decreasing RI. This should provide high-quality resampling and, at the same time, reduce the training time for neural-like structures (NLS). The proposed solutions are characterized by a reduction in the computing resources required for such a procedure.


2021 ◽  
Vol 129 ◽  
pp. 09001
Author(s):  
Meseret Yihun Amare ◽  
Stanislava Simonova

Research background: In this era of globalization, data growth in research and educational communities has improved analysis accuracy, benefiting dropout detection, academic status prediction, and trend analysis. However, analysis accuracy is low when the quality of educational data is incomplete. Moreover, current approaches to dropout prediction cannot utilize all available sources. Purpose of the article: This article aims to develop a prediction model for student dropout using machine learning techniques. Methods: The study used machine learning methods to identify early dropouts among students during their study. The performance of different machine learning methods was evaluated using accuracy, precision, support, and f-score. The algorithm that best suited the datasets on these performance measures was used to create the final prediction model. Findings & value added: This study contributes to tackling the current global challenge of student dropout. The developed prediction model allows higher education institutions to target students who are likely to drop out and to intervene in time, improving retention rates and the quality of education. It can also help institutions plan resources in advance for the coming academic semester and allocate them appropriately.
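The evaluation measures named in the Methods section can be computed directly from a confusion matrix; a minimal sketch (with invented dropout labels, and with recall shown alongside since it enters the F-score) follows:

```python
# Hypothetical sketch: evaluating a dropout-prediction model with
# accuracy, precision, recall, and F-score (labels invented here).
def evaluate(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return {"accuracy": acc, "precision": prec, "recall": rec, "f1": f1}

# Toy labels: 1 = student dropped out, 0 = retained
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(evaluate(y_true, y_pred))
```

Comparing candidate algorithms on these per-class measures, rather than accuracy alone, matters here because dropout is typically the minority class.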

