Predicting the Quality of High-power Connector Joints with Different Machine Learning Methods

Author(s):  
Elisabeth Birgit Schwarz ◽  
Fabian Bleier ◽  
Jean-Pierre Bergmann
2019 ◽  
Vol 8 (4) ◽  
pp. 7818-7823

Software testing is a fundamental and essential step of the software development life cycle: it identifies defects in software so that they can be fixed. For some systems, what can be tested is the reliability of data transmission or the quality of the processing, maintenance, and retrieval of information on a server. Accuracy is also a factor commonly used by the Joint Interoperability Test Command as a criterion for assessing interoperability. According to our research, this is the first investigation of computer fault prediction and accuracy that focuses on the use of the PROMISE repository datasets. Several PROMISE dataset experiments are compared between pseudo code (Python) and actual software (WEKA), using software metrics and machine learning methods that are effective for computer fault prediction and accuracy measurement.
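
As a rough illustration of the kind of pipeline the abstract compares, the sketch below trains a simple classifier on a PROMISE-style defect dataset with scikit-learn. The file name and the "defects" column are assumptions, not details taken from the paper.

```python
# Minimal sketch of software-fault prediction on a PROMISE-style dataset.
# Assumption: a CSV of numeric static code metrics with a boolean "defects"
# column (as in the NASA/PROMISE CM1/KC1 sets); adjust names to the real data.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

data = pd.read_csv("cm1.csv")                  # hypothetical file name
X = data.drop(columns=["defects"])             # software metrics (LOC, complexity, ...)
y = data["defects"].astype(int)                # 1 = module contained a fault

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```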


2021 ◽  
pp. 105-116
Author(s):  
A. M. KOZIN ◽  
◽  
A. D. LYKOV ◽  
I. A. VYAZANKIN ◽  
A. S. VYAZANKIN ◽  
...  

The “Middle Atmosphere” Regional Information and Analytic Center (Central Aerological Observatory) develops algorithms for analyzing the quality of aerological data based on machine learning methods. Different approaches to data preparation are described, examples of data rejected by the standard approaches are given, and ways to improve the quality of the aerological information transmitted to the WMO international network are outlined.
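
The abstract does not describe the algorithms themselves; purely as an illustration of machine-learning-based quality screening of sounding data, the sketch below flags anomalous radiosonde records with an isolation forest. The file and field names are hypothetical.

```python
# Illustrative only: flag suspect radiosonde records with an unsupervised
# outlier detector. Column names are hypothetical and not from the article.
import pandas as pd
from sklearn.ensemble import IsolationForest

soundings = pd.read_csv("soundings.csv")            # hypothetical input
features = soundings[["pressure", "temperature", "humidity"]]

detector = IsolationForest(contamination=0.01, random_state=0)
soundings["suspect"] = detector.fit_predict(features) == -1  # True = flagged

print(soundings["suspect"].sum(), "records flagged for manual review")
```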


2021 ◽  
Vol 111 (09) ◽  
pp. 650-653
Author(s):  
Rainer Müller ◽  
Anne Blum ◽  
Steffen Klein ◽  
Tizian Schneider ◽  
Andreas Schütze ◽  
...  

In this paper, a joining process using sensitive robotics is introduced, in which an in-process leak test is performed at the same time using machine learning methods. Complex interactions in the data are extracted and information about the quality of the product to be assembled is obtained. By combining a joining and a testing process, the added value of the individual processes is increased, which can eliminate the need for time-consuming end-of-line testing.
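
As an illustration of the general idea (not the authors' implementation), the sketch below extracts simple statistical features from force signals recorded during joining and classifies parts as leak-tight or not. The signal layout and the label encoding are assumptions.

```python
# Sketch of in-process quality classification from joining-process signals.
# Assumption: each part yields one 1-D force trace recorded by the robot,
# plus a leak-test label from a reference measurement.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def extract_features(signal: np.ndarray) -> np.ndarray:
    """Simple statistical features of one force/torque trace."""
    return np.array([signal.mean(), signal.std(), signal.min(),
                     signal.max(), np.percentile(signal, 90)])

def train_leak_classifier(signals, labels):
    """signals: list of 1-D arrays; labels: 1 = leak-tight, 0 = leaking (assumed)."""
    X = np.vstack([extract_features(s) for s in signals])
    y = np.asarray(labels)
    clf = GradientBoostingClassifier(random_state=0)
    score = cross_val_score(clf, X, y, cv=5).mean()   # cross-validated accuracy
    clf.fit(X, y)
    return clf, score
```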


2021 ◽  
Vol 28 (1) ◽  
pp. 38-51
Author(s):  
Petr D. Borisov ◽  
Yury V. Kosolapov

Obfuscation is used to protect programs from analysis and reverse engineering. Theoretically effective and resistant obfuscation methods exist, but most of them are not yet implemented in practice. The main reasons are the large execution overhead of obfuscated code and applicability only to a narrow class of programs. On the other hand, a large number of obfuscation methods that are applied in practice have been developed. The existing approaches to assessing such obfuscation methods are based mainly on the static characteristics of programs. A comprehensive justification of their effectiveness and resistance, one that also takes the dynamic characteristics of programs into account, is therefore a relevant task. It seems that such a justification can be made using machine learning methods, based on feature vectors that describe both static and dynamic characteristics of programs. In this paper, it is proposed to build such a vector from the characteristics of two compared programs: original and obfuscated, original and deobfuscated, or obfuscated and deobfuscated. To obtain the dynamic characteristics of a program, a scheme based on symbolic execution is constructed and presented. The choice of symbolic execution is justified by the fact that its characteristics can describe how difficult the program is to comprehend under dynamic analysis. The paper proposes two implementations of the scheme: extended and simplified. The extended scheme is closer to the way an analyst examines a program, since it includes disassembly and translation into intermediate code, while the simplified scheme omits these steps. To identify the symbolic-execution characteristics that are suitable for assessing the effectiveness and resistance of obfuscation with machine learning methods, experiments with the developed schemes were carried out. Based on the results, a set of suitable characteristics is determined.
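
To make the idea of a pairwise feature vector concrete, the sketch below combines a few hypothetical static and symbolic-execution characteristics of two programs (original vs. obfuscated) into one vector that could feed a classifier. The specific characteristics and their names are illustrative assumptions, not the authors' feature set.

```python
# Illustrative sketch: build a feature vector from the characteristics of two
# compared programs (e.g. original vs. obfuscated). Characteristic names are assumed.
from dataclasses import dataclass
import numpy as np

@dataclass
class ProgramProfile:
    instructions: int        # static: number of instructions
    basic_blocks: int        # static: number of basic blocks
    symbolic_paths: int      # dynamic: paths explored by symbolic execution
    solver_time_s: float     # dynamic: total SMT solver time

def pair_features(a: ProgramProfile, b: ProgramProfile) -> np.ndarray:
    """Relative change of each characteristic between the two programs."""
    pa = np.array([a.instructions, a.basic_blocks, a.symbolic_paths, a.solver_time_s], float)
    pb = np.array([b.instructions, b.basic_blocks, b.symbolic_paths, b.solver_time_s], float)
    return (pb - pa) / np.maximum(pa, 1e-9)

original = ProgramProfile(1200, 150, 35, 2.1)      # made-up numbers
obfuscated = ProgramProfile(4800, 610, 180, 19.4)
print(pair_features(original, obfuscated))          # one input row for an ML model
```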


2021 ◽  
Vol 73 (1) ◽  
pp. 126-133
Author(s):  
B.S. Akhmetov ◽  
◽  
D.V. Isaykin ◽  
М.B. Bereke ◽  
◽  
...  

The article presents the development of a methodology for changing the resolution of images obtained from CCTV cameras in railway transport. The research was carried out on the basis of machine learning methods (MLM), and this approach made it possible to expand the functionality of the MLM. In particular, it is proposed to carry out the resampling process with a target coefficient of information content of the image frames. This coefficient is applicable both for increasing and for decreasing image resolution. This should provide high-quality resampling and, at the same time, reduce the training time of neural-like structures (NLS). The proposed solutions are characterized by a reduction in the computing resources required for such a procedure.
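
The target "information content" coefficient is not specified in the abstract; as a plain illustration of resampling CCTV frames by a chosen scale factor, a minimal Pillow-based sketch follows. The file names and the choice of filter are assumptions.

```python
# Minimal resampling sketch: rescale a CCTV frame by a chosen coefficient.
# The coefficient here is a plain scale factor, not the article's information
# content coefficient, which the abstract does not define.
from PIL import Image

def resample(path: str, coefficient: float) -> Image.Image:
    frame = Image.open(path)
    new_size = (max(1, int(frame.width * coefficient)),
                max(1, int(frame.height * coefficient)))
    # LANCZOS gives good quality for both upscaling and downscaling
    return frame.resize(new_size, Image.LANCZOS)

# Example: halve the resolution of one frame (file name is hypothetical)
small = resample("frame_0001.png", 0.5)
small.save("frame_0001_small.png")
```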


2021 ◽  
Vol 129 ◽  
pp. 09001
Author(s):  
Meseret Yihun Amare ◽  
Stanislava Simonova

Research background: In this era of globalization, the growth of data in research and educational communities has improved analysis accuracy, benefiting dropout detection, academic status prediction, and trend analysis. However, analysis accuracy remains low when educational data are incomplete, and current approaches to dropout prediction do not fully utilize the available sources. Purpose of the article: This article aims to develop a model for predicting student dropout using machine learning techniques. Methods: The study used machine learning methods to identify students at risk of dropping out early during their studies. The performance of different machine learning methods was evaluated using accuracy, precision, support, and F-score. The algorithm that best suited the datasets under these performance measures was used to create the final prediction model. Findings & value added: This study contributes to tackling the current global challenge of students dropping out of their studies. The developed prediction model allows higher education institutions to target students who are likely to drop out and to intervene in a timely manner to improve retention rates and the quality of education. It can also help institutions plan resources in advance for the coming academic semester and allocate them appropriately.
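
The abstract names accuracy, precision, support, and F-score as evaluation measures; a hedged scikit-learn sketch of comparing candidate classifiers on a student dataset is shown below. The file name, the "dropout" label column, and the assumption of numeric features are illustrative, not taken from the article.

```python
# Sketch of comparing classifiers for dropout prediction and reporting
# precision, recall, F-score, support and accuracy. File and column names
# are hypothetical; features are assumed to be numeric already.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

data = pd.read_csv("students.csv")
X, y = data.drop(columns=["dropout"]), data["dropout"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

for name, model in [("logreg", LogisticRegression(max_iter=1000)),
                    ("random_forest", RandomForestClassifier(random_state=0))]:
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test)))
```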


2020 ◽  
Vol 66 (6) ◽  
pp. 2495-2522 ◽  
Author(s):  
Duncan Simester ◽  
Artem Timoshenko ◽  
Spyros I. Zoumpoulis

We investigate how firms can use the results of field experiments to optimize the targeting of promotions when prospecting for new customers. We evaluate seven widely used machine-learning methods using a series of two large-scale field experiments. The first field experiment generates a common pool of training data for each of the seven methods. We then validate the seven optimized policies provided by each method together with uniform benchmark policies in a second field experiment. The findings not only compare the performance of the targeting methods, but also demonstrate how well the methods address common data challenges. Our results reveal that when the training data are ideal, model-driven methods perform better than distance-driven methods and classification methods. However, the performance advantage vanishes in the presence of challenges that affect the quality of the training data, including the extent to which the training data captures details of the implementation setting. The challenges we study are covariate shift, concept shift, information loss through aggregation, and imbalanced data. Intuitively, the model-driven methods make better use of the information available in the training data, but the performance of these methods is more sensitive to deterioration in the quality of this information. The classification methods we tested performed relatively poorly. We explain the poor performance of the classification methods in our setting and describe how the performance of these methods could be improved. This paper was accepted by Matthew Shum, marketing.
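
One standard way to score a targeting policy with data from a randomized experiment is inverse-probability weighting: keep the customers whose random assignment matches the policy's recommendation and reweight by the assignment probability. The sketch below illustrates this idea only; the column names and the 50/50 assignment are assumptions, not the authors' evaluation procedure or experimental design.

```python
# Illustrative off-policy evaluation of a targeting policy from a randomized
# field experiment (not the paper's procedure). Assumed columns:
#   treated  - 1 if the customer was randomly sent the promotion
#   profit   - observed profit for that customer
#   p_treat  - probability of treatment in the experiment (e.g. 0.5)
import numpy as np
import pandas as pd

def policy_value(df: pd.DataFrame, recommend: np.ndarray) -> float:
    """IPW estimate of profit per customer if promotions were sent exactly
    to the customers the policy recommends."""
    match = (df["treated"].values == recommend)
    prop = np.where(df["treated"].values == 1, df["p_treat"], 1 - df["p_treat"])
    return float(np.mean(match * df["profit"].values / prop))

# Toy example with a trivial "target everyone" policy
toy = pd.DataFrame({"treated": [1, 0, 1, 0], "profit": [12.0, 3.0, 7.5, 4.0],
                    "p_treat": [0.5, 0.5, 0.5, 0.5]})
print(policy_value(toy, recommend=np.ones(len(toy), dtype=int)))
```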


Logistics ◽  
2020 ◽  
Vol 4 (4) ◽  
pp. 35
Author(s):  
Sidharth Sankhye ◽  
Guiping Hu

The rising popularity of smart factories and Industry 4.0 has made it possible to collect large amounts of data from production stages. Supervised machine learning methods such as classification can therefore viably predict product compliance quality from manufacturing data collected during production. Eliminating uncertainty through accurate prediction provides significant benefits at any stage in a supply chain, and early knowledge of product batch quality can save costs associated with recalls, packaging, and transportation. While there has been thorough research on predicting the quality of specific manufacturing processes, the adoption of classification methods to predict the overall compliance of production batches has not been extensively investigated. This paper aims to design machine learning-based classification methods for quality compliance and to validate the models via a case study of a multi-model appliance production line. The proposed classification model could achieve an accuracy of 0.99 and a Cohen's Kappa of 0.91 for the compliance quality of unit batches, which would enable implementation of a predictive model for compliance quality. The case study also highlights the importance of feature construction and dataset knowledge in training classification models.
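
The reported metrics (accuracy and Cohen's Kappa) can be computed directly with scikit-learn; a minimal sketch follows, with made-up label arrays purely to show the calls.

```python
# How accuracy and Cohen's Kappa of a batch-compliance classifier are
# typically computed (scikit-learn). The label arrays here are made up.
from sklearn.metrics import accuracy_score, cohen_kappa_score

y_true = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # 1 = compliant batch, 0 = non-compliant
y_pred = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]

print("accuracy:", accuracy_score(y_true, y_pred))
print("Cohen's kappa:", cohen_kappa_score(y_true, y_pred))
```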


Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1390
Author(s):  
Mohamed A. Kassem ◽  
Khalid M. Hosny ◽  
Robertas Damaševičius ◽  
Mohamed Meselhy Eltoukhy

Computer-aided systems for skin lesion diagnosis are a growing area of research, and researchers have recently shown increasing interest in developing computer-aided diagnosis systems. This paper aims to review, synthesize, and evaluate the quality of evidence for the diagnostic accuracy of such systems. The study discusses papers published in the last five years in the ScienceDirect, IEEE, and SpringerLink databases: 53 articles using traditional machine learning methods and 49 articles using deep learning methods. The studies are compared on the basis of their contributions, the methods used, and the results achieved. The work identifies the main challenges of evaluating skin lesion segmentation and classification methods, such as small datasets, ad hoc image selection, and racial bias.

