Dressing Tool Condition Monitoring through Impedance-Based Sensors: Part 2—Neural Networks and K-Nearest Neighbor Classifier Approach

Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4453 ◽  
Author(s):  
Pedro Junior ◽  
Doriana D’Addona ◽  
Paulo Aguiar ◽  
Roberto Teti

This paper presents an approach for impedance-based sensor monitoring of dressing tool condition in grinding using the electromechanical impedance (EMI) technique. The method was introduced in Part 1 of this work; the purpose of this paper (Part 2) is to achieve an optimal selection of the excitation frequency band based on multi-layer neural networks (MLNN) and a k-nearest neighbor (k-NN) classifier. The proposed approach was validated with dressing tool condition information obtained from the monitoring of experimental dressing tests with two industrial stationary single-point dressing tools. Moreover, representative damage indices for diverse damage cases, obtained from impedance signatures at different frequency bands, were taken into account for MLNN data processing. The intelligent system was able to select the most damage-sensitive features based on the optimal frequency band. The best models showed an overall error lower than 2%, thus robustly contributing to the efficient automation of grinding and dressing operations. The promising results of this study support the EMI-based sensor monitoring approach to fault diagnosis in dressing operations and its effective implementation for industrial grinding process automation.
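As a rough illustration of the band-selection idea, the sketch below cross-validates a k-NN classifier on per-band damage indices and keeps the band with the highest accuracy; the band labels, sample counts, and random data are hypothetical and not taken from the paper (which additionally employs MLNN models).

```python
# Illustrative sketch (not the authors' code): selecting the most
# damage-sensitive frequency band by cross-validating a k-NN classifier
# on per-band damage indices (e.g., RMSD of impedance signatures).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: damage indices for 3 candidate frequency bands,
# 60 dressing-test samples each, labeled 0 = "new tool", 1 = "worn tool".
bands = {
    "20-40 kHz": rng.normal(0.0, 1.0, (60, 4)),
    "40-60 kHz": rng.normal(0.0, 1.0, (60, 4)),
    "60-80 kHz": rng.normal(0.0, 1.0, (60, 4)),
}
y = rng.integers(0, 2, 60)

scores = {}
for name, X in bands.items():
    clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
    scores[name] = cross_val_score(clf, X, y, cv=5).mean()

best_band = max(scores, key=scores.get)
print(f"most damage-sensitive band: {best_band} "
      f"(CV accuracy {scores[best_band]:.2f})")
```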

10.29007/5gzr ◽  
2018 ◽  
Author(s):  
Cezary Kaliszyk ◽  
Josef Urban

Two complementary AI methods are used to improve the strength of the AI/ATP service for proving conjectures over the HOL Light and Flyspeck corpora. First, several schemes for frequency-based feature weighting are explored in combination with a distance-weighted k-nearest-neighbor classifier. This results in a 16% improvement (from 39.0% to 45.5% of Flyspeck problems solved) in the overall strength of the service when using 14 CPUs and 30 seconds. The best premise-selection/ATP combination is improved from 24.2% to 31.4%, i.e., by 30%. A smaller improvement is obtained by evolving targeted E prover strategies on two particular premise selections, using the Blind Strategymaker (BliStr) system. This raises the performance of the best AI/ATP method from 31.4% to 34.9%, i.e., by 11%, and raises the current 14-CPU power of the service to 46.9%.
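A minimal sketch of the first method, assuming bag-of-symbol feature vectors for known facts together with the premises used in their proofs; the IDF-style weighting, helper names, and toy data are illustrative and not the actual HOL Light/Flyspeck service code.

```python
# Illustrative sketch (not the HOL Light/Flyspeck pipeline): IDF-style
# frequency-based feature weighting combined with distance-weighted
# k-nearest-neighbor ranking of premises for a new conjecture.
import numpy as np

def idf_weights(feature_matrix):
    """Weight each symbol/term feature by log(N / document frequency)."""
    n_facts = feature_matrix.shape[0]
    df = np.maximum(feature_matrix.astype(bool).sum(axis=0), 1)
    return np.log(n_facts / df)

def knn_premises(query, facts, used_premises, k=3):
    """Rank premises by distance-weighted votes of the k nearest facts."""
    w = idf_weights(facts)
    dists = np.linalg.norm((facts - query) * w, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = {}
    for i in nearest:
        for p in used_premises[i]:
            votes[p] = votes.get(p, 0.0) + 1.0 / (1.0 + dists[i])
    return sorted(votes, key=votes.get, reverse=True)

# Toy example: 4 known facts over 5 symbol features, each with the
# premises used in its proof (hypothetical theorem names).
facts = np.array([[1, 0, 1, 0, 0],
                  [1, 1, 0, 0, 0],
                  [0, 0, 1, 1, 0],
                  [0, 0, 0, 1, 1]], dtype=float)
used = [["REAL_LE_TRANS"], ["REAL_LE_TRANS", "ADD_SYM"],
        ["SUBSET_TRANS"], ["IN_UNION"]]
print(knn_premises(np.array([1, 0, 1, 0, 0.0]), facts, used, k=2))
```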


2020 ◽  
Author(s):  
Daniel B Hier ◽  
Jonathan Kopel ◽  
Steven U Brint ◽  
Donald C Wunsch II ◽  
Gayla R Olbricht ◽  
...  

Objective: Neurologists lack a metric for measuring the distance between neurological patients. When neurological signs and symptoms are represented as neurological concepts from a hierarchical ontology and neurological patients are represented as sets of concepts, distances between patients can be represented as inter-set distances. Methods: We converted the neurological signs and symptoms from 721 published neurology cases into sets of concepts with corresponding machine-readable codes. We calculated inter-concept distances based on a hierarchical ontology, and we calculated inter-patient distances by semantic weighted bipartite matching. We evaluated the accuracy of a k-nearest neighbor classifier in allocating patients into 40 diagnostic classes. Results: Within a given diagnosis, mean patient distance differed by diagnosis, suggesting that across diagnoses there are differences in how similar patients are to other patients with the same diagnosis. The mean distance from one diagnosis to another differed by diagnosis, suggesting that diagnoses differ in their proximity to other diagnoses. Utilizing a k-nearest neighbor classifier and inter-patient distances, the risk of misclassification differed by diagnosis. Conclusion: If signs and symptoms are converted to machine-readable codes and patients are represented as sets of these codes, patient distances can be computed as an inter-set distance. These patient distances give insights into how homogeneous patients are within a diagnosis (stereotypy), the distance between different diagnoses (proximity), and the risk of diagnosis misclassification (diagnostic error).
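A minimal sketch of the inter-patient distance computation, assuming a toy inter-concept distance in place of the ontology-based one; the concept codes, depths, and helper names are hypothetical, and only the weighted bipartite-matching step mirrors the method described above.

```python
# Illustrative sketch (not the authors' implementation): computing an
# inter-patient distance by minimum-cost bipartite matching of concept
# codes, then averaging the matched inter-concept distances.
import numpy as np
from scipy.optimize import linear_sum_assignment

def patient_distance(concepts_a, concepts_b, concept_dist):
    """Semantic bipartite matching: pair each concept of patient A with
    one of patient B so the summed inter-concept distance is minimal."""
    cost = np.array([[concept_dist(a, b) for b in concepts_b]
                     for a in concepts_a])
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum() / len(rows)

# Toy inter-concept distance over hypothetical concept codes, based on
# made-up ontology depths; a real system would derive this from the
# hierarchical ontology itself.
depth = {"C0001": 4, "C0002": 4, "C0003": 3, "C0004": 2}
concept_dist = lambda a, b: abs(depth[a] - depth[b]) + (0 if a == b else 1)

patient_1 = ["C0001", "C0003"]
patient_2 = ["C0002", "C0004"]
print(patient_distance(patient_1, patient_2, concept_dist))
```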


2017 ◽  
Vol 9 (1) ◽  
pp. 1-9
Author(s):  
Fandiansyah Fandiansyah ◽  
Jayanti Yusmah Sari ◽  
Ika Putri Ningrum

Face recognition is one of the biometric systems most commonly used for individual recognition, for example in attendance machines or access control. This is because the face is the most visible part of human anatomy and serves as the first distinguishing factor of a human being. Feature extraction and classification are the key to face recognition, as they are to any pattern classification task. In this paper, we describe a face recognition method based on Linear Discriminant Analysis (LDA) and a k-Nearest Neighbor classifier. LDA is used for feature extraction; it directly extracts the proper features from image matrices with the objective of maximizing between-class variation and minimizing within-class variation. The features of a testing image are compared to the features of the database images using the k-Nearest Neighbor classifier. The experiments in this paper were performed using 66 face images of 22 different people. The experimental results show that the recognition accuracy is up to 98.33%. Index Terms—face recognition, k-nearest neighbor, linear discriminant analysis.
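A minimal sketch of the LDA-plus-k-NN pipeline using scikit-learn; the Olivetti faces dataset and the single-neighbor setting are stand-ins for the authors' 66-image, 22-person dataset.

```python
# Illustrative sketch (not the paper's code): LDA feature extraction
# followed by k-NN classification on a face dataset.
from sklearn.datasets import fetch_olivetti_faces
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

faces = fetch_olivetti_faces()  # 400 images of 40 people, 64x64 pixels
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target,
    random_state=0)

# LDA projects images onto at most (n_classes - 1) discriminant axes,
# maximizing between-class and minimizing within-class variation;
# k-NN then matches each test image to its nearest training face.
model = make_pipeline(LinearDiscriminantAnalysis(),
                      KNeighborsClassifier(n_neighbors=1))
model.fit(X_train, y_train)
print(f"recognition accuracy: {model.score(X_test, y_test):.2%}")
```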

