Short-Circuited Turn Fault Diagnosis in Transformers by Using Vibration Signals, Statistical Time Features, and Support Vector Machines on FPGA

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3598
Author(s):  
Jose R. Huerta-Rosales ◽  
David Granados-Lieberman ◽  
Arturo Garcia-Perez ◽  
David Camarena-Martinez ◽  
Juan P. Amezquita-Sanchez ◽  
...  

One of the most critical devices in an electrical system is the transformer. It is continuously subjected to electrical and mechanical stresses that can produce failures in its components and in other devices on the electrical network. Short-circuited turns (SCTs) are a common winding failure. This type of fault has been widely studied in the literature using the vibration signals produced by the transformer. Although promising results have been obtained, diagnosis is not a trivial task when different severity levels and the commonly encountered high noise levels are considered. This paper presents a methodology based on statistical time features (STFs) and support vector machines (SVMs) to diagnose a transformer under several SCT conditions. As STFs, 19 indicators are computed from the transformer vibration signals; the most discriminant features are then selected using Fisher score analysis, and linear discriminant analysis is used for dimensionality reduction. Finally, a support vector machine classifier carries out the diagnosis automatically. Once developed, the methodology is implemented on a field-programmable gate array (FPGA) to provide a system-on-a-chip solution. A modified transformer capable of emulating different SCT severities is employed to validate and test the methodology and its FPGA implementation. Results demonstrate the effectiveness of the proposal for diagnosing the transformer condition, achieving an accuracy of 96.82%.
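The Fisher-score feature-selection step described in the abstract can be sketched as follows. This is a generic Fisher-score computation (between-class scatter over within-class scatter, per feature) on synthetic two-class data, not the authors' implementation; the 19 STF indicators and the transformer vibration data are not reproduced here.

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher score per feature: between-class scatter over within-class scatter."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        nc = len(Xc)
        num += nc * (Xc.mean(axis=0) - mu) ** 2   # between-class term
        den += nc * Xc.var(axis=0)                # within-class term
    return num / den

# synthetic example: feature 0 separates the two classes, feature 1 is pure noise
rng = np.random.default_rng(0)
X0 = np.column_stack([rng.normal(0.0, 1.0, 50), rng.normal(0.0, 1.0, 50)])
X1 = np.column_stack([rng.normal(5.0, 1.0, 50), rng.normal(0.0, 1.0, 50)])
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

scores = fisher_scores(X, y)
print(scores.argmax())  # the discriminative feature (index 0) scores highest
```

Features are then ranked by score and only the top-ranked ones are passed on to the dimensionality-reduction and classification stages.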

2012 ◽  
Vol 8 (S295) ◽  
pp. 180-180
Author(s):  
He Ma ◽  
Yanxia Zhang ◽  
Yongheng Zhao ◽  
Bo Zhang

In this work, two different algorithms, Linear Discriminant Analysis (LDA) and Support Vector Machines (SVMs), are combined for the classification of unresolved sources from SDSS DR8 and UKIDSS DR8. The experimental result shows that this joint approach is effective for our case.


2020 ◽  
Vol 26 (3) ◽  
pp. 42-53
Author(s):  
Vuk Vranjkovic ◽  
Rastislav Struharik

In this paper, a hardware accelerator for sparse support vector machines (SVMs) is proposed. We believe that the proposed accelerator is the first of its kind. The accelerator is designed for use in field-programmable gate array (FPGA) systems. Additionally, a novel algorithm for pruning SVM models is developed. The pruned SVM model has a smaller memory footprint and can be processed faster than dense SVM models. In systems with memory-throughput, compute, or power constraints, such as edge computing, this can be a big advantage. Experiments on several standard datasets are conducted, the aim of which is to compare the efficiency of the proposed architecture and the developed algorithm against existing solutions. The results of the experiments reveal that the proposed hardware architecture and SVM pruning algorithm have superior characteristics compared to previous work in the field. A memory reduction from 3% to 85% is achieved, with a speed-up ranging from 1.17 to 7.92.
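A minimal sketch of the kind of pruning described above, assuming the usual kernel-SVM decision function f(x) = Σᵢ αᵢyᵢ K(xᵢ, x) + b: support vectors whose dual coefficients are nearly zero contribute little to the sum, so dropping them shrinks the model with minimal change to the decision. The support vectors, coefficients, and threshold below are hypothetical toy values, not the paper's algorithm.

```python
import numpy as np

def rbf(sv, x, gamma=0.5):
    """RBF kernel between each support vector and a single sample x."""
    return np.exp(-gamma * np.sum((sv - x) ** 2, axis=-1))

def decision(x, sv, coef, b, gamma=0.5):
    """Kernel SVM decision value: sum of coef_i * K(sv_i, x) plus bias."""
    return np.sum(coef * rbf(sv, x, gamma)) + b

# toy model: support vectors and dual coefficients (alpha_i * y_i)
sv = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.5], [3.0, 3.0]])
coef = np.array([1.2, -1.1, 0.001, -0.002])  # two nearly-zero terms
b = 0.1

# prune support vectors whose |coefficient| falls below a threshold
keep = np.abs(coef) > 0.01
sv_p, coef_p = sv[keep], coef[keep]

x = np.array([0.2, 0.1])
full = decision(x, sv, coef, b)
pruned = decision(x, sv_p, coef_p, b)
print(keep.sum(), np.sign(full) == np.sign(pruned))
```

Half the support vectors are removed here while the sign of the decision value, and hence the predicted class, is unchanged; this is the memory/speed trade-off a sparse accelerator exploits.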


2011 ◽  
Vol 291-294 ◽  
pp. 2089-2093
Author(s):  
Zheng Zhong Shi ◽  
Yi Jian Huang

To address the drawbacks of current methods for predicting the screening efficiency of a probability sieve, this paper proposes a prediction method based on higher-order spectrum (HOS) analysis and support vector machines (SVMs). First, a trispectrum model is built from the vibration signals; then a polynomial is fitted by the least-squares method to the data obtained from the reconstructed power spectrum. Finally, support vector machines predict the screening efficiency using the polynomial coefficients as the sample input. The results show that the relative errors are all less than 2.4% and the absolute errors are all less than 0.021, which is adequate for efficiency forecasting.
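The fit step, turning a spectrum into a small fixed-length feature vector of polynomial coefficients, can be illustrated with a least-squares polynomial fit. The spectrum samples below are synthetic stand-ins for the reconstructed power spectrum, and the polynomial degree is an assumption for illustration only.

```python
import numpy as np

# synthetic stand-in for reconstructed power-spectrum samples (frequency, power)
f = np.linspace(0.0, 1.0, 50)
p = 2.0 - 3.0 * f + 0.5 * f**2 + 0.01 * np.sin(40 * f)  # small ripple as noise

# least-squares polynomial fit; the coefficients become the SVM sample input
coeffs = np.polyfit(f, p, deg=2)  # highest-degree coefficient first
print(coeffs)  # close to [0.5, -3.0, 2.0]
```

The coefficient vector is a compact, fixed-size representation of the spectrum shape, which is what makes it convenient as an input sample for SVM-based regression.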


Author(s):  
Zuherman Rustam ◽  
Yasirly Amalia ◽  
Sri Hartini ◽  
Glori Stephani Saragih

Breast cancer is an abnormal, uncontrolled cell growth in the breast that forms a tumor. The tumor can be benign or malignant: a benign tumor is generally not dangerous to health and not cancerous, whereas a malignant tumor can be dangerous to health and cancerous. A specialist doctor diagnoses the patient and gives treatment based on whether the tumor is benign or malignant. Machine learning offers time efficiency in determining whether a cell is cancerous; the machine learns the pattern from the information in the dataset. Support vector machines and linear discriminant analysis are common methods used in cancer classification. In this study, linear discriminant analysis and support vector machines are compared in terms of accuracy, sensitivity, specificity, and F1-score to determine which method better classifies the breast cancer dataset. The results show that the support vector machine performs better than linear discriminant analysis, achieving an accuracy of 98.77%.
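The four evaluation metrics named above can all be computed from the counts of a binary confusion matrix. The sketch below uses hypothetical counts for a malignant-vs-benign classifier; they do not correspond to the study's dataset or its 98.77% result.

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity, specificity, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)      # recall on the positive (malignant) class
    specificity = tn / (tn + fp)      # recall on the negative (benign) class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# hypothetical confusion counts: 90 true malignant, 105 true benign, 5 errors
acc, sens, spec, f1 = metrics(tp=90, fp=2, fn=3, tn=105)
print(acc)  # 0.975
```

Reporting sensitivity and specificity alongside accuracy matters in this setting because the cost of a missed malignant tumor (a false negative) is much higher than that of a false alarm.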


2009 ◽  
pp. 261-293
Author(s):  
Constantine Kotropoulos ◽  
Ioannis Pitas

This chapter addresses both low- and high-level problems in visual speech processing and recognition. In particular, mouth region segmentation and lip contour extraction are addressed first. Next, visual speech recognition with parallel support vector machines and temporal Viterbi lattices is demonstrated on a small vocabulary task.


Author(s):  
Clyde Coelho ◽  
Aditi Chattopadhyay

This paper proposes a computationally efficient methodology for classifying damage in structural hotspots. Data collected from a sensor-instrumented lug joint subjected to fatigue loading was preprocessed using linear discriminant analysis (LDA) to extract features that are relevant for classification and to reduce the dimensionality of the data. The data is then reduced in the feature space by analyzing the structure of the mapped clusters and removing the data points that do not affect the construction of the interclass separating hyperplanes. The reduced data set is used to train a support vector machine (SVM) based classifier, and the classification results are compared to those obtained when the entire data set is used for training. To further improve the efficiency of the classification scheme, the SVM classifiers are arranged in a binary-tree format to reduce the number of comparisons required. The experimental results show that the data reduction does not reduce the ability of the classifier to distinguish between classes while providing a nearly fourfold decrease in the amount of training data processed.
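The binary-tree arrangement mentioned above can be sketched with stub decision functions standing in for trained SVMs: each internal node routes a sample toward one half of the remaining class set, so a 4-class decision takes 2 comparisons instead of the 6 needed by pairwise voting. The class names and split rules below are hypothetical.

```python
class Node:
    """Tree node: either an internal node holding a binary classifier, or a leaf label."""
    def __init__(self, classifier=None, left=None, right=None, label=None):
        self.classifier, self.left, self.right, self.label = classifier, left, right, label

def classify(node, x):
    """Walk the tree; each classifier call routes x left (<= 0) or right (> 0)."""
    comparisons = 0
    while node.label is None:
        node = node.left if node.classifier(x) <= 0 else node.right
        comparisons += 1
    return node.label, comparisons

# stub binary classifiers standing in for trained SVM decision functions
root = Node(classifier=lambda x: x[0],             # splits {A, B} from {C, D}
            left=Node(classifier=lambda x: x[1],   # splits A from B
                      left=Node(label="A"), right=Node(label="B")),
            right=Node(classifier=lambda x: x[1],  # splits C from D
                       left=Node(label="C"), right=Node(label="D")))

print(classify(root, [-1.0, 2.0]))  # routes left then right: ('B', 2)
```

For N classes this traversal costs about log2(N) classifier evaluations per sample, versus N-1 for one-vs-rest or N(N-1)/2 for pairwise schemes, which is the efficiency gain the paper targets.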

