Fractal and twin SVM-based handgrip recognition for healthy subjects and trans-radial amputees using myoelectric signal

2016 · Vol 61 (1) · pp. 87-94 · Author(s): Sridhar Poosapadi Arjunan, Dinesh Kant Kumar, Jayadeva J.

Identifying functional handgrip patterns using surface electromyogram (sEMG) signals recorded from the residual muscles of amputees is required for controlling a myoelectric prosthetic hand. In this study, we computed the signal fractal dimension (FD) and maximum fractal length (MFL) during different grip patterns performed by healthy and trans-radial amputee subjects. The FD and MFL of the sEMG, referred to as the fractal features, were classified using twin support vector machines (TSVM) to recognize the handgrips. TSVM requires fewer support vectors, is suitable for data sets with unbalanced distributions, and can be trained to improve sensitivity and specificity simultaneously. Compared with other methods, this technique yielded a significant improvement in grip recognition accuracy, sensitivity, and specificity (κ = 0.91).
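
As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below computes a Higuchi-style fractal dimension and an MFL proxy from simulated sEMG windows and classifies them. A standard RBF-kernel SVM stands in for the twin SVM, and the synthetic signals, the MFL definition (log curve length at the finest scale), and all parameters are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC  # stand-in for a twin SVM (TSVM) implementation

def fractal_features(x, kmax=8):
    """Higuchi-style fractal dimension (FD) and a maximum-fractal-length (MFL)
    proxy for a 1-D sEMG window.  Taking the log curve length at the finest
    scale as MFL is an assumption, not the authors' exact definition."""
    n = len(x)
    ks = np.arange(1, kmax + 1)
    log_len = []
    for k in ks:
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            lengths.append(np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / (len(idx) * k))
        log_len.append(np.log(np.mean(lengths)))
    fd = np.polyfit(np.log(1.0 / ks), log_len, 1)[0]  # slope of log-length vs log(1/k)
    mfl = log_len[0]                                  # curve length at the finest scale
    return fd, mfl

# Toy example: two simulated "grips" with different signal roughness.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
signals = [np.cumsum(rng.standard_normal(1024)) if label else rng.standard_normal(1024)
           for label in y]
X = np.array([fractal_features(s) for s in signals])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print("training accuracy:", clf.score(X, y))
```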

2014 · Vol 2014 · pp. 1-14 · Author(s): Beatriz Leon, Angelo Basteris, Francesco Infarinato, Patrizio Sale, Sharon Nijenhuis, ...

Stroke survivors often suffer impairments of the wrist and hand. Robot-mediated rehabilitation techniques, based on intensive repeated movements, have been proposed as a way to enhance conventional therapy. Among activities of daily living, grasping is one of the most recurrent. Our aim is to incorporate grasp detection into the robot-mediated rehabilitation framework so that grasps can be used in interactive therapeutic games. In this study, we developed and tested a method based on support vector machines for recognizing various grasp postures while wearing a passive exoskeleton for hand and wrist rehabilitation after stroke. The experiment was conducted with ten healthy subjects and eight stroke patients performing the grasping gestures. The method was tested in terms of accuracy and robustness with respect to inter-subject variability and differences between grasps. Our results show reliable recognition and also indicate that the recognition accuracy can be used to assess the patients' ability to consistently repeat the gestures. In addition, a grasp quality measure was proposed to quantify how closely stroke patients perform grasp postures relative to healthy subjects. These two measures could be used to complement other upper-limb motion tests.
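
A minimal sketch of this kind of grasp-posture recognition is shown below, assuming one feature vector of exoskeleton joint angles per trial (a layout I am assuming, since the paper's exact feature set is not given here). Leave-one-subject-out cross-validation probes the inter-subject robustness mentioned in the abstract; the data are synthetic.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assumed layout: one row per trial, columns are exoskeleton joint angles;
# labels are grasp types, groups identify the subject performing each trial.
rng = np.random.default_rng(1)
n_subjects, trials, n_joints, n_grasps = 10, 30, 12, 4
y = rng.integers(0, n_grasps, size=n_subjects * trials)
grasp_templates = rng.normal(size=(n_grasps, n_joints))      # assumed joint-angle patterns
X = grasp_templates[y] + 0.3 * rng.normal(size=(len(y), n_joints))
groups = np.repeat(np.arange(n_subjects), trials)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))

# Leave-one-subject-out cross-validation: train on nine subjects, test on the
# held-out one, in the spirit of the inter-subject evaluation described above.
scores = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=groups)
print("per-subject accuracy:", np.round(scores, 2))
```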


2012 · Vol 24 (4) · pp. 1047-1084 · Author(s): Xiao-Tong Yuan, Shuicheng Yan

We investigate Newton-type optimization methods for solving piecewise linear systems (PLSs) with nondegenerate coefficient matrices. Such systems arise, for example, from the numerical solution of linear complementarity problems, which are useful for modeling several learning and optimization problems. In this letter, we propose an effective damped Newton method, PLS-DN, to find the exact (up to machine precision) solution of nondegenerate PLSs. PLS-DN exhibits a provable semi-iterative property; that is, the algorithm converges globally to the exact solution in a finite number of iterations. The rate of convergence is shown to be at least linear before termination. We emphasize the applications of our method in modeling, from the novel perspective of PLSs, statistical learning problems such as box-constrained least squares, the elitist Lasso (Kowalski & Torresani, 2008), and support vector machines (Cortes & Vapnik, 1995). Numerical results on synthetic and benchmark data sets are presented to demonstrate the effectiveness and efficiency of PLS-DN on these problems.
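
To make the connection between PLSs and complementarity concrete, here is a minimal damped (semismooth) Newton sketch on the piecewise linear reformulation min(x, Mx + q) = 0 of a linear complementarity problem. This is an illustration of the general idea, not the authors' PLS-DN algorithm; the damping rule and the random positive-definite instance are my assumptions.

```python
import numpy as np

def lcp_newton(M, q, tol=1e-12, max_iter=50):
    """Damped Newton iteration on F(x) = min(x, M x + q) = 0, which is
    equivalent to the LCP: x >= 0, Mx + q >= 0, x^T (Mx + q) = 0.
    A minimal sketch, not the PLS-DN method of the letter."""
    n = len(q)
    x = np.zeros(n)
    for _ in range(max_iter):
        w = M @ x + q
        F = np.minimum(x, w)
        if np.linalg.norm(F, np.inf) < tol:
            break
        # Generalized Jacobian of min(x, w): row i is e_i where x_i <= w_i,
        # otherwise row i of M.
        J = np.where((x <= w)[:, None], np.eye(n), M)
        d = np.linalg.solve(J, -F)
        # Simple damping: backtrack until the residual norm decreases.
        t = 1.0
        while t > 1e-8:
            x_new = x + t * d
            if np.linalg.norm(np.minimum(x_new, M @ x_new + q)) <= (1 - 1e-4 * t) * np.linalg.norm(F):
                break
            t *= 0.5
        x = x + t * d
    return x

# Random positive-definite instance (box-constrained least squares, one of the
# applications mentioned above, reduces to an LCP of this form).
rng = np.random.default_rng(2)
A = rng.normal(size=(8, 8)); M = A @ A.T + np.eye(8); q = rng.normal(size=8)
x = lcp_newton(M, q)
print("complementarity residual:", np.abs(x * (M @ x + q)).max())
```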


Author(s): Melih S. Aslan, Hossam Abd El Munim, Aly A. Farag, Mohamed Abou El-Ghar

Graft failure of transplanted kidneys is most often the consequence of acute rejection. Hence, early detection of kidney rejection is important for the treatment of renal diseases. In this chapter, the authors introduce a new automatic approach to distinguish normal kidney function from kidney rejection using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The kidney has three regions: the cortex, medulla, and pelvis. In their experiments, the authors use the medulla region because its specific response to DCE-MRI helps identify kidney rejection. In their pipeline, they segment the kidney using the level-set method and then apply several classification methods, including the Euclidean distance, the Mahalanobis distance, and least squares support vector machines (LS-SVM). The authors' preliminary results are very encouraging, and reproducibility was achieved over 55 clinical data sets. The classification accuracy, diagnostic sensitivity, and diagnostic specificity are 84%, 75%, and 96%, respectively.
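
The classification step (not the segmentation) can be sketched as follows: assign a medulla time-intensity curve to whichever class it is closest to under the Mahalanobis distance. The toy curves, the shape of the "rejection" response, and the regularization term are assumptions; the authors' LS-SVM classifier is not reproduced here.

```python
import numpy as np

def mahalanobis_classifier(train_curves, train_labels, test_curve):
    """Assign a test time-intensity curve to the class (0 = normal,
    1 = rejection) whose training distribution is closest under the
    Mahalanobis distance.  Segmentation of the medulla is assumed done."""
    dists = []
    for c in (0, 1):
        cls = train_curves[train_labels == c]
        mu = cls.mean(axis=0)
        cov = np.cov(cls, rowvar=False) + 1e-6 * np.eye(cls.shape[1])  # regularized
        diff = test_curve - mu
        dists.append(float(diff @ np.linalg.solve(cov, diff)))
    return int(np.argmin(dists))

# Toy time-intensity curves: rejection is assumed to show a blunted, delayed peak.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 20)
normal = np.exp(-((t - 0.3) / 0.1) ** 2)[None, :] + 0.05 * rng.normal(size=(30, 20))
reject = 0.5 * np.exp(-((t - 0.45) / 0.2) ** 2)[None, :] + 0.05 * rng.normal(size=(30, 20))
X = np.vstack([normal, reject]); y = np.repeat([0, 1], 30)
print("predicted class:", mahalanobis_classifier(X, y, reject[0]))
```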


2016 · Vol 28 (6) · pp. 1217-1247 · Author(s): Yunlong Feng, Yuning Yang, Xiaolin Huang, Siamak Mehrkanoon, Johan A. K. Suykens

This letter addresses the robustness problem when learning a large margin classifier in the presence of label noise. In our study, we achieve this purpose by proposing robustified large margin support vector machines. The robustness of the proposed robust support vector classifiers (RSVC), which is interpreted from a weighted viewpoint in this work, is due to the use of nonconvex classification losses. Besides robustness, we also show that the proposed RSVC is simultaneously smooth, which again benefits from the use of smooth classification losses. The idea behind RSVC comes from M-estimation in statistics, since the proposed robust and smooth classification losses can be taken as one-sided cost functions in robust statistics. Its Fisher consistency and generalization ability are also investigated. Besides robustness and smoothness, another nice property of RSVC lies in the fact that its solution can be obtained by iteratively solving weighted squared hinge loss–based support vector machine problems. We further show that each iteration is a quadratic programming problem in its dual space and can be solved by state-of-the-art methods. We thus propose an iteratively reweighted algorithm and provide a constructive proof of its convergence to a stationary point. The effectiveness of the proposed classifiers is verified on both artificial and real data sets.
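
The iteratively reweighted structure can be sketched as follows: repeatedly fit a weighted squared-hinge linear SVM, then down-weight points with large losses. The particular weight function below (capping the per-sample squared-hinge loss) is an illustrative nonconvex choice, not the paper's exact loss, and the data are synthetic with injected label noise.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Iteratively reweighted squared-hinge SVM: a minimal sketch of the RSVC idea.
rng = np.random.default_rng(4)
X, y = make_classification(n_samples=400, n_features=10, flip_y=0.15, random_state=4)
y_pm = 2 * y - 1                      # labels in {-1, +1}

w = np.ones(len(X))                   # start from the ordinary squared-hinge SVM
for it in range(10):
    clf = LinearSVC(loss="squared_hinge", C=1.0, max_iter=5000)
    clf.fit(X, y, sample_weight=w)
    margins = y_pm * clf.decision_function(X)
    loss = np.maximum(0.0, 1.0 - margins) ** 2
    cap = 4.0                         # samples with loss above the cap are down-weighted
    w_new = np.where(loss > cap, cap / np.maximum(loss, 1e-12), 1.0)
    if np.max(np.abs(w_new - w)) < 1e-3:
        break
    w = w_new
print("training accuracy after reweighting:", clf.score(X, y))
```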


Author(s): Cagatay Catal, Serkan Tugul, Basar Akpinar

Software repositories consist of thousands of applications, and manually categorizing these applications into domain categories is very expensive and time-consuming. In this study, we investigate an ensemble-of-classifiers approach to automatic software categorization when the source code is not available. To this end, we used three data sets (package level, class level, and method level) belonging to 745 closed-source Java applications from the Sharejar repository. We applied the Vote algorithm, AdaBoost, and Bagging ensemble methods, with Support Vector Machines, Naive Bayes, J48, IBk, and Random Forests as base classifiers. The best performance was achieved with the Vote algorithm, whose base classifiers were AdaBoost with J48, AdaBoost with Random Forest, and Random Forest. We showed that the Vote approach with method-level attributes provides the best performance for automatic software categorization; these results demonstrate that the proposed approach can effectively categorize applications into domain categories in the absence of source code.
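
The winning Vote configuration can be approximated in scikit-learn (the study itself used Weka-style learners). In the sketch below, DecisionTreeClassifier stands in for J48, VotingClassifier for the Vote meta-learner, and the feature matrix is synthetic rather than the study's method-level attributes; the `estimator` keyword assumes scikit-learn 1.2 or later.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for method-level attributes of 745 Java applications.
X, y = make_classification(n_samples=745, n_features=40, n_informative=15,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

vote = VotingClassifier(
    estimators=[
        ("ada_j48", AdaBoostClassifier(estimator=DecisionTreeClassifier(), n_estimators=50)),
        ("ada_rf", AdaBoostClassifier(estimator=RandomForestClassifier(n_estimators=50),
                                      n_estimators=10)),
        ("rf", RandomForestClassifier(n_estimators=200)),
    ],
    voting="soft",   # average the predicted class probabilities
)
print("cross-validated accuracy:", cross_val_score(vote, X, y, cv=5).mean())
```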


2021 · Vol 2021 · pp. 1-12 · Author(s): Yixue Zhu, Boyue Chai

With the development of increasingly advanced information and electronic technology, in particular physical information systems, cloud computing systems, and social services, big data is becoming ubiquitous, creating benefits for people while also posing huge challenges. Moreover, with the advent of the big data era, data sets keep growing in scale, and traditional data analysis methods can no longer cope with them. Mining the information hidden behind big data, especially in the field of e-commerce, has become a key factor in competition among enterprises. We use a support vector machine method based on parallel computing to analyze the data. First, the training samples are divided into several working subsets using the SOM self-organizing neural network; the training results of the working subsets are then merged, so that large-scale prediction and analysis problems can be handled quickly. This paper argues that big data offers expandability and supports a quality assessment system, so it is meaningful to use big data to counter the one-sidedness of quality assessment. Finally, considering the excellent performance of parallel support vector machines in data mining and analysis, we apply this method to big data analysis in e-commerce. The research results show that parallel support vector machines can handle large-scale data sets and that the treatment of dirty data increases the effective rate by at least 70%.
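
A rough sketch of the partition-then-train idea follows: cluster the training data into working subsets, fit one SVM per subset in parallel, and let the nearest subset's model answer each query. MiniBatchKMeans stands in for the SOM used in the paper, joblib supplies the parallelism, and the merging rule and synthetic data are assumptions.

```python
import numpy as np
from joblib import Parallel, delayed
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=20000, n_features=20, random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=5)

k = 8
km = MiniBatchKMeans(n_clusters=k, n_init=3, random_state=5).fit(X_tr)
parts = km.predict(X_tr)                      # working subset of each training sample

def fit_subset(i):
    mask = parts == i
    return SVC(kernel="rbf", C=1.0).fit(X_tr[mask], y_tr[mask])

# One SVM per working subset, trained in parallel.
models = Parallel(n_jobs=-1)(delayed(fit_subset)(i) for i in range(k))

# "Merge" at prediction time: route each test point to its subset's model.
assign = km.predict(X_te)
pred = np.concatenate([models[i].predict(X_te[assign == i])
                       for i in range(k) if (assign == i).any()])
truth = np.concatenate([y_te[assign == i] for i in range(k) if (assign == i).any()])
print("accuracy:", (pred == truth).mean())
```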


2020 · Vol 9 (11) · pp. 3415 · Author(s): HyunBum Kim, Juhyeong Jeon, Yeon Jae Han, YoungHoon Joo, Jonghwan Lee, ...

Voice changes may be the earliest sign of laryngeal cancer. We investigated whether automated voice signal analysis can be used to distinguish patients with laryngeal cancer from healthy subjects. We extracted features using the speech analysis software PRAAT and calculated Mel-frequency cepstral coefficients (MFCCs) from voice samples of the sustained vowel /a:/. The proposed method was tested with six algorithms: support vector machine (SVM), extreme gradient boosting (XGBoost), light gradient boosting machine (LGBM), artificial neural network (ANN), one-dimensional convolutional neural network (1D-CNN), and two-dimensional convolutional neural network (2D-CNN). Their performance was evaluated in terms of accuracy, sensitivity, and specificity, and compared with human performance: four volunteers, two of whom were trained laryngologists, rated the same files. The 1D-CNN showed the highest accuracy of 85%, with sensitivity and specificity of 78% and 93%, respectively. The two laryngologists achieved an accuracy of 69.9% but a sensitivity of only 44%. Automated analysis of voice signals could differentiate subjects with laryngeal cancer from healthy subjects with better diagnostic performance than the four volunteers.
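
The feature-extraction step can be sketched as below, with librosa standing in for the PRAAT-based pipeline and an SVM as the simplest of the six classifiers. The file names are hypothetical placeholders, and summarizing each recording by MFCC means and standard deviations is my assumption about the feature layout.

```python
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mfcc_features(path, sr=16000, n_mfcc=13):
    """Mean and standard deviation of MFCCs over a sustained /a:/ recording."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical file lists; in the study, recordings came from laryngeal cancer
# patients and healthy controls.
cancer_files = ["cancer_01.wav", "cancer_02.wav"]
healthy_files = ["healthy_01.wav", "healthy_02.wav"]
X = np.array([mfcc_features(f) for f in cancer_files + healthy_files])
y = np.array([1] * len(cancer_files) + [0] * len(healthy_files))

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print("predicted labels:", clf.predict(X))
```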


2011 · Vol 291-294 · pp. 2742-2745 · Author(s): Qing Zhu Wang, Xin Zhu Wang, Ji Song Bie, Bin Wang

A priority-based one-against-all (OAA) multi-class least squares support vector machine (POAA) is designed to remove the unclassifiable regions that exist in the basic OAA scheme. POAA improves the sensitivity and specificity of computer-aided diagnosis (CAD) for the detection of lung nodules.
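
The unclassifiable region in one-against-all arises when no binary classifier, or more than one, claims a sample. A minimal sketch of resolving it with a priority rule (highest decision value wins) is shown below; LinearSVC stands in for the LS-SVM used in the paper, and the synthetic three-class data are an assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# One binary "class c vs. rest" classifier per class.
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=7)
classes = np.unique(y)
models = [LinearSVC(max_iter=5000).fit(X, (y == c).astype(int)) for c in classes]
scores = np.column_stack([m.decision_function(X) for m in models])

# Samples claimed by zero or several classifiers form the unclassifiable region.
claimed = (scores > 0).sum(axis=1)
print("samples in the unclassifiable region:", int(np.sum(claimed != 1)))

# Priority rule: the class with the largest decision value wins everywhere.
pred = classes[np.argmax(scores, axis=1)]
print("accuracy with priority rule:", (pred == y).mean())
```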

