Regulating Grip Forces through EMG-Controlled Prostheses for Transradial Amputees

2021 ◽  
Vol 11 (23) ◽  
pp. 11199
Author(s):  
Irati Rasines ◽  
Miguel Prada ◽  
Viacheslav Bobrov ◽  
Dhruv Agrawal ◽  
Leire Martinez ◽  
...  

This study aims to evaluate different combinations of features and algorithms for the control of a prosthetic hand in which both the configuration of the fingers and the gripping forces can be controlled. This requires identifying machine learning algorithms and feature sets that detect both intended force variation and hand gestures in EMG signals recorded from upper-limb amputees. However, despite decades of research into pattern recognition techniques, each new problem requires researchers to find a suitable classification algorithm, as there is no universal 'best' solution. Considering different techniques and data representations is therefore fundamental to achieving the most effective results. To this end, we use a publicly available database recorded from amputees to evaluate different combinations of features and classifiers. Analysis of data from 9 individuals shows that, both for classic features and for time-dependent power spectrum descriptors (TD-PSD), the proposed logarithmically scaled version of the current window plus the previous window achieves the highest classification accuracy. Using linear discriminant analysis (LDA) as the classifier and applying a majority-voting strategy to stabilize the individual window classifications, we obtain 88% accuracy with classic features and 89% with TD-PSD features.
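As a rough illustration of the window-level classification and majority-voting stabilization described above, the sketch below trains an LDA classifier on placeholder EMG feature windows and smooths its per-window predictions with a sliding majority vote; the feature extraction (classic or TD-PSD) is assumed to happen elsewhere and the data here are random placeholders.

```python
# Minimal sketch (not the authors' pipeline): per-window LDA classification of
# EMG feature vectors, stabilised by a majority vote over a sliding buffer of
# recent window predictions.
from collections import deque, Counter

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 24))          # placeholder EMG feature windows
y = rng.integers(0, 6, size=1000)        # placeholder gesture/force labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)

def majority_vote_stream(window_predictions, buffer_size=5):
    """Return a stabilised label per window: the most frequent label among
    the current and the previous (buffer_size - 1) window predictions."""
    buf, voted = deque(maxlen=buffer_size), []
    for p in window_predictions:
        buf.append(p)
        voted.append(Counter(buf).most_common(1)[0][0])
    return np.array(voted)

raw = clf.predict(X_te)
stable = majority_vote_stream(raw)
print("per-window accuracy:", (raw == y_te).mean())
print("majority-voted accuracy:", (stable == y_te).mean())
```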

2020 ◽  
Vol 223 ◽  
pp. 03013
Author(s):  
Anton Sokolov ◽  
Egor Dmitriev ◽  
Ioannis Cheliotis ◽  
Hervé Delbarre ◽  
Elsa Dieudonne ◽  
...  

We present algorithms and results of automated processing of LiDAR measurements obtained during the VEGILOT measuring campaign in Paris in autumn 2014, aimed at studying horizontal turbulent atmospheric regimes on urban scales. To process images obtained by horizontal atmospheric scanning with a Doppler LiDAR, we propose a method based on texture analysis and classification using supervised machine learning algorithms. The results of parallel classification by several classifiers were combined using a majority-voting strategy. The obtained accuracy estimates demonstrate the efficiency of the proposed method for remote sensing of regional-scale turbulent patterns in the atmosphere.
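The abstract does not list the individual classifiers, so the sketch below only illustrates the majority-voting combination step, using three common supervised learners as assumed stand-ins applied to a placeholder matrix of texture features.

```python
# Minimal sketch of combining several classifiers by hard (majority) voting,
# as in the texture-based classification described above. The texture features
# are assumed to be precomputed; X is a random placeholder.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 16))           # placeholder texture features
y = rng.integers(0, 3, size=300)         # placeholder regime labels

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf", gamma="scale")),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=1)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="hard",                       # majority vote of the predicted labels
)
print("mean CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```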


Author(s):  
Sri Harsha Dumpala ◽  
Rupayan Chakraborty ◽  
Sunil Kumar Kopparapu

Class imbalance refers to the scenario in which certain classes are highly under-represented compared with other classes in terms of available training data. This situation hinders the applicability of conventional machine learning algorithms to many classification problems in which class imbalance is prominent. Most existing methods for addressing class imbalance rely on either sampling techniques or cost-sensitive learning, and thus inherit their shortcomings. In this paper, we introduce a novel approach, distinct from sampling- and cost-sensitive-learning-based techniques, to address the class imbalance problem, in which two samples are considered simultaneously when training the classifier. Further, we propose a mechanism that uses a single base classifier, instead of an ensemble of classifiers, to obtain the output label of a test sample by majority voting. Experimental results on several benchmark datasets clearly indicate the usefulness of the proposed approach over existing state-of-the-art techniques.
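The pairwise training scheme itself is not specified in the abstract, so no attempt is made to reproduce it here; for context only, the sketch below shows the standard cost-sensitive baseline the paper contrasts itself with, namely class-weighted training on an artificially imbalanced dataset.

```python
# Not the authors' method: a minimal sketch of the cost-sensitive baseline
# (class-weighted logistic regression) on a synthetic 95:5 imbalanced problem.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
weighted = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)

print("plain    :", balanced_accuracy_score(y_te, plain.predict(X_te)))
print("weighted :", balanced_accuracy_score(y_te, weighted.predict(X_te)))
```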


2020 ◽  
Vol 21 (18) ◽  
pp. 6914
Author(s):  
Chin-Hsien Lin ◽  
Shu-I Chiu ◽  
Ta-Fu Chen ◽  
Jyh-Shing Roger Jang ◽  
Ming-Jang Chiu

Easily accessible biomarkers for Alzheimer's disease (AD), Parkinson's disease (PD), frontotemporal dementia (FTD), and related neurodegenerative disorders are urgently needed in an aging society to assist early-stage diagnosis. In this study, we aimed to develop machine learning algorithms using multiplex blood-based biomarkers to identify patients with different neurodegenerative diseases. Plasma samples (n = 377) were obtained from healthy controls, patients in the AD spectrum (including mild cognitive impairment (MCI)), patients in the PD spectrum with variable cognitive severity (including PD with dementia (PDD)), and patients with FTD. We measured plasma levels of amyloid-beta 42 (Aβ42), Aβ40, total Tau, p-Tau181, and α-synuclein using an immunomagnetic reduction-based immunoassay. We observed increased levels of all biomarkers except Aβ40 in the AD group compared with the MCI group and controls. Plasma α-synuclein levels were increased in PDD compared with PD with normal cognition. We applied machine learning-based frameworks, including linear discriminant analysis (LDA) for feature extraction and several classifiers, using features from these blood-based biomarkers to classify the neurodegenerative disorders. We found that the random forest (RF) was the best classifier for separating the different dementia syndromes. Using RF, the established LDA model had an average accuracy of 76% when classifying AD, the PD spectrum, and FTD. Moreover, we found 83% and 63% accuracy when differentiating individual disease severity in the AD and PD spectrum subgroups, respectively. The developed LDA model with the RF classifier can assist clinicians in distinguishing these neurodegenerative disorders.
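A minimal sketch of the reported pipeline shape is shown below: LDA used as a supervised feature extractor over the five plasma biomarkers, followed by a random forest classifier. The biomarker values are random placeholders and the hyperparameters are assumptions, not values from the study.

```python
# Minimal sketch (assumed pipeline shape, not the study's code): LDA feature
# extraction on the biomarker panel, then a random forest classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Placeholder columns for Abeta42, Abeta40, total Tau, p-Tau181, alpha-synuclein.
X = rng.normal(size=(377, 5))
y = rng.integers(0, 3, size=377)         # placeholder: AD spectrum / PD spectrum / FTD

model = make_pipeline(
    StandardScaler(),
    LinearDiscriminantAnalysis(n_components=2),   # supervised feature extraction
    RandomForestClassifier(n_estimators=300, random_state=2),
)
print("mean CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```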


2020 ◽  
pp. 1-11
Author(s):  
Mayamin Hamid Raha ◽  
Tonmoay Deb ◽  
Mahieyin Rahmun ◽  
Tim Chen

Face recognition is a widely used image analysis application in which dimensionality reduction is an essential requirement. The curse of dimensionality refers to the fact that, as dimensionality increases, the sample density decreases exponentially. Dimensionality reduction addresses this by reducing the dimensionality of the feature space to a set of principal features. The purpose of this manuscript is to present a comparative study of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), two of the most popular appearance-based projection methods for face recognition. PCA creates a low-dimensional data representation that captures as much of the data variance as possible, while LDA finds the vectors that best discriminate between classes in the underlying space. The main idea of PCA is to transform the high-dimensional input space into a feature space that exhibits the maximum variance, whereas traditional LDA feature extraction maximizes between-class differences while minimizing within-class distances.
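A minimal sketch of the PCA-versus-LDA comparison is shown below, using the Olivetti faces dataset bundled with scikit-learn as a stand-in for the data used in the manuscript; the projection dimensionality and the 1-nearest-neighbour matcher are illustrative assumptions.

```python
# Minimal sketch: eigenfaces (PCA) vs. fisherfaces (PCA + LDA) with a 1-NN
# matcher on the Olivetti faces (downloaded by scikit-learn on first use).
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

faces = fetch_olivetti_faces()
X, y = faces.data, faces.target          # 400 images, 40 subjects

eigenfaces = make_pipeline(PCA(n_components=50), KNeighborsClassifier(1))
fisherfaces = make_pipeline(PCA(n_components=50),      # PCA first avoids singular scatter
                            LinearDiscriminantAnalysis(),
                            KNeighborsClassifier(1))

print("PCA + 1-NN :", cross_val_score(eigenfaces, X, y, cv=5).mean())
print("LDA + 1-NN :", cross_val_score(fisherfaces, X, y, cv=5).mean())
```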


Sensors ◽  
2019 ◽  
Vol 19 (11) ◽  
pp. 2547 ◽  
Author(s):  
Tuo Gao ◽  
Yongchen Wang ◽  
Chengwu Zhang ◽  
Zachariah A. Pittman ◽  
Alexandra M. Oliveira ◽  
...  

Nanoparticle-based chemical sensor arrays with four types of organo-functionalized gold nanoparticles (AuNPs) were introduced to classify 35 different teas, including black, green, and herbal teas. Integrated sensor arrays were made using microfabrication methods including photolithography and lift-off processing. Different nanoparticle solutions were drop-cast onto separate active regions of each sensor chip. Sensor responses, expressed as the ratio of resistance change to baseline resistance (ΔR/R0), were used as input data to discriminate different aromas through statistical analysis with multivariate techniques and machine learning algorithms. With five-fold cross-validation, linear discriminant analysis (LDA) gave 99% accuracy for classification of all 35 teas, and 98% and 100% accuracy for the separate datasets of herbal teas and of black and green teas, respectively. We find that classification accuracy improves significantly when using multiple types of nanoparticles compared with single-nanoparticle arrays. The results suggest a promising approach for monitoring the freshness and quality of tea products.
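The sketch below mirrors the reported analysis in outline only: the ΔR/R0 responses of the four functionalized sensors form the feature matrix, and LDA is evaluated with five-fold cross-validation; the response values here are random placeholders rather than measured data.

```python
# Minimal sketch (placeholder data): dR/R0 responses of the 4 AuNP sensors as
# features, LDA evaluated with stratified five-fold cross-validation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(3)
n_teas, reps, n_sensors = 35, 10, 4
X = rng.normal(size=(n_teas * reps, n_sensors))   # placeholder dR/R0 responses
y = np.repeat(np.arange(n_teas), reps)            # placeholder tea labels

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=3)
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=cv)
print("mean five-fold accuracy:", scores.mean())
```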


Hypertension ◽  
2021 ◽  
Vol 78 (5) ◽  
pp. 1595-1604
Author(s):  
Fabrizio Buffolo ◽  
Jacopo Burrello ◽  
Alessio Burrello ◽  
Daniel Heinrich ◽  
Christian Adolf ◽  
...  

Primary aldosteronism (PA) is the cause of arterial hypertension in 4% to 6% of patients, and 30% of patients with PA have unilateral, surgically curable forms. Current guidelines recommend screening for PA in ≈50% of patients with hypertension on the basis of individual factors, while some experts suggest screening all patients with hypertension. To define the risk of PA and tailor the diagnostic workup to the individual risk of each patient, we developed a conventional scoring system and supervised machine learning algorithms using a retrospective cohort of 4059 patients with hypertension. On the basis of 6 widely available parameters, we developed a numerical score and 308 machine learning-based models, selecting the one with the highest diagnostic performance. After validation, we obtained high predictive performance with the score (optimized sensitivity of 90.7% for PA and 92.3% for unilateral PA [UPA]). The machine learning-based model provided the highest performance, with an area under the curve of 0.834 for PA and 0.905 for diagnosis of UPA, and optimized sensitivity of 96.6% for PA and 100.0% for UPA at validation. Applying these prediction tools identified a subgroup of patients with a very low risk of PA (0.6% for both models) and a null probability of having UPA. In conclusion, the score and the machine learning algorithm can accurately predict the individual pretest probability of PA in patients with hypertension and, with the machine learning-based model, circumvent screening in up to 32.7% of patients without omitting patients with surgically curable UPA.
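As an illustration of how such a pretest-probability model can be used for triage (synthetic data, not the study's score or model), the sketch below trains a generic classifier on six placeholder parameters and applies a low probability threshold, chosen for high sensitivity, to define the subgroup in whom screening could be skipped.

```python
# Illustrative sketch only: probability-threshold triage on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=6, n_informative=4,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]          # predicted pretest probability

threshold = 0.02                                  # kept low for high sensitivity
screen = proba >= threshold                       # patients who would be screened
sensitivity = y_te[screen].sum() / y_te.sum()
spared = 1 - screen.mean()
print(f"sensitivity {sensitivity:.3f}, screening avoided in {spared:.1%} of patients")
```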


SLEEP ◽  
2021 ◽  
Author(s):  
Arun Sebastian ◽  
Peter A Cistulli ◽  
Gary Cohen ◽  
Philip de Chazal

Study objectives: Acoustic analysis of isolated events and snoring by previous researchers suggests a correlation between individual acoustic features and the site of collapse of individual events. In this study, we hypothesised that multi-parameter evaluation of snore sounds during natural sleep would provide a robust prediction of the predominant site of airway collapse. Methods: The audio signals of 58 OSA patients were recorded simultaneously with full-night polysomnography. The site of collapse was determined by manual analysis of the shape of the airflow signal during hypopnoea events, and the corresponding audio segments containing snores were manually extracted and processed. Machine learning algorithms were developed to automatically annotate the site of collapse of each hypopnoea event into three classes (lateral wall, palate, and tongue base). The predominant site of collapse for a sleep period was determined from the individual hypopnoea annotations and compared with the manually determined annotations. This was a retrospective study that used cross-validation to estimate performance. Results: Cluster analysis showed that the data fit well into two clusters, with a mean silhouette coefficient of 0.79 and an accuracy of 68% for classifying tongue/non-tongue collapse. A classification model using linear discriminants achieved an overall accuracy of 81% for discriminating the tongue/non-tongue predominant site of collapse and an accuracy of 64% across all site-of-collapse classes. Conclusions: Our results reveal that the snore signal during hypopnoea can provide information regarding the predominant site of collapse in the upper airway. Therefore, the audio signal recorded during sleep could potentially be used as a new tool for identifying the predominant site of collapse and consequently improving treatment selection and outcome.
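The acoustic features used in the study are not enumerated in the abstract, so the sketch below uses MFCCs (via librosa) as an assumed stand-in: each snore segment is classified with linear discriminants, and the predominant site of collapse per patient is taken as the majority vote over that patient's events. All audio here is synthetic placeholder noise.

```python
# Illustrative sketch only: MFCCs stand in for the paper's acoustic features.
from collections import Counter

import librosa
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def segment_features(audio, sr):
    """Mean MFCC vector of one snore segment (stand-in feature set)."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

def predominant_site(clf, segments, sr):
    """Majority vote over per-event predictions for one patient."""
    X = np.vstack([segment_features(s, sr) for s in segments])
    return Counter(clf.predict(X)).most_common(1)[0][0]

# Training on labelled events (placeholder synthetic audio; labels 0/1/2 stand
# for lateral wall, palate, and tongue base).
sr = 16000
rng = np.random.default_rng(4)
train_segments = [rng.normal(size=sr).astype(np.float32) for _ in range(90)]
train_labels = rng.integers(0, 3, size=90)
X_train = np.vstack([segment_features(s, sr) for s in train_segments])
clf = LinearDiscriminantAnalysis().fit(X_train, train_labels)

patient_events = [rng.normal(size=sr).astype(np.float32) for _ in range(12)]
print("predicted predominant site:", predominant_site(clf, patient_events, sr))
```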


2018 ◽  
Vol 61 (5) ◽  
pp. 1497-1504
Author(s):  
Zhenjie Wang ◽  
Ke Sun ◽  
Lihui Du ◽  
Jian Yuan ◽  
Kang Tu ◽  
...  

In this study, computer vision was used for the identification and classification of fungi on moldy paddy. To develop a rapid and efficient method for classifying common fungal species found in stored paddy, computer vision was used to acquire images of individual colonies of growing fungi on three consecutive days. After image processing, color, shape, and texture features were extracted and used in a subsequent discriminant analysis. Both linear (i.e., linear discriminant analysis and partial least squares discriminant analysis) and nonlinear (i.e., random forest and support vector machine [SVM]) pattern recognition models were employed for the classification of fungal colonies, and the results were compared. The results indicate that, when using all features from the three consecutive days, the nonlinear tools outperformed the linear tools, especially the SVM models, which achieved an accuracy of 100% on the calibration sets and 93.2% to 97.6% on the prediction sets. After feature selection with the successive projections algorithm, ten common features were selected for building the classification models. The resulting SVM model achieved an overall accuracy of 95.6%, 98.3%, and 99.0% on the prediction sets on days 2, 3, and 4, respectively. This work demonstrates that computer vision with several features is suitable for the identification and classification of fungi on moldy paddy based on the form of individual colonies at an early growth stage during paddy storage. Keywords: Classification, Computer vision, Fungal colony, Feature selection, SVM.
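A minimal sketch of the final classification step is shown below: an RBF-kernel SVM trained on ten selected colony features with standardized inputs and five-fold cross-validation. The feature table is a random placeholder and the kernel parameters are assumptions, since the image processing and feature extraction are described only at a high level above.

```python
# Minimal sketch (placeholder feature table): standardized features into an
# RBF-kernel SVM, evaluated with five-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(240, 10))           # placeholder: 10 selected colony features
y = rng.integers(0, 5, size=240)         # placeholder fungal-species labels

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
print("five-fold accuracy:", cross_val_score(model, X, y, cv=5).mean())
```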


2018 ◽  
Author(s):  
Nicola Asuni ◽  
Steven Wilder

Human genetic variants are usually represented by four values of variable length: chromosome, position, reference allele, and alternate allele. There is no guarantee that these components are represented consistently across different data sources, and processing variant-based data can be inefficient because four different comparison operations are needed for each variant, three of which are string comparisons. Existing variant identifiers do not typically represent every possible variant of interest, nor are they directly reversible. Similarly, genomic regions are typically represented inconsistently by three or four values. Working with strings, in contrast to numbers, poses extra challenges for computer memory allocation and data representation. To overcome these limitations, a novel reversible numerical encoding scheme for human genetic variants (VariantKey) and genomic regions (RegionKey) is presented here, alongside a multi-language open-source software implementation (https://github.com/Genomicsplc/variantkey). VariantKey and RegionKey represent variants and regions as single 64-bit numeric entities while preserving the ability to be searched and sorted by chromosome and position. The individual components of short variants can be read back directly from the VariantKey, while long variants are supported via a fast lookup table.
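To illustrate the core idea of packing a variant into a single sortable 64-bit integer, the sketch below encodes chromosome, position, and an allele code with fixed bit widths; the widths and the allele encoding used here are simplifying assumptions for illustration only, and the authoritative specification is the linked VariantKey repository.

```python
# Illustrative sketch only: assumed bit layout, not the official VariantKey spec.
CHROM_BITS, POS_BITS, ALLELE_BITS = 5, 28, 31

def encode_key(chrom: int, pos: int, allele_code: int) -> int:
    """Pack chromosome, position, and an allele code into one 64-bit integer."""
    assert chrom < (1 << CHROM_BITS) and pos < (1 << POS_BITS)
    return (chrom << (POS_BITS + ALLELE_BITS)) | (pos << ALLELE_BITS) | allele_code

def decode_key(key: int):
    """Recover (chromosome, position, allele code) from the packed key."""
    allele_code = key & ((1 << ALLELE_BITS) - 1)
    pos = (key >> ALLELE_BITS) & ((1 << POS_BITS) - 1)
    chrom = key >> (POS_BITS + ALLELE_BITS)
    return chrom, pos, allele_code

k = encode_key(chrom=7, pos=117559590, allele_code=12345)
print(hex(k), decode_key(k))
# Keys sort by chromosome first, then position, so a single numeric comparison
# preserves genomic order and no string comparison is needed.
```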

