Neural Network-Based Gait Adaptation Algorithms for Lower Limbs Active Orthoses

Author(s):  
Marciel A. Gomes ◽  
Adriano A. G. Siqueira ◽  
Guilherme L. M. Silveira

This work deals with a neural network-based gait-pattern adaptation algorithm for an active orthosis. The proposed device is developed for the lower limbs and is based on a commercially available orthosis, Figure 1. Active orthoses can be designed to help physically weak or injured people during rehabilitation procedures [1]. The robotic orthosis Lokomat has recently been used for the rehabilitation of patients with stroke or spinal cord injury [2]. Gait-pattern adaptation algorithms that consider the human-machine interaction are proposed by Riener et al. [3]. The algorithms in Riener et al. [3] were developed for a fixed-base robotic system; they cannot be applied directly to the proposed orthosis, since no stability of the gait pattern is considered. A trajectory generator for biped robots that takes the ZMP (Zero Moment Point) criterion into account is presented in Huang et al. [4]. This method yields suitable results with smooth, second-order differentiable curves.
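The ZMP criterion mentioned above can be made concrete with a small numerical sketch. This is not the authors' implementation; it is the textbook sagittal-plane ZMP for a set of point masses (the function name and the two-mass example are illustrative assumptions), where a gait is considered stable while the ZMP stays inside the support polygon of the stance foot.

```python
import numpy as np

def zero_moment_point(m, x, z, ax, az, g=9.81):
    """Sagittal-plane ZMP of a set of point masses.

    m: masses (kg); x, z: positions (m); ax, az: accelerations (m/s^2).
    z is measured from the ground plane, so z_zmp = 0.
    """
    m, x, z, ax, az = map(np.asarray, (m, x, z, ax, az))
    denom = np.sum(m * (az + g))
    # Moments of gravity+inertia about the ground plane, balanced at x_zmp.
    return (np.sum(m * (az + g) * x) - np.sum(m * ax * z)) / denom

# Static case: the ZMP coincides with the ground projection of the CoM.
zmp = zero_moment_point(m=[30.0, 10.0], x=[0.0, 0.2],
                        z=[1.0, 0.5], ax=[0.0, 0.0], az=[0.0, 0.0])
```

In the static case above the formula reduces to the mass-weighted mean of x, i.e. (30·0 + 10·0.2)/40 = 0.05 m.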

Author(s):  
Nagaraja N Poojary ◽  
Dr. Shivakumar G S ◽  
Akshath Kumar B.H

Language is humans' most important means of communication, and speech is its basic medium. Emotion plays a crucial role in social interaction, and recognizing the emotion in speech is important as well as challenging because it lies at the heart of human-machine interaction. Emotion varies from person to person: even when expressing the same emotion, different speakers produce different energy, pitch, and tone variations, so samples are grouped by subject. Speech emotion recognition is therefore an important goal for machine perception. The aim of our project is to develop smart speech-based emotion recognition built on a convolutional neural network, in which separate modules handle recognition and a classifier differentiates emotions such as happy, sad, angry, and surprised. The machine converts the human speech signal into a waveform, processes it, and finally displays the detected emotion. The data are speech samples, and their characteristics are extracted using the librosa package. The RAVDESS dataset is used as the experimental dataset. This study shows that on our dataset all classifiers achieve an accuracy of 68%.
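The energy and pitch-related features mentioned above can be sketched without the full librosa pipeline. The function below is a minimal numpy stand-in (its name and the frame sizes, which mirror librosa's defaults, are assumptions, not the paper's code) computing per-frame RMS energy and zero-crossing rate, two of the simplest features of the kind librosa exposes.

```python
import numpy as np

def frame_features(signal, frame_len=2048, hop=512):
    """Per-frame RMS energy and zero-crossing rate of a mono waveform."""
    signal = np.asarray(signal, dtype=float)
    n_frames = 1 + max(0, len(signal) - frame_len) // hop
    energy, zcr = [], []
    for i in range(n_frames):
        frame = signal[i * hop: i * hop + frame_len]
        energy.append(np.sqrt(np.mean(frame ** 2)))               # RMS energy
        zcr.append(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)  # crossings/sample
    return np.array(energy), np.array(zcr)

# One second of a 440 Hz tone at 22050 Hz: constant energy (~0.707 for a
# unit sine) and a ZCR of about 2 * 440 / 22050 crossings per sample.
sr = 22050
t = np.arange(sr) / sr
energy, zcr = frame_features(np.sin(2 * np.pi * 440 * t))
```

In practice one would use librosa's MFCCs and chroma features rather than these two scalars, but the framing logic is the same.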


2020 ◽  
Vol 5 (8) ◽  
pp. 849-854
Author(s):  
Muhammad Sajid Khan ◽  
Andrew Ware ◽  
Misha Karim ◽  
Nisar Bahoo ◽  
Muhammad Junaid Khalid

The ability of automated technologies to correctly identify a human's actions provides considerable scope for systems that make use of human-machine interaction. Thus, automatic 3D Human Action Recognition is an area that has seen significant research effort. In the work described here, a human's everyday 3D actions recorded in the NTU RGB+D dataset are identified using a novel structured-tree neural network. The nodes of the tree represent the skeleton joints, with the spine joint represented by the root. The connection between a child node and its parent is known as the incoming edge, while the reciprocal connection is known as the outgoing edge. The use of a tree structure leads to a system that intuitively maps to human movements. The classifier uses the change in displacement of joints and the change in the angles between incoming and outgoing edges as features for classification of the actions performed.
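The angle feature described above, between a joint's incoming and outgoing edges, can be sketched as follows. The helper is hypothetical (the paper does not publish this code); it assumes joints are 3D points, as in the NTU RGB+D skeletons.

```python
import numpy as np

def edge_angle(parent, joint, child):
    """Angle (radians) between the incoming edge (parent -> joint)
    and the outgoing edge (joint -> child) at a skeleton joint."""
    inc = np.asarray(joint, dtype=float) - np.asarray(parent, dtype=float)
    out = np.asarray(child, dtype=float) - np.asarray(joint, dtype=float)
    cos = np.dot(inc, out) / (np.linalg.norm(inc) * np.linalg.norm(out))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards rounding error

# A straight limb gives angle 0; a right-angle bend gives pi/2.
straight = edge_angle([0, 0, 0], [0, 1, 0], [0, 2, 0])
bent = edge_angle([0, 0, 0], [0, 1, 0], [1, 1, 0])
```

The per-frame change of this angle (and of joint displacement) is what feeds the classifier.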


Research ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Hang Guo ◽  
Ji Wan ◽  
Haobin Wang ◽  
Hanxiang Wu ◽  
Chen Xu ◽  
...  

Handwritten signatures are ubiquitous in our daily lives. The main challenge in recognizing handwriting signals lies in developing approaches that obtain information effectively. External mechanical signals can be easily detected by triboelectric nanogenerators, which provide immediate opportunities for building new types of active sensors capable of recording handwritten signals. In this work, we report an intelligent human-machine interaction interface based on a triboelectric nanogenerator. Using a horizontal-vertical symmetrical electrode array, the handwritten triboelectric signal can be recorded without an external energy supply. Combined with supervised machine learning methods, the interface can successfully recognize handwritten English letters, Chinese characters, and Arabic numerals. The principal component analysis algorithm preprocesses the triboelectric signal data to reduce the complexity of the neural network in the machine learning process. Furthermore, the interface can recognize writing habits for anticounterfeiting by controlling the samples input to the neural network. The results show that the intelligent human-machine interaction interface has broad application prospects in signature security and human-computer interaction.
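The PCA preprocessing step described above can be sketched generically. This is not the paper's code: it is a standard SVD-based projection (function name and data shapes assumed), reducing each recorded trace to a handful of scores before the neural network sees it.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project the rows of X onto its top principal components."""
    Xc = X - X.mean(axis=0)                 # center each channel
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T        # low-dimensional scores

# 100 synthetic 64-sample traces reduced to 8 PCA scores each.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))
scores = pca_reduce(X, 8)
```

The resulting score columns are mutually uncorrelated, which is what lets a much smaller network absorb the signal.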


2018 ◽  
pp. 113-119
Author(s):  
Iryna Perova ◽  
Yevgeniy Bodyanskiy

The Feature Selection task is one of the most complicated and topical in the areas of Data Mining and Human-Machine Interaction. Many approaches to solving it are based on non-mathematical, presentative hypotheses. A new approach to evaluating the information quantity of medical features, based on an optimized combination of feature selection and feature extraction methods, is proposed. This approach produces an optimally reduced number of features while preserving the linguistic interpretation of each of them. A hybrid feature selection/extraction system based on neural network-physician interaction is investigated. The system is numerically simple and can perform feature selection/extraction with any number of factors in online mode, using Oja's neurons for online principal component analysis and computing the distance between the first principal component and each input feature. A series of experiments confirms the efficiency of the proposed approaches in the Medical Data Mining area and allows physicians to obtain the most informative features without losing their linguistic interpretation.
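Oja's rule, the online PCA mechanism named above, can be sketched in a few lines. This is the textbook single-neuron rule, not the authors' hybrid system (function name, learning rate, and data are illustrative assumptions): the weight vector converges to the dominant eigenvector of the centered data, against which each feature can then be compared.

```python
import numpy as np

def oja_first_pc(X, lr=0.001, epochs=50, seed=0):
    """Estimate the first principal component with Oja's learning rule."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in Xc:                     # one sample at a time: online mode
            y = w @ x                    # neuron output
            w += lr * y * (x - y * w)    # Oja's rule keeps ||w|| near 1
    return w / np.linalg.norm(w)

# Data elongated along the first axis: the estimate aligns with it.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3)) * np.array([5.0, 1.0, 0.5])
w = oja_first_pc(X)
```

In the proposed system, the distance from this learned component to each input feature is what ranks the features for the physician.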


2020 ◽  
Vol 2020 ◽  
pp. 1-8
Author(s):  
Xin Jin ◽  
Jia Guo ◽  
Zhong Li ◽  
Ruihao Wang

With the development of powered exoskeletons in recent years, one important limitation remains their capability to collaborate with humans. Human-machine interaction requires the exoskeleton to accurately predict the human motion of the upcoming movement. Many recent works apply neural network algorithms such as recurrent neural networks (RNNs) to motion prediction. However, they are still insufficient in efficiency and accuracy. In this paper, a Gaussian process latent variable model (GPLVM) is employed to transform high-dimensional data into low-dimensional data. Combined with a nonlinear autoregressive (NAR) neural network, the GPLVM-NAR method is proposed to predict human motions. Experiments are conducted with volunteers wearing a powered exoskeleton and performing different types of motion. The results validate that the proposed method can forecast future human motion with a relative error of 2%∼5% and an average calculation time of 120 s∼155 s, depending on the type of motion.
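The autoregressive half of the GPLVM-NAR idea can be sketched with a linear stand-in. This is an assumption-laden simplification, not the paper's method: it omits the GPLVM reduction entirely and replaces the NAR network's hidden layer with a single least-squares linear map over the same sliding-window inputs.

```python
import numpy as np

def fit_ar_predictor(series, n_lags=4):
    """Least-squares autoregressive predictor: x[t] ~ w . x[t-n_lags:t]."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = np.asarray(series[n_lags:])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# A sampled sinusoid (a periodic "joint angle") is exactly AR-predictable,
# so the one-step-ahead forecast recovers the next sample.
t = np.arange(200) * 0.1
series = np.sin(t)
w = fit_ar_predictor(series)
pred = series[-4:] @ w   # forecast of sin(20.0)
```

The NAR network plays the same role with a nonlinear hidden layer, which is what lets it track motions that are not simple harmonics.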

