Concurrent Prediction of Finger Forces Based on Source Separation and Classification of Neuron Discharge Information

Author(s):  
Yang Zheng ◽  
Xiaogang Hu

A reliable neural-machine interface is essential for humans to intuitively interact with advanced robotic hands in an unconstrained environment. Existing neural decoding approaches utilize either discrete hand gesture-based pattern recognition or continuous force decoding with one finger at a time. We developed a neural decoding technique that allowed continuous and concurrent prediction of forces of different fingers based on spinal motoneuron firing information. High-density skin-surface electromyogram (HD-EMG) signals of finger extensor muscle were recorded, while human participants produced isometric flexion forces in a dexterous manner (i.e. produced varying forces using either a single finger or multiple fingers concurrently). Motoneuron firing information was extracted from the EMG signals using a blind source separation technique, and each identified neuron was further classified to be associated with a given finger. The forces of individual fingers were then predicted concurrently by utilizing the corresponding motoneuron pool firing frequency of individual fingers. Compared with conventional approaches, our technique led to better prediction performances, i.e. a higher correlation ([Formula: see text] versus [Formula: see text]), a lower prediction error ([Formula: see text]% MVC versus [Formula: see text]% MVC), and a higher accuracy in finger state (rest/active) prediction ([Formula: see text]% versus [Formula: see text]%). Our decoding method demonstrated the possibility of classifying motoneurons for different fingers, which significantly alleviated the cross-talk issue of EMG recordings from neighboring hand muscles, and allowed the decoding of finger forces individually and concurrently. The outcomes offered a robust neural-machine interface that could allow users to intuitively control robotic hands in a dexterous manner.
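The decoding pipeline described above (source separation, neuron-to-finger classification, pooled firing rate, force prediction) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the binary spike trains stand in for the output of the blind source separation stage, and the neuron-to-finger assignment and the linear gain are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000  # Hz, assumed sampling rate
n = 5000   # 5 s window

# Stand-in for the output of the blind source separation stage:
# binary spike trains of 6 identified motoneurons (the paper decomposes
# HD-EMG; here the spike trains are simulated directly).
spike_trains = rng.random((6, n)) < 0.015

# Hypothetical classification of each motoneuron to a finger
# (learned in the paper; fixed here for illustration).
finger_of_neuron = np.array([0, 0, 0, 1, 1, 1])  # 0 = index, 1 = middle

def pooled_firing_rate(spikes, win_ms=200, fs=1000):
    """Smoothed pooled firing rate (spikes/s) of a motoneuron pool:
    sum the pool's spike trains, then apply a moving-average window."""
    pooled = spikes.sum(0).astype(float)
    w = int(win_ms * fs / 1000)
    return np.convolve(pooled, np.ones(w) / w, mode="same") * fs

# Per-finger force estimate: linear scaling of the pooled rate
# (the gain in %MVC per spike/s is a placeholder, not from the paper).
gain = 0.5
forces = np.stack([
    gain * pooled_firing_rate(spike_trains[finger_of_neuron == f], fs=fs)
    for f in (0, 1)
])
print(forces.shape)  # (2, 5000)
```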

2020 ◽  
Author(s):  
Thomas Stadelmayer ◽  
Avik Santra

Radar sensors offer a promising and effective sensing modality for human activity classification, which enables several smart-home applications: energy saving, human-machine interfaces for gesture-controlled appliances, and elderly fall-motion recognition. Present radar-based activity recognition systems exploit the micro-Doppler signature by generating Doppler spectrograms or videos of range-Doppler images (RDIs), followed by a deep neural network or machine learning for classification. Although deep convolutional neural networks (DCNNs) have been shown to learn features implicitly from raw sensor data in other fields, such as camera and speech, radar DCNNs still require preprocessing and feature-image generation, such as a video of RDIs or a Doppler spectrogram, to yield a scalable and robust classification or regression application. In this paper, we propose a parametric convolutional neural network that mimics the radar preprocessing across fast-time and slow-time radar data through 2D sinc filter or 2D wavelet filter kernels to extract features for classification of various human activities. We demonstrate that our proposed solution improves on equivalent state-of-the-art DCNN solutions that rely on Doppler spectrograms or videos of RDIs as feature images.
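A 2D sinc filter kernel of the kind mentioned above can be sketched as a separable band-pass construction: one band-pass sinc along fast-time (range) and one along slow-time (Doppler), combined by an outer product. The separable form and the cutoff values here are assumptions for illustration; in a parametric layer only the cutoff frequencies would be learned.

```python
import numpy as np

def sinc_bandpass_1d(n, f_low, f_high):
    """Ideal band-pass FIR kernel: difference of two low-pass sinc
    filters, windowed by a Hamming window. f_low/f_high are
    normalized cutoffs in (0, 0.5)."""
    t = np.arange(n) - (n - 1) / 2
    lowpass = lambda fc: 2 * fc * np.sinc(2 * fc * t)
    return (lowpass(f_high) - lowpass(f_low)) * np.hamming(n)

def sinc_bandpass_2d(n, fast_band, slow_band):
    """Separable 2D kernel: outer product of fast-time and slow-time
    band-pass sincs; only the four cutoffs are free parameters."""
    k_fast = sinc_bandpass_1d(n, *fast_band)
    k_slow = sinc_bandpass_1d(n, *slow_band)
    return np.outer(k_fast, k_slow)

# Example cutoffs (placeholders, not values from the paper).
kernel = sinc_bandpass_2d(31, (0.05, 0.2), (0.1, 0.3))
print(kernel.shape)  # (31, 31)
```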


Author(s):  
Hiroaki Hashimoto ◽  
Seiji Kameda ◽  
Hitoshi Maezawa ◽  
Satoru Oshino ◽  
Naoki Tani ◽  
...  

To realize a brain–machine interface to assist swallowing, neural signal decoding is indispensable. Eight participants with temporal-lobe intracranial electrode implants for epilepsy were asked to swallow during electrocorticogram (ECoG) recording. Raw ECoG signals or certain frequency bands of the ECoG power were converted into images whose vertical axis was electrode number and whose horizontal axis was time in milliseconds, which were used as training data. These data were classified with four labels (Rest, Mouth open, Water injection, and Swallowing). Deep transfer learning was carried out using AlexNet, and power in the high-[Formula: see text] band (75–150[Formula: see text]Hz) was the training set. Accuracy reached 74.01%, sensitivity reached 82.51%, and specificity reached 95.38%. However, using the raw ECoG signals, the accuracy obtained was 76.95%, comparable to that of the high-[Formula: see text] power. We demonstrated that a version of AlexNet pre-trained with visually meaningful images can be used for transfer learning of visually meaningless images made up of ECoG signals. Moreover, we could achieve high decoding accuracy using the raw ECoG signals, allowing us to dispense with the conventional extraction of high-[Formula: see text] power. Thus, the images derived from the raw ECoG signals were equivalent to those derived from the high-[Formula: see text] band for transfer deep learning.
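The image-conversion step described above (vertical axis = electrode number, horizontal axis = time in milliseconds) can be sketched as follows. The nearest-neighbour resizing and the grayscale-to-RGB replication to fit AlexNet's input are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a 1-s ECoG window: 20 electrodes x 1000 ms.
ecog = rng.standard_normal((20, 1000))

def to_alexnet_image(window, size=227):
    """Scale a (channels x time) window to an 8-bit RGB image of the
    size AlexNet expects; nearest-neighbour resize via index lookup."""
    lo, hi = window.min(), window.max()
    gray = ((window - lo) / (hi - lo) * 255).astype(np.uint8)
    rows = np.arange(size) * window.shape[0] // size
    cols = np.arange(size) * window.shape[1] // size
    resized = gray[rows][:, cols]
    return np.stack([resized] * 3, axis=-1)  # replicate to 3 channels

img = to_alexnet_image(ecog)
print(img.shape, img.dtype)  # (227, 227, 3) uint8
```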


2014 ◽  
Vol 556-562 ◽  
pp. 2748-2751
Author(s):  
Hong Li Wang ◽  
Bing Xu ◽  
Xue Dong Xue ◽  
Kan Cheng

A method for diagnosing generator rotor faults is devised by combining the local wave method with blind source separation. The time-frequency image of the local wave differs across fault signals, and this feature is used to distinguish faults. To classify faults automatically, blind source separation is employed to separate the independent components in the time-frequency image of the local wave of each fault signal, yielding projection coefficients onto a set of source images. On this basis, automatic fault classification is realized with a probabilistic neural network. The method is investigated using rotor fault signals as an example, and its validity is confirmed by experimental results.
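The final classification stage, a probabilistic neural network over the projection coefficients, can be sketched as a Parzen-window kernel classifier: each test point is assigned the class whose training samples give the largest summed Gaussian kernel response. The 2-D features, cluster locations, and smoothing width below are hypothetical.

```python
import numpy as np

def pnn_classify(train_X, train_y, test_X, sigma=0.5):
    """Probabilistic neural network: per-class Parzen-window density
    estimate with a Gaussian kernel; predict the class with the
    highest summed kernel response over its training samples."""
    d2 = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * sigma ** 2))
    classes = np.unique(train_y)
    scores = np.stack([k[:, train_y == c].sum(1) for c in classes], axis=1)
    return classes[scores.argmax(1)]

# Hypothetical 2-D "projection coefficients" for two fault classes.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(1.0, 0.1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
pred = pnn_classify(X, y, np.array([[0.0, 0.0], [1.0, 1.0]]))
print(pred)  # [0 1]
```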


2004 ◽  
Vol 91 (5) ◽  
pp. 2366-2375 ◽  
Author(s):  
Mariano Julián Rodriguez ◽  
Irene Raquel Iscla ◽  
Lidia Szczupak

Central regulation of somatosensory signals has been extensively studied, but little is known about their regulation in the periphery. Given the widespread exposure of the skin sensory terminals to the environment, it is of interest to explore how somatosensory sensitivity is affected by changes in properties of the skin. In the leech, the annuli that subdivide the skin can be erected under the control of the annulus erector (AE) motoneurons. To analyze whether this surface change influences mechanosensory sensitivity, we studied the responses of low threshold mechanosensory T cells to mechanical stimulation of the skin as AE motoneurons were activated. In segments of the body wall connected to the corresponding ganglion and submerged in an aqueous environment, T cells responded to localized bubbling on the skin and to water flow parallel to its surface. Excitation of AE motoneurons diminished these responses in a way that depended on the motoneuron firing frequency. Video recordings established that the range of AE firing frequencies that produced effective annulus erection coincided with that influencing T cell responses. In isolated ganglia, AE firing had no effect on T cell excitability, suggesting that annulus erection diminished T cell responsiveness to mechanical input. Counteracting this effect, mechanosensory inputs inhibited AE motoneurons. However, because depolarization of AE cells caused a decrease in their input resistance, the more active the motoneuron, the less sensitive it became to inhibitory signals. Thus when brought to fire, AE motoneurons would stay “committed” to a high activity level, and this would limit sensory responsiveness to incoming mechanical signals.


2011 ◽  
Vol 63-64 ◽  
pp. 385-389
Author(s):  
Geng Huang Yang ◽  
Fei Fei Wang ◽  
Shi Gang Cui ◽  
Li Zhao ◽  
Qing Guo Meng ◽  
...  

Electroencephalogram (EEG) and electromyography (EMG) signals sampled from the skin surface are primary sources of information reflecting human intent. A human-machine interface based on EEG and EMG can be used to control a machine such as a robot. Applying this type of interface under special conditions, such as an astronaut controlling a robot outside a spacecraft, is a novel use case. In the device, a digital signal processor (DSP) samples the EEG and EMG, and signal features are extracted by an algorithm running on the DSP to control the machine. Speech recognition based on a fixed set of Chinese words is also included in the device. Extensive tests showed that the developed device can reliably control the robot for key operations on a panel.


2010 ◽  
Vol 22 (04) ◽  
pp. 293-300 ◽  
Author(s):  
Sridhar P. Arjunan ◽  
Dinesh K. Kumar ◽  
Ganesh R. Naik

Classification of surface electromyogram (sEMG) for identification of hand and finger flexions has a number of applications, such as sEMG-based controllers for near-elbow amputees and human-computer interface devices for the elderly. However, the classification of sEMG becomes difficult when the level of muscle contraction is low and when there are multiple active muscles. The presence of noise and crosstalk from closely located and simultaneously active muscles is exaggerated when muscles are weakly active, such as during sustained wrist and finger flexion, and in people with neuropathological disorders or amputations. This paper reports an analysis of the fractal length and fractal dimension of two channels to obtain accurate identification of hand and finger flexion. An alternate technique, which consists of source separation of the sEMG to obtain individual muscle activity for identifying finger and hand flexion actions, is also reported. The results show that both the fractal features and the muscle activity obtained using modified independent component analysis of sEMG from the forearm can accurately identify a set of finger and wrist flexion-based actions even when the muscle activity is very weak.
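A common fractal-dimension estimator for one-dimensional biosignals such as sEMG is Higuchi's method; the sketch below shows that computation as an illustration (the paper's exact fractal features, computed from two channels, may differ).

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi fractal dimension: slope of log(mean curve length)
    against log(1/k) over time scales k = 1..k_max. Rougher signals
    (e.g. active sEMG) approach 2; smooth signals approach 1."""
    n = len(x)
    log_len, log_inv_k = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):  # k down-sampled sub-series, offsets m
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            curve = np.abs(np.diff(x[idx])).sum()
            # Higuchi normalization of the curve length at scale k.
            lengths.append(curve * (n - 1) / ((len(idx) - 1) * k * k))
        log_len.append(np.log(np.mean(lengths)))
        log_inv_k.append(np.log(1.0 / k))
    slope, _ = np.polyfit(log_inv_k, log_len, 1)
    return slope

rng = np.random.default_rng(3)
fd_line = higuchi_fd(np.arange(1000.0))            # smooth -> ~1
fd_noise = higuchi_fd(rng.standard_normal(1000))   # rough  -> ~2
```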


2021 ◽  
Author(s):  
Leonardo Ceravolo ◽  
Marius Moisa ◽  
Didier Grandjean ◽  
Christian Ruff ◽  
Sascha Fruhholz

The evaluation of socio-affective sound information is accomplished by the primate neural auditory cortex in collaboration with limbic and inferior frontal brain nodes. For the latter, activity in inferior frontal cortex (IFC) is often observed during classification of voice sounds, especially if they carry affective information. Partly opposing views have been proposed, with IFC either coding cognitive processing challenges in cases of sensory ambiguity or representing categorical object and affect information for clear vocalizations. Here, we presented clear and ambiguous affective speech to two groups of human participants during neuroimaging, while in one group we inhibited right IFC activity with transcranial magnetic stimulation (TMS) prior to brain scanning. Inhibition of IFC activity led to partly faster affective decisions, more accurate choice probabilities, and reduced auditory cortical activity for clear affective speech, while fronto-limbic connectivity increased for clear vocalizations. This indicates that IFC inhibition might lead to a more intuitive and efficient processing of affect information in voices. In contrast, normal IFC activity might represent a more deliberate form of affective sound processing (i.e., enforcing cognitive analysis) that flags categorical sound decisions with precaution (i.e., representation of categorical uncertainty). This would point to an intermediate functional property of the IFC between previously assumed mechanisms.

