Generalizable cursor click control using grasp-related neural transients

Author(s):  
Brian M Dekleva ◽  
Jeffrey M Weiss ◽  
Michael L Boninger ◽  
Jennifer Collinger

Intracortical brain-computer interfaces (iBCIs) have the potential to restore independence for individuals with significant motor or communication impairments. One of the most realistic avenues for clinical translation of iBCI technology is to enable control of a computer cursor: movement-related neural activity is interpreted (decoded) and used to drive cursor function. Both nonhuman primate and human studies have demonstrated high-level cursor translation control using attempted upper limb reaching movements. However, cursor click control based on identifying attempted grasp has so far provided only discrete click functionality; the ability to maintain click during translation does not yet exist. Here we present a novel decoding approach for cursor click based on identifying the transient neural responses that emerge at the onset and offset of intended hand grasp. We demonstrate in a human participant, who used the BCI system independently in his home, that this transient-based approach provides high-functioning, generalized click control that can be used for both point-and-click and click-and-drag applications.
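
The core mechanism lends itself to a compact sketch. The following illustrative Python fragment (not the authors' implementation) assumes binned population firing rates and two linear detectors, `w_on` and `w_off`, trained offline on grasp-onset and grasp-offset transients; all names and the threshold are hypothetical.

```python
import numpy as np

def decode_click(rates, w_on, w_off, threshold=1.0):
    """Toggle a persistent click state on detected grasp transients.

    rates : (T, n_units) array of binned population firing rates
    w_on, w_off : linear detectors for onset/offset transients (hypothetical)
    """
    click = np.zeros(len(rates), dtype=bool)
    state = False
    for t, x in enumerate(rates):
        if not state and x @ w_on > threshold:
            state = True    # grasp-onset transient -> press and hold
        elif state and x @ w_off > threshold:
            state = False   # grasp-offset transient -> release
        click[t] = state
    return click
```

Because the decoder latches on transients rather than tracking a sustained grasp signal, the click state is free to persist while translation-related activity moves the cursor, which is what makes click-and-drag possible.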

2021 ◽  
Vol 7 (22) ◽  
pp. eabe7547
Author(s):  
Meenakshi Khosla ◽  
Gia H. Ngo ◽  
Keith Jamison ◽  
Amy Kuceyeski ◽  
Mert R. Sabuncu

Naturalistic stimuli, such as movies, activate a substantial portion of the human brain, invoking a response shared across individuals. Encoding models that predict neural responses to arbitrary stimuli can be very useful for studying brain function. However, existing models focus on limited aspects of naturalistic stimuli, ignoring the dynamic interactions of modalities in this inherently context-rich paradigm. Using movie-watching data from the Human Connectome Project, we build group-level models of neural activity that incorporate several inductive biases about neural information processing, including hierarchical processing, temporal assimilation, and auditory-visual interactions. We demonstrate how incorporating these biases leads to remarkable prediction performance across large areas of the cortex, beyond the sensory-specific cortices into multisensory sites and frontal cortex. Furthermore, we illustrate that encoding models learn high-level concepts that generalize to task-bound paradigms. Together, our findings underscore the potential of encoding models as powerful tools for studying brain function in ecologically valid conditions.
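
As a rough illustration of how such inductive biases can enter an encoding model, the sketch below fits a regularized linear model on hierarchical, temporally lagged, jointly presented audio-visual features. It is a simplification under stated assumptions (precomputed per-layer features, a response matrix `Y` aligned to the same time steps), not the authors' architecture.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

def temporal_assimilation(X, window=4):
    """Stack features from the current and `window - 1` preceding steps.

    (np.roll wraps at the start of the timeseries; fine for a sketch.)"""
    lagged = [np.roll(X, lag, axis=0) for lag in range(window)]
    return np.concatenate(lagged, axis=1)

def fit_encoding_model(visual_feats, audio_feats, Y):
    # Hierarchy: concatenate features across layers of each stream,
    # where each dict maps layer name -> (T, d) feature array.
    V = np.concatenate([temporal_assimilation(x)
                        for x in visual_feats.values()], axis=1)
    A = np.concatenate([temporal_assimilation(x)
                        for x in audio_feats.values()], axis=1)
    # Auditory-visual interaction: fit both streams jointly.
    X = np.concatenate([V, A], axis=1)
    return RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X, Y)
```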


2014 ◽  
Vol 112 (6) ◽  
pp. 1584-1598 ◽  
Author(s):  
Marino Pagan ◽  
Nicole C. Rust

The responses of high-level neurons tend to be mixtures of many different types of signals. While this diversity is thought to allow for flexible neural processing, it presents a challenge for understanding how neural responses relate to task performance and to neural computation. To address these challenges, we have developed a new method to parse the responses of individual neurons into weighted sums of intuitive signal components. Our method computes the weights by projecting a neuron's responses onto a predefined orthonormal basis. Once determined, these weights can be combined into measures of signal modulation; however, in their raw form these signal modulation measures are biased by noise. Here we introduce and evaluate two methods for correcting this bias, and we report that an analytically derived approach produces performance that is robust and superior to a bootstrap procedure. Using neural data recorded from inferotemporal cortex and perirhinal cortex as monkeys performed a delayed-match-to-sample target search task, we demonstrate how the method can be used to quantify the amounts of task-relevant signals in heterogeneous neural populations. We also demonstrate how these intuitive quantifications of signal modulation can be related to single-neuron measures of task performance (d′).
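
In simplified form, the projection and the analytic bias correction look roughly like the following; the basis `B`, the homogeneous-noise assumption, and all names are illustrative rather than the paper's exact formulation.

```python
import numpy as np

def signal_weights(responses, B):
    """Project one neuron's condition-mean responses (length C) onto a
    predefined C x C orthonormal basis; one weight per component."""
    return B.T @ responses

def corrected_modulation(trials, B):
    """Squared weights, analytically corrected for trial-noise bias.

    trials : (n_repeats, C) array of per-repeat responses to C conditions.
    With an orthonormal basis and roughly homogeneous noise, trial noise
    inflates each squared weight by about var/n on average, so we subtract
    that expectation (a simplified analytic correction)."""
    n, C = trials.shape
    w = signal_weights(trials.mean(axis=0), B)
    noise_var = trials.var(axis=0, ddof=1).mean() / n
    return w**2 - noise_var
```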


2017 ◽  
Vol 117 (1) ◽  
pp. 388-402 ◽  
Author(s):  
Michael A. Cohen ◽  
George A. Alvarez ◽  
Ken Nakayama ◽  
Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on the similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, a body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways, when considering both macroscale sectors and smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.

NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
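
The central analysis can be sketched compactly. Assuming eight object categories (yielding the 28 pairs) and `patterns` holding each category's mean response pattern in a region, one hypothetical version of the brain/behavior test is:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def brain_behavior_correlation(patterns, search_rt):
    """patterns : (n_categories, n_voxels) mean responses per category.
    search_rt : mean search time per category pair, ordered to match
    pdist's condensed pair ordering."""
    neural_dissim = pdist(patterns, metric='correlation')  # 1 - r per pair
    return spearmanr(neural_dissim, search_rt)
```

The prediction is a negative correlation: the more dissimilar two categories' neural responses, the faster one is found among the other.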


2019 ◽  
Author(s):  
Jeffrey M. Weiss ◽  
Robert A. Gaunt ◽  
Robert Franklin ◽  
Michael Boninger ◽  
Jennifer L. Collinger

While recent advances in intracortical brain-computer interfaces (iBCIs) have demonstrated the ability to restore motor and communication functions, such demonstrations have generally been confined to controlled experimental settings and have required bulky laboratory hardware. Here, we developed and evaluated a self-contained portable iBCI that enabled the user to interact with various computer programs. The iBCI, which weighs 1.5 kg, consists of digital headstages, a small signal processing hub, and a tablet PC. A human participant tested the portable iBCI in laboratory and home settings under an FDA Investigational Device Exemption (NCT01894802). The participant successfully completed 96% of trials in a 2D cursor center-out task with the portable iBCI, a rate indistinguishable from that achieved with the standard laboratory iBCI. The participant also completed a variety of free-form tasks, including drawing, gaming, and typing.


Author(s):  
Andrej Zgank ◽  
Izidor Mlakar ◽  
Uros Berglez ◽  
Danilo Zimsek ◽  
Matej Borko ◽  
...  

The chapter presents an overview of human-computer interfaces, which are a crucial element of an ambient intelligence solution. The focus is on embodied conversational agents, which are needed to communicate with users in the most natural way. Different input and output modalities, together with supporting methods for processing the captured information (e.g., automatic speech recognition, gesture recognition, natural language processing, dialog processing, text-to-speech synthesis), play a crucial role in providing a high quality of experience to the user. As an example, the use of an embodied conversational agent in the e-Health domain is proposed.
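
A minimal sketch of the modality pipeline described above, with entirely hypothetical component interfaces (the chapter surveys these stages but does not prescribe an API):

```python
def handle_user_turn(audio, video, asr, gesture, nlu, dialog, tts, avatar):
    """One user turn through the multimodal pipeline (names hypothetical)."""
    text = asr.transcribe(audio)               # automatic speech recognition
    gestures = gesture.recognize(video)        # gesture recognition
    intent = nlu.parse(text, gestures)         # natural language processing
    reply = dialog.respond(intent)             # dialog processing
    speech = tts.synthesize(reply.text)        # text-to-speech synthesis
    avatar.animate(speech, reply.expressions)  # embodied agent output
    return reply
```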


Author(s):  
Chang S. Nam ◽  
Matthew Moore ◽  
Inchul Choi ◽  
Yueqing Li

Despite the increase in research interest in the brain–computer interface (BCI), there remains a general lack of understanding of, and even inattention to, human factors/ergonomics (HF/E) issues in BCI research and development. The goal of this article is to raise awareness of the importance of HF/E involvement in the emerging field of BCI technology by providing HF/E researchers with a brief guide on how to design and implement a cost-effective, steady-state visually evoked potential (SSVEP)–based BCI system. We also discuss how SSVEP BCI systems can be improved to accommodate users with special needs.
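
One common, cost-effective way to implement the detection stage of such a system is canonical correlation analysis (CCA) against reference sinusoids at each flicker frequency; the sketch below shows that standard technique, not code from the article.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def detect_ssvep_target(eeg, freqs, fs, n_harmonics=2):
    """eeg : (n_samples, n_channels) segment; freqs : flicker frequencies (Hz).
    Returns the frequency whose reference signals best match the EEG."""
    t = np.arange(len(eeg)) / fs
    scores = []
    for f in freqs:
        # Sin/cos references at the stimulus frequency and its harmonics.
        ref = np.column_stack([fn(2 * np.pi * (h + 1) * f * t)
                               for h in range(n_harmonics)
                               for fn in (np.sin, np.cos)])
        cca = CCA(n_components=1).fit(eeg, ref)
        u, v = cca.transform(eeg, ref)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return freqs[int(np.argmax(scores))]  # attended stimulus frequency
```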


2016 ◽  
Vol 25 (2) ◽  
pp. 208-230 ◽  
Author(s):  
Yousef Rezaei Tabar ◽  
Ugur Halici

Brain-computer interface (BCI) systems provide control of external devices by using only brain activity. In recent years, there has been great interest in developing BCI systems for different applications. These systems can solve daily-life problems for both healthy and disabled people. One of the most important applications of BCI is to provide communication for disabled people who are totally paralysed. In this paper, the different parts of a BCI system and the methods used in each part are reviewed. Neuroimaging devices, with an emphasis on EEG (electroencephalography), are presented, and the brain activities as well as the signal processing methods used in EEG-based BCIs are explained in detail. Current methods and paradigms in BCI-based speech communication are considered.
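
The stages such a review covers can be summarized in skeleton form; the band edges and log-variance features below are common choices offered purely as an illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs, lo=8.0, hi=30.0, order=4):
    """Band-pass filter an (n_samples, n_channels) EEG segment."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype='band')
    return filtfilt(b, a, eeg, axis=0)

def log_variance_features(trial):
    # One feature per channel: log of the band-limited signal power.
    return np.log(trial.var(axis=0))

# A classifier (e.g., sklearn's LinearDiscriminantAnalysis) would then map
# these features to outputs such as letter selections for communication.
```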


Author(s):  
Matthew E. Taylor

Reinforcement learning (RL) has had many successes when learning autonomously. This paper and accompanying talk consider how to make use of a non-technical human participant, when available. In particular, we consider the case where a human could 1) provide demonstrations of good behavior, 2) provide online evaluative feedback, or 3) define a curriculum of tasks for the agent to learn on. In all cases, our work has shown such information can be effectively leveraged. After giving a high-level overview of this work, we will highlight a set of open questions and suggest where future work could be usefully focused.
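
As an illustration of the second channel (online evaluative feedback), the sketch below mixes a human reward signal into a tabular Q-learning update; the blending scheme and all parameters are hypothetical rather than a specific algorithm from the paper.

```python
def q_update(Q, s, a, r_env, r_human, s_next,
             alpha=0.1, gamma=0.99, beta=0.5):
    """One tabular Q-learning step with human feedback mixed into the reward.

    Q : (n_states, n_actions) array; beta weights the human's evaluative
    signal relative to the environment reward."""
    r = r_env + beta * r_human
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```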


2017 ◽  
Author(s):  
Noam Roth ◽  
Nicole C. Rust

Finding a sought visual target object requires combining visual information about a scene with a remembered representation of the target to create a “target match” signal that indicates when a target is in view. Target match signals have been reported to exist within high-level visual brain areas including inferotemporal cortex (IT), where they are mixed with representations of image and object identity. However, these signals are not well understood, particularly in the context of the real-world challenge that the objects we search for typically appear at different positions, sizes, and within different background contexts. To investigate these signals, we recorded neural responses in IT as two rhesus monkeys performed a delayed-match-to-sample object search task in which target objects could appear at a variety of identity-preserving transformations. Consistent with the existence of behaviorally-relevant target match signals in IT, we found that IT contained a linearly separable target match representation that reflected behavioral confusions on trials in which the monkeys made errors. Additionally, target match signals were highly distributed across the IT population, and while a small fraction of units reflected target match signals as target match suppression, most units reflected target match signals as target match enhancement. Finally, we found that the potentially detrimental impact of target match signals on visual representations was mitigated by target match modulation that was approximately (albeit imperfectly) multiplicative. Together, these results support the existence of a robust, behaviorally-relevant target match representation in IT that is configured to minimally interfere with IT visual representations.
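
Two of the analyses described above reduce to simple computations, sketched here with hypothetical names (cross-validation and other controls omitted for brevity):

```python
from sklearn.svm import LinearSVC

def match_decoder_accuracy(X, is_match):
    """Linear separability of the target match signal.

    X : (n_trials, n_units) IT population responses;
    is_match : boolean label per trial (target match vs. distractor)."""
    return LinearSVC(dual=False).fit(X, is_match).score(X, is_match)

def multiplicative_model(visual_response, match_gain, is_match_trial):
    # Approximately multiplicative modulation: a unit's visual response is
    # scaled by a match gain rather than shifted additively.
    return visual_response * (match_gain if is_match_trial else 1.0)
```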


Author(s):  
Konstantin Ryabinin ◽  
Svetlana Chuprina ◽  
Ivan Labutin

In the last decade, advances in software and hardware have fueled growing interest in conducting experiments in the field of neurosciences, especially those related to human-machine interaction. There are many mature and popular platforms supporting experiments in this area, including systems for presenting stimuli. However, these solutions often lack high-level adaptability to specific conditions, specific experiment setups, and the third-party software and hardware that may be involved in experimental pipelines. This paper presents an adaptable solution based on ontology engineering that allows creating and tuning EEG-based brain-computer interfaces. This solution relies on the ontology-driven SciVi visual analytics platform developed earlier. In the present work, we introduce new capabilities of SciVi that enable organizing pipelines for neuroscience-related experiments, including the presentation of audio-visual stimuli as well as the retrieval, processing, and analysis of EEG data. The distinctive feature of our approach is the ontological description of both the neural interface and the processing tools used. This increases the semantic power of experiments, simplifies the reuse of pipeline parts between experiments, and allows data acquisition, storage, processing, and visualization to be distributed automatically across computing nodes in the network, balancing the computational load and supporting various hardware platforms, EEG devices, and stimulus controllers.
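
As a schematic of the ontology-driven idea (the schema below is hypothetical, not SciVi's actual ontology format), each pipeline node is described declaratively, including the host it should run on, and a dispatcher instantiates it from a registry:

```python
# Declarative description of an EEG experiment pipeline (hypothetical schema).
pipeline = {
    "stimulus":  {"type": "AudioVisualStimulus", "host": "presentation-pc"},
    "acquire":   {"type": "EEGSource", "device": "generic-eeg", "host": "acq-node"},
    "filter":    {"type": "BandPassFilter", "band": [1, 40], "host": "compute-node"},
    "analyze":   {"type": "SpectralAnalysis", "host": "compute-node"},
    "visualize": {"type": "ScalpMapView", "host": "operator-pc"},
}

def dispatch(pipeline, registry):
    """Instantiate each node from its description via a type registry,
    leaving placement on the named hosts to the runtime."""
    return {name: registry[desc["type"]](desc)
            for name, desc in pipeline.items()}
```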

