Coordinating With a Robot Partner Affects Action Monitoring Related Neural Processing

2021 ◽  
Author(s):  
Artur Czeszumski ◽  
Anna L. Gert ◽  
Ashima Keshava ◽  
Ali Ghadirzadeh ◽  
Tilman Kalthoff ◽  
...  

Robots are starting to play a role in our social landscape, and they are becoming progressively more responsive, both physically and socially. This raises the question of how humans react to and interact with robots in a coordinated manner and what the neural underpinnings of such behavior are. This exploratory study aims to understand the differences between human-human and human-robot interactions at the behavioral level and from a neurophysiological perspective. For this purpose, we adapted a collaborative dynamical paradigm from Hwang et al. (1). All 16 participants held two corners of a tablet while collaboratively guiding a ball around a circular track, either with another participant or with a robot. At irregular intervals, the ball was perturbed outward, creating an artificial error in the behavior that required corrective measures to return the ball to the circular track. Concurrently, we recorded electroencephalography (EEG). In the behavioral data, we found higher ball velocity and larger positional error from the track in the human-human condition than in the human-robot condition. For the EEG data, we computed event-related potentials. To explore temporal and spatial differences between the two conditions, we used time regression with overlap control and corrected for multiple comparisons using threshold-free cluster enhancement (TFCE). We found a significant difference between human and robot partners, driven by significant clusters at fronto-central electrodes. The amplitudes were stronger with a robot partner, suggesting different neural processing. All in all, our exploratory study suggests that coordinating with robots affects action-monitoring-related processing. In the investigated paradigm, human participants treat errors during human-robot interaction differently from those made during interactions with other humans. These results could help improve communication between humans and robots using neural activity in real time.
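As a rough illustration of the statistical step described above, the following sketch runs a TFCE-corrected cluster permutation test on two hypothetical sets of single-trial amplitudes at one fronto-central electrode using MNE-Python. It does not reproduce the overlap-corrected time regression from the study; the array shapes, trial counts, and data are placeholder assumptions.

```python
import numpy as np
from mne.stats import permutation_cluster_test

rng = np.random.default_rng(0)
# Placeholder single-trial amplitudes at one fronto-central electrode:
# (n_trials, n_timepoints); real values would come from segmented EEG.
human_partner = rng.normal(size=(200, 512))
robot_partner = rng.normal(size=(200, 512))

# Requesting TFCE in MNE: pass a dict with 'start' and 'step'
# instead of a fixed cluster-forming threshold.
tfce = dict(start=0.0, step=0.2)

f_obs, _, p_values, _ = permutation_cluster_test(
    [human_partner, robot_partner],
    threshold=tfce,
    n_permutations=1000,
    tail=1,              # F-statistics are one-tailed
    seed=0,
)
print("time points significant at p < 0.05:", int((p_values < 0.05).sum()))
```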


2019 ◽  
Vol 8 (7) ◽  
pp. 1077 ◽  
Author(s):  
Ping-Song Chou ◽  
Sharon Chia-Ju Chen ◽  
Chung-Yao Hsu ◽  
Li-Min Liou ◽  
Meng-Ni Wu ◽  
...  

(1) Background: Although it is known that obstructive sleep apnea (OSA) impairs action-monitoring function, only limited information is available regarding the cerebral substrate underlying this phenomenon. (2) Methods: The modified Flanker task, error-related event-related potentials (ERPs), namely error-related negativity (ERN) and error positivity (Pe), and functional magnetic resonance imaging (fMRI) were used to evaluate the neural activities and functional connectivity underlying action-monitoring dysfunction in patients with different severities of OSA. (3) Results: A total of 14 control (Cont) subjects, 17 patients with moderate OSA (mOSA), and 10 patients with severe OSA (sOSA) were enrolled. A significant decline in the post-error correction rate on the modified Flanker task was observed when patients with mOSA were compared with Cont subjects; the comparison between patients with mOSA and sOSA did not reveal any significant difference. In the ERP analysis, ERN and Pe amplitudes were reduced in patients with mOSA compared with Cont subjects but increased again in patients with sOSA. fMRI revealed decreased correlations in multiple areas functionally connected to the anterior cingulate cortex in patients with mOSA compared with Cont subjects; however, these areas appeared to be reconnected in patients with sOSA. (4) Conclusions: The behavioral, neurophysiological, and functional imaging findings obtained in this study suggest that mOSA leads to action-monitoring dysfunction; however, compensatory neural recruitment might have contributed to the maintenance of action-monitoring function in patients with sOSA.
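For readers unfamiliar with how ERN and Pe values like these are usually quantified, the sketch below averages hypothetical response-locked error trials and takes mean amplitudes in conventional windows and sites (ERN at a fronto-central electrode around 0-100 ms after the erroneous response, Pe at a parietal electrode around 200-400 ms). The electrodes, windows, sampling rate, and data are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np

fs = 500                              # assumed sampling rate (Hz)
t = np.arange(-0.2, 0.6, 1 / fs)      # time axis relative to the response

# Placeholder response-locked error-trial epochs for two channels.
rng = np.random.default_rng(1)
fcz_epochs = rng.normal(size=(60, t.size))   # (n_error_trials, n_samples)
pz_epochs = rng.normal(size=(60, t.size))

def mean_amplitude(epochs, t, tmin, tmax):
    """Average over trials, then over a latency window (in seconds)."""
    window = (t >= tmin) & (t <= tmax)
    return epochs.mean(axis=0)[window].mean()

ern = mean_amplitude(fcz_epochs, t, 0.00, 0.10)   # error-related negativity
pe = mean_amplitude(pz_epochs, t, 0.20, 0.40)     # error positivity
print(f"ERN: {ern:.2f} a.u., Pe: {pe:.2f} a.u.")
```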


2021 ◽  
Vol 12 ◽  
Author(s):  
Yuwei Yang ◽  
Shunshun Du ◽  
Hui He ◽  
Chengming Wang ◽  
Xueke Shan ◽  
...  

Although risk decision-making plays an important role in leadership practice, the behavioral differences between people with differing levels of leadership, as well as the underlying neurocognitive mechanisms, remain unclear. In this study, the Ultimatum Game (UG) was used together with electroencephalography (EEG) to investigate the temporal course of the cognitive and emotional processes involved in economic decision-making in college students with high versus low leadership levels. Behaviorally, acceptance rates in the low leadership group were significantly higher when the partner was a computer than when the partner was a real human under unfair and sub-unfair conditions, whereas the high leadership group showed no significant difference in acceptance rates. Event-related potential (ERP) analysis further indicated a larger P3 amplitude in the low leadership group than in the high leadership group. We concluded that the difference between the high and low leadership groups was at least partly due to differences in their emotional management abilities.
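To make the group comparison concrete, here is a minimal sketch of one way such a P3 difference could be tested: mean amplitude in an assumed 300-500 ms window at a parietal site, compared between groups with an independent-samples t-test. The window, electrode, group sizes, and data are placeholders, not the authors' pipeline.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
fs = 250
t = np.arange(-0.2, 0.8, 1 / fs)
p3_window = (t >= 0.3) & (t <= 0.5)     # assumed P3 latency window

# Placeholder subject-average waveforms at Pz: (n_subjects, n_samples).
high_leadership = rng.normal(size=(20, t.size))
low_leadership = rng.normal(loc=0.5, size=(20, t.size))

# One P3 value per subject: mean amplitude inside the window.
p3_high = high_leadership[:, p3_window].mean(axis=1)
p3_low = low_leadership[:, p3_window].mean(axis=1)

t_stat, p_val = ttest_ind(p3_low, p3_high)
print(f"low vs. high leadership P3: t = {t_stat:.2f}, p = {p_val:.3f}")
```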


2019 ◽  
Vol 9 (12) ◽  
pp. 362
Author(s):  
Antonia M. Karellas ◽  
Paul Yielder ◽  
James J. Burkitt ◽  
Heather S. McCracken ◽  
Bernadette A. Murphy

Multisensory integration (MSI) is necessary for the efficient execution of many everyday tasks. Alterations in sensorimotor integration (SMI) have been observed in individuals with subclinical neck pain (SCNP). Altered audiovisual MSI has previously been demonstrated in this population using performance measures, such as reaction time. However, neurophysiological techniques have not been combined with performance measures in the SCNP population to determine differences in neural processing that may contribute to these behavioral characteristics. Electroencephalography (EEG) event-related potentials (ERPs) have been successfully used in recent MSI studies to show differences in neural processing between different clinical populations. This study combined behavioral and ERP measures to characterize MSI differences between healthy and SCNP groups. EEG was recorded as 24 participants performed 8 blocks of a simple reaction time (RT) MSI task, with each block consisting of 34 auditory (A), visual (V), and audiovisual (AV) trials. Participants responded to the stimuli by pressing a response key. Both groups responded fastest to the AV condition. The healthy group demonstrated significantly faster RTs for the AV and V conditions. There were significant group differences in neural activity from 100–140 ms post-stimulus onset, with the control group demonstrating greater MSI. Differences in brain activity and RT between individuals with SCNP and a control group indicate neurophysiological alterations in how individuals with SCNP process audiovisual stimuli. This suggests that SCNP alters MSI. This study presents novel EEG findings that demonstrate MSI differences in a group of individuals with SCNP.
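A common way to summarize the behavioral multisensory benefit described above is redundancy gain: how much faster responses to audiovisual stimuli are than to the faster of the two unisensory conditions. The sketch below computes this per participant from hypothetical median RTs; it is illustrative only and does not reproduce the study's race-model or ERP analyses.

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects = 24

# Placeholder median reaction times (ms) per subject and condition.
rt_a = rng.normal(330, 25, n_subjects)    # auditory
rt_v = rng.normal(350, 25, n_subjects)    # visual
rt_av = rng.normal(300, 25, n_subjects)   # audiovisual

# Redundancy gain: fastest unisensory RT minus audiovisual RT.
gain = np.minimum(rt_a, rt_v) - rt_av
print(f"mean multisensory gain: {gain.mean():.1f} ms "
      f"(positive = AV faster than best unisensory)")
```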


2020 ◽  
Vol 123 (3) ◽  
pp. 876-884 ◽  
Author(s):  
Gülsüm Akdeniz ◽  
Sadiye Gumusyayla ◽  
Gonul Vural ◽  
Hesna Bektas ◽  
Orhan Deniz

Migraine is a multifactorial brain disorder characterized by recurrent disabling headache attacks. One possible mechanism in the pathogenesis of migraine may be a decrease in inhibitory cortical stimuli in the primary visual cortex attributable to cortical hyperexcitability. The aim of this study was to investigate the neural correlates underlying face and face pareidolia processing in terms of the event-related potential (ERP) components N170, vertex positive potential (VPP), and N250 in patients with migraine. In total, 40 patients with migraine without aura, 23 patients with migraine with aura, and 30 healthy controls were enrolled. We recorded ERPs during the presentation of face and face pareidolia images and examined N170, VPP, and N250 mean amplitudes and latencies. N170 amplitude was significantly greater in patients with migraine with aura than in healthy controls, and VPP amplitude was significantly greater in patients with migraine without aura than in healthy controls. VPP responses were significantly earlier to faces (168.7 ms, SE = 1.46) than to pareidolia images (173.4 ms, SE = 1.41) in patients with migraine with aura. We did not find a significant difference in N250 amplitude between face and face pareidolia processing. A significant difference was observed between the groups for pareidolia in terms of N170 [F(2,86) = 14.75, P < 0.001] and VPP [F(2,86) = 16.43, P < 0.001] amplitudes. Early ERPs are a valuable tool for studying face processing in patients with migraine and for demonstrating visual cortical hyperexcitability. NEW & NOTEWORTHY Event-related potentials (ERPs) are important for understanding face and face pareidolia processing in patients with migraine. N170, vertex positive potential (VPP), and N250 ERPs were investigated. N170 emerged as a potential marker of cortical excitability for face and face pareidolia processing in patients with migraine.
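Peak measures such as the N170 and VPP values above are typically read off an average waveform as the extreme amplitude within a component window and its latency. The sketch below does this for an assumed 130-200 ms window on placeholder waveforms (N170 as the most negative point at an occipito-temporal site, VPP as the most positive point at the vertex); electrodes, window, and data are conventional assumptions, not the study's exact settings.

```python
import numpy as np

fs = 500
t = np.arange(-0.1, 0.5, 1 / fs)
window = (t >= 0.13) & (t <= 0.20)        # assumed N170/VPP window

rng = np.random.default_rng(4)
p8_avg = rng.normal(size=t.size)          # placeholder average waveform, P8
cz_avg = rng.normal(size=t.size)          # placeholder average waveform, Cz

def peak(waveform, t, mask, polarity):
    """Return (amplitude, latency in ms) of the windowed peak."""
    idx = np.flatnonzero(mask)
    segment = waveform[idx]
    rel = segment.argmin() if polarity == "neg" else segment.argmax()
    return segment[rel], t[idx[rel]] * 1000

n170_amp, n170_lat = peak(p8_avg, t, window, "neg")
vpp_amp, vpp_lat = peak(cz_avg, t, window, "pos")
print(f"N170: {n170_amp:.2f} a.u. at {n170_lat:.0f} ms; "
      f"VPP: {vpp_amp:.2f} a.u. at {vpp_lat:.0f} ms")
```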


2016 ◽  
Vol 2016 ◽  
pp. 1-8 ◽  
Author(s):  
Ahmed Izzidien ◽  
Sriharasha Ramaraju ◽  
Mohammed Ali Roula ◽  
Peter W. McCarthy

We aimed to measure the post-intervention effects of A-tDCS (anodal tDCS) on brain potentials commonly used in BCI applications, namely event-related desynchronization (ERD), event-related synchronization (ERS), and the P300. Ten subjects received sham and 1.5 mA A-tDCS for 15 minutes in two separate sessions in a double-blind, randomized order. Post-intervention EEG was recorded while subjects performed a spelling task based on the oddball paradigm, during which P300 power was measured. Additionally, ERD and ERS were measured while subjects performed mental motor imagery tasks. ANOVA results showed that absolute P300 power differed significantly between sham and A-tDCS when measured over channel Pz (p = 0.0002). However, the difference in ERD and ERS power was statistically insignificant, contrary to the mainstay of the literature on the subject. The outcomes confirm a possible post-intervention effect of tDCS on the P300 response. Heightening the P300 response using A-tDCS may help improve the accuracy of P300 spellers for neurologically impaired subjects. Additionally, it may aid the development of neurorehabilitation methods targeting the parietal lobe.
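The sham-versus-anodal contrast over Pz can be illustrated with a paired test on one P300 power value per subject per session. The sketch below uses placeholder per-subject values (e.g., mean squared amplitude in an assumed 250-500 ms window); it does not reproduce the speller pipeline or the ANOVA reported above.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(5)
n_subjects = 10

# Placeholder P300 power at Pz (arbitrary units), one value per subject
# and session.
p300_sham = rng.normal(1.0, 0.2, n_subjects)
p300_anodal = rng.normal(1.3, 0.2, n_subjects)

t_stat, p_val = ttest_rel(p300_anodal, p300_sham)
print(f"anodal vs. sham P300 power: t = {t_stat:.2f}, p = {p_val:.4f}")
```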


2021 ◽  
Vol 12 ◽  
Author(s):  
Qiaoling Sun ◽  
Yehua Fang ◽  
Yongyan Shi ◽  
Lifeng Wang ◽  
Xuemei Peng ◽  
...  

Objective: Auditory verbal hallucinations (AVH), whose mechanisms remain unclear, cause extreme distress to patients with schizophrenia. Deficits in inhibitory top-down control may be linked to AVH; therefore, this study focused on inhibitory top-down control in schizophrenia patients with AVH. Method: We recruited 40 schizophrenia patients, comprising 20 patients with AVH and 20 without (non-AVH), and 23 healthy controls, and used event-related potentials to investigate differences in N2 and P3 amplitude and latency among these participants during a Go/NoGo task. Results: Relative to healthy controls, the two patient groups showed longer reaction times (RT) and reduced accuracy. Both patient groups had smaller NoGo P3 amplitudes than the healthy controls, and the AVH patients showed smaller NoGo P3 amplitudes than the non-AVH patients. In all groups, the parietal area showed smaller NoGo P3 than the frontal and central areas. However, no significant differences in N2 or Go P3 amplitude were found among the three groups. Conclusions: AVH patients may have poorer inhibitory top-down control, which might be involved in the occurrence of AVH. We hope these results enhance understanding of the pathology of AVH.
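A minimal sketch of the kind of three-group comparison reported here: one NoGo-P3 value per participant (mean amplitude in an assumed 300-500 ms window at a fronto-central site), compared across groups with a one-way ANOVA. The window, site, group sizes, and data are placeholders for illustration.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(6)

# Placeholder per-subject NoGo-P3 mean amplitudes (e.g., 300-500 ms at Cz).
controls = rng.normal(6.0, 1.5, 23)
non_avh = rng.normal(5.0, 1.5, 20)
avh = rng.normal(4.0, 1.5, 20)

f_stat, p_val = f_oneway(controls, non_avh, avh)
print(f"NoGo P3 across groups: F = {f_stat:.2f}, p = {p_val:.4f}")
```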


2018 ◽  
Author(s):  
Kyveli Kompatsiari ◽  
Jairo Pérez-Osorio ◽  
Davide De Tommaso ◽  
Giorgio Metta ◽  
Agnieszka Wykowska

The present study highlights the benefits of using well-controlled experimental designs, grounded in experimental psychology research and objective neuroscientific methods, for generating progress in human-robot interaction (HRI) research. In this study, we implemented a well-studied paradigm of attentional cueing through gaze (the so-called “joint attention” or “gaze cueing”) in an HRI protocol involving the iCub robot. We replicated the standard phenomenon of joint attention both in terms of behavioral measures and event-related potentials of the EEG signal. Our methodology of combining neuroscience methods with an HRI protocol opens promising avenues both for a better design of robots which are to interact with humans, and also for increasing the ecological validity of research in social and cognitive neuroscience.
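On the behavioral side, the gaze-cueing (joint attention) effect replicated here is typically the difference between mean RTs on invalidly and validly cued trials. The sketch below computes that per participant and tests it against zero on placeholder data; it is a generic illustration, not the study's analysis script.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(7)
n_subjects = 24

# Placeholder mean RTs (ms): target appearing at the gazed-at (valid)
# versus the opposite (invalid) location.
rt_valid = rng.normal(420, 30, n_subjects)
rt_invalid = rng.normal(445, 30, n_subjects)

cueing_effect = rt_invalid - rt_valid       # positive = joint-attention benefit
t_stat, p_val = ttest_1samp(cueing_effect, 0.0)
print(f"gaze-cueing effect: {cueing_effect.mean():.1f} ms, "
      f"t = {t_stat:.2f}, p = {p_val:.4f}")
```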


2019 ◽  
Author(s):  
Amr Farahat ◽  
Christoph Reichert ◽  
Catherine M. Sweeney-Reed ◽  
Hermann Hinrichs

Objective: Convolutional neural networks (CNNs) have proven successful as function approximators and have therefore been used for classification problems, including electroencephalography (EEG) signal decoding for brain-computer interfaces (BCI). Artificial neural networks, however, are considered black boxes, because they usually have thousands of parameters, making interpretation of their internal processes challenging. Here we systematically evaluate the use of CNNs for EEG signal decoding and investigate a method for visualizing the CNN model decision process. Approach: We developed a CNN model to decode the covert focus of attention from EEG event-related potentials during object selection. We compared the performance of the CNN and the commonly used linear discriminant analysis (LDA) classifier on datasets with different dimensionality and analyzed transfer learning capacity. Moreover, we validated the impact of single model components by systematically altering the model. Furthermore, we investigated the use of saliency maps as a tool for visualizing the spatial and temporal features driving the model output. Main results: The CNN model and the LDA classifier achieved comparable accuracy on the lower-dimensional dataset, but the CNN significantly exceeded LDA performance on the higher-dimensional dataset (without hypothesis-driven preprocessing), achieving an average decoding accuracy of 90.7% (chance level = 8.3%). Parallel convolutions, tanh or ELU activation functions, and dropout regularization proved valuable for model performance, whereas sequential convolutions, the ReLU activation function, and batch normalization reduced accuracy or yielded no significant difference. Saliency maps revealed meaningful features, displaying the typical spatial distribution and latency of the P300 component expected during this task. Significance: Following this systematic evaluation, we provide recommendations for when and how to use CNN models in EEG decoding. Moreover, we propose a new approach for investigating the neural correlates of a cognitive task by training CNN models on raw high-dimensional EEG data and utilizing saliency maps for relevant feature extraction.
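To illustrate the general idea of a CNN over EEG epochs plus a gradient-based saliency map, here is a minimal PyTorch sketch. It is not the authors' architecture or training pipeline; the channel count, epoch length, class count (12 classes, matching the reported ~8.3% chance level), layer sizes, and the class name EEGDecoderSketch are all assumptions, and the input is a random placeholder trial.

```python
import torch
import torch.nn as nn

N_CHANNELS, N_TIMES, N_CLASSES = 29, 250, 12   # assumed dimensions

class EEGDecoderSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 25), padding=(0, 12)),  # temporal filters
            nn.ELU(),
            nn.Conv2d(8, 16, kernel_size=(N_CHANNELS, 1)),          # spatial filters
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 5)),
            nn.Dropout(0.5),
        )
        self.classifier = nn.Linear(16 * (N_TIMES // 5), N_CLASSES)

    def forward(self, x):  # x: (batch, 1, n_channels, n_times)
        return self.classifier(self.features(x).flatten(start_dim=1))

model = EEGDecoderSketch().eval()
trial = torch.randn(1, 1, N_CHANNELS, N_TIMES, requires_grad=True)  # placeholder epoch

# Saliency map: gradient of the predicted class score w.r.t. the input,
# indicating which channels and time points drive the decision.
scores = model(trial)
scores[0, scores.argmax()].backward()
saliency = trial.grad.abs().squeeze()   # shape: (n_channels, n_times)
print(saliency.shape)
```

In practice the saliency maps would be averaged over correctly classified trials of a trained model before interpreting their spatial and temporal structure.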


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0234219
Author(s):  
Georgette Argiris ◽  
Raffaella I. Rumiati ◽  
Davide Crepaldi

Category-specific impairments in patients with semantic deficits have broadly dissociated into natural and artificial kinds. However, how the category of food (more specifically, fruits and vegetables) fits into this distinction has been difficult to interpret, given a pattern of deficits that has inconsistently mapped onto either kind despite its intuitive membership in the natural domain. The present study explores the effects of manipulating a visual sensory (i.e., color) or functional (i.e., orientation) feature on the subsequent semantic processing of fruits and vegetables (and, by comparison, tools), first at the behavioral and then at the neural level. The categorization of natural (i.e., fruits/vegetables) and artificial (i.e., utensils) entities was investigated via cross-modal priming. Reaction time analysis indicated reduced priming for color-modified natural entities and orientation-modified artificial entities. Standard event-related potential (ERP) analysis was performed, in addition to linear classification. For natural entities, an N400 effect at central channel sites was observed for the color-modified condition relative to the normal and orientation-modified conditions, and this difference was confirmed by the classification analysis. Conversely, there was no significant difference between conditions for the artificial category in either analysis. These findings provide strong evidence that color is an integral property for the categorization of fruits and vegetables, substantiating the claim that feature-based processing guides categorization as a function of semantic category.
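The N400 effect described for color-modified natural entities is commonly quantified as a mean-amplitude difference in roughly a 300-500 ms window at central sites, tested with a paired comparison across participants. The sketch below does exactly that on placeholder per-subject waveforms; the window, electrode, participant count, and data are assumptions for illustration.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(8)
fs = 250
t = np.arange(-0.2, 0.8, 1 / fs)
n400_window = (t >= 0.3) & (t <= 0.5)       # assumed N400 window

# Placeholder per-subject average waveforms at Cz: (n_subjects, n_samples).
normal_cond = rng.normal(size=(30, t.size))
color_modified = rng.normal(loc=-0.4, size=(30, t.size))

amp_normal = normal_cond[:, n400_window].mean(axis=1)
amp_color = color_modified[:, n400_window].mean(axis=1)

t_stat, p_val = ttest_rel(amp_color, amp_normal)
print(f"N400 effect (color-modified minus normal): "
      f"{(amp_color - amp_normal).mean():.2f} a.u., "
      f"t = {t_stat:.2f}, p = {p_val:.4f}")
```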

