Proceedings #44: Closed-loop apparatus for brain state-dependent tES: a proof of principle based on SMR

2019 ◽ Vol 12 (2) ◽ pp. e114-e116
Author(s): Eliana Garcia-Cossio, Sophia Wunder, Klaus Schellhorn
2016 ◽ Vol 9 (3) ◽ pp. 415-424
Author(s): Dominic Kraus, Georgios Naros, Robert Bauer, Fatemeh Khademi, Maria Teresa Leão, ...

2016 ◽ Vol 127 (3) ◽ pp. e41
Author(s): C. Zrenner, J. Tünnerhoff, C. Zipser, F. Müller-Dahlhaus, U. Ziemann

2021 ◽ Vol 11 (1) ◽ pp. 38
Author(s): Aqsa Shakeel, Takayuki Onojima, Toshihisa Tanaka, Keiichi Kitajo

Assessing the instantaneous brain state with electroencephalography (EEG) in a real-time closed-loop setup is technically challenging, because defining the current state, such as the instantaneous phase and amplitude, requires predicting future signals. A conventional Yule–Walker (YW)-based autoregressive (AR) model has been used to accomplish this in real time, but a brain state-dependent real-time closed-loop system employing an adaptive method has not yet been explored. Our primary purpose was to investigate whether time-series forward prediction using an adaptive least mean square (LMS)-based AR model could be implemented in a real-time closed-loop system. EEG state-dependent triggers were synchronized with the peaks and troughs of alpha oscillations in both an open-eyes resting state and a visual task. For both the resting and visual conditions, statistical analysis showed that the proposed method delivered triggers at a specific phase of the EEG oscillations for all participants. These individual results show that the LMS-based AR model was successfully implemented in a real-time closed-loop system targeting specific phases of alpha oscillations, and that it can serve as an adaptive, low-computational-load alternative to conventional and machine-learning approaches.
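The abstract names the core technique but not its implementation. As an illustration only, the sketch below shows how LMS-based adaptive AR forward prediction works in principle: AR coefficients are updated sample-by-sample with the LMS rule, then one-step predictions are iterated to forecast the next few samples of an alpha-band oscillation. All function names, parameter values, and signal settings here are assumptions for the demo, not the authors' configuration.

```python
import numpy as np

def lms_ar_forecast(x, order=6, mu=0.01, horizon=8):
    """Adapt AR coefficients to x with the LMS rule, then iterate
    one-step predictions to forecast `horizon` future samples."""
    w = np.zeros(order)                      # AR coefficients
    for n in range(order, len(x)):
        past = x[n - order:n][::-1]          # most recent sample first
        err = x[n] - w @ past                # one-step prediction error
        w += mu * err * past                 # LMS weight update
    buf = list(x[-order:])                   # seed with the latest samples
    preds = []
    for _ in range(horizon):
        past = np.asarray(buf[-order:])[::-1]
        p = float(w @ past)
        preds.append(p)
        buf.append(p)                        # feed each prediction back in
    return np.array(preds)

# Demo: forecast a 10 Hz "alpha" sinusoid (sampled at 100 Hz for brevity);
# AR(2) suffices for a single noiseless sinusoid.
fs = 100
t = np.arange(2000) / fs
x = np.sin(2 * np.pi * 10 * t)
preds = lms_ar_forecast(x, order=2, mu=0.2, horizon=8)
```

In a closed-loop setting, such a forecast compensates for processing and stimulus-delivery latency: the trigger is issued when the predicted segment crosses the target phase (e.g., a peak or trough), rather than when the already-delayed measured signal does.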


Author(s): Celia K S Lau, Meghan Jelen, Michael D Gordon

Abstract: Feeding is an essential part of animal life that is greatly impacted by the sense of taste. Although taste detection at the periphery has been characterized extensively, higher-order taste and feeding circuits are still being elucidated. Here, we use an automated closed-loop optogenetic activation screen to detect novel taste and feeding neurons in Drosophila melanogaster. Out of 122 Janelia FlyLight Project GAL4 lines preselected based on expression pattern, we identify six lines that acutely promote feeding and 35 lines that inhibit it. As proof of principle, we follow up on R70C07-GAL4, which labels neurons that strongly inhibit feeding. Using split-GAL4 lines to isolate subsets of the R70C07-GAL4 population, we find both appetitive and aversive neurons. Furthermore, we show that R70C07-GAL4 labels putative second-order taste interneurons that contact both sweet and bitter sensory neurons. These results serve as a resource for further functional dissection of fly feeding circuits.


Author(s): Andreas Meinel, Jan Sosulski, Stephan Schraivogel, Janine Reis, Michael Tangermann
