Real-Time Single-Cell Monitoring of Drug Effects Using Droplet-Based Microfluidic Technology: A Proof-of-Concept Study

Author(s):  
Muge Kasim ◽  
Elif Gencturk ◽  
Kutlu O. Ulgen
2013 ◽  
Vol 27 (1) ◽  
pp. 138-148 ◽  
Author(s):  
Annette Beatrix Brühl ◽  
Sigrid Scherpiet ◽  
James Sulzer ◽  
Philipp Stämpfli ◽  
Erich Seifritz ◽  
...  

2020 ◽  
Author(s):  
Eleonora De Filippi ◽  
Mara Wolter ◽  
Bruno Melo ◽  
Carlos J. Tierra-Criollo ◽  
Tiago Bortolini ◽  
...  

Abstract
During the last decades, neurofeedback training for emotional self-regulation has received significant attention from both the scientific and clinical communities. However, most studies have focused on broader emotional states such as "negative vs. positive", primarily due to our poor understanding of the functional anatomy of more complex emotions at the electrophysiological level. Our proof-of-concept study aims at investigating the feasibility of classifying two complex emotions that have been implicated in mental health, namely tenderness and anguish, using features extracted from the electroencephalogram (EEG) signal in healthy participants. Electrophysiological data were recorded from fourteen participants during a block-designed experiment consisting of emotional self-induction trials combined with a multimodal virtual scenario. For the within-subject classification, a linear Support Vector Machine was trained with two sets of samples: 1) random cross-validation over the sliding windows of all trials; and 2) strategic cross-validation, assigning all the windows of one trial to the same fold. Spectral features, together with the frontal-alpha asymmetry, were extracted using Complex Morlet Wavelet analysis. Classification with these features showed an accuracy of 79.3% on average under random cross-validation and 73.3% under strategic cross-validation. We extracted a second set of features from the amplitude time-series correlation analysis, which significantly enhanced random cross-validation accuracy while showing performance similar to the spectral features under strategic cross-validation. These results suggest that complex emotions have distinct electrophysiological correlates, which paves the way for future EEG-based, real-time neurofeedback training of complex emotional states.
Significance statement
There is still little understanding of the correlates of high-order emotions (i.e., anguish and tenderness) in the physiological signals recorded with EEG. Most studies have investigated emotions using functional magnetic resonance imaging (fMRI), including its real-time application in neurofeedback training. However, for therapeutic applications, EEG is a more suitable tool with regard to cost and practicability. Therefore, our proof-of-concept study aims at establishing a method for classifying complex emotions that can later be used for EEG-based neurofeedback on emotion regulation. We recorded EEG signals during a multimodal, near-immersive emotion-elicitation experiment. The results demonstrate that intraindividual classification of discrete emotions with features extracted from the EEG is feasible and may be implemented in real time to enable neurofeedback.
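As a rough illustration of the two cross-validation schemes described in the abstract, the Python sketch below contrasts window-level random folds with trial-grouped ("strategic") folds using scikit-learn. The feature matrix, labels, and trial assignments are placeholders, not the study's data or exact pipeline.

```python
# Minimal sketch: random vs. strategic (trial-grouped) cross-validation
# for within-subject SVM classification of two emotions from EEG windows.
# All data below are placeholders standing in for extracted spectral features.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, windows_per_trial, n_features = 40, 10, 64
X = rng.normal(size=(n_trials * windows_per_trial, n_features))  # per-window spectral features (placeholder)
y = np.repeat(rng.integers(0, 2, size=n_trials), windows_per_trial)  # 0 = tenderness, 1 = anguish (placeholder)
trial_id = np.repeat(np.arange(n_trials), windows_per_trial)         # which trial each window came from

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))

# 1) Random cross-validation: windows from the same trial may land in both
#    the training and the test fold.
random_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc_random = cross_val_score(clf, X, y, cv=random_cv).mean()

# 2) Strategic cross-validation: all windows of a trial stay in one fold,
#    avoiding temporal leakage between training and test data.
strategic_cv = GroupKFold(n_splits=5)
acc_strategic = cross_val_score(clf, X, y, groups=trial_id, cv=strategic_cv).mean()

print(f"random CV accuracy:    {acc_random:.3f}")
print(f"strategic CV accuracy: {acc_strategic:.3f}")
```

With real features, the gap between the two accuracies indicates how much the random scheme benefits from within-trial correlations, which is why the trial-grouped estimate is the more conservative one.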


2018 ◽  
Vol 44 (suppl_1) ◽  
pp. S174-S174
Author(s):  
Natasza Orlov ◽  
Vincent Giampietro ◽  
Owen O’Daly ◽  
Gareth Barker ◽  
Katya Rubia ◽  
...  

2015 ◽  
Vol 112 (32) ◽  
pp. 9978-9983 ◽  
Author(s):  
David Calligaris ◽  
Daniel R. Feldman ◽  
Isaiah Norton ◽  
Olutayo Olubiyi ◽  
Armen N. Changelian ◽  
...  

We present a proof-of-concept study designed to support the clinical development of mass spectrometry imaging (MSI) for the detection of pituitary tumors during surgery. We analyzed six nonpathological (NP) human pituitary glands and 45 hormone-secreting and nonsecreting (NS) human pituitary adenomas by matrix-assisted laser desorption/ionization (MALDI) MSI. We show that the distribution of pituitary hormones such as prolactin (PRL), growth hormone (GH), adrenocorticotropic hormone (ACTH), and thyroid-stimulating hormone (TSH) in both normal and tumor tissues can be assessed with this approach. The presence of most of the pituitary hormones was confirmed by MS/MS and pseudo-MS/MS methods, and subtyping of pituitary adenomas was performed using principal component analysis (PCA) and support vector machine (SVM) classification. Our proof-of-concept study demonstrates that MALDI MSI could be used to directly detect excessive hormonal production from functional pituitary adenomas and to broadly classify pituitary adenomas using statistical and machine learning analyses. The tissue characterization can be completed in fewer than 30 min and could therefore be applied for near-real-time detection and delineation of pituitary tumors during intraoperative surgical decision-making.
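For orientation, a minimal Python sketch of the PCA + SVM subtyping step is shown below. The spectra, sample counts, and subtype labels are placeholders chosen for illustration; this is not the authors' actual pipeline or data.

```python
# Minimal sketch of PCA-based dimensionality reduction followed by a linear
# SVM for adenoma subtyping, assuming each row is one averaged MALDI mass
# spectrum per tissue sample (placeholder data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_mz_bins = 51, 2000                  # e.g. 6 normal glands + 45 adenomas (placeholder shape)
spectra = rng.random((n_samples, n_mz_bins))     # placeholder intensity matrix over m/z bins
subtype = rng.integers(0, 3, size=n_samples)     # placeholder subtype labels (e.g. PRL-, GH-, ACTH-secreting)

# Compress the high-dimensional spectra to a few principal components,
# then classify subtypes with a linear SVM.
model = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))
accuracy = cross_val_score(model, spectra, subtype, cv=5).mean()
print(f"cross-validated subtype accuracy: {accuracy:.3f}")
```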


2013 ◽  
Vol 442 (1-2) ◽  
pp. 20-26 ◽  
Author(s):  
James D. Stephens ◽  
Brian R. Kowalczyk ◽  
Bruno C. Hancock ◽  
Goldi Kaul ◽  
Cetin Cetinkaya

2021 ◽  
Author(s):  
Victor E Staartjes ◽  
Anna Volokitin ◽  
Luca Regli ◽  
Ender Konukoglu ◽  
Carlo Serra

Abstract
BACKGROUND: Current intraoperative orientation methods rely on preoperative imaging, are resource-intensive to implement, or are difficult to interpret. Real-time, reliable anatomic recognition would constitute another strong pillar on which neurosurgeons could rest for intraoperative orientation.
OBJECTIVE: To assess, in a proof-of-concept study, the feasibility of machine vision algorithms identifying anatomic structures using only the endoscopic camera, without prior explicit anatomo-topographic knowledge.
METHODS: We developed and validated a deep learning algorithm to detect the nasal septum, the middle turbinate, and the inferior turbinate during endoscopic endonasal approaches, based on endoscopy videos from 23 different patients. The model was trained in a weakly supervised manner on 18 patients and validated on 5. Performance was compared against a baseline consisting of the average positions of the training ground-truth labels, using a semiquantitative 3-tiered system.
RESULTS: We used 367 images extracted from the videos of 18 patients for training, and 182 test images extracted from the videos of another 5 patients for testing the fully developed model. The prototype machine vision algorithm identified the 3 endonasal structures qualitatively well. Compared to the baseline model based on location priors, the algorithm demonstrated slightly but statistically significantly (P < .001) improved annotation performance.
CONCLUSION: Automated recognition of anatomic structures in endoscopic videos by means of a machine vision model, using only the endoscopic camera and no prior explicit anatomo-topographic knowledge, is feasible. This proof of concept encourages further development of fully automated software for real-time intraoperative anatomic guidance during surgery.
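A key methodological point in this abstract is that frames from a given patient appear only in the training set or only in the validation set. The Python sketch below illustrates such a patient-level split with scikit-learn; the frame counts and patient assignments are placeholders, and the deep learning model itself is not reproduced here.

```python
# Illustrative patient-level split: every frame from a patient ends up
# entirely in either the training or the validation set (roughly 18 vs. 5
# of 23 patients). Frame and patient assignments are placeholders.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

n_frames = 549                                  # e.g. 367 training + 182 test frames, as in the study
patient_id = np.arange(n_frames) % 23           # placeholder: frames assigned round-robin to 23 patients
frame_index = np.arange(n_frames).reshape(-1, 1)

# Hold out about 5 of the 23 patients (and all of their frames) for validation.
splitter = GroupShuffleSplit(n_splits=1, test_size=5 / 23, random_state=0)
train_idx, val_idx = next(splitter.split(frame_index, groups=patient_id))

train_patients = set(patient_id[train_idx])
val_patients = set(patient_id[val_idx])
assert train_patients.isdisjoint(val_patients)  # no patient contributes frames to both sets
print(f"{len(train_patients)} training patients, {len(val_patients)} validation patients")
```

Splitting by patient rather than by frame is what keeps the reported validation performance from being inflated by near-duplicate frames of the same anatomy.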

