Low-level lead inhibits the human brain cation pump

Life Sciences ◽  
1991 ◽  
Vol 48 (22) ◽  
pp. 2149-2156 ◽  
Author(s):  
John M. Bertoni ◽  
Pamela M. Sprenkle
1967 ◽  
Vol 1 (3) ◽  
pp. 240-251 ◽  
Author(s):  
Eli Robins ◽  
James M. Robins ◽  
Adele B. Croninger ◽  
Sylvia G. Moses ◽  
Sylvia J. Spencer ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Yi-Chia Shan ◽  
Wei Fang ◽  
Yang-Chyuan Chang ◽  
Wen-Dien Chang ◽  
Jih-Huah Wu

In our previous study, low-level laser (LLL) stimulation at the palm with a stimulation frequency of 10 Hz induced significant brain activation in normal subjects. The electroencephalography (EEG) changes caused by light-emitting diode (LED) stimulation in normal subjects have not been investigated. This study aimed to identify the effects of LED stimulation on the human brain using EEG analysis. Moreover, the dosage was four times that used in the previous LLL study. An LED array stimulator (six LEDs, central wavelength 850 nm, output power 30 mW, operating frequency 10 Hz) was used as the stimulation source. LED stimulation was found to induce significant variation in alpha activity in the occipital, parietal, and temporal regions of the brain. Compared with the previous low-level laser study, LED stimulation had similar effects on EEG alpha (8–12 Hz) activity. Theta (4–7 Hz) power increased significantly in the posterior head region of the brain, and the effect lasted for at least 15 minutes after stimulation ceased. In addition, beta (13–35 Hz) intensity in the right parietal area increased significantly, and a biphasic dose response was observed in this study.
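As a rough, hypothetical illustration of the band-power comparison described in this abstract (not the authors' actual pipeline), the following Python sketch estimates theta, alpha, and beta power from a single surrogate EEG channel using Welch's method; the sampling rate, epoch length, and signal are all assumed.

```python
import numpy as np
from scipy.signal import welch

# Surrogate single-channel EEG; sampling rate and duration are assumptions
# made purely for this illustration.
fs = 500                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 60)         # 60 s of surrogate EEG

bands = {"theta": (4, 7), "alpha": (8, 12), "beta": (13, 35)}

# Welch power spectral density estimate with 2 s segments.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs <= hi)
    band_power = np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band
    print(f"{name} power: {band_power:.4f} (arbitrary units)")
```

Comparing such band-power estimates before, during, and after stimulation is the kind of analysis that would reveal the alpha, theta, and beta changes reported above.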


2021 ◽  
Author(s):  
Francesca M. Barbero ◽  
Roberta P. Calce ◽  
Siddharth Talwar ◽  
Bruno Rossion ◽  
Olivier Collignon

Abstract

Voices are arguably among the most relevant sounds in humans' everyday life, and several studies have suggested the existence of voice-selective regions in the human brain. Despite two decades of research, defining the human brain regions supporting voice recognition remains challenging. Moreover, whether neural selectivity to voices is merely driven by acoustic properties specific to human voices (e.g. spectrogram, harmonicity), or whether it also reflects a higher-level categorization response, is still under debate. Here, we objectively measured rapid automatic categorization responses to human voices with Fast Periodic Auditory Stimulation (FPAS) combined with electroencephalography (EEG). Participants were tested with stimulation sequences containing heterogeneous non-vocal sounds from different categories presented at 4 Hz (i.e., 4 stimuli per second), with a vocal sound appearing as every third stimulus (1.333 Hz). A few minutes of stimulation are sufficient to elicit robust 1.333 Hz voice-selective focal brain responses over superior temporal regions of individual participants. This response is virtually absent for sequences using frequency-scrambled sounds, but is clearly observed when voices are presented among sounds from musical instruments matched for pitch and harmonicity-to-noise ratio. Overall, our FPAS paradigm demonstrates that the human brain seamlessly categorizes human voices relative to other sounds, including matched musical instruments, and that voice-selective responses are at least partially independent of low-level acoustic features, making it a powerful and versatile tool for understanding human auditory categorization in general.

Significance statement

Voices are arguably among the most relevant sounds we hear in our everyday life, and several studies have corroborated the existence of regions in the human brain that respond preferentially to voices. However, whether this preference is driven by specific acoustic properties of voices or whether it reflects a higher-level categorization response to voices is still under debate. We propose a new approach to objectively identify rapid automatic voice-selective responses with frequency tagging and electroencephalographic recordings. In only four minutes of recording, we obtained robust voice-selective responses independent of low-level acoustic cues, making this approach highly promising for studying auditory perception in children and clinical populations.
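The frequency-tagging logic behind FPAS can be sketched in a few lines: compute the amplitude spectrum of a long EEG epoch and compare the response at the voice-presentation frequency (1.333 Hz) against neighboring frequency bins. The Python below is a hypothetical sketch under an assumed sampling rate, epoch length, and surrogate data, not the authors' analysis code.

```python
import numpy as np

fs = 512                 # sampling rate (Hz), assumed
duration = 60            # epoch length (s), assumed
rng = np.random.default_rng(1)
epoch = rng.standard_normal(fs * duration)   # surrogate single-channel EEG

# Amplitude spectrum; frequency resolution is 1/duration Hz, so the 1.333 Hz
# oddball frequency falls very close to an exact bin.
amps = np.abs(np.fft.rfft(epoch)) * 2 / len(epoch)
freqs = np.fft.rfftfreq(len(epoch), d=1 / fs)

def snr_at(target_hz, n_neighbors=10, skip=1):
    """Amplitude at the target bin divided by the mean of surrounding bins."""
    idx = int(np.argmin(np.abs(freqs - target_hz)))
    neighbors = np.r_[idx - skip - n_neighbors : idx - skip,
                      idx + skip + 1 : idx + skip + 1 + n_neighbors]
    return amps[idx] / amps[neighbors].mean()

print("SNR at 1.333 Hz (voice-selective response):", snr_at(4 / 3))
print("SNR at 4.000 Hz (general auditory response):", snr_at(4.0))
```

In a real recording, a value well above 1 at 1.333 Hz (and its harmonics) over superior temporal electrodes would correspond to the voice-selective response described in the abstract.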


Brain ◽  
2016 ◽  
Vol 139 (8) ◽  
pp. 2290-2306 ◽  
Author(s):  
Marie K. Bondulich ◽  
Tong Guo ◽  
Christopher Meehan ◽  
John Manion ◽  
Teresa Rodriguez Martin ◽  
...  

2020 ◽  
Author(s):  
Leonardo Ceravolo ◽  
Coralie Debracque ◽  
Thibaud Gruber ◽  
Didier Grandjean

Abstract

In recent years, research on voice processing, and in particular the study of the temporal voice areas (TVA), has been dedicated almost exclusively to the human voice. To characterize commonalities and differences in how primate vocalizations are represented in the human brain, the inclusion of closely related primates, especially chimpanzees and bonobos, is needed. We hypothesized that commonalities would depend on both phylogenetic and acoustic proximity, with chimpanzees ranking closest to Homo. Presenting human participants with vocalizations of four primate species (rhesus macaques, chimpanzees, bonobos, and humans), and either taking acoustic distance into account or removing voxels explained solely by the low-level acoustics of the vocalizations, we observed enhanced within-TVA activity in the left and right anterior superior temporal gyrus for chimpanzee vocalizations compared with those of all other species, and for chimpanzee compared with human vocalizations. Our results provide evidence for a common neural basis in the TVA for the processing of phylogenetically and acoustically close vocalizations, namely those of humans and chimpanzees.
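The control analysis mentioned in this abstract (discounting responses explained solely by low-level acoustics) can be illustrated, very loosely, as regressing acoustic features out of a voxel's responses and keeping the residuals. The sketch below uses made-up features and responses and is not the authors' actual method.

```python
import numpy as np

rng = np.random.default_rng(2)
n_stimuli = 40

# Made-up low-level acoustic features per vocalization (e.g. pitch, HNR,
# spectral centroid) and a surrogate voxel response to each stimulus.
acoustics = rng.standard_normal((n_stimuli, 3))
voxel = rng.standard_normal(n_stimuli)

# Ordinary least-squares fit of acoustic features to the voxel response,
# with an intercept column.
X = np.column_stack([np.ones(n_stimuli), acoustics])
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)

# Residual = voxel response with the acoustically explained part removed;
# species effects would then be tested on this residual.
residual = voxel - X @ beta
print("Variance explained by acoustics:", 1 - residual.var() / voxel.var())
```

Voxels whose responses are fully accounted for by such acoustic predictors would be excluded, so that any remaining species effect cannot be attributed to low-level acoustics alone.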

