Seeing the Same Words Differently: The Time Course of Automaticity and Top–Down Intention in Reading

2015 ◽  
Vol 27 (8) ◽  
pp. 1542-1551 ◽  
Author(s):  
Kristof Strijkers ◽  
Daisy Bertrand ◽  
Jonathan Grainger

We investigated how linguistic intention affects the time course of visual word recognition by comparing the brain's electrophysiological response to a word's lexical frequency, a well-established psycholinguistic marker of lexical access, when participants actively retrieved the meaning of the written input (semantic categorization) versus when no language processing was necessary (ink color categorization). In the semantic task, the ERPs elicited by high-frequency words started to diverge from those elicited by low-frequency words as early as 120 msec after stimulus onset. By contrast, when participants categorized the colored font of the very same words in the color task, word frequency did not modulate ERPs until some 100 msec later (220 msec poststimulus onset) and did so for a shorter period and over a smaller scalp distribution. The results demonstrate that, although written words indeed elicit automatic recognition processes in the brain, the speed and quality of lexical processing critically depend on the top–down intention to engage in a linguistic task.

2016 ◽  
Vol 116 (6) ◽  
pp. 2497-2512 ◽  
Author(s):  
Anne Kösem ◽  
Anahita Basirat ◽  
Leila Azizi ◽  
Virginie van Wassenhove

During speech listening, the brain parses a continuous acoustic stream of information into computational units (e.g., syllables or words) necessary for speech comprehension. Recent neuroscientific hypotheses have proposed that neural oscillations contribute to speech parsing, but whether they do so on the basis of acoustic cues (bottom-up acoustic parsing) or as a function of available linguistic representations (top-down linguistic parsing) is unknown. In this magnetoencephalography study, we contrasted acoustic and linguistic parsing using bistable speech sequences. While listening to the speech sequences, participants were asked to maintain one of the two possible speech percepts through volitional control. We predicted that the tracking of speech dynamics by neural oscillations would not only follow the acoustic properties but also shift in time according to the participant's conscious speech percept. Our results show that the latency of high-frequency activity (specifically, beta and gamma bands) varied as a function of the perceptual report. In contrast, the phase of low-frequency oscillations was not strongly affected by top-down control. Whereas changes in low-frequency neural oscillations were compatible with the encoding of prelexical segmentation cues, high-frequency activity specifically reflected an individual's conscious speech percept.


1998 ◽  
Vol 2 ◽  
pp. 115-122
Author(s):  
Donatas Švitra ◽  
Jolanta Janutėnienė

In the practice of metal cutting it is necessary to overcome vibration of the cutting tool, the machined part, and the units of the machine tool. In many cases these vibrations are an obstacle to increasing the productivity and quality of machining on metal-cutting machine tools. Vibration during metal cutting is a very diverse phenomenon, both in its nature and in the form of the oscillatory motion. The most general classification of cutting vibrations divides them into forced vibrations and autovibrations. The most difficult to remove, and the most poorly investigated, are the autovibrations, i.e., vibrations arising in the absence of external periodic forces. The autovibrations induced by the cutting process on metal-cutting machines are of two types: low-frequency autovibrations and high-frequency autovibrations. When low-frequency autovibrations appear, the cutting process must be stopped and the cause of the vibrations eliminated; otherwise there is a danger of damage to both machine and tool. In the case of high-frequency autovibrations the machine apparently operates quietly, but the machined surface exhibits fine roughness. The frequency of autovibrations can reach 5000 Hz and more.
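The low- versus high-frequency split described above can be illustrated with a short sketch that estimates the dominant frequency of a measured vibration signal via an FFT. This is not from the article: the accelerometer setup, sampling rate, and the 500 Hz decision threshold are all illustrative assumptions.

```python
import numpy as np

def dominant_vibration_frequency(signal, sample_rate_hz):
    """Return the dominant frequency (Hz) of a vibration signal via FFT."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))  # drop DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    return freqs[np.argmax(spectrum)]

def classify_autovibration(signal, sample_rate_hz, threshold_hz=500.0):
    """Crude split into low- vs. high-frequency autovibration.

    The 500 Hz threshold is purely illustrative; the article only notes
    that autovibration frequencies can reach 5000 Hz and more.
    """
    f = dominant_vibration_frequency(signal, sample_rate_hz)
    label = "high-frequency" if f >= threshold_hz else "low-frequency"
    return label, f
```

In a monitoring context, a high-frequency verdict would flag surface-roughness concerns, while a low-frequency verdict would call for stopping the cut, as the abstract recommends.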


2013 ◽  
Vol 25 (2) ◽  
pp. 175-187 ◽  
Author(s):  
Jihoon Oh ◽  
Jae Hyung Kwon ◽  
Po Song Yang ◽  
Jaeseung Jeong

Neural responses in early sensory areas are influenced by top–down processing. In the visual system, early visual areas have been shown to actively participate in top–down processing based on their topographical properties. Although it has been suggested that the auditory cortex is involved in top–down control, functional evidence of topographic modulation is still lacking. Here, we show that mental auditory imagery for familiar melodies induces significant activation in the frequency-responsive areas of the primary auditory cortex (PAC). This activation is related to the characteristics of the imagery: when subjects were asked to imagine high-frequency melodies, we observed increased activation in the high- versus low-frequency response area; when the subjects were asked to imagine low-frequency melodies, the opposite was observed. Furthermore, we found that, among the tonotopic subfields of the PAC, area A1 is more closely related to the observed frequency-related modulation than area R. Our findings suggest that top–down processing in the auditory cortex relies on a mechanism similar to that used in the perception of external auditory stimuli, comparable to what has been observed in early visual areas.


2021 ◽  
pp. 1-34
Author(s):  
Hyein Jeong ◽  
Emiel van den Hoven ◽  
Sylvain Madec ◽  
Audrey Bürki

Usage-based theories assume that all aspects of language processing are shaped by the distributional properties of the language. The frequency not only of words but also of larger chunks plays a major role in language processing. These theories predict that the frequency of phrases influences the time needed to prepare these phrases for production and their acoustic duration. By contrast, dominant psycholinguistic models of utterance production predict no such effects. In these models, the system keeps track of the frequency of individual words but not of co-occurrences. This study investigates the extent to which the frequency of phrases impacts naming latencies and acoustic duration with a balanced design, where the same words are recombined to build high- and low-frequency phrases. Participants' brain signal was recorded to obtain information on the electrophysiological bases and functional locus of frequency effects. Forty-seven participants named pictures using high- and low-frequency adjective–noun phrases. Naming latencies were shorter for high-frequency than low-frequency phrases. There was no evidence that phrase frequency impacted acoustic duration. The electrophysiological signal differed between high- and low-frequency phrases in time windows that do not overlap with conceptualization or articulation processes. These findings suggest that phrase frequency influences the preparation of phrases for production, irrespective of the lexical properties of the constituents, and that this effect originates at least partly when speakers access and encode linguistic representations. Moreover, this study provides information on how the brain signal recorded during the preparation of utterances changes with the frequency of word combinations.


2019 ◽  
Vol 18 (8) ◽  
pp. 658-666 ◽  
Author(s):  
Ching-Hsiang Chen ◽  
Kuo-Sheng Hung ◽  
Yu-Chu Chung ◽  
Mei-Ling Yeh

Background: Stroke, a medical condition that causes physical disability and mental health problems, impacts negatively on quality of life. Post-stroke rehabilitation is critical to restoring quality of life in these patients. Objectives: This study was designed to evaluate the effect of a mind–body interactive qigong intervention on the physical and mental aspects of quality of life, considering bio-physiological and mental covariates in subacute stroke inpatients. Methods: A randomized controlled trial with repeated measures design was used. A total of 68 participants were recruited from the medical and rehabilitation wards at a teaching hospital in northern Taiwan and then randomly assigned either to the Chan-Chuang qigong group, which received standard care plus a 10-day mind–body interactive exercise program, or to the control group, which received standard care only. Data were collected using the National Institutes of Health Stroke Scale, Hospital Anxiety and Depression Scale, Short Form-12, stroke-related neurologic deficit, muscular strength, heart rate variability and fatigue at three time points: pre-intervention, halfway through the intervention (day 5) and on the final day of the intervention (day 10). Results: The results of the mixed-effects model analysis showed that the qigong group had a significantly higher quality of life score at day 10 (p < 0.05) than the control group. Among the covariates, neurologic deficit (p = 0.04), muscle strength (p = 0.04), low frequency to high frequency ratio (p = 0.02) and anxiety (p = 0.04) were significantly associated with changes in quality of life. Conversely, heart rate, heart rate variability (standard deviation of normal-to-normal intervals, low frequency and high frequency), fatigue and depression were not significantly associated with change in quality of life (p > 0.05).
Conclusions: This study supports the potential benefits of a 10-day mind–body interactive exercise (Chan-Chuang qigong) program for subacute stroke inpatients and provides information that may be useful in planning adjunctive rehabilitative care for stroke inpatients.
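The low-frequency to high-frequency (LF/HF) ratio used as a covariate above is a standard heart rate variability index. A minimal sketch of how it is commonly computed from successive RR intervals follows; the study does not report its analysis settings, so the frequency bands (standard short-term HRV conventions), the 4 Hz resampling rate, and the simple periodogram are all assumptions.

```python
import numpy as np

# Conventional short-term HRV bands; assumed, not taken from the study.
LF_BAND = (0.04, 0.15)  # Hz
HF_BAND = (0.15, 0.40)  # Hz

def lf_hf_ratio(rr_ms, fs=4.0):
    """LF/HF ratio from a series of successive RR intervals (milliseconds).

    The unevenly sampled tachogram is interpolated onto an even grid,
    then band powers are read off a simple FFT periodogram.
    """
    t = np.cumsum(rr_ms) / 1000.0            # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)  # even resampling grid
    rr_even = np.interp(grid, t, rr_ms)
    rr_even = rr_even - rr_even.mean()       # remove DC before the FFT
    psd = np.abs(np.fft.rfft(rr_even)) ** 2 / len(rr_even)
    freqs = np.fft.rfftfreq(len(rr_even), d=1.0 / fs)

    def band_power(lo, hi):
        return psd[(freqs >= lo) & (freqs < hi)].sum()

    return band_power(*LF_BAND) / band_power(*HF_BAND)
```

A ratio above 1 is conventionally read as relative sympathetic dominance, which is why the covariate can track anxiety-related change in such designs.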


Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4136 ◽  
Author(s):  
Sang Ho Choi ◽  
Heenam Yoon ◽  
Hyung Won Jin ◽  
Hyun Bin Kwon ◽  
Seong Min Oh ◽  
...  

Sleep plays a primary role in health and sustains physical and cognitive performance. Although various stimulation systems for enhancing sleep have been developed, they are difficult to use on a long-term basis. This paper proposes a novel stimulation system and confirms its feasibility for enhancing sleep. Specifically, in this study, a closed-loop vibration stimulation system was developed that detects the heart rate (HR) and applies −n% stimulus beats per minute (BPM) computed on the basis of the previous 5 min of HR data. Ten subjects participated in the evaluation experiment, in which they took a nap for approximately 90 min. The experiment comprised one baseline and three stimulation conditions. HR variability analysis showed that the normalized low frequency (LF) and LF/high frequency (HF) parameters significantly decreased compared to the baseline condition, while the normalized HF parameter significantly increased under the −3% stimulation condition. In addition, the HR density around the stimulus BPM significantly increased under the −3% stimulation condition. The results confirm that the proposed stimulation system could influence heart rhythm and stabilize the autonomic nervous system. This study thus provides a new stimulation approach to enhance the quality of sleep and has the potential for enhancing health levels through sleep manipulation.
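The closed-loop rule described in the abstract (stimulus rate set n% below the mean HR of the preceding 5 minutes) can be sketched as a small controller. This is a minimal sketch of that rule only: the buffer size, one-sample-per-second update cadence, and class interface are assumptions, not details from the paper.

```python
from collections import deque

class StimulusController:
    """Sketch of the closed-loop -n% rule: the vibration stimulus BPM is
    the mean HR over a sliding 5-minute window, reduced by n percent.

    Assumes one HR reading per second; the paper does not specify this.
    """

    def __init__(self, n_percent=3.0, window_s=300, hr_interval_s=1.0):
        self.n = n_percent
        # deque(maxlen=...) silently drops the oldest sample once full,
        # giving a sliding 5-minute window.
        self.buffer = deque(maxlen=int(window_s / hr_interval_s))

    def add_hr_sample(self, hr_bpm):
        self.buffer.append(hr_bpm)

    def stimulus_bpm(self):
        if not self.buffer:
            return None  # no HR history yet; do not stimulate
        mean_hr = sum(self.buffer) / len(self.buffer)
        return mean_hr * (1.0 - self.n / 100.0)
```

For example, with n = 3 and a steady HR of 60 BPM, the controller would drive the vibration at about 58.2 BPM, matching the −3% condition that produced the reported autonomic effects.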


2016 ◽  
Vol 33 ◽  
Author(s):  
Filipp Schmidt ◽  
Andreas Weber ◽  
Anke Haberkamp

Visual perception is not instantaneous; the perceptual representation of our environment builds up over time. This can strongly affect our responses to visual stimuli. Here, we study the temporal dynamics of visual processing by analyzing the time course of priming effects induced by the well-known Ebbinghaus illusion. In slower responses, Ebbinghaus primes produce effects in accordance with their perceptual appearance. However, in fast responses, these effects are reversed. We argue that this dissociation originates from the difference between early feedforward-mediated gist of the scene processing and later feedback-mediated more elaborate processing. Indeed, our findings are well explained by the differences between low-frequency representations mediated by the fast magnocellular pathway and high-frequency representations mediated by the slower parvocellular pathway. Our results demonstrate the potentially dramatic effect of response speed on the perception of visual illusions specifically and on our actions in response to objects in our visual environment generally.


Author(s):  
Robert Fiorentino

Research in neurolinguistics examines how language is organized and processed in the human brain. The findings from neurolinguistic studies on language can inform our understanding of the basic ingredients of language and the operations they undergo. In the domain of the lexicon, a major debate concerns whether and to what extent the morpheme serves as a basic unit of linguistic representation, and in turn whether and under what circumstances the processing of morphologically complex words involves operations that identify, activate, and combine morpheme-level representations during lexical processing. Alternative models positing some role for morphemes argue that complex words are processed via morphological decomposition and composition in the general case (full-decomposition models), or only under certain circumstances (dual-route models), while other models do not posit a role for morphemes (non-morphological models), instead arguing that complex words are related to their constituents not via morphological identity, but either via associations among whole-word representations or via similarity in formal and/or semantic features. Two main approaches to investigating the role of morphemes from a neurolinguistic perspective are neuropsychology, in which complex word processing is typically investigated in cases of brain insult or neurodegenerative disease, and brain imaging, which makes it possible to examine the temporal dynamics and neuroanatomy of complex word processing as it occurs in the brain. Neurolinguistic studies on morphology have examined whether the processing of complex words involves brain mechanisms that rapidly segment the input into potential morpheme constituents, how and under what circumstances morpheme representations are accessed from the lexicon, and how morphemes are combined to form complex morphosyntactic and morpho-semantic representations. 
Findings from this literature broadly converge in suggesting a role for morphemes in complex word processing, although questions remain regarding the precise time course by which morphemes are activated, the extent to which morpheme access is constrained by semantic or form properties, as well as regarding the brain mechanisms by which morphemes are ultimately combined into complex representations.


2019 ◽  
Vol 116 (6) ◽  
pp. 2027-2032 ◽  
Author(s):  
Jasper H. Fabius ◽  
Alessio Fracasso ◽  
Tanja C. W. Nijboer ◽  
Stefan Van der Stigchel

Humans move their eyes several times per second, yet we perceive the outside world as continuous despite the sudden disruptions created by each eye movement. To date, the mechanism that the brain employs to achieve visual continuity across eye movements remains unclear. While it has been proposed that the oculomotor system quickly updates and informs the visual system about the upcoming eye movement, behavioral studies investigating the time course of this updating suggest the involvement of a slow mechanism, estimated to take more than 500 ms to operate effectively. This is a surprisingly slow estimate, because both the visual system and the oculomotor system process information faster. If spatiotopic updating is indeed this slow, it cannot contribute to perceptual continuity, because it is outside the temporal regime of typical oculomotor behavior. Here, we argue that the behavioral paradigms that have been used previously are suboptimal to measure the speed of spatiotopic updating. In this study, we used a fast gaze-contingent paradigm, using high phi as a continuous stimulus across eye movements. We observed fast spatiotopic updating within 150 ms after stimulus onset. The results suggest the involvement of a fast updating mechanism that predictively influences visual perception after an eye movement. The temporal characteristics of this mechanism are compatible with the rate at which saccadic eye movements are typically observed in natural viewing.


Electronics ◽  
2019 ◽  
Vol 8 (2) ◽  
pp. 229
Author(s):  
Jiao Jiao ◽  
Lingda Wu

In order to improve the fusion quality of multispectral (MS) and panchromatic (PAN) images, a pansharpening method based on the non-subsampled shearlet transform (NSST) with a gradient-domain guided image filter (GIF) is proposed. First, multi-scale decomposition of the MS and PAN images is performed by NSST. Second, different fusion rules are designed for the high- and low-frequency coefficients. For the low-frequency coefficients, a fusion rule based on morphological filter-based intensity modulation (MFIM) technology is proposed, and edge refinement is carried out with the gradient-domain GIF to obtain the fused low-frequency coefficients. For the high-frequency coefficients, a fusion rule based on an improved pulse coupled neural network (PCNN) is adopted: the gradient-domain GIF optimizes the firing map of the PCNN model, and the resulting fusion decision map guides the fusion of the high-frequency coefficients. Finally, the fused high- and low-frequency coefficients are reconstructed with the inverse NSST to obtain the fused image. The proposed method was tested on the WorldView-2 and QuickBird data sets; the subjective visual effects and objective evaluation demonstrate that the proposed method is superior to state-of-the-art pansharpening methods, improving spatial quality while maintaining spectral fidelity.
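The overall shape of the pipeline above (decompose each image into low- and high-frequency parts, fuse each part with its own rule, then reconstruct) can be sketched as follows. This is a structural sketch only: NSST is replaced by a separable box blur, and the MFIM + gradient-domain GIF and PCNN fusion rules are replaced by simple averaging and max-absolute selection, so none of the stand-ins reproduce the paper's actual operators.

```python
import numpy as np

def lowpass(img, k=5):
    """Separable box blur standing in for the NSST low-frequency band."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def fuse_band(ms_band, pan_band):
    """Skeleton of the decompose / fuse / reconstruct pipeline.

    Stand-ins: averaging replaces the MFIM + gradient-domain GIF rule for
    lows; max-absolute selection replaces the PCNN decision map for highs;
    simple addition replaces the inverse NSST.
    """
    ms_low, pan_low = lowpass(ms_band), lowpass(pan_band)
    ms_high, pan_high = ms_band - ms_low, pan_band - pan_low
    fused_low = 0.5 * (ms_low + pan_low)
    fused_high = np.where(np.abs(ms_high) >= np.abs(pan_high), ms_high, pan_high)
    return fused_low + fused_high
```

The decomposition/reconstruction pair is exact by construction (low plus residual returns the input), which is the property the real method relies on when it swaps in learned or filtered coefficients per band.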

