On Changes in the Amount of Perspiration and Steering Characteristics When Visual and Body-Sensory Information Are Presented during Drift Cornering

2005 ◽  
Author(s):  
Hiromichi Nozaki


2012 ◽
Vol 25 (0) ◽  
pp. 111
Author(s):  
Shuichi Sakamoto ◽  
Gen Hasegawa ◽  
Akio Honda ◽  
Yukio Iwaya ◽  
Yôiti Suzuki ◽  
...  

High-definition multimodal displays are necessary to advance information and communications technologies. Such systems mainly present audio–visual information because this sensory information carries rich spatiotemporal content. Recently, however, other sensory information, such as touch, smell, and vibration, has also become easy to present, expanding the potential for realizing high-definition multimodal displays. We specifically examined the effects of full-body vibration on the perceived reality of audio–visual content. As indexes of perceived reality, we used the sense of presence and the sense of verisimilitude: the latter reflects the appraisal of foreground components in multimodal content, whereas the former relates more closely to background components of a scene. Our previous report described how the two senses respond differently to audio–visual content (Kanda et al., IMRF2011). In the present experiments, various amounts of full-body vibration were presented with an audio–visual movie recorded via a camera and microphone set on a wheelchair. Participants rated their perceived sense of presence and verisimilitude. Results revealed that the intensity of full-body vibration characterized the two senses differently: the sense of presence increased linearly with the intensity of full-body vibration, while the sense of verisimilitude showed a nonlinear tendency. These results suggest that not only audio–visual information but also full-body vibration is important for developing high-definition multimodal displays.
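The contrast between a linear and a nonlinear dependence on vibration intensity can be made concrete by comparing simple model fits of mean ratings against intensity. Below is a minimal sketch; all numbers are illustrative placeholders, not the study's data:

```python
import numpy as np

# Compare a linear and a quadratic fit of mean ratings vs. vibration intensity.
# The values below are hypothetical placeholders, not the reported results.
intensity = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # normalized vibration level
presence  = np.array([1.2, 2.0, 2.9, 3.8, 4.7])     # hypothetical mean ratings
verisim   = np.array([1.5, 3.4, 4.2, 4.4, 4.3])     # hypothetical mean ratings

for name, y in [("presence", presence), ("verisimilitude", verisim)]:
    lin = np.polyfit(intensity, y, 1)                # degree-1 (linear) fit
    quad = np.polyfit(intensity, y, 2)               # degree-2 (nonlinear) fit
    sse_lin = float(np.sum((np.polyval(lin, intensity) - y) ** 2))
    sse_quad = float(np.sum((np.polyval(quad, intensity) - y) ** 2))
    print(f"{name}: SSE linear={sse_lin:.3f}, quadratic={sse_quad:.3f}")
```

A rating that rises and then saturates, as verisimilitude did, shows a markedly lower error under the quadratic fit, whereas a linearly increasing rating gains little from the extra term.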


2018 ◽  
Vol 5 (2) ◽  
pp. 171785 ◽  
Author(s):  
Martin F. Strube-Bloss ◽  
Wolfgang Rössler

Flowers attract pollinating insects such as honeybees with sophisticated compositions of olfactory and visual cues. Using honeybees as a model to study olfactory–visual integration at the neuronal level, we focused on mushroom body (MB) output neurons (MBONs). From a neuronal circuit perspective, MBONs represent a prominent level of sensory-modality convergence in the insect brain. We established an experimental design allowing electrophysiological characterization of olfactory, visual, and olfactory–visual induced activation of individual MBONs. Despite the obvious convergence of olfactory and visual pathways in the MB, we found numerous unimodal MBONs. However, a substantial proportion of MBONs (32%) responded to both modalities and thus integrated olfactory–visual information across MB input layers. In these neurons, representation of the olfactory–visual compound was significantly increased compared with that of the single components, suggesting an additive, but nonlinear, integration. Population analyses of olfactory–visual MBONs revealed three categories of stimuli: (i) olfactory, (ii) visual, and (iii) olfactory–visual compound. Interestingly, no significant differentiation was apparent between different stimulus qualities within these categories. We conclude that encoding of stimulus quality within a modality is largely completed at the level of MB input, and information at the MB output is integrated across modalities to efficiently categorize sensory information for downstream behavioural decision processing.
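"Additive, but nonlinear" integration means the compound response exceeds each unimodal response yet falls short of the linear sum of the two. A minimal sketch of that comparison, using hypothetical firing rates rather than the recorded data:

```python
import numpy as np

# Hypothetical MBON firing rates (Hz); placeholders, not the recorded data.
odor_only   = np.array([12.0, 9.5, 14.2])
visual_only = np.array([8.0, 11.0, 7.5])
compound    = np.array([16.5, 15.0, 17.0])

# Additive-but-sublinear integration: above either component, below their sum.
exceeds_components = np.all(compound > np.maximum(odor_only, visual_only))
below_linear_sum = np.all(compound < odor_only + visual_only)
print("compound > each unimodal response:", exceeds_components)
print("compound < linear sum of responses:", below_linear_sum)
```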


2011 ◽  
Vol 105 (2) ◽  
pp. 846-859 ◽  
Author(s):  
Lore Thaler ◽  
Melvyn A. Goodale

Studies investigating how sensory feedback about the moving hand is used to control hand movements have relied on paradigms such as pointing or reaching that require subjects to acquire target locations. In the context of these target-directed tasks, it has been found repeatedly that the human sensory-motor system relies heavily on visual feedback to control the ongoing movement. This finding has been formalized within the framework of statistical optimality, according to which different sources of sensory feedback are combined so as to minimize variance in sensory information during movement control. Importantly, however, many hand movements that people perform every day are not target-directed but are based on allocentric (object-centered) visual information. Examples of allocentric movements are gesture imitation, drawing, and copying. Here we tested whether visual feedback about the moving hand is used in the same way to control target-directed and allocentric hand movements. The results show that visual feedback is used significantly more to reduce movement scatter in the target-directed task than in the allocentric movement task. Furthermore, we found that differences in the use of visual feedback between target-directed and allocentric hand movements cannot be explained by differences in uncertainty about the movement goal. We conclude that the role played by visual feedback in movement control is fundamentally different for target-directed and allocentric movements. The results therefore cast doubt on the idea that computational and neural models of sensorimotor control developed exclusively from data obtained in target-directed paradigms are also valid for allocentric tasks, such as drawing, copying, or imitative gesturing, that characterize much of human behavior, and they suggest that such models must be modified to accommodate performance in these tasks.
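For reference, the standard minimum-variance (maximum-likelihood) form of the statistical-optimality framework referred to here combines independent, unbiased estimates \(x_i\) of hand position, each with variance \(\sigma_i^2\), as

\[
\hat{x} = \sum_i w_i x_i,
\qquad
w_i = \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2},
\qquad
\sigma_{\hat{x}}^2 = \Bigl(\sum_j 1/\sigma_j^2\Bigr)^{-1} \le \min_i \sigma_i^2 .
\]

The combined estimate is never less reliable than the best single cue, which is why, under this account, vision dominates feedback control whenever it is the most reliable source available.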


Author(s):  
Yuri B. Saalmann ◽  
Sabine Kastner

Neural mechanisms of selective attention route behaviourally relevant information through brain networks for detailed processing. These attention mechanisms are classically viewed as being solely implemented in the cortex, relegating the thalamus to a passive relay of sensory information. However, this passive view of the thalamus is being revised in light of recent studies supporting an important role for the thalamus in selective attention. Evidence suggests that the first-order thalamic nucleus, the lateral geniculate nucleus, regulates the visual information transmitted from the retina to visual cortex, while the higher-order thalamic nucleus, the pulvinar, regulates information transmission between visual cortical areas, according to attentional demands. This chapter discusses how modulation of thalamic responses, switching the response mode of thalamic neurons, and changes in neural synchrony across thalamo-cortical networks contribute to selective attention.


2018 ◽  
Vol 31 (3-4) ◽  
pp. 227-249 ◽  
Author(s):  
Alix L. de Dieuleveult ◽  
Anne-Marie Brouwer ◽  
Petra C. Siemonsma ◽  
Jan B. F. van Erp ◽  
Eli Brenner

Older individuals seem to find it more difficult than younger individuals to ignore inaccurate sensory cues. We examined whether this difference could be quantified using an interception task. Twenty healthy young adults (age 18–34) and twenty-four healthy older adults (age 60–82) were asked to use a finger to tap on discs that were moving downwards on a screen. Moving the background to the left made the discs appear to move more to the right; moving the background to the right made them appear to move more to the left. The discs disappeared before the finger reached the screen, so participants had to anticipate how the target would continue to move. We examined how misjudging the discs' motion as a result of background motion influenced tapping. Participants received veridical feedback about their performance, so their sensitivity to the illusory motion indicates to what extent they could ignore the task-irrelevant visual information. We expected older adults to be more sensitive to the illusion than younger adults. To investigate whether sensorimotor or cognitive load would increase this sensitivity, we also asked participants to do the task while standing on foam or while counting tones. Background motion influenced older adults more than younger adults. The secondary tasks did not increase the background's influence. Older adults might be more sensitive to the moving background because they find it more difficult to ignore irrelevant sensory information in general, or they may rely more on vision because their proprioceptive and vestibular information is less reliable.
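One simple way to quantify each participant's sensitivity to the illusory motion is the slope of the horizontal tapping error regressed on background velocity. A minimal sketch with hypothetical numbers (not the study's data):

```python
import numpy as np

# Hypothetical taps: background velocity (cm/s, rightward positive) and the
# horizontal tapping error (cm, rightward positive). Placeholders only.
bg_velocity = np.array([-20.0, -20.0, 0.0, 0.0, 20.0, 20.0])
tap_error   = np.array([  1.8,   1.5, 0.1, -0.2, -1.6, -1.9])

# A leftward-moving background makes the discs appear to drift rightward, so a
# sensitive participant shows errors opposite to the background motion
# (a negative slope); slopes nearer zero indicate better ignoring of the cue.
slope, intercept = np.polyfit(bg_velocity, tap_error, 1)
print(f"illusion sensitivity: {slope:.3f} cm of error per cm/s of background motion")
```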


Author(s):  
Chris L. E. Paffen ◽  
Andre Sahakian ◽  
Marijn E. Struiksma ◽  
Stefan Van der Stigchel

One of the most influential ideas within the domain of cognition is that of embodied cognition, according to which the experienced world is the result of an interplay between an organism's physiology, its sensorimotor system, and its environment. One aspect of this idea is that linguistic information automatically activates sensory representations; for example, hearing the word 'red' would automatically activate sensory representations of that color. But does linguistic information prioritize access to awareness of congruent visual information? Here, we show that linguistic verbal cues accelerate the access of matching visual targets to awareness, using a breaking continuous flash suppression paradigm. In a speeded reaction time task, observers heard spoken color labels (e.g., red) followed by colored targets that were either congruent (red), incongruent (green), or neutral (a neutral noncolor word) with respect to the labels. Importantly, and in contrast to previous studies investigating a similar question, the incidence of congruent trials was not higher than that of incongruent trials. Our results show that RTs were selectively shortened for congruent verbal–visual pairings, and that this shortening occurred over a wide range of cue–target intervals. We suggest that linguistic verbal information preactivates sensory representations, so that hearing the word 'red' internally preactivates (visual) sensory information.
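The key measurement reduces to mean breakthrough times per cue condition and their differences. A minimal sketch with hypothetical reaction times (not the reported data):

```python
import numpy as np

# Hypothetical breakthrough RTs (s) per cue-target condition; placeholders only.
rt = {
    "congruent":   np.array([1.42, 1.35, 1.50]),
    "incongruent": np.array([1.61, 1.58, 1.66]),
    "neutral":     np.array([1.57, 1.55, 1.60]),
}
means = {cond: float(times.mean()) for cond, times in rt.items()}
print({cond: round(m, 3) for cond, m in means.items()})
# A selective congruency benefit: congruent targets break through faster than
# neutral ones, while incongruent targets differ little from neutral.
print("congruency benefit (s):", round(means["neutral"] - means["congruent"], 3))
```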


Author(s):  
Farran Briggs

Many mammals, including humans, rely primarily on vision to sense the environment. While a large proportion of the brain is devoted to vision in highly visual animals, there are not enough neurons in the visual system to support a neuron-per-object look-up table. Instead, visual animals evolved ways to rapidly and dynamically encode an enormous diversity of visual information using minimal numbers of neurons (merely hundreds of millions of neurons and billions of connections!). In the mammalian visual system, a visual image is essentially broken down into simple elements that are reconstructed through a series of processing stages, most of which occur beneath consciousness. Importantly, visual information processing is not simply a serial progression along the hierarchy of visual brain structures (e.g., retina to visual thalamus to primary visual cortex to secondary visual cortex, etc.). Instead, connections within and between visual brain structures exist in all possible directions: feedforward, feedback, and lateral. Additionally, many mammalian visual systems are organized into parallel channels, presumably to enable efficient processing of information about different and important features in the visual environment (e.g., color, motion). The overall operations of the mammalian visual system are to: (1) combine unique groups of feature detectors in order to generate object representations and (2) integrate visual sensory information with cognitive and contextual information from the rest of the brain. Together, these operations enable individuals to perceive, plan, and act within their environment.


2007 ◽  
Vol 98 (4) ◽  
pp. 2399-2413 ◽  
Author(s):  
Vivian M. Ciaramitaro ◽  
Giedrius T. Buračas ◽  
Geoffrey M. Boynton

Attending to a visual or auditory stimulus often requires irrelevant information to be filtered out, both within the attended modality and in other modalities. For example, attentively listening to a phone conversation can diminish our ability to detect visual events. We used functional magnetic resonance imaging (fMRI) to examine brain responses to visual and auditory stimuli while subjects attended to visual or auditory information. Although early cortical areas are traditionally considered unimodal, we found that brain responses to the same ignored information depended on the modality attended. In early visual area V1, responses to ignored visual stimuli were weaker when subjects attended to another visual stimulus than when they attended to an auditory stimulus. The opposite was true in the higher-order visual area MT+, where responses to ignored visual stimuli were weaker when subjects attended to an auditory stimulus. Furthermore, fMRI responses to the same ignored visual information depended on the location of the auditory stimulus, with stronger responses when the attended auditory stimulus shared the same side of space as the ignored visual stimulus. In early auditory cortex, responses to ignored auditory stimuli were weaker when subjects attended to a visual stimulus. A simple parameterization of our data can describe the effects of redirecting attention across space within the same modality (spatial attention) or across modalities (cross-modal attention), as well as the influence of spatial attention across modalities (cross-modal spatial attention). Our results suggest that the representation of unattended information depends on whether attention is directed to another stimulus in the same modality or in the same region of space.
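The abstract does not spell out the parameterization, but one plausible illustrative form (the symbols below are assumptions, not the authors' notation) is an additive model with an interaction term,

\[
R = R_0 + \alpha M + \beta S + \gamma M S,
\]

where \(R\) is the response to the ignored stimulus, \(M = 1\) when attention is directed within that stimulus's modality (so \(\alpha\) captures cross-modal attention), \(S = 1\) when attention shares that stimulus's side of space (so \(\beta\) captures spatial attention), and the interaction \(\gamma\) captures cross-modal spatial attention.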


2011 ◽  
Vol 23 (2) ◽  
pp. 491-501 ◽  
Author(s):  
Hans Supèr ◽  
August Romeo

Perceptual filling-in is the phenomenon whereby visual information is perceived although it is not physically present. For instance, the blind spot, which corresponds to the retinal location where there are no photoreceptor cells to capture visual signals, is filled in by the surrounding visual signals. The neural mechanism for such immediate filling-in of surfaces is unclear. By means of computational modeling, we show that surround inhibition produces rebound or after-discharge spiking in neurons that otherwise do not receive sensory information. The behavior of rebound spiking mimics the immediate surface filling-in illusion observed at the blind spot and also reproduces the filling-in of an empty object after a background flash, as in the color dove illusion. In conclusion, we propose rebound spiking as a possible neural mechanism for surface filling-in.
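The proposed mechanism can be illustrated with a toy model (not the authors' implementation): a leaky integrate-and-fire unit with a slow recovery variable that charges while surround inhibition hyperpolarizes the cell and then drives rebound ("after-discharge") spikes once the inhibition is released. All parameter values below are illustrative.

```python
# Toy post-inhibitory rebound model; illustrative parameters, not the paper's model.
dt, T = 0.1, 400.0                            # time step and duration (ms)
V_rest, V_th, V_reset = -65.0, -50.0, -65.0   # membrane potentials (mV)
tau_m, tau_w, a = 10.0, 100.0, 2.0            # membrane/recovery time constants, coupling

V, w = V_rest, 0.0                            # voltage and slow rebound variable
spike_times = []
for step in range(int(T / dt)):
    t = step * dt
    I_inh = -40.0 if 50.0 <= t < 150.0 else 0.0   # surround inhibition epoch
    dV = (-(V - V_rest) + I_inh + w) / tau_m
    dw = (-a * (V - V_rest) - w) / tau_w          # w charges while V is hyperpolarized
    V += dt * dV
    w += dt * dw
    if V >= V_th:                                 # threshold crossing: rebound spike
        spike_times.append(round(t, 1))
        V = V_reset

# The unit is silent at rest and during inhibition; it fires a brief burst of
# rebound spikes just after the inhibition ends (around t = 150 ms).
print("rebound spike times (ms):", spike_times)
```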


2019 ◽  
Vol 50 (6) ◽  
pp. 429-435
Author(s):  
Takayuki Kodama ◽  
Osamu Katayama ◽  
Hideki Nakano ◽  
Tomohiro Ueda ◽  
Shin Murata

Objective. We describe the case of a 66-year-old Japanese male patient who developed medial medullary infarction with severe motor paralysis and intense numbness of the left arm, pain catastrophizing, and abnormal physical sensations. We further describe his recovery using a new imagery neurofeedback-based multisensory systems (iNems) training method. Clinical Course and Intervention. The patient underwent physical therapy for the rehabilitation of motor paralysis and numbness of the paralyzed upper limb; in addition, we implemented iNems training using EEG activity, which aims to synchronize movement intent (motor imagery) with sensory information (visual feedback). Results. Considerable improvement in motor function, pain catastrophizing, the brain's representation of the body, and abnormal physical sensations was achieved with iNems training. Furthermore, iNems training improved the neural activity of the default mode network at rest and of the sensorimotor region during intended movement. Conclusions. The newly developed iNems could prove to be a useful novel tool for neurorehabilitation, considering that both behavioral and neurophysiological changes were observed in our case.

