Visual integration: Recently Published Documents

Total documents: 317 (five years: 52)
H-index: 33 (five years: 3)

Author(s): Pablo Torres-Carrion, Carina González-González, César Bernal Bravo, Alfonso Infante-Moro

Abstract: People with Down syndrome present cognitive difficulties that affect their reading skills. In this study, we present results on the use of gestural interaction with the Kinect sensor to improve the reading skills of students with Down syndrome. After this stimulation, we found improvements in visual association, visual comprehension, sequential memory, and visual integration in the experimental group compared with the control group. We also found that the number of errors and the delay time in the interaction decreased across sessions in the experimental group.


2021, Vol 5 (S3), pp. 1651-1665
Author(s): Usman Sidabutar, Nenni Triana Sinaga, Nurhayati Sitorus, Febrika Dwi Lestari

The digitization of education is the use of technology throughout the learning system, from the curriculum to the education system as a whole, and it has a strong influence on innovation. As societal and academic demands increase in the modern era, online learning systems (OLS) have become part of everyday life for society and academics. The multimodal approach supports learning competence in English when methods and applications are integrated in digital form. It combines images and verbal and visual elements with words or text, which together create meaning across modes of communication. Analyzing the text reveals the meaning of a picture message in a printed book in terms of projection, enhancement, and concurrence, all of which are discussed through linguistic analysis that relates the text to the general characteristics of language, both verbal and visual. Systemic Functional Linguistics, based on the concept of metafunction with its ideational, interpersonal, and textual components, links the internal forms of language to their use in semiotic social contexts and is applied in this research. The research uses descriptive and qualitative methods.


2021
Author(s): Hauke S. Meyerhoff, Nina Gehrer, Simon Merz, Christian Frings

We introduce a new audio-visual illusion that reveals the interplay between audio-visual integration and selective visual attention. The illusion involves two simultaneously moving objects that occasionally change their motion trajectory, but only the direction changes of one object are accompanied by spatially uninformative tones. By measuring the point of subjective equality in a forced-choice paradigm, we observed a selective increase in the perceived speed of the audio-visually synchronized object. This illusory increase persisted when eye movements were prevented. Temporally matched color changes of the synchronized object also increased its perceived speed. Yet color changes of a surrounding frame instead of tones had no effect on perceived speed, ruling out simple alertness explanations. Thus, in contrast to coinciding tones, visual coincidences elicit illusory increases in perceived speed only when the coincidence provides spatial information. Taken together, our pattern of results suggests that audio-visual synchrony attracts visual attention towards the coinciding visual object, leading to an increase in perceived speed, and thus sheds new light on the interplay between attention and multisensory feature integration. We discuss potential limitations, such as the choice of paradigm, and outline prospective research questions to further investigate the effect of audio-visual integration on perceived object speed.
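
A point of subjective equality (PSE) in a forced-choice speed task is commonly estimated by fitting a psychometric function to the proportion of trials on which the comparison stimulus is judged faster. The following sketch only illustrates that general logic; it is not the authors' analysis code, and the comparison speeds, response proportions, and cumulative-Gaussian form are assumptions.

```python
# Minimal sketch: estimate a PSE from 2AFC "comparison faster" proportions
# by fitting a cumulative Gaussian and reading off its 50% point.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(speed, pse, slope):
    """P(judge comparison as faster) as a function of comparison speed."""
    return norm.cdf(speed, loc=pse, scale=slope)

# Hypothetical comparison speeds (deg/s) and observed choice proportions.
speeds = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0])
p_faster = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.90, 0.97])

(pse, slope), _ = curve_fit(psychometric, speeds, p_faster, p0=[5.5, 0.5])
print(f"Estimated PSE: {pse:.2f} deg/s")
# A PSE below the standard's physical speed would indicate that the
# audio-visually synchronized object is perceived as faster than it really is.
```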


2021
Author(s): Changfu Pei, Yuan Qiu, Fali Li, Xunan Huang, Yajing Si, ...

Human linguistic units are hierarchical, and the brain responds differently when processing linguistic units during sentence comprehension, especially when the modality of the received signal differs (auditory, visual, or audio-visual). However, it is unclear how the brain processes and integrates language information at different linguistic levels (words, phrases, and sentences) when that information is provided simultaneously in the audio and visual modalities. To address this issue, we presented participants with sequences of short Chinese sentences through auditory, visual, or combined audio-visual modalities while electroencephalographic responses were recorded. Using a frequency tagging approach, we analyzed the neural representations of basic linguistic units (i.e., characters/monosyllabic words) and higher-level linguistic structures (i.e., phrases and sentences) in each of the three modalities separately. We found that audio-visual integration occurs at all linguistic levels and that the brain areas involved in the integration vary across levels. In particular, the integration of sentences activated the left prefrontal area. We therefore used continuous theta-burst stimulation (cTBS) to verify that the left prefrontal cortex plays a vital role in the audio-visual integration of sentence information. Our findings suggest an advantage of bimodal language comprehension at hierarchical stages of language-related information processing and provide evidence for a causal role of the left prefrontal regions in processing audio-visual sentence information.
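
Frequency tagging relies on the fact that when characters, phrases, and sentences recur at fixed rates, neural tracking of each linguistic level should appear as a spectral peak at the corresponding frequency. The sketch below illustrates that logic on a synthetic signal; the presentation rates (4 Hz characters, 2 Hz phrases, 1 Hz sentences), the sampling rate, and the simulated signal are assumptions for illustration and are not taken from the study.

```python
# Minimal sketch of a frequency tagging analysis on a synthetic EEG-like signal.
import numpy as np

fs = 250.0                      # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)    # 60 s of simulated data
rng = np.random.default_rng(0)

# Simulated tracking at the sentence (1 Hz), phrase (2 Hz), and character (4 Hz)
# rates, plus broadband noise.
eeg = (0.5 * np.sin(2 * np.pi * 1 * t)
       + 0.8 * np.sin(2 * np.pi * 2 * t)
       + 1.0 * np.sin(2 * np.pi * 4 * t)
       + rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size      # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

for f_tag in (1.0, 2.0, 4.0):   # sentence, phrase, character rates
    idx = np.argmin(np.abs(freqs - f_tag))
    print(f"{f_tag:.1f} Hz amplitude: {spectrum[idx]:.3f}")
# Peaks at the tagged frequencies indicate neural tracking of each linguistic level.
```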


2021
Author(s): Adele Cherise Hogan

Visual motion prediction is essential for making key judgements about objects in the environment. These judgements are typically investigated using a time-to-contact (TTC) task, in which an object travels along a straight trajectory and disappears behind an occluder. Participants make a response coinciding with the moment the object would have contacted a visual landmark, on the assumption that the motion continues behind the occluder. This task is used to measure how we perceive and predict the arrival time of objects. The addition of sound to TTC tasks generally enhances visual judgements. One characteristic which may affect how sound influences visual motion judgements is pitch: a rising pitch is associated with speeded motion and a falling pitch with slowed motion. Pitch change could therefore bias visual motion judgements; however, this has not yet been investigated. Furthermore, TTC tasks can use horizontal or vertical motion. In vertical motion, an additional variable that may be critical for TTC estimations is gravity. It is postulated that humans possess an internal model of gravity that allows us to make accurate predictions for downward motion; this model assumes faster downward than upward motion. However, the model can be wrongly applied in constant-speed tasks, producing faster speed estimations for downward stimuli when there is no acceleration. Therefore, vertical motion could introduce additional biases in visual motion judgements.

This thesis investigated whether pitch and gravity could affect the imagined speed of an object under occlusion. Specifically, a rising pitch was hypothesised to produce speeded predicted motion and a falling pitch, slowed predicted motion. I investigated the influence of pitch change in the vertical and horizontal planes, and examined two different aspects of pitch change, since dynamic pitch is a novel addition to TTC paradigms: Experiment 1A explored gradual pitch change and Experiment 1B used sudden pitch change. The hypothesised pitch effects were observed for a gradual, but not a sudden, pitch change. However, a gravity effect was observed across both Experiments 1A and 1B, suggesting that the presence of sound does not moderate this effect.

I also examined the cortical substrates of the audio-visual TTC task components using transcranial magnetic stimulation (TMS) in Experiment 2. The superior temporal sulcus (STS) was targeted, as it has been implicated in audio-visual integration. TMS causes neuronal inhibition and can therefore be used to determine whether an area is involved in a task. If the STS is responsible for audio-visual integration in a TTC task, then TMS to the STS should disrupt the pitch effects evidenced in Experiment 1A; that is, a change in pitch should have no effect on TTC judgements compared to a constant tone. This result was evident only for rising tones, suggesting the involvement of the STS in generating speeded predicted motion. The pitch effects observed in Experiments 1A and 2 implicate pitch in the production of biases in motion imagery for visual motion judgements, particularly for visual stimuli under occlusion.
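
The quantity estimated in a TTC task can be made concrete with a small sketch: the correct response time is the occluded distance divided by the object's speed at disappearance, and an earlier response implies a faster imagined speed under occlusion. The distance, speed, and response time below are hypothetical values, not data from the experiments.

```python
# Minimal sketch of the time-to-contact (TTC) computation under occlusion.
def true_ttc(occluded_distance, speed):
    """Time from disappearance to landmark contact, assuming constant speed."""
    return occluded_distance / speed

def implied_speed(occluded_distance, response_time):
    """Speed the response implies the object was imagined to move at."""
    return occluded_distance / response_time

distance = 8.0   # hypothetical occluded distance (deg of visual angle)
speed = 4.0      # hypothetical speed at disappearance (deg/s)
response = 1.7   # hypothetical response time after occlusion (s)

print(f"True TTC:      {true_ttc(distance, speed):.2f} s")              # 2.00 s
print(f"Implied speed: {implied_speed(distance, response):.2f} deg/s")  # 4.71 deg/s
# Responding earlier than the true TTC (1.7 s < 2.0 s) is consistent with,
# for example, a rising pitch speeding up the imagined motion of the object.
```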


2021, Vol 12 (1)
Author(s): Roy Harpaz, Minh Nguyet Nguyen, Armin Bahl, Florian Engert

Abstract: Complex schooling behaviors result from local interactions among individuals. Yet how sensory signals from neighbors are analyzed in the visuomotor stream of animals is poorly understood. Here, we studied aggregation behavior in larval zebrafish and found that, over development, larvae transition from overdispersed groups to tight shoals. Using a virtual reality assay, we characterized the algorithms fish use to transform visual inputs from neighbors into movement decisions. We found that young larvae turn away from virtual neighbors by integrating and averaging retina-wide visual occupancy within each eye, and by using a winner-take-all strategy for binocular integration. As fish mature, their responses expand to include attraction to virtual neighbors, which is based on similar algorithms of visual integration. Using model simulations, we show that the observed algorithms accurately predict group structure over development. These findings allow us to make testable predictions regarding the neuronal circuits underlying collective behavior in zebrafish.
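
The decision rule described for young larvae can be sketched directly: average the retina-wide visual occupancy within each eye, select the eye with the larger signal (winner-take-all), and turn away from that side; mature fish instead turn toward it. The sketch below is an interpretation of that verbal description with made-up occupancy values, not the authors' model code.

```python
# Minimal sketch of per-eye averaging with winner-take-all binocular integration.
import numpy as np

def turn_direction(left_occupancy, right_occupancy, attraction=False):
    """Return 'left' or 'right' given per-retina occupancy arrays (0..1 per bin)."""
    left_drive = np.mean(left_occupancy)      # integrate/average within each eye
    right_drive = np.mean(right_occupancy)
    winner = "left" if left_drive > right_drive else "right"   # winner-take-all
    if attraction:                            # mature fish: turn toward neighbors
        return winner
    return "right" if winner == "left" else "left"             # young fish: turn away

# Hypothetical occupancy maps: more neighbors visible to the left eye.
left_eye = np.array([0.4, 0.6, 0.5, 0.3])
right_eye = np.array([0.1, 0.2, 0.1, 0.0])

print(turn_direction(left_eye, right_eye))                   # young larva -> 'right'
print(turn_direction(left_eye, right_eye, attraction=True))  # mature larva -> 'left'
```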


Millennium, 2021, Vol 18 (1), pp. 251-270
Author(s): Philipp Niewöhner

Abstract: According to the written sources, the Iconoclast controversy was all about the veneration of icons. It started in the late seventh century, after most iconodule provinces had been lost to Byzantine rule, and lasted until the turn of the millennium or so, when icon veneration became generally established in the remaining parts of the Byzantine Empire. However, as far as material evidence and actual images are concerned, the Iconoclast controversy centred on apse images and other, equally large and monumental representations, none of which were ever venerated. Prior to Iconoclasm, such images had not been customary at Constantinople, where the early Christian tradition had been largely aniconic and focused on the symbol of the cross. Thus, the introduction of monumental Christian imagery to Constantinople appears to have been a major aspect of the Iconoclast controversy. This paper asks why and finds that the images in question, whilst not for veneration and therefore not essential to the theological debate, stood out for imperial propaganda. They led to close visual integration of the emperor and the church that had previously been kept apart, because aniconic traditions used to limit imperial presence inside Constantinopolitan church buildings. It seems, then, that the Iconoclast controversy, although conducted in religious terms, was partly driven by a hidden agenda of imperial appropriation and power play.

