Chapter 3. Where do we look, what do we see, what do we talk about
In Chapter 3 we compare how verbal and non-verbal visual information is processed. The questions we address are: How do readers integrate text and figure information when reading and understanding verbal and non-verbal patterns, namely one and the same text in verbal format and as infographics? How does the way humans perceive visual information determine the way they express it in natural language? How does verbalization affect oculomotor behavior during visual processing? Our results support the assumption of the Cognitive Theory of Multimedia Learning that integrating verbal and pictorial information with each other (a polycode text) helps learners understand and memorize the text and makes comprehension easier. We demonstrate the advantages and disadvantages of infographics (graphical visual representations of complex information) and verbal text. We also discuss the relationship between the visual processing of images and their verbalization. On the one hand, the characteristics of eye movements when looking at an image determine its subsequent verbal description: the more fixations are made and the longer the gaze is directed to a certain area of the image, the more words are dedicated to that area in the following description. On the other hand, verbalizing a previously seen image affects the parameters of eye movements when re-viewing the same image, resulting in an ambient processing pattern (short fixations and long saccades), whereas re-viewing without verbalization results in a focal processing pattern (longer fixations and shorter saccades). The results obtained open up prospects for further research on visual perception and can also be used in computer vision models.