The Things Attracting Our Attention: Evidence from Text Reading

Author(s):  
Lijing Chen
2012 ◽  
Vol 71 (3) ◽  
pp. 141-148
Author(s):  
Doriane Gras ◽  
Hubert Tardieu ◽  
Serge Nicolas

Predictive inferences are anticipations of what could happen next in the text we are reading. These inferences seem to be activated during reading, but a delay is necessary for their construction. To determine the length of this delay, we first used a classical word-naming task. In the second experiment, we used a Stroop-like task to verify that inference activation was not due to strategies applied during the naming task. The results show that predictive inferences are naturally activated during text reading, after approximately 1 s.


Author(s):  
Tobias Alf Kroll ◽  
A. Alexandre Trindade ◽  
Amber Asikis ◽  
Melissa Salas ◽  
Marcy Lau ◽  
...  

Author(s):  
Khoirunnisa Safitri ◽  
Sudarsono Sudarsono

This research aimed to develop a Pop-Up Book as supplementary media to support the teaching of narrative texts and to evaluate whether the media were feasible for teaching narrative texts to tenth-grade students of SMA Negeri 8 Pontianak. The media consisted of narrative texts with pop-up pictures, divided according to the structure of a narrative text. The materials were taken from the students’ textbook and simplified by the researcher. The procedures were adapted from the ADDIE Model proposed by Branch, using three of its phases: Analyse, Design, and Develop. The Analyse phase found that the students needed visually attractive media to engage them in the teaching-learning process and to support the existing materials. The Design phase covered the focus of the media, the materials and pictures for the media, and the structure of the media. The Development phase concerned the development of the essential parts of the media. According to the evaluation results, the media are feasible for teachers to apply in teaching narrative text reading.


Author(s):  
Inna Feltsan ◽  
Vitaliia Garapko ◽  
Diana Malinovska ◽  
...  
2021 ◽  
pp. 0145482X2110274
Author(s):  
Christina Granquist ◽  
Susan Y. Sun ◽  
Sandra R. Montezuma ◽  
Tu M. Tran ◽  
Rachel Gage ◽  
...  

Introduction: We compared the print-to-speech properties and human performance characteristics of two artificial intelligence vision aids, Orcam MyEye 1 (a portable device) and Seeing AI (an iPhone and iPad application). Methods: Seven participants with visual impairments, none of whom had prior experience with the two reading aids, took part. Four participants had no light perception; two individuals with measurable acuity and one with light perception were tested while blindfolded. We also tested performance with text of varying appearance under varying viewing conditions. To evaluate human performance, we asked the participants to use the devices to attempt 12 reading tasks similar to activities of daily living. We assessed the ranges of text attributes for which reading was possible, such as print size, contrast, and light level; whether individuals could complete tasks with the devices; and accuracy and completion time. Participants also completed a survey concerning the two aids. Results: Both aids achieved greater than 95% accuracy in text recognition for flat, plain word documents, and accuracy ranged from 13% to 57% for formatted text on curved surfaces. Both aids could read print sizes as small as 0.8M (20/40 Snellen equivalent at a 40-cm viewing distance). Individuals successfully completed 71% and 55% (p = .114) of tasks while using Orcam MyEye 1 and Seeing AI, respectively, and there was no significant difference in time to task completion (p = .775). Individuals believed both aids would be helpful for daily activities. Discussion: Orcam MyEye 1 and Seeing AI had similar text-reading capability and usability; both were useful to users with severe visual impairments in performing reading tasks. Implications for Practitioners: Selection of a reading device or aid should be based on individual preferences and prior familiarity with the platform, since we found no clear superiority of one solution over the other.
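One plausible way to compute the kind of text-recognition accuracy reported above (the >95% figure for flat, plain word documents) is a word-level, position-insensitive score against a ground-truth document. This is a minimal sketch; the function name and exact scoring rule are assumptions, not the authors' protocol:

```python
from collections import Counter

def word_accuracy(ground_truth: str, recognized: str) -> float:
    """Fraction of ground-truth words that the aid read back correctly,
    compared case-insensitively via multiset intersection."""
    truth = Counter(ground_truth.lower().split())
    seen = Counter(recognized.lower().split())
    total = sum(truth.values())
    if total == 0:
        return 0.0
    # Count each ground-truth word at most as often as it was recognized.
    correct = sum(min(truth[w], seen[w]) for w in truth)
    return correct / total

print(word_accuracy("the quick brown fox", "the quick brown fax"))  # 0.75
```

A stricter protocol could instead use word-level edit distance, which also penalizes insertions and reorderings; the multiset form above is the simplest order-insensitive variant.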


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1919
Author(s):  
Shuhua Liu ◽  
Huixin Xu ◽  
Qi Li ◽  
Fei Zhang ◽  
Kun Hou

To address the problem of robot object recognition in complex scenes, this paper proposes an object recognition method based on scene text reading. The proposed method simulates human-like behavior, accurately identifying objects that carry text by reading it carefully. First, high-accuracy deep learning models are adopted to detect and recognize text from multiple views. Second, datasets comprising 102,000 Chinese and English scene text images, together with their inverses, are generated; training on these two datasets improves the F-measure of text detection by 0.4% and the recognition accuracy by 1.26%. Finally, a robot object recognition method based on scene text reading is proposed: the robot detects and recognizes texts in the image and stores the recognition results in a text file. When the user gives the robot a fetching instruction, the robot searches the text files for the corresponding keywords and obtains confidence scores for the multiple objects in the scene image; the object with the maximum confidence is selected as the target. The results show that the robot can accurately distinguish objects of arbitrary shape and category, effectively solving the problem of object recognition in home environments.
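The fetching step described above (keyword search over stored recognition results, then selecting the object with maximum confidence) can be sketched as follows. All names here (`recognition_results`, `fetch_target`) are illustrative placeholders, not identifiers from the paper:

```python
def fetch_target(recognition_results, keyword):
    """Return the id of the object whose recognized text contains `keyword`
    with the highest confidence, or None if no stored text matches."""
    best_object, best_conf = None, 0.0
    for obj_id, detections in recognition_results.items():
        for text, confidence in detections:
            # Case-insensitive keyword match, as a simple stand-in for the
            # paper's keyword search over the stored text files.
            if keyword.lower() in text.lower() and confidence > best_conf:
                best_object, best_conf = obj_id, confidence
    return best_object

# Hypothetical per-object recognition results: (recognized text, confidence).
results = {
    "box_1": [("Lipton Tea", 0.92)],
    "box_2": [("Green Tea", 0.87), ("500g", 0.60)],
    "can_1": [("Cola", 0.95)],
}
print(fetch_target(results, "tea"))  # "box_1": highest-confidence "tea" match
```

In the paper the confidences come from the text detection and recognition models; here they are made-up numbers that only illustrate the arg-max selection rule.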

