visual task
Recently Published Documents


TOTAL DOCUMENTS

299
(FIVE YEARS 59)

H-INDEX

30
(FIVE YEARS 2)

Sports Vision ◽  
2022 ◽  
pp. 7-15
Author(s):  
Graham B. Erickson
Keyword(s):  

2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Xi Cheng

Most existing smoke detection methods are based on manual operation, which makes it difficult to meet the needs of fire monitoring. To further improve the accuracy of smoke detection, this study introduces an automatic feature extraction and classification method based on the Fast Region-based Convolutional Neural Network (Fast R-CNN). The method uses a selective search algorithm to generate candidate regions from the sample images; the coordinates of these preselected regions, together with the sample images for the visual task, serve as the input for network learning. During training, transfer learning (feature migration) is used to compensate for the scarcity of smoke data and limited data sources. The result is a target detection model with well-trained weight parameters that is strongly tied to the specified visual task. Experimental results show that the method not only improves detection accuracy but also effectively reduces the false-alarm rate, meeting the real-time and accuracy requirements of effective fire detection. Compared with similar fire detection algorithms, the improved algorithm proposed in this paper is more robust and performs better in both accuracy and speed.
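The pipeline described above, selective-search region proposals filtered and classified by a Fast R-CNN, depends on overlap-based filtering of candidate boxes. A minimal, self-contained sketch of that filtering step (intersection-over-union plus greedy non-maximum suppression; the helper names are illustrative, not the paper's code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring boxes,
    dropping any candidate that overlaps an already-kept box above thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

# Two heavily overlapping smoke candidates plus one distant box:
# the lower-scoring duplicate is suppressed.
kept = nms([(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)],
           [0.9, 0.8, 0.7])
```

In a full detector this runs over the classifier's per-region smoke scores; the sketch only shows the geometric filtering logic.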


2021 ◽  
Vol 15 ◽  
Author(s):  
Antonella Pomè ◽  
Camilla Caponi ◽  
David C. Burr

Perceptual grouping and visual attention are two mechanisms that help segregate visual input into meaningful objects. Here we report that perceptual grouping, which affects perceived numerosity, is reduced when visual attention is engaged in a concurrent visual task. We asked participants to judge the numerosity of clouds of dot-pairs connected by thin lines, known to cause underestimation of numerosity, while simultaneously performing a color-conjunction task. Diverting attention to the concomitant visual distractor significantly reduced the grouping-induced numerosity biases. Moreover, while the magnitude of the illusion under free viewing covaried strongly with autistic traits (as measured by the Autism-spectrum Quotient, AQ), under conditions of divided attention this relationship was much reduced. These results suggest that divided attention modulates the perceptual grouping of elements by connectedness, and that this modulation is independent of participants' perceptual style.


Author(s):  
Alyx O. Milne ◽  
Llwyd Orton ◽  
Charlotte H. Black ◽  
Gary C. Jones ◽  
Matthew Sullivan ◽  
...  

Active sensing is the process of moving sensors to extract task-specific information. Whisker touch is often referred to as an active sensory system, since whiskers are moved with purposeful control. Even though whisker movements are found in many species, it is unknown whether any animal can make task-specific movements with its whiskers. California sea lions (Zalophus californianus) make large, purposeful whisker movements and are capable of performing many whisker-related discrimination tasks. California sea lions are therefore an ideal species in which to explore the active nature of whisker touch sensing. Here, we show that California sea lions can make task-specific whisker movements: they move their whiskers with large amplitudes around object edges to judge size, make smaller, lateral stroking movements to judge texture, and make very small whisker movements during a visual task. These findings, combined with the ease of training mammals and of measuring whisker movements, make whiskers an ideal system for studying mammalian perception, cognition, and motor control.


Author(s):  
Rishabh Nevatia

Abstract: Lip reading is the visual task of interpreting phrases from lip movements. While speech is one of the most common ways for individuals to communicate, understanding what a person wants to convey from their lip movements alone remains an unsolved task. Automated lip reading involves several stages, ranging from feature extraction to the application of neural networks. This paper surveys the deep learning approaches used for lip reading.
Keywords: Automatic Speech Recognition, Lip Reading, Neural Networks, Feature Extraction, Deep Learning
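Many deep lip-reading models map a video frame sequence to per-frame character predictions and are trained with a CTC objective; the final stage is then greedy CTC decoding, which collapses repeated frame labels and removes blanks. A minimal sketch of that decoding rule (illustrative, not from the paper):

```python
def ctc_greedy_decode(frame_ids, blank=0):
    """Greedy CTC decoding: collapse runs of repeated symbols,
    then drop blank tokens, yielding the predicted label sequence."""
    out, prev = [], None
    for s in frame_ids:
        if s != prev and s != blank:
            out.append(s)
        prev = s
    return out

# Per-frame argmax ids 0,1,1,0,1,2,2,0 (0 = blank) decode to [1, 1, 2]:
# the blank between the two 1s preserves the repeated label.
decoded = ctc_greedy_decode([0, 1, 1, 0, 1, 2, 2, 0])
```

Mapping the resulting ids through a character vocabulary gives the predicted phrase; beam-search decoders refine this greedy rule but follow the same collapse-then-remove-blanks principle.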


2021 ◽  
Author(s):  
Bernt Skottun

The placing of lesions in the magno- and parvocellular layers of the lateral geniculate nucleus (LGN) of the visual stream has been used in attempts to assess the contributions of the two systems to various visual tasks. However, because there are about ten times as many parvocellular as magnocellular cells, a lesion blocking the parvocellular input would be expected to have a larger deleterious impact than one blocking the magnocellular input. Thus, a visual task that depends on all inputs, i.e., one not linked specifically to either the magno- or parvocellular system, would be expected to be more severely affected by a parvocellular lesion than by a magnocellular one, simply on the basis of the number of cells involved. A larger impact of a parvocellular lesion therefore cannot be taken to mean that the task in question is specifically, or predominantly, linked to that system. Effects that follow magnocellular lesions (and are not observed after parvocellular lesions), on the other hand, cannot be accounted for by cell number. There is, therefore, an asymmetry in the significance of the effects of lesions placed in the magno- and parvocellular layers of the LGN.
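The cell-count argument can be made concrete with a back-of-the-envelope model (illustrative numbers only, assuming task performance scales with the fraction of surviving input to a pooled mechanism):

```python
# Assume a roughly 10:1 parvocellular-to-magnocellular cell ratio,
# and a task that pools over all LGN input indiscriminately.
parvo, magno = 10, 1
total = parvo + magno

# Fraction of pooled input removed by each lesion type.
loss_after_parvo_lesion = parvo / total  # ~0.91 of the input gone
loss_after_magno_lesion = magno / total  # ~0.09 of the input gone
```

Under this toy model a parvocellular lesion removes roughly ten times more input than a magnocellular one, so a larger parvocellular deficit is exactly what a non-selective task predicts, which is the asymmetry the abstract describes.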


2021 ◽  
Vol 18 (3) ◽  
pp. 1-19
Author(s):  
Benjamin Bressolette ◽  
Sébastien Denjean ◽  
Vincent Roussarie ◽  
Mitsuko Aramaki ◽  
Sølvi Ystad ◽  
...  

Most recent vehicles are equipped with touchscreens, which replace arrays of buttons controlling secondary driving functions such as temperature level, ventilation strength, GPS, or choice of radio station. Manipulating such interfaces while driving can be problematic in terms of safety, because they require the driver's sight. In this article, we develop an innovative interface, MovEcho, which is operated by gestures and paired with sounds that serve as informational feedback. We compared this interface to a touchscreen in a perceptual experiment that took place in a driving simulator. The results show that MovEcho allows for better completion of traffic-related visual tasks and is preferred by the participants. These promising simulator results have to be confirmed in future studies in a real vehicle, with comparable expertise for both interfaces.


2021 ◽  
Author(s):  
Yong Yang ◽  
Zike Lei ◽  
Xi Chen ◽  
Tao Huang
Keyword(s):  
