Compound facial threat cue perception: Contributions of visual pathways by image size

2017 ◽  
Vol 17 (10) ◽  
pp. 254
Author(s):  
Troy Steiner ◽  
Robert Franklin Jr. ◽  
Kestutis Kveraga ◽  
Reginald Adams, Jr.

2016 ◽  
Vol 16 (12) ◽  
pp. 1375 ◽  
Author(s):  
Reginald Adams ◽  
Hee Yeon Im ◽  
Cody Cushing ◽  
Noreen Ward ◽  
Jasmine Boshyan ◽  
...  

1998 ◽  
Vol 505 (1) ◽  
pp. 50-73 ◽  
Author(s):  
Matthew A. Bershady ◽  
James D. Lowenthal ◽  
David C. Koo

2009 ◽  
Vol 106 (37) ◽  
pp. 15996-16001 ◽  
Author(s):  
Christopher L. Striemer ◽  
Craig S. Chapman ◽  
Melvyn A. Goodale

When we reach toward objects, we easily avoid potential obstacles located in the workspace. Previous studies suggest that obstacle avoidance relies on mechanisms in the dorsal visual stream in the posterior parietal cortex. One fundamental question that remains unanswered is where the visual inputs to these dorsal-stream mechanisms are coming from. Here, we provide compelling evidence that these mechanisms can operate in “real-time” without direct input from primary visual cortex (V1). In our first experiment, we used a reaching task to demonstrate that an individual with a dense left visual field hemianopia after damage to V1 remained strikingly sensitive to the position of unseen static obstacles placed in his blind field. Importantly, in a second experiment, we showed that his sensitivity to the same obstacles in his blind field was abolished when a short 2-s delay (without vision) was introduced before reach onset. These findings have far-reaching implications, not only for our understanding of the time constraints under which different visual pathways operate, but also in relation to how these seemingly “primitive” subcortical visual pathways can control complex everyday behavior without recourse to conscious vision.


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 991
Author(s):  
Yuta Nakahara ◽  
Toshiyasu Matsushima

In information theory, lossless compression of general data rests on an explicit assumption of a stochastic generative model of the target data. In lossless image compression, however, researchers have mainly focused on the coding procedure that maps the input image to the coded sequence, leaving the assumed stochastic generative model implicit. This makes it difficult to discuss the gap between the expected code length and the entropy of the generative model. We resolve this difficulty for a class of images that are non-stationary across segments. In this paper, we propose a novel stochastic generative model of images by making explicit the model left implicit in a previous coding procedure. Our model is based on the quadtree, so it effectively represents variable-block-size segmentation of images. We then construct the Bayes code that is optimal for the proposed model. It requires a summation over all possible quadtrees weighted by their posterior probabilities, whose computational cost in general grows exponentially with the image size. However, we introduce an efficient algorithm that computes it in time polynomial in the image size without loss of optimality. As a result, the derived algorithm achieves a better average coding rate than JBIG.
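The quadtree-weighted Bayes mixture described above can be sketched for small binary images. The recursion below is a simplified illustration, not the paper's algorithm: it assumes a Krichevsky–Trofimov estimator at each node and a uniform 1/2 prior on splitting, neither of which is specified in the abstract. Each node mixes "model this block as one segment" with "split into four quadrants and recurse", which sums over all quadtree segmentations in polynomial time:

```python
from fractions import Fraction

def kt_prob(n0, n1):
    """Krichevsky-Trofimov probability of a binary sequence
    with n0 zeros and n1 ones (order does not matter)."""
    p = Fraction(1)
    a, b = 0, 0
    for _ in range(n0):
        p *= Fraction(2 * a + 1, 2 * (a + b) + 2)
        a += 1
    for _ in range(n1):
        p *= Fraction(2 * b + 1, 2 * (a + b) + 2)
        b += 1
    return p

def quadtree_weight(block):
    """Bayes-mixture probability of a 2^k x 2^k binary block:
    at each node, mix (weight 1/2 each) 'stop and model the
    whole block as one segment' with 'split into quadrants'."""
    n = len(block)
    n1 = sum(sum(row) for row in block)
    n0 = n * n - n1
    if n == 1:
        return kt_prob(n0, n1)  # single pixel: no further split
    h = n // 2
    quads = [[row[c:c + h] for row in block[r:r + h]]
             for r in (0, h) for c in (0, h)]
    split = Fraction(1)
    for q in quads:
        split *= quadtree_weight(q)
    return Fraction(1, 2) * kt_prob(n0, n1) + Fraction(1, 2) * split
```

Under this mixture a uniform block receives a higher probability (hence a shorter ideal code length) than a checkerboard block, reflecting how the model rewards segments that are internally homogeneous.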


Author(s):  
Barnaby J. W. Dixson ◽  
Claire L. Barkhuizen ◽  
Belinda M. Craig

2021 ◽  
Vol 11 (2) ◽  
pp. 813
Author(s):  
Shuai Teng ◽  
Zongchao Liu ◽  
Gongfa Chen ◽  
Li Cheng

This paper compares the crack detection performance (in terms of precision and computational cost) of YOLO_v2 with 11 feature extractors, providing a basis for fast and accurate crack detection on concrete structures. Cracks on concrete structures are an important indicator of their durability and safety, and real-time crack detection is an essential task in structural maintenance. Object detection algorithms, especially the YOLO series networks, have significant potential for crack detection, and the feature extractor is the most important component of YOLO_v2. Hence, this paper employs 11 well-known CNN models as the feature extractor of YOLO_v2 for crack detection. The results confirm that different feature extractor models lead to different detection results: the AP value is 0.89, 0, and 0 for ‘resnet18’, ‘alexnet’, and ‘vgg16’, respectively, while ‘googlenet’ (AP = 0.84) and ‘mobilenetv2’ (AP = 0.87) achieve comparable AP values. In terms of computing speed, ‘alexnet’ takes the least computational time, with ‘squeezenet’ and ‘resnet18’ ranked second and third; ‘resnet18’ is therefore the best feature extractor in terms of both precision and computational cost. A parametric study (of the training epoch, feature extraction layer, and testing image size) shows that these parameters also affect the detection results. Overall, excellent crack detection results can be achieved by the YOLO_v2 detector when an appropriate feature extractor model, training epoch, feature extraction layer, and testing image size are chosen.
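The AP figures quoted above are the standard object-detection average precision. A minimal sketch of that metric (all-point interpolation over the precision–recall curve, computed here from score-sorted true/false-positive flags) is shown below; this is a generic illustration of the metric, not the paper's evaluation code:

```python
def average_precision(tp_flags, num_ground_truth):
    """All-point interpolated AP: walk detections in descending
    score order (tp_flags[i] is 1 for a true positive, 0 for a
    false positive), build the precision-recall curve, and sum
    (recall step) * (max precision at recall >= that step)."""
    points = []
    tp = fp = 0
    for flag in tp_flags:
        tp += flag
        fp += 1 - flag
        points.append((tp / num_ground_truth, tp / (tp + fp)))
    ap, prev_recall = 0.0, 0.0
    for recall, _ in points:
        if recall > prev_recall:
            # interpolated precision: the best precision
            # achievable at this recall level or beyond
            p_interp = max(p for r, p in points if r >= recall)
            ap += (recall - prev_recall) * p_interp
            prev_recall = recall
    return ap
```

For example, two correct detections ranked above one false positive (with two ground-truth cracks) give AP = 1.0, while a detector that produces no true positives at all gives AP = 0, which is how a feature extractor can score an AP of 0 in a comparison like the one above.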

