Oracle

Leonardo ◽  
2010 ◽  
Vol 43 (5) ◽  
pp. 482-483

Oracle is an interactive installation, created within the framework of the eMobiLArt Project, that uses tracking systems, generative algorithms, sound, and video to form a dynamic environment. Through the shift between an audiovisual display and the sudden stillness of an image when a viewer remains motionless for an extended time, Oracle reveals its answer to the viewers. Witnessing the emergence of original semantics through our daily relationship with images and vast visual information, Oracle stands as a sudden metaphor of our collective unconscious.

2020 ◽  
Author(s):  
Keisuke Fukuda ◽  
April Emily Pereira ◽  
Joseph M. Saito ◽  
Ty Yi Tang ◽  
Hiroyuki Tsubomi ◽  
...  

Visual information around us is rarely static. To carry out a task in such a dynamic environment, we often have to compare current visual input with our working memory representation of the immediate past. However, little is known about what happens to a working memory (WM) representation when it is compared with perceptual input. Here, we tested university students and found that perceptual comparisons retroactively bias working memory representations toward subjectively similar perceptual inputs. Furthermore, using computational modeling and individual-differences analyses, we found that representational integration between WM representations and perceptually similar input underlies this similarity-induced memory bias. Together, our findings highlight a novel source of WM distortion and suggest a general mechanism that determines how WM representations interact with new perceptual input.
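The representational integration described above can be caricatured as similarity-weighted averaging: the stored feature value shifts toward the probe in proportion to how similar the two are. A minimal illustrative sketch, not the authors' actual model — the Gaussian similarity kernel, its width `sigma`, and the integration weight `w` are all assumptions:

```python
import math

def integrate_memory(memory, probe, sigma=15.0, w=0.5):
    """Shift a stored feature value (e.g., an orientation in degrees)
    toward a perceptual probe, weighted by their similarity."""
    similarity = math.exp(-((probe - memory) ** 2) / (2 * sigma ** 2))
    # Similar probes pull the memory strongly; dissimilar ones barely at all.
    return memory + w * similarity * (probe - memory)

# A probe 10 degrees away biases the remembered value noticeably,
# while a distant probe leaves it essentially unchanged.
biased = integrate_memory(180.0, 190.0)
unchanged = integrate_memory(180.0, 300.0)
```

The key qualitative property is the one the abstract reports: the bias is toward the probe and grows with subjective similarity.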


2021 ◽  
Vol 12 (1) ◽  
pp. 49
Author(s):  
Abira Kanwal ◽  
Zunaira Anjum ◽  
Wasif Muhammad

A simultaneous localization and mapping (SLAM) algorithm allows a mobile robot or a driverless car to determine its location in an unknown, dynamic environment while simultaneously building a consistent map of that environment. Driverless cars are moving from science fiction to emerging reality, but substantial technological breakthroughs are still required for their control, guidance, safety, and health-related issues. One problem that must be addressed is SLAM for driverless cars in GPS-denied areas, i.e., congested urban areas with large buildings where GPS signals are weakened by the dense infrastructure. Because of poor GPS reception in these areas, there is an immense need to localize and route a driverless car using onboard sensory modalities, e.g., LIDAR, RADAR, etc., without depending on GPS information for navigation and control. Driverless-car SLAM using LIDAR and RADAR relies on costly sensors, which is a limitation of that approach. To overcome these limitations, in this article we propose a visual-information-based SLAM (vSLAM) algorithm for GPS-denied areas using a cheap video camera. As a front-end process, feature-based monocular visual odometry (VO) is performed on grayscale input image frames. Random Sample Consensus (RANSAC) refinement and global pose estimation are performed as a back-end process. The results obtained from the proposed approach demonstrate 95% accuracy with a maximum mean error of 4.98.
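The RANSAC refinement step named in the back-end can be illustrated generically: repeatedly fit a model to a minimal random sample and keep the hypothesis supported by the most inliers. A minimal NumPy sketch using 2-D line fitting as a stand-in (the abstract's actual back-end refines camera poses; the threshold and iteration count here are assumptions for illustration):

```python
import numpy as np

def ransac_line(points, n_iters=200, threshold=0.1, rng=None):
    """Fit y = a*x + b robustly: sample 2 points, count inliers,
    keep the candidate line supported by the most points."""
    rng = rng or np.random.default_rng(0)
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue  # vertical sample; skip this hypothesis
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Residual of every point against the candidate line.
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int((residuals < threshold).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (a, b)
    return best_model, best_inliers

# 80 points near y = 2x + 1 plus 20 gross outliers.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 80)
inlier_pts = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.02, 80)])
outlier_pts = rng.uniform(0, 10, (20, 2))
pts = np.vstack([inlier_pts, outlier_pts])
(a, b), n_in = ransac_line(pts)
```

The same sample-score-keep loop is what makes RANSAC robust in VO: a minority of bad feature matches cannot outvote the consistent majority.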


2019 ◽  
Author(s):  
Alex L. White ◽  
Geoffrey M. Boynton ◽  
Jason D. Yeatman

Interacting with a cluttered and dynamic environment requires making decisions about visual information at relevant locations while ignoring irrelevant locations. Typical adults can do this with covert spatial attention: prioritizing particular visual field locations even without moving the eyes. Deficits of covert spatial attention have been implicated in developmental dyslexia, a specific reading disability. Previous studies of children with dyslexia, however, have been complicated by group differences in overall task ability that are difficult to distinguish from selective spatial attention. Here, we used a single-fixation visual search task to estimate orientation discrimination thresholds with and without an informative spatial cue in a large sample (N=123) of people ranging in age from 5 to 70 years and with a wide range of reading abilities. We assessed the efficiency of attentional selection via the cueing effect: the difference in log thresholds with and without the spatial cue. Across our whole sample, both absolute thresholds and the cueing effect gradually improved throughout childhood and adolescence. Compared to typical readers, individuals with dyslexia had higher thresholds (worse orientation discrimination) as well as smaller cueing effects (weaker attentional selection). Those differences in dyslexia were especially pronounced prior to age 20, when basic visual function is still maturing. Thus, in line with previous theories, literacy skills are associated with the development of selective spatial attention.
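The cueing effect defined above is a simple difference of log thresholds, so that a fixed ratio of thresholds maps to a fixed effect size. An illustrative computation — the threshold values below are made up, and the sign convention (uncued minus cued, so larger means stronger selection) is an assumption consistent with the abstract:

```python
import math

def cueing_effect(threshold_uncued, threshold_cued):
    """Efficiency of attentional selection: difference in log10
    thresholds without vs. with the informative spatial cue."""
    return math.log10(threshold_uncued) - math.log10(threshold_cued)

# Hypothetical orientation-discrimination thresholds in degrees:
# the cue halves the threshold, giving an effect of log10(2).
effect = cueing_effect(threshold_uncued=8.0, threshold_cued=4.0)
```

Working in log units makes the measure ratio-based, so it is comparable across observers whose absolute thresholds differ widely, as in this 5-to-70-year-old sample.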


Symmetry ◽  
2020 ◽  
Vol 12 (3) ◽  
pp. 446 ◽  
Author(s):  
Qiming Li ◽  
Haishen Wu ◽  
Lu Xu ◽  
Likai Wang ◽  
Yueqi Lv ◽  
...  

A low-light image enhancement method based on a deep symmetric encoder–decoder convolutional network (LLED-Net) is proposed in this paper. In surveillance and tactical reconnaissance, collecting visual information from a dynamic environment and processing it accurately is critical to making the right decisions and ensuring mission success. However, due to the cost and technical limitations of camera sensors, it is difficult to capture clear images or videos in low-light conditions. In this paper, a special encoder–decoder convolutional network is designed that exploits multi-scale feature maps and skip connections to avoid vanishing gradients. To preserve image texture as much as possible, the model is trained with a structural similarity (SSIM) loss on data sets with different brightness levels, so that it can adaptively enhance images captured in low-light environments. The results show that the proposed algorithm provides significant improvements in quantitative comparison with RED-Net and several other representative image enhancement algorithms.
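The SSIM loss mentioned above follows the standard structural-similarity formula comparing the means, variances, and covariance of two image patches, with the loss taken as 1 − SSIM. A minimal NumPy sketch for a single patch pair — this uses the conventional stabilizing constants for 8-bit images and is not the paper's exact training code, which would apply SSIM windowed over full images inside the network's training loop:

```python
import numpy as np

def ssim_patch(x, y, data_range=255.0):
    """Structural similarity between two equally sized grayscale patches."""
    c1 = (0.01 * data_range) ** 2  # conventional stabilizers
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def ssim_loss(x, y):
    # Minimizing 1 - SSIM pushes the network to preserve structure/texture,
    # not just per-pixel intensity (as an L2 loss would).
    return 1.0 - ssim_patch(x, y)

rng = np.random.default_rng(0)
patch = rng.uniform(0, 255, (8, 8))
loss_same = ssim_loss(patch, patch)        # identical patches: loss is 0
loss_dark = ssim_loss(patch, patch * 0.3)  # darkened copy: structure kept,
                                           # luminance/contrast penalized
```

This is why SSIM suits texture preservation in enhancement: a uniformly darkened patch keeps its structure term high while the luminance and contrast terms drop, which is exactly the degradation a low-light enhancer must undo.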


2009 ◽  
Vol 23 (2) ◽  
pp. 63-76 ◽  
Author(s):  
Silke Paulmann ◽  
Sarah Jessen ◽  
Sonja A. Kotz

The multimodal nature of human communication has been well established. Yet few empirical studies have systematically examined the widely held belief that multimodal perception is facilitated in comparison to unimodal or bimodal perception. In the current experiment we first explored the processing of unimodally presented facial expressions. Furthermore, auditory (prosodic and/or lexical-semantic) information was presented together with the visual information to investigate the processing of bimodal (facial and prosodic cues) and multimodal (facial, lexical, and prosodic cues) human communication. Participants engaged in an identity identification task while event-related potentials (ERPs) were recorded to examine early processing mechanisms as reflected in the P200 and N300 components. While the former component has repeatedly been linked to the processing of physical stimulus properties, the latter has been linked to more evaluative, “meaning-related” processing. A direct relationship between P200 and N300 amplitude and the number of information channels present was found: the multimodal condition elicited the smallest amplitude in both components, followed by a larger amplitude in each component for the bimodal condition, with the largest amplitude observed for the unimodal condition. These data suggest that multimodal information induces clear facilitation in comparison to unimodal or bimodal information. The advantage of multimodal perception, as reflected in the P200 and N300 components, may thus reflect one of the mechanisms allowing for fast and accurate information processing in human communication.


Author(s):  
Weiyu Zhang ◽  
Se-Hoon Jeong ◽  
Martin Fishbein†

This study investigates how multitasking interacts with levels of sexually explicit content to influence an individual’s ability to recognize TV content. A 2 (multitasking vs. nonmultitasking) by 3 (low, medium, and high sexual content) between-subjects experiment was conducted. The analyses revealed that multitasking not only impaired task performance, but also decreased TV recognition. An inverted-U relationship between degree of sexually explicit content and recognition of TV content was found, but only when subjects were multitasking. In addition, multitasking interfered with subjects’ ability to recognize audio information more than their ability to recognize visual information.

