Binocular fusion enhances the efficiency of spot-the-difference gameplay

PLoS ONE, 2021, Vol 16 (7), pp. e0254715
Author(s):  
Kavitha Venkataramanan ◽  
Swanandi Gawde ◽  
Amithavikram R. Hathibelagal ◽  
Shrikant R. Bharadwaj

Spot-the-difference, the popular childhood game and a prototypical change blindness task, involves identifying differences in the local features of two otherwise identical scenes using an eye-scanning and matching strategy. Through binocular fusion of the companion scenes, the game becomes a visual search task, wherein players can simply scan the cyclopean percept for local features that distinctly stand out due to binocular rivalry/lustre. Here, we had a total of 100 visually normal adult volunteers (18–28 years of age) play this game in the traditional non-fusion mode and after cross-fusion of the companion images using a hand-held mirror stereoscope. The results demonstrate that the fusion mode significantly speeds up gameplay and reduces errors, relative to the non-fusion mode, across the range of target sizes, contrasts, and chromaticities tested (all, p<0.001). Amongst the three types of local feature differences available in these images (a polarity difference, the presence/absence of a local feature, and a shape difference in a local feature), features containing a polarity difference were identified first in ~60–70% of instances in both modes of gameplay (p<0.01), with this proportion being larger in the fusion than in the non-fusion mode. The binocular fusion advantage is lost when the lustre cue is purposefully weakened through alterations in target luminance polarity. The spot-the-difference game may thus be cheated using binocular fusion, with the differences readily identified through a vivid experience of binocular rivalry/lustre.

1998, Vol 353 (1377), pp. 1801-1818
Author(s):  
N. K. Logothetis

Figures that can be seen in more than one way are invaluable tools for the study of the neural basis of visual awareness, because such stimuli permit the dissociation of the neural responses that underlie what we perceive at any given time from those forming the sensory representation of a visual pattern. To study the former type of responses, monkeys were subjected to binocular rivalry, and the response of neurons in a number of different visual areas was studied while the animals reported their alternating percepts by pulling levers. Perception–related modulations of neural activity were found to occur to different extents in different cortical visual areas. The cells that were affected by suppression were almost exclusively binocular, and their proportion was found to increase in the higher processing stages of the visual system. The strongest correlations between neural activity and perception were observed in the visual areas of the temporal lobe. A strikingly large number of neurons in the early visual areas remained active during the perceptual suppression of the stimulus, a finding suggesting that conscious visual perception might be mediated by only a subset of the cells exhibiting stimulus selective responses. These physiological findings, together with a number of recent psychophysical studies, offer a new explanation of the phenomenon of binocular rivalry. Indeed, rivalry has long been considered to be closely linked with binocular fusion and stereopsis, and the sequences of dominance and suppression have been viewed as the result of competition between the two monocular channels. The physiological data presented here are incompatible with this interpretation. Rather than reflecting interocular competition, the rivalry is most probably between the two different central neural representations generated by the dichoptically presented stimuli. 
The mechanisms of rivalry are probably the same as, or very similar to, those underlying multistable perception in general, and further physiological studies might reveal much about the neural mechanisms of our perceptual organization.


Author(s):  
C Sun ◽  
D Guo ◽  
H Gao ◽  
L Zou ◽  
H Wang

To manage version files and maintain the latest version of computer-aided design (CAD) files in asynchronous collaborative systems, a version-merging method for CAD files based on feature extraction is proposed. First, feature information is extracted from the feature attributes of the CAD files and stored in an XML feature file. The feature file is then analysed, and the feature difference set is obtained by the given algorithm. Finally, the difference set is merged into the master file using application programming interface (API) functions, thereby realizing the version merging of the CAD files. An application in CATIA validated that the proposed method is feasible and valuable in engineering.
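The difference-set step described above can be sketched in a few lines. The flat XML schema used here (a list of `<feature>` elements keyed by `name`/`type`/`params` attributes) is a hypothetical stand-in for the paper's actual feature-file format, not its real schema:

```python
import xml.etree.ElementTree as ET

def feature_set(xml_text):
    """Extract (name, type, params) tuples from a hypothetical <feature> schema."""
    root = ET.fromstring(xml_text)
    return {(f.get("name"), f.get("type"), f.get("params"))
            for f in root.iter("feature")}

def difference_set(master_xml, branch_xml):
    """Features present in the branch file but absent from the master file."""
    return feature_set(branch_xml) - feature_set(master_xml)

# toy master and branch feature files
master = '<features><feature name="hole1" type="hole" params="r=5"/></features>'
branch = ('<features><feature name="hole1" type="hole" params="r=5"/>'
          '<feature name="pad2" type="pad" params="h=3"/></features>')

diff = difference_set(master, branch)  # the features to merge into the master
```

The difference set would then be replayed against the master model through the CAD system's API, which is the part the sketch necessarily leaves out.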


2021, Vol 13 (22), pp. 4518
Author(s):  
Xin Zhao ◽  
Jiayi Guo ◽  
Yueting Zhang ◽  
Yirong Wu

The semantic segmentation of remote sensing images requires distinguishing local regions of different classes and exploiting a uniform global representation of same-class instances. Such requirements make it necessary for segmentation methods to extract discriminative local features between different classes and to explore representative features for all instances of a given class. While common deep convolutional neural networks (DCNNs) can effectively focus on local features, they are limited by their receptive field in obtaining consistent global information. In this paper, we propose a memory-augmented transformer (MAT) to effectively model both local and global information. The feature extraction pipeline of the MAT is split into a memory-based global relationship guidance module and a local feature extraction module. The local feature extraction module consists mainly of a transformer, which extracts features from the input images. The global relationship guidance module maintains a memory bank for consistent encoding of the global information, and global guidance is performed by memory interaction. Bidirectional information flow between the global and local branches is conducted by a memory-query module and a memory-update module. Experimental results on the ISPRS Potsdam and ISPRS Vaihingen datasets demonstrate that our method performs competitively with state-of-the-art methods.
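A minimal NumPy sketch of how such a memory interaction might look. The attention-based query, the moving-average update, and all dimensions are illustrative assumptions for exposition, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_query(local_feats, memory):
    """Attend local features over the global memory bank (global guidance)."""
    attn = softmax(local_feats @ memory.T / np.sqrt(memory.shape[1]))
    return attn @ memory  # globally-guided features, same shape as local_feats

def memory_update(memory, class_feats, momentum=0.9):
    """Moving-average update of each memory slot with fresh per-class features."""
    return momentum * memory + (1 - momentum) * class_feats

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))   # 4 local tokens, feature dim 8
mem = rng.normal(size=(6, 8))     # memory bank: 6 class slots

guided = memory_query(feats, mem)
mem2 = memory_update(mem, rng.normal(size=(6, 8)))
```

The two functions mirror the bidirectional flow the abstract describes: the query pulls global context into the local branch, while the update writes local evidence back into the bank.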


2021, Vol 11
Author(s):  
Wang Xiang

To investigate whether implicit detection occurs uniformly during change blindness with single- or combination-feature stimuli, and whether it is affected by exposure duration and delay, two one-shot change detection experiments were designed. The implicit detection effect is measured by comparing the reaction times (RTs) of baseline trials, in which the stimulus exhibits no change and participants report "same," and change blindness trials, in which the stimulus exhibits a change but participants nevertheless report "same." If the RTs of blindness trials are longer than those of baseline trials, implicit detection has occurred. The strength of the implicit detection effect was measured by the difference in RTs between the baseline and change blindness trials: the larger the difference, the stronger the implicit detection effect. In both Experiments 1 and 2, the RTs of change blindness trials were significantly longer than those of baseline trials, and this held at set sizes of 4, 6, and 8 alike. In Experiment 1, the difference between the baseline and change blindness trials' RTs was significantly larger for single features than for combination features; in Experiment 2, however, this difference was significantly smaller for single features than for combination features. In Experiment 1a, shorter exposure durations produced a smaller difference between the baseline and change blindness trials' RTs. In Experiment 2, longer delays produced a larger difference between the two trial types' RTs. These results suggest that regardless of whether the change occurs in a single feature or a combination of features, and regardless of exposure duration or delay, implicit detection occurs uniformly during change blindness. Moreover, longer exposure durations and delays strengthen the implicit detection effect. Set size had no significant impact on implicit detection.
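The effect measure described above is simply a difference of mean RTs between the two trial types; a toy sketch with made-up reaction times (in ms):

```python
def implicit_detection_effect(blindness_rts, baseline_rts):
    """Strength of implicit detection: mean RT on change-blindness trials
    minus mean RT on no-change baseline trials (positive => effect present)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(blindness_rts) - mean(baseline_rts)

# illustrative RTs, not data from the study
effect = implicit_detection_effect([820, 840, 860], [700, 720, 740])  # 120.0 ms
```

A positive difference is read as evidence that the change was registered implicitly even though the participant reported "same."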


2013, Vol 303-306, pp. 1089-1092
Author(s):  
Li Wang ◽  
Ting Yun ◽  
Hai Feng Lin

Local features based on interest points have recently achieved much success in action sensing. Interest points are not limited to 2D space but have also been extended to 3D. We apply 3D interest points to action sensing. A classic way to use 3D interest points is to build a histogram feature based on a bag of words; better methods exploit the position of each interest point in addition to the local feature itself, but positioning these points is difficult owing to the complexity of an action. We propose a simple method to position each interest point and create a new feature for action sensing. Evaluation of the approach on two sets of videos suggests its effectiveness.


2006, Vol 18 (6), pp. 744-750
Author(s):  
Ryouta Nakano ◽  
Kazuhiro Hotta ◽  
Haruhisa Takahashi

This paper presents an object detection method using independent local feature extractors. Since objects are composed of a combination of characteristic parts, a good object detector could be developed if local parts specialized for a detection target were derived automatically from training samples. To do this, we use Independent Component Analysis (ICA), which decomposes a signal into independent elementary signals. We then use the basis vectors derived by ICA as independent local feature extractors specialized for a detection target. These feature extractors are applied to a candidate area, and their outputs are used in classification. However, the dimensionality of the extracted independent local features is very high. To reduce it efficiently, we use Higher-order Local AutoCorrelation (HLAC) features to capture the relations between neighboring features, which may be more effective for object detection than the raw independent local features. To classify detection targets and non-targets, we use a Support Vector Machine (SVM). The proposed method is applied to a car detection problem and achieves superior performance in comparison with Principal Component Analysis (PCA).
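As an illustration of the HLAC reduction step, here is a simplified sketch computing zeroth- and first-order local autocorrelations over a 2-D feature map. The four displacement masks chosen are a small assumed subset of the full HLAC mask set, not the configuration used in the paper:

```python
import numpy as np

def hlac_features(fmap):
    """Simplified HLAC: zeroth-order (sum of responses) plus first-order
    local autocorrelations f(r) * f(r + a) for four neighbour displacements."""
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]
    feats = [fmap.sum()]                           # zeroth order
    for dy, dx in shifts:                          # first order
        shifted = np.roll(np.roll(fmap, dy, axis=0), dx, axis=1)
        feats.append((fmap * shifted).sum())
    return np.array(feats)

# toy 4x4 map standing in for one ICA feature extractor's output
fmap = np.arange(16, dtype=float).reshape(4, 4)
vec = hlac_features(fmap)  # 5-dimensional descriptor regardless of map size
```

The key property HLAC provides is that the descriptor length is fixed by the mask set, not by the image size, which is what makes it a dimensionality reduction over the raw per-pixel ICA responses.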


2018, Vol 2018, pp. 1-12
Author(s):  
Wei Zhou ◽  
Hao Wu ◽  
Chengdong Wu ◽  
Xiaosheng Yu ◽  
Yugen Yi

The optic disc is a key anatomical structure in retinal images. The ability to detect optic discs in retinal images plays an important role in automated screening systems. Inspired by the fact that humans can find optic discs in retinal images by observing some local features, we propose a local feature spectrum analysis (LFSA) that eliminates the influence caused by the variable spatial positions of local features. In LFSA, a dictionary of local features is used to reconstruct new optic disc candidate images, and the utilization frequencies of every atom in the dictionary are considered as a type of “spectrum” that can be used for classification. We also employ the sparse dictionary selection approach to construct a compact and representative dictionary. Unlike previous approaches, LFSA does not require the segmentation of vessels, and its method of considering the varying information in the retinal images is both simple and robust, making it well-suited for automated screening systems. Experimental results on the largest publicly available dataset indicate the effectiveness of our proposed approach.
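The "spectrum" idea — atom utilization frequencies accumulated over a reconstruction — can be sketched with a toy hard-assignment coder. The single-best-atom matching and the orthonormal dictionary below are simplifying assumptions, not the paper's actual sparse-coding procedure:

```python
import numpy as np

def atom_usage_spectrum(patches, dictionary):
    """Count how often each dictionary atom is the best (most correlated)
    match across all patches; the normalised counts form the 'spectrum'."""
    counts = np.zeros(dictionary.shape[0])
    for p in patches:
        counts[np.argmax(np.abs(dictionary @ p))] += 1
    return counts / len(patches)

# toy orthonormal dictionary: 5 atoms over 16-dimensional patches
D = np.eye(5, 16)
patches = [3.0 * D[2], -1.5 * D[2], 2.0 * D[4]]

spectrum = atom_usage_spectrum(patches, D)  # usage frequency per atom
```

Because the spectrum records only how often each atom is used, not where, it is invariant to the spatial position of local features — the property the abstract highlights.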


1988, Vol 67 (1), pp. 253-254
Author(s):  
Jukka Saarinen ◽  
Ritva Laaksonen ◽  
Erja Poutiainen

Rapid visual discrimination in patients with unilateral cerebral lesions was investigated using a search task. Both the exposure duration of search arrays and the difference in the texton content between the target pattern and the background patterns were varied. Patients could detect the targets with a large texton difference more rapidly than the targets with a small difference.


2016, Vol 858, pp. 163-170
Author(s):  
Feng Shan Wang ◽  
Quan Bing Rong ◽  
Hong Jun Zhang

To account for conflict sensitivity, a model is presented for fusing highly conflicting risk evidence about earthquake-damaged underground structures. Following the ideas and modelling rules of evidence theory, the original earthquake-damage risk evidence is revised with similarity coefficients, and the identical intensity and conflict intensity are calculated for each piece of risk evidence. The difference and conflict characteristics of the fusion rules based on the similarity coefficient and on the conflict intensity are comparatively analysed. Under the standard Dempster-Shafer (DS) fusion mode and the conflict-intensity fusion method, four combined fusion models are presented (Model-AO, Model-RO, Model-AC, and Model-RC), and an improved risk-fusion operator is given for such earthquake-damage evidence. Finally, a case study demonstrates the validity of the integrated model, which overcomes the high-conflict weakness of the standard DS risk-fusion model.
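For context, the standard DS fusion mode that the four models build on is Dempster's rule, in which the conflict mass k is exactly the quantity that makes highly conflicting evidence problematic. A minimal sketch over singleton hypotheses (the hypothesis names are illustrative, not from the paper):

```python
def ds_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the same
    frame of discernment; restricted to singleton hypotheses for simplicity."""
    hypotheses = set(m1) | set(m2)
    # conflict mass: total probability assigned to incompatible pairs
    k = sum(m1.get(a, 0) * m2.get(b, 0)
            for a in m1 for b in m2 if a != b)
    if k >= 1.0:
        raise ValueError("total conflict: evidences cannot be fused")
    return {h: m1.get(h, 0) * m2.get(h, 0) / (1 - k) for h in hypotheses}

# two illustrative risk evidences over a two-hypothesis frame
m1 = {"low": 0.6, "high": 0.4}
m2 = {"low": 0.7, "high": 0.3}
fused = ds_combine(m1, m2)
```

As k approaches 1, the 1/(1-k) normalisation amplifies whatever tiny agreement remains, which is the counter-intuitive behaviour the paper's revised operators are designed to avoid.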

