Hallucination of moving objects revealed by a dynamic noise background

2020 ◽  
Author(s):  
Ryohei Nakayama ◽  
Alex O. Holcombe

Abstract
We show that on a dynamic noise background, the perceived disappearance location of a moving object is shifted in the direction of motion. This “twinkle goes” illusion has little dependence on the luminance- or chromaticity-based confusability of the object with the background, or on the amount of background motion energy in the same direction as the object motion. This suggests that the illusion is enabled by the dynamic noise masking the offset transients that otherwise accompany an object’s disappearance. While these results are consistent with an anticipatory process that pre-activates positions ahead of the object’s current position, additional findings suggest an alternative account: a continuation of attentional tracking after the object disappears. First, the shift was greatly reduced when attention was divided between two moving objects. Second, the illusion was associated with a prolonging of the perceived duration of the object, by an amount that matched the extent of extrapolation inferred from the effect of speed on the size of the illusion (~50 ms). While the anticipatory extrapolation theory does not predict this, the continuation of attentional tracking theory does. Specifically, we propose that in the absence of offset transients, attentional tracking keeps moving for several tens of milliseconds after the target disappearance, and this causes one to hallucinate a moving object at the position of attention.

2021 ◽  
Author(s):  
Joshua J Corbett

How do we perceive the location of moving objects? The position and motion literature is currently divided. Predictive accounts of object tracking propose that the position of moving objects is anticipated ahead of sensory signals, whilst non-predictive accounts claim that an anticipatory mechanism is not necessary. A novel illusion called the twinkle goes effect, describing a forward shift in the perceived final location of a moving object in the presence of dynamic noise, presents an opportunity to disambiguate these accounts. Across three experiments, we compared the predictions of predictive and non-predictive theories of object tracking by combining the twinkle goes paradigm with a multiple object tracking task. Specifically, we tested whether the size of the twinkle goes illusion would be smaller with greater attentional load (as entailed by the non-predictive, tracking continuation theory) or whether it would be unaffected by attentional load (as entailed by predictive extrapolation theory). Our results failed to align with either of these theories of object localisation and tracking. Instead, we found evidence that the twinkle goes effect may be stronger with greater attentional load. We discuss whether this result may be a consequence of an essential, but previously unexplored, relationship between the twinkle goes effect and representational momentum. In addition, this study was the first to reveal critical individual differences in the experience of the twinkle goes effect, and in the mislocalisation of moving objects. Together, our results continue to demonstrate the complexity of position and motion perception.


Symmetry ◽  
2019 ◽  
Vol 11 (1) ◽  
pp. 34 ◽  
Author(s):  
Jisang Yoo ◽  
Gyu-cheol Lee

The moving object detection task can be solved by a background subtraction algorithm if the camera is fixed. However, because the background moves, detecting moving objects from a moving car is a difficult problem. There have been attempts to detect moving objects using LiDAR or stereo cameras, but the detection rate decreased when the car moved. We propose a moving object detection algorithm using an object motion reflection model of motion vectors. The proposed method first obtains the disparity map by searching the corresponding region between stereo images. Then, we estimate the road by applying the v-disparity method to the disparity map. Optical flow is used to acquire the motion vectors of symmetric pixels between adjacent frames from which the road has been removed. We designed a probability model of how much the local motion is reflected in the motion vector to determine whether the object is moving. We experimented with the proposed method on two datasets and confirmed that it detects moving objects with higher accuracy than other methods.
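The final decision step, judging whether a pixel's motion vector reflects object motion rather than camera ego-motion, can be sketched as follows. This is a minimal illustration, not the paper's probability model: it assumes the global (ego-motion) component can be approximated by the per-axis median vector, and flags pixels whose residual motion exceeds a threshold.

```python
def detect_moving_pixels(flow, threshold=2.0):
    """flow: list of (dx, dy) motion vectors, one per tracked pixel.

    Estimate the global (ego-motion) component as the per-axis median
    vector, then return indices of pixels whose residual motion
    magnitude exceeds the threshold (i.e., likely moving objects).
    """
    xs = sorted(v[0] for v in flow)
    ys = sorted(v[1] for v in flow)
    n = len(flow)
    gx, gy = xs[n // 2], ys[n // 2]  # crude median as global motion
    moving = []
    for i, (dx, dy) in enumerate(flow):
        rx, ry = dx - gx, dy - gy    # residual after removing ego-motion
        if (rx * rx + ry * ry) ** 0.5 > threshold:
            moving.append(i)
    return moving
```

With nine pixels sharing the camera's motion and one outlier, only the outlier is flagged.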


2021 ◽  
Vol 12 (1) ◽  
pp. 252
Author(s):  
Ke Wu ◽  
Min Li ◽  
Lei Lu ◽  
Jiangtao Xi

The reconstruction of moving objects based on phase shifting profilometry has attracted intensive interest. Most methods introduce the phase shift by projecting multiple fringe patterns, which is undesirable in moving object reconstruction because the errors caused by motion intensify as the number of fringe patterns increases. This paper proposes reconstructing an isolated moving object by projecting two fringe patterns with different frequencies. The phase shift required by phase shifting profilometry is generated by the object motion, and a model describing the motion-induced phase shift is presented. Then, the phase information at the two frequencies is retrieved by analyzing the influence introduced by the movement. Finally, the mismatch in phase information between the two frequencies is compensated and the isolated moving object is reconstructed. Experiments are presented to verify the effectiveness of the proposed method.
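The phase-retrieval step at the heart of phase shifting profilometry can be illustrated with the standard N-step formula; this is textbook uniform-step phase shifting, not the paper's two-frequency, motion-induced-shift model.

```python
import math

def phase_from_samples(samples):
    """N-step phase shifting: samples[n] = A + B*cos(phi + 2*pi*n/N).

    Recovers the wrapped phase phi (in (-pi, pi]) from the discrete
    Fourier component of the intensity samples at the shift frequency.
    """
    n_steps = len(samples)
    s = sum(I * math.sin(2 * math.pi * k / n_steps)
            for k, I in enumerate(samples))
    c = sum(I * math.cos(2 * math.pi * k / n_steps)
            for k, I in enumerate(samples))
    # Sum identities give s = -(B*N/2)*sin(phi), c = (B*N/2)*cos(phi).
    return math.atan2(-s, c)
```

For example, four samples of a fringe with background A = 2, modulation B = 1 and phase 0.7 rad recover the phase to machine precision.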


Author(s):  
Minh

This paper presents an effective method for detecting multiple moving objects in a video sequence captured by a moving surveillance camera. Moving object detection from a moving camera is difficult because camera motion and object motion are mixed. In the proposed method, we first create a panoramic picture from the moving camera. Then, for each frame captured by the camera, we use template matching to find its position in the panoramic picture. Finally, we detect the moving objects using image differencing. Experimental results show that the proposed method performs well, with an average true detection rate of more than 80%.
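The two core steps, locating a frame within the panorama and differencing it against the aligned background, can be sketched as below. This is a minimal illustration on small integer grids using sum-of-absolute-differences matching; the function names and threshold are assumptions, not the paper's implementation.

```python
def best_match(panorama, frame):
    """Locate `frame` (2D grid) inside `panorama` by minimising the
    sum of absolute differences (a simple stand-in for template
    matching). Returns the (row, col) of the best-matching placement."""
    ph, pw = len(panorama), len(panorama[0])
    fh, fw = len(frame), len(frame[0])
    best, best_pos = None, None
    for y in range(ph - fh + 1):
        for x in range(pw - fw + 1):
            sad = sum(abs(panorama[y + i][x + j] - frame[i][j])
                      for i in range(fh) for j in range(fw))
            if best is None or sad < best:
                best, best_pos = sad, (y, x)
    return best_pos

def moving_mask(background_patch, frame, threshold=10):
    """Image differencing: pixels differing from the aligned panorama
    patch by more than `threshold` are marked as moving (1)."""
    return [[1 if abs(b - f) > threshold else 0
             for b, f in zip(brow, frow)]
            for brow, frow in zip(background_patch, frame)]
```

A frame cut from the panorama matches at its true offset, and differencing then isolates any pixel that changed.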


2017 ◽  
Vol 118 (1) ◽  
pp. 496-506 ◽  
Author(s):  
Tobias Bockhorst ◽  
Uwe Homberg

Goal-directed behavior is often complicated by unpredictable events, such as the appearance of a predator during directed locomotion. This situation requires adaptive responses like evasive maneuvers followed by subsequent reorientation and course correction. Here we study the possible neural underpinnings of such a situation in an insect, the desert locust. As in other insects, its sense of spatial orientation strongly relies on the central complex, a group of midline brain neuropils. The central complex houses sky compass cells that signal the polarization plane of skylight and thus indicate the animal’s steering direction relative to the sun. Most of these cells additionally respond to small moving objects that drive fast sensory-motor circuits for escape. Here we investigate how the presentation of a moving object influences activity of the neurons during compass signaling. Cells responded in one of two ways: in some neurons, responses to the moving object were simply added to the compass response that had adapted during continuous stimulation by stationary polarized light. By contrast, other neurons disadapted, i.e., regained their full compass response to polarized light, when a moving object was presented. We propose that the latter case could help to prepare for reorientation of the animal after escape. A neuronal network based on central-complex architecture can explain both responses by slight changes in the dynamics and amplitudes of adaptation to polarized light in CL columnar input neurons of the system. NEW & NOTEWORTHY Neurons of the central complex in several insects signal compass directions through sensitivity to the sky polarization pattern. In locusts, these neurons also respond to moving objects. We show here that during polarized-light presentation, responses to moving objects override their compass signaling or restore adapted inhibitory as well as excitatory compass responses. A network model is presented to explain the variations of these responses that likely serve to redirect flight or walking following evasive maneuvers.


2020 ◽  
Vol 13 (1) ◽  
pp. 60
Author(s):  
Chenjie Wang ◽  
Chengyuan Li ◽  
Jun Liu ◽  
Bin Luo ◽  
Xin Su ◽  
...  

Most scenes in practical applications are dynamic scenes containing moving objects, so accurately segmenting moving objects is crucial for many computer vision applications. In order to efficiently segment all the moving objects in the scene, regardless of whether the object has a predefined semantic label, we propose a two-level nested octave U-structure network with a multi-scale attention mechanism, called U2-ONet. U2-ONet takes two RGB frames, the optical flow between these frames, and the instance segmentation of the frames as inputs. Each stage of U2-ONet is filled with the newly designed octave residual U-block (ORSU block) to enhance the ability to obtain more contextual information at different scales while reducing the spatial redundancy of the feature maps. In order to efficiently train the multi-scale deep network, we introduce a hierarchical training supervision strategy that calculates the loss at each level while adding a knowledge-matching loss to keep the optimization consistent. The experimental results show that the proposed U2-ONet achieves state-of-the-art performance on several general moving object segmentation datasets.
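The hierarchical training supervision described above, a loss at every decoder level plus a knowledge-matching term, can be sketched in miniature. The equal level weighting and the use of binary cross-entropy between adjacent levels are assumptions for illustration, not the paper's exact loss formulation.

```python
import math

def bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy over flattened mask values in [0, 1]."""
    return -sum(t * math.log(max(p, eps)) + (1 - t) * math.log(max(1 - p, eps))
                for p, t in zip(pred, target)) / len(pred)

def deep_supervision_loss(level_preds, target, match_weight=0.5):
    """Supervise every decoder level against the same target mask, plus
    a knowledge-matching term penalising disagreement between adjacent
    levels so the multi-scale optimisation stays consistent."""
    supervision = sum(bce(p, target) for p in level_preds)
    matching = sum(bce(a, b) for a, b in zip(level_preds, level_preds[1:]))
    return supervision + match_weight * matching
```

When every level already predicts the target exactly, both terms vanish; any disagreement between levels adds to the loss even if one level is correct.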


1979 ◽  
Vol 49 (2) ◽  
pp. 343-346 ◽  
Author(s):  
Marcella V. Ridenour

30 boys and 30 girls, 6 yr. old, participated in a study assessing the influence of the visual patterns of moving objects and their respective backgrounds on the prediction of objects' directionality. An apparatus was designed to permit modified spherical objects with interchangeable covers and backgrounds to move in three-dimensional space in three directions at selected speeds. The subject's task was to predict one of three possible directions of an object: the object either moved toward the subject's midline or toward a point 18 in. to the left or right of the midline. The movements of all objects started at the same place, 19.5 ft. in front of the subject. Prediction time was recorded on 15 trials. Analysis of variance indicated that visual patterns of the moving object did not influence the prediction of the object's directionality. Visual patterns of the background behind the moving object did not influence the prediction of the object's directionality except when the moving object was light and nonpatterned. It was concluded that visual patterns of the background and of the moving object have a very limited influence on the prediction of direction.


2013 ◽  
Vol 10 (1) ◽  
pp. 173-195 ◽  
Author(s):  
George Lagogiannis ◽  
Nikos Lorentzos ◽  
Alexander Sideridis

Indexing moving objects usually involves a great number of updates, caused by objects reporting their current positions. To keep the present and past positions of the objects in secondary memory, each update introduces an I/O, and this process sometimes creates a bottleneck. In this paper we deal with the problem of minimizing the number of I/Os in such a way that queries concerning the present and past positions of the objects can be answered efficiently. In particular, we propose two new approaches that achieve an asymptotically optimal number of I/Os for performing the necessary updates. The approaches are based on the assumption that primary memory suffices for storing the current positions of the objects.
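The core idea behind achieving an asymptotically optimal number of I/Os, keeping current positions in primary memory and writing past positions to secondary memory a full block at a time, can be sketched as follows. The class and its fields are illustrative of the general buffering technique, not the paper's specific data structures.

```python
class BufferedPositionLog:
    """Keep current positions in primary memory and append past
    positions to secondary storage one full block at a time, so each
    update costs O(1/B) I/Os amortised (B = block size in records)."""

    def __init__(self, block_size):
        self.block_size = block_size
        self.current = {}      # object id -> latest position (in memory)
        self.buffer = []       # pending (id, old_position) history records
        self.disk_blocks = []  # stands in for secondary-memory block writes
        self.io_count = 0

    def update(self, obj_id, position):
        if obj_id in self.current:
            # The superseded position becomes a history record.
            self.buffer.append((obj_id, self.current[obj_id]))
        self.current[obj_id] = position
        if len(self.buffer) >= self.block_size:
            self.disk_blocks.append(self.buffer)  # one block write = one I/O
            self.io_count += 1
            self.buffer = []
```

Five updates to one object with a block size of two trigger only two block writes rather than four individual history I/Os.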


With advances in technology, security and authentication have become central concerns in computer vision. Surveillance is one of the most crucial requirements and is carried out to monitor various kinds of activities, and the detection and tracking of moving objects are fundamental to surveillance systems while remaining a challenging problem in digital image processing. Applications of moving object detection include Human-Machine Interaction (HMI), safety and video surveillance, augmented reality, transportation monitoring on roads, and medical imaging. The main goal of this research is the detection and tracking of moving objects. The proposed approach first pre-processes the video by extracting frames and reducing their dimensions. It then applies morphological methods to clean the foreground image of the moving objects and extracts texture-based features using component analysis. Next, we design a novel optimized multilayer perceptron neural network whose layers are optimized using the Pbest and Gbest particle positions of a swarm, with fitness evaluated on the binary position updates (x_update, y_update) of the swarm or object positions. The final frames of the moving objects in the video are produced using blob analysis. The application was implemented in MATLAB R2016a; a predefined sigmoid activation function is used to re-filter the given input and compute the final output. The proposed method is evaluated for detection and tracking on the MOT, FOOTBALL, INDOOR, and OUTDOOR datasets, with the goals of improving the detection accuracy and recall rates while reducing the error, false positive, and false negative rates, and is compared with classifiers such as KNN, MLPNN, and the J48 decision tree.
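The particle-swarm optimisation underlying the Pbest/Gbest layer tuning follows the standard velocity and position update; the sketch below shows that textbook update with common default coefficients, not values or fitness functions from this study.

```python
import random

def pso_step(positions, velocities, pbest, gbest,
             w=0.7, c1=1.5, c2=1.5, rng=random):
    """One particle-swarm update: each particle is pulled toward its
    personal best (pbest) and the swarm best (gbest), scaled by random
    factors, with inertia weight w damping the previous velocity."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = rng.random(), rng.random()
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]
    return positions, velocities
```

A particle already resting at both its personal best and the swarm best receives zero attraction and stays put, which is the fixed point of the update.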


2009 ◽  
Vol 09 (04) ◽  
pp. 609-627 ◽  
Author(s):  
J. WANG ◽  
N. V. PATEL ◽  
W. I. GROSKY ◽  
F. FOTOUHI

In this paper, we address the problem of camera and object motion detection in the compressed domain. Camera motion estimation and moving object segmentation have been widely studied in a variety of contexts in video analysis, owing to their capability of providing essential clues for interpreting the high-level semantics of video sequences. A novel compressed-domain motion estimation and segmentation scheme is presented and applied in this paper. MPEG-2 compressed-domain information, namely Motion Vectors (MV) and Discrete Cosine Transform (DCT) coefficients, is filtered and manipulated to obtain a dense and reliable Motion Vector Field (MVF) over consecutive frames. An iterative segmentation scheme based upon the generalized affine transformation model is exploited to perform global camera motion detection. The foreground spatiotemporal objects are separated from the background by applying a temporal consistency check to the output of the iterative segmentation. This consistency check coalesces the resulting foreground blocks and weeds out unqualified blocks. Illustrative examples are provided to demonstrate the efficacy of the proposed approach.
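The segmentation idea, separating foreground blocks whose motion vectors deviate from a fitted global affine camera-motion model, can be sketched as follows. The six-parameter form and the residual threshold are illustrative assumptions, not the paper's exact scheme.

```python
def segment_foreground(motion_vectors, affine, threshold=1.5):
    """motion_vectors: dict mapping block centre (x, y) -> (dx, dy)
    taken from the compressed stream. affine: (a, b, c, d, e, f) with
    the global camera motion predicted at (x, y) as
    (a*x + b*y + c, d*x + e*y + f). Blocks whose residual motion
    exceeds the threshold are classified as foreground."""
    a, b, c, d, e, f = affine
    foreground = []
    for (x, y), (dx, dy) in motion_vectors.items():
        px, py = a * x + b * y + c, d * x + e * y + f
        residual = ((dx - px) ** 2 + (dy - py) ** 2) ** 0.5
        if residual > threshold:
            foreground.append((x, y))
    return foreground
```

With a pure-translation camera model predicting (1, 0) everywhere, only the block whose vector departs from that prediction is kept as foreground.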

