Image Motion Estimator to Track Trajectories Specified With Respect to Moving Objects

Author(s):  
J. Pomares ◽  
G. J. García ◽  
L. Payá ◽  
F. Torres
2019 ◽  
Vol 116 (18) ◽  
pp. 9060-9065 ◽  
Author(s):  
Kalpana Dokka ◽  
Hyeshin Park ◽  
Michael Jansen ◽  
Gregory C. DeAngelis ◽  
Dora E. Angelaki

The brain infers our spatial orientation and properties of the world from ambiguous and noisy sensory cues. Judging self-motion (heading) in the presence of independently moving objects poses a challenging inference problem because the image motion of an object could be attributed to movement of the object, self-motion, or some combination of the two. We test whether perception of heading and object motion follows predictions of a normative causal inference framework. In a dual-report task, subjects indicated whether an object appeared stationary or moving in the virtual world, while simultaneously judging their heading. Consistent with causal inference predictions, the proportion of object stationarity reports, as well as the accuracy and precision of heading judgments, depended on the speed of object motion. Critically, biases in perceived heading declined when the object was perceived to be moving in the world. Our findings suggest that the brain interprets object motion and self-motion using a causal inference framework.
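The causal inference computation described above can be illustrated with a minimal Bayesian sketch (a toy model under Gaussian assumptions, not the paper's fitted model): the observer asks whether the object's image motion is fully explained by self-motion (object stationary in the world) or requires an independent cause. The posterior probability of stationarity then falls as the residual object speed grows, matching the reported dependence of stationarity judgments on object speed. All parameter values below are illustrative.

```python
import math

def p_stationary(residual_speed, sensory_sigma=1.0, motion_sigma=5.0, prior=0.5):
    """Posterior probability that the object is stationary in the world.

    residual_speed: object image speed left over after discounting self-motion.
    sensory_sigma: measurement noise if the object is stationary (common cause).
    motion_sigma: spread of plausible independent object speeds (broader).
    prior: prior probability that the object is stationary.
    """
    def gauss(x, sigma):
        return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    like_stationary = gauss(residual_speed, sensory_sigma)
    like_moving = gauss(residual_speed, motion_sigma)  # independent motion + noise
    num = prior * like_stationary
    return num / (num + (1 - prior) * like_moving)

for speed in (0.0, 1.0, 3.0, 6.0):
    print(speed, round(p_stationary(speed), 3))
# slow residual motion -> object judged stationary; fast -> judged moving
```

In this toy model, the same posterior would also set the weight given to the object when estimating heading, which is why heading biases decline once the object is inferred to be moving independently.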


Author(s):  
Tyler S. Manning ◽  
Kenneth H. Britten

The ability to see motion is critical to survival in a dynamic world. Decades of physiological research have established that motion perception is a distinct sub-modality of vision supported by a network of specialized structures in the nervous system. These structures are arranged hierarchically according to the spatial scale of the calculations they perform, with more local operations preceding those that are more global. The different operations serve distinct purposes, from the interception of small moving objects to the calculation of self-motion from image motion spanning the entire visual field. Each cortical area in the hierarchy has an independent representation of visual motion. These representations, together with computational accounts of their roles, provide clues to the functions of each area. Comparisons between neural activity in these areas and psychophysical performance can identify which representations are sufficient to support motion perception. Experimental manipulation of this activity can also define which areas are necessary for motion-dependent behaviors like self-motion guidance.


2019 ◽  
Vol 121 (4) ◽  
pp. 1207-1221 ◽  
Author(s):  
Ryo Sasaki ◽  
Dora E. Angelaki ◽  
Gregory C. DeAngelis

Multiple areas of macaque cortex are involved in visual motion processing, but their relative functional roles remain unclear. The medial superior temporal (MST) area is typically divided into lateral (MSTl) and dorsal (MSTd) subdivisions that are thought to be involved in processing object motion and self-motion, respectively. Whereas MSTd has been studied extensively with regard to processing visual and nonvisual self-motion cues, little is known about self-motion signals in MSTl, especially nonvisual signals. Moreover, little is known about how self-motion and object motion signals interact in MSTl and how this differs from interactions in MSTd. We compared the visual and vestibular heading tuning of neurons in MSTl and MSTd using identical stimuli. Our findings reveal that both visual and vestibular heading signals are weaker in MSTl than in MSTd, suggesting that MSTl is less well suited to participate in self-motion perception than MSTd. We also tested neurons in both areas with a variety of combinations of object motion and self-motion. Our findings reveal that vestibular signals improve the separability of coding of heading and object direction in both areas, albeit more strongly in MSTd due to the greater strength of vestibular signals. Based on a marginalization technique, population decoding reveals that heading and object direction can be more effectively dissociated from MSTd responses than MSTl responses. Our findings help to clarify the respective contributions that MSTl and MSTd make to processing of object motion and self-motion, although our conclusions may be somewhat specific to the multipart moving objects that we employed. NEW & NOTEWORTHY Retinal image motion reflects contributions from both the observer’s self-motion and the movement of objects in the environment. The neural mechanisms by which the brain dissociates self-motion and object motion remain unclear. 
This study provides the first systematic examination of how the lateral subdivision of area MST (MSTl) contributes to dissociating object motion and self-motion. We also examine, for the first time, how MSTl neurons represent translational self-motion based on both vestibular and visual cues.


2009 ◽  
Author(s):  
Piers D. Howe ◽  
Michael A. Cohen ◽  
Yair Pinto ◽  
Todd S. Horowitz
2018 ◽  
Vol 2 (1) ◽  
Author(s):  
Fatima Ameen ◽  
Ziad Mohammed ◽  
Abdulrahman Siddiq

Tracking systems for moving objects provide a useful means to better control, manage, and secure them. They are used at different scales of application, indoors and outdoors, and even to track vehicles, ships, and airplanes moving across the globe. This paper presents the design and implementation of a system for tracking objects moving over a wide geographical area. The system relies on the Global Positioning System (GPS) and Global System for Mobile Communications (GSM) technologies and does not require Internet service. The implemented system uses the freely available GPS service to determine the positions of the moving objects. Tests of the implemented system in different regions and conditions show that the maximum uncertainty in the obtained positions is a circle with a radius of about 16 m, an acceptable result for tracking objects in wide, open environments.
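A hedged sketch of the GPS side of such a tracker (the function name and field handling are illustrative, not taken from the paper): GPS receivers report fixes as NMEA 0183 sentences, and the `$GPGGA` sentence carries latitude and longitude in packed degrees-plus-minutes form, which the tracker must convert to decimal degrees before forwarding the position over GSM, e.g. as an SMS payload.

```python
# Illustrative parser for an NMEA $GPGGA fix sentence.
def parse_gpgga(sentence):
    """Return (latitude, longitude) in decimal degrees, or None if no fix."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA") or fields[6] == "0":
        return None  # fix-quality field "0" means no satellite fix

    def to_deg(value, hemi):
        # NMEA packs coordinates as ddmm.mmmm (degrees then minutes)
        dot = value.index(".")
        degrees = float(value[:dot - 2])
        minutes = float(value[dot - 2:])
        deg = degrees + minutes / 60.0
        return -deg if hemi in ("S", "W") else deg

    lat = to_deg(fields[2], fields[3])
    lon = to_deg(fields[4], fields[5])
    return lat, lon

fix = parse_gpgga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
print(fix)  # ≈ (48.1173, 11.5167)
```

The ~16 m position uncertainty reported in the paper is a property of the GPS service itself; a parser like this only recovers the coordinates the receiver reports.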


2016 ◽  
Vol 11 (4) ◽  
pp. 324
Author(s):  
Nor Nadirah Abdul Aziz ◽  
Yasir Mohd Mustafah ◽  
Amelia Wong Azman ◽  
Amir Akramin Shafie ◽  
Muhammad Izad Yusoff ◽  
...  

2012 ◽  
Vol 17 (4) ◽  
pp. 217-222
Author(s):  
Piotr Szymczyk ◽  
Magdalena Szymczyk

Abstract In this paper the authors describe in detail a system dedicated to scene configuration. The user can define various important 2D regions of the scene. The following kinds of regions can be defined: floor, total covering, down covering, up covering, middle covering, entrance/exit, protected area, prohibited area, allowed direction, prohibited direction, reflections, moving objects, light source, wall, and sky. The definition of these regions is essential for the further analysis of the live camera stream in the guardian video system.
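A minimal sketch of how such labeled 2D regions might be stored and queried (the data layout and names are assumptions for illustration, not the paper's implementation): each region is a labeled polygon in image coordinates, and a detected object's centroid is tested against it with the standard ray-casting point-in-polygon test, so the analysis stage can react when, say, an object enters a prohibited area.

```python
def point_in_polygon(pt, polygon):
    """Ray casting: count edge crossings of a horizontal ray starting at pt."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Example scene: a rectangular "prohibited area" in pixel coordinates.
regions = {"prohibited area": [(100, 100), (300, 100), (300, 200), (100, 200)]}
centroid = (150, 150)  # centroid of a detected moving object
for label, poly in regions.items():
    if point_in_polygon(centroid, poly):
        print(f"object inside region: {label}")
```

Direction-type regions (allowed/prohibited direction) would need an extra attribute, such as a reference vector compared against the object's motion vector, on top of this containment test.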


2012 ◽  
Vol 17 (4) ◽  
pp. 45-50
Author(s):  
Zbigniew Bubliński ◽  
Piotr Pawlik

Abstract The paper presents a modification of a background generation algorithm based on analysis of the frequency of occurrence of pixel values. The proposed solution allows the background to be generated and updated, and the introduced parameter allows the algorithm to be tuned to the rate of change in the image. The results show that the modified method can be applied to many tasks related to the detection and analysis of moving objects.
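An illustrative sketch of frequency-based background generation (this is the general technique the abstract names, not the paper's exact algorithm): for each pixel, count how often each intensity value occurs across frames and take the most frequent (modal) value as the background. A decay factor stands in for the paper's tuning parameter, weighting recent frames more heavily when the scene changes quickly.

```python
from collections import defaultdict

class FrequencyBackground:
    def __init__(self, decay=1.0):
        # decay < 1.0 forgets old observations faster (fast-changing scenes)
        self.decay = decay
        self.counts = defaultdict(lambda: defaultdict(float))

    def update(self, frame):
        """frame: flat sequence of pixel intensities for one image."""
        for idx, value in enumerate(frame):
            hist = self.counts[idx]
            for v in hist:
                hist[v] *= self.decay  # age existing counts
            hist[value] += 1.0

    def background(self):
        """Modal intensity per pixel = estimated background image."""
        return [max(hist, key=hist.get) for _, hist in sorted(self.counts.items())]

bg = FrequencyBackground(decay=0.9)
# Four tiny 3-pixel "frames"; pixel 1 is briefly occluded by a moving object (55).
for frame in ([10, 200, 30], [10, 200, 30], [10, 55, 30], [10, 200, 30]):
    bg.update(frame)
print(bg.background())  # → [10, 200, 30]
```

Moving-object detection then follows by thresholding the difference between each new frame and this modal background.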

