object motion
Recently Published Documents


TOTAL DOCUMENTS: 649 (FIVE YEARS: 134)
H-INDEX: 40 (FIVE YEARS: 5)

Object detection (OD) within a video is one of the most relevant and critical research areas in the computer vision field. With the widespread adoption of Artificial Intelligence, now a basic part of everyday life, and its exponential growth predicted in the years to come, it is set to transform society. Object detection has been extensively implemented in several areas, including human-machine interaction, autonomous vehicles, security with video surveillance, and various other fields discussed further below. However, this expansion of OD faces challenges such as occlusion, illumination variation, and object motion, without ignoring the real-time constraints that can be quite problematic. This paper also covers methods that take these issues into account. These techniques are divided into five subcategories: point detection, segmentation, supervised classifiers, optical flow, and background modeling. This survey dissects the various methods and techniques used in object detection, as well as their application domains and the problems they face. Our study discusses the crucial role of deep learning algorithms and their efficiency for future improvements in object detection within video sequences.
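Background modeling, one of the five subcategories listed above, can be illustrated with a minimal running-average sketch. The function names, blending factor, and threshold here are illustrative choices, not taken from the survey:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model: blend the new frame into the background."""
    return (1 - alpha) * background + alpha * frame

def detect_foreground(background, frame, threshold=25):
    """Mark pixels whose deviation from the background exceeds the threshold."""
    return np.abs(frame.astype(float) - background) > threshold

# Synthetic example: a static 8x8 scene in which a bright 2x2 "object" appears.
background = np.full((8, 8), 100.0)
frame = background.copy()
frame[2:4, 2:4] = 200.0           # the moving object brightens four pixels

mask = detect_foreground(background, frame)
print(int(mask.sum()))            # -> 4 foreground pixels
background = update_background(background, frame)
```

The slow blending step is what lets the model absorb gradual illumination variation while still flagging fast object motion, which is exactly the trade-off the survey's background-modeling category addresses.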


2022 ◽ Vol 2022 ◽ pp. 1-21
Author(s): Ruibin Zhang, Yingshi Guo, Yunze Long, Yang Zhou, Chunyan Jiang

A vehicle motion state prediction algorithm integrating point cloud timing multiview features and multitarget interaction information is proposed in this work to effectively predict the motion states of traffic participants around intelligent vehicles in complex scenes. The algorithm analyzes the characteristics of object motion as affected by the surrounding environment and the interaction of nearby objects, based on dual multiline light detection and ranging (LiDAR) perception of the complex traffic environment. The time-sequence aerial view map and time-sequence front view depth map are obtained using real-time point cloud information perceived by the LiDAR. Time-sequence high-level abstract combination features in the multiview scene are then extracted by an improved VGG19 network model and fused, via a one-dimensional convolutional neural network, with the potential spatiotemporal interaction features extracted from the multitarget operation state data detected by the LiDAR. A temporal feature vector is constructed as the input to a bidirectional long short-term memory (BiLSTM) network, and the desired input-output mapping is trained to predict the motion state of traffic participants. According to the test results, the proposed BiLSTM model based on point cloud multiview and vehicle interaction information outperforms other methods in predicting the state of target vehicles. These results can support research on evaluating the operational risk of intelligent vehicle environments.
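The aerial view map mentioned above starts from rasterizing LiDAR points into a bird's-eye-view grid. A minimal sketch of that first step is below; the ranges, cell size, and function name are assumptions for illustration, and the paper's actual features come from a VGG19 network operating on such maps over time:

```python
import numpy as np

def point_cloud_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.5):
    """Rasterize LiDAR points (N x 3 array, columns x, y, z in meters) into a
    bird's-eye-view occupancy grid, one ingredient of a time-sequence aerial view map."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.uint8)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)   # drop out-of-range points
    grid[ix[valid], iy[valid]] = 1
    return grid

# Two nearby points fall into the same 0.5 m cell; the third is out of range.
points = np.array([[10.0, 0.0, 0.5], [10.1, 0.1, 0.6], [55.0, 0.0, 0.2]])
bev = point_cloud_to_bev(points)
print(bev.shape, int(bev.sum()))   # (80, 80) 1
```

Stacking such grids over consecutive sweeps yields the "timing" dimension of the multiview input.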


2021 ◽ Vol 2 (3) ◽ pp. 73-85
Author(s): Suliwa, Wahono Widodo, Munasir

This study aims to determine the effect of LKPD (student worksheets) used to facilitate group-investigation cooperative learning on improving students' science process skills in learning the science material on object motion in class VIII of MTs Al Miftah Modung in the 2020/2021 academic year. This is experimental research using a quasi-experimental design. The sample consisted of all 20 students of class VIII. Hypothesis testing of students' science process skills used an independent-samples t-test with SPSS version 20.00. The test yielded tcount = 5.071, which exceeds ttable = 2.262, so H0 is rejected and Ha is accepted. The average percentage of implementation is 90.25%, in the very good category, and the average student response questionnaire score is 94%, also in the very good category. Based on the results of the data analysis, it can be concluded that LKPD facilitating group investigation has an effect on improving students' science process skills.
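The decision rule reported above (reject H0 when tcount exceeds ttable) can be sketched with a plain independent-samples t statistic. The scores below are hypothetical, invented only to show the computation; only the critical value 2.262 comes from the study:

```python
import math

def independent_t(sample_a, sample_b):
    """Student's independent-samples t statistic with pooled variance."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical post-test scores for a worksheet group and a comparison group.
with_lkpd = [85, 88, 90, 84, 87, 89, 86, 91, 88, 85]
without   = [78, 80, 75, 82, 79, 77, 81, 76, 80, 78]

t = independent_t(with_lkpd, without)
t_table = 2.262               # critical value reported in the study
print(t > t_table)            # -> True, so H0 would be rejected
```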


2021 ◽ Vol 12 (1) ◽ pp. 252
Author(s): Ke Wu, Min Li, Lei Lu, Jiangtao Xi

The reconstruction of moving objects based on phase shifting profilometry has attracted intensive interest. Most methods introduce the phase shift by projecting multiple fringe patterns, which is undesirable for moving object reconstruction because the errors caused by motion intensify as the number of fringe patterns increases. This paper proposes reconstructing an isolated moving object by projecting two fringe patterns with different frequencies. The phase shift required by phase shifting profilometry is generated by the object motion itself, and a model describing the motion-induced phase shift is presented. The phase information at the two frequencies is then retrieved by analyzing the influence introduced by the movement. Finally, the mismatch in phase information between the two frequencies is compensated and the isolated moving object is reconstructed. Experiments are presented to verify the effectiveness of the proposed method.
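A standard building block behind two-frequency approaches is dual-frequency temporal phase unwrapping, where the unambiguous low-frequency phase resolves the 2π ambiguity of the high-frequency wrapped phase. The sketch below shows only this generic step, not the paper's motion-induced phase-shift model; the function name and ratio are illustrative:

```python
import numpy as np

def dual_frequency_unwrap(phi_high, phi_unit, ratio):
    """Resolve the 2*pi ambiguity of the wrapped high-frequency phase phi_high
    using the unit-frequency phase phi_unit (ratio = f_high / f_unit)."""
    k = np.round((ratio * phi_unit - phi_high) / (2 * np.pi))  # fringe order
    return phi_high + 2 * np.pi * k

# Synthetic check: a true high-frequency phase spanning four full periods.
true_phase = np.linspace(0, 8 * np.pi, 200)
phi_high = np.angle(np.exp(1j * true_phase))   # wrapped into (-pi, pi]
phi_unit = true_phase / 8.0                    # unit-frequency phase, ratio 8
recovered = dual_frequency_unwrap(phi_high, phi_unit, 8.0)
print(np.allclose(recovered, true_phase))      # True
```

In the paper's setting, the compensation of the frequency mismatch caused by motion would happen before a step like this can be applied.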


2021 ◽ Vol 7 (1)
Author(s): Silvio Gravano, Francesco Lacquaniti, Myrka Zago

Abstract: Mental imagery represents a potential countermeasure for sensorimotor and cognitive dysfunctions due to spaceflight. It might help train people to deal with conditions unique to spaceflight. For instance, dynamic interactions with the inertial motion of weightless objects are only experienced in weightlessness, but can be simulated on Earth using mental imagery. Such training might overcome the problem of calibrating fine-grained hand forces and estimating the spatiotemporal parameters of the resulting object motion. Here, a group of astronauts grasped an imaginary ball, threw it against the ceiling or the front wall, and caught it after the bounce, during pre-flight, in-flight, and post-flight experiments. They varied the throwing speed across trials and imagined that the ball moved under Earth's gravity or in weightlessness. We found that the astronauts were able to reproduce qualitative differences between inertial and gravitational motion already on the ground, and further adapted their behavior during spaceflight. Thus, they adjusted the throwing speed and the catching time, equivalent to the duration of virtual ball motion, as a function of the imaginary 0 g versus 1 g condition. Arm kinematics of the frontal throws further revealed differential processing of the imagined gravity level in terms of the spatial features of the arm and virtual ball trajectories. We suggest that protocols of this kind may facilitate sensorimotor adaptation and help tune vestibular plasticity in-flight, since mental imagery of gravitational motion is known to engage the vestibular cortex.


2021
Author(s): HyungGoo Kim, Dora Angelaki, Gregory DeAngelis

Detecting objects that move in a scene is a fundamental computation performed by the visual system. This computation is greatly complicated by observer motion, which causes most objects to move across the retinal image. How the visual system detects scene-relative object motion during self-motion is poorly understood. Human behavioral studies suggest that the visual system may identify local conflicts between motion parallax and binocular disparity cues to depth, and may use these signals to detect moving objects. We describe a novel mechanism for performing this computation based on neurons in macaque area MT with incongruent depth tuning for binocular disparity and motion parallax cues. Neurons with incongruent tuning respond selectively to scene-relative object motion, and their responses are predictive of perceptual decisions when animals are trained to detect a moving object during self-motion. This finding establishes a novel functional role for neurons with incongruent tuning for multiple depth cues.


Sensors ◽ 2021 ◽ Vol 21 (22) ◽ pp. 7658
Author(s): Peter M. Dickson, Philip J. Rae

We describe the mathematical transformations required to convert the data recorded using typical 6-axis microelectromechanical systems (MEMS) sensor packages (3-axis rate gyroscopes and 3-axis accelerometers) when attached to an object undergoing a short duration loading event, such as blast loading, where inertial data alone are sufficient to track the object motion. By using the quaternion description, the complex object rotations and displacements that typically occur are translated into the more convenient earth frame of reference. An illustrative example is presented where a large and heavy object was thrown by the action of a very strong air blast in a complex manner. The data conversion process yielded an accurate animation of the object’s subsequent motion.
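The core of any such pipeline is propagating a unit quaternion from body-frame gyro rates; the accelerometer data can then be rotated into the earth frame and doubly integrated. The sketch below shows only the gyro-to-attitude step with a first-order integrator and renormalization, under the scalar-first quaternion convention; function names and step sizes are illustrative, not the paper's implementation:

```python
import numpy as np

def quat_multiply(a, b):
    """Hamilton product of two quaternions in scalar-first order [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def integrate_gyro(q, omega_body, dt):
    """One Euler step of attitude integration from body-frame rates (rad/s):
    q_dot = 0.5 * q (x) [0, omega], then renormalize to a unit quaternion."""
    omega_quat = np.array([0.0, *omega_body])
    q = q + 0.5 * quat_multiply(q, omega_quat) * dt
    return q / np.linalg.norm(q)

# Spin at 90 deg/s about the body z axis for 1 s in 1000 small steps.
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):
    q = integrate_gyro(q, (0.0, 0.0, np.pi / 2), 0.001)
# A 90 deg rotation about z corresponds to q = [cos 45deg, 0, 0, sin 45deg].
print(np.round(q, 3))   # -> [0.707 0.    0.    0.707]
```

For violent, short-duration events like blast loading, higher-order integrators and bias handling matter in practice; this sketch only conveys the quaternion bookkeeping.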


2021
Author(s): Gaurav Soni, Satnam Singh Saini, Simarjit Singh Malhi, Bhupinder Kaur Srao, Ashim Sharma, ...

2021
Author(s): Sahir Shrestha, Mohammad Ali Armin, Hongdong Li, Nick Barnes

Author(s): Israa A. Alwan, Faaza A. Almarsoomi

Object tracking is one of the most important topics in the fields of image processing and computer vision. It is the process of finding moving objects of interest and following them from frame to frame. In this research, an active model–based object tracking algorithm is introduced. Active models are curves placed in an image domain that can evolve to segment the object of interest. The Adaptive Diffusion Flow Active Model (ADFAM) is one of the most famous types of active model. It overcomes the drawbacks of all previous versions of active models, especially the leakage problem, noise sensitivity, and long, narrow holes or concavities. The ADFAM is well known for its very good segmentation capabilities, and in this research it is adopted for both segmentation and tracking. The proposed object tracking algorithm is initiated by detecting the target moving object manually. Then, the converged ADFAM result for the current video frame is reused as the initial estimate for the next video frame, and so on. The proposed algorithm is applied to several video sequences that differ in the nature of the object, the nature of the background, the speed of the object, the direction of object motion, and the inter-frame displacement. Experimental results show that the proposed algorithm performs very well and successfully tracks the target object in all cases.
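The frame-to-frame initialization scheme described above can be sketched as a loop in which each frame's segmentation seeds the next. Here `evolve_contour` is a hypothetical stand-in (simple thresholding restricted to a region around the previous result), not the actual ADFAM evolution:

```python
import numpy as np

def evolve_contour(frame, init_mask, threshold=128, margin=2):
    """Stand-in for ADFAM convergence: segment bright pixels, but only search
    near the initial mask (dilated by `margin` pixels), mimicking how the
    previous frame's converged contour initializes the next frame."""
    region = np.zeros_like(init_mask)
    for y, x in zip(*np.nonzero(init_mask)):
        region[max(0, y - margin):y + margin + 1,
               max(0, x - margin):x + margin + 1] = True
    return (frame > threshold) & region

# Three synthetic frames: a bright 3x3 object drifting one pixel right per frame.
frames = []
for shift in range(3):
    f = np.zeros((12, 12))
    f[4:7, 3 + shift:6 + shift] = 255
    frames.append(f)

mask = frames[0] > 128                 # manual detection on the first frame
centroids = []
for f in frames[1:]:
    mask = evolve_contour(f, mask)     # previous result initializes the next
    ys, xs = np.nonzero(mask)
    centroids.append((ys.mean(), xs.mean()))
print(centroids)                       # x-centroid advances one pixel per frame
```

The key design point, shared with the paper's algorithm, is that as long as the inter-frame displacement stays within the search margin, no re-detection is needed after the first frame.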

