background motion
Recently Published Documents

TOTAL DOCUMENTS: 89 (Five Years: 17)
H-INDEX: 14 (Five Years: 2)
PLoS ONE, 2021, Vol 16 (12), pp. e0261266
Author(s): Maëlle Tixier, Stéphane Rousset, Pierre-Alain Barraud, Corinne Cian

A large body of research has shown that visually induced self-motion (vection) and cognitive processing may interfere with each other. The aim of this study was to assess the interactive effects of a visual motion inducing vection (uniform motion in roll) versus a visual motion without vection (non-uniform motion) and long-term memory processing, using the characteristics of standing posture (quiet stance). As the level of interference may be related to the nature of the cognitive tasks used, we examined the effect of visual motion on a memory task that requires a spatial process (episodic recollection) versus one that does not (semantic comparisons). Results confirm previous findings that compensatory postural responses occur in the same direction as background motion. Repeatedly watching the uniform visual motion or increasing the cognitive load with a memory task did not decrease postural deviations. Finally, participants controlled their balance differently according to the memory task, but this difference was significant only in the vection condition and in the plane of background motion. Increased sway regularity (decreased entropy) combined with decreased postural stability (increased variance) during vection for the episodic task would indicate ineffective postural control. The different interference of episodic and semantic memory with posture during visual motion is consistent with the involvement of spatial processes during episodic memory recollection. It can be suggested that spatial disorientation due to visual roll motion preferentially interferes with spatial cognitive tasks, as such tasks draw on the resources expended to control posture.
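As an illustration of the two sway measures this abstract relies on (variance as a stability proxy and sample entropy as a regularity proxy), a minimal Python sketch follows. It is not the authors' analysis pipeline; the synthetic sway signal, sampling rate, and entropy parameters (m, r) are assumptions.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1D signal: lower values mean more regular sway.
    m is the template length, r the tolerance (fraction of the signal SD)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1  # exclude the self-match
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Synthetic medio-lateral sway trace (values are illustrative only)
rng = np.random.default_rng(0)
t = np.linspace(0, 60, 6000)                            # 60 s at 100 Hz
sway = 0.3 * np.sin(2 * np.pi * 0.2 * t) + 0.05 * rng.standard_normal(t.size)

print("variance:", np.var(sway))                        # postural stability proxy
print("sample entropy:", sample_entropy(sway[:1500]))   # regularity proxy
```

In the study's terms, vection during the episodic task would show up as this entropy value dropping while the variance rises.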


2021, Vol 21 (11), pp. 3
Author(s): Emily M. Crowe, Jeroen B. J. Smeets, Eli Brenner

2021, Vol ahead-of-print (ahead-of-print)
Author(s): Jinchao Huang

Purpose: The multi-domain convolutional neural network (MDCNN) model has been widely used for object recognition and tracking in computer vision. However, if the objects to be tracked move rapidly or their appearance varies dramatically, the conventional MDCNN model suffers from model drift. To address this problem when tracking rapidly moving objects in constrained environments, this paper proposes an auto-attentional mechanism-based MDCNN (AA-MDCNN) model.

Design/methodology/approach: First, to distinguish the foreground object from the background and from other similar objects, an auto-attentional mechanism selectively aggregates a weighted summation of all feature maps so that similar features are related to each other. Then, a bidirectional gated recurrent unit (Bi-GRU) architecture integrates the feature maps and selectively emphasizes the importance of the correlated ones. Finally, the final feature map is obtained by fusing the two maps above for object tracking. In addition, a composite loss function is constructed to handle sequences that are similar but have different attributes, which the conventional MDCNN model tracks poorly.

Findings: To validate the effectiveness and feasibility of the proposed AA-MDCNN model, the ImageNet-Vid dataset was used to train the tracking model and the OTB-50 dataset to validate it. Experimental results show that adding the auto-attentional mechanism improves the accuracy rate by 2.75% and the success rate by 2.41%. The authors also selected six complex tracking scenarios in the OTB-50 dataset; across the eleven evaluated attributes, the proposed AA-MDCNN model outperformed the comparative models on nine. Moreover, except for the scenario of multiple objects moving together, the proposed AA-MDCNN model handled the majority of rapid-motion tracking scenarios and outperformed the comparative models on them.

Originality/value: This paper introduces the auto-attentional mechanism into the MDCNN model and adopts a Bi-GRU architecture to extract key features. With the proposed AA-MDCNN model, rapid object tracking under complex backgrounds, motion blur and occlusion performs better, and the model is expected to be applied to rapid object tracking in the real world.
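The two mechanisms named above, self-attention across feature maps and a Bi-GRU that integrates them, can be sketched roughly as follows. This is a hypothetical PyTorch illustration of the general idea, not the authors' AA-MDCNN: the layer sizes, the treatment of each feature map as one sequence element, and the residual fusion at the end are all assumptions.

```python
import torch
import torch.nn as nn

class AttentionGRUFusion(nn.Module):
    """Toy fusion head: self-attention weights each feature map against the
    others, then a bidirectional GRU integrates the maps as a sequence."""
    def __init__(self, channels=64, spatial=7 * 7, hidden=128):
        super().__init__()
        self.query = nn.Linear(spatial, spatial)
        self.key = nn.Linear(spatial, spatial)
        self.bigru = nn.GRU(spatial, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, spatial)

    def forward(self, fmap):                      # fmap: (B, C, H, W)
        b, c, h, w = fmap.shape
        seq = fmap.flatten(2)                     # (B, C, H*W), one "token" per map
        attn = torch.softmax(self.query(seq) @ self.key(seq).transpose(1, 2)
                             / seq.size(-1) ** 0.5, dim=-1)   # (B, C, C)
        attended = attn @ seq                     # weighted summation of feature maps
        fused, _ = self.bigru(attended)           # (B, C, 2*hidden)
        fused = self.out(fused)                   # back to (B, C, H*W)
        return (attended + fused).view(b, c, h, w)

x = torch.randn(2, 64, 7, 7)                      # dummy backbone features
print(AttentionGRUFusion()(x).shape)              # torch.Size([2, 64, 7, 7])
```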


2021, Vol 7 (1), pp. 67-71
Author(s): Markus Philipp, Neal Bacher, Jonas Nienhaus, Lars Hauptmann, Laura Lang, ...

Abstract: Towards computer-assisted neurosurgery, scene-understanding algorithms for microscope video data are required. Previous work uses optical flow to extract spatiotemporal context from neurosurgical video sequences. However, to select an appropriate optical flow method, we need to analyze which algorithm yields the highest accuracy in the neurosurgical domain. Currently, no benchmark datasets are available for neurosurgery. In our work, we present an approach to generating synthetic data for optical flow evaluation in the neurosurgical domain. We simulate image sequences and thereby take into account domain-specific visual conditions such as surgical instrument motion. We then evaluate two optical flow algorithms, Farneback and PWC-Net, on our synthetic data. Qualitative and quantitative assessments confirm that our data can be used to evaluate optical flow for the neurosurgical domain. Future work will concentrate on extending the method by modeling additional effects in neurosurgery such as elastic background motion.
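The evaluation step described here, scoring a dense flow estimate against synthetic ground truth, can be sketched with OpenCV's Farneback implementation and the average endpoint error. The file names and the .npy ground-truth format are assumptions; PWC-Net is omitted because it requires a trained network.

```python
import cv2
import numpy as np

# Two consecutive frames of a synthetic sequence and its ground-truth flow
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
gt_flow = np.load("flow_000.npy")                           # (H, W, 2), assumed format

# Dense Farneback flow (parameters are typical defaults, not tuned values)
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# Average endpoint error: Euclidean distance between estimated and true vectors
epe = np.sqrt(((flow - gt_flow) ** 2).sum(axis=2)).mean()
print(f"average endpoint error: {epe:.3f} px")
```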


2021, Vol 13 (4), pp. 796
Author(s): Long Zhang, Xuezhi Yang, Jing Shen

The locations and breathing signals of people in disaster areas are vital information for search and rescue missions when prioritizing operations to save more lives. To detect living people who are lying on the ground and covered with dust, debris or ashes, a motion magnification-based method has recently been proposed. This method estimates the locations and breathing signal of people from a drone video by assuming that only human breathing-related motions exist in the video. In natural disasters, however, background motions such as trees and grass swaying in the wind are mixed with human breathing, which violates this assumption and leads to misleading or even missing life-sign locations. Life signs in disaster areas are therefore challenging to detect because of these undesired background motions. Human breathing is a natural physiological phenomenon and a periodic motion with a steady peak frequency, whereas background motion typically involves complex space-time behavior whose peak frequency varies over time. In this work we therefore analyze the frequency properties of motions and model a frequency-variability feature that extracts only human breathing while eliminating irrelevant background motions in the video, easing the detection and localization of life signs. The proposed method was validated on both drone and camera videos recorded in the wild. The average precision of our method was 0.94 for drone videos and 0.92 for camera videos, higher than that of the compared methods, demonstrating that our method is more robust and accurate against background motions. The implications and limitations of the frequency-variability feature are discussed.
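The core observation, that breathing keeps a steady spectral peak while background motion does not, can be illustrated with short-time FFTs over a per-pixel intensity time series. This is a hypothetical NumPy sketch of that idea, not the paper's implementation; the window length, frame rate, and example signals are assumptions.

```python
import numpy as np

def peak_frequency_variability(signal, fps=30, win=150):
    """Split a time series into windows, find the dominant frequency in each,
    and return the spread of those peaks. Low variability suggests a steady
    periodic motion such as breathing; high variability suggests background."""
    peaks = []
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win] - np.mean(signal[start:start + win])
        spectrum = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(win, d=1.0 / fps)
        peaks.append(freqs[np.argmax(spectrum[1:]) + 1])    # skip the DC bin
    return np.std(peaks)

fps = 30
t = np.arange(0, 20, 1 / fps)                               # 20 s of video
breathing = np.sin(2 * np.pi * 0.25 * t)                    # ~15 breaths/min
background = np.sin(2 * np.pi * (0.5 + 0.3 * np.sin(0.1 * t)) * t)  # drifting sway

print("breathing variability:", peak_frequency_variability(breathing, fps))
print("background variability:", peak_frequency_variability(background, fps))
```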


Author(s): Junhua Yan, Kun Zhang, Yin Zhang, Xuyang Cai, Jingchun Qi, ...

Three critical problems need to be tackled in target detection when both the target and the photodetector platform are in flight. First, the background is a joint sky–ground background. Second, when detecting targets from a long distance the background motion is slow and the targets are small, numerous, and lacking in shape information. Third, when approaching the target, the photodetector platform follows it with violent movements and the background moves fast. This article comprises three parts. The first part is a sky–ground joint background separation algorithm, which extracts the boundary between the sky background and the ground background based on their different characteristics. The second part is an algorithm for the detection of small flying targets against a slowly moving background (DSFT-SMB), in which a double Gaussian background model extracts the target pixel points, missed targets are supplemented by correlating target trajectories, and false-alarm targets are filtered out using trajectory features. The third part is an algorithm for the detection of flying targets against a fast-moving background (DFT-FMB), in which a spectral residual model of the target extracts the target pixel points for the target feature-point optical flow, the optical-flow speed of the target feature points is calculated in the sky background and the ground background respectively, and targets are then detected using a density clustering algorithm. Experimental results show that the proposed algorithms exhibit excellent detection performance, with a recall rate above 94%, a precision rate above 84%, and an F-measure above 89% for DSFT-SMB, and a recall rate above 77%, a precision rate above 55%, and an F-measure above 65% for DFT-FMB.
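The spectral residual step mentioned for the DFT-FMB stage follows the well-known formulation of Hou and Zhang: saliency comes from the difference between the log amplitude spectrum and its local average. The sketch below is a generic illustration of that published technique, not the authors' exact target extraction; the kernel sizes, threshold, and input file are assumptions.

```python
import cv2
import numpy as np

def spectral_residual_saliency(gray, ksize=3):
    """Spectral residual saliency: target-like pixels stand out where the
    log amplitude spectrum deviates from its smoothed version."""
    f = np.fft.fft2(gray.astype(np.float32))
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    residual = log_amp - cv2.blur(log_amp, (ksize, ksize))
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = cv2.GaussianBlur(saliency, (9, 9), 2.5)
    return cv2.normalize(saliency, None, 0, 1, cv2.NORM_MINMAX)

frame = cv2.imread("uav_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
sal = spectral_residual_saliency(frame)
candidates = sal > 0.7                                      # assumed threshold
print("candidate target pixels:", int(candidates.sum()))
```

Feature-point optical flow and density clustering would then group these candidate pixels into individual targets, as described in the abstract.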


2020
Author(s): BJE Evans, JM Fabian, DC O’Carroll, SD Wiederman

Abstract: Aerial predators, such as the dragonfly, determine the position and movement of their prey even when it is embedded in natural scenes. This task is likely supported by a group of optic lobe neurons whose responses are selective for moving targets subtending less than a few degrees. These Small Target Motion Detector (STMD) neurons are tuned to target velocity and show profound facilitation in response to targets that move along continuous trajectories. When presented with a pair of targets, some STMDs competitively select one of the alternatives as if the other did not exist. Here we describe intracellular responses of STMD neurons to the visual presentation of many potential alternatives within cluttered environments composed of natural scenes. We vary both target contrast and the background scene, across a range of target and background velocities. We find that background motion affects STMD responses indirectly, via the competitive selection of background features. We find that robust target discrimination is limited to scenarios in which the target velocity matches or exceeds the background velocity. Furthermore, STMD target discriminability is modified by background direction: backgrounds that move in the neuron's anti-preferred direction cause the least performance degradation.

Significance Statement: Biological brains solve the difficult problem of visually detecting and tracking moving features in cluttered environments. We investigated this neuronal processing by recording intracellularly from dragonfly visual neurons that encode the motion of small moving targets subtending less than a few degrees (e.g. prey and conspecifics). Dragonflies live in a complex visual environment where background features may interfere with tracking by reducing target contrast or providing competitive cues. We find that selective attention towards features drives much of the neuronal response, with background clutter competing with target stimuli for selection. Moreover, the velocity of features is an important component in determining the winner of these competitive interactions.

