Tracking multiple objects in the presence of articulated and occluded motion

Author(s): S.L. Dockstader, A.M. Tekalp


Author(s): J.R. McIntosh, D.L. Stemple, William Bishop, G.W. Hannaway

EM specimens often contain three-dimensional information that is lost when micrographs are recorded on a single photographic film. Two images of one specimen at appropriate orientations give a stereo view, but complex structures composed of multiple objects of graded density that superimpose in each projection are often difficult to decipher in stereo. Several analytical methods for 3-D reconstruction from multiple images of a serially tilted specimen are available, but they are all time-consuming and computationally intensive.


2019, Vol 5 (1), pp. 1-9
Author(s): Piotr Gulgowski

Abstract Singular nouns in the scope of a distributive operator have been shown to be treated as conceptually plural (Patson and Warren, 2010). The source of this conceptual plurality is not fully clear. In particular, it is not known whether the plurality associated with a singular noun originates from distribution over multiple objects or over multiple events. In the present experiment, iterative expressions (distribution over events) were contrasted with collective and distributive sentences using a Stroop-like interference technique (Berent, Pinker, Tzelgov, Bibi, and Goldfarb, 2005; Patson and Warren, 2010). A trend in the data suggests that event distributivity does not elicit a plural interpretation of a grammatically singular noun; however, the results were not statistically significant. Possible causes of the non-significant results are discussed.


Author(s): Wei Huang, Xiaoshu Zhou, Mingchao Dong, Huaiyu Xu

Abstract Robust, high-performance visual multi-object tracking is a major challenge in computer vision, especially in drone scenarios. In this paper, an online Multi-Object Tracking (MOT) approach for UAV systems is proposed to handle small target detections and class imbalance, integrating the merits of a deep high-resolution representation network and a data association method in a unified framework. Specifically, within a tracking-by-detection architecture, a Hierarchical Deep High-resolution network (HDHNet) is proposed, which encourages the model to handle targets of different types and scales and to extract more effective and comprehensive features during online learning. The extracted features are then fed into separate prediction networks to recognize targets of interest. Besides, an adjustable fusion loss function is proposed by combining focal loss and GIoU loss to address class imbalance and hard samples. During tracking, the detection results in each frame are passed to an improved DeepSORT MOT algorithm, which makes full use of target appearance features for one-to-one matching. Experimental results on the VisDrone2019 MOT benchmark show that the proposed UAV MOT system achieves the highest accuracy and the best robustness compared with state-of-the-art methods.
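The fusion loss described above combines a classification term (focal loss) with a box-regression term (GIoU loss). A minimal sketch of such a combination follows; the mixing weight `lam` and the default `alpha`/`gamma` values are illustrative assumptions, not the paper's actual weighting scheme:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Focal loss for one binary prediction: down-weights easy examples."""
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    a_t = alpha if y == 1 else 1.0 - alpha  # class-balancing weight
    return -a_t * (1.0 - p_t) ** gamma * np.log(p_t)

def giou_loss(a, b):
    """GIoU loss for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    inter = (max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
             * max(0.0, min(a[3], b[3]) - max(a[1], b[1])))
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    # smallest enclosing box C penalizes disjoint predictions
    c_area = ((max(a[2], b[2]) - min(a[0], b[0]))
              * (max(a[3], b[3]) - min(a[1], b[1])))
    giou = inter / union - (c_area - union) / c_area
    return 1.0 - giou

def fusion_loss(p, y, pred_box, gt_box, lam=0.5):
    """Hypothetical fusion: lam * focal + (1 - lam) * GIoU."""
    return lam * focal_loss(p, y) + (1.0 - lam) * giou_loss(pred_box, gt_box)
```

Unlike plain IoU, the GIoU term still produces a gradient signal when predicted and ground-truth boxes do not overlap, which is why it suits the hard samples mentioned above.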


Author(s): Jiahui Huang, Sheng Yang, Zishuo Zhao, Yu-Kun Lai, Shi-Min Hu

Abstract We present a practical backend for stereo visual SLAM which can simultaneously discover individual rigid bodies and compute their motions in dynamic environments. While recent factor-graph-based state optimization algorithms have shown their ability to robustly solve SLAM problems by treating dynamic objects as outliers, the dynamic motions themselves are rarely considered. In this paper, we exploit the consensus of 3D motions among landmarks extracted from the same rigid body to cluster them, identifying static and dynamic objects in a unified manner. Specifically, our algorithm builds a noise-aware motion affinity matrix from landmarks and uses agglomerative clustering to distinguish rigid bodies. Using decoupled factor graph optimization to refine their shapes and trajectories, we obtain an iterative scheme that updates cluster assignments and motion estimates reciprocally. Evaluations on both synthetic scenes and KITTI demonstrate the capability of our approach, and further experiments on online efficiency show the effectiveness of our method for simultaneously tracking ego-motion and multiple objects.
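The clustering step above can be sketched as follows: build an affinity matrix from pairwise discrepancies between per-landmark 3D motion vectors, then merge landmarks whose affinity-derived distance falls below a cut. The Gaussian bandwidth `sigma` (standing in for the noise model), the cut value, and the choice of single linkage are illustrative assumptions; the paper's exact affinity and linkage are not reproduced here:

```python
import numpy as np

def motion_affinity(motions, sigma=0.5):
    """Affinity near 1 for landmarks with similar 3D motion vectors."""
    d = np.linalg.norm(motions[:, None, :] - motions[None, :, :], axis=-1)
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

def cluster_landmarks(motions, sigma=0.5, cut=0.5):
    """Single-linkage agglomerative clustering via union-find: merge any
    pair whose distance (1 - affinity) is below `cut`."""
    dist = 1.0 - motion_affinity(motions, sigma)
    n = len(motions)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:          # path-halving union-find
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if dist[i, j] < cut:
                parent[find(i)] = find(j)

    roots = [find(i) for i in range(n)]
    labels = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [labels[r] for r in roots]
```

Landmarks on the static background share the (inverse) ego-motion and fall into one large cluster, while each moving rigid body yields its own cluster, which matches the unified static/dynamic treatment described above.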


Sensors, 2021, Vol 21 (5), pp. 1919
Author(s): Shuhua Liu, Huixin Xu, Qi Li, Fei Zhang, Kun Hou

To address the problem of robot object recognition in complex scenes, this paper proposes an object recognition method based on scene text reading. The proposed method simulates human behavior, accurately identifying objects that carry text by reading it. First, deep learning models with high accuracy are adopted to detect and recognize text from multiple views. Second, datasets comprising 102,000 Chinese and English scene text images, together with their inverses, are generated. Training on these two datasets improves the F-measure of text detection by 0.4% and the recognition accuracy by 1.26%. Finally, a robot object recognition method based on scene text reading is proposed. The robot detects and recognizes text in the image and stores the recognition results in a text file. When the user gives the robot a fetching instruction, the robot searches the text files for the corresponding keywords and obtains confidence scores for the multiple objects in the scene image; the object with the maximum confidence is then selected as the target. The results show that the robot can accurately distinguish objects of arbitrary shape and category and can effectively solve the problem of object recognition in home environments.
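The keyword-matching step above can be sketched as follows; `difflib.SequenceMatcher` is a stand-in scoring function chosen for illustration, not the paper's actual confidence computation:

```python
from difflib import SequenceMatcher

def object_confidences(query, recognized):
    """`recognized` maps each detected object to the text strings read from
    it; confidence is the best fuzzy match between query and those strings."""
    return {
        obj: max((SequenceMatcher(None, query.lower(), t.lower()).ratio()
                  for t in texts), default=0.0)
        for obj, texts in recognized.items()
    }

def select_target(query, recognized):
    """Pick the object whose recognized text best matches the instruction."""
    scores = object_confidences(query, recognized)
    return max(scores, key=scores.get)
```

For example, `select_target("cola", {"bottle": ["Coca Cola"], "box": ["Corn Flakes"]})` returns `"bottle"`, since the query matches the bottle's label more closely than the box's.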


2019, Vol 278 (2), pp. 709-720
Author(s): Thomas Lidbetter, Kyle Y. Lin