Global Feature Learning with Human Body Region Guided for Person Re-identification

Author(s):  
Zhiqiang Li ◽  
Nong Sang ◽  
Kezhou Chen ◽  
Chuchu Han ◽  
Changxin Gao
Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 393 ◽  
Author(s):  
Jonha Lee ◽  
Dong-Wook Kim ◽  
Chee Sun Won ◽  
Seung-Won Jung

Segmentation of human bodies in images is useful for a variety of applications, including background substitution, human activity recognition, security, and video surveillance. However, human body segmentation has been a challenging problem due to the complicated shape and motion of the non-rigid human body. Meanwhile, depth sensors with advanced pattern-recognition algorithms provide human body skeletons in real time with reasonable accuracy. In this study, we propose an algorithm that projects the human body skeleton from a depth image onto a color image, where the human body region is segmented in the color image using the projected skeleton as a segmentation cue. Experimental results using the Kinect sensor demonstrate that the proposed method provides high-quality segmentation results and outperforms conventional methods.
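The skeleton-projection step in this abstract can be sketched with a standard pinhole model: transform each 3D joint from the depth camera's frame to the color camera's frame, then project it to pixel coordinates. The intrinsics, extrinsics, and function names below are illustrative assumptions, not the paper's actual calibration.

```python
import numpy as np

# Hypothetical pinhole intrinsics for the color camera (fx, fy, cx, cy);
# real values come from sensor calibration (e.g., via the Kinect SDK).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

# Hypothetical depth-to-color extrinsics: rotation R and translation t (meters).
R = np.eye(3)
t = np.array([0.025, 0.0, 0.0])  # small horizontal baseline between the sensors

def project_joint(joint_xyz):
    """Project a 3D skeleton joint (depth-camera coordinates, meters)
    into color-image pixel coordinates (u, v)."""
    p = R @ np.asarray(joint_xyz, dtype=float) + t  # depth frame -> color frame
    u = FX * p[0] / p[2] + CX                       # pinhole projection
    v = FY * p[1] / p[2] + CY
    return u, v

# Example: a joint 2 m in front of the camera, slightly left of and above center.
u, v = project_joint([-0.1, -0.2, 2.0])
```

The projected joints can then seed a segmentation of the color image, e.g., as foreground markers for a region-growing or graph-cut step.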


2020 ◽  
Vol 39 (12) ◽  
pp. 3944-3954 ◽  
Author(s):  
Chuanbin Liu ◽  
Hongtao Xie ◽  
Sicheng Zhang ◽  
Zhendong Mao ◽  
Jun Sun ◽  
...  

Author(s):  
Haiyu Zhao ◽  
Maoqing Tian ◽  
Shuyang Sun ◽  
Jing Shao ◽  
Junjie Yan ◽  
...  

Author(s):  
Jia Liu ◽  
Miyi Duan ◽  
Hongqi Gao

The normalized intensity factor, based on the statistical first-order moment of a gray-scale image, is defined in this paper. The intensity factor can be used to distinguish the brightness level of a gray-scale image and to determine a threshold value for image segmentation. Based on the intensity factor and the characteristics of the human body in a gray-scale infrared image, a new algorithm for calculating the intensity-level threshold is designed, which can be used to segment the human body area in an infrared image. In the algorithm, based on the concept of the intensity factor, the histogram of a low-brightness gray-scale infrared image (LGIRI) is divided into three parts: a low-intensity region (0.25[Formula: see text][Formula: see text]), a medium-intensity region (0.25–0.75[Formula: see text][Formula: see text]), and a high-intensity region (0.75–1[Formula: see text][Formula: see text]). The intensity [Formula: see text] that satisfies [Formula: see text] is selected as an intensity-level value [Formula: see text], the intensity [Formula: see text] that satisfies [Formula: see text] is selected as an intensity-level value [Formula: see text], and finally [Formula: see text] is taken as the pixel classification threshold (the intensity-level threshold). Note that no preprocessing for image-noise filtering is applied, and all images come from the OTCBVS dataset. Compared with selecting trough points of the histogram as the intensity-level threshold, this algorithm avoids the problem that no evident trough point exists at the high-intensity end of the histogram. The experimental results also show that the segmentation results for LGIRI produced by this algorithm are better than those of the Otsu method.
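The general shape of this approach can be sketched as follows. The paper's actual selection conditions for the two intensity-level values are elided in this record ([Formula: see text]), so the sketch below substitutes placeholder rules (region-wise histogram peaks, averaged) purely for illustration; only the intensity factor as a normalized first-order moment and the three-region histogram partition are taken from the abstract.

```python
import numpy as np

def intensity_factor(img):
    """Normalized first-order moment: mean gray level scaled to [0, 1]."""
    return float(np.mean(img)) / 255.0

def region_threshold(img, low=0.25, high=0.75):
    """Illustrative three-region intensity-level threshold. The paper's
    selection rules are elided here, so this uses hypothetical stand-ins:
    the histogram peak of the medium region (T1), the peak of the high
    region (T2), and their average as the final threshold."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    lo, hi = int(low * 255), int(high * 255)
    t1 = lo + int(np.argmax(hist[lo:hi]))  # placeholder for the elided T1 rule
    t2 = hi + int(np.argmax(hist[hi:]))    # placeholder for the elided T2 rule
    return (t1 + t2) // 2                  # placeholder combination rule

# Example on a synthetic "infrared" frame: dim background, bright body region.
rng = np.random.default_rng(0)
img = rng.integers(20, 60, size=(64, 64))                  # cool background
img[20:44, 24:40] = rng.integers(200, 250, size=(24, 16))  # warm body patch
mask = img >= region_threshold(img)                        # body segmentation
```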


2020 ◽  
Vol 10 (16) ◽  
pp. 5531
Author(s):  
Dong-seok Lee ◽  
Jong-soo Kim ◽  
Seok Chan Jeong ◽  
Soon-kak Kwon

In this study, an estimation method for human height is proposed using color and depth information. Color images are used for deep learning by Mask R-CNN to detect the human body and the human head separately. If color images are not available for extracting the human body region due to a low-light environment, the human body region is instead extracted by comparing the current frame of the depth video against a pre-stored background depth image. The topmost point of the head region is taken as the top of the head and the bottommost point of the body region as the bottom of the foot. The depth value of the head-top point is corrected to a pixel value with high similarity to its neighboring pixels, and the position of the body-bottom point is corrected by calculating a depth gradient between vertically adjacent pixels. The head-top and foot-bottom points are then converted into 3D real-world coordinates using the depth information, and the human height is estimated as the Euclidean distance between these two coordinates. Estimation errors are further reduced by averaging the accumulated height estimates. In the experiments, the estimation errors for the height of a standing person are 0.7% and 2.2% when the human body region is extracted by Mask R-CNN and by the background depth image, respectively.
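The geometric core of this method — back-projecting the two keypoints to 3D and taking their Euclidean distance — can be sketched in a few lines. The intrinsic parameters and function names below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical depth-camera intrinsics; real values come from calibration.
FX, FY, CX, CY = 570.0, 570.0, 319.5, 239.5

def to_world(u, v, z):
    """Back-project a pixel (u, v) with depth z (meters) into 3D camera
    coordinates using the pinhole model."""
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def estimate_height(head_px, foot_px):
    """Height as the Euclidean distance between the head-top and
    foot-bottom points, each given as (u, v, depth_in_meters)."""
    head = to_world(*head_px)
    foot = to_world(*foot_px)
    return float(np.linalg.norm(head - foot))

# Example: head-top and foot-bottom points of a person ~3 m from the camera.
h = estimate_height((320, 80, 3.0), (320, 400, 3.0))
```

In the paper's pipeline this estimate would additionally be smoothed by averaging the heights accumulated over frames.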


2015 ◽  
Vol 112 ◽  
pp. 43-52 ◽  
Author(s):  
Song-Zhi Su ◽  
Zhi-Hui Liu ◽  
Su-Ping Xu ◽  
Shao-Zi Li ◽  
Rongrong Ji

Author(s):  
Zhizhong Han ◽  
Xinhai Liu ◽  
Yu-Shen Liu ◽  
Matthias Zwicker

Deep learning has achieved remarkable results in 3D shape analysis by learning global shape features from the pixel-level over multiple views. Previous methods, however, compute low-level features for entire views without considering part-level information. In contrast, we propose a deep neural network, called Parts4Feature, to learn 3D global features from part-level information in multiple views. We introduce a novel definition of generally semantic parts, which Parts4Feature learns to detect in multiple views from different 3D shape segmentation benchmarks. A key idea of our architecture is that it transfers the ability to detect semantically meaningful parts in multiple views to learn 3D global features. Parts4Feature achieves this by combining a local part detection branch and a global feature learning branch with a shared region proposal module. The global feature learning branch aggregates the detected parts in terms of learned part patterns with a novel multi-attention mechanism, while the region proposal module enables locally and globally discriminative information to be promoted by each other. We demonstrate that Parts4Feature outperforms the state-of-the-art under three large-scale 3D shape benchmarks.
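The part-aggregation idea in the global feature learning branch — scoring detected part features against learned part patterns and attention-weighting them into one descriptor — can be illustrated with a toy NumPy sketch. All shapes, names, and the exact weighting scheme below are assumptions for illustration, not Parts4Feature's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

def aggregate_parts(part_feats, part_patterns):
    """Toy multi-attention aggregation: score each detected part feature
    against each learned part pattern, attention-weight the parts per
    pattern, and pool into a single global descriptor.
    Shapes: part_feats (P, D), part_patterns (K, D) -> (D,)."""
    scores = part_feats @ part_patterns.T  # (P, K) part-pattern similarities
    attn = softmax(scores, axis=0)         # attention over parts, per pattern
    per_pattern = attn.T @ part_feats      # (K, D) pattern-wise summaries
    return per_pattern.mean(axis=0)        # (D,) global shape feature

rng = np.random.default_rng(1)
parts = rng.normal(size=(6, 8))     # 6 detected parts with 8-dim features
patterns = rng.normal(size=(3, 8))  # 3 hypothetical learned part patterns
g = aggregate_parts(parts, patterns)
```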


2018 ◽  
Vol 27 (5) ◽  
pp. 2086-2095 ◽  
Author(s):  
Wujie Zhou ◽  
Lu Yu ◽  
Yang Zhou ◽  
Weiwei Qiu ◽  
Ming-Wei Wu ◽  
...  

2019 ◽  
Vol 28 (8) ◽  
pp. 3986-3999 ◽  
Author(s):  
Zhizhong Han ◽  
Honglei Lu ◽  
Zhenbao Liu ◽  
Chi-Man Vong ◽  
Yu-Shen Liu ◽  
...  
