Coarse-to-fine 3D clothed human reconstruction using peeled semantic segmentation context

2021 ◽ Author(s): Snehith Goud Routhu, Sai Sagar Jinka, Avinash Sharma

2020 ◽ Vol 29 ◽ pp. 225-236 ◽ Author(s): Longlong Jing, Yucheng Chen, Yingli Tian

Electronics ◽ 2020 ◽ Vol 9 (10) ◽ pp. 1602 ◽ Author(s): Abbas Khan, Talha Ilyas, Muhammad Umraiz, Zubaer Ibna Mannan, Hyongsuk Kim

Convolutional neural networks (CNNs) have achieved state-of-the-art performance in numerous aspects of human life, and the agricultural sector is no exception. One of the main objectives of deep learning for smart farming is to identify the precise location of weeds and crops on farmland. In this paper, we propose a semantic segmentation method based on a cascaded encoder-decoder network, namely CED-Net, to differentiate weeds from crops. Existing architectures for weed and crop segmentation are quite deep, with millions of parameters that require long training times. To overcome these limitations, we propose training small networks in cascade to obtain coarse-to-fine predictions, which are then combined to produce the final result. The proposed network is evaluated and compared with other state-of-the-art networks on four publicly available datasets: the rice seedling and weed dataset, the BoniRob dataset, the carrot crop vs. weed dataset, and a paddy–millet dataset. The experimental results show that the proposed network outperforms state-of-the-art architectures such as U-Net, SegNet, FCN-8s, and DeepLabv3 on intersection over union (IoU), F1-score, sensitivity, true detection rate, and average precision, while using only 1/5.74, 1/5.77, 1/3.04, and 1/3.24 of their total parameters, respectively.
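The abstract does not include code, so the following is only a minimal sketch of the cascaded coarse-to-fine idea it describes: small encoder-decoder stages chained so that the coarse prediction of one stage conditions the next, with the outputs combined for the final map. The module names (SmallEncDec, CascadedSegNet), layer widths, and the averaging fusion are assumptions for illustration, not the authors' CED-Net implementation.

```python
# Sketch of cascaded coarse-to-fine segmentation (hypothetical, not CED-Net itself).
import torch
import torch.nn as nn


class SmallEncDec(nn.Module):
    """A deliberately small encoder-decoder segmentation network."""
    def __init__(self, in_ch, num_classes, width=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(width, width * 2, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(width * 2, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, num_classes, 1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))


class CascadedSegNet(nn.Module):
    """Stage 1 produces a coarse prediction; stage 2 refines it from the image
    plus the coarse probability maps; the two logit maps are fused at the end."""
    def __init__(self, in_ch=3, num_classes=3):
        super().__init__()
        self.stage1 = SmallEncDec(in_ch, num_classes)
        self.stage2 = SmallEncDec(in_ch + num_classes, num_classes)

    def forward(self, x):
        coarse = self.stage1(x)
        refined = self.stage2(torch.cat([x, torch.softmax(coarse, dim=1)], dim=1))
        # Coarse logits can carry an auxiliary loss; the average is the final output.
        return coarse, (coarse + refined) / 2


model = CascadedSegNet(in_ch=3, num_classes=3)
coarse_logits, final_logits = model(torch.randn(1, 3, 128, 128))
```

In this reading, each stage stays small because it only has to correct the previous stage's mistakes, which is what keeps the total parameter count a fraction of a single deep network's.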


2019 ◽ Vol 24 (2) ◽ pp. 207-215 ◽ Author(s): Zhenyang Wang, Zhidong Deng, Shiyao Wang

Technologies ◽ 2019 ◽ Vol 8 (1) ◽ pp. 1 ◽ Author(s): Mazen Mel, Umberto Michieli, Pietro Zanuttigh

The semantic understanding of a scene is a key problem in computer vision. In this work, we address the multi-level semantic segmentation task, where a deep neural network is first trained to recognize an initial, coarse set of a few classes. Then, in an incremental fashion, it is adapted to segment and label new object categories hierarchically derived by subdividing the classes of the initial set. We propose a set of strategies in which the output of the coarse classifiers is fed to the architectures performing the finer classification. Furthermore, we investigate the possibility of predicting the different levels of semantic understanding together, which also helps to achieve higher accuracy. Experimental results on the New York University Depth v2 (NYUDv2) dataset provide promising insights into multi-level scene understanding.
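As a companion to the abstract above, here is a minimal sketch of the two strategies it mentions: feeding the coarse classifier's output to the finer classifier, and predicting both levels jointly from a shared encoder. The class names (MultiLevelSegNet), class counts, and the softmax-concatenation coupling are assumptions for illustration, not the paper's actual architecture.

```python
# Sketch of coarse-conditioned, jointly supervised multi-level segmentation (hypothetical).
import torch
import torch.nn as nn


class MultiLevelSegNet(nn.Module):
    def __init__(self, in_ch=3, n_coarse=4, n_fine=13, feat=32):
        super().__init__()
        # Shared feature extractor for both semantic levels.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.coarse_head = nn.Conv2d(feat, n_coarse, 1)
        # The fine head also sees the coarse probabilities, so the coarse
        # semantics condition the finer classification.
        self.fine_head = nn.Conv2d(feat + n_coarse, n_fine, 1)

    def forward(self, x):
        f = self.encoder(x)
        coarse = self.coarse_head(f)
        fine = self.fine_head(torch.cat([f, torch.softmax(coarse, dim=1)], dim=1))
        return coarse, fine  # both levels can be supervised jointly


model = MultiLevelSegNet()
coarse_logits, fine_logits = model(torch.randn(1, 3, 64, 64))
```

Training both heads together, as the abstract suggests, lets the coarse supervision regularize the shared features while the fine labels refine them.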


2018 ◽ Vol 11 (6) ◽ pp. 304 ◽ Author(s): Javier Pinzon-Arenas, Robinson Jimenez-Moreno, Ruben Hernandez-Beleno
