Point Encoder GAN: A deep learning model for 3D point cloud inpainting

2020 ◽  
Vol 384 ◽  
pp. 192-199 ◽  
Author(s):  
Yikuan Yu ◽  
Zitian Huang ◽  
Fei Li ◽  
Haodong Zhang ◽  
Xinyi Le

2019 ◽
Vol 56 (21) ◽  
pp. 211004
Author(s):  
王旭娇 Wang Xujiao ◽  
马杰 Ma Jie ◽  
王楠楠 Wang Nannan ◽  
马鹏飞 Ma Pengfei ◽  
杨立闯 Yang Lichuang

Author(s):  
Y.-T. Cheng ◽  
A. Patel ◽  
D. Bullock ◽  
A. Habib

Abstract. With the rapid development of autonomous vehicles (AV) and high-definition (HD) maps, up-to-date lane marking information is necessary. Over the years, several lane marking extraction approaches have been proposed, many of them based on accurate and dense Light Detection and Ranging (LiDAR) point cloud data collected by mobile mapping systems (MMS). This study proposes a normalized intensity thresholding strategy and a deep learning strategy with automatically generated labels. The former extracts lane markings directly from LiDAR point clouds, while the latter utilizes 2D intensity images generated from the LiDAR point cloud. Additionally, the proposed approaches are compared with state-of-the-art strategies such as original intensity thresholding and a deep learning approach based on manually established labels. Finally, each strategy is evaluated on asphalt and concrete pavements separately to assess its sensitivity to the nature of the pavement surface. The results show that the deep learning model trained with automatically generated labels performs best in both asphalt and concrete pavement areas, with F1-scores of 84.9% and 85.1%, respectively. In the asphalt pavement area, the original intensity thresholding strategy shows lane marking extraction performance comparable to the other strategies, while in the concrete pavement area it performs significantly worse, with an F1-score of 65.1%. Between the proposed normalized intensity thresholding and the deep learning model trained with manually labeled data, the former performs better in the asphalt pavement area, while the latter obtains better results on concrete pavements.
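As an illustration of the intensity-based branch of such a pipeline, the sketch below applies a simple normalized intensity threshold to a LiDAR point cloud; the array layout, min–max normalization, and percentile cutoff are assumptions chosen for demonstration, not the authors' implementation.

```python
# Minimal sketch of normalized intensity thresholding for lane marking extraction.
# Assumes an (N, 4) array with columns x, y, z, intensity; not the paper's method.
import numpy as np

def extract_lane_markings(points, percentile=95.0):
    """Return points whose normalized intensity exceeds a percentile threshold."""
    intensity = points[:, 3].astype(float)
    # Min-max normalize intensity to [0, 1] to reduce sensor/range dependence
    i_min, i_max = intensity.min(), intensity.max()
    normalized = (intensity - i_min) / max(i_max - i_min, 1e-9)
    # Keep highly reflective returns, which typically correspond to lane paint
    threshold = np.percentile(normalized, percentile)
    return points[normalized >= threshold]

if __name__ == "__main__":
    cloud = np.random.rand(1000, 4)        # synthetic x, y, z, intensity values
    markings = extract_lane_markings(cloud)
    print(markings.shape)
```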


2020 ◽  
Vol 13 (4) ◽  
pp. 627-640 ◽  
Author(s):  
Avinash Chandra Pandey ◽  
Dharmveer Singh Rajpoot

Background: Sentiment analysis is the contextual mining of text that determines the viewpoint of users with respect to sentiment-bearing topics commonly discussed on social networking websites. Twitter is one such site, where people express their opinions about any topic in the form of tweets. These tweets can be examined using various sentiment classification methods to find the opinions of users. Traditional sentiment analysis methods use manually extracted features for opinion classification. The manual feature extraction process is a complicated task since it requires predefined sentiment lexicons. On the other hand, deep learning methods automatically extract relevant features from data; hence, they provide better performance and richer representational capacity than traditional methods. Objective: The main aim of this paper is to enhance sentiment classification accuracy and to reduce computational cost. Method: To achieve this objective, a hybrid deep learning model based on a convolutional neural network and a bidirectional long short-term memory (BiLSTM) network has been introduced. Results: The proposed sentiment classification method achieves the highest accuracy on most of the datasets. Further, the efficacy of the proposed method has been validated through statistical analysis. Conclusion: Sentiment classification accuracy can be improved by constructing effective hybrid models. Moreover, performance can also be enhanced by tuning the hyperparameters of deep learning models.
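A minimal sketch of a hybrid CNN + BiLSTM sentiment classifier in this spirit is shown below, using Keras; the vocabulary size, sequence length, and layer widths are illustrative assumptions rather than the authors' exact configuration.

```python
# Illustrative hybrid CNN + BiLSTM text classifier (not the paper's exact model).
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size after tokenization
MAX_LEN = 100        # assumed tweet length after padding

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),
    # Convolution extracts local n-gram features from the embedded tweet
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # BiLSTM captures long-range context in both directions
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(1, activation="sigmoid"),  # binary positive/negative output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```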


GigaScience ◽  
2021 ◽  
Vol 10 (5) ◽  
Author(s):  
Teng Miao ◽  
Weiliang Wen ◽  
Yinglun Li ◽  
Sheng Wu ◽  
Chao Zhu ◽  
...  

Abstract. Background: The 3D point cloud is the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of 3D plant reconstruction. However, highly accurate, automatic, and robust point cloud segmentation approaches for plants are unavailable, so high-throughput segmentation of many shoots remains challenging. Although deep learning could feasibly solve this issue, software tools for 3D point cloud annotation to construct training datasets are lacking. Results: We propose a top-down point cloud segmentation algorithm for maize shoots based on optimal transportation distance. We apply our point cloud annotation toolkit for maize shoots, Label3DMaize, to achieve semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages through a series of operations, including stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes ∼4–10 minutes to segment a maize shoot and consumes 10–20% of the total time if only coarse segmentation is required. Fine segmentation is more detailed than coarse segmentation, especially at organ connection regions. The accuracy of coarse segmentation can reach 97.2% of that of fine segmentation. Conclusion: Label3DMaize integrates point cloud segmentation algorithms with manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further online segmentation research based on deep learning and is expected to promote automatic point cloud processing of various plants.
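To illustrate the kind of measure such a segmentation algorithm can build on, the sketch below computes an optimal transportation (earth mover's) distance between two 3D point sets using the POT library; the uniform point weights and squared-Euclidean cost are assumptions for demonstration, and this is not Label3DMaize code.

```python
# Illustrative optimal transport distance between two point sets (not Label3DMaize).
# Requires the Python Optimal Transport library: pip install pot
import numpy as np
import ot

def ot_distance(points_a, points_b):
    """Squared-Euclidean optimal transport cost between two 3D point sets."""
    # Uniform weights: each point carries equal mass
    a = np.full(len(points_a), 1.0 / len(points_a))
    b = np.full(len(points_b), 1.0 / len(points_b))
    # Pairwise squared-Euclidean cost matrix
    M = ot.dist(points_a, points_b)
    return ot.emd2(a, b, M)

if __name__ == "__main__":
    organ_1 = np.random.rand(200, 3)  # e.g., points of one candidate organ
    organ_2 = np.random.rand(180, 3)
    print(ot_distance(organ_1, organ_2))
```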

