Point Cloud Generation Using Deep Adversarial Local Features for Augmented and Mixed Reality Contents

Author(s): Sohee Lim, Minwoo Shin, Joonki Paik
2021, Vol. 12 (8), pp. 730-738
Author(s): Yinling Sui, Zhiyuan Qin, Xiaochong Tong, He Li, Lu Ding, ...

Author(s): A. Kharroubi, R. Hajji, R. Billen, F. Poux

Abstract. With the increasing number of 3D applications using immersive technologies such as virtual, augmented and mixed reality, there is a growing need for better ways to integrate unstructured 3D data such as point clouds as a source of data. Indeed, this can enable an efficient workflow from 3D capture to 3D immersive environment creation without the need to derive 3D models or run lengthy optimization pipelines. In this paper, the main focus is on the direct classification and integration of massive 3D point clouds in a virtual reality (VR) environment. The emphasis is put on leveraging open-source frameworks for easy replication of the findings. First, we develop a semi-automatic segmentation approach that provides semantic descriptors (mainly classes) to groups of points. We then build an octree data structure, leveraged through out-of-core algorithms, to continuously load in real time only the points that are in the VR user's field of view. Next, we provide an open-source solution using Unity with a user interface for VR point cloud interaction and visualisation. Finally, we provide full semantic VR data integration enhanced through custom shaders for future spatio-semantic queries. We tested our approach on several datasets, including a point cloud of 2.3 billion points representing the heritage site of the castle of Jehay (Belgium). The results underline the efficiency and performance of the solution for visualising classified massive point clouds in virtual environments at more than 100 frames per second.
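The out-of-core idea in the abstract — an octree in which whole subtrees outside the user's field of view are skipped, so only visible points are streamed — can be sketched in plain Python. This is a minimal illustration, not the authors' Unity implementation; the leaf capacity, the cone-based visibility test, and all helper names are assumptions for the sketch.

```python
import math
from dataclasses import dataclass, field

CAPACITY = 4  # max points per leaf before splitting (illustrative value)

@dataclass
class OctreeNode:
    center: tuple                                  # (x, y, z) centre of the cube
    half: float                                    # half the edge length
    points: list = field(default_factory=list)
    children: list = field(default_factory=list)   # empty, or exactly 8 nodes

def child_for(node, p):
    cx, cy, cz = node.center
    idx = 4 * (p[0] >= cx) + 2 * (p[1] >= cy) + (p[2] >= cz)
    return node.children[idx]

def split(node):
    h = node.half / 2
    cx, cy, cz = node.center
    node.children = [
        OctreeNode((cx + dx * h, cy + dy * h, cz + dz * h), h)
        for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)
    ]
    pts, node.points = node.points, []
    for q in pts:
        insert(child_for(node, q), q)

def insert(node, p):
    if node.children:
        insert(child_for(node, p), p)
        return
    node.points.append(p)
    if len(node.points) > CAPACITY:
        split(node)

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def norm(a):   return math.sqrt(dot(a, a))

def angle(a, b):
    c = dot(a, b) / (norm(a) * norm(b))
    return math.acos(max(-1.0, min(1.0, c)))

def query_fov(node, eye, direction, half_angle, out):
    """Collect only the points whose direction from the eye lies inside a
    view cone; subtrees wholly outside the cone are pruned, which is what
    lets an out-of-core viewer avoid touching every point."""
    r = node.half * math.sqrt(3)          # bounding-sphere radius of the cube
    to_c = sub(node.center, eye)
    d = norm(to_c)
    if d > r:                             # eye outside the node's sphere
        margin = math.asin(min(1.0, r / d))
        if angle(to_c, direction) > half_angle + margin:
            return                        # whole subtree outside the cone
    if node.children:
        for c in node.children:
            query_fov(c, eye, direction, half_angle, out)
    else:
        for p in node.points:
            v = sub(p, eye)
            if norm(v) > 0 and angle(v, direction) <= half_angle:
                out.append(p)

# Usage: five points, a viewer at the origin looking along +x with a 30° cone.
root = OctreeNode((0.0, 0.0, 0.0), 10.0)
for p in [(5, 0, 0), (5, 1, 0), (-5, 0, 0), (5, 5, 5), (1, 0, 0)]:
    insert(root, p)
visible = []
query_fov(root, (0, 0, 0), (1, 0, 0), math.radians(30), visible)
```

A real viewer would test node bounding boxes against the full camera frustum rather than a cone, and would page leaf data from disk instead of holding it in memory, but the pruning structure is the same.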


Author(s): J. Zhao, X. Zhang, Y. Wang

Abstract. Indoor 3D point cloud semantic segmentation is one of the key technologies for constructing 3D indoor models, which play an important role in domains such as indoor navigation and positioning, smart cities, and intelligent robotics. Deep-learning-based methods for point cloud segmentation offer a higher degree of automation and intelligence. PointNet, the first deep neural network to operate on point clouds directly, mainly extracts global features but lacks the ability to learn and extract local features, which weakens the segmentation of local architectural details and reduces the precision of structural-element segmentation. Addressing these problems, this paper puts forward an automatic end-to-end segmentation method based on a modified PointNet. Because the intensity of different indoor structural elements differs considerably, we feed the 3D coordinates, color, and intensity of each point into the point feature space. A max-pooling layer is also added to the original PointNet network to improve its ability to extract and learn local features. In addition, the 1×1 convolution kernels of the original PointNet are replaced with 3×3 kernels during feature extraction to improve the segmentation precision of indoor point clouds. The results show that this method improves the automation and precision of indoor point cloud segmentation: the precision exceeds 80% for structural elements such as walls and doors, and the average segmentation precision over all structural elements reaches 66%.
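The PointNet baseline the abstract modifies rests on two ideas that are easy to show in a few lines of NumPy: a shared per-point MLP (equivalent to a 1×1 convolution over the point dimension) and an order-invariant max pool that produces a global feature, which the segmentation branch concatenates back onto every point. The sketch below shows only that baseline, with random weights and a toy 7-channel input (x, y, z, colour, intensity, as the abstract describes); the paper's 3×3 kernels and extra max-pooling layer are not reproduced, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def shared_mlp(points, w, b):
    """One layer applied identically to every point — PointNet's 'shared
    MLP', equivalent to a 1x1 convolution along the point dimension."""
    return relu(points @ w + b)

# Toy cloud: N points x (x, y, z, r, g, b, intensity) — the 7-channel
# per-point input described in the abstract.
n_points, in_dim, feat_dim = 128, 7, 64
cloud = rng.standard_normal((n_points, in_dim))

w = rng.standard_normal((in_dim, feat_dim)) * 0.1
b = np.zeros(feat_dim)

local = shared_mlp(cloud, w, b)          # (N, 64) per-point features
global_feat = local.max(axis=0)          # (64,) permutation-invariant pool

# Segmentation branch: tile the global descriptor onto each point so the
# per-point classifier sees both local and global context.
seg_input = np.concatenate(
    [local, np.broadcast_to(global_feat, (n_points, feat_dim))], axis=1
)
print(seg_input.shape)                   # (128, 128)
```

Because the pooling is a max over points, shuffling the input order leaves the global feature unchanged, which is exactly the permutation invariance that lets PointNet consume raw, unordered point clouds.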


Author(s): Jacqueline A. Towson, Matthew S. Taylor, Diana L. Abarca, Claire Donehower Paul, Faith Ezekiel-Wilder

Purpose Communication between allied health professionals, teachers, and family members is a critical skill when addressing and providing for the individual needs of patients. Graduate students in speech-language pathology programs often have limited opportunities to practice these skills prior to or during externship placements. The purpose of this study was to evaluate a mixed-reality simulator as a viable option for speech-language pathology graduate students to practice interprofessional communication (IPC) skills when delivering diagnostic information to different stakeholders, compared to traditional role-play scenarios. Method Eighty graduate students (N = 80) completing their third semester in one speech-language pathology program were randomly assigned to one of four conditions: mixed-reality simulation with or without coaching, or role play with or without coaching. Data were collected on students' self-efficacy, IPC skills pre- and postintervention, and perceptions of the intervention. Results The students in the two coaching groups scored significantly higher than the students in the noncoaching groups on observed IPC skills. There were no significant differences in students' self-efficacy. Students' responses on social validity measures showed that both interventions, including coaching, were acceptable and feasible. Conclusions Findings indicated that coaching paired with either mixed-reality simulation or role play is a viable method to target improvement of IPC skills for graduate students in speech-language pathology. These findings are particularly relevant given the recent approval for students to obtain clinical hours in simulated environments.

