virtual camera
Recently Published Documents

TOTAL DOCUMENTS: 177 (five years: 40)
H-INDEX: 12 (five years: 2)

2021 · Vol 3 (6) · pp. 484-500
Author(s): Zixiang Zhao, Quanwei Zhou, Xiaoguang Han, Lili Wang

Sensors · 2021 · Vol 21 (22) · pp. 7558
Author(s): Linyan Cui, Guolong Zhang, Jinshen Wang

For the engineering application of a manipulator grasping objects, occlusion by the mechanical arm and the limited imaging angle produce various holes in the reconstructed 3D point clouds of the objects. Acquiring a complete point cloud model of the grasped object is very important for the manipulator's subsequent task planning. This paper proposes a method to automatically detect and repair the holes in the 3D point cloud model of symmetrical objects grasped by the manipulator. Using an established virtual camera coordinate system together with boundary detection, the closed boundaries of the nested holes are detected and classified into two kinds: holes left by the mechanical claw due to mechanical arm occlusion, and missing surfaces produced by the limited imaging angle. These two kinds of holes are then repaired based on surface reconstruction and object symmetry, respectively. Experiments on simulated and real point cloud models demonstrate that our approach outperforms other state-of-the-art 3D point cloud hole repair algorithms.
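
A minimal sketch of the symmetry-based repair idea described above, using only NumPy: points are reflected across an assumed symmetry plane, and reflected points that land in regions the original scan does not cover are kept as hole-filling candidates. The function name, the voxel-hash occupancy test, and all parameter values are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def mirror_fill(points: np.ndarray, plane_point: np.ndarray, plane_normal: np.ndarray,
                voxel: float = 0.005) -> np.ndarray:
    """Reflect the cloud across an assumed symmetry plane and keep only the
    reflected points that fall in regions not already covered by the original
    cloud (a crude proxy for 'holes')."""
    n = plane_normal / np.linalg.norm(plane_normal)
    # Reflection of each point p across the plane (plane_point, n):
    # p' = p - 2 * dot(p - plane_point, n) * n
    d = (points - plane_point) @ n
    mirrored = points - 2.0 * d[:, None] * n

    # Voxel hash of the original cloud; mirrored points falling into
    # empty voxels are treated as hole-filling candidates.
    occupied = {tuple(v) for v in np.floor(points / voxel).astype(int)}
    keys = np.floor(mirrored / voxel).astype(int)
    mask = np.array([tuple(k) not in occupied for k in keys])
    return np.vstack([points, mirrored[mask]])
```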


2021 · Vol 5 (ISS) · pp. 1-17
Author(s): Finn Welsford-Ackroyd, Andrew Chalmers, Rafael Kuffner dos Anjos, Daniel Medeiros, Hyejin Kim, ...

In this paper, we present a system that allows a user with a head-mounted display (HMD) to communicate and collaborate with spectators outside of the headset. We evaluate its impact on task performance, immersion, and collaborative interaction. Our solution targets scenarios like live presentations or multi-user collaborative systems, where it is not convenient to develop a VR multiplayer experience and supply each user (and spectator) with an HMD. The spectator views the virtual world on a large-scale tiled video wall and can control the orientation of their own virtual camera. This allows spectators to stay focused on the immersed user's point of view or to look around the environment freely. To improve collaboration between users, we implemented a pointing system in which a spectator can point at objects on the screen, mapping an indicator directly onto the corresponding objects in the virtual world. We conducted a user study to investigate the influence of rotational camera decoupling and pointing gestures in the context of HMD-immersed and non-immersed users utilizing a large-scale display. Our results indicate that camera decoupling and pointing positively impact collaboration. A decoupled view is preferable in situations where both users need to indicate objects of interest in the scene, such as presentations and joint-task scenarios, since these tasks require a shared reference space. A coupled view, on the other hand, is preferable in synchronous interactions such as remote-assistant scenarios.
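
As an illustration of the pointing mechanism described above, here is a hedged sketch of how a spectator's 2D screen point could be turned into a world-space indicator: the pixel is unprojected through the (possibly decoupled) spectator camera and the resulting ray is intersected with scene geometry (a plane stands in for it here). Function names, the camera model, and the plane stand-in are assumptions for illustration, not the system's actual implementation.

```python
import numpy as np

def screen_point_to_ray(px, py, width, height, fov_y_deg, cam_pos, cam_rot):
    """Convert a pixel on the video wall into a world-space ray through the
    spectator's (decoupled) virtual camera. cam_rot is a 3x3 world-from-camera
    rotation; the camera looks down -Z in its own frame."""
    aspect = width / height
    tan_half = np.tan(np.radians(fov_y_deg) / 2.0)
    # Normalised device coordinates in [-1, 1]
    x = (2.0 * px / width - 1.0) * tan_half * aspect
    y = (1.0 - 2.0 * py / height) * tan_half
    dir_cam = np.array([x, y, -1.0])
    dir_world = cam_rot @ dir_cam
    return cam_pos, dir_world / np.linalg.norm(dir_world)

def ray_plane_hit(origin, direction, plane_point, plane_normal):
    """Place the indicator where the ray meets a plane (a stand-in for the
    scene geometry the system would raycast against)."""
    denom = direction @ plane_normal
    if abs(denom) < 1e-8:
        return None
    t = ((plane_point - origin) @ plane_normal) / denom
    return origin + t * direction if t > 0 else None
```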


2021 · Vol 2021 · pp. 1-7
Author(s): Kai Liu, Qinghan Yang, Yuhao Lu, Taoyu Zhang, Shuo Chen

In the animation industry, advances in computer software and hardware have given rise to a new technology: three-dimensional animation. Three-dimensional animation software first creates a virtual world inside the computer. In this virtual three-dimensional world, the designer builds models and scenes according to the shape and size of the objects to be represented, then sets the motion trajectories of the models, the movement of the virtual camera, and other animation parameters as required. The designer also assigns specific materials to the models and adds lighting. Once all of this is complete, the computer can automatically compute and render the final frames. The software Maya helps animators accomplish exactly this work. When using Maya, knowledge from professional courses such as action design, scene design, and storyboard design can all be applied. Maya is a 3D package that is convenient to operate, and its rendered sequence frames can be composited in After Effects (AE) to produce distinctive animations. For these reasons, the three-dimensional method is preferred in production. Producing animation with the 3D software Maya poses considerable challenges, but it also helps practitioners grow and provides a clear direction for employment.
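
As a small illustration of the workflow sketched above (build a model, add a light, and keyframe a virtual camera), here is a minimal Maya Python snippet using the built-in maya.cmds module. It is meant to run inside Maya's Script Editor; object names, frame ranges, and values are arbitrary examples.

```python
# Runs inside Maya's Script Editor (Python tab); maya.cmds is Maya's built-in API.
import maya.cmds as cmds

# A stand-in model and a light, matching the "build the model, add lighting" step.
cube = cmds.polyCube(width=2, height=2, depth=2, name='hero_object')[0]
cmds.directionalLight(intensity=1.2)

# A virtual camera whose motion is keyframed over 120 frames.
cam_transform, cam_shape = cmds.camera(focalLength=35)
cmds.setKeyframe(cam_transform, attribute='translateZ', time=1, value=10)
cmds.setKeyframe(cam_transform, attribute='translateZ', time=120, value=4)
cmds.setKeyframe(cam_transform, attribute='rotateY', time=1, value=0)
cmds.setKeyframe(cam_transform, attribute='rotateY', time=120, value=45)
```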


2021 · Vol 13 (18) · pp. 3647
Author(s): Ghizlane Karara, Rafika Hajji, Florent Poux

Semantic augmentation of 3D point clouds is a challenging problem with numerous real-world applications. While deep learning has revolutionised image segmentation and classification, its impact on point clouds is still an active research field. In this paper, we propose an instance segmentation and augmentation method for 3D point clouds using deep learning architectures. We show the potential of an indirect approach using 2D images and a Mask R-CNN (Region-Based Convolutional Neural Network). Our method consists of four core steps. We first project the point cloud onto panoramic 2D images using three types of projections: spherical, cylindrical, and cubic. Next, we homogenise the resulting images to correct artefacts and empty pixels so that they are comparable to images available in common training libraries. These images are then used as input to the Mask R-CNN network, designed for 2D instance segmentation. Finally, the obtained predictions are reprojected onto the point cloud to obtain the segmentation results, and we link these results to a context-aware neural network to augment the semantics. Several tests were performed on different datasets to assess the adequacy of the method and its potential for generalisation. The developed algorithm uses only the attributes X, Y, Z and the position of a projection centre (virtual camera) as inputs.
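
A minimal sketch of the first step described above, the spherical (equirectangular) projection of a point cloud around a virtual camera, in plain NumPy. The resolution, the z-buffering, and the index map used to reproject 2D predictions back onto the points are simplifying assumptions, not the authors' exact implementation.

```python
import numpy as np

def spherical_panorama(points: np.ndarray, center: np.ndarray,
                       width: int = 2048, height: int = 1024):
    """Project XYZ points onto an equirectangular image around a virtual
    camera placed at `center`. Returns pixel coordinates, a depth image, and
    an index map for reprojecting per-pixel predictions back to the points."""
    rel = points - center
    r = np.linalg.norm(rel, axis=1)
    azimuth = np.arctan2(rel[:, 1], rel[:, 0])             # [-pi, pi]
    elevation = np.arcsin(np.clip(rel[:, 2] / r, -1, 1))   # [-pi/2, pi/2]

    u = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    v = ((np.pi / 2 - elevation) / np.pi * (height - 1)).astype(int)

    depth = np.full((height, width), np.inf)
    index = np.full((height, width), -1, dtype=int)
    for i in np.argsort(-r):        # far to near, so the nearest point wins
        depth[v[i], u[i]] = r[i]
        index[v[i], u[i]] = i
    return u, v, depth, index
```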


2021 · Vol 11 (13) · pp. 6014
Author(s): Kai Guo, Hu Ye, Junhao Gu, Honglin Chen

The aim of the perspective-three-point (P3P) problem is to estimate the extrinsic parameters of a camera, i.e. its orientation and position, from three 2D–3D point correspondences. All P3P solvers exhibit a multi-solution phenomenon, yielding up to four solutions, and require a fully calibrated camera. In contrast, in this paper we propose a novel method for intrinsic and extrinsic parameter estimation based on three 2D–3D point correspondences with a known camera position. Our core contribution is to build a new virtual camera system whose frame and image plane are defined by the original 3D points, to build a new intermediate world frame from the original image plane and the original 2D image points, and thereby to convert our problem into a P3P problem. Intrinsic and extrinsic parameter estimation then reduces to solving a frame transformation and the P3P problem. Lastly, we resolve the multi-solution ambiguity using the image resolution. Experimental results on synthetic data and real images show the accuracy, numerical stability, and uniqueness of the solution for intrinsic and extrinsic parameter estimation.
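
As a small illustration of one building block mentioned above, constructing a new coordinate frame from three 3D points, here is a hedged NumPy sketch. It builds a right-handed orthonormal frame from three non-collinear points; the exact frame conventions used by the authors may differ.

```python
import numpy as np

def frame_from_three_points(p1, p2, p3):
    """Build a right-handed orthonormal frame from three non-collinear points:
    origin at p1, x-axis towards p2, z-axis normal to the plane (p1, p2, p3).
    Returns (R, origin) with R's columns being the frame axes in world coords."""
    x = p2 - p1
    x = x / np.linalg.norm(x)
    z = np.cross(x, p3 - p1)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    R = np.column_stack([x, y, z])
    return R, p1

# A world point expressed in the new frame:
#   X_frame = R.T @ (X_world - origin)
```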


Author(s): Jung Eun Yoo, Kwanggyoon Seo, Sanghun Park, Jaedong Kim, Dawon Lee, ...

Author(s): Yudong Guo, Juyong Zhang, Yihua Chen, Hongrui Cai, Zhangjin Huang, ...

Face views are particularly important in person-to-person communication. Differences between the camera location and the face orientation can result in undesirable facial appearances of the participants during video conferencing. This phenomenon is particularly noticeable when using devices where the front-facing camera is placed in unconventional locations, such as below the display or within the keyboard. In this paper, we take a video stream from a single RGB camera as input and generate a video stream that emulates the view from a virtual camera at a designated location. The most challenging issue in this problem is that the corrected view often requires out-of-plane head rotations. To address this challenge, we reconstruct the 3D face shape and re-render it into synthesized frames according to the virtual camera location. To output the corrected video stream with a natural appearance in real time, we propose several novel techniques, including accurate eyebrow reconstruction, high-quality blending between the corrected face image and the background, and template-based 3D reconstruction of glasses. Our system works well for different lighting conditions and skin tones, and can handle users wearing glasses. Extensive experiments and user studies demonstrate that our method provides high-quality results.
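
A hedged sketch of the geometric core described above: transforming reconstructed 3D face vertices into the frame of a virtual camera at a designated location and projecting them with a pinhole model. The intrinsics, the virtual camera pose, and the function name are illustrative assumptions; the paper's rendering, blending, and glasses handling are far more involved.

```python
import numpy as np

def project_to_virtual_camera(vertices, K, R, t):
    """Project Nx3 face vertices into the image of a virtual camera with
    intrinsics K and extrinsics (R, t), i.e. x_cam = R @ X + t.
    Returns Nx2 pixel coordinates and per-vertex depth."""
    cam = vertices @ R.T + t            # world -> camera
    z = cam[:, 2:3]
    uvw = cam @ K.T                     # pinhole projection
    pixels = uvw[:, :2] / z
    return pixels, z.ravel()

# Example virtual camera raised toward display height (illustrative values):
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
R = np.eye(3)
t = np.array([0., -0.05, 0.6])
```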


Energies · 2021 · Vol 14 (2) · pp. 353
Author(s): Yu Hou, Rebekka Volk, Lucio Soibelman

Multi-sensor imagery data have been used by researchers for image semantic segmentation of buildings and outdoor scenes. Because multi-sensor data are scarce, researchers have implemented many simulation approaches to create synthetic datasets, including synthesized thermal images, since such thermal information can potentially improve segmentation accuracy. However, current approaches are mostly based on the laws of physics and are limited by the level of detail (LOD) of the geometric models, which describes the overall planning or modeling state. Another issue with current physics-based approaches is that the thermal images cannot be aligned to the RGB images, because the configuration of the virtual camera used for rendering thermal images is difficult to synchronize with the configuration of the real camera used for capturing RGB images, and this alignment is important for segmentation. In this study, we propose an image translation approach that directly converts RGB images to simulated thermal images for expanding segmentation datasets. We aim to investigate the benefits of using an image translation approach for generating synthetic aerial thermal images and to compare it with physics-based approaches. Our datasets for generating thermal images come from a city center and a university campus in Karlsruhe, Germany. We found that a generative model trained on the city-center data produced better thermal images for the campus dataset than a model trained on the campus data did for the city center. We also found that a model trained on one building style generated good thermal images for datasets with the same building style. Therefore, for an image translation approach, we suggest training datasets that contain richer and more diverse architectural information and more complex envelope structures, and whose building styles are similar to those of the test datasets.
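
As an illustration of what an image translation setup for RGB-to-thermal conversion can look like, here is a hedged PyTorch sketch with a toy convolutional generator trained with an L1 loss on paired images. The architecture, loss, and hyperparameters are placeholders and are not the model used in the study; a pix2pix-style setup would add an adversarial discriminator on top.

```python
import torch
import torch.nn as nn

# Toy encoder-decoder generator: 3-channel RGB in, 1-channel thermal out.
generator = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)
l1 = nn.L1Loss()

def train_step(rgb_batch: torch.Tensor, thermal_batch: torch.Tensor) -> float:
    """One supervised step on paired (RGB, thermal) images; the adversarial
    term of a full pix2pix setup is omitted for brevity."""
    optimizer.zero_grad()
    fake_thermal = generator(rgb_batch)
    loss = l1(fake_thermal, thermal_batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```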


Author(s): Hocine Chebi

Camera placement in a virtual environment consists of positioning and orienting a 3D virtual camera so as to respect a set of visual or cinematographic properties defined by the user. Carrying out this task is difficult in practice: the user has a clear vision of the desired result in terms of the arrangement of objects in the image, but translating it into a camera configuration is hard. In this chapter, the authors identify three areas of research that are relatively little covered by the literature dedicated to camera placement and yet appear essential. First, existing approaches offer little flexibility in both solving a problem and describing it in terms of visual properties, especially when it has no solution. The authors propose a flexible solution method that computes the set of solutions maximizing the satisfaction of the properties of the problem, whether it is over-constrained or not. Second, existing methods compute only one solution, even when the problem has several classes of solutions that are equivalent in terms of property satisfaction. The authors introduce the semantic-volumes method, which computes the set of classes of semantically equivalent solutions and proposes a representative of each class to the user. Finally, the problem of occlusion, although essential to the transmission of information, has received little attention from the community. Consequently, the authors present a new method for handling occlusion in dynamic, real-time environments.
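
As a toy illustration of property-based camera placement as described above, here is a hedged NumPy sketch that samples candidate camera positions and scores them against two simple "visual properties" (preferred distance to a target and an unoccluded line of sight). The scoring functions and the sphere-occluder test are illustrative assumptions and do not reproduce the chapter's semantic-volumes or occlusion methods.

```python
import numpy as np

def satisfaction(cam_pos, target, occluders, preferred_dist=5.0):
    """Aggregate score of two toy visual properties: the target should be
    roughly at the preferred distance, and the line of sight should not pass
    through any spherical occluder (centre, radius)."""
    d = np.linalg.norm(target - cam_pos)
    dist_score = np.exp(-((d - preferred_dist) ** 2) / 2.0)

    view_dir = (target - cam_pos) / d
    occlusion_score = 1.0
    for centre, radius in occluders:
        # Distance from the occluder centre to the camera-target segment.
        t = np.clip((centre - cam_pos) @ view_dir, 0.0, d)
        closest = cam_pos + t * view_dir
        if np.linalg.norm(centre - closest) < radius:
            occlusion_score = 0.0
    return dist_score * occlusion_score

def best_candidates(target, occluders, n_samples=500, keep=5):
    """Sample camera positions and return the highest-scoring ones."""
    rng = np.random.default_rng(0)
    candidates = rng.uniform(-10, 10, size=(n_samples, 3))
    scores = np.array([satisfaction(c, target, occluders) for c in candidates])
    return candidates[np.argsort(-scores)[:keep]]
```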

