Multidimensional Measurement of Virtual Human Bodies Acquired with Depth Sensors

Author(s): Andrés Fuster-Guilló, Jorge Azorín-López, Juan Miguel Castillo-Zaragoza, Cayetano Manchón-Pernis, Luis Fernando Pérez-Pérez, ...

2010, Vol 30 (11), pp. 3084-3086
Author(s): Qian LI, Xiao-min JI, Ming-liang WANG

2018, Vol 30 (6), pp. 1110
Author(s): Jianpeng Wang, Wenhu Qin, Libo Sun

2021, Vol 20 (3), pp. 1-22
Author(s): David Langerman, Alan George

High-resolution, low-latency applications in computer vision are ubiquitous in today’s world of mixed-reality devices. These innovations provide a platform that can leverage the improving technology of depth sensors and embedded accelerators to enable higher-resolution, lower-latency processing of 3D scenes using depth-upsampling algorithms. This research demonstrates that filter-based upsampling algorithms are feasible for mixed-reality applications using low-power hardware accelerators. We parallelized and evaluated a depth-upsampling algorithm on two different devices: a reconfigurable-logic FPGA embedded within a low-power SoC, and a fixed-logic embedded graphics processing unit. We demonstrate that both accelerators can meet the real-time requirement of 11 ms latency for mixed-reality applications.
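The abstract does not spell out which filter the authors accelerate, so the following is only a minimal NumPy sketch of one common instance of the filter-based idea it refers to: joint bilateral upsampling, where a low-resolution depth map is upsampled under the guidance of a high-resolution image. The function name, parameters, and the per-pixel Python loops are illustrative; the paper’s FPGA and GPU implementations parallelize this per-pixel computation in hardware.

```python
import numpy as np

def joint_bilateral_upsample(depth_lo, guide_hi, radius=2,
                             sigma_spatial=1.0, sigma_range=0.1):
    """Upsample a low-resolution depth map to the resolution of a
    high-resolution guide image with a joint bilateral filter.

    depth_lo : (h, w) float array, low-resolution depth map
    guide_hi : (H, W) float array, high-resolution grayscale guide in [0, 1]
    radius   : neighbourhood radius in low-resolution pixels
    """
    h, w = depth_lo.shape
    H, W = guide_hi.shape
    scale_y, scale_x = h / H, w / W

    out = np.zeros((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            # Location of this high-resolution pixel on the low-res grid.
            ly, lx = y * scale_y, x * scale_x
            cy = min(int(round(ly)), h - 1)
            cx = min(int(round(lx)), w - 1)

            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy, qx = cy + dy, cx + dx
                    if not (0 <= qy < h and 0 <= qx < w):
                        continue
                    # Guide pixel that corresponds to this low-res sample.
                    gy = min(int(qy / scale_y), H - 1)
                    gx = min(int(qx / scale_x), W - 1)
                    # Spatial weight (distance on the low-res grid) times
                    # range weight (guide-image intensity difference).
                    w_s = np.exp(-((qy - ly) ** 2 + (qx - lx) ** 2)
                                 / (2 * sigma_spatial ** 2))
                    w_r = np.exp(-(guide_hi[y, x] - guide_hi[gy, gx]) ** 2
                                 / (2 * sigma_range ** 2))
                    weight = w_s * w_r
                    num += weight * depth_lo[qy, qx]
                    den += weight
            out[y, x] = num / den if den > 0 else depth_lo[cy, cx]
    return out
```

Because every output pixel is computed independently from a small neighbourhood, the filter maps naturally onto FPGA pipelines and GPU thread blocks, which is what makes the reported 11 ms latency budget plausible on low-power accelerators.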


Electronics, 2021, Vol 10 (4), pp. 517
Author(s): Seong-heum Kim, Youngbae Hwang

Owing to recent advancements in deep learning methods and relevant databases, it is becoming increasingly easy to recognize 3D objects using only RGB images from single viewpoints. This study investigates the major breakthroughs and current progress in deep learning-based monocular 3D object detection. For relatively low-cost data acquisition systems without depth sensors or cameras at multiple viewpoints, we first consider existing databases with 2D RGB photos and their relevant attributes. Based on this simple sensor modality for practical applications, deep learning-based monocular 3D object detection methods that overcome significant research challenges are categorized and summarized. We present the key concepts and detailed descriptions of representative single-stage and multiple-stage detection solutions. In addition, we discuss the effectiveness of the detection models on their baseline benchmarks. Finally, we explore several directions for future research on monocular 3D object detection.
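As background to how such detectors produce 3D outputs from a single image, the sketch below shows the geometric lifting step that many monocular methods share: a network predicts a 2D box centre, a depth, object dimensions, and a yaw angle, and the 3D box is recovered with the pinhole camera model. The function name, the (h, w, l) convention, and the camera-coordinate axes are illustrative assumptions, not the formulation of any specific detector covered by the survey.

```python
import numpy as np

def lift_to_3d_box(K, center_2d, depth, dims, yaw):
    """Back-project a detected 2D object centre into a 3D bounding box.

    K         : (3, 3) camera intrinsic matrix
    center_2d : (u, v) pixel coordinates of the projected 3D box centre
    depth     : estimated distance z of the object along the camera axis
    dims      : (h, w, l) object height, width, length in metres
    yaw       : rotation around the vertical axis in camera coordinates
    Returns the 3D centre and the 8 box corners in camera coordinates.
    """
    u, v = center_2d
    # Pinhole back-projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    center = np.array([x, y, depth])

    h, w, l = dims
    # Axis-aligned corners of the box, centred at the origin.
    xs = np.array([ l,  l,  l,  l, -l, -l, -l, -l]) / 2
    ys = np.array([ h,  h, -h, -h,  h,  h, -h, -h]) / 2
    zs = np.array([ w, -w,  w, -w,  w, -w,  w, -w]) / 2
    corners = np.stack([xs, ys, zs])

    # Rotate around the vertical axis, then translate to the centre.
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[ c, 0, s],
                  [ 0, 1, 0],
                  [-s, 0, c]])
    return center, (R @ corners).T + center
```

Single-stage and multiple-stage detectors differ mainly in how the quantities fed into this lifting step (depth, dimensions, orientation) are estimated, which is why depth estimation accuracy dominates the 3D localization error of monocular methods.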


2015, Vol 22 (1), pp. 64-65
Author(s): Andrea Stevenson Won

2021, Vol 11 (1)
Author(s): Thomas Treal, Philip L. Jackson, Jean Jeuvrey, Nicolas Vignais, Aurore Meugnot

Virtual reality platforms producing interactive and highly realistic characters are being used more and more as a research tool in social and affective neuroscience to better capture both the dynamics of emotion communication and the unintentional and automatic nature of emotional processes. While idle motion (i.e., non-communicative movements) is commonly used to create behavioural realism, its use to enhance the perception of emotion expressed by a virtual character is critically lacking. This study examined the influence of naturalistic (i.e., based on human motion capture) idle motion on two aspects (the perception of other’s pain and affective reaction) of an empathic response towards pain expressed by a virtual character. In two experiments, 32 and 34 healthy young adults were presented with video clips of a virtual character displaying a facial expression of pain while its body was either static (still condition) or animated with natural postural oscillations (idle condition). The participants in Experiment 1 rated the facial pain expression of the virtual human as more intense, and those in Experiment 2 reported being more touched by its pain expression in the idle condition compared to the still condition, indicating a greater empathic response towards the virtual human’s pain in the presence of natural postural oscillations. These findings are discussed in relation to the models of empathy and biological motion processing. Future investigations will help determine to what extent such naturalistic idle motion could be a key ingredient in enhancing the anthropomorphism of a virtual human and making its emotion appear more genuine.


Sensors, 2021, Vol 21 (6), pp. 2144
Author(s): Stefan Reitmann, Lorenzo Neumann, Bernhard Jung

Common Machine-Learning (ML) approaches for scene classification require a large amount of training data. However, for classification of depth sensor data, in contrast to image data, relatively few databases are publicly available, and manual generation of semantically labeled 3D point clouds is an even more time-consuming task. To simplify the training data generation process for a wide range of domains, we have developed the BLAINDER add-on package for the open-source 3D modeling software Blender, which enables a largely automated generation of semantically annotated point-cloud data in virtual 3D environments. In this paper, we focus on the classical depth-sensing techniques Light Detection and Ranging (LiDAR) and Sound Navigation and Ranging (Sonar). Within the BLAINDER add-on, different depth sensors can be loaded from presets, customized sensors can be implemented, and different environmental conditions (e.g., influence of rain, dust) can be simulated. The semantically labeled data can be exported to various 2D and 3D formats and are thus optimized for different ML applications and visualizations. In addition, semantically labeled images can be exported using the rendering functionalities of Blender.
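BLAINDER’s own Python interface is not described in the abstract, so the sketch below only illustrates the underlying mechanism: casting rays against the evaluated scene with Blender’s standard bpy API and tagging each hit with the name of the object it struck as a stand-in semantic label. It assumes a recent Blender version in which Scene.ray_cast takes the dependency graph as its first argument; the scan pattern, output path, and label scheme are illustrative, not the add-on’s actual behaviour.

```python
# Minimal sketch of a LiDAR-like scan by ray casting in Blender's Python API.
# Not the BLAINDER add-on itself; sensor origin, angular ranges, and the use
# of object names as semantic labels are illustrative assumptions.
import math

import bpy
from mathutils import Vector

def scan_scene(origin=Vector((0.0, 0.0, 1.5)),
               h_steps=360, v_steps=16,
               v_min=-0.26, v_max=0.26, max_range=100.0):
    """Cast rays from `origin` and return (hit point, object name) pairs."""
    depsgraph = bpy.context.evaluated_depsgraph_get()
    scene = bpy.context.scene
    points = []
    for i in range(h_steps):
        azimuth = 2.0 * math.pi * i / h_steps
        for j in range(v_steps):
            elevation = v_min + (v_max - v_min) * j / max(v_steps - 1, 1)
            direction = Vector((
                math.cos(elevation) * math.cos(azimuth),
                math.cos(elevation) * math.sin(azimuth),
                math.sin(elevation),
            ))
            hit, location, normal, index, obj, matrix = scene.ray_cast(
                depsgraph, origin, direction, distance=max_range)
            if hit:
                # Use the hit object's name as a stand-in semantic label.
                points.append((tuple(location), obj.name))
    return points

# Example: write the labelled point cloud as a simple ASCII file.
with open("/tmp/scan.xyz", "w") as f:
    for (x, y, z), label in scan_scene():
        f.write(f"{x:.3f} {y:.3f} {z:.3f} {label}\n")
```

Because every returned point carries the identity of the object it hit, class labels come for free from the scene graph, which is the property that makes synthetic point-cloud generation so much cheaper than manual annotation of real scans.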


Author(s): Jordan Sasser, Fernando Montalvo, Rhyse Bendell, P. A. Hancock, Daniel S. McConnell

Prior research has indicated that perception of acceleration may be a direct process. This direct process may be conceptually linked to the ecological approach to visual perception and a further extension of direct social perception. The present study examines the effects of perception of acceleration in virtual reality on participants’ perceived attributes (perceived intelligence and animacy) of a virtual human-like robot agent and perceived agent competitive/cooperativeness. Perceptual judgments were collected after participants experienced each of five different conditions that depended on the participant’s acceleration: mirrored acceleration, faster acceleration, slowed acceleration, varied acceleration resulting in a win, and varied acceleration resulting in a loss. Participants experienced each condition twice in a counterbalanced fashion. The focus of the experiment was to determine whether different accelerations influenced perceptual judgments of the observers. Results suggest that faster acceleration was perceived as more competitive and slower acceleration was reported as low in animacy and perceived intelligence.

