Virtual Human
Recently Published Documents


TOTAL DOCUMENTS

608
(FIVE YEARS 102)

H-INDEX

30
(FIVE YEARS 3)

Heritage ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 1-20
Author(s):  
Nikolaos Partarakis ◽  
Xenophon Zabulis ◽  
Michalis Foukarakis ◽  
Mirodanthi Moutsaki ◽  
Emmanouil Zidianakis ◽  
...  

The accessibility of Cultural Heritage content for the diverse user population visiting Cultural Heritage Institutions and accessing content online has not been thoroughly discussed. Considering the penetration of new digital media into such physical and virtual spaces, a lack of accessibility may result in the exclusion of a large user population. To overcome these emerging barriers, this paper proposes a cost-effective methodology for implementing Virtual Humans that can narrate content in a universally accessible form and act as virtual storytellers in online and on-site Cultural Heritage (CH) experiences. The methodology is rooted in advances in motion-capture technology and in Virtual Human implementation, animation, and multi-device rendering. It is employed in a museum installation at the Chios Mastic Museum, where Virtual Humans present the industrial process by which mastic is processed into chewing gum.


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Zichun Guo ◽  
Zihao Wang ◽  
Xueguang Jin

How to make communication more effective has been emphasised as never before in the artificial intelligence (AI) era. Nowadays, with advances in affective computing and big data, people have generally adapted to constructing social networks through social robots and smartphones. Although these technologies have been widely discussed and used, research on disabled people in the social field is still very limited. In particular, people with facial disabilities, deaf-mute people, and autistic people still face great difficulty when interacting with strangers through online video technology. This project creates a virtual human social system called "Avatar to Person" (ATP), based on artificial intelligence and three-dimensional (3D) simulation technology, with which disabled people can complete tasks such as "virtual face repair" and "simulated voice generation" in order to conduct face-to-face video communication freely and confidently. User tests showed the system to be effective in enhancing the sense of online social participation for people with disabilities. ATP represents a unique area of inquiry and design for disabled people that is categorically different from other types of human-robot interaction.


2021 ◽  
Vol 12 ◽  
Author(s):  
Maryam Saberi ◽  
Steve DiPaola ◽  
Ulysses Bernardet

The attribution of traits plays an important role as a heuristic for how we interact with others. Many psychological models of personality are analytical, in that they derive a classification from reported or hypothesised behaviour. In the work presented here, we follow the opposite approach: our personality model generates behaviour that leads an observer to attribute personality characteristics to the actor. Concretely, the model controls all relevant aspects of non-verbal behaviour, such as gaze, facial expression, gesture, and posture. Embodied in a virtual human, the model can interact realistically with participants in real time. Conceptually, it focuses on the two dimensions of extraversion/introversion and stability/neuroticism. In the model, the personality parameters influence both the internal affective state and the characteristics of behaviour execution. Importantly, the parameters are grounded in empirical findings from the behavioural sciences. To evaluate the model, we conducted two types of studies: first, passive experiments in which participants rated videos showing variants of behaviour driven by different personality parameter configurations; second, presential experiments in which participants interacted with the virtual human over rounds of the Rock-Paper-Scissors game. Our results show that the model effectively conveys the impression of a virtual character's personality to users. Embodying the model in an artificial social agent capable of real-time interactive behaviour is the only way to move from an analytical to a generative approach to understanding personality, and we believe this methodology raises a host of novel research questions in personality theory.
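The abstract describes personality parameters that drive both affective state and behaviour execution. The sketch below illustrates the general idea of such a generative mapping from two personality axes to non-verbal behaviour parameters; the parameter names, directions of influence, and all coefficients are illustrative assumptions, not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class Personality:
    """Personality on two axes, each in [-1, 1]."""
    extraversion: float   # -1 = introverted, +1 = extraverted
    neuroticism: float    # -1 = stable,      +1 = neurotic

def behaviour_params(p: Personality) -> dict:
    """Map personality to non-verbal behaviour execution parameters.

    The directions (larger, more frequent gestures and more gaze contact
    for extraverts; greater affective volatility for neurotics) follow
    common behavioural-science findings, but the coefficients here are
    purely illustrative.
    """
    e, n = p.extraversion, p.neuroticism
    gaze = 0.6 + 0.3 * e - 0.2 * n
    return {
        "gesture_amplitude": 0.5 + 0.4 * e,            # expansiveness of gestures
        "gesture_rate_hz":   0.8 + 0.5 * e,            # how often gestures occur
        "gaze_contact_ratio": min(1.0, max(0.0, gaze)),  # clamped to [0, 1]
        "posture_openness":  0.5 + 0.4 * e - 0.1 * n,
        "affect_volatility": 0.2 + 0.6 * n,            # speed of mood swings
    }
```

In a behaviour realiser, these parameters would then scale the animation curves for gesture, gaze, and posture on each update tick.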


2021 ◽  
Vol 24 (2) ◽  
pp. 1740-1747
Author(s):  
Anton Leuski ◽  
David Traum

NPCEditor is a system for building the natural language processing component of virtual humans capable of engaging a user in spoken dialog on a limited domain. It uses statistical language classification technology to map a user's text input to system responses. NPCEditor provides a user-friendly editor for creating effective virtual humans quickly, and it has been deployed as part of various virtual human systems in several applications.
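The core idea of mapping user text to a fixed set of responses can be sketched as a retrieval classifier. NPCEditor itself uses a statistical cross-language retrieval model trained on linked question/answer pairs; the bag-of-words cosine retriever below is only a minimal stand-in for the same input-to-response mapping, including the off-topic threshold that limited-domain characters need.

```python
import math
from collections import Counter

def _vec(text: str) -> Counter:
    """Crude bag-of-words vector (lowercased whitespace tokens)."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ResponseClassifier:
    """Map user utterances to the best canned response, with a
    fallback when nothing in the domain scores above the threshold."""

    def __init__(self, qa_pairs, threshold=0.2):
        self.pairs = [(_vec(q), r) for q, r in qa_pairs]
        self.threshold = threshold

    def respond(self, utterance: str) -> str:
        v = _vec(utterance)
        score, best = max(((_cosine(v, q), r) for q, r in self.pairs),
                          key=lambda x: x[0])
        return best if score >= self.threshold else "I don't know about that."
```

In an authoring tool like NPCEditor, the question/answer pairs would come from the editor's linked-utterance table rather than being hard-coded.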


2021 ◽  
Vol 2021 ◽  
pp. 293-299
Author(s):  
D.R. Viziteu ◽  
A. Curteza ◽  
M.L. Avadanei

In the past several years, the application of 3D technologies in the textile and clothing design industry has increased considerably and become more accessible to designers and patternmakers. With digitisation in garment engineering and virtual prototyping and modelling techniques becoming more mainstream, a new generation of virtual human models is starting to develop, to meet the demand for protective and functional products designed for specific athletes, such as climbers and mountaineers. Developing garment patterns that minimise discomfort and improve performance under the dynamic body deformations and muscle contractions associated with specific movements requires an improved understanding of the behaviour of the musculoskeletal system. For this study, we explored the possibilities of using existing software packages for virtual prototyping based on human kinematic models for functional clothing.


2021 ◽  
Vol 2021 ◽  
pp. 1-20
Author(s):  
Shahram Payandeh ◽  
Jeffrey Wael

Tracking the movements of a person's body in a natural living environment is a challenging undertaking. Such tracking information can be used to detect the onset of anomalies in movement patterns or as part of a remote monitoring environment, and it can be mapped and visualized using a virtual avatar model of the tracked person. This paper presents an initial experimental study of a commercially available deep-learning body tracking system based on an RGB-D sensor for virtual human model reconstruction. We carried out the study in an indoor environment under natural conditions. To evaluate the tracker, we experimentally analysed its output, a skeleton (stick-figure) data structure, under several conditions, in order to assess its robustness and identify its drawbacks. In addition, we show how the generic model can be mapped for virtual human model reconstruction. We found that the deep-learning tracking approach using an RGB-D sensor is susceptible to various environmental factors, which introduce noise into the estimated locations of the skeleton joints; this, in turn, creates challenges for the subsequent virtual model reconstruction. We present an initial approach for compensating for such noise that yields smoother temporal variation of the joint coordinates in the captured skeleton data, and we explore how the extracted joint positions can be used as part of the virtual human model reconstruction.
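The abstract mentions compensating for noise in the tracked joint coordinates but does not specify the filter. One plausible, minimal approach for smoothing per-joint positions over time, with hold-last-value handling for frames where a joint detection drops out, is an exponential moving average; the class below is a sketch of that idea, not the paper's method.

```python
class JointSmoother:
    """Exponential smoothing of per-joint 3D positions from an RGB-D
    body tracker's skeleton stream."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # 0 < alpha <= 1; lower = smoother, more lag
        self.state = {}      # joint name -> last smoothed (x, y, z)

    def update(self, frame):
        """frame: dict joint -> (x, y, z), or None when detection failed.
        Returns the smoothed frame."""
        out = {}
        for joint, pos in frame.items():
            prev = self.state.get(joint)
            if pos is None:        # joint missing this frame: hold last estimate
                out[joint] = prev
            elif prev is None:     # first observation: take it as-is
                out[joint] = pos
            else:                  # blend new observation with running estimate
                a = self.alpha
                out[joint] = tuple(a * c + (1 - a) * p
                                   for c, p in zip(pos, prev))
            if out[joint] is not None:
                self.state[joint] = out[joint]
        return out
```

Feeding each skeleton frame through `update` before retargeting it onto the avatar suppresses per-frame jitter at the cost of some latency, which `alpha` trades off.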


2021 ◽  
Author(s):  
Joy O. Egede ◽  
Dominic Price ◽  
Deepa B. Krishnan ◽  
Shashank Jaiswal ◽  
Natasha Elliot ◽  
...  

2021 ◽  
Vol 8 (9) ◽  
pp. 210537
Author(s):  
Joan Llobera ◽  
Alejandro Beacco ◽  
Ramon Oliva ◽  
Gizem Şenel ◽  
Domna Banakou ◽  
...  

Virtual reality applications depend on multiple factors, for example, quality of rendering, responsiveness, and interfaces. To evaluate the relative contributions of different factors to the quality of experience, post-exposure questionnaires are typically used. Questionnaires are problematic because the questions can frame how participants think about their experience, and they cannot easily account for non-additivity among the various factors. Traditional experimental design can incorporate non-additivity, but beyond two factors it requires a large factorial design table. Here, we extend a previous method by introducing a reinforcement learning (RL) agent that proposes possible changes to factor levels during the exposure, which the participant either accepts or rejects. Eventually, the RL agent converges on a policy under which no further proposed changes are accepted. An experiment was carried out with 20 participants in which four binary factors were considered. A consistent configuration of factors emerged: participants preferred a teleportation technique for navigation (compared to walking-in-place), a full-body representation (rather than hands only), virtual human characters that responded to them (compared to ones that ignored them), and realistic rather than cartoon rendering. We propose this new method for evaluating participant choices and discuss various extensions.
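The propose-accept-converge protocol described above can be sketched as a loop over binary factors that terminates when a whole pass of proposals is rejected. The study uses an RL agent to decide which change to propose next; the round-robin sweep and the simulated `accepts` callback below are simplifications assumed purely to keep the sketch short and deterministic.

```python
def elicit_configuration(n_factors, accepts, start=None):
    """Find a participant's preferred configuration of binary factors.

    `accepts(current, proposed) -> bool` stands in for the participant's
    in-exposure accept/reject decision on a proposed change. Convergence
    means one full sweep of single-factor flips with nothing accepted.
    """
    config = list(start) if start else [0] * n_factors
    changed = True
    while changed:
        changed = False
        for i in range(n_factors):
            proposal = config.copy()
            proposal[i] ^= 1              # flip one binary factor
            if accepts(config, proposal):
                config = proposal
                changed = True
    return config

# A simulated participant who prefers every factor at level 1
# (e.g. teleportation, full body, responsive characters, realistic render):
prefers_ones = lambda cur, prop: sum(prop) > sum(cur)
```

With real participants the `accepts` decision comes from an in-headset prompt during exposure, and the proposal order would be chosen by the learned policy rather than a fixed sweep.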


Author(s):  
Mengshan Yang ◽  
Yan Li ◽  
Jun Chen