Designing an Artificial Agent for Cognitive Apprenticeship Learning of Elevator Pitch in Virtual Reality

Author(s):  
Zhenjie Zhao ◽  
Xiaojuan Ma


Author(s):  
Tina Parscal

Cognitive apprenticeship (Collins, Brown, & Newman, 1989) is an instructional framework that uses the underlying principles of traditional apprenticeship learning. The cognitive apprenticeship framework consists of the dimensions of content, methods, sequence, and sociology. It focuses specifically on instructional modeling, coaching, and scaffolding. Through modeling, learners see expert facilitation techniques in a realistic setting. According to Schulte, Magenheim, Niere, and Schafer (2003), “the key issue is to make the problem solving process and the expert’s thinking visible to the learner” (p. 271). During coaching, learners receive guidance while they attempt to execute tasks and demonstrate skills. Scaffolding, the process of supporting learners while they acquire new skills, is provided and then faded as learners begin to demonstrate mastery of these new skills. These techniques are employed in situated learning environments. Further, cognitive apprenticeship sets out to (a) identify an expert’s problem-solving and critical-thinking processes and make them visible to learners, (b) situate abstract tasks in authentic contexts, and (c) vary the diversity of situations in which problem solving may occur and articulate the common aspects in order to increase the potential for learning transfer (Collins, Brown, & Newman, 1989).


PeerJ ◽  
2020 ◽  
Vol 8 ◽  
pp. e9843
Author(s):  
James Hirose ◽  
Atsushi Nishikawa ◽  
Yosuke Horiba ◽  
Shigeru Inui ◽  
Todd C. Pataky

Uncanny valley research has shown that human likeness is an important consideration when designing artificial agents. It has separately been shown that artificial agents exhibiting human-like kinematics can elicit positive perceptual responses. However, the kinematic characteristics underlying that perception have not been elucidated. This paper proposes kinematic jerk amplitude as a candidate metric for kinematic human likeness, and aims to determine whether a perceptual optimum exists over a range of jerk values. We created minimum-jerk two-digit grasp kinematics in a prosthetic hand model, then added different amplitudes of temporally smooth noise to yield a variety of animations involving different total jerk levels, ranging from maximally smooth to highly jerky. Subjects indicated their perceptual affinity for these animations by simultaneously viewing two different animations side by side, first using a laptop, then separately within a virtual reality (VR) environment. Results suggest that (a) subjects generally preferred smoother kinematics, (b) subjects exhibited a small preference for rougher-than-minimum-jerk kinematics in the laptop experiment, and (c) the preference for rougher-than-minimum-jerk kinematics was amplified in the VR experiment. These results suggest that non-maximally smooth kinematics may be perceptually optimal in robots and other artificial agents.
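
As a rough illustration of the stimulus-generation idea described in this abstract, the sketch below builds a minimum-jerk closure profile (the standard Flash–Hogan quintic), perturbs it with temporally smooth noise of increasing amplitude, and reports a discrete total-jerk cost for each variant. The amplitude values, smoothing width, and Gaussian-filtered noise generator are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch (assumed parameters, not the study's exact pipeline):
# generate a minimum-jerk trajectory, add temporally smooth noise of
# varying amplitude, and compare the resulting total jerk.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def minimum_jerk(x0, xf, duration, n=200):
    """Flash-Hogan minimum-jerk position profile from x0 to xf."""
    tau = np.linspace(0.0, 1.0, n)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def total_jerk(x, duration):
    """Discrete jerk cost: integral of squared third derivative."""
    dt = duration / (len(x) - 1)
    jerk = np.diff(x, n=3) / dt**3
    return np.sum(jerk**2) * dt

def add_smooth_noise(x, amplitude, sigma=5, rng=None):
    """Add low-pass-filtered (temporally smooth) noise scaled to a peak amplitude."""
    rng = rng or np.random.default_rng(0)
    noise = gaussian_filter1d(rng.standard_normal(len(x)), sigma)
    noise *= amplitude / (np.abs(noise).max() + 1e-12)
    return x + noise

baseline = minimum_jerk(x0=0.0, xf=0.08, duration=1.0)   # hypothetical 8 cm digit closure
for amp in [0.0, 0.002, 0.005, 0.01]:                     # hypothetical noise amplitudes (m)
    perturbed = add_smooth_noise(baseline, amp)
    print(f"noise amplitude {amp:.3f} m -> total jerk {total_jerk(perturbed, 1.0):.3g}")
```

In such a setup, pairs of animations generated at different noise amplitudes could be shown side by side, with the jerk cost serving as the independent variable for the perceptual comparison.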


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Martina Fusaro ◽  
Matteo P. Lisi ◽  
Gaetano Tieri ◽  
Salvatore Maria Aglioti

Embodying an artificial agent through immersive virtual reality (IVR) may lead to feeling vicariously somatosensory stimuli on one’s body that are in fact never delivered. To explore whether vicarious touch in IVR reflects the basic individual and social features of real-life interpersonal interactions, we tested heterosexual men/women and gay men/lesbian women reacting subjectively and physiologically to the observation of a gender-matched virtual body being touched on intimate taboo zones (such as the genitalia) by male and female avatars. All participants rated caresses on the taboo zones of their embodied avatar as the most erogenous. Crucially, heterosexual men/women and gay men/lesbian women rated taboo touches as most erogenous when delivered by an avatar of the opposite and of the same gender, respectively. Skin conductance was maximal when taboo touches were delivered by female avatars. Our study shows that IVR may trigger realistic experiences and ultimately allow the direct exploration of sensitive societal and individual issues that can otherwise be explored only through imagination.


2018 ◽  
Author(s):  
Ruohan Zhang ◽  
Shun Zhang ◽  
Matthew H. Tong ◽  
Yuchen Cui ◽  
Constantin A. Rothkopf ◽  
...  

Although a standard reinforcement learning model can capture many aspects of reward-seeking behaviors, it may not be practical for modeling natural human behaviors because of the richness of dynamic environments and limitations in cognitive resources. We propose a modular reinforcement learning model that addresses these factors. Based on this model, a modular inverse reinforcement learning algorithm is developed to estimate both the rewards and the discount factors from human behavioral data, which allows predictions of human navigation behaviors in virtual reality with high accuracy across different subjects and different tasks. Complex human navigation trajectories in novel environments can be reproduced by an artificial agent based on the modular model. This model provides a strategy for estimating the subjective value of actions and how they influence sensory-motor decisions in natural behavior.

Author summary: It is generally agreed that human actions can be formalized within the framework of statistical decision theory, which specifies a cost function for action choices, and that the intrinsic value of actions is controlled by the brain’s dopaminergic reward machinery. Given behavioral data, the underlying subjective reward value for an action can be estimated through a machine learning technique called inverse reinforcement learning, making it an attractive method for studying human reward-seeking behaviors. Standard reinforcement learning methods were developed for artificial intelligence agents and incur too much computation to be a viable model for real-time human decision making. We propose an approach called modular reinforcement learning that decomposes a complex task into independent decision modules. This model includes a frequently overlooked variable called the discount factor, which controls the degree of impulsiveness in seeking future reward. We develop an algorithm called modular inverse reinforcement learning that estimates both the reward and the discount factor. We show that modular reinforcement learning may be a useful model for natural navigation behaviors. The estimated rewards and discount factors explain human walking-direction decisions in a virtual-reality environment, and can be used to train an artificial agent that accurately reproduces human navigation trajectories.
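
The modular decomposition described here can be sketched informally: each sub-task module carries its own reward weight and discount factor, and the composite action value is the sum of the per-module values. The module names, the distance-based value rule, and all numeric parameters below are illustrative assumptions; the inverse-RL step that fits these parameters to behavioral data is not shown.

```python
# Minimal sketch of a modular value decomposition (illustrative only; not the
# paper's fitted model). Each module has its own reward weight and discount
# factor, and action values are summed across modules.

class Module:
    def __init__(self, reward_weight, discount):
        self.w = reward_weight      # subjective reward attached to this sub-goal
        self.gamma = discount       # discount factor: how far ahead the module looks

    def q_value(self, steps_to_subgoal_after_action):
        # Simple distance-based value: sub-goal reward discounted by remaining steps.
        return self.w * (self.gamma ** steps_to_subgoal_after_action)

def modular_q(modules, steps_after_action):
    """Composite Q(s, a): sum of per-module values (the modular assumption)."""
    return sum(m.q_value(d) for m, d in zip(modules, steps_after_action))

# Hypothetical navigation modules: follow path, collect targets, avoid obstacles.
modules = [Module(1.0, 0.9), Module(0.6, 0.7), Module(-0.8, 0.95)]

# Candidate actions, each with the per-module steps-to-sub-goal it would produce.
actions = {"left": [3, 1, 4], "straight": [1, 2, 2], "right": [2, 4, 1]}
best = max(actions, key=lambda a: modular_q(modules, actions[a]))
print("chosen action:", best)
```

In an inverse-RL setting, the reward weights and discount factors above would be treated as unknowns and estimated from observed action choices rather than specified by hand.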

