adaptive condition
Recently Published Documents

TOTAL DOCUMENTS: 21 (five years: 8)
H-INDEX: 6 (five years: 1)

2022 · Vol 11 (1) · pp. 1-17
Author(s): Alessia Vignolo, Henry Powell, Francesco Rea, Alessandra Sciutti, Luke Mcellin, et al.

We tested the hypothesis that, if a robot apparently invests effort in teaching a new skill to a human participant, the human participant will reciprocate by investing more effort in teaching the robot a new skill, too. To this end, we devised a scenario in which the iCub and a human participant alternated in teaching each other new skills. In the Adaptive condition of the robot teaching phase, the iCub slowed down its movements when repeating a demonstration for the human learner, whereas in the Unadaptive condition it sped the movements up when repeating the demonstration. In a subsequent participant teaching phase, human participants were asked to give the iCub a demonstration, and then to repeat it if the iCub had not understood. We predicted that in the Adaptive condition, participants would reciprocate the iCub’s adaptivity by investing more effort to slow down their movements and to increase segmentation when repeating their demonstration. The results showed that this was true when participants experienced the Adaptive condition after the Unadaptive condition, but not when the order was reversed, indicating that participants were particularly sensitive to changes in the iCub’s level of commitment over the course of the experiment.


Author(s): Tor Finseth, Michael C. Dorneich, Nir Keren, Warren Franke, Stephen Vardeman, et al.

Astronauts operate in an environment with multiple hazards that can develop into life-threatening emergency situations. Managing stress in emergencies may consume cognitive resources and lead to diminished performance. Stress training aims to maintain performance under stress by methodically increasing stressor levels to build inoculation against stress. An adaptive virtual reality (VR) training system was developed that detects stress in real time by applying machine learning to psychophysiological responses. Using a VR simulation of a spaceflight emergency fire, stress classifications were used to trigger adaptations of the VR environmental stressors (e.g., smoke, alarms, flashing lights), with the goal of maintaining a manageable level of stress during training. Fifty-seven healthy subjects underwent task training over eight trials: with adaptive stressor adjustment (adaptive, n=19), with predetermined gradual increases in stressors (graduated, n=18), or with constant low-level stressors (skill-only, n=20). Stress responses were measured through heart rate, heart rate variability (i.e., root mean square of successive differences (RMSSD) and the low-frequency to high-frequency (LF/HF) ratio), and task performance (distance from fire). Heart rate decreased and RMSSD increased pre-post training for all experimental conditions. The LF/HF ratio decreased pre-post training for the adaptive condition, but not in the other conditions. These results suggest that stress decreased in all conditions, but most markedly in the adaptive condition. Task performance showed a marginal increase across trials for the adaptive condition. Preliminary results suggest that training with the adaptive stress system better prepares individuals to respond to stressors than skill-only or graduated training.
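As a rough illustration of the closed-loop adaptation idea described in this abstract, the minimal Python sketch below computes RMSSD from a series of RR intervals and nudges a stressor level up or down based on a binary stress classification. The function names, the RMSSD threshold, and the step size are hypothetical illustrations, not the system reported in the paper.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals (ms)."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def adapt_stressor_level(current_level, stress_detected, max_level=5):
    """Lower environmental stressors (smoke, alarms, flashing lights) when the
    classifier reports high stress; raise them when stress is low, so the
    trainee stays near a manageable stress level."""
    if stress_detected:
        return max(current_level - 1, 0)
    return min(current_level + 1, max_level)

# Example: a calm trainee (high RMSSD) gets a slightly harder environment.
rr = [812, 790, 805, 830, 798, 815]         # RR intervals in milliseconds
stress_detected = rmssd(rr) < 20.0          # hypothetical threshold
new_level = adapt_stressor_level(2, stress_detected)
```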


2021 · pp. 1-20
Author(s): Jeevithashree DV, Puneet Jain, Abhishek Mukhopadhyay, Kamal Preet Singh Saluja, Pradipta Biswas

BACKGROUND: Users with Severe Speech and Motor Impairment (SSMI) often communicate through a communication chart using eye gaze or limited hand movement, with caretakers interpreting their communication intent. Significant research has already been conducted to automate this communication through electronic means. Developing electronic user interfaces and interaction techniques for users with SSMI poses significant challenges: research on their ocular parameters found that such users suffer from nystagmus and strabismus, which limits the number of elements that can be placed on a computer screen. This paper presents an optimized eye-gaze-controlled virtual keyboard for English with an adaptive dwell time feature for users with SSMI.
OBJECTIVE: To present an optimized eye-gaze-controlled English virtual keyboard that follows both static and dynamic adaptation processes. The virtual keyboard can automatically adapt to reduce eye gaze movement distance and the dwell time required for selection, helping users with SSMI type better without any intervention from an assistant.
METHODS: Before designing the virtual keyboard, we undertook a pilot study to identify the screen region most comfortable for users with SSMI to operate. We then proposed an optimized two-level English virtual keyboard layout generated by a genetic algorithm as the static adaptation process, followed by a dynamic adaptation process that tracks users’ interaction and reduces dwell time using a Markov-model-based algorithm. Finally, we integrated the virtual keyboard into a web-based interactive dashboard that visualizes real-time COVID-19 data.
RESULTS: Using the proposed virtual keyboard layout for English, the average task completion time for users with SSMI was 39.44 seconds in the adaptive condition and 29.52 seconds in the non-adaptive condition. Overall typing speed was 16.9 letters per minute (lpm) for able-bodied users and 6.6 lpm for users with SSMI, without any word completion or prediction features. A case study with an elderly participant with SSMI found a typing speed of 2.70 words per minute (wpm) and 14.88 lpm after 6 months of practice.
CONCLUSIONS: With the proposed English virtual keyboard layout, the adaptive system increased typing speed statistically significantly for able-bodied users compared with the non-adaptive version, while for 6 users with SSMI, task completion time was 8.8% lower in the adaptive version than in the non-adaptive one. Additionally, the proposed layout was successfully integrated into a web-based interactive visualization dashboard, making it accessible to users with SSMI.
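To make the dynamic adaptation step more concrete, here is a minimal Python sketch of a first-order Markov-style dwell-time reducer: it counts observed key-to-key transitions and shortens the dwell time for likely next keys. The class name, timing constants, and the linear mapping from transition probability to dwell time are hypothetical illustrations, not the algorithm reported in the paper.

```python
from collections import defaultdict

class AdaptiveDwell:
    """Sketch of dwell-time adaptation driven by a first-order Markov model
    of key-to-key transitions (all constants are hypothetical)."""

    def __init__(self, base_dwell_ms=1000, min_dwell_ms=400):
        self.base = base_dwell_ms
        self.min = min_dwell_ms
        self.counts = defaultdict(lambda: defaultdict(int))  # prev_key -> next_key -> count
        self.prev = None

    def record_selection(self, key):
        """Update transition counts after the user selects a key."""
        if self.prev is not None:
            self.counts[self.prev][key] += 1
        self.prev = key

    def dwell_for(self, key):
        """Dwell time for a candidate key: more probable transitions from the
        previously selected key get proportionally shorter dwell times."""
        if self.prev is None or not self.counts[self.prev]:
            return self.base
        total = sum(self.counts[self.prev].values())
        p = self.counts[self.prev][key] / total
        return max(self.min, int(self.base - p * (self.base - self.min)))

# Example: after typing "t" then "h" a few times, "h" becomes faster after "t".
kb = AdaptiveDwell()
for _ in range(3):
    kb.record_selection("t")
    kb.record_selection("h")
kb.prev = "t"
print(kb.dwell_for("h"))   # shorter than the 1000 ms base dwell
```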


Open Mind · 2020 · Vol 4 · pp. 88-101
Author(s): Matthew Setzler, Robert Goldstone

Joint action (JA) is ubiquitous in our cognitive lives. From basketball teams to teams of surgeons, humans often coordinate with one another to achieve some common goal. Idealized laboratory studies of group behavior have begun to elucidate basic JA mechanisms, but little is understood about how these mechanisms scale up in more sophisticated and open-ended JA that occurs in the wild. We address this gap by examining coordination in a paragon domain for creative joint expression: improvising jazz musicians. Coordination in jazz music subserves an aesthetic goal: the generation of a collective musical expression comprising coherent, highly nuanced musical structure (e.g., rhythm, harmony). In our study, dyads of professional jazz pianists improvised in a “coupled,” mutually adaptive condition, and an “overdubbed” condition that precluded mutual adaptation, as occurs in common studio recording practices. Using a model of musical tonality, we quantify the flow of rhythmic and harmonic information between musicians as a function of interaction condition. Our analyses show that mutually adapting dyads achieve greater temporal alignment and produce more consonant harmonies. These musical signatures of coordination were preferred by independent improvisers and naive listeners, who gave higher quality ratings to coupled interactions despite being blind to condition. We present these results and discuss their implications for music technology and JA research more generally.
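As a rough illustration of how harmonic consonance between two players’ note streams might be quantified, the short Python sketch below scores all simultaneous pitch pairs with a lookup table over pitch-class intervals. The table values and the function are hypothetical; the paper’s actual tonality model is not reproduced here.

```python
# Hypothetical consonance weights for pitch-class intervals (0-11 semitones).
CONSONANCE = {0: 1.0, 1: 0.1, 2: 0.4, 3: 0.7, 4: 0.7, 5: 0.8,
              6: 0.2, 7: 0.9, 8: 0.6, 9: 0.6, 10: 0.4, 11: 0.1}

def pairwise_consonance(notes_a, notes_b):
    """Mean consonance over all simultaneous note pairs between two players,
    with notes given as MIDI pitch numbers."""
    scores = [CONSONANCE[abs(a - b) % 12] for a in notes_a for b in notes_b]
    return sum(scores) / len(scores) if scores else 0.0

# Example: a C major triad against a G in another register.
print(pairwise_consonance([60, 64, 67], [55]))
```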


2020
Author(s): Matt Setzler, Robert Goldstone

Joint action (JA) is ubiquitous in our cognitive lives. From basketball teams to teams of surgeons, humans often coordinate with one another to achieve some common goal. Despite this ubiquity, the individual mechanisms and group-level dynamics of complex, sophisticated JA are poorly understood. We examine coordination in a paragon domain for creative joint expression: improvising jazz musicians. Coordination in jazz music is improvised and subserves an aesthetic goal: the generation of a collective musical expression comprising coherent, highly nuanced musical structure (e.g., rhythm, harmony). In this study, dyads of professional jazz pianists improvised in a "coupled", mutually adaptive condition and in an "overdubbed" condition that precluded mutual adaptation, as occurs in common studio recording practices. Using a model of musical tonality, we quantify the flow of rhythmic and harmonic information between musicians as a function of interaction condition, and show that mutually responsive dyads produce more consonant harmonies, an effect that strengthens over the course of an improvised performance. These musical signatures of coordination were paralleled in the subjective experience of the improvisers, who preferred coupled trials despite being blind to condition. We present these results and discuss their implications for music technology and JA research more generally.


2018 · Vol 177 · pp. 592-604
Author(s): Yanhe Xu, Yang Zheng, Yi Du, Wen Yang, Xuyi Peng, et al.

2018 · Vol 69 (4) · pp. 379-385
Author(s): Koichi YOKOYAMA, Yoshiro HIRASAKI, Hideki OKAMOTO, Akito HISANAGA, Shingo ONO, et al.
