Consideration on Adaptive Condition of Soshikokito

2018 ◽  
Vol 69 (4) ◽  
pp. 379-385
Author(s):  
Koichi YOKOYAMA ◽  
Yoshiro HIRASAKI ◽  
Hideki OKAMOTO ◽  
Akito HISANAGA ◽  
Shingo ONO ◽  
...  
2022 ◽  
Vol 11 (1) ◽  
pp. 1-17
Author(s):  
Alessia Vignolo ◽  
Henry Powell ◽  
Francesco Rea ◽  
Alessandra Sciutti ◽  
Luke Mcellin ◽  
...  

We tested the hypothesis that, if a robot apparently invests effort in teaching a new skill to a human participant, the human participant will reciprocate by investing more effort in teaching the robot a new skill, too. To this end, we devised a scenario in which the iCub and a human participant alternated in teaching each other new skills. In the Adaptive condition of the robot teaching phase, the iCub slowed down its movements when repeating a demonstration for the human learner, whereas in the Unadaptive condition it sped the movements up when repeating the demonstration. In a subsequent participant teaching phase, human participants were asked to give the iCub a demonstration, and then to repeat it if the iCub had not understood. We predicted that in the Adaptive condition, participants would reciprocate the iCub’s adaptivity by investing more effort to slow down their movements and to increase segmentation when repeating their demonstration. The results showed that this was true when participants experienced the Adaptive condition after the Unadaptive condition and not when the order was inverted, indicating that participants were particularly sensitive to the changes in the iCub’s level of commitment over the course of the experiment.


2020 ◽  
Author(s):  
Matt Setzler ◽  
Robert Goldstone

Joint action (JA) is ubiquitous in our cognitive lives. From basketball teams to teams of surgeons, humans often coordinate with one another to achieve some common goal. Despite this ubiquity, the individual mechanisms and group-level dynamics of complex, sophisticated JA are poorly understood. We examine coordination in a paragon domain for creative joint expression: improvising jazz musicians. Coordination in jazz music is improvised and subserves an aesthetic goal: the generation of a collective musical expression comprising coherent, highly nuanced musical structure (e.g. rhythm, harmony). In this study, dyads of professional jazz pianists improvised in a "coupled", mutually adaptive condition, and an "overdubbed" condition which precluded mutual adaptation, as occurs in common studio recording practices. Using a model of musical tonality, we quantify the flow of rhythmic and harmonic information between musicians as a function of interaction condition, and show that mutually responsive dyads produce more consonant harmonies, an ability which increases throughout the course of improvised performance. These musical signatures of coordination were paralleled in the subjective experience of improvisers, who preferred coupled trials despite being blind to condition. We present these results and discuss their implications for music technology and JA research more generally.
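The kind of directed information flow the abstract describes can be approximated for symbolized note sequences with a transfer-entropy estimate. The sketch below is illustrative only: it assumes a history length of 1 and plug-in probability estimates, and is a stand-in for, not a reproduction of, the authors’ tonality-based model.

```python
from collections import Counter
from math import log2

def transfer_entropy(src, dst):
    """Estimate transfer entropy T(src -> dst) in bits for two
    equal-length symbolic sequences, with history length 1 and
    plug-in (maximum-likelihood) probability estimates."""
    assert len(src) == len(dst)
    triples = Counter()   # (dst_{t+1}, dst_t, src_t)
    pairs_dd = Counter()  # (dst_{t+1}, dst_t)
    pairs_ds = Counter()  # (dst_t, src_t)
    singles = Counter()   # dst_t
    n = len(dst) - 1
    for t in range(n):
        triples[(dst[t + 1], dst[t], src[t])] += 1
        pairs_dd[(dst[t + 1], dst[t])] += 1
        pairs_ds[(dst[t], src[t])] += 1
        singles[dst[t]] += 1
    te = 0.0
    for (d1, d0, s0), c in triples.items():
        p_joint = c / n                              # p(d1, d0, s0)
        p_cond_both = c / pairs_ds[(d0, s0)]         # p(d1 | d0, s0)
        p_cond_self = pairs_dd[(d1, d0)] / singles[d0]  # p(d1 | d0)
        te += p_joint * log2(p_cond_both / p_cond_self)
    return te
```

For example, a target sequence that simply echoes the source one step later yields positive transfer entropy, while a constant target yields zero.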


2018 ◽  
Vol 177 ◽  
pp. 592-604 ◽  
Author(s):  
Yanhe Xu ◽  
Yang Zheng ◽  
Yi Du ◽  
Wen Yang ◽  
Xuyi Peng ◽  
...  

2021 ◽  
pp. 1-20
Author(s):  
Jeevithashree DV ◽  
Puneet Jain ◽  
Abhishek Mukhopadhyay ◽  
Kamal Preet Singh Saluja ◽  
Pradipta Biswas

BACKGROUND: Users with Severe Speech and Motor Impairment (SSMI) often use a communication chart through eye gaze or limited hand movement, and caretakers interpret their communication intent. Significant research has already been conducted to automate this communication through electronic means. Developing electronic user interfaces and interaction techniques for users with SSMI poses significant challenges, as research on their ocular parameters found that such users suffer from Nystagmus and Strabismus, limiting the number of elements that can be placed on a computer screen. This paper presents an optimized eye-gaze-controlled virtual keyboard for the English language with an adaptive dwell-time feature for users with SSMI. OBJECTIVE: To present an optimized eye-gaze-controlled English virtual keyboard that follows both static and dynamic adaptation processes. The virtual keyboard can automatically adapt to reduce eye-gaze movement distance and the dwell time for selection, helping users with SSMI type better without the intervention of an assistant. METHODS: Before designing the virtual keyboard, we undertook a pilot study to identify the screen region that would be most comfortable for users with SSMI to operate. We then proposed an optimized two-level English virtual keyboard layout, derived with a genetic algorithm in a static adaptation process, followed by a dynamic adaptation process that tracks users’ interaction and reduces dwell time with a Markov-model-based algorithm. Further, we integrated the virtual keyboard into a web-based interactive dashboard that visualizes real-time COVID-19 data. RESULTS: Using the proposed virtual keyboard layout for the English language, the average task completion time for users with SSMI was 39.44 seconds in the adaptive condition and 29.52 seconds in the non-adaptive condition. Overall typing speed was 16.9 lpm (letters per minute) for able-bodied users and 6.6 lpm for users with SSMI, without any word completion or prediction features.
A case study with an elderly participant with SSMI found a typing speed of 2.70 wpm (words per minute) and 14.88 lpm (letters per minute) after 6 months of practice. CONCLUSIONS: With the proposed English virtual keyboard layout, the adaptive system increased typing speed statistically significantly for able-bodied users compared with the non-adaptive version, while for the 6 users with SSMI, task completion time was 8.8% lower in the adaptive version than in the non-adaptive one. Additionally, the proposed layout was successfully integrated into a web-based interactive visualization dashboard, making it accessible to users with SSMI.
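The static adaptation step, optimizing key placement with a genetic algorithm, can be sketched as follows. Everything here is an illustrative assumption rather than the paper’s actual model: the letter frequencies, the 3×3 grid, the Manhattan-distance cost from screen centre, and the mutation-only elitist GA are all stand-ins.

```python
import random
from itertools import product

# Hypothetical relative frequencies for the nine most common English
# letters (illustration only; the paper's layout model is not published here).
FREQ = {"e": .127, "t": .091, "a": .082, "o": .075, "i": .070,
        "n": .067, "s": .063, "h": .061, "r": .060}
LETTERS = list(FREQ)
GRID = list(product(range(3), range(3)))  # 3x3 grid of key positions
CENTER = (1, 1)

def cost(layout):
    """Expected gaze travel: frequency-weighted Manhattan distance
    of each letter's key from the screen centre (lower is better)."""
    return sum(FREQ[letter] * (abs(r - CENTER[0]) + abs(c - CENTER[1]))
               for letter, (r, c) in zip(layout, GRID))

def mutate(layout):
    """Swap two keys -- the only variation operator in this sketch."""
    a, b = random.sample(range(len(layout)), 2)
    child = layout[:]
    child[a], child[b] = child[b], child[a]
    return child

def optimize(generations=200, pop_size=30, seed=0):
    """Elitist GA: keep the fittest half, refill with mutated survivors."""
    random.seed(seed)
    pop = [random.sample(LETTERS, len(LETTERS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=cost)

best = optimize()
```

A run of `optimize()` pushes the most frequent letters toward the centre of the grid, which is the qualitative behaviour the static adaptation aims for.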
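The dynamic adaptation step, reducing dwell time when the next selection is predictable, can be sketched with a first-order Markov model over key selections. The class name, the timing constants, and the linear interpolation rule below are illustrative assumptions, not the paper’s algorithm.

```python
from collections import defaultdict

class DwellAdapter:
    """Sketch of Markov-model dwell-time adaptation: a key that is
    highly predictable from the previous selection gets a shorter
    dwell time, bounded below by a floor."""

    def __init__(self, base_ms=1000, floor_ms=400):
        self.base = base_ms    # dwell time for unpredicted keys
        self.floor = floor_ms  # minimum dwell time
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None       # previously selected key

    def record(self, key):
        """Log a selection, updating the first-order transition counts."""
        if self.prev is not None:
            self.counts[self.prev][key] += 1
        self.prev = key

    def dwell_for(self, key):
        """Dwell time (ms) required to select `key` given the previous one."""
        if self.prev is None or not self.counts[self.prev]:
            return self.base
        total = sum(self.counts[self.prev].values())
        p = self.counts[self.prev][key] / total
        # Linear interpolation: p = 0 -> base, p = 1 -> floor.
        return max(self.floor, self.base - p * (self.base - self.floor))
```

With this rule, a transition the user has always taken before (e.g. "q" followed by "u") drops to the floor dwell time, while unseen transitions keep the full base dwell time.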

