The Educational Modeling of a Collaborative Game using MOT+LD

Author(s):  
G. Paquette ◽  
M. Leonard
2020 ◽  
Author(s):  
Ma. Mercedes T. Rodrigo ◽  
Jaclyn L. Ocumpaugh ◽  
Danna Aduna ◽  
Emily Tabanao ◽  
Kaśka Porayska-Pomsta ◽  
...  

Filipino learners’ lack of English language proficiency is a major barrier to higher education opportunities and participation in high-value industries. Computer-based learning systems have the potential to increase educational quality, equity, and efficacy in the Global South. However, a key challenge is to design systems that are developmentally and socio-culturally appropriate and engaging for the target learners. In this paper, we describe the design, development, and preliminary testing of Ibigkas!, a collaborative, mobile phone-based game designed to provide phonemic awareness and vocabulary building support to Filipino learners aged 10-12.

Cite as: Rodrigo, M.M.T., Ocumpaugh, J., Diy, W.D., Moreno, M., De Santos, M., Cargo, N., Lacson, J., Santos, D., Aduna, D., Beraquit, J.I., Bringula, R., Caparros, M.R.M., Choi, A.T., Ladan, S., Lim, J., Manahan, D.M.A., Paterno, J.M.G., Saturinas, K., Tabanao, E., Tablatin, C., Torres, J., Porayska-Pomsta, K., Olatunji, I., & Luckin, R. (2019). Ibigkas!: The Iterative Development of a Mobile Collaborative Game for Building Phonemic Awareness and Vocabulary. Computer-Based Learning in Context, 1(1), 28-42. DOI: 10.5281/zenodo.4057282
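The abstract does not describe Ibigkas!'s actual round mechanics or scoring. Purely as an illustrative sketch of how a phonemic-awareness round could be scored (the function names and the crude spelling-based suffix heuristic are assumptions, not from the paper; real phonemic scoring would operate on phoneme transcriptions rather than orthography):

```python
# Illustrative sketch only: a crude rhyme check based on shared word endings.
# Real phonemic-awareness scoring would use phoneme transcriptions, not spelling.

def crude_rhyme_match(word_a: str, word_b: str, suffix_len: int = 3) -> bool:
    """Treat two words as rhyming if their last `suffix_len` letters match."""
    a, b = word_a.lower(), word_b.lower()
    if a == b:
        return False  # the same word does not count as a rhyme
    return a[-suffix_len:] == b[-suffix_len:]

def score_round(target: str, answers: list) -> int:
    """Count how many of a team's answers rhyme with the target word."""
    return sum(crude_rhyme_match(target, ans) for ans in answers)

print(score_round("light", ["night", "bright", "lamp"]))  # 2
```

A deployed version would swap the suffix heuristic for a pronunciation dictionary lookup, which is why the function is labeled "crude" here.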


2008 ◽  
Vol 9 (2) ◽  
pp. 179-203 ◽  
Author(s):  
Christoph Bartneck ◽  
Juliane Reichenbach ◽  
Julie Carpenter

This paper presents two studies that investigate how people praise and punish robots in a collaborative game scenario. In a first study, subjects played a game together with humans, computers, and anthropomorphic and zoomorphic robots. The different partners and the game itself were presented on a computer screen. Results showed that praise and punishment were used the same way for computer and human partners. Yet robots, which are essentially computers with a different embodiment, were treated differently. Very machine-like robots were treated just like the computer and the human; robots very high on anthropomorphism/zoomorphism were praised more and punished less. However, barely any of the participants believed that they actually played together with a robot.

After this first study, we refined the method and also tested if the presence of a real robot, in comparison to a screen representation, would influence the measurements. The robot, in the form of an AIBO, would either be present in the room or only be represented on the participants’ computer screen (presence). Furthermore, the robot would either make 20% errors or 40% errors (error rate) in the collaborative game. We automatically measured the praising and punishing behavior of the participants towards the robot and also asked the participants to estimate their own behavior. Results show that even the presence of the robot in the room did not convince all participants that they played together with the robot. To gain full insight into this human–robot relationship it might be necessary to directly interact with the robot. The participants unconsciously praised AIBO more than the human partner, but punished it just as much. Robots that adapt to the users’ behavior should therefore pay extra attention to the users’ praises, compared to their punishments.
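The second study manipulated a 20% vs. 40% error rate while automatically logging participants' praise and punish responses. The abstract does not describe the experimental software, so the following is only a hedged sketch of that setup (all class and method names are assumptions): a game partner that errs at a configurable rate, paired with a simple feedback log.

```python
import random

# Illustrative sketch: a game partner with a configurable error rate, plus a
# log of the praise/punish events a participant issues in response.
# (Assumed names; the study's actual software is not described in the abstract.)

class ErringPartner:
    def __init__(self, error_rate, seed=None):
        self.error_rate = error_rate
        self.rng = random.Random(seed)  # seeded for reproducible sessions

    def play_turn(self):
        """Return True if the partner answered correctly this turn."""
        return self.rng.random() >= self.error_rate

class FeedbackLog:
    def __init__(self):
        self.events = []

    def praise(self):
        self.events.append("praise")

    def punish(self):
        self.events.append("punish")

    def counts(self):
        return {kind: self.events.count(kind) for kind in ("praise", "punish")}

# One simulated 20-turn session with a 40%-error partner:
partner = ErringPartner(error_rate=0.4, seed=1)
log = FeedbackLog()
for _ in range(20):
    log.praise() if partner.play_turn() else log.punish()
print(log.counts())
```

Comparing `counts()` across conditions would mirror the study's automatic behavioral measure, though the real experiment of course logged human responses rather than simulating them.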


2018 ◽  
Vol 38 (4) ◽  
pp. 71-83 ◽  
Author(s):  
Christian Santoni ◽  
Gabriele Salvati ◽  
Valentina Tibaldo ◽  
Fabio Pellacini

Author(s):  
Dariusz Czerwinski ◽  
Marek Milosz ◽  
Patryk Karczmarczyk ◽  
Mateusz Kutera ◽  
Marcin Najda

2020 ◽  
Vol 117 (12) ◽  
pp. 6370-6375 ◽  
Author(s):  
Margaret L. Traeger ◽  
Sarah Strohkorb Sebo ◽  
Malte Jung ◽  
Brian Scassellati ◽  
Nicholas A. Christakis

Social robots are becoming increasingly influential in shaping the behavior of humans with whom they interact. Here, we examine how the actions of a social robot can influence human-to-human communication, and not just robot–human communication, using groups of three humans and one robot playing 30 rounds of a collaborative game (n = 51 groups). We find that people in groups with a robot making vulnerable statements converse substantially more with each other, distribute their conversation somewhat more equally, and perceive their groups more positively compared to control groups with a robot that either makes neutral statements or no statements at the end of each round. Shifts in robot speech have the power not only to affect how people interact with robots, but also how people interact with each other, offering the prospect for modifying social interactions via the introduction of artificial agents into hybrid systems of humans and machines.
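The abstract reports that vulnerable-robot groups "distribute their conversation somewhat more equally" but does not state the metric here. One standard way to quantify equality of participation is normalized Shannon entropy over each member's turn counts; the sketch below uses that measure as an assumption, not as the authors' actual method.

```python
import math

def conversation_equality(turn_counts):
    """Normalized Shannon entropy of speaking shares: 1.0 means perfectly
    equal participation; values near 0 mean one person dominates."""
    total = sum(turn_counts)
    shares = [c / total for c in turn_counts if c > 0]
    entropy = -sum(p * math.log(p) for p in shares)
    return entropy / math.log(len(turn_counts))

# Three-person groups: perfectly equal vs. one dominant speaker.
print(round(conversation_equality([10, 10, 10]), 3))  # 1.0
print(round(conversation_equality([26, 2, 2]), 3))    # well below 1.0
```

A Gini coefficient over turn counts would serve equally well; entropy is used here only because it normalizes cleanly to [0, 1] for any group size.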

