human player
Recently Published Documents

TOTAL DOCUMENTS: 38 (FIVE YEARS: 18)
H-INDEX: 3 (FIVE YEARS: 1)

2021 · Vol 5 (CHI PLAY) · pp. 1-17
Author(s): Shaghayegh Roohi, Christian Guckelsberger, Asko Relas, Henri Heiskanen, Jari Takatalo, ...

This paper presents a novel approach to automated playtesting for predicting human player behavior and experience. We have previously demonstrated that Deep Reinforcement Learning (DRL) game-playing agents can predict both game difficulty and player engagement, operationalized as average pass and churn rates. We improve this approach by enhancing DRL with Monte Carlo Tree Search (MCTS). We also motivate an enhanced selection strategy for predictor features, based on the observation that an AI agent's best-case performance can yield stronger correlations with human data than its average performance. Both additions consistently improve prediction accuracy, and the DRL-enhanced MCTS outperforms both plain DRL and vanilla MCTS on the hardest levels. We conclude that player modelling via automated playtesting can benefit from combining DRL and MCTS. Moreover, when average AI gameplay does not yield good predictions, it can be worthwhile to investigate a subset of repeated best AI agent runs.
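As a concrete illustration, the sketch below shows one common way of enhancing MCTS with a DRL agent: PUCT-style selection in which the agent's policy network supplies priors and its value network replaces random rollouts. This is a minimal sketch of the general technique under stated assumptions, not the authors' implementation; the `policy`, `value`, and environment interfaces are assumed.

```python
import math

# Assumed interfaces: policy(state) returns a dict {action: prior} and
# value(state) a scalar estimate, both from a trained DRL agent;
# env.step(state, action) returns the successor state.

class Node:
    def __init__(self, state, prior=1.0):
        self.state = state
        self.prior = prior      # P(s, a) from the DRL policy network
        self.children = {}      # action -> Node
        self.visits = 0
        self.value_sum = 0.0

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """PUCT selection: exploit Q, explore proportionally to the DRL prior."""
    total = math.sqrt(node.visits)
    return max(
        node.children.items(),
        key=lambda kv: kv[1].q() + c_puct * kv[1].prior * total / (1 + kv[1].visits),
    )

def run_mcts(root, env, policy, value, n_simulations=200):
    """DRL-guided MCTS: select with PUCT, expand with policy priors,
    evaluate leaves with the value network instead of random rollouts."""
    for _ in range(n_simulations):
        node, path = root, [root]
        while node.children:                      # 1. selection
            _, node = select_child(node)
            path.append(node)
        for action, p in policy(node.state).items():   # 2. expansion
            node.children[action] = Node(env.step(node.state, action), prior=p)
        leaf_value = value(node.state)            # 3. evaluation
        for n in path:                            # 4. backup
            n.visits += 1
            n.value_sum += leaf_value
    # Act greedily with respect to visit counts, as is standard for MCTS.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```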


Games · 2021 · Vol 12 (3) · pp. 52
Author(s): Hanshu Zhang, Frederic Moisan, Cleotilde Gonzalez

This research studied the strategies that players use in sequential adversarial games. We took the Rock-Paper-Scissors (RPS) game as an example and ran two experiments. The first experiment involved two humans, who played RPS together for 100 rounds. Importantly, our payoff design in the RPS allowed us to differentiate participants who used a random strategy from those who used a Nash strategy. We found that participants did not play in agreement with the Nash strategy; rather, their behavior was closer to random. Moreover, the analyses of the participants' sequential actions indicated heterogeneous cycle-based behaviors: some participants' actions were independent of their past outcomes, some followed the well-known win-stay/lose-change strategy, and others exhibited win-change/lose-stay behavior. To understand the sequential patterns of outcome-dependent actions, we designed probabilistic computer algorithms involving specific change actions (i.e., to downgrade or upgrade according to the immediate past outcome): the Win-Downgrade/Lose-Stay (WDLS) and Win-Stay/Lose-Upgrade (WSLU) strategies. Experiment 2 pitted these strategies against a human player. Our findings show that participants followed a win-stay strategy against the WDLS algorithm and a lose-change strategy against the WSLU algorithm, but had difficulty choosing the correct upgrade/downgrade direction, suggesting humans' limited ability to detect and counter the actions of the algorithm. Taken together, our two experiments showed a large diversity of sequential strategies, and the win-stay/lose-change strategy did not describe the majority of human players' dynamic behaviors in this adversarial situation.
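For concreteness, here is a minimal sketch of the two outcome-dependent algorithms, under one plausible reading of "upgrade" and "downgrade" on the rock → paper → scissors cycle (upgrade = play the move that beats your own last move; downgrade = play the move your last move beat). The rule-following probability `p` is an assumption; the abstract does not report the exact values used.

```python
import random

# The cyclic RPS relation: each key beats its value.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
UPGRADE = {v: k for k, v in BEATS.items()}   # move -> the move that beats it

def wdls(last_move, last_outcome, p=1.0):
    """Win-Downgrade/Lose-Stay: after a win, play the move your last move
    beat (downgrade); otherwise repeat the last move. With probability 1-p
    the rule is ignored and a uniform random move is played instead."""
    if random.random() > p:
        return random.choice(list(BEATS))
    return BEATS[last_move] if last_outcome == "win" else last_move

def wslu(last_move, last_outcome, p=1.0):
    """Win-Stay/Lose-Upgrade: after a loss, play the move that beats your
    last move (upgrade); otherwise repeat the last move."""
    if random.random() > p:
        return random.choice(list(BEATS))
    return UPGRADE[last_move] if last_outcome == "loss" else last_move
```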


2021 · Vol 8
Author(s): Lara Christoforakos, Alessio Gallucci, Tinatini Surmava-Große, Daniel Ullrich, Sarah Diefenbach

Robots increasingly act as our social counterparts in domains such as healthcare and retail. For these human-robot interactions (HRI) to be effective, a question arises as to whether we trust robots the same way we trust humans. We investigated whether the determinants competence and warmth, known to influence interpersonal trust development, also influence trust development in HRI, and what role anthropomorphism plays in this interrelation. In two online studies with a 2 × 2 between-subjects design, we investigated the role of robot competence (Study 1) and robot warmth (Study 2) in trust development in HRI. Each study explored the role of robot anthropomorphism in the respective interrelation. Videos showing an HRI were used to manipulate robot competence (through varying gameplay competence) and robot anthropomorphism (through verbal and non-verbal design cues and the robot's presentation within the study introduction) in Study 1 (n = 155), as well as robot warmth (through varying compatibility of intentions with the human player) and robot anthropomorphism (as in Study 1) in Study 2 (n = 157). Results show a positive effect of robot competence (Study 1) and robot warmth (Study 2) on trust development in robots regarding anticipated trust and attributed trustworthiness. Subjective perceptions of competence (Study 1) and warmth (Study 2) mediated the interrelations in question. Considering the applied manipulations, robot anthropomorphism moderated neither the interrelation of robot competence and trust (Study 1) nor that of robot warmth and trust (Study 2). Considering subjective perceptions, perceived anthropomorphism moderated the effect of perceived competence (Study 1) and perceived warmth (Study 2) on trust at an attributional level. Overall, the results support the importance of robot competence and warmth for trust development in HRI and imply that determinants of trust development in interpersonal interaction transfer to HRI. The results indicate a possible role of perceived anthropomorphism in these interrelations and support a combined consideration of these variables in future studies. These insights deepen the understanding of key variables and their interaction in trust dynamics in HRI and suggest design factors that may enable appropriate trust levels and a resulting desirable HRI. Methodological and conceptual limitations underline the benefits of a more robot-specific approach in future research.


Acta Acustica · 2021 · Vol 5 · pp. 11
Author(s): Kimie Onogi, Hiroshi Yokoyama, Akiyoshi Iida

For an isolated flute head joint, the effects of jet angle on the harmonic structure of a single note are investigated within the practical range for human players. The mechanisms of these effects are discussed on the basis of both the radiated sound and the flow field measured with a hot-wire anemometer. The blowing parameters, viz., jet angle (the angle between the jet direction and the window), jet offset (the relative height of the jet direction from the edge), lip-to-edge distance, and flow rate, were varied independently using an artificial blowing device based on measured conditions for a human player, where the jet direction is defined as that measured without the head joint. The radiated sound revealed that jet angle varied the differential sound pressure level between the second and third harmonics (ΔSPL) less than jet offset did, but as much as flow rate and more than lip-to-edge distance. The spatial distribution of the jet fluctuation center showed that, with increasing jet angle (as the jet direction approaches the vertical to the window), the jet deflected further inward, so that the actual jet offset was estimated to lie further inside. The variation of ΔSPL with jet angle thus seems to be caused mainly by this shift in the actual jet offset.


2020 · Vol 65 (2) · pp. 31
Author(s): T.V. Pricope

Many real-world applications can be described as large-scale games of imperfect information. Such games are substantially harder than deterministic ones, as the search space is even larger. In this paper, I explore the power of reinforcement learning in such an environment by taking on one of the most popular games of this type, the still-unsolved no-limit Texas Hold'em Poker, developing multiple agents with different learning paradigms and techniques and then comparing their respective performances. When applied to no-limit Hold'em Poker, deep reinforcement learning agents clearly outperform agents with a more traditional approach. Moreover, while the latter agents rival a beginner-level human, the agents based on reinforcement learning compare to an amateur human player. The main algorithm uses Fictitious Play in combination with ANNs and some handcrafted metrics. We also applied the main algorithm to another game of imperfect information, less complex than Poker, to show the scalability of this solution and the increase in performance when put head-to-head with established classical approaches from the reinforcement learning literature.
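The abstract does not spell out the training loop, but combining Fictitious Play with ANNs is commonly realized as Neural Fictitious Self-Play (NFSP): a best-response network trained by RL plus an average-policy network trained by supervised learning on the agent's own best-response actions. The sketch below illustrates that general recipe; the network sizes, the anticipatory parameter `ETA`, and all names are illustrative assumptions, not the paper's implementation.

```python
import random

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, ETA = 64, 4, 0.1   # illustrative sizes and mixing rate

def mlp(out_dim):
    return nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                         nn.Linear(128, out_dim))

q_net = mlp(N_ACTIONS)        # approximate best response (DQN-style training)
avg_policy = mlp(N_ACTIONS)   # average strategy (supervised classification)
sl_buffer = []                # memory of (state, best_response_action) pairs

def act(state):
    """Mix best response and average strategy, as in fictitious play."""
    if random.random() < ETA:                  # play the approximate best response
        action = q_net(state).argmax().item()
        sl_buffer.append((state, action))      # record it for supervised learning
        return action
    probs = torch.softmax(avg_policy(state), dim=-1)
    return torch.multinomial(probs, 1).item()  # sample from the average policy

def sl_update(optimizer, batch_size=32):
    """Nudge the average policy toward past best-response behaviour."""
    if not sl_buffer:
        return
    batch = random.sample(sl_buffer, min(batch_size, len(sl_buffer)))
    states = torch.stack([s for s, _ in batch])
    actions = torch.tensor([a for _, a in batch])
    loss = nn.functional.cross_entropy(avg_policy(states), actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```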


Informatics · 2020 · Vol 7 (3) · pp. 34
Author(s): Libor Pekař, Radek Matušů, Jiří Andrla, Martina Litschmannová

The Kalah game is the most popular version of probably the oldest board game ever, the Mancala game. From this viewpoint, the art of playing Kalah can contribute to cultural heritage. This paper primarily focuses on a review of Kalah history and a survey of the research done so far on solving and analyzing the Kalah game (and some other related Mancala games). This review concludes that, even though strong in-depth tree-search solutions for some types of the game have already been published, it is still reasonable to develop less time-consuming and less computationally demanding playing algorithms and strategies. Therefore, the paper also presents an original heuristic algorithm based on particular deterministic strategies arising from an analysis of the game rules. Standard and modified minimax tree-search algorithms are introduced as well. A simple C++ application with the Qt framework was developed for algorithm verification and comparative experiments. Two sets of benchmark tests were made: first, a tournament in which a mid-experienced amateur human player competes with the three algorithms; then, a round-robin tournament of all the algorithms. It can be deduced that the proposed heuristic algorithm has success comparable to the human player and to low-depth tree-search solutions. Moreover, multiple-case experiments proved that the opening move has a decisive impact on winning or losing. Namely, if the computer plays first, the human opponent cannot beat it; conversely, if the computer using the heuristic algorithm plays second, it nearly always loses.
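For reference, depth-limited minimax with alpha-beta pruning, the kind of tree search the heuristic is benchmarked against, can be sketched as follows. The Kalah helpers (`legal_moves`, `apply_move`, `is_over`, `score`) are assumed interfaces for illustration, not the paper's C++ implementation; `apply_move` is assumed to return the successor board and whether the mover earned an extra turn.

```python
import math

def minimax(board, depth, alpha, beta, player):
    """Player 0 maximizes score(board); player 1 minimizes it."""
    if depth == 0 or is_over(board):
        return score(board)   # heuristic value, e.g. the store difference
    best = -math.inf if player == 0 else math.inf
    for move in legal_moves(board, player):
        child, extra_turn = apply_move(board, move, player)
        # Kalah's extra-turn rule: the same player moves again if the last
        # seed landed in their own store.
        nxt = player if extra_turn else 1 - player
        value = minimax(child, depth - 1, alpha, beta, nxt)
        if player == 0:
            best = max(best, value)
            alpha = max(alpha, value)
        else:
            best = min(best, value)
            beta = min(beta, value)
        if beta <= alpha:
            break   # prune: the other player will never allow this branch
    return best
```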


2020 · pp. 203-214
Author(s): Chris Bleakley

Chapter 12 is the story of AlphaGo, the first computer program to defeat a top human player at the board game Go. Beginning on March 9, 2016, grandmaster Lee Sedol took on AlphaGo for a US$1 million prize in a best-of-five match. Experts expected it would be easy money for Sedol. To most observers' surprise, AlphaGo swept the first three games to win the match. AlphaGo was based on deep artificial neural networks (ANNs). The networks were trained on 30 million example moves, followed by 1.2 million games the program played against itself. AlphaGo was the creation of a London-based company named DeepMind Technologies. Founded in 2010 and acquired by Google in 2014, DeepMind has made a succession of high-profile breakthroughs in artificial intelligence. Recently, its AlphaZero ANN displayed signs of general-purpose intelligence: it learned to play chess, shogi, and Go at world-champion level in a few days.


Proceedings · 2020 · Vol 54 (1) · pp. 36
Author(s): Alejandro Rodríguez-Arias, Bertha Guijarro-Berdiñas, Noelia Sánchez-Maroño

Multiagent systems (MASs) make it possible to tackle complex, heterogeneous, distributed problems that are difficult to solve with a single software agent. The world of video games provides problems and environments well suited to the use of MASs. In the field of games, Unity is one of the most used engines and allows the development of intelligent agents in virtual environments. However, although Unity allows working in multiagent environments, it does not provide functionalities to facilitate the development of MASs. The aim of this work is to create a multiagent system in Unity. For this purpose, a predator-prey problem was designed in which the agents must cooperate to arrest a thief driven by a human player. Solving this cooperative problem requires creating the 3D representation of the environment and the agents; equipping the agents with vision, contact, and sound sensors to perceive the environment; implementing the agents' behaviors; and, finally but no less important, building a communication system between agents that allows negotiation, collaboration, and cooperation among them to create a complex, role-based chasing strategy.
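The paper's communication system is built inside Unity (in C#); as an engine-agnostic illustration of the same idea, the sketch below implements a tiny publish/subscribe message bus and a contract-net style negotiation that assigns "chaser" and "blocker" roles by bidding distances to the thief. All names, and the `distance_to` agent API, are illustrative assumptions.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    topic: str
    payload: dict

class MessageBus:
    """Minimal publish/subscribe channel shared by all agents."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> [callback]

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, msg: Message):
        for callback in self.subscribers[msg.topic]:
            callback(msg)

def assign_roles(agents, thief_position):
    """Contract-net style negotiation: every agent bids its distance to the
    thief; the closest becomes 'chaser', the rest take 'blocker' roles."""
    bus = MessageBus()
    bids = []
    bus.subscribe("bid", lambda m: bids.append((m.payload["distance"], m.sender)))
    for agent in agents:
        # Each agent publishes a bid; distance_to is an assumed agent API.
        bus.publish(Message(agent.name, "bid",
                            {"distance": agent.distance_to(thief_position)}))
    bids.sort()                                  # closest bidder first
    roles = {name: "blocker" for _, name in bids}
    roles[bids[0][1]] = "chaser"                 # closest agent chases
    return roles
```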


2020 · Vol 10 (16) · pp. 5636
Author(s): Wafaa Alsaggaf, Georgios Tsaramirsis, Norah Al-Malki, Fazal Qudus Khan, Miadah Almasry, ...

Computer-controlled virtual characters are essential parts of most virtual environments and especially computer games. Interaction between these virtual agents and human players has a direct impact on the believability of and immersion in the application. The facial animations of these characters are a key part of these interactions. Players expect the elements of the virtual world to behave much as they would in the real world. For example, in a board game, if the human player wins, he/she would expect the computer-controlled character to be sad. However, the reactions, more specifically the facial expressions, of virtual characters in most games are not linked with the game events. Instead, they have pre-programmed or random behaviors without any understanding of what is really happening in the game. In this paper, we propose a probabilistic decision model for virtual character facial expressions that determines when various facial animations should be played. The model was developed by studying the facial expressions of human players while they played a computer video game that was also developed as part of this research. The model is represented in the form of trees with 15 extracted game events as roots and 10 associated facial expression animations with their corresponding probabilities of occurrence. Results indicated that only 1 out of 15 game events had a probability of producing an unexpected facial expression. The "win, lose, tie" game events were found to have more dominant associations with the facial expressions than the rest of the game events, followed by "surprise" game events, which occurred rarely, and finally the "damage dealing" events.
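At runtime such a model reduces to sampling an animation from the probability table attached to each game-event root. A minimal sketch follows; the event names, animation names, and probabilities are illustrative placeholders, not the values measured in the paper.

```python
import random

# Each game event is a tree root whose children are facial animations with
# occurrence probabilities (illustrative numbers only).
EXPRESSION_MODEL = {
    "win":  [("happy", 0.7), ("surprise", 0.2), ("neutral", 0.1)],
    "lose": [("sad", 0.6), ("angry", 0.3), ("neutral", 0.1)],
    "tie":  [("neutral", 0.5), ("thinking", 0.5)],
}

def pick_expression(event):
    """Sample a facial animation for a game event from its probability tree."""
    animations, weights = zip(*EXPRESSION_MODEL[event])
    return random.choices(animations, weights=weights, k=1)[0]

# e.g. pick_expression("win") returns "happy" about 70% of the time
```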

