Driving Faster Than a Human Player

Author(s):  
Jan Quadflieg ◽  
Mike Preuss ◽  
Günter Rudolph
Keyword(s):  
Games ◽  
2021 ◽  
Vol 12 (3) ◽  
pp. 52
Author(s):  
Hanshu Zhang ◽  
Frederic Moisan ◽  
Cleotilde Gonzalez

This research studied the strategies that players use in sequential adversarial games. We took the Rock-Paper-Scissors (RPS) game as an example and ran players through two experiments. The first experiment involved two humans who played RPS together 100 times. Importantly, our payoff design in the RPS allowed us to differentiate participants who used a random strategy from those who used a Nash strategy. We found that participants did not play in agreement with the Nash strategy; rather, their behavior was closer to random. Moreover, the analyses of the participants’ sequential actions indicated heterogeneous cycle-based behaviors: some participants’ actions were independent of their past outcomes, some followed the well-known win-stay/lose-change strategy, and others exhibited a win-change/lose-stay behavior. To understand the sequential patterns of outcome-dependent actions, we designed probabilistic computer algorithms involving specific change actions (i.e., to downgrade or upgrade according to the immediate past outcome): the Win-Downgrade/Lose-Stay (WDLS) and Win-Stay/Lose-Upgrade (WSLU) strategies. Experiment 2 used these strategies against a human player. Our findings show that participants followed a win-stay strategy against the WDLS algorithm and a lose-change strategy against the WSLU algorithm, but had difficulty exploiting the upgrade/downgrade direction, suggesting humans’ limited ability to detect and counter the actions of the algorithm. Taken together, our two experiments revealed a large diversity of sequential strategies, and the win-stay/lose-change strategy did not describe the majority of human players’ dynamic behaviors in this adversarial situation.
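As an illustration of how such outcome-dependent change strategies might be encoded, here is a minimal Python sketch. The probability parameter p, the handling of ties, and the reading of "upgrade" as switching to the move that beats one's own previous move (with "downgrade" as its inverse) are assumptions for illustration, not details taken from the paper.

```python
import random

MOVES = ["rock", "paper", "scissors"]
# BEATS[m] is the move that m defeats.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def upgrade(move):
    """Return the move that beats `move` (one step up the cycle). Assumed reading."""
    return next(m for m in MOVES if BEATS[m] == move)

def downgrade(move):
    """Return the move that loses to `move` (one step down the cycle). Assumed reading."""
    return BEATS[move]

def wdls(last_move, last_outcome, p=1.0):
    """Win-Downgrade/Lose-Stay: after a win, downgrade with probability p;
    otherwise (loss or tie, tie handling assumed) repeat the previous move."""
    if last_outcome == "win" and random.random() < p:
        return downgrade(last_move)
    return last_move

def wslu(last_move, last_outcome, p=1.0):
    """Win-Stay/Lose-Upgrade: after a loss, upgrade with probability p;
    otherwise (win or tie, tie handling assumed) repeat the previous move."""
    if last_outcome == "lose" and random.random() < p:
        return upgrade(last_move)
    return last_move
```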


Author(s):  
Antonio M. Mora-García ◽  
Juan Julián Merelo-Guervós

A bot is an autonomous enemy that tries to beat the human player and/or other bots in a game. This chapter describes the design, implementation and results of a system that evolves bots inside the PC game Unreal™. The default artificial intelligence (AI) of the bot has been improved using two different evolutionary methods: genetic algorithms (GAs) and genetic programming (GP). The first has been applied to tune the hard-coded parameter values inside the bot's AI code. The second has been used to change the default set of rules (or states) that defines its behaviour. Moreover, the first approach has been considered at two levels, individual and team, with additional studies at the team level to find the best cooperation scheme. Both techniques yield very good results, evolving bots (and teams) that are capable of defeating the default ones. The best results are obtained with the GA approach, since it merely refines the default behaviour rules, whereas the GP method has to redefine the whole rule set, which makes good results harder to obtain. This chapter thus presents one approach to AI programming: building a better model from a standard one.
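To make the parameter-tuning idea concrete, the following is a minimal, generic GA loop of the kind described. The parameter names, bounds, genetic operators and the fitness hook are illustrative placeholders and do not come from the actual Unreal™ bot code.

```python
import random

# Hypothetical bounds for hard-coded AI values a GA could tune
# (names are illustrative, not taken from the chapter).
PARAM_BOUNDS = {"aggressiveness": (0.0, 1.0),
                "health_retreat_threshold": (10.0, 80.0),
                "preferred_engagement_distance": (100.0, 2000.0)}

def random_individual():
    """One candidate parameter set, sampled uniformly within the bounds."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_BOUNDS.items()}

def mutate(ind, rate=0.2):
    """Gaussian perturbation of each parameter with probability `rate`, clipped to bounds."""
    child = dict(ind)
    for k, (lo, hi) in PARAM_BOUNDS.items():
        if random.random() < rate:
            child[k] = min(hi, max(lo, child[k] + random.gauss(0, (hi - lo) * 0.1)))
    return child

def crossover(a, b):
    """Uniform crossover: each parameter is inherited from either parent."""
    return {k: random.choice((a[k], b[k])) for k in a}

def evolve(fitness, pop_size=20, generations=50):
    """Generic GA loop; `fitness` would run matches against the default bots
    and return, e.g., the frag difference (hypothetical hook)."""
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 4]
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```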


Author(s):  
Antonio Miguel Mora ◽  
Francisco Aisa ◽  
Pablo García-Sánchez ◽  
Pedro Ángel Castillo ◽  
Juan Julián Merelo

Autonomous agents in videogames, usually called bots, have tried to behave like human players since their emergence more than 20 years ago. They normally model part of an expert human player's knowledge of the game, aiming to become a competitive opponent or a good partner for other players. This paper presents a detailed description of the design of a bot for playing 1 vs. 1 Death Match mode in the first-person shooter Unreal Tournament™ 2004 (UT2K4). The bot uses a state-based Artificial Intelligence model that emulates a large part of the behavior/knowledge (actions and tricks) of an expert human player in this mode, a player who has participated in international UT2K4 championships. The behavioral engine considers primary and secondary actions and uses a memory approach: it is based on an auxiliary database for learning about the fighting arena, storing weapon and item locations once the bot has discovered them, as a human player would. This so-called Expert Bot has yielded excellent results, beating the game's default bots even at the hardest difficulty and proving to be a very hard opponent for medium-level human players.
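A rough sketch of what such an item-location memory could look like is shown below; the schema, class and method names are illustrative assumptions, not the paper's actual implementation.

```python
import sqlite3

class ArenaMemory:
    """Illustrative auxiliary memory: weapon and item locations are stored
    once discovered so they can be revisited later (schema is an assumption)."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS items ("
            "name TEXT, kind TEXT, x REAL, y REAL, z REAL, "
            "PRIMARY KEY (name, x, y, z))")

    def remember(self, name, kind, pos):
        """Store a discovered item; duplicates of the same entry are ignored."""
        x, y, z = pos
        self.db.execute("INSERT OR IGNORE INTO items VALUES (?, ?, ?, ?, ?)",
                        (name, kind, x, y, z))
        self.db.commit()

    def known_locations(self, kind):
        """Return every remembered location of a given kind, e.g. 'weapon'."""
        return self.db.execute(
            "SELECT name, x, y, z FROM items WHERE kind = ?", (kind,)).fetchall()

# Example: the bot discovers a weapon while exploring, then later queries it.
memory = ArenaMemory()
memory.remember("ShockRifle", "weapon", (1024.0, 512.0, 64.0))
print(memory.known_locations("weapon"))
```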


2020 ◽  
Vol 10 (16) ◽  
pp. 5636
Author(s):  
Wafaa Alsaggaf ◽  
Georgios Tsaramirsis ◽  
Norah Al-Malki ◽  
Fazal Qudus Khan ◽  
Miadah Almasry ◽  
...  

Computer-controlled virtual characters are essential parts of most virtual environments and especially computer games. Interaction between these virtual agents and human players has a direct impact on the believability of and immersion in the application, and the facial animations of these characters are a key part of these interactions. The player expects the elements of the virtual world to act in a manner similar to the real world; for example, in a board game, if the human player wins, he/she would expect the computer-controlled character to be sad. However, the reactions, and more specifically the facial expressions, of virtual characters in most games are not linked to game events. Instead, they have pre-programmed or random behaviors without any understanding of what is really happening in the game. In this paper, we propose a virtual character facial expression probabilistic decision model that determines when various facial animations should be played. The model was developed by studying the facial expressions of human players while they played a computer videogame that was also developed as part of this research. The model is represented as trees with 15 extracted game events as roots and 10 associated facial expression animations with their corresponding probabilities of occurrence. Results indicated that only 1 out of 15 game events had a probability of producing an unexpected facial expression. The “win, lose, tie” game events were found to have more dominant associations with the facial expressions than the rest of the game events, followed by the “surprise” game events, which occurred rarely, and finally the “damage dealing” events.
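A minimal sketch of how a per-event probability table of this kind could be sampled at runtime is given below; the event names, animation names, and probabilities are placeholders, not the values reported in the paper.

```python
import random

# Illustrative event -> (animation, probability) tables; placeholder values only.
EXPRESSION_MODEL = {
    "player_wins":  [("sad", 0.7), ("surprised", 0.2), ("neutral", 0.1)],
    "player_loses": [("happy", 0.8), ("neutral", 0.2)],
    "tie":          [("neutral", 0.6), ("surprised", 0.4)],
}

def pick_animation(event):
    """Sample a facial animation for a game event according to its
    probability table (the weights in each table sum to 1)."""
    animations, weights = zip(*EXPRESSION_MODEL[event])
    return random.choices(animations, weights=weights, k=1)[0]

# Example: the human player wins a round, so the virtual character reacts.
print(pick_animation("player_wins"))
```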


Acta Acustica ◽  
2021 ◽  
Vol 5 ◽  
pp. 11
Author(s):  
Kimie Onogi ◽  
Hiroshi Yokoyama ◽  
Akiyoshi Iida

For an isolated flute head joint, the effects of jet angle on the harmonic structure of a single note are investigated within the practical range for human players. The mechanisms of these effects are discussed on the basis of both the radiated sound and the flow field measured with a hot-wire anemometer. The blowing parameters, viz., jet angle (the angle between the jet direction and the window), jet offset (the relative height of the jet direction from the edge), lip-to-edge distance, and flow rate, were varied independently by using an artificial blowing device based on conditions measured for a human player, where the jet direction is defined as that measured without the head joint. The radiated sound revealed that jet angle varied the differential sound pressure level between the second and third harmonics (ΔSPL) less than jet offset did, but as much as flow rate and more than lip-to-edge distance. The spatial distribution of the jet fluctuation center showed that, with increasing jet angle (i.e., as the jet direction approaches the perpendicular to the window), the jet deflected further inward, so that the actual jet offset was estimated to lie further inside. The variation of ΔSPL with jet angle therefore seems to be caused mainly by this shift in the actual jet offset.
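Read this way, ΔSPL is simply the level difference between the two harmonics; this is our interpretation of the abstract's wording, not an explicit definition from the paper:

```latex
\Delta\mathrm{SPL} = \mathrm{SPL}_2 - \mathrm{SPL}_3
```

where SPL_2 and SPL_3 denote the sound pressure levels (in dB) of the second and third harmonics of the radiated note.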

