Robogame Strategy
Learning behaviors and user models to optimise the player's experience in robogames
Short Description: Focused on the use of machine learning for supporting player modelling and behavior/strategy adjustment towards maintaining (or improving) human player engagement in PIRGs.
Coordinator: AndreaBonarini (andrea.bonarini@polimi.it)
Tutor: Francesco Amigoni ()
Collaborator: Tiago Nascimento ()
Students: EwertonLopes (ewerton.lopes@polimi.it)
Research Area: Robotics
Research Topic: Robogames
Start: 2015/01/05
End: 2018/12/31
Status: Active
Level: PhD
Type: Thesis
Abstract
Given the steady progress in interactive systems and robots, a natural evolution of the gaming experience is to eliminate screens and devices, offering users the possibility to physically interact with autonomous agents without the need to produce an entire virtual reality, as in classical videogames. This new style of game interaction, known as Physically Interactive Robogames (PIRG), exploits the real world (in both its structured and unstructured, dynamic aspects) as the environment and real, physical, autonomous entities as game companions. In this scenario, the present PhD research investigates the use of machine learning techniques for developing complex behavior in autonomous PIRG robots. Specifically, it aims to support on-line player modelling (including an approach to intention detection) that drives in-game behavior/strategy adjustment towards maintaining (or improving) human player engagement. The planned methodology also explores mobile robot bases with cheap sensors and algorithms requiring little power to be executed in real time ("green algorithms") in non-structured environments, since these constraints are currently addressed both in robogames and in the robotics community at large to enable the spread of robots in society and bring them to the market. As a contribution to the scientific community, the proposed research may open up new methods and approaches for designing PIRGs in view of their relationship with ML-based techniques and Human-Robot Interaction. Moreover, it should add a new layer of exploration to the problem of creating playing robots that are more readily perceived as rational agents, i.e., smart enough to be accepted as opponents or teammates, and thus more likely to reach the mass market as a new robotic product.
Motivation
The explosion of advancements in computing power, artificial intelligence (AI) and hardware has resulted in steady progress in intelligent systems, entertainment devices and robots. Interactive systems that perceive, act and communicate are increasingly able to perform and occupy a growing number of roles in today's society. Taking advantage of this, video game companies, operating in a mass entertainment market, have recently aimed at establishing a new paradigm in which players actively move in front of the screen, pick up objects around them, and interact with the game in a more realistic fashion, with or without ad-hoc intelligent devices. This scenario, although enabling impressive gaming experiences through virtual reality, usually poses limitations regarding, for example, price, movement constraints, or the need to assemble a specific playing environment structure (which directly affects price).
A natural evolution of the game-playing experience, though, is to eliminate screens and devices altogether and offer users the possibility to physically interact with autonomous agents in their homes without the need to produce an entire virtual reality. This new style of game is defined as Physically Interactive RoboGames (PIRG); its main objective is the exploitation of the real world (in both its structured and unstructured, dynamic aspects) as the environment and of one or more real, physical, autonomous robots as game opponents/companions for the human player(s) [25].
Like commercial virtual games, the main goal of PIRGs is to produce a sense of entertainment and pleasure that can be "consumed" by a large number of users. Furthermore, the autonomous robots and systems involved in the game are commonly expected to exhibit rational behavior; in this sense, they must be capable enough to play the role of opponents or teammates effectively, since in practice people tend not to play with or against a dull companion/opponent [25]. The ability of the AI to adapt is therefore particularly important, since it can help sustain the user's enjoyment by responding appropriately (or at least ideally) to their skills and emotions, producing a more realistic appearance of rationality. This observation has a psychological foundation, given that productivity and/or satisfaction can be raised by a proper alignment between personality and environment [4]. Indeed, it has been shown that when AI-controlled virtual game characters play too weakly against the human player, the player loses interest in the game; conversely, the player often gets frustrated and wants to quit when AI-controlled characters play too strongly [5, 23]. Similar observations have been made in experiments involving PIRGs as well [25].
Motivated by this, researchers often implement a model of the human player in order to supplement the action decision making of the AI engine and related components, allowing some adaptation to take place. A very popular approach in this domain is Dynamic Difficulty Adjustment (DDA), in which the difficulty level of the game is adjusted dynamically to better fit the individual player. In recent years, player modeling has been a very active topic in the computer science community and, especially, in commercial game development, giving rise to several sophisticated models grouped into different taxonomies [5, 29, 31]. Proposed approaches exploit different aspects, such as actions, strategies, tactics, profiling, emotional traits, and the method of data extraction [5]. At the core of most recent attempts is the application of machine learning (ML) techniques that can exploit the amount of data generated by the game and find useful patterns for reasoning, also under uncertainty. A minimal sketch of the DDA idea is given below.
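As an illustration of the DDA idea described above, consider the following minimal Python sketch. It is purely hypothetical and not the method developed in this research: the player is summarised by a smoothed win-rate estimate, and a scalar robot difficulty parameter is nudged so that the observed win rate tracks a target value associated with engagement; all names and numeric values (SimpleDDA, target_win_rate, step, smoothing) are illustrative assumptions.

class SimpleDDA:
    """Toy dynamic difficulty adjustment based on a running win-rate estimate."""

    def __init__(self, target_win_rate=0.5, step=0.05, smoothing=0.9):
        self.target = target_win_rate    # desired player success rate
        self.step = step                 # how aggressively difficulty changes
        self.smoothing = smoothing       # exponential smoothing factor
        self.win_rate = target_win_rate  # running estimate of player success
        self.difficulty = 0.5            # robot difficulty in [0, 1]

    def update(self, player_won: bool) -> float:
        """Update the player model after one round and return the new difficulty."""
        outcome = 1.0 if player_won else 0.0
        # Exponentially smoothed estimate of the player's success rate.
        self.win_rate = self.smoothing * self.win_rate + (1 - self.smoothing) * outcome
        # If the player wins more often than the target, make the robot harder;
        # if they lose too often, make it easier.
        self.difficulty += self.step * (self.win_rate - self.target)
        self.difficulty = min(1.0, max(0.0, self.difficulty))
        return self.difficulty

if __name__ == "__main__":
    dda = SimpleDDA()
    # Toy usage: the player wins three rounds in a row, so difficulty rises.
    for won in (True, True, True):
        print(dda.update(won))

In a PIRG setting, such a loop would be driven by richer, ML-based player features (e.g., sensed motion and engagement cues) rather than by raw win/loss outcomes alone.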
Apart from the success of virtual games, research interest in the practical development of PIRGs is still in its "infancy", and despite initial "proof-of-concept" progress [25], the design and implementation of player modeling to support strategy selection aimed at keeping the player engaged is still a poorly addressed problem. Based on this, the present research investigates how to develop efficient player modeling abilities in autonomous robots for the purpose of increasing the player's enjoyment in PIRGs. From an ML-based perspective, I focus on the ability to extract player features to support strategy adjustment. To some extent, this can be viewed as a first attempt at implementing DDA in robogames. Additionally, I seek to test the research results both in simulated environments and through the exploitation of mobile robot bases with cheap sensors and algorithms requiring little power to be executed in real time ("green algorithms") in non-structured environments, since these are interesting constraints currently addressed in robogames and in the Robotics community at large, which may enable the spread of robots in society and bring them to the market.