Working notes about Robogame design
These are intended as working notes, to be shared and contributed to by all the people involved in this activity, in order to define a basic framework for designing interactive games with autonomous robots.
A game involving interaction with an autonomous robot is in a sense a computer game, since the behavior of at least one of the players is managed by a computer. The main difference w.r.t. a standard computer game is that there is physical interaction between the players: all of them will probably have to move in some way and, in order to do so, they have to perceive signals from the other players.
As for any game, it is important to define, as initial specifications, who the target users are and which environment the game addresses.
Users have to be involved in the game, which means the game should stimulate interest. This is usually obtained with something that is challenging, but not too difficult for the human player to face (E. L. Deci, R. Flaste, Why We Do What We Do: Understanding Self-Motivation; see also the work on flow by Csikszentmihalyi).
In this project we may consider users differing by age (6-9 years, 10-15, 15-18, 18-25, 25-45, over 45) or also by gender (most videogames do not address potential female users).
The environment we consider is an indoor one, typically a home or a school. This avoids the problems of outdoor environments and makes the game more usable.
A central component is the autonomous robot. To have a realistic game we should keep the robot as cheap as possible (as a reference, consider the cost of a console: 200-300 Euros).
To characterize the robot we may consider the following aspects:
- movement: the robot should move as needed by the game, also considering safety aspects (it should not be dangerous in any case, and it should not be perceived as dangerous)
  - speed: up to 0.5-1 m/s
  - acceleration
  - kind of movement
- power and batteries: for moving robots, it is important to consider the weight, the power needed to move the robot as expected, and the corresponding battery capacity (which in turn influences the weight); a rough power and runtime estimate is sketched after this list
- sensors
  - internal (on board)
    - contact sensors (e.g., micro-switches): give a signal when triggered by contact
    - proximity sensors (e.g., infrared): give a signal when they detect objects in a given range (typically a few centimeters); infrared proximity sensors are not fully reliable (they may not detect objects that absorb infrared light, and may be affected by strong infrared sources such as a powerful lamp or a heater)
    - range sensors (e.g., sonar): give a signal proportional to the distance to the closest object detected within the detection range (usually approximated as a cone with a 10-40 degree opening, spanning 2-4 meters)
    - magnetic field sensors (e.g., compass): give a direction w.r.t. magnetic North; may be influenced by local magnetic fields (such as the iron frame of a bed, or loudspeakers)
    - encoders: give the angular movement of the axis on which they are mounted (e.g., the motor shaft); see the odometry sketch after this list
    - accelerometers, gyros: give estimates of the acceleration along, or the angular velocity around, a given axis
    - camera: provides an image that can be processed. Some processing can be done on electronics connected to the camera (e.g., the Wiimote has a camera and processing hardware that directly provides the image-plane coordinates of up to four infrared spots; any optical mouse estimates movement by processing up to 2000 images per second, looking at differences between subsequent images). Image elements that are relatively simple to extract (e.g., with the OpenCV libraries, http://sourceforge.net/projects/opencvlibrary/) are color blobs (dimension, shape, position in image coordinates) and edges (strong differences in light intensity, typically found at the borders of objects or on specifically designed markers, such as those used in the Lurch project); a minimal blob-detection sketch is given after this list
  - external
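As a back-of-the-envelope check for the power and batteries item above, the sketch below estimates the electrical power drawn and the runtime on one charge for a small indoor robot. It is only a sizing aid: every numeric constant (mass, rolling-resistance coefficient, drivetrain efficiency, electronics draw, battery capacity) is an assumed placeholder to be replaced with measured values for the actual robot.

```python
# Rough power and runtime sizing; all numbers are illustrative assumptions.

MASS_KG = 3.0            # assumed total robot mass (chassis + batteries)
SPEED_M_S = 0.8          # target speed from the notes (0.5-1 m/s)
ROLLING_COEFF = 0.05     # assumed rolling-resistance coefficient on indoor floors
DRIVE_EFFICIENCY = 0.6   # assumed combined motor + gearbox efficiency
ELECTRONICS_W = 2.0      # assumed constant draw of sensors and controller
BATTERY_WH = 25.0        # assumed battery capacity (e.g., ~2 Ah at 12 V)
G = 9.81                 # gravitational acceleration, m/s^2

# Mechanical power needed to keep the robot rolling at constant speed:
# P = F * v, with F approximated by rolling resistance only (flat floor).
rolling_force_n = ROLLING_COEFF * MASS_KG * G
mechanical_w = rolling_force_n * SPEED_M_S

# Electrical power drawn from the battery.
electrical_w = mechanical_w / DRIVE_EFFICIENCY + ELECTRONICS_W

# Approximate runtime on one charge.
runtime_h = BATTERY_WH / electrical_w

print(f"rolling force: {rolling_force_n:.2f} N")
print(f"mechanical power: {mechanical_w:.2f} W")
print(f"electrical power: {electrical_w:.2f} W")
print(f"estimated runtime: {runtime_h:.1f} h")
```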
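For the encoder item, here is a minimal dead-reckoning (odometry) sketch, assuming a differential-drive robot with one encoder per wheel; the tick resolution, wheel radius, and wheel base are invented values for illustration.

```python
import math

# Assumed drive geometry; adjust to the actual robot.
TICKS_PER_REV = 360        # encoder resolution (assumed)
WHEEL_RADIUS_M = 0.03      # wheel radius (assumed)
WHEEL_BASE_M = 0.15        # distance between the two drive wheels (assumed)

def ticks_to_distance(ticks: int) -> float:
    """Linear travel of one wheel given its encoder tick count."""
    revolutions = ticks / TICKS_PER_REV
    return revolutions * 2.0 * math.pi * WHEEL_RADIUS_M

def update_pose(x: float, y: float, theta: float,
                left_ticks: int, right_ticks: int):
    """Dead-reckoning update for a differential-drive robot.

    Returns the new (x, y, theta) after the two wheels have moved by
    the given tick counts since the last update.
    """
    d_left = ticks_to_distance(left_ticks)
    d_right = ticks_to_distance(right_ticks)
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE_M
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

# Example: both wheels advance 180 ticks -> the robot moves straight ahead.
print(update_pose(0.0, 0.0, 0.0, 180, 180))
```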
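For the camera item, a minimal color-blob detector using the OpenCV Python bindings: it thresholds the image in HSV space and returns the position and size of the blobs in image coordinates, as described above. The HSV bounds and the minimum blob area are assumptions that must be tuned for the actual markers and lighting.

```python
import cv2
import numpy as np

# HSV range for the target color; these bounds are assumptions and must be
# tuned for the actual marker and lighting (this example targets a red-ish blob).
LOWER_HSV = np.array([0, 120, 80])
UPPER_HSV = np.array([10, 255, 255])
MIN_AREA_PX = 200  # ignore blobs smaller than this (assumed threshold)

def find_color_blobs(frame_bgr):
    """Return a list of (cx, cy, w, h, area) for blobs of the target color.

    Coordinates are in image pixels, as in the notes above.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    # findContours returns (contours, hierarchy) in OpenCV >= 4.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < MIN_AREA_PX:
            continue
        x, y, w, h = cv2.boundingRect(c)
        blobs.append((x + w // 2, y + h // 2, w, h, area))
    return blobs

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # first attached camera
    ok, frame = cap.read()
    if ok:
        print(find_color_blobs(frame))
    cap.release()
```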