'''Modular Robotic Toolkit'''
== Introduction ==
This is the Modular Robotic Toolkit user manual, a comprehensive guide to the software architecture for autonomous robots developed by the AIRLab, the Artificial Intelligence and Robotics Laboratory of the Dept. of Electronics and Information at Politecnico di Milano.

Most of the content of these pages comes from a cleanup (under construction) of the [[Media:MRTmanual.pdf | MRT manual]] written by [[User:LuigiMalago | Luigi Malago']]. These WIKI pages are intended for continuous update and completion. Anybody wishing to cooperate is welcome!
+ | |||
+ | MRT, Modular Robotic Toolkit, is a framework where a set of off-the-shelf modules can be easily combined and customized to realize robotic applications with minimal effort and time. The framework has been designed to | ||
+ | be used in different applications where a distributed set of robot and sensors | ||
+ | interact to accomplish tasks, such as: playing soccer in RoboCup, guiding | ||
+ | people in indoor environments, and exploring unknown environments in a | ||
+ | space setting. | ||
+ | |||
+ | The aim of this manual is to present the software architecture and make | ||
+ | the user comfortable with the use and configuration of the different modules | ||
+ | that can be integrated in the framework, so that it will be easy to develop | ||
+ | robotic applications using MRT. For this reason, each chapter will include | ||
+ | some examples of code and configuration files. | ||
+ | |||
+ | == MRT Architecture == | ||
+ | The MRT architecture is described in the [[Media:STARbook.pdf | paper]] published on the Springer STAR Series. | ||
+ | |||
+ | The SW is available from the SVN site of the department of Electronics. To register there follow this link | ||
+ | [https://acme.ws.dei.polimi.it/request_account.plp] | ||
+ | |||
+ | Once registered, you can find on [https://svn.ws.dei.polimi.it The SVN site] the directions to download SW. | ||
+ | |||
+ | == MrBrian: Multilevel Ruling Brian Reacts by Inferential ActioNs == | ||
+ | |||
+ | The content of this is for the moment on the pdf manual mentioned above. Anyone wishing to improve it should take the content and put it here below, in the proper section. | ||
+ | |||
+ | ===The Behavior-based Paradigm === | ||
+ | === The Overall Architecture === | ||
+ | === Fuzzy predicates === | ||
+ | ==== CANDO and WANT Conditions==== | ||
+ | ==== Informed Hierarchical Composition ==== | ||
+ | ==== Output Generation ==== | ||
+ | |||
+ | === Modules === | ||
+ | ==== Fuzzyfier ==== | ||
+ | ==== Preacher ==== | ||
+ | ==== Predicate Actions ==== | ||
+ | ==== Candoer ==== | ||
+ | ==== Wanter ==== | ||
+ | ==== Behavior Engine ==== | ||
+ | ==== Rules Behavior ==== | ||
+ | ==== Composer ==== | ||
+ | ==== Defuzzyfier ==== | ||
+ | ==== Parser and Messenger ==== | ||
+ | |||
+ | === Configuration Files and Examples === | ||
+ | ==== Fuzzy Sets ==== | ||
+ | ==== Fuzzy Predicates ==== | ||
+ | ==== Predicate Actions ==== | ||
+ | ==== CANDO and WANT Conditions ==== | ||
+ | ==== Playing with activations TODO ==== | ||
+ | ==== Defuzzyfication ==== | ||
+ | ==== Behavior Rules ==== | ||
+ | ==== Behavior List ==== | ||
+ | ==== Behavior Composition ==== | ||
+ | ==== Parser and Messenger ==== | ||
+ | ==== Using Mr. BRIAN ==== | ||
+ | |||
== DCDT: The Middleware ==
DCDT, the Device Communities Development Toolkit, is the framework used to integrate all the modules in MRT. This middleware has been implemented to simplify the development of applications where different tasks run simultaneously and need to exchange messages. DCDT is a publish/subscribe framework able to exploit different physical communication means in a transparent and easy way.

The use of this toolkit helps the user in dealing with processes and inter-process communication, since it makes the interaction between processes transparent with respect to their allocation. This means that the user does not have to care whether the processes run on the same machine or in a distributed environment.

DCDT is a multi-threaded architecture consisting of a main active object, called Agora, hosting and managing various software modules, called Members. Members are basically concurrent programs/threads executed periodically or on the notification of an event. Each Member of an Agora can exchange messages with other Members of the same Agora or with other Agoras on different machines.

DCDT's ability to work over different physical communication channels, such as RS-232 serial connections, USB, Ethernet or IEEE 802.11b, is one of the main characteristics of this publish/subscribe middleware.
=== Agora and Members ===
An Agora is a process composed of several threads and of the data structures required to manage processes and messages. Each single thread, called a Member in the DCDT terminology, is responsible for the execution of an instance of an object derived from the DCDT Member class.

It is possible to realize distributed applications by running different Agoras on different devices/machines, each of them hosting many Members. It is also possible to have on the same machine more than one Agora, each hosting its own Members. In this way you can emulate the presence of different robots without the need of actually having them connected and running.

There are two different types of Members, User Members and System Members. The main difference between the two is that System Members are responsible for the management of the infrastructure of the data structures of the Agora, while User Members are implemented by the user according to their needs. Moreover, System Members are invisible to User Members.

The main System Members are:

* Finder: it is responsible for dynamically searching for other Agoras on local and remote machines with short messages via multicast;

* MsgManager: this Member manages all the messages that are exchanged among several Members. Since each single Member handles its own message queue locally, the MsgManager takes care of moving messages from the main queue to the correct one;

* InnerLinkManager: its role is to arrange inter-Agora communication when the Agoras are executed on the same machine;

* Link: these Members handle the communication channels between two Agoras, so that messages can be exchanged. Each Link is responsible for the communication flow in one direction, so there are separate Members for message receiving and sending.

Members of the Agora can exchange messages through the PostOffice, using a typical publish/subscribe approach. Each Member can subscribe to the PostOffice of its Agora for a specific type of messages. Whenever a Member wants to publish a message, it has to notify the PostOffice, without taking into account the final destinations of the deliveries.

=== Messages ===
DCDT Messages are characterized by header and payload fields. The header contains the unique identifier of the message type, the size of the data contained in the payload and some information regarding the producer.

Members use unique identification types to subscribe to and unsubscribe from the messages available throughout the community. Messages can be shared according to three basic modalities: without any guarantee (e.g. UDP), with some retransmissions (e.g. UDP retransmitted), or with absolute receipt guarantee (e.g. TCP).

Sometimes the payload of a Message can be empty. For example, you can use Messages to notify events; in this case the only information is the specific event that matches a particular Message type. This implies that all the Agoras must share the same list of Message types.

In MRT, the interfaces among different modules are realized through messages. According to the publish/subscribe paradigm, each module may produce messages that are received by those modules that have expressed interest in them. In order to grant independence among modules, each module knows neither which modules will receive its messages nor the senders of the messages it has requested. In fact, typically, it does not matter which module has produced a piece of information, since modules are interested in the information itself.

For instance, the localization module may benefit from knowing that in front of the robot there is a wall at a distance of 2.4 m, but it is not relevant which sensor has perceived this information or whether it is coming from another robot. For this reason, our modules communicate through XML (eXtensible Markup Language) messages whose structure is defined by a shared DTD (Document Type Definition). Each message contains some general information, such as the time-stamp, and a list of objects characterized by a name and their membership class. For each object a number of attributes may be defined; these are tuples of name, value, variability, and reliability. In order to correctly parse the content of the messages, modules share a common ontology that defines the semantics of the symbols used.

The advantages of using XML messages, instead of messages in a binary format, are: they are readable by humans and can be edited with any text editor, they are well structured thanks to the DTDs, they are easy to change and extend, and standard modules exist for the syntactic parsing. These advantages are paid for with an increase in the amount of transferred data; moreover, parsing may need more time, and binary data cannot be included directly.

The middleware encapsulates all the functionalities to handle Members and Messages in the Agora, so that the user has to take care neither of executing each single thread nor of handling message delivery. This makes the use of this middleware very easy.

=== Configuration Files and Examples ===
The Agora is the main object instantiated in every application that makes use of DCDT. Each Agora is composed of threads, called Members, that execute specific tasks and communicate through messages.

The Agora can be instantiated using two different options, called STANDALONE and NETWORK, according to the physical distribution of the Members. If you want Members to exchange messages from different machines, you need to use the NETWORK option; otherwise you can use STANDALONE. Both allow communication among Members of the same Agora and between Agoras on the same machine.

==== Agora ====
Different Agoras can communicate on the same machine through local sockets. This simple code fragment shows how you can instantiate an Agora using the STANDALONE option.

 int main (int argc, char *argv[]) {
     DCDT_Agora *agora;
     agora = new DCDT_Agora();
     ....
 }

Otherwise you can create an Agora using the NETWORK option. In this case you need to write a configuration file with all the network parameters required to allow communication between the different machines where the Agoras are executed. The following code lines show the use of the overloaded constructor that takes the configuration file as a parameter.

 int main (int argc, char *argv[]) {
     DCDT_Agora *agora;
     agora = new DCDT_Agora("dcdt.cfg");
     ....
 }

The configuration file for the Agora includes the following information:

* network parameters of the machine: these parameters define the TCP/IP address and the port of the Agora, plus the multicast address of the local network;
* policy of the PostOffice: this parameter defines the behavior of the PostOffice; so far, three different policies have been implemented:
** SLWB: Single Lock With Buffer;
** SLWDWCV: Single Lock With Dispatcher With Conditional Variable;
** SLWSM: Single Lock With Single Message;
[The policies above should be explained better, but we still have to find where they were documented.]
* list of the available communication channels: this is a list of static links to other Agoras, using different physical communication channels, such as:
** RS-232 serial connections;
** TCP/IP network addresses.
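The exact syntax of the configuration file is not reproduced here; a dcdt.cfg covering the three groups of information above might look like this sketch (all keys and values are invented for illustration):

```
# network parameters of this machine
address   192.168.1.10
port      4950
multicast 224.0.0.42

# PostOffice policy: SLWB, SLWDWCV or SLWSM
policy    SLWB

# static links to other Agoras over different physical channels
link      tcp    192.168.2.20:4950
link      serial /dev/ttyS0 115200
```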
You can use the DCDT framework also in dynamic environments where the addresses of the machines are not known a priori; in this case the Finder Member of the Agora can use the multicast address to search for other Agoras on the local network. When the Agoras belong to different networks, the multicast address and the Finder are useless; in this case the communication involves the Members directly, as will be described below.

[BEGINNING of part to be revised (continues until END of part to be revised): this part should be reviewed together with Matteo, because some things do not work. E.g., it is not clear what is meant by Members interacting "directly"; the LinkRx and LinkTx threads should be explained better; the difference between link and bridge should be explained better; when it is said that all the messages are transmitted, it is not said where; and the question of the level should be clarified.]
In Figure 3.2 you can see an example of three different types of communication among Agoras. Agora1, Agora2 and Agora3 are executed on the same computer, so they can exchange messages through the InnerLinkManager, which handles the communication when the Agoras reside on the same machine; Agora1 and Agora4 belong to the same network, and the Finder is responsible for dynamically searching for other Agoras on different machines; Agora4 and Agora5 belong to different networks, so the communication involves the Members directly.

[What are the LinkRx and LinkTx modules? They have never been mentioned before...]

For each communication channel, you need to determine whether the local node acts as a link or as a bridge. In the first case only the messages generated locally will be sent to other Agoras; in the second case the node also forwards the messages it has received from other Agoras.

== MRT Vision Software ==

The Vision Software is a module whose objective is the analysis of the images acquired from digital cameras in order to obtain symbolic information for the Milan Robocup Team project's robots.

The main task of the software can be described as a succession of four phases: image acquisition, image classification, blob growing and feature extraction, and information communication.

=== Image Acquisition ===

During the image acquisition phase the software acquires, through a Firewire interface, the images of two cameras, a frontal camera and an omni-directional one. The software can operate at two resolutions (640x480 and 320x240) and in three color spaces (RGB24, YUV411 and YUV422).

Image acquisition is defined by the virtual VideoAcquisition class, which describes the methods necessary for frame capturing, communication with the camera, and camera parameter setting. The VideoAcquisition class is implemented by two classes: FileAcquisition and FirewireAcquisition. The former emulates the behavior of a camera by simply reading an image from a file and copying its content to central memory, without implementing the methods needed to interact with a real camera. The latter implements a real Firewire camera interface, with methods used to initialize/close the camera, capture frames from one of the cameras with attention to buffering problems, and set the device parameters.

=== Blob Growing ===

[TODO]
Blob growing is the process that identifies groups of pixels/receptors with the same color in the image in order to determine the presence of objects (e.g. the ball or other robots), while feature extraction aims to obtain information about particular transitions of colors that represent features of the field (e.g. green-white-green for the lines).

=== Color Classifier ===

During image classification the colors of the image are approximated to some sample colors chosen offline. This way the software can perform the tasks described below.

The Color Classifier is the part of the software responsible for the image classification process and for the creation of the necessary data structures.

The classification works in an immediate way. Given three color values (an RGB or YUV triplet), it searches for the corresponding cell in an appropriate lookup matrix and returns a label which represents the sample color to which the real color is most similar. This way, after a whole image has been classified, it can be properly processed without having to worry about hundreds of possible colors but only about some useful ones. Therefore we can simplify the image processing by removing color shades we know to be of no relevance for our application.

The creation process of the lookup matrix is the most important part of the color classifier, since the whole image analysis process depends on the quality of the color classification. The matrix is built starting from k sample colors, whose corresponding cells are labelled accordingly; then, following one of two possible clustering algorithms (KNN and DBC), the other cells are labelled. The DBC version has not been debugged enough, so KNN is the only supported one at the present stage.

KNN (K-Nearest Neighbours) is based on the presence of a number (K) of other labels within a fixed distance from the target cell, which is therefore labelled according to the most frequent label found.

DBC (Density Based Clustering) is based on the local presence and density of other already labelled objects near the target cell.

The matrix approximates all the possible colors of a color space by use of a color compression factor. This approximation mainly serves to significantly reduce the memory required for the classification.
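The KNN labelling step can be sketched as follows. This is a minimal, self-contained version written for illustration; the real KnnColorClassifier applies this kind of vote to the cells of the compressed lookup matrix rather than to raw triplets:

```cpp
#include <algorithm>
#include <array>
#include <utility>
#include <vector>

struct Sample { int r, g, b; int label; };  // a labelled sample color

// Label a color by majority vote among its K nearest samples
// (squared Euclidean distance in RGB space).
int knn_label(const std::vector<Sample>& samples,
              int r, int g, int b, unsigned k) {
    std::vector<std::pair<long, int>> dist;  // (distance, label)
    for (const auto& s : samples) {
        long dr = s.r - r, dg = s.g - g, db = s.b - b;
        dist.emplace_back(dr * dr + dg * dg + db * db, s.label);
    }
    std::partial_sort(dist.begin(), dist.begin() + k, dist.end());
    std::array<int, 256> votes{};  // histogram over the (small) label set
    for (unsigned i = 0; i < k; ++i) ++votes[dist[i].second];
    return std::max_element(votes.begin(), votes.end()) - votes.begin();
}
```

Since the vote runs once per matrix cell when the lookup matrix is built, classification at run time stays a single matrix access.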
+ | |||
+ | Since the module works both in RGB and YUV color spaces the color classifier is able to build an YUV matrix directly from the corresponding RGB one, and can load already existing matrix from file. The building of the classifier from a YUV file is not implemented yet, qt the present stage the classifier is saved in RGB format and if you want to use it as YUV classifier you have to run the "build YUV" method. | ||
+ | |||
+ | The Color Classifier is provided by the ColorClassifier interface, whose main methods are the following: | ||
+ | |||
+ | * Create_Matrix and Delete_Matrix: to manage the physical implementation of the lookup matrix; | ||
+ | * build: to build an RGB lookup matrix from some sample colors; | ||
+ | * buildYUV: to create a YUV lookup matrix from an already existent RGB one; | ||
+ | * save_matrix and load_matrix: to save/load an existing matrix to/from a file; | ||
+ | * get_color: to get the label corresponding to an RGB/YUV triplet. | ||
+ | |||
+ | The ColorClassifier interface is implemented by DbscanColorClassifier and KnnColorClassifier, that differ only in the clustering algorithm they implement. | ||
+ | |||
=== Receptor-based Vision ===

This component is responsible for the analysis of the images acquired by the omni-directional camera. Since this camera system uses a circular mirror mounted on top of the robot in order to see its surroundings in every direction, the images are characterized by two distinct concentric circular areas, of which the inner one represents the terrain/objects farthest away from the robot and the outer one the terrain/objects nearest to it. Since the inner area is significantly smaller than the outer one, its resolution is lower, and its main purpose is to recognize the presence of particularly colored objects, like the ball or the goal, while the outer one serves to localize with more precision the presence of objects and features of the field.

Given these characteristics of the image, receptor-based vision uses two circular maps of receptors, i.e. points positioned on the image that each represent one or more pixels of the original image, filtering the color value in order to attenuate noise effects and then classifying it. The number of receptors to be used and their position and density on the image are decided offline by means of the Recvision Setup tool. Since the number of receptors is far smaller than the number of pixels, the image is approximated in both color and effective resolution.

A receptor map consists of a number of crowns, i.e. concentric circumferences characterized by their distance from the center of the image and by the number of receptors they contain; therefore a receptor map is entirely determined by the list of its crowns and by the position of its center.

Given the crown list, which is determined offline using the Recvision Setup tool, the receptor map is built starting from the innermost crown, positioning each receptor at the same distance from the center and at increasing angles, whose step depends on the number of receptors per crown. The process is repeated for each crown until completion of the map.
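The construction just described can be sketched as follows. This is a simplified stand-alone version; the type and field names are invented, and the real map is built by the Recvision code:

```cpp
#include <cmath>
#include <vector>

struct Crown    { double radius; int n_receptors; };
struct Receptor { double x, y; };

// Build a receptor map from a crown list: on each crown the receptors
// sit at the crown's distance from the centre, evenly spaced in angle.
std::vector<Receptor> build_map(const std::vector<Crown>& crowns,
                                double cx, double cy) {
    const double kPi = 3.14159265358979323846;
    std::vector<Receptor> map;
    for (const auto& c : crowns) {
        double step = 2.0 * kPi / c.n_receptors;  // angular increment
        for (int i = 0; i < c.n_receptors; ++i)
            map.push_back({cx + c.radius * std::cos(i * step),
                           cy + c.radius * std::sin(i * step)});
    }
    return map;
}
```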
+ | |||
+ | Every receptor obtains its color value based on its type, that determines the filter it has to apply to its corresponding pixel on the image, and gets classified by means of the color classifier depending on the color space in use. | ||
+ | |||
+ | After the whole map has been classified the original image is of no interest anymore since the analysis process operates directly on the receptors that contain all the needed information. The software creates blobs of contiguous receptors with the same color label and determines the presence of transitions of colors. | ||
+ | |||
+ | At completion of the analysis process the software creates a message containing information about the blobs and transitions found that will be sent to the robot kernel. | ||
+ | |||
=== Pixmap Vision ===

This portion of the software has a task very similar to Receptor-based Vision's, but, instead of using receptors to represent the image, PixMap Vision works directly at the pixel level and aims to obtain slightly different information. In fact its primary task is the localization of the ball, in order to determine its relative position in 3D space and its distance from the robot.

The whole image classification and analysis process is very similar to the one described for Receptor-based Vision. The most important difference is that this kind of analysis heavily depends on camera calibration data and on information concerning the ball, since it is impossible to determine from a single image the distance of an object whose dimensions are unknown. Therefore the quality of this kind of analysis heavily depends on the quality of the calibration data provided.
+ | |||
+ | At the end of the process, as for Receptor-base Vision, a message containing all gathered information is sent to the robot kernel. | ||
+ | |||
=== Useful tools ===

==== Areas ====

This software is used as a testing utility whose task is to display, given an analyzed image, its color classification and the information gathered by the blob growing and feature extraction processes. The image is acquired through the FileAcquisition class, and the color classification is done by means of an already created color classifier.

The usage is:

 ./areas yuv|rgb 640x480|320x240 <image file> <area file> <color classifier> [<output file>]

That is, it accepts images in both YUV422 and RGB24 color spaces and supports 640x480 and 320x240 resolutions. <image file> should be a valid YUV422 raw image or RGB24 ppm file. <area file> is a text file containing all the information the program should represent on the image, such as blobs' bounding boxes and lines, saved in the following format:

To define a bounding box with opposite vertices (x1,y1) and (x2,y2) to be drawn in color c:
 DEF_AREA c x1 y1 x2 y2 r1 r2 t1 t2 "dummy"

To define a segment with vertices (x1,y1) and (x2,y2) to be drawn in color c:
 DEF_AREA c x1 y1 x2 y2 r1 t1

<color classifier> should be a valid RGB lookup matrix file.

The software loads the provided image and saves two different images, both of them with the visual representation of the information provided in <area file>:

* classified.ppm, which is a classified version of the image;
* output.ppm (or <output file> if provided), which is the non-classified version of the image.

==== Recvision Setup ====
==== Camera Calib ====

== MAP Anchors Percepts ==
[[Media:BonariniMatteucciRestelli.pdf | Paper]] about the concepts supported by MAP.

== MUREA: Multi-Resolution Evidence Accumulation ==
MUREA, MUlti-Resolution Evidence Accumulation, is a module that implements a localization algorithm.

This module has several configurable parameters that allow its reuse in different contexts, e.g., the map of the environment, the required accuracy, a timeout.

MUREA completely abstracts from the sensors used for acquiring localization information, since its interface relies on the concept of perception, which is shared with the processing sub-module of each sensor.

== SCARE Coordinates Agents in Robotic Environments ==
== SPIKE Plans In Known Environments ==
== Tips, Tricks and HowTo ==
In this section we report some ways of obtaining desired effects in MRT.

==== Percept persistence ====
Disappearing percepts are maintained in the map for some time (a settable parameter) by lowering their reliability. For instance, the presence of the ball in the RoboCup application is maintained also if the ball is no longer perceived. After a given time, the percept is still maintained for another given time in the model, but no longer provided to the other modules. This is used for modeling purposes.
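The mechanism can be sketched like this. The threshold names and the linear decay are illustrative assumptions; in MRT the actual durations are settable parameters:

```cpp
struct Percept {
    double last_seen;  // time of the last actual perception (s)
};

// illustrative thresholds (settable parameters in MRT)
const double kPublishFor = 2.0;  // still provided to other modules
const double kKeepFor    = 5.0;  // still kept in the model afterwards

// Reliability decays with the time elapsed since the last perception
// (a linear decay is assumed here for illustration).
double reliability(const Percept& p, double now) {
    double age = now - p.last_seen;
    if (age >= kKeepFor) return 0.0;
    return 1.0 - age / kKeepFor;
}

bool provided_to_modules(const Percept& p, double now) {
    return now - p.last_seen <= kPublishFor;
}

bool kept_in_model(const Percept& p, double now) {
    return now - p.last_seen <= kKeepFor;
}
```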
+ | |||
+ | ==== Navigation ==== | ||
+ | A module developed by [user:SimoneCeriani|Simone Ceriani] gives the possibility to know the position of the robot in a map. This is possible thanks to a module originally developed for [[Lurch]], which analyzes images to find given markers, and computes the position of the robot w.r.t. the markers. A Kalman filter module integrating odometry is also available. All is implemented as a MRT expert and can provide to MAP the robot position. | ||
+ | |||
+ | ==== Fixed duration tasks ==== | ||
+ | It is possible to define tasks active for a given time through SCARE, which gives the possibility to dfine roles active for some time |
Latest revision as of 13:59, 30 June 2010
Modular Robotic Toolkit
Contents
- 1 Introduction
- 2 MRT Architecture
- 3 MrBrian: Multilevel Ruling Brian Reacts by Inferential ActioNs
- 4 DCDT: The Middleware
- 5 MRT Vision Software
- 6 MAP Anchors Percepts
- 7 MUREA: Multi-Resolution Evidence Accumulation
- 8 SCARE Coordinates Agents in Robotic Environments
- 9 SPIKE Plans In Known Environments
- 10 Tips, Tricks and HowTo
Introduction
This is the Modular Robotic Toolkit user manual, a comprehensive guide to the software architecture for autonomous robots developed by the AIRLab, Artificial Intelligence and Robotic Laboratory of the Dept. of Electronics and Information at Politecnico di Milano.
Most of the content of these pages comes from a cleanup (under construction) of the MRT manual written by Luigi Malago'. These WIKI pages are intended for continuous update and completion. Anybody wishing to cooperate is welcome!
MRT, Modular Robotic Toolkit, is a framework where a set of off-the-shelf modules can be easily combined and customized to realize robotic applications with minimal effort and time. The framework has been designed to be used in different applications where a distributed set of robot and sensors interact to accomplish tasks, such as: playing soccer in RoboCup, guiding people in indoor environments, and exploring unknown environments in a space setting.
The aim of this manual is to present the software architecture and make the user comfortable with the use and configuration of the different modules that can be integrated in the framework, so that it will be easy to develop robotic applications using MRT. For this reason, each chapter will include some examples of code and configuration files.
MRT Architecture
The MRT architecture is described in the paper published on the Springer STAR Series.
The SW is available from the SVN site of the department of Electronics. To register there follow this link [1]
Once registered, you can find on The SVN site the directions to download SW.
MrBrian: Multilevel Ruling Brian Reacts by Inferential ActioNs
The content of this is for the moment on the pdf manual mentioned above. Anyone wishing to improve it should take the content and put it here below, in the proper section.
The Behavior-based Paradigm
The Overall Architecture
Fuzzy predicates
CANDO and WANT Conditions
Informed Hierarchical Composition
Output Generation
Modules
Fuzzyfier
Preacher
Predicate Actions
Candoer
Wanter
Behavior Engine
Rules Behavior
Composer
Defuzzyfier
Parser and Messenger
Configuration Files and Examples
Fuzzy Sets
Fuzzy Predicates
Predicate Actions
CANDO and WANT Conditions
Playing with activations TODO
Defuzzyfication
Behavior Rules
Behavior List
Behavior Composition
Parser and Messenger
Using Mr. BRIAN
DCDT: The Middleware
DCDT, Device Communities Development Toolkit, is the framework used to integrate all the modules in MRT. This middleware has been implemented to simplify the development of applications where different tasks run si- multaneously and need to exchange messages. DCDT is a publish/sub- scribe framework able to exploit different physical communication means in a transparent and easy way.
The use of this toolkit helps the user dealing with processes and inter process communication, since it makes the interaction between the processes transparent with respect to their allocation. This means that the user does not have to take care whether the processes run on the same machine or on a distributed environment.
DCDT is a multi-threaded architecture consisting in a main active ob- ject, called Agora, hosting and managing various software modules, called Members. Members are basically concurrent programs/threads executed pe- riodically or on the notification of an event. Each Member of the Agora can exchange messages with other Members of the same Agora or with other Agoras on different machines.
DCDT's support for different physical communication channels, such as RS-232 serial connections, USB, Ethernet or IEEE 802.11b, is one of the main characteristics of this publish/subscribe middleware.
=== Agora and Members ===
An Agora is a process composed of multiple threads and of the data structures required to manage processes and messages. Each thread, called Member in the DCDT terminology, is responsible for the execution of an instance of an object derived from the DCDT Member class. It is possible to realize distributed applications by running different Agoras on different devices/machines, each of them hosting many Members. It is also possible to have more than one Agora on the same machine, each hosting its own Members; in this way you can emulate the presence of different robots without actually having them connected and running.

There are two different types of Members: User Members and System Members. The main difference between the two is that System Members are responsible for managing the infrastructure and the data structures of the Agora, while User Members are implemented by the user according to their needs. Moreover, System Members are invisible to User Members.
The main System Members are:
* Finder: responsible for dynamically searching for other Agoras on local and remote machines through short multicast messages;
* MsgManager: manages all the messages exchanged among Members. Since each single Member handles its own message queue locally, the MsgManager takes care of moving messages from the main queue to the correct one;
* InnerLinkManager: arranges communication between Agoras when they are executed on the same machine;
* Link: these Members handle the communication channels between two Agoras, so that messages can be exchanged. Each Link is responsible for the communication flow in one direction, so there are separate Members for receiving and sending messages.
Members of the Agora can exchange messages through the PostOffice, using a typical publish/subscribe approach. Each Member can subscribe with the PostOffice of its Agora for a specific type of messages. Whenever a Member wants to publish a message, it has to notify the PostOffice, without taking into account the final destinations of the deliveries.
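The publish/subscribe interaction can be sketched as follows, independently of the actual DCDT classes. This is a minimal illustration of the pattern only: `PostOffice`, `subscribe` and `publish` below are illustrative names, not the real DCDT API.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Minimal publish/subscribe sketch: Members subscribe to a message type
// and receive every message later published under that type. Names are
// illustrative, not the real DCDT signatures.
class PostOffice {
public:
    using Handler = std::function<void(const std::string& payload)>;

    // A Member registers interest in one message type.
    void subscribe(int msg_type, Handler h) {
        subscribers_[msg_type].push_back(std::move(h));
    }

    // A producer publishes without knowing who (if anyone) will receive.
    void publish(int msg_type, const std::string& payload) {
        for (auto& h : subscribers_[msg_type]) h(payload);
    }

private:
    std::map<int, std::vector<Handler>> subscribers_;
};
```

In this scheme a vision Member could publish messages of one type and a localization Member subscribed to that type would receive them, while neither knows about the other.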
=== Messages ===
DCDT Messages are composed of header and payload fields. The header contains the unique identifier of the message type, the size of the data contained in the payload, and some information regarding the producer. Members use these unique type identifiers to subscribe to and unsubscribe from the messages available throughout the community. Messages can be shared according to three modalities: without any guarantee (e.g., UDP), with some retransmissions (e.g., UDP with retransmission), and with absolute receipt guarantee (e.g., TCP).
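The header/payload structure just described could look like the following sketch. The field and type names are assumptions for illustration; the real DCDT message layout is not reproduced here.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative layout of a DCDT-style message: a header carrying the
// unique type identifier, the payload size and some producer information,
// followed by an opaque payload. Field names are assumptions.
struct MsgHeader {
    uint32_t type;          // unique identifier of the message type
    uint32_t payload_size;  // size in bytes of the payload (may be 0)
    uint32_t producer_id;   // information about the producing Member
};

struct Msg {
    MsgHeader header;
    std::vector<uint8_t> payload;  // empty for pure event notifications

    static Msg make(uint32_t type, uint32_t producer,
                    const void* data, uint32_t size) {
        Msg m;
        m.header = {type, size, producer};
        m.payload.resize(size);
        if (size) std::memcpy(m.payload.data(), data, size);
        return m;
    }
};
```

An event notification is then just a message of a given type with a zero-size payload.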
Sometimes the payload of a Message can be empty: for example, you can use Messages to notify events, in which case the only information is the specific event that corresponds to a particular Message type. This implies that all the Agoras must share the same list of Message types.

In MRT, the interfaces among the different modules are realized through messages. According to the publish/subscribe paradigm, each module may produce messages that are received by the modules that have expressed interest in them. In order to grant independence among modules, each module knows neither which modules will receive its messages nor which modules sent the messages it has requested. In fact, it typically does not matter which module has produced a piece of information, since modules are interested in the information itself.
For instance, the localization module may benefit from knowing that there is a wall in front of the robot at a distance of 2.4 m, but it is not relevant which sensor has perceived this information or whether it is coming from another robot. For this reason, our modules communicate through XML (eXtensible Markup Language) messages whose structure is defined by a shared DTD (Document Type Definition). Each message contains some general information, such as the time-stamp, and a list of objects characterized by a name and a membership class. Each object may have a number of attributes, which are tuples of name, value, variability, and reliability. In order to correctly parse the content of the messages, modules share a common ontology that defines the semantics of the symbols used.
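A message of the kind just described might look like the following sketch. The element and attribute names here are purely illustrative assumptions; the actual shared DTD is not reproduced in this manual.

```xml
<!-- Hypothetical sketch of an MRT message; tag names are illustrative,
     not the actual shared DTD. -->
<message timestamp="1234567890">
  <object name="wall_1" class="wall">
    <attribute name="distance" value="2.4"
               variability="low" reliability="0.9"/>
  </object>
</message>
```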
The advantages of using XML messages, instead of messages in a binary format, are: they are readable by humans and can be edited with any text editor; they are well structured thanks to the DTDs; they are easy to change and extend; and standard modules exist for their syntactic parsing. These advantages are paid for by an increase in the amount of transferred data, by a possibly longer parsing time, and by the fact that binary data cannot be included directly.

The middleware encapsulates all the functionality needed to handle Members and Messages in the Agora, so that the user has to take care neither of executing each single thread nor of handling message delivery. This makes the middleware very easy to use.
=== Configuration Files and Examples ===
The Agora is the main object instantiated in every application that makes use of DCDT. Each Agora is composed of threads, called Members, that execute specific tasks and communicate through messages. The Agora can be instantiated using two different options, STANDALONE and NETWORK, according to the physical distribution of the Members. If you want Members to exchange messages across different machines, you need the NETWORK option; otherwise you can use STANDALONE. Both allow communication among Members of the same Agora and between Agoras on the same machine.
==== Agora ====
Different Agoras on the same machine can communicate through local sockets. This simple code fragment shows how to instantiate an Agora using the STANDALONE option.
<pre>
int main(int argc, char *argv[]) {
    DCDT_Agora *agora;
    agora = new DCDT_Agora();
    ...
}
</pre>
Otherwise you can create an Agora using the NETWORK option. In this case you need to write a configuration file with all the network parameters required to allow communication between the different machines where the Agoras are executed. The following code fragment shows the use of the overloaded constructor that takes the configuration file as a parameter.
<pre>
int main(int argc, char *argv[]) {
    DCDT_Agora *agora;
    agora = new DCDT_Agora("dcdt.cfg");
    ...
}
</pre>
The configuration file for the Agora includes the following information:
* network parameters of the machine: the TCP/IP address and the port of the Agora, plus the multicast address of the local network;
* policy of the PostOffice: this parameter defines the behavior of the PostOffice; so far three different policies have been implemented:
** SLWB: Single Lock With Buffer;
** SLWDWCV: Single Lock With Dispatcher With Conditional Variable;
** SLWSM: Single Lock With Single Message;
[These entries should be explained in more detail, but we still have to find where they were documented.]
* list of the available communication channels: a list of static links to other Agoras over different physical communication channels, such as:
** RS-232 serial connections;
** TCP/IP network addresses.

You can use the DCDT framework also in dynamic environments where the addresses of the machines are not known a priori; in this case the Finder Member of the Agora can use the multicast address to search for other Agoras on the local network. When the Agoras belong to different networks, the multicast address and the Finder are useless; in this case the communication involves the Members directly, as described below.

[BEGIN part to be revised (continues until END part to be revised). This part should be reviewed with Matteo, because some things do not work: e.g., it is not clear what is meant by Members that interact "directly"; the LinkRx and LinkTx threads should be explained better; the difference between link and bridge should be explained better; when it is said that all messages are transmitted, it is not said where; and the question of the level should be clarified.]

In Figure 3.2 you can see an example of three different types of communication among Agoras. Agora1, Agora2 and Agora3 are executed on the same computer, so they can exchange messages through the InnerLinkManager, which handles communication when the Agoras reside on the same machine. Agora1 and Agora4 belong to the same network, and the Finder is responsible for dynamically searching for other Agoras on different machines. Agora4 and Agora5 belong to different networks, so the communication involves the Members directly.

[What are the LinkRx and LinkTx modules? They have never been mentioned before...]

For each communication channel, you need to determine whether the local node acts as a link or as a bridge. In the first case only the messages generated locally will be sent to other Agoras; in the second case the node also forwards the messages it receives from other Agoras.
== MRT Vision Software ==
The Vision Software is a module whose objective is the analysis of the images acquired from digital cameras in order to obtain symbolic information for the Milan Robocup Team project's robots.
The main task of the software can be described as the succession of four phases: image acquisition, image classification, blob growing and feature extraction, and information communication.
=== Image Acquisition ===
During the image acquisition phase the software acquires, through a FireWire interface, the images of two cameras: a frontal camera and an omni-directional one. The software can operate at two resolutions (640x480 and 320x240) and in three color spaces (RGB24, YUV411 and YUV422).
Image acquisition is defined by the virtual VideoAcquisition class, which describes the methods necessary for frame capturing, communication with the camera and camera parameter setting. The VideoAcquisition class is implemented by two classes: FileAcquisition and FirewireAcquisition. The former emulates the behavior of a camera by simply reading an image from a file and copying its content to central memory, without implementing the methods needed to interact with a real camera. The latter implements a real FireWire camera interface, with methods to initialize/close the camera, capture frames from one of the cameras (with attention to buffering problems) and set the device parameters.
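The abstract-interface arrangement described above can be sketched as follows. The method names are assumptions for illustration, not the real VideoAcquisition signatures, and the file-backed implementation is reduced to returning a preloaded frame.

```cpp
#include <cstdint>
#include <vector>

// Sketch of the acquisition interface: an abstract class whose
// implementations either drive a real camera or emulate one from a file.
// Method names are illustrative assumptions.
class VideoAcquisition {
public:
    virtual ~VideoAcquisition() = default;
    virtual bool init() = 0;                          // open the device
    virtual std::vector<uint8_t> captureFrame() = 0;  // grab one frame
    virtual void close() = 0;                         // release the device
};

// Emulates a camera by returning a frame loaded beforehand (e.g. from a
// file), with no real device interaction.
class FileAcquisition : public VideoAcquisition {
public:
    explicit FileAcquisition(std::vector<uint8_t> frame)
        : frame_(std::move(frame)) {}
    bool init() override { return true; }  // nothing to open
    std::vector<uint8_t> captureFrame() override { return frame_; }
    void close() override {}
private:
    std::vector<uint8_t> frame_;
};
```

Code that processes frames only sees the VideoAcquisition interface, so a file-backed source and a real camera are interchangeable.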
=== Blob Growing ===
[TODO] Blob growing is the process that identifies groups of pixels/receptors with the same color in the image, in order to determine the presence of objects (e.g. the ball or other robots), while feature extraction aims to obtain information about particular color transitions that represent features of the field (e.g. green-white-green for the lines).
=== Color Classifier ===
During image classification the colors of the image are approximated to a set of sample colors chosen offline, so that the subsequent processing phases can operate on a small set of meaningful colors.
The Color Classifier is the part of the software responsible for the image classification process and for the creation of the necessary data structures.
The classification works in an immediate way. Given three color values (an RGB or YUV triplet), it looks up the corresponding cell in a lookup matrix and returns a label representing the sample color to which the real color is most similar. This way, after a whole image has been classified, it can be processed without worrying about hundreds of possible colors but only about some useful ones. We can therefore simplify the image processing by removing color shades that we know are of no relevance for our application.
The creation of the lookup matrix is the most important part of the color classifier, since the whole image analysis process depends on the quality of the color classification. The matrix is built starting from k sample colors, whose corresponding cells are labelled accordingly; the other cells are then labelled following one of two possible clustering algorithms (KNN and DBC). The DBC version has not been debugged enough, so KNN is the only supported one at the present stage.
KNN (K-Nearest Neighbours) is based on the presence of a number (K) of other labels within a fixed distance from the target cell, which is then labelled according to the most frequent label found.
DBC (Density Based Clustering) is based on the local presence and density of other already labelled objects near the target cell. The matrix approximates all the possible colors of a color space by means of a color compression factor; this approximation mainly serves to significantly reduce the memory required for the classification.
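The effect of the compression factor can be sketched as follows: dropping the low bits of each channel collapses, for example, the 256x256x256 RGB cube into a 32x32x32 matrix, so nearby shades share one cell and one label. This is an illustration of the idea only; the real classifier's parameters and data layout may differ.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of a compressed color lookup matrix: a compression factor
// (here a bit shift) quantizes each channel, shrinking the matrix from
// 16 MB (shift 0) to 32 KB (shift 3) of labels.
class LookupMatrix {
public:
    explicit LookupMatrix(int shift)
        : shift_(shift), side_(256 >> shift),
          cells_(side_ * side_ * side_, 0) {}  // 0 = "unknown" label

    // Label the cell containing this sample color.
    void set_color(uint8_t r, uint8_t g, uint8_t b, uint8_t label) {
        cells_[index(r, g, b)] = label;
    }

    // Classification is a single lookup: nearby shades share a cell.
    uint8_t get_color(uint8_t r, uint8_t g, uint8_t b) const {
        return cells_[index(r, g, b)];
    }

private:
    std::size_t index(uint8_t r, uint8_t g, uint8_t b) const {
        return ((std::size_t)(r >> shift_) * side_ + (g >> shift_)) * side_
               + (b >> shift_);
    }
    int shift_;
    std::size_t side_;
    std::vector<uint8_t> cells_;
};
```

In the real classifier the cells not covered by sample colors would then be filled in by the KNN (or DBC) clustering step.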
Since the module works in both RGB and YUV color spaces, the color classifier is able to build a YUV matrix directly from the corresponding RGB one, and can load an already existing matrix from file. Building the classifier from a YUV file is not implemented yet: at the present stage the classifier is saved in RGB format, and if you want to use it as a YUV classifier you have to run the buildYUV method.
The Color Classifier is provided by the ColorClassifier interface, whose main methods are the following:
* Create_Matrix and Delete_Matrix: manage the physical implementation of the lookup matrix;
* build: builds an RGB lookup matrix from a set of sample colors;
* buildYUV: creates a YUV lookup matrix from an existing RGB one;
* save_matrix and load_matrix: save/load an existing matrix to/from a file;
* get_color: returns the label corresponding to an RGB/YUV triplet.
The ColorClassifier interface is implemented by DbscanColorClassifier and KnnColorClassifier, that differ only in the clustering algorithm they implement.
=== Receptor-based Vision ===
This module is responsible for the analysis of the images acquired by the omni-directional camera. Since this camera system uses a circular mirror mounted on top of the robot in order to see its surroundings in every direction, the images are characterized by two distinct concentric circular areas, of which the inner one represents the terrain/objects farthest from the robot and the outer one the terrain/objects nearest to it. Since the inner area is significantly smaller than the outer one, its resolution is lower and its main purpose is to recognize the presence of particularly colored objects, like the ball or the goal, while the outer one serves to localize more precisely the objects and features of the field.
Given these characteristics of the image, the receptor-based vision uses two circular maps of receptors, i.e. points positioned on the image that each represent one or more pixels of the original image, filtering their color values in order to attenuate noise effects and then classifying them. The number of receptors to be used and their position and density on the image are decided offline by means of the Recvision Setup tool. Since the number of receptors is far smaller than the number of pixels, the image is approximated in both color and effective resolution.
A receptor map consists of a number of crowns, i.e. concentric circumferences, each characterized by its distance from the center of the image and by the number of receptors it contains; a receptor map is therefore entirely determined by the list of its crowns and by the position of its center.
Given the crown list, which is determined offline using the Recvision Setup tool, the receptor map is built starting from the innermost crown, positioning each receptor at the same distance from the center and at increasing angles, whose step depends on the number of receptors per crown. The process is repeated for each crown until the map is complete.
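The construction just described can be sketched as follows. The structure names are illustrative assumptions; the sketch only shows the geometry: each crown contributes its receptors at a fixed radius and equally spaced angles.

```cpp
#include <cmath>
#include <vector>

// Sketch of receptor-map construction from a crown list: each crown is a
// radius plus a receptor count, and receptors are placed on the crown at
// equally increasing angles. Names are illustrative.
struct Crown {
    double radius;    // distance from the image center, in pixels
    int n_receptors;  // number of receptors on this crown
};

struct Receptor { double x, y; };

std::vector<Receptor> build_receptor_map(double cx, double cy,
                                         const std::vector<Crown>& crowns) {
    const double kPi = std::acos(-1.0);
    std::vector<Receptor> map;
    for (const Crown& c : crowns) {                 // innermost crown first
        double step = 2.0 * kPi / c.n_receptors;    // angle between receptors
        for (int i = 0; i < c.n_receptors; ++i) {
            double a = i * step;
            map.push_back({cx + c.radius * std::cos(a),
                           cy + c.radius * std::sin(a)});
        }
    }
    return map;
}
```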
Every receptor obtains its color value based on its type, which determines the filter it applies to its corresponding pixels on the image, and is classified by means of the color classifier according to the color space in use.
After the whole map has been classified the original image is of no interest anymore since the analysis process operates directly on the receptors that contain all the needed information. The software creates blobs of contiguous receptors with the same color label and determines the presence of transitions of colors.
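The grouping of contiguous same-label receptors into blobs can be sketched as a flood fill over the receptor adjacency. This is an illustration of the general technique, not the module's actual implementation; in the real map the neighbour lists would come from the crown geometry, while here they are given explicitly.

```cpp
#include <cstddef>
#include <vector>

// Sketch of blob growing over classified receptors: receptors are nodes
// with a color label and a neighbour list, and a blob is a connected
// group of neighbours sharing the same label.
// Returns, for each receptor, the id of the blob it belongs to.
std::vector<int> grow_blobs(const std::vector<int>& label,
                            const std::vector<std::vector<int>>& neigh) {
    std::vector<int> blob(label.size(), -1);
    int next_blob = 0;
    for (std::size_t seed = 0; seed < label.size(); ++seed) {
        if (blob[seed] != -1) continue;
        // Flood fill from the seed over same-label neighbours.
        std::vector<std::size_t> stack{seed};
        blob[seed] = next_blob;
        while (!stack.empty()) {
            std::size_t r = stack.back();
            stack.pop_back();
            for (int n : neigh[r])
                if (blob[n] == -1 && label[n] == label[seed]) {
                    blob[n] = next_blob;
                    stack.push_back((std::size_t)n);
                }
        }
        ++next_blob;
    }
    return blob;
}
```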
At completion of the analysis process the software creates a message containing information about the blobs and transitions found that will be sent to the robot kernel.
=== Pixmap Vision ===
This portion of the software has a task very similar to that of Receptor-based Vision but, instead of using receptors to represent the image, PixMap Vision works directly at the pixel level and aims to obtain slightly different information. Its primary task is the localization of the ball, in order to determine its relative position in 3D space and its distance from the robot.
The whole image classification and analysis process is very similar to the one described for Receptor-based Vision. The most important difference is that this kind of analysis heavily depends on camera calibration data and on information concerning the ball, since it is impossible to determine from a single image the distance of an object whose dimensions are unknown. The quality of this kind of analysis therefore heavily depends on the quality of the calibration data provided.
At the end of the process, as for Receptor-based Vision, a message containing all the gathered information is sent to the robot kernel.
=== Useful tools ===
==== Areas ====
This software is used as a testing utility whose task is to display, given an analyzed image, its color classification and the information gathered by the blob growing and feature extraction processes. The image is acquired through the FileAcquisition class and the color classification is done by means of an already created color classifier.
The usage is:
<pre>
./areas yuv|rgb 640x480|320x240 <image file> <area file> <color classifier> [<output file>]
</pre>
That is, it accepts images in both YUV422 and RGB24 color spaces and supports 640x480 and 320x240 resolutions. <image file> should be a valid YUV422 raw image or RGB24 ppm file. <area file> is a text file containing all the information the program should represent on the image, such as blobs' bounding boxes and lines, saved in the following format:
To define a bounding box with opposite vertices (x1,y1) and (x2,y2) to be drawn in color c:
<pre>
DEF_AREA c x1 y1 x2 y2 r1 r2 t1 t2 "dummy"
</pre>
To define a segment with vertices (x1,y1) and (x2,y2) to be drawn in color c:
<pre>
DEF_AREA c x1 y1 x2 y2 r1 t1
</pre>
<color classifier> should be a valid RGB lookup matrix file.
The software loads the provided image and saves two different images, both with the visual representation of the information provided in <area file>:
* classified.ppm: a classified version of the image;
* output.ppm (or <output file> if provided): the non-classified version of the image.
==== Recvision Setup ====
==== Camera Calib ====
== MAP Anchors Percepts ==
Paper about the concepts supported by MAP.
== MUREA: Multi-Resolution Evidence Accumulation ==
MUREA, MUlti-Resolution Evidence Accumulation, is a module that implements a localization algorithm.
This module has several configurable parameters that allow its reuse in different contexts, e.g., the map of the environment, the required accuracy, and a timeout.
MUREA completely abstracts from the sensors used for acquiring localization information, since its interface relies on the concept of perception, which is shared with the processing sub-module of each sensor.
== SCARE Coordinates Agents in Robotic Environments ==
== SPIKE Plans In Known Environments ==
== Tips, Tricks and HowTo ==
In this section we report some ways of obtaining desired effects in MRT.
=== Percept persistence ===
Disappearing percepts are maintained in the map for some time (a settable parameter) by lowering their reliability. For instance, in the RoboCup application the presence of the ball is maintained even if the ball is no longer perceived. After a given time, the percept is still maintained in the model for another given time, but is no longer provided to the other modules. This is used for modeling purposes.
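The two-stage persistence just described can be sketched as follows. The names, the two thresholds and the linear reliability decay are assumptions for illustration; within t_keep the percept is still provided to the other modules, between t_keep and t_drop it stays in the model only, and after t_drop it is forgotten.

```cpp
// Sketch of percept persistence: reliability decays linearly with the
// time elapsed since the last sighting, and two settable thresholds
// control delivery and retention. Illustrative, not the MRT code.
struct Percept {
    double reliability;  // decays as time since last sighting grows
    bool provided;       // still delivered to the other modules?
    bool in_model;       // still kept in the internal model?
};

Percept persist(double t_since_seen, double t_keep, double t_drop) {
    Percept p;
    p.in_model = t_since_seen < t_drop;
    p.provided = t_since_seen < t_keep;
    double decay = 1.0 - t_since_seen / t_drop;  // assumed linear decay
    p.reliability = p.in_model ? (decay > 0.0 ? decay : 0.0) : 0.0;
    return p;
}
```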
=== Marker-based localization ===
A module developed by [[User:SimoneCeriani | Simone Ceriani]] makes it possible to know the position of the robot in a map. This is possible thanks to a module, originally developed for Lurch, that analyzes images to find given markers and computes the position of the robot w.r.t. the markers. A Kalman filter module integrating odometry is also available. Everything is implemented as an MRT expert and can provide the robot position to MAP.
=== Fixed duration tasks ===
It is possible to define tasks that are active for a given time through SCARE, which gives the possibility to define roles active for a limited time.